text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Description

IceCream is a little library for sweet and creamy debugging.

icecream alternatives and similar packages

Based on the "Debugging Tools" category. Alternatively, view icecream alternatives based on common mentions on social networks and blogs.

- django-debug-toolbar (8.7 8.1 L4): A configurable set of panels that display various debug information about the current request/response.
- py-spy (8.6 8.0): Sampling profiler for Python programs
- line_profiler (7.6 0.0 L4): Line-by-line profiling.
- memory_profiler (7.2 5.3 L3): Monitor Memory usage of Python code
- profiling (7.1 1.5 L4): An interactive Python profiler.
- pyflame (7.1 0.1): A ptracing profiler For Python.
- python-uncompyle6 (6.7 4.6): A cross-version Python bytecode decompiler
- pudb (6.4 9.1 L2): Full-screen console debugger for Python
- pyelftools (6.1 5.0 L3): Parsing ELF and DWARF in Python
- Cyberbrain (6.0 9.2): Python debugging, redefined.
- wdb (5.7 0.0 L2): An improbable web debugger through WebSockets
- pyringe (5.7 0.0 L4): Debugger capable of attaching to and injecting code into python processes.
- ipdb (5.5 6.9 L4): Integration of IPython pdb
- django-devserver (5.3 0.0 L3): A drop-in replacement for Django's runserver.
- Laboratory (4.8 0.1): Achieving confident refactoring through experimentation with Python 2.7 & 3.3+
- flask-debugtoolbar (4.4 0.2 L2): A toolbar overlay for debugging Flask applications
- lptrace (4.0 0.0): Trace any Python program, anywhere!
- hunter (3.7 8.1 L4): Hunter is a flexible code tracing toolkit.
- The Fil memory profiler for Python: A Python memory profiler for data processing and scientific computing applications
- manhole (2.8 5.7 L4): Debugging manhole for python applications.
- remote-pdb (2.4 0.0 L5): Remote vanilla PDB (over TCP sockets).
- python-statsd (2.1 0.0 L5): Python Client for the Etsy NodeJS Statsd Server
- python3-trepan (1.8 6.2): A gdb-like Python3 Debugger in the Trepan family
- winpdb (1.8 3.7 L2): Fork of the official winpdb with improvements
- ycecream (0.9 8.8): Sweeter debugging and benchmarking Python programs.
- Sampling Profiler for Python: Simple Python sampling profiler
- pdb++: Another drop-in replacement for pdb.

README

IceCream — Never use print() to debug again

Do you ever use print() or log() to debug your code? Of course you do. IceCream, or ic for short, makes print debugging a little sweeter.

ic() is like print(), but better:
- It prints both expressions/variable names and their values.
- It's 40% faster to type.
- Data structures are pretty printed.
- Output is syntax highlighted.
- It optionally includes program context: filename, line number, and parent function.

IceCream is well tested, [permissively licensed](LICENSE.txt), and supports Python 2, Python 3, PyPy2, and PyPy3.

Inspect Variables

Have you ever printed variables or expressions to debug your program? If you've ever typed something like print(foo('123')) or the more thorough print("foo('123')", foo('123')) then ic() is here to help. With arguments, ic() inspects itself and prints both its own arguments and the values of those arguments.
    from icecream import ic

    def foo(i):
        return i + 333

    ic(foo(123))

Prints

    ic| foo(123): 456

Similarly,

    d = {'key': {1: 'one'}}
    ic(d['key'][1])

    class klass():
        attr = 'yep'

    ic(klass.attr)

Prints

    ic| d['key'][1]: 'one'
    ic| klass.attr: 'yep'

Just give ic() a variable or expression and you're done. Easy.

Inspect Execution

Have you ever used print() to determine which parts of your program are executed, and in which order they're executed? For example, if you've ever added print statements to debug code like

    def foo():
        print(0)
        first()
        if expression:
            print(1)
            second()
        else:
            print(2)
            third()

then ic() helps here, too. Without arguments, ic() inspects itself and prints the calling filename, line number, and parent function.

    from icecream import ic

    def foo():
        ic()
        first()
        if expression:
            ic()
            second()
        else:
            ic()
            third()

Prints

    ic| example.py:4 in foo()
    ic| example.py:11 in foo()

Just call ic() and you're done. Simple.

Return Value

ic() returns its argument(s), so ic() can easily be inserted into pre-existing code.

    >>> a = 6
    >>> def half(i):
    >>>     return i / 2
    >>> b = half(ic(a))
    ic| a: 6
    >>> ic(b)
    ic| b: 3

Miscellaneous

ic.format(*args) is like ic() but the output is returned as a string instead of written to stderr.

    >>> from icecream import ic
    >>> s = 'sup'
    >>> out = ic.format(s)
    >>> print(out)
    ic| s: 'sup'

Additionally, ic()'s output can be entirely disabled, and later re-enabled, with ic.disable() and ic.enable() respectively.

    from icecream import ic

    ic(1)
    ic.disable()
    ic(2)
    ic.enable()
    ic(3)

Prints

    ic| 1: 1
    ic| 3: 3

ic() continues to return its arguments when disabled, of course; no existing code with ic() breaks.

Import Tricks

To make ic() available in every file without needing to be imported in every file, you can install() it. For example, in a root A.py:

    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-
    from icecream import install
    install()

    from B import foo
    foo()

and then in B.py, which is imported by A.py, just call ic():

    # -*- coding: utf-8 -*-

    def foo():
        x = 3
        ic(x)

install() adds ic() to the builtins module, which is shared amongst all files imported by the interpreter. Similarly, ic() can later be uninstall()ed, too.

ic() can also be imported in a manner that fails gracefully if IceCream isn't installed, like in production environments (i.e. not development). To that end, this fallback import snippet may prove useful:

    try:
        from icecream import ic
    except ImportError:  # Graceful fallback if IceCream isn't installed.
        ic = lambda *a: None if not a else (a[0] if len(a) == 1 else a)  # noqa

Configuration

ic.configureOutput(prefix, outputFunction, argToStringFunction, includeContext) can be used to adopt a custom output prefix (the default is ic|), change the output function (default is to write to stderr), customize how arguments are serialized to strings, and/or include the ic() call's context (filename, line number, and parent function) in ic() output with arguments.

    >>> from icecream import ic
    >>> ic.configureOutput(prefix='hello -> ')
    >>> ic('world')
    hello -> 'world'

prefix can optionally be a function, too.

    >>> import time
    >>> from icecream import ic
    >>>
    >>> def unixTimestamp():
    >>>     return '%i |> ' % int(time.time())
    >>>
    >>> ic.configureOutput(prefix=unixTimestamp)
    >>> ic('world')
    1519185860 |> 'world': 'world'

outputFunction, if provided, is called with ic()'s output instead of that output being written to stderr (the default).
    >>> import logging
    >>> from icecream import ic
    >>>
    >>> def warn(s):
    >>>     logging.warning(s)
    >>>
    >>> ic.configureOutput(outputFunction=warn)
    >>> ic('eep')
    WARNING:root:ic| 'eep': 'eep'

argToStringFunction, if provided, is called with argument values to be serialized to displayable strings. The default is PrettyPrint's pprint.pformat(), but this can be changed to, for example, handle non-standard datatypes in a custom fashion.

    >>> from icecream import ic
    >>>
    >>> def toString(obj):
    >>>     if isinstance(obj, str):
    >>>         return '[!string %r with length %i!]' % (obj, len(obj))
    >>>     return repr(obj)
    >>>
    >>> ic.configureOutput(argToStringFunction=toString)
    >>> ic(7, 'hello')
    ic| 7: 7, 'hello': [!string 'hello' with length 5!]

includeContext, if provided and True, adds the ic() call's filename, line number, and parent function to ic()'s output.

    >>> from icecream import ic
    >>> ic.configureOutput(includeContext=True)
    >>>
    >>> def foo():
    >>>     ic('str')
    >>> foo()
    ic| example.py:12 in foo()- 'str': 'str'

includeContext is False by default.

Installation

Installing IceCream with pip is easy.

    $ pip install icecream

Related Python libraries

ic() uses executing by @alexmojaki to reliably locate ic() calls in Python source. It's magic.

IceCream in Other Languages

Delicious IceCream should be enjoyed in every language.

- Dart: icecream
- Rust: icecream-rs
- Node.js: node-icecream
- C++: IceCream-Cpp
- PHP: icecream-php
- Go: icecream-go
- Ruby: Ricecream
- Java: icecream-java
- R: icecream
- Lua: icecream-lua

If you'd like a similar ic() function in your favorite language, please open a pull request! IceCream's goal is to sweeten print debugging with a handy-dandy ic() function in every language.

*Note that all licence references and agreements mentioned in the icecream README section above are relevant to that project's source code only.
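To illustrate how the graceful-fallback snippet from the README behaves in practice, here is a small self-contained sketch (the function and values are made up for illustration; the fallback lambda is copied verbatim from the README above):

    try:
        from icecream import ic
    except ImportError:  # Graceful fallback if IceCream isn't installed.
        ic = lambda *a: None if not a else (a[0] if len(a) == 1 else a)  # noqa

    def double(x):
        # ic() passes its argument through, so it can wrap expressions in place.
        return ic(x) * 2

    print(double(21))  # with IceCream installed this also logs "ic| x: 21"; it returns 42 either way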
https://python.libhunt.com/icecream-alternatives
CC-MAIN-2021-31
refinedweb
1,389
50.33
I’m trying to retrieve a user’s accessToken with the correct audience using the Auth0 Lock. My code is as follows: import Auth0Lock from 'auth0-lock'; new Auth0Lock(clientId, domain, { oidcConformant: true, auth: { redirectUrl: `${ROOT_URL}/auth/signed-in`, responseMode: 'form_post', responseType: 'token', audience, }, params: { scope: 'openid', }, }).show(); When oidcConformant is true, the Lock seems to disregard the redirectUrl and responseMode fields. Instead, it shows me “Thanks for logging in.” in the modal. Is this an intended behaviour or is this a bug? Is there another way for me to get the user’s full access token while hiding it from the web client (considering we can only specify an auth.audience if we set oidcConformant to true)?
https://community.auth0.com/t/oidcconformant-breaks-redirecturl/6466
CC-MAIN-2018-30
refinedweb
118
57.67
In looking through some of the work Ecma has done to identify in a clear way how existing standards are leveraged, I found the blog post from Jesper Lund Stocholm about the use of SVG within ODF to be very interesting. This is an example of how difficult it can be to get cross-application interoperability when you have a specification that is vague. I think this is an obvious area where things could be improved (both in terms of the spec, as well as the implementations who try to follow the spec). For those of you interested in the technical details, it is a great read. Also some good discussion down in the comments: January 20. 2008 20:59 In the ODF schema, there are 9 elements in the SVG-compatible namespace. They are as follows: svg:desc, svg:font-face-src, svg:font-face-uri, svg:font-face-format, svg:font-face-name, svg:definition-src, svg:linearGradient, svg:radialGradient, and svg:stop. Where are the basic primitives of SVG such as rect, circle, ellipse, line, polyline, and polygon? They do not exist in the SVG-compatible namespace. Something similar to them appear in the draw namespace, which is specific to ODF. Anon January 21. 2008 03:08 Anon, thanks for your reply, Where are the basic primitives of SVG such as rect, circle, ellipse, line, polyline, and polygon? They do not exist in the SVG-compatible namespace. Yes - well that really emphasizes my point. ODF both augments and limits SVG and vector graphics are, at the end of the day, not handled by SVG but by ODF Draw. jlundstocholm -Brian
http://blogs.msdn.com/b/brian_jones/archive/2008/01/22/reuse-of-existing-standards.aspx
CC-MAIN-2014-52
refinedweb
273
63.09
visited Aswan in December 2011 as part of a 5 week stay in Egypt. When booking all the accommodation and mini tours we did throughout Egypt I found Aswan Individual on TA. From the moment I read about this great project and the adventures of others at Elephantine Island I knew we had to do this. So I contacted Petra Dressler and after various emails everything was booked for what we wanted to do, while in Aswan. We were given Waleed's number and when we arrived in Aswan he came to our hotel, to go over everything we had booked, and to make sure all was in order, which super impressed us from the start. Our whole time in beautiful Aswan was made magical by the friendliness and efficiency of Waleed, Captain Sero the drivers to Abu Simbel and to Luxor. We went to the west bank by motorboat, went up to see the tombs of the nobles and then trekked by camel to the Monastery of St Simeon ( 4kms parallel to the Nile) but in the desert, Once we got back to the river's edge ( passing the Mausoleum of Aga Khan) where we farewelled our friendly cameleers, Captain Sero picked us up by his motorboat and took us first to Kitchener's Island to see the beautiful botanical gardens there and then around Elephantine Island, where we stopped for a tour of the island and to have a wonderful lunch his wife had prepared for us.We enjoyed our first day with Captain Sero so much that we booked another day with him again. This time the wind was up so we sailed in his felucca up river to the first cataract, and the Old Dam where Captain Sero took us up over and around to look at the old lock. On the way back to Elephantine Island for another wonderful lunch, he took us to a Nubian house where we actually were able to cuddle a Nile crocodile, totally AWESOME!!!... a day that remains a wonderful highlight of magnificent Egypt. Thankyou Captain Sero!!! Aswan Individual also helped us with transport via minibus to Abu Simbel and return ( we stayed at the Nubian Eco Lodge, (highly recommended), and then by car back to Luxor. I cannot recommend their services enough, to anyone contemplating a visit to this beautiful region of Egypt. You will not be disappointed. Organising our trip with Aswan Individual was probably the best choice I made for our trip to Egypt. Captain Sero’s house for dinner, he showed us the ruins on the island and then gave us time to explore Elephantine Island before visiting his house for dinner. My husband and friend both say that this was by far their favourite meal of the whole trip! Thanks so much to Petra. The prices might be slightly more than some other places. But with Aswan Individual all especially Petra for communicating with me for months in advance and giving me great advice about our plans. And for Waleed for taking such great care of us while we were.
https://www.tripadvisor.com/ShowUserReviews-g294204-d1951355-r135214626-Aswan_Individual_Daily_Tour-Aswan_Aswan_Governorate_Nile_River_Valley.html
CC-MAIN-2020-10
refinedweb
515
64.85
2014-03-03

Data Mining the Internet Archive Collection

Recommended for Intermediate Users.

For Whom Is This Useful?

This intermediate lesson is good for users of the Programming Historian who have completed general lessons on downloading files and performing text analysis on them, but would like an applied example of these principles. It will also be of interest to historians or archivists who work with the MARC format or the Internet Archive on a regular basis.

Before You Begin

We will be working with two Python modules that are not included in Python’s standard library. The first, internetarchive, provides programmatic access to the Internet Archive. The second, pymarc, makes it easier to parse MARC records. The easiest way to download both is to use pip, the python package manager. Begin by installing pip using Fred Gibbs’ Installing Python Modules with pip. Then issue these commands at the command line:

To install internetarchive:

    sudo pip install internetarchive

You’ll next want to install the pymarc package, which also needs version 1.9.0 of six, another Python package, also installed.1 To make sure your system has the latest version of six, first try:

    sudo pip install --upgrade six

Then, to install pymarc:

    sudo pip install pymarc

Now you are ready to go to work!

The Antislavery Collection at the Internet Archive

The Boston Public Library’s anti-slavery collection at Copley Square contains not only the letters of William Lloyd Garrison, one of the icons of the American abolitionist movement, but also large collections of letters by and to reformers somehow connected to him. And by “large collection,” I mean large. According to the library’s estimates, there are over 16,000 items at Copley. As of this writing, approximately 7,000 of those items have been digitized, including thousands of antislavery letters, manuscripts, and publications.

Accessing an IA Collection in Python

Internet Archive (IA) collections and items all have a unique identifier, and URLs to collections and items all look like this:

    https://archive.org/details/[IDENTIFIER]

So, for example, here is a URL to the Archive item discussed above, Douglass’s letter to Garrison:

    https://archive.org/details/lettertowilliaml00doug

And here is a URL to the entire antislavery collection at the Boston Public Library:

    https://archive.org/details/bplscas

Because the URLs are so similar, the only way to tell that you are looking at a collection page, instead of an individual item page, is to examine the page layout. An item page usually has a lefthand sidebar that says “View the Book” and lists links for reading the item online or accessing other file formats. A collection page will probably have a “Spotlight Item” in the lefthand sidebar instead. You can browse to different collections through the eBook and Texts portal, and you may also want to read a little bit about the way that items and item URLs are structured.

Once you have a collection’s identifier—in this case, bplscas—seeing all of the items in the collection is as easy as navigating to the Archive’s advanced search page, selecting the id from the drop down menu next to “Collection,” and hitting the search button. Performing that search with bplscas selected returns this page, which as of this writing showed 7,029 results. We can also search the Archive using the Python module that we installed, and doing so makes it easier to iterate over all the items in the collection for purposes of further inspection and downloading. For example, let’s modify the sample code from the module’s documentation to see if we can tell, with Python, how many items are in the digital Antislavery Collection.
The sample code looks something like what you see below. The only difference is that instead of importing only the search_items module from internetarchive, we are going to import the whole library. import internetarchive search = internetarchive.search_items('collection:nasa') print search.num_found All we should need to modify is the collection identifier, from nasa to bplscas. After starting your computer’s Python interpreter, try entering each of the above lines, followed by enter, but modify the collection id in the second command: search = internetarchive.search_items('collection:bplscas') After hitting enter on the print command, you should see a number that matches the number of results you saw when doing the advanced search for the collection in the browser. Accessing an IA Item in Python The internetarchive module also allows you to access individual items using their identifiers. Let’s try that using the documentation’s sample code, modifying it in order to get the Douglass letter we discussed earlier. If you are still at your Python interpreter’s command prompt, you don’t need to import internetarchive again. Since we imported the whole module, we also need to modify the sample code so that our interpreter will know that get_item is from the internetarchive module. We also need to change the sample identifier stairs to our item identifier, lettertowilliaml00doug (note that the character before the two zeroes is a lowercase L, not the number 1): item = internetarchive.get_item('lettertowilliaml00doug') item.download() Enter each of those lines in your interpreter, followed by enter. Depending on your Internet connection speed, it will now probably take a minute or two for the command prompt to return, because your computer is downloading all of the files associated with that item, including some pretty large images. But when it’s done downloading, you should be see a new directory on your computer whose name is the item identifier. To check, first exit your Python interpreter: exit() Then list the contents of the current directory to see if a folder now appears named lettertowilliaml00doug. If you list the contents of that folder, you should see a list of files similar to this: 39999066767938.djvu 39999066767938.epub 39999066767938.gif 39999066767938.pdf 39999066767938_abbyy.gz 39999066767938_djvu.txt 39999066767938_djvu.xml 39999066767938_images.zip 39999066767938_jp2.zip 39999066767938_scandata.xml lettertowilliaml00doug_archive.torrent lettertowilliaml00doug_dc.xml lettertowilliaml00doug_files.xml lettertowilliaml00doug_marc.xml lettertowilliaml00doug_meta.mrc lettertowilliaml00doug_meta.xml lettertowilliaml00doug_metasource.xml Now that we know how to use the Search and Item functions in the internetarchive module, we can turn to thinking about how to make this process more effective for downloading lots of information from the collection for further analysis. Downloading MARC Records from a Collection Downloading one item is nice, but what if we want to look at thousands of items in a collection? We’re in luck, because the internetarchive module’s Search function allows us to iterate over all the results in a search. To see how, let’s first start our Python interpreter again. 
We’ll need to import our module again, and perform our search again: import internetarchive search = internetarchive.search_items('collection:bplscas') Now let’s enter the documentation’s sample code for printing out the item identifier of every item returned by our search: for result in search: print result['identifier'] Note that after entering the first line, your Python interpreter will automatically print an ellipsis on line two. This is because you have started a for loop, and Python is expecting there to be more. It wants to know what you want to do for each result in the search. That’s also why, once you hit enter on the second line, you’ll see a third line with another ellipsis, because Python doesn’t know whether you are finished telling it what to do with each result. Hit enter again to end the for loop and execute the command. You should now see your terminal begin to print out the identifiers for each result returned by our bplscas search—in this case, all 7,029 of them! You can interrupt the print out by hitting Ctrl-C on your keyboard, which will return you to the prompt. If you didn’t see identifiers printing out to your screen, but instead saw an error like this, you may have forgotten to enter a few spaces before your print command: for result in search: print result['identifier'] File "", line 2 print result['identifier'] ^ IndentationError: expected an indented block Remember that whitespace matters in Python, and you need to indent the lines in a for loop so that Python can tell which command(s) to perform on each item in the loop. Understanding the for loop The for loop, expressed in plain English, tells Python to do something to each thing in a collection of things. In the above case, we printed the identifier for each result in the results of our collection search. Two additional points about the for loop: First, the word we used after for is what’s called a local variable in Python. It serves as a placeholder for whatever instance or item we are going to be working with inside the loop. Usually it makes sense to pick a name that describes what kind of thing we are working with—in this case, a search result—but we could have used other names in place of that one. For example, try running the above for loop again, but substitute a different name for the local variable, such as: for item in search: print item['identifier'] You should get the same results. The second thing to note about the for loop is that the indented block could could have contained other commands. In this case, we printed each individual search result’s identifier. But we could have chosen to do, for each result, anything that we could do to an individual Internet Archive item. For example, earlier we downloaded all the files associated with the item lettertowilliaml00doug. We could have done that to each item returned by our search by changing the line print result['identifier'] in our for loop to result.download(). We probably want to think twice before doing that, though—downloading all the files for each of the 7,029 items in the bplscas collection is a lot of files. Fortunately, the download function in the internetarchive module also allows you to download specific files associated with an item. 
If we had only wanted to download the MARC XML record associated with a particular item, we could have instead done this: item = internetarchive.get_item('lettertowilliaml00doug') marc = item.get_file('lettertowilliaml00doug_marc.xml') marc.download() Because Internet Archive item files are named according to specific rules, we can also figure out the name of the MARC file we want just by knowing the item’s unique identifier. And armed with that knowledge, we can proceed to … Download All the MARC XML Files from a Collection For the next section, we’re going to move from using the Python shell to writing a Python script that downloads the MARC record from each item in the BPL Antislavery Collection. Try putting this script into Komodo or your preferred text editor: #!/usr/bin/python import internetarchive search = internetarchive.search_items('collection:bplscas') for result in search: itemid = result['identifier'] item = internetarchive.get_item(itemid) marc = item.get_file(itemid + '_marc.xml') marc.download() print "Downloading " + itemid + " ..." This script looks a lot like the experiments we have done above with the Frederick Douglass letter, but since we want to download the MARC record for each item returned by our collection search, we are using an itemid variable to account for the fact that the identifier and filename will be different for each result. Before running this script (which, I should note, is going to download thousands of small XML files to your computer), make a directory where you want those MARC records to be stored and place the above script in that directory. Then run the script from within the directory so that the files will be downloaded in an easy-to-find place. (Note that if you receive what looks like a ConnectionError on your first attempt, check your Internet connection, wait a few minutes, and then try running the script again.) If all goes well, when you run your script, you should see the program begin to print out status updates telling you that it is downloading MARC records. But allowing the script to run its full course will probably take a couple of hours, so let’s stop the script and look a little more closely at ways to improve it. Pressing Ctrl-C while in your terminal window should make the script stop. Building Error Reporting into the Script Since downloading all of these records will take some time, we are probably going to want to walk away from our computer for a while. But the chances are high that during those two hours, something could go wrong that would prevent our script from working. Let’s say, for example, that we had forgotten that we already downloaded an individual file into this directory. Or maybe your computer briefly loses its Internet connection, or some sort of outage happens on the Internet Archive server that prevents the script from getting the file it wants. In those and other error cases, Python will raise an “exception” telling you what the problem is. Unfortunately, an exception will also crash your script instead of continuing on to the next item. To prevent this, we can use what’s called a try statement in Python, which does exactly what it sounds like. The statement will try to execute a certain snippet of code until it hits an exception, in which case you can give it some other code to execute instead. 
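As a quick illustration of that pattern before we apply it (a generic sketch, not part of the lesson's script; the filename here is made up), a try statement wrapping a file operation looks like this:

    try:
        marc_file = open('example_marc.xml')   # this line may raise an exception, e.g. if the file is missing
    except IOError as e:
        print 'Could not open the file: ' + str(e)
    else:
        print 'Opened the file successfully.'
        marc_file.close()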
You can read more about handling exceptions in the Python documentation, but for now let’s just update our above script so that it looks like this: #!/usr/bin/python import internetarchive import time error_log = open('bpl-marcs-errors.log', 'a') search = internetarchive.search_items('collection:bplscas') for result in search: itemid = result['identifier'] item = internetarchive.get_item(itemid) marc = item.get_file(itemid + '_marc.xml') try: marc.download() except Exception as e: error_log.write('Could not download ' + itemid + ' because of error: %s\n' % e) print "There was an error; writing to log." else: print "Downloading " + itemid + " ..." time.sleep(1) The main thing we’ve added here, after our module import statements, is a line that opens a text file called bpl-marcs-errors.log and prepares it to have text appended to it. We are going to use this file to log exceptions that the script raises. The try statement that we have added to our for loop will attempt to download the MARC record. If it can’t, it will write a descriptive statement about what went wrong to our log file. That way we can go back to the file later and identify which items we will need to try to download again. If the try clause succeeds and can download the record, then the script will execute the code in the else clause. One other thing we have added, upon successful download, is this line: time.sleep(1) This line uses the time module that we are now importing at the beginning to tell our script to pause for one second before proceeding, which is basically just a way for us to be nice to Internet Archive’s servers by not clobbering them every millisecond or so with a request. Try updating your script to look like the above lines, and run it again in the directory where you want to store your MARC files. Don’t be surprised if you immediately encounter a string of error messages; that means the script is doing what it’s supposed to do! Calmly go into your text editor, while leaving the script running, and open the bpl-marcs-errors.log to see what exceptions have been recorded there. You’ll probably see that the script raised the exception “File already exists” for each of the files that you had already downloaded when running our earlier, shorter program. If you leave the program running for a little while, the script will eventually get to items that you have not already downloaded and resume collecting your MARCs! Scraping Information from a MARC Record Once your download script has completed, you should find yourself in the possession of nearly 7,000 detailed MARC XML records about items in the Anti-Slavery Collection (or whichever other collection you may have downloaded instead; the methods above should work on any collection whose items have MARC files attached to them). Now what? The next step depends on what sort of questions about the collection you want to answer. The MARC formatting language captures a wealth of data about an item, as you can see if you return to the MARC XML record for the Frederick Douglass letter mentioned at the outset. Notice, for example, that the Douglass letter contains information about the place where the letter was written in the datafield that is tagged 260, inside the subfield coded a. The person who prepared this MARC record knew to put place information in that specific field because of rules specified for the 260 datafield by the MARC standards. 
That means that it should be possible for us to look inside all of the MARC records we have downloaded, grab the information inside of datafield 260, subfield a, and make a list of every place name where items in the collection were published. To do this, we’ll use the other helpful Python module that we downloaded with pip at the beginning: pymarc. That module makes it easy to get information out of subfields. Assuming that we have a MARC record prepared for parsing by the module assigned to the variable record, we could get the information about publication place names this way: place_of_pub = record['260']['a'] The documentation for pymarc is a little less complete than that for the Internet Archive, especially when it comes to parsing XML records. But a little rooting around in the source code for the module reveals some functions that it provides for working with MARC XML records. One of these, called map_xml() is described this way: def map_xml(function, *files): """ map a function onto the file, so that for each record that is parsed the function will get called with the extracted record def do_it(r): print r map_xml(do_it, 'marc.xml') """ Translated into plain English, this function means that we can take an XML file containing MARC data (like the nearly 7,000 we now have on our computer), pass it to the map_xml function in the pymarc module, and then specify another function (that we will write) telling our program what to do with the MARC data retrieved from the XML file. In rough outline, our code will look something like this: import pymarc def get_place_of_pub(record): place_of_pub = record['260']['a'] print place_of_pub pymarc.map_xml(get_place_of_pub, 'lettertowilliaml00doug_marc.xml') Try saving that code to a script and running it from a directory where you already have the Douglass letter XML saved. If all goes well, the script should spit out this: Belfast, [Northern Ireland], Voila! Of course, this script would be much more useful if we scraped the place of publication from every letter in our collection of MARC records. Putting together what we’ve learned from earlier in the lesson, we can do that with a script that looks like this: #!/usr/bin/python import os import pymarc path = '/path/to/dir/with/xmlfiles/' def get_place_of_pub(record): try: place_of_pub = record['260']['a'] print place_of_pub except Exception as e: print e for file in os.listdir(path): if file.endswith('.xml'): pymarc.map_xml(get_place_of_pub, path + file) This script modifies our above code in several ways. First, it uses a for loop to iterate over each file in our directory. In place of the internetarchive search results that we iterated over in our first part of this lesson, we iterate over the files returned by os.listdir(path) which uses the built-in Python module os to list the contents of the directory specified in the path variable, which you will need to modify so that it matches the directory where you have downloaded all of your MARC files. We have also added some error handling to our get_place_of_pub() function to account for the fact that some records may (for whatever reason) not contain the information we are looking for. The function will try to print the place of publication, but if this raises an Exception, it will print out the information returned by the Exception instead. In this case, if the try statement failed, the exception will probably print None. 
Understanding why is a subject for another lesson on Python Type errors, but for now the None printout is descriptive enough of what happened, so it could be useful to us. Try running this script. If all goes well, your screen should fill with a list of the places where these letters were written. If that works, try modifying your script so that it saves the place names to a text file instead of printing them to your screen. You could then use the Counting Frequencies lesson to figure out which place names are most common in the collection. You could work with the place names to find coordinates that could be placed on a map using the Google Maps lesson. Or, to get a very rough visual sense of the places where letters were written, you could do what I’ve done below and simply make a Wordle word cloud of the text file. Wordle wordcloud of places of publication for abolitionist letters Of course, to make such techniques useful would require more cleaning of your data. And other applications of this lesson may prove more useful. For example, working with the MARC data fields for personal names, you could create a network of correspondents. Or you could analyze which subjects are common in the MARC records. Now that you have the MARC records downloaded and can use pymarc to extract information from the fields, the possibilities can multiply rapidly! Thanks to Shawn Graham for pointing out the pymarcdependency on sixand providing a solution. ↩ Suggested Citation Caleb McDaniel , "Data Mining the Internet Archive Collection," Programming Historian, (2014-03-03),
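As a follow-up to the suggestion above about saving the place names to a text file instead of printing them to the screen, one way to modify the final script might look like this (a sketch only; the output filename is arbitrary, and path still needs to point at your directory of MARC files):

    #!/usr/bin/python

    import os
    import pymarc

    path = '/path/to/dir/with/xmlfiles/'
    output = open('place_names.txt', 'w')

    def get_place_of_pub(record):
        try:
            place_of_pub = record['260']['a']
            output.write(place_of_pub + '\n')
        except Exception as e:
            print e

    for file in os.listdir(path):
        if file.endswith('.xml'):
            pymarc.map_xml(get_place_of_pub, path + file)

    output.close()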
http://programminghistorian.org/lessons/data-mining-the-internet-archive
CC-MAIN-2017-26
refinedweb
3,662
57.2
Learn where Java code is written and saved, how classes relate to one another, and how to use the Greenfoot code editor. In the Wombat Object Basics article, you learned what an object is, what methods are for, and a bit about the syntax that is used in code. In this article, you will learn where that code is written and saved, how classes relate to one another, and you'll learn to use the Greenfoot code editor. In addition, certain words in this article are linked, so you can learn more than is being taught here. As in the last article, to follow along you will need: This article is aimed at anyone interested in Java programming who is between the ages of 10-100 and has no programming experience. It is recommended that you have read and followed Wombat Object Basics before moving on in this article. Now that you understand that Java programs are made up of a lot of Java objects, and they interact through methods that provide the instructions for doing things, you are ready to learn about the code. Let's not waste any time. Open Greenfoot. If the Wombats scenario is not already open, Click Scenario in the top menu, choose Open, select wombats, and then click Open. At the top of the world area, right-click wombatWorld and select void populate(). By invoking that void populate() method, you should now see a few wombats and lots of leaves in your wombat world as shown in Figure 1. So, where are these directions coming from for the void populate() method? In Java programs, all definitions of objects and all instructions for methods are written into files called classes. A class is a text file, saved with a .java extension. In that file, the code is written, and later compiled. Take a look at the Wombat class by opening the Greenfoot editor. Right-click the Wombat name and select Open editor, as shown in Figure 2. Now, you should see the Wombat class and all of its code in the editor, as shown in Figure 3. Don't let all these lines of code make you nervous. Scroll up and down and take a look at the code. Something you'll notice right off is that there are a lot of English words. There is also some punctuation, just like in the sentences you're used to reading. But Java syntax is a bit different than what you use in writing English. There are enough similarities, though, to make learning Java programming much easier. We're not going to cover all of this code, but you'll learn some important basics here. A Java class defines an object. For this program, the developer decided to create Wombat objects, and he wrote a class that defines what a wombat is and what it can do. In other words, a class is like a recipe for a pie. In a recipe, you list all of the ingredients that need to go into the pie, and you detail the instructions for what to do with the ingredients. A class does the same thing, but it's written a bit differently. Objects and Inheritance Scroll down from the top a bit, to this line of code: public class Wombat extends Actor This is a class declaration. The declaration declares, or simply says, this class is going to create an object named Wombat. The extends Actor part of the code tells us that there is a class named Actor that this Wombat class inherits from. Starting to sound complicated? It's not. In fact, inheritance simplifies code so you can reuse classes. Inheritance in Java programs works very much like inheritance works for people. In the real world, you inherit a lot from your parents. You have a body with two legs, two arms, two eyes, and so on. 
So, in a Java program, let's say there is a Person class. This class declares a Person as an object that has a human-shaped body, two legs, two arms, and so forth, and it has the methods eat(), walk(), sleep(). Of course, there would be many more methods, but you get the idea. Let's say you want to create a Jane object. Jane is also a person, so if you extend the Jane class from the Person class, then Jane has all those methods in the Person class available to her. In addition, you can give Jane methods and traits that are individual to her, and are not necessarily common to all people, as shown in Figure 4. Extending prevents you from having to start from scratch and rewrite these methods for any Person objects you create while allowing you to add unique features to each object. So, let's get back to Wombat objects. Each Wombat extends the Actor class, which means every Wombat automatically can use anything defined in the Actor class. In real life, people understand you better if they understand a bit about your parents, and a bit about human beings in general. Likewise, we need to understand the Actor class to better understand Wombat objects. Java Documentation, or Application Programming Interface ( API) To learn about the Actor class, you also get to learn about Java documentation, or API. Close the Greenfoot editor you opened, and then open a new editor by right-clicking the Actor under Actor classes. In the editor you'll see some wonderful information about the Actor class, and what you get by extending the Wombat class from it. The important thing in the description to note is that objects such as Wombat need to inherit from the Actor class, and that one of the most important methods of this class is the act() method. The act() method is invoked by the user clicking the Act or Run button in Greenfoot, so you need to inherit that method into the Wombat object, just as you needed to inherit the eat() method from the Person class. Scroll down through this documentation to see what other methods are available. Don't be intimidated by this documentation. You'll get very familiar with it as you learn more and more about Java programming. Making Wombats Move Back to the Wombat object. Open an editor for the Wombat class again by right-clicking Wombat and selecting Open editor. Scroll down to the act() method, as shown here: As you recall, the act() method is from the Actor class. But the details of what happens when the act() method is invoked weren't specifically defined in the Actor class. That is so each object that extends Actor can have individual behavior. For instance, you could create an object named Monsters, and give that object different instructions in the act() method than the Wombat object uses. Additionally, you needn't use the methods from the Actor class. If you look at the code for the Leaf class in the editor, you'll see that no methods are defined. That's because the Leaf objects don't do anything but sit there until they get eaten. They don't have any methods! So, what are the details of this act() method in the Wombat class? Look again at that method detailed above. It's easy enough to read: If a Wombat finds a leaf, it eats the leaf, or it simply moves along. If it can't move any farther, it will turn left. How does it do all that? Scroll down some more in the Wombat class, and you'll see the instructions for a few more methods: foundLeaf(), eatLeaf(), move(), canMove(), turnLeft(), setDirection, gentleness(). 
We won't get into the details of these methods yet, except to say these instructions are what tell the Wombat how to interact in the program. If you want to change the behavior of a Wombat, you'd change one of these methods. You'll do that later . . . . Syntax refers to the rules we use to decide what order to put things in, so that they make sense. For example, we have rules in the English language about the order of words in sentences, so that the words make sense together. As mentioned earlier, Java programming syntax is a bit different from English sentences syntax. A few things worth noticing: For the fun of it, change the behavior of the wombats. If your code editor is not already open, open it by either double-clicking Wombat under Actor classes, or right-clicking Wombat and selecting Open editor. Scroll down to the act() method and replace the current method with this new act() method: Notice the change in the act() method: Now the method calls turnRamdom() instead of turnLeft. Naturally, you need to add this new turnRandom() method after the one above: Click Compile at the top of the editor. If no errors occur, go back to your main screen, click Compile All, populate the world again, and then click Run. Now the Wombats move in other directions besides left to go after the leaves. If you had errors, you can use the code for the entire class with these changes made by clicking here. What is happening when you compile? In the Java programming language, all source code is first written in plain class text files that end with the .java extension. Those source files are then compiled into .class files by the javac compiler. A .class file does not contain code that is native to your processor. Instead it contains bytecodes - the machine language of the Java Virtual Machine (Java VM). You'll learn more about this when you start writing your own Java programs. Summary You learned a lot in this article! Now you should understand the following concepts: Wombat extends Actor In the next article, you'll learn about Java packages and import statements, variables and fields, and more. With everything you have learned in these two articles, you should be ready to begin exploring the Java Tutorial and learning more about the Java programming language. Its beginning lessons will make sense to you now that you have the Wombat basics to put it into context. Young Developer Series: Application Basics with Ants (Part.3)
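The bodies of the modified act() method and the new turnRandom() method do not appear in this copy of the article (they were likely shown as images or links). As a rough, hedged sketch of what they might look like inside the Wombat class, assuming the class's existing setDirection() helper, its compass directions numbered 0 through 3, and Greenfoot's getRandomNumber() method:

    public void act()
    {
        if (foundLeaf()) {
            eatLeaf();
        }
        else if (canMove()) {
            move();
        }
        else {
            turnRandom();   // previously turnLeft()
        }
    }

    /**
     * Turn to face a randomly chosen compass direction.
     */
    public void turnRandom()
    {
        // Greenfoot.getRandomNumber(4) returns 0, 1, 2 or 3.
        setDirection(Greenfoot.getRandomNumber(4));
    }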
http://www.oracle.com/technetwork/articles/javase/wombat-world-141896.html
CC-MAIN-2017-39
refinedweb
1,709
71.04
It can sometimes be useful to obtain the MAC address of your Raspberry Pi. The Media Access Control address is a unique identifier given to all networked devices. The address is different for all Pi’s and can be used to identify your device. Think of it as a digital fingerprint. There are a number of ways to identify it using the command line or using Python code. Below are some quick examples you can use to find the MAC address.

From the Command Line

To find the MAC address from the command line you can use the following command:

    cat /sys/class/net/eth0/address

or you can type:

    ifconfig eth0

You can swap “eth0” for “wlan0” if you have an active wireless connection. This will result in output similar to:

    eth0  Link encap:Ethernet  HWaddr c7:35:ce:fd:8e:a1
          inet addr:192.168.0.16

In this example “c7:35:ce:fd:8e:a1” is the MAC address.

Finding the MAC Address Using Python

To get the MAC address into a Python variable you can use the following example code:

    # Read MAC from file
    myMAC = open('/sys/class/net/eth0/address').read()
    # Echo to screen
    print myMAC

The following Python function can be used to obtain the MAC address of your Raspberry Pi:

    def getMAC(interface):
        # Return the MAC address of interface
        try:
            str = open('/sys/class/net/' + interface + '/address').read()
        except:
            str = "00:00:00:00:00:00"
        return str[0:17]

This function can be called using the following line:

    getMAC('eth0')

Or if you have a WiFi connection:

    getMAC('wlan0')

Comments:

or you can use: `$ ifconfig` & it’s the `HWaddr` numbers

Thanks for the tips. I’ve updated the article to mention “ifconfig”.

You should also mention typing the following in the command line: ifconfig

And what about good old-fashioned #ifconfig?

ifconfig and friends are wrappers to the /sys filesystem. Isn’t there something nice and simple about read()ing and write()ing to a file rather than spawning another program, and parsing the output?

A related approach using Python’s standard library is sketched below.
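For reference (not part of the original article), Python's standard library can also report a hardware address via uuid.getnode(). It does not let you choose between eth0 and wlan0, and per the Python documentation it falls back to a random number if no MAC can be read, so the /sys approach above remains the most explicit. A minimal sketch:

    import uuid

    node = uuid.getnode()              # MAC of one interface, as a 48-bit integer
    hexstr = '%012x' % node            # zero-padded hex string, e.g. 'c735cefd8ea1'
    mac = ':'.join(hexstr[i:i+2] for i in range(0, 12, 2))
    print(mac)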
http://www.raspberrypi-spy.co.uk/2012/06/finding-the-mac-address-of-a-raspberry-pi/
CC-MAIN-2017-30
refinedweb
342
61.06
Disclaimer: at the moment of writing this article mkdev is not running containers in production. Images built below are only used for development, tests and CI system and are never run on production servers. Once mkdev decides to use containers in production, the contents and setup of our container images will change to be actually suitable for prod. Keep this in mind when reading this post. In the previous article we looked at all the reasons why you would want to taste a Dockerless life. We decided to try two new tools that will replace Docker: Buildah and Podman. In this article we will learn what Buildah is and how to use it to put your Ruby on Rails application into a container. What is a container image? Before we learn the tool, let's first learn what a container image is by reading the article A sysadmin's guide to containers. From there we learn that container image is a TAR file of two things: - Container root filesystem. To simply say, it's a directory with all the regular directories you would expect to be inside the container, like /usr, /home etc. - JSON file, a config file that defines how to run this root filesystem -- which commands to execute, which environment variables to set and so on. Contents of container image are defined in OCI image spec, your go-to destination if you want to learn more about the structure of container images. It might sound crazy, but you don't have to use image-spec for container images, you can use it for other things too. What is Buildah? Buildah is a container image builder tool, that produces OCI-compliant images. It is distributed as a single binary and is written in Go. Buildah is available as a package in most of modern Linux distributions, just follow official installation instructions. Buildah can only be used to manipulate images. It's job is to build container images and push them to registries. There is no daemon involved. Neither does Buildah require root privileges to build images. This makes Buildah especially handy as part of a CI/CD pipeline -- you can easily run Buildah inside a container without granting this container any root rights. To me personally the whole Docker in Docker setup required on container-based CI systems (Gitlab CI with Docker executor, for example) just to be able to build new container image felt a bit of an overkill. With Buildah there is no need for this, due to the narrow focus on things it needs to do well and things it should not do at all. One place where Buildah appears to be very useful is BuildConfigurations in OpenShift. Starting from OpenShift 4.0 BuildConfigs will rely on Buildah instead of Docker, thus removing the need to share any sockets or having privileged containers inside the OpenShift platform. Needless to say that it results in a more secure and cleaner way to build container images inside one of most popular container platforms out there. Images built by Buildah can be used by Docker without any issues. They are not "Buildah Images", but rather just "Container Images", they follow OCI specification, which is understood by Docker as well. So how do we build an image with Buildah? With Buildahfile Just kidding, there is actually no Buildahfile involved. Instead, Buildah can just read Dockerfiles, making transition from Docker to Buildah as easy as it can get. At mkdev we use Mattermost at the core of our messaging platform. It is important that we are able to run Mattermost locally to be able to easily develop integrations between primary web application and the messaging system. 
Even though Mattermost already provides official Docker images, we had to build our own due to the way we prefer to configure it and also to make it easier to run ephemeral test instances of Mattermost. We also want to pre-install certain Mattermost plugins that our mentors rely on. So we took the official Dockerfile, modified it a bit and fed it to Buildah: FROM alpine:3.9 # Some ENV variables ENV PATH="/opt/mattermost/bin:${PATH}" ENV MM_VERSION=5.8.0 # Set defaults for the config ENV MMDBCON=localhost:5432 \ MMDBKEY=XXXXXXXXXXXX \ MMSMTPUSERNAME=postfix \ MMSMTPPASSWORD=secrets \ MMSMTPSALT=XXXXXXXXXXXX \ MMGITHUBSECRET=secret \ MMGITHUBHOOK=localhost # Build argument to set Mattermost edition ARG PUID=2000 ARG PGID=2000 # Install some needed packages RUN apk add --no-cache \ ca-certificates \ curl \ jq \ libc6-compat \ libffi-dev \ linux-headers \ mailcap \ netcat-openbsd \ xmlsec-dev \ && rm -rf /tmp/* ## Get Mattermost RUN mkdir -p /opt/mattermost/data /opt/mattermost/plugins /opt/mattermost/client/plugins \ && cd /opt \ && curl | tar -xvz \ && curl -L -o /tmp/github.tar.gz \ && cd /opt/mattermost/plugins \ && tar -xvf /tmp/github.tar.gz COPY files/entrypoint.sh / COPY files/mattermost.json /opt/mattermost/config/config.json RUN chmod +x /entrypoint.sh \ && addgroup -g ${PGID} mattermost \ && adduser -D -u ${PUID} -G mattermost -h /mattermost -D mattermost \ && chown -R mattermost:mattermost /opt/mattermost /opt/mattermost/plugins /opt/mattermost/client/plugins USER mattermost # Configure entrypoint and command ENTRYPOINT ["/entrypoint.sh"] WORKDIR /opt/mattermost CMD ["mattermost"] # Expose port 8000 of the container EXPOSE 8065 # Declare volumes for mount point directories VOLUME ["/opt/mattermost/data", "/opt/mattermost/logs", "/opt/mattermost/config", "/opt/mattermost/plugins", "/opt/mattermost/client/plugins"] If it looks to you just like any other regular Dockerfile then only because it is, in fact, just a regular Dockerfile. Let's run Buildah: buildah bud -t docker.io/mkdevme/mattermost:5.8.0 . The output that will follow is similar to what you see when you run docker build . command. The resulting image will be stored locally, you can see it when you run buildah images command. Nice little feature of Buildah is that your images are user-specific, meaning that only the user who built this image is able to see and use it. If you run buildah images as any other system user, you won't see anything. This is different from Docker, where docker images always list same set of images for all the users. Once you built the image, you can push it to the registry. Buildah supports multiple transports to push your image. Some transport examples are docker-daemon -- if you still have Docker running locally and you want this image to be seen by Docker, docker -- if you want to push the image to Docker API compatible remote registry. There are other transports that are not Docker-specific: oci, containers-storage, dir etc. Nothing stops you from using Buildah to push the image to Docker Hub, if that's your registry of choice. By using Buildah we are not thinking in terms of Docker Images. It's more like if we would have a Git repository that we could push to GitHub, GitLab or BitBucket. Same way we can push our Container Image to the registry of choice -- Docker Hub, Quay, AWS ECR and others. Inspecting the image One of the transports Buildah supports is dir. 
When you push your image to dir, which is just a directory on filesystem, Buildah will store there tarballs for the layers and configuration of your image and a JSON manifest file. This is only useful for debugging and perfect for seeing the internals of an image. Create some directory and run buildah push IMAGE dir:/$(pwd). I don't expect you to actually build a Mattermost image, just use any other image. If you don't have any and don't want to build any, then just buildah pull any image from Docker Hub. Once finished, you will see files with names like 96c6e3522e18ff696e9c40984a8467ee15c8cf80c2d32ffc184e79cdfd4070f6, which is actually a tarball. You can untar this file into a destination of your choice and see all the files inside this image layer. You will also see an image manifest.json file, in case of Mattermost it looks like this: { "schemaVersion": 2, "config": { "mediaType": "application/vnd.oci.image.config.v1+json", "digest": "sha256:57ea4e4c7399849779aa80c7f2dd3ce4693a139fff2bd3078f87116948d1991b", "size": 1262 }, "layers": [ { "mediaType": "application/vnd.oci.image.layer.v1.tar", "digest": "sha256:6bb94ea9af200b01ff2f9dc8ae76e36740961e9a65b6b23f7d918c21129b8775", "size": 2832039 }, { "mediaType": "application/vnd.oci.image.layer.v1.tar", "digest": "sha256:96c6e3522e18ff696e9c40984a8467ee15c8cf80c2d32ffc184e79cdfd4070f6", "size": 162162411 } ] } Image manifest is described by OCI spec. If you look closely at the example above, it defines two layers (vnd.oci.image.layer.v1.tar) and one config file (vnd.oci.image.config.v1+json). We can see that the config has a digest 57ea4e4c7399849779aa80c7f2dd3ce4693a139fff2bd3078f87116948d1991b. We have this file as well and though it looks just like layer files, it's actually a config file of the image. This might be a bit confusing, but keep in mind that this structure was created for other software to store and process, not for the human eye to read. If you need to quickly figure which file in the image stores the config, always look at the manifest.json first: { "created": "2019-05-12T16:13:28.951120907Z", "architecture": "amd64", "os": "linux", "config": { "User": "mattermost", "ExposedPorts": { "8065/tcp": {} }, "Env": [ "PATH=/opt/mattermost/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "MM_VERSION=5.8.0", "MMDBCON=localhost:5432", "MMDBKEY=XXXXXXXXXX", "MMSMTPUSERNAME=postfix", "MMSMTPPASSWORD=secrets", "MMSMTPSALT=XXXXXXXXXX", "MMGITHUBSECRET=secret", "MMGITHUBHOOK=localhost" ], "Entrypoint": [ "/entrypoint.sh" ], "Cmd": [ "mattermost" ], "Volumes": { "/opt/mattermost/client/plugins": {}, "/opt/mattermost/config": {}, "/opt/mattermost/data": {}, "/opt/mattermost/logs": {}, "/opt/mattermost/plugins": {} }, "WorkingDir": "/opt/mattermost" }, "rootfs": { "type": "layers", "diff_ids": [ "sha256:f1b5933fe4b5f49bbe8258745cf396afe07e625bdab3168e364daf7c956b6b81", "sha256:462e838baed1292fb825d078667b126433674cdc18c1ba9232e2fb8361fc8ac2" ] }, "history": [ { "created": "2019-05-11T00:07:03.358250803Z", "created_by": "/bin/sh -c #(nop) ADD file:a86aea1f3a7d68f6ae03397b99ea77f2e9ee901c5c59e59f76f93adbb4035913 in / " }, { "created": "2019-05-11T00:07:03.510395965Z", "created_by": "/bin/sh -c #(nop) CMD [\"/bin/sh\"]", "empty_layer": true }, { "created": "2019-05-12T16:13:28.951120907Z" } ] } So, just a bunch of tarballs and json files -- that's the whole container image! You say Dockerless but you still rely on Dockerfile! Creators of Buildah intentionally decided not to introduce new DSL for defining container images. 
Buildah gives you two ways to define an image: a Dockerfile or a sequence of buildah commands. We will learn the second way shortly, but I must warn you that I don't think Dockerfiles will disappear anytime soon. And there is probably nothing wrong with them, except the name itself. Imagine investing in going Dockerless only to find yourself still writing Dockerfiles! I wish they would be called Containerfiles or Imagefiles. That would be much less awkward for the community. But as of now, the convention is to name this file a Dockerfile and we simply have to deal with it.

Building images with Buildah directly

The second way to build an image with Buildah is by using buildah commands. The way Buildah builds images is by creating a new container from a base image and then running commands inside this container. After all commands have run, you can commit this container to become an image. Let's build an image this way and then discuss if and when this is better than writing a Dockerfile.

We first need to start a new container from an existing image:

buildah from centos:7

If the image doesn't exist locally yet, it will be pulled from the registry, just like when you use Docker. The buildah from command returns the name of the container that was started; normally it's "IMAGE_NAME-working-container", and in our case it's centos-working-container. We need to remember to use this name for all of the future commands.

We can run commands inside this container with the buildah run command:

buildah run centos-working-container -- yum install unzip -y

And we can configure various OCI-compliant options for the future image with the buildah config command, for example an environment variable:

buildah config -e ENVIRONMENT=test centos-working-container

We can also mount the complete container filesystem inside the build server and manipulate it directly from the host with the tools installed on the host. This is useful when we don't want to install certain tools inside the image just to do some build-time manipulations. Keep in mind that in this case you need to make sure all these tools are installed on the machine of anyone who wants to build your image (which then kind of ruins the portability of your build script).

buildah mount centos-working-container

In return Buildah will give you the location of the mounted filesystem, for example /home/fodoj/.local/share/containers/storage/overlay/DIGEST/merged. Just to test, we can then create a file there:

touch /home/fodoj/.local/share/containers/storage/overlay/DIGEST/merged/home/hello-from-host

Once we are happy with the image, we can commit it:

buildah commit centos-working-container my-first-buildah-image

And remove the working container:

buildah rm centos-working-container

Note that even though Buildah does run containers, it provides no way to do it in a way that would be useful for anything but building images. Buildah is not a replacement for a container engine, it only gives you some primitives to debug the process of building an image! Images built by Buildah are visible to Podman, which will be the topic of the next article. For now, if you want to verify that the file hello-from-host really exists, run this:

image=$(buildah from my-first-buildah-image)
ls $(buildah mount $image)/home
$> hello-from-host

This will create another working container, mount it and show the contents of the /home directory. The way we did it is actually the way to go if you want to build images with Buildah and without a Dockerfile.
Instead of a Dockerfile you should write a shell script that invokes all the commands, commits the image and removes the working container. That's how the "Buildahfile" (that's really just a shell script) for mkdev looks like: #!/bin/bash set -x mkdev=$(buildah from centos:7) buildah run "$mkdev" -- curl -L -o epel-release-latest-7.noarch.rpm buildah run "$mkdev" -- curl -L -o wkhtmltopdf.rpm buildah run "$mkdev" -- curl "" -o "awscli-bundle.zip" buildah run "$mkdev" -- rpm -ivh epel-release-latest-7.noarch.rpm buildah run "$mkdev" -- yum install centos-release-scl -y buildah run "$mkdev" -- yum install unzip postgresql-libs postgresql-devel ImageMagick \ autoconf bison flex gcc gcc-c++ gettext kernel-devel make m4 ncurses-devel patch \ rh-ruby25 rh-ruby25-ruby-devel rh-ruby25-rubygem-bundler rh-ruby25-rubygem-rake \ rh-postgresql96-postgresql openssl-devel libyaml-devel libffi-devel readline-devel zlib-devel \ gdbm-devel ncurses-devel gcc72-c++ \ python-devel git cmake python2-pip chromium chromedriver which -y buildah run "$mkdev" -- pip install ansible boto3 botocore buildah run "$mkdev" -- yum install wkhtmltopdf.rpm -y buildah run "$mkdev" -- ln -s /usr/local/bin/wkhtmltopdf /bin/wkhtmltopdf buildah run "$mkdev" -- unzip awscli-bundle.zip buildah run "$mkdev" -- ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws buildah run "$mkdev" -- yum clean all && rm -rf /var/cache/yum git archive -o app.tar.gz --format=tar.gz HEAD buildah add "$mkdev" app.tar.gz /app/ buildah add "$mkdev" infra/app/build/entrypoint.sh /entrypoint.sh buildah config --workingdir /app "$mkdev" buildah run "$mkdev" -- scl enable rh-ruby25 "bundle install" rm app.tar.gz buildah config --port 3000 "$mkdev" buildah config --entrypoint '[ "/entrypoint.sh" ]' "$mkdev" buildah run "$mkdev" -- chmod +x /entrypoint.sh buildah config --cmd "bundle exec rails s -b '0.0.0.0' -P /tmp/mkdev.pid" "$mkdev" buildah config --env LC_ALL="en_US.UTF-8" "$mkdev" buildah run "$mkdev" -- rm -rf /app/ buildah commit "$mkdev" "docker.io/mkdevme/app:dev" buildah rm "$mkdev" This script probably looks extremely stupid to you if you ever produced a good container image in your life. Let me explain some of the things that are happening there: - We use Centos 7 as a base image because in production we run on Centos 7. Even if we don't run containers in production just yet, it makes sense to keep the development environment as close to production one as possible. - We do install a ridiculous number of packages, including AWS CLI, Chromium, Software Collections and what not. We do it because we use the resulting image in development environment and in our CI system. Both of these locations require extra tooling to run integration tests (Chromium) or perform some packaging and deployment tasks (AWS CLI and Ansible). Software Collections are used in our production environment and it's important we use the same Ruby version in all other envs as well. - We remove the code of the application itself at the very end. For this use case, we don't really need the code to be in the image. In both development environment and CI we need the latest version of the code, not something baked into the image. We store this script inside the application repo, just like we would keep the Dockerfile there. Once we decide we want to run mkdev in containers in production, we can modify this script to do different things depending on the environment. You can use this approach only if your build server is able to run the shell script. 
This is not a problem because Windows has WSL, for example. Your host system doesn't have to be Linux based as long as it is able to run some kind of Linux inside! Will it work one day for MacOS users without an extra Linux VM? Who knows, let's hope the Buildah developers are working on it.

How does Buildah work internally?

Both Podman and Buildah work quite similarly internally. They both make use of Linux kernel features, specifically user namespaces and network namespaces, to make it possible to run containers without any root privileges. I won't talk about it in this article, but if you can't wait, then start by reading the following resources:

- How rootless Buildah works: building containers in unprivileged environments
- Podman: A more secure way to run containers
- Podman and user namespaces: A marriage made in heaven

What's next

I hope you've learned a lot about container images today. Buildah is a great tool not only for local development, but for any kind of automation around building container images. It's not the only one available, Kaniko from Google being another example, though Kaniko is a bit more focused on Kubernetes environments.

Now that we have an image in place, it's time to run it. In the next article I will show you how to use Podman to completely automate the local development environment for a Ruby on Rails application. We will learn how to use the Kube YAML feature of Podman to describe all the services in a Kubernetes-compliant YAML definition, how to run a Rails application in a container and how to run the tests of the Rails application in this container. Containers, and Podman in particular, will become really handy when we start creating ephemeral Mattermost instances just for integration testing.

Feel free to ask any questions in the comments below, I will make sure to reply to them directly or extend this article!

This is an mkdev article written by Kirill Shirinkin. You can hire our DevOps mentors to learn all about containerization yourself.

Discussion

The Podman proposal is quite interesting, mainly because it maintains the same interface as Docker and runs without a daemon. However, in more complex setups it is still quite immature. For example, in an approach with k8s.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/mkdev/dockerless-part-2-how-to-build-container-image-for-rails-application-without-docker-and-dockerfile-48e8
CC-MAIN-2020-45
refinedweb
3,222
54.22
XML::ValidWriter - DOCTYPE driven valid XML output ## As a normal perl object: $writer = XML::ValidWriter->new( DOCTYPE => $xml_doc_type, OUTPUT => \*FH ) ; $writer->startTag( 'b1' ) ; $writer->startTag( 'c2' ) ; $writer->end ; ## Writing to a scalar: $writer = XML::ValidWriter->new( DOCTYPE => $xml_doc_type, OUTPUT => \$buf ) ; ## Or, in scripting mode: use XML::Doctype NAME => a, SYSTEM_ID => 'a.dtd' ; use XML::ValidWriter qw( :all :dtd_tags ) ; b1 ; # Emits <a><b1> c2( attr=>"val" ) ; # Emits </b1><b2><c2 attr="val"> endAllTags ; # Emits </c2></b2></a> ## If you've got an XML::Doctype object handy: use XML::ValidWriter qw( :dtd_tags ), DOCTYPE => $doctype ; ## If you've saved a preparsed DTD as a perl module use FooML::Doctype::v1_0001 ; use XML::ValidWriter qw( :dtd_tags ) ; # # This all assumes that the DTD contains: # # <!ELEMENT a ( b1, b2?, b3* ) > # <!ATTLIST a aa1 CDATA #REQUIRED > # <!ELEMENT b1 ( c1 ) > # <!ELEMENT b2 ( c2 ) > # Alpha. Use and patch, don't depend on things not changing drastically. Many methods supplied by XML::Writer are not yet supplied here. This module uses the DTD contained in an XML::Doctype to enable compile- and run-time checks of XML output validity. It also provides methods and functions named after the elements mentioned in the DTD. If an XML::ValidWriter uses a DTD that mentions the element type TABLE, that instance will provide the methods $writer->TABLE( $content, ...attrs... ) ; $writer->start_TABLE( ...attrs... ) ; $writer->end_TABLE() ; $writer->empty_TABLE( ...attrs... ) ; . These are created for undeclared elements--those elements not explicitly declared with an <!ELEMENT ..> declaration--as well. If an element type name conflicts with a method, it will not override the internal method. When an XML::Doctype is parsed, the name of the doctype defines the root node of the document. This name can be changed, though, see XML::Doctype for details. In addition to the object-oriented API, a function API is also provided. This allows you to import most of the methods of XML::ValidWriter as functions using standard import specifications: use XML::ValidWriter qw( :all ) ; ## Could list function names instead :all does not import the functions named after elements mentioned in the DTD, you need to import those tags using :dtd_tags: use XML::Doctype NAME => 'foo', SYSTEM_ID => 'fooml.dtd' ; use XML::ValidWriter qw( :all :dtd_tags ) ; or BEGIN { $doctype = XML::Doctype->new( ... ) ; } use XML::ValidWriter DOCTYPE => $doctype, qw( :all :dtd_tags ) ;. If you find you need a method not suported here, write it and send it in! This was not derived from XML::Writer because XML::Writer does not expose it's stack. Even if it did, it's might be difficult to store enough state in it's stack. Unlike XML::Writer, this does not call in all of the IO::* family, and method dispatch should be faster. DTD-specific methods are also supported (see "AUTOLOAD"). For quick applications that provide Unix filter application functionality, XML::ValidWriter and XML::Doctype cooperate to allow you to use XML::Doctype NAME => 'FooML, SYSTEM_ID => 'fooml.dtd' ; syntax. :dtd_tagsexport symbol like so: use XML::Doctype NAME => 'FooML, SYSTEM_ID => 'fooml.dtd' ; use XML::ValidWriter qw(:dtd_tags) ; If the elements a, b_c, and d-e are referred to in the DTD, the following functions will be exported: a() end_a() # like startTag( 'a', ... 
) and endTag( 'a' ) b_c() end_b_c() d_e() end_d_e() {'d-e'}() {'end_d-e'}() These functions emit only tags, unlike the similar functions found in CGI.pm and XML::Generator, which also allow you to pass content in as parameters. See below for details on conflict resolution in the mapping of entity names containing /\W/ to Perl subroutine names. If the elements declared in the DTD might conflict with functions in your package namespace, simple put them in some safe namespace: package FooML ; use XML::Doctype NAME => 'FooML', SYSTEM_ID => 'fooml.dtd' ; use XML::ValidWriter qw(:dtd_tags) ; package Whatever ; The advantage of importing these subroutine names is that perl can then detect use of unknown tags at compile time. If you don't want to use the default DTD, use the -dtd option: BEGIN { $dtd = XML::Doctype->new( .... ) } use XML::ValidWriter qw(:dtd_tags), -dtd => \$dtd ; Since the functions created by the :dtd_tags export symbol are wrappers around startTag() and endTag(), they provide this functionality as well. So, if you have a DTD like <!ELEMENT a ( b1, b2?, b3* ) > <!ATTLIST a aa1 CDATA #REQUIRED > <!ELEMENT b1 ( c1 ) > <!ELEMENT b2 ( c2 ) > <!ELEMENT b3 ( c3 ) > you can do this: use XML::Doctype NAME => 'a', SYSTEM_ID => 'a.dtd' ; use XML::ValidWriter ':dtd_tags' ; getDoctype->element_decl('a')->attdef('aa1')->default_on_write('foo') ; a ; b1 ; c1 ; end_c1 ; end_b1 ; b3 ; c3( -attr => val ) ; end_c3 ; end_b3 ; end_a ; and emit a document like <a aa1="foo"> <b1> <c1 /> </b1> <b3> <c3 attr => "val" /> </b3> </a> . XML is a very simple langauge and does not offer a lot of room for optimization. As the spec says "Terseness in XML markup is of minimal importance." XML::ValidWriter does optimize the following on output: <a...></a> becomes '<a... />' Spurious emissions of ]]><![CDATA[ are supressed. XML::ValidWriter chooses whether or not to use a <![CDATA[...]]> section or simply escape '<' and '&'. If you are emitting content for an element in multiple calls to "characters", the first call decides whether or not to use CDATA, so it's to your advantage to emit as much in the first call as possible. You can do characters( @lots_of_segments ) ; if it helps. All of the routines in this module can be called as either functions or methods unless otherwise noted. To call these routines as functions use either the DOCTYPE or :dtd_tags options in the parameters to the use statement: use XML::ValidWriter DOCTYPE => XML::Doctype->new( ... ) ; use XML::ValidWriter qw( :dtd_tags ) ; This associates an XML::ValidWriter and an XML::Doctype with the package. These are used by the routines when called as functions. $writer = XML::ValidWriter->new( DTD => $dtd, OUTPUT => \*FH ) ; Creates an XML::ValidWriter. The value passed for OUTPUT may be: if you want to direct output to append to a scalar. This scalar is truncated whenever the XML::ValidWriter object is reset() or DESTROY()ed XML::ValidWriter does not load IO. This is the only mode compatible with XML::Writer. A simple scalar is taken to be a filename to be created or truncated and emitted to. This file will be closed when the XML::ValidWriter object is reset or deatroyed. NOTE: if you leave OUTPUT undefined, then the currently select()ed output is used at each emission (ie calling select() can alter the destination mid-stream). This eases writing command line filter applications, the select() interaction is unintentional, and please don't depend on it. 
I reserve the right to cache the select()ed filehandle at creation time or at time of first emission at some point in the future. Can't think of why you'd call this method directly, it gets called when you use this module: use XML::ValidWriter qw( :all ) ; In addition to the normal functionality of exporting functions like startTag() and endTag(), XML::ValidWriter's import() can create functions corresponding to all elements in a DTD. This is done using the special :dtd_tags export symbol. For example, use XML::Doctype NAME => 'FooML', SYSTEM_ID => 'fooml.dtd' ; use XML::ValidWriter qw( :dtd_tags ) ; where fooml.dtd referse to a tag type of 'blurb' causes these functions to be imported: blurb() # calls defaultWriter->startTag( 'blurb', @_ ) ; blurb_element() # calls defaultWriter->dataElement( 'blurb', @_ ) ; empty_blurb() # calls defaultWriter->emptyTag( 'blurb', @_ ) ; end_blurb() # calls defaultWriter->endTag( 'blurb' ) ; The range of characters for element types is much larger than the range of characters for bareword perl subroutine names, which are limited to [a-zA-Z0-9_]. In this case, XML::ValidWriter will export an oddly named function that you can use a symbolic reference to call (you will need no strict 'refs' ; if you are doing a use strict ;): &{"space-1999:moonbase"}( ...attributes ... ) ; . XML::ValidWriter will also try to fold the name in to bareword space by converting /\W/ symbols to '_'. If the resulting function name, space_1999_moonbase( ...attributes... ) ; has not been generated and is not the name of an element type, then it will also be exported. If you are using a DTD that might introduce function names that conflict with existing ones, simple export them in to their own namespace: package ML ; use XML::Doctype NAME => 'foo', SYSTEM_ID => 'fooml.dtd' ; use XML::ValidWriter qw( :dtd_tags ) ; package main ; use XML::ValidWriter qw( :all ) ; ML::foo ; ML::c2 ; ML::c1 ; ML::end_a ; I gave serious thought to converting ':' in element names to '::' in function declarations, which might work well in the functions-in-their-own- namespace case, but not in the default case, since Perl does not (yet) have relative namespaces. Another alternative is to allow a mapping of XML namespaces to Perl namespaces to be done. characters( "escaped text", "& more" ) ; $writer->characters( "escaped text", "& more" ) ; Emits character data. Character data will be escaped before output, by either transforming '<' and '&' to < and &, or by enclosing in a ' <![CDATA[...]]>' bracket, depending on which will be more human-readable, according to the module. $writer->dataElement( $tag ) ; $writer->dataElement( $tag, $content ) ; $writer->dataElement( $tag, $content, attr1 => $val1, ... ) ; dataElement( $tag ) ; dataElement( $tag, $content ) ; dataElement( $tag, $content, attr1 => $val1, ... ) ; Does the equivalent to ## Split the optional args in to attributes and elements arrays. $writer->startTag( $tag, @attributes ) ; $writer->characters( $content ) ; $writer->endTag( $tag ) ; This function is exportable as dataElement(), and is also exported for each element 'foo' found in the DTD as foo(). $writer = defaultWriter ; ## Not a method! $writer = defaultWriter( 'Foo::Bar' ) ; Returns the default XML::ValidWriter for the given package, or the current package if none is specified. This is useful for getting at methods like reset that are not also functions. Croaks if no default writer has been defined (see "import"). 
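As a small illustration of how dataElement() and the generated tag functions fit together, here is a hedged sketch that reuses the toy DTD quoted earlier ( <!ELEMENT a ( b1, b2?, b3* ) > and friends). Whether c1 may legally contain character data depends on the real DTD, so treat the content strings as placeholders.

    use XML::Doctype     NAME => 'a', SYSTEM_ID => 'a.dtd' ;
    use XML::ValidWriter qw( :all :dtd_tags ) ;

    xmlDecl( 'UTF-8' ) ;
    a( aa1 => 'foo' ) ;                      # <a aa1="foo">
    b1 ;
    dataElement( 'c1', 'some text' ) ;       # <c1>some text</c1>
    end_b1 ;
    b3 ;
    c3( attr => 'val' ) ;
    characters( 'escaped < text & more' ) ;  # '<' and '&' are escaped for you
    end_c3 ;
    end_b3 ;
    endAllTags ;                             # close anything still open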
# Using the writer's associated DTD: doctype ; # Ignoring the writer's associated DTD: doctype( $type ) ; doctype( $type, undef, $system ) ; doctype( $type, $public, $system ) ; $writer->doctype ; ...etc See "internalDoctype" to emit the entire DTD in the document. This checks to make sure that no doctype or elements have been emitted. A warning is emitted if standalone="yes" was specified in the <?xml..?> declaration and a system id is specified. This is extremely likely to be an error. If you need to silence the warning, write me (see below). Passing '' or '0' (zero) as a $public_id or as a $system_id also generates a warning, as these are extremely likely to be errors. emptyTag( $tag[, attr1 => $val1... ] ) ; $writer->emptyTag( $tag[, attr1 => $val1... ] ) ; Emits an empty tag like '<foo />'. The extra space is for compatibility with XHTML.. $writer->end ; # Not a function!! Emits all necessary end tags to close the document. Available as a method only, since 'end' is a little to generic to be exported as a function name, IMHO. See 'endAllTags' for the plain function equivalent function. endAllTags ; $writer->endAllTags ; A plain function that emits all necessart end tags to close the document. Corresponds to the method end, but is exportable as a function/ $writer->exportDTDTags() ; $writer->exportDTDTags( $to_pkg ) ; Exports the tags found in the DTD to the caller's namespace. $m = getDataMode ; $m = $writer->getDataMode ; Returns TRUE if the writer is in DATA_MODE. $dtd = getDoctype ; $dtd = $writer->getDoctype ; This is used to get the writer's XML::Doctype object. $fh = getOutput ; $fh = $writer->getOutput ; Gets the filehandle an XML::ValidWriter sends output to. rawCharacters( "<unescaped text>", "& more text" ) ; $writer->rawCharacters( "<unescaped text>", "& more text" ) ; This allows you to emit raw text without any escape processing. The text is not examined for tags, so you can invalidate your document and even corrupt it's well-formedness. $writer->reset ; # Not a function! Resets a writer to be initialized, but not have emitted anything. This is useful if you need to abort output, but want to reuse the XML::ValidWriter. setDataMode( 1 ) ; $writer->setDataMode( 1 ) ; Enable or disable data mode. setDoctype $doctype ; $writer->setDoctype( $doctype ) ; This is used to set the doctype object. select_xml OUTHANDLE ; # Nnot a method!! Selects a filehandle to send the XML output to when not using the object oriented interface. This is similar to perl's builtin select, but only affects startTag and endTag functions, (not methods). This is only needed if you want to interleave output to the selected output files (usually STDOUT, see "select" in perlfunc and to an XML file on another filehandle. If you want to redirect all output (yours and XML::Writer's) to the same file, just use Perl's built-in select(), since startTag and endTag emit to the currently selected filehandle by default. Like select, this returns the old value. setOutput( \*FH ) ; $writer->setOutput( \*FH ) ; Sets the filehandle an XML::ValidWriter sends output to. startTag( 'a', attr => val ) ; # use default XML::ValidWriter. xmlDecl ; xmlDecl( "UTF-8" ) ; xmlDecl( "UTF-8", "yes" ) ; $writer->xmlDecl( ... ) ; Emits an XML declaration. Must be called before any of the other output routines. If $encoding is not defined, it is not output. This is slightly different than XML::Writer, which outputs 'UTF-8' if you pass in undef, 0, or ''. If $encoding is '' or 0, then it is output as "" or "0" and a warning is generated. 
If $standalone is defined and is not 'no', 0, or '', it is output as 'yes'. If it is 'no', then it is output as 'no'. If it's 0 or '' it is not output. This function is called whenever a function or method is not found in XML::ValidWriter. If it was a method being called, and the desired method name is a start or end tag found in the DTD, then a method is cooked up on the fly. These methods are slower than normal methods, but they are cached so that they don't need to be recompiled. The speed penalty is probably not significant since they do I/O and are thus usually orders of magnitude slower than normal Perl methods. DESTROY is called when an XML::ValidWriter is cleaned up. This is used to automatically close all tags that remain open. This will not work if you have closed the output filehandle that the ValidWriter was using. This method will also warn if anything was emitted bit no root node was emitted. This warning can be silenced by calling $writer->reset() ; when you abandon output. Barrie Slaymaker <[email protected]> This module is Copyright 2000, 2005 Barrie Slaymaker. All rights reserved. This module is licensed under your choice of the Artistic, BSD or General Public License.
http://search.cpan.org/~rbs/XML-AutoWriter-0.39/lib/XML/ValidWriter.pm
CC-MAIN-2014-15
refinedweb
2,374
56.25
We can read data from an Excel sheet in Python with the help of the openpyxl library. Also, we have to add the statement import openpyxl in our code. To open an Excel workbook, we use the load_workbook method and pass the path of the Excel file as a parameter to this method. To identify the active sheet, we have to use the active attribute on the workbook object. To read a cell, the cell method is applied on the active sheet, and the row and column numbers are passed as parameters to this method. Then the value attribute is read on a particular cell to obtain the value within it.

Let us read the value at the third row and second column (which has the value D) of an Excel workbook named Data.xlsx, as shown below:

import openpyxl

#configure workbook path
b = openpyxl.load_workbook("C:\\Data.xlsx")

#get active sheet
sht = b.active

#get cell address within active sheet
cl = sht.cell(row=3, column=2)

#read value within cell
print("Reading value from row-3, col-2: ")
print(cl.value)
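Building on the same workbook, below is a short hedged sketch of reading more than one cell at a time; the sheet layout is assumed, and iter_rows with values_only needs openpyxl 2.6 or newer.

import openpyxl

b = openpyxl.load_workbook("C:\\Data.xlsx")
sht = b.active

#read every populated row as a tuple of plain values
for row in sht.iter_rows(values_only=True):
    print(row)

#or address a single cell by its coordinate string
print(sht["B3"].value)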
https://www.tutorialspoint.com/how-can-we-read-data-from-an-excel-sheet-in-selenium-webdriver
CC-MAIN-2021-17
refinedweb
165
72.76
How to use Auto Complete Source in Combo Box using Java Tutorial

This tutorial is all about How to use Auto Complete Source in Combo Box using Java. Adding an auto complete source will make your combo box more user friendly: it allows the user to search for an item inside the combo box values. For example, when you type the letter R, all the items that start with R are going to be displayed.

This tutorial uses the swingx-all-1.6.4.jar and swingx-all-1.6.3.jar package libraries, one combo box component and the NetBeans IDE. Please follow all the steps below to complete this tutorial.

How to use Auto Complete Source in Combo Box using Java Tutorial steps

Download the swingx-all-1.6.4.jar and swingx-all-1.6.3.jar packages from this website. When the download is done, add them to your project by right clicking the Libraries folder located in your project, then Add JAR/Folder, and browse to where your downloaded package is located; inside the lib folder add these two libraries.

The next step is to create your project by clicking File at the top and then New Project; you can name your project whatever you want.

Create your form by right clicking the Source Packages, selecting New JFrame Form, and then dragging a combo box onto your form.

After that you need to import this package above your class:

import org.jdesktop.swingx.autocomplete.AutoCompleteDecorator;

Then double click your button and copy-paste the code below. The code below is how you can populate your database values in the combo box:

try {
    String sql = "SELECT product_name FROM products";
    pst = conn.prepareStatement(sql);
    rs = pst.executeQuery();
    while (rs.next()) {
        productsCombo.addItem(rs.getString("product_name"));
    }
} catch (Exception e) {
    JOptionPane.showMessageDialog(null, e);
}

This line of code is what makes the auto complete source work:

AutoCompleteDecorator.decorate(productsCombo);
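Condensed into one self-contained class, the decoration step might look like the sketch below. The class name and the hard-coded items are placeholders; in the tutorial the items come from the products table instead.

import javax.swing.JComboBox;
import javax.swing.JOptionPane;
import org.jdesktop.swingx.autocomplete.AutoCompleteDecorator;

public class ComboDemo {
    public static void main(String[] args) {
        JComboBox<String> productsCombo = new JComboBox<>();
        //items would normally come from the database query shown above
        productsCombo.addItem("Keyboard");
        productsCombo.addItem("Monitor");
        productsCombo.addItem("Mouse");

        //this single call enables the auto complete behaviour
        AutoCompleteDecorator.decorate(productsCombo);

        JOptionPane.showMessageDialog(null, productsCombo);
    }
}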
https://itsourcecode.com/2017/06/auto-complete-source-combo-box/
CC-MAIN-2017-51
refinedweb
372
56.15
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Bruno Haible wrote: >Derek Robert Price wrote: > >>>In particular the Woe32 stdio in MSVCRT does not usually set errno upon >>>failure. I.e. it implements the C89 spec, not the POSIX spec. >> >>Now, that could cause problems. I'd have to implement the entire >>STDIO implementation using write and handling my own buffers to work >>around that. S'pose I could probably snag most of the implementation >>from glibc... > > >It may be easier to just use the Woe32 API instead of errno: > > #ifdef _WIN32 > FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, > NULL, GetLastError (), 0, buf, sizeof (buf), NULL)); > #else > perror (...); > #endif Ah, thanks. I didn't know that existed. I'm not usually much of a Windows programmer. Cheers, Derek - -- *8^) Email: address@hidden Get CVS support at <>! -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.3 (GNU/Linux) Comment: Using GnuPG with Mozilla - iD8DBQFBPbYJLD1OTBfyMaQRAvsgAKDadBHoSnlhoU4o6B81a0BbRTC5vACcCtU0 RPqme7pPzuqHGbyUEUlGKvk= =5qu4 -----END PGP SIGNATURE-----
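For context, the suggestion above might be wrapped into a small portable helper along these lines. This is only a sketch (the function name and buffer size are not from the thread), using the documented FormatMessageA call on the Win32 side and errno elsewhere.

#include <stdio.h>

#ifdef _WIN32
# include <windows.h>
#else
# include <errno.h>
# include <string.h>
#endif

/* Print "prefix: <text of the last error>" on either platform. */
static void
print_last_error (const char *prefix)
{
#ifdef _WIN32
  char buf[256];
  DWORD err = GetLastError ();
  if (!FormatMessageA (FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                       NULL, err, 0, buf, sizeof (buf), NULL))
    sprintf (buf, "error %lu", (unsigned long) err);
  fprintf (stderr, "%s: %s\n", prefix, buf);
#else
  fprintf (stderr, "%s: %s\n", prefix, strerror (errno));
#endif
}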
http://lists.gnu.org/archive/html/bug-cvs/2004-09/msg00062.html
crawl-003
refinedweb
151
59.5
I've tried to make a fighting game between 4 teams, I completed everything but there is more complicated issue when I start the match the AI didn't attack the closest enemy it fixed when I use (FindGameObjectsWithTag("tag")), but he dosen't when I use (FindGameObjectsWithTag(tag)) Because I don't want to use (" ") i want it to find the closest Enemy with any tag added to the list Example: the list has 3 tags (Team1,Team2,Team3) the fighter from Team1 fighting a fighter from Team2, now the Team2 fighter escaped and gone far away, and there is more closer enemy has Team3 tag, how can I make the Team1 fighter to stop chasing Team2 fighter who escaped, and start chasing the closer enemy Team3? I used this code but he didn't work using System.Collections; using System.Collections.Generic; using UnityEngine; public class script: MonoBehaviour { public List<string> tags; public GameObject TargetedEnemy; void Start () { } void Update () { foreach (var thetag in tags) { GameObject[] gos; gos = GameObject.FindGameObjectsWithTag (thetag); GameObject closest; float distance = Mathf.Infinity; Vector3 position = transform.position; foreach (GameObject go in gos) { Vector3 diff = go.transform.position - position; float curDistance = diff.sqrMagnitude; if (curDistance < distance) { closest = go; distance = curDistance; closest = TargetedEnemy; } } } } } Answer by KenjiKyo · Jul 14, 2017 at 04:11 AM Move GameObject closest; out of your for loop, you reinitialize it on each pass. GameObject closest; for Also, I think you meant to write TargetedEnemy = closest; instead to update your variable available for the Editor. TargetedEnemy = closest; I would also move the TargetedEnemy = closest; statement after the for loop so it actually only gets updated once with the closest found target after all possibilities have been processed. Getting the element that has the same element number on another list ? 2 Answers I have trouble understanding arrays and enums. When and how? 0 Answers only keep every 10th entry in an array? 2 Answers Remembering list of camera positions is not working 0 Answers Creating card game - copy of list to new list - creating cycle 1 Answer
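Putting the accepted answer's three fixes together (declare closest once before the loops, update distance and closest inside the check, and assign TargetedEnemy = closest after both loops), the Update method could look roughly like the sketch below. The class name is a placeholder and the code is untested.

using System.Collections.Generic;
using UnityEngine;

public class ClosestEnemyFinder : MonoBehaviour {
    public List<string> tags;
    public GameObject TargetedEnemy;

    void Update () {
        GameObject closest = null;          // declared once, outside the loops
        float distance = Mathf.Infinity;
        Vector3 position = transform.position;

        foreach (var thetag in tags) {
            foreach (GameObject go in GameObject.FindGameObjectsWithTag (thetag)) {
                float curDistance = (go.transform.position - position).sqrMagnitude;
                if (curDistance < distance) {
                    distance = curDistance;
                    closest = go;
                }
            }
        }

        TargetedEnemy = closest;            // assigned once, after the search
    }
}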
https://answers.unity.com/questions/1369812/find-the-closest-enemy-that-has-a-tag-added-in-a-l.html?sort=oldest
CC-MAIN-2020-05
refinedweb
342
51.07
John Simmons / outlaw programmer wrote:the code in that file merely kicks things off for a game. The report of my death was an exaggeration - Mark Twain Simply Elegant Designs JimmyRopes DesignsThink inside the box! ProActive Secure Systems I'm on-line therefore I am. JimmyRopes public class Naerling : Lazy<Person>{ public void DoWork(){ throw new NotImplementedException(); } } Dalek Dave wrote:I would put £50 on the Dalai Lama if I was a Tibetan man. delete this;
http://www.codeproject.com/Lounge.aspx?msg=4390117
CC-MAIN-2014-52
refinedweb
105
61.06
could someone please help me with a program? This is for a class i am taking in college and yes i did read all of the guidelines about not asking for help with homework and I normally wouldn't ask for help but i am totally stumped. This is driving me up the wall. I have to write a program that prints out the number of words in a text file. You know just do a Word Count. I am suppose to do it 2 different ways 1. Use a string object into which you input each word as a string. 2. Assume the string class (or include file) does not exist, and input the data one character at a time. This is what i have done so far: #include <iostream> #include <string> #include <fstream> using namespace std; void main () { ifstream infile; string word1; infile.open("textfile.txt"); if (' ' == character) wordCount++ } Here is part of a program that i did to count commas in something cin.get (inChar) while (cin) { cin.get(inChar) while (inCHar != '\n') if (inChar == ',') commaCount ++; I know that i am supposed to do something similar for the word count thing. I dont know. ANy help would be greatly appreciated. And yes i know i am a complete idiot with this stuff and that this should be really easy.
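For reference, a hedged sketch of both variants the assignment describes is shown below; the file name and variable names follow the post, the rest is illustrative.

#include <fstream>
#include <iostream>
#include <string>

int main ()
{
    // 1. Using a string object: operator>> skips whitespace,
    //    so every successful extraction is one word.
    std::ifstream infile("textfile.txt");
    std::string word1;
    int wordCount = 0;
    while (infile >> word1)
        ++wordCount;
    std::cout << "string version: " << wordCount << " words\n";

    // 2. One character at a time: count the transitions from
    //    whitespace into non-whitespace characters.
    std::ifstream infile2("textfile.txt");
    char inChar;
    int charWordCount = 0;
    bool inWord = false;
    while (infile2.get(inChar))
    {
        if (inChar == ' ' || inChar == '\t' || inChar == '\n')
            inWord = false;
        else if (!inWord)
        {
            inWord = true;
            ++charWordCount;
        }
    }
    std::cout << "character version: " << charWordCount << " words\n";
    return 0;
}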
http://cboard.cprogramming.com/cplusplus-programming/4671-please-help.html
CC-MAIN-2014-15
refinedweb
222
83.25
Convert numbers to words Gian Aznar Greenhorn Joined: Jan 01, 2013 Posts: 2 posted Jan 01, 2013 06:20:28 0 i cant get 11 - 19... its output will be TenOneEleven please look at the codes import javax.swing.*; public class Excercise7{ public static void main (String[] args) { int num,len,n1,n2,n3; String input,output=""; input = JOptionPane.showInputDialog( "Please enter a number between 1 to 100." ); num = Integer.parseInt(input); len = input.length(); // if (n1 == 1) if(len == 1){ n1 = Integer.parseInt(""+input.charAt(0)); if(n1 == 1){ output += "One"; } if(n1 == 2){ output += "Two"; } if(n1 == 3){ output += "Three"; } if(n1 == 4){ output += "Four"; } if(n1 == 5){ output += "Five"; } if(n1 == 6){ output += "Six"; } if(n1 == 7){ output += "Seven"; } if(n1 == 8){ output += "Eight"; } if(n1 == 9){ output += "Nine"; } } if(len == 2){ n1 = Integer.parseInt(""+input.charAt(0)); n2 = Integer.parseInt(""+input.charAt(1)); //if(n1 == 1) if(n1 == 1){ output += "Ten"; } if(n1 == 2){ output += "Twenty "; } if(n1 == 3){ output += "Thirty "; } if(n1 == 4){ output += "Fourty "; } if(n1 == 5){ output += "Fifty "; } if(n1 == 6){ output += "Sixty "; } if(n1 == 7){ output += "Seventy "; } if(n1 == 8){ output += "Eighty "; } if(n1 == 9){ output += "Ninety "; } //if(n2 == 0) if(n2 == 0){ output += ""; } if(n2 == 1){ output += "one"; } if(n2 == 2){ output += "two"; } if(n2 == 3){ output += "three"; } if(n2 == 4){ output += "four"; } if(n2 == 5){ output += "five"; } if(n2 == 6){ output += "six"; } if(n2 == 7){ output += "seven"; } if(n2 == 8){ output += "eight"; } if(n2 == 9){ output += "nine"; } // special condition if(n1 == 1 && n2 == 1){ output += "Eleven"; } if(n1 == 1 && n2 == 2){ output += "Twelve"; } if(n1 == 1 && n2 == 3){ output += "Thirteen"; } if(n1 == 1 && n2 == 4){ output += "Fourteen"; } if(n1 == 1 && n2 == 5){ output += "Fifteen"; } if(n1 == 1 && n2 == 6){ output += "Sixteen"; } if(n1 == 1 && n2 == 7){ output += "Seventeen"; } if(n1 == 1 && n2 == 8){ output += "Eighteen"; } if(n1 == 1 && n2 == 9){ output += "Nineteen"; } } if(len == 3){ n1 = Integer.parseInt(""+input.charAt(0)); n2 = Integer.parseInt(""+input.charAt(1)); n3 = Integer.parseInt(""+input.charAt(2)); if(n1 == 1) output += "One "; if(n2 == 0) output += "hundred"; if(n3 == 0) output += ""; } JOptionPane.showMessageDialog(null,output); } } Gian Aznar Greenhorn Joined: Jan 01, 2013 Posts: 2 posted Jan 01, 2013 06:21:55 0 please do reply fred rosenberger lowercase baba Bartender Joined: Oct 02, 2003 Posts: 11952 20 I like... posted Jan 01, 2013 06:43:04 0 1. Please don't hijack someone else's post. We have plenty of room around here, so just create your own thread. I have split your two posts into a new thread. 2. Take a look at our HowToAskQuestionsOnJavaRanch FAQ. There are a LOT of tips on there on how to make posts here that give you the best chance of being answered. 3. I am about to send you a private message. Please check there in a minute or two. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors Steve Luke Bartender Joined: Jan 28, 2003 Posts: 4181 21 I like... posted Jan 01, 2013 08:11:27 0 Also, please remember to UseCodeTags (<-click) next time you post code. I changed your post to include them this time. As for your problem: The issue is one of logic. You have identified some special cases, when the length of the input is > 1 and the tens digit is equal to 1. But you do not prevent normal output from happening in those special cases. 
Your option is to do a nested series of ifs like this (pseudocode):

if (number of digits is 1) {
    parse the first digit normally
} else {
    if (tens digit is 1) {
        do the special case formatting for ten through nineteen
    } else {
        do the normal case formatting for tens then ones digit
    }
}

This is pseudocode, and it doesn't handle the 100s digit, but you can add that yourself. Another option, which would require less change in your code, is when you reach the special case code don't add to the already present output string; instead, simply replace it. There are a number of other optimizations that can be made, but since this is a beginners programming exercise I won't go into them. Steve
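Translated into compilable form, Steve's branch structure might look like the hedged sketch below (arrays instead of long if-chains, but the same special-casing of ten through nineteen; only valid for 1 to 100).

public class NumberToWords {
    private static final String[] ONES = { "", "One", "Two", "Three", "Four",
        "Five", "Six", "Seven", "Eight", "Nine", "Ten", "Eleven", "Twelve",
        "Thirteen", "Fourteen", "Fifteen", "Sixteen", "Seventeen",
        "Eighteen", "Nineteen" };
    private static final String[] TENS = { "", "", "Twenty", "Thirty", "Forty",
        "Fifty", "Sixty", "Seventy", "Eighty", "Ninety" };

    static String toWords(int num) {
        if (num == 100) return "One hundred";
        if (num < 20)   return ONES[num];        // covers the 10-19 special cases
        String output = TENS[num / 10];
        if (num % 10 != 0) output += " " + ONES[num % 10].toLowerCase();
        return output;
    }

    public static void main(String[] args) {
        for (int i : new int[] { 7, 11, 15, 20, 42, 100 })
            System.out.println(i + " -> " + toWords(i));
    }
}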
http://www.coderanch.com/t/601452/java/java/Convert-numbers-words
CC-MAIN-2015-48
refinedweb
770
68.4
Context: I’m trying to figure out how I can create organic-ish ice crystals (if you’ve seen How to Train Your Dragon 2, the Bewilderbeast’s blast is what I’m trying to more-or-less replicate) in realtime. Here’s my script: import bge, mathutils.noise as noise NBasis = noise.types.VORONOI_F2F1 #In my testing, the "Voronoi F2-F1" type was the best for spikey crystals. # Available Noise types are: # BLENDER # CELLNOISE (Great for geometric, cubical crystals, such as Cobalt) # NEWPERLIN # STDPERLIN # VORONOI_CRACKLE # VORONOI_F2F1 # VORONOI_F1 # VORONOI_F2 # VORONOI_F3 # VORONOI_F4 cont = bge.logic.getCurrentController() own = cont.owner for msh in own.meshes: #Copied (with slight adjustments) directly from the API Reference for m_index in range(len(msh.materials)): for v_index in range(msh.getVertexArrayLength(m_index)): vtx = msh.getVertex(m_index, v_index) #Todo: consider angle-of-incidence and (maybe) speed-of-incidence (and maybe normal angle? Might not be required; not sure); use this info to add X/Y displacement to the Normal Z-displacement for more realism # For now, just do Z-Disp. nrml = vtx.getNormal() pos = vtx.getXYZ() #Now, do the magic! In theory, we now just add the Noise value to the Normal Z vector, apply the adjusted vector to the mesh, and it works. ...Yeah right. val = noise.noise(pos, NBasis) #Get the noise value at the vertex val *= 3; val = abs(val) #Process the value; this will need to do more in the future nrml.z += val #pos.z += val #vtx.z += val #Apply the value vtx.setNormal(nrml) #vtx.setXYZ(pos) My concept is pretty simple: Use mathutils.noise.noise() to displace each vertex in a source mesh along Normal Z (basically, a realtime Displace modifier). What I currently have kinda works, but it has a couple of problems: Firstly, in order to get more random, organic-looking crystals, I need to use each vertex’s Global position, and KX_VertexProxy.getXYZ() and KX_VertexProxy.XYZ are both the vert’s Local position. So the first question is, how can I get the World position of each KX_VertexProxy? I read a couple of threads on this already, but they were from 8-9 years ago, and I really didn’t understand them. Secondly, I actually have no clue how to displace a vertex along its Normal. My script (as you could probably tell) only displaces them along Object Z, which isn’t what I want. So the second question is, how do I do that? Thanks in advance for any help you can give me!
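Not a complete answer, but a hedged sketch of the two pieces being asked about. It follows the 2.7x BGE API as I understand it and is untested, so the attribute names (worldTransform in particular) are worth double-checking against the docs for your Blender version.

import bge
from mathutils import Vector
import mathutils.noise as noise

cont = bge.logic.getCurrentController()
own = cont.owner

for msh in own.meshes:
    for m_index in range(len(msh.materials)):
        for v_index in range(msh.getVertexArrayLength(m_index)):
            vtx = msh.getVertex(m_index, v_index)

            # Question 1: world position of the vertex. worldTransform is the
            # object's 4x4 matrix, so multiplying it by the local position
            # should give global coordinates for the noise lookup.
            world_pos = own.worldTransform * Vector(vtx.getXYZ())

            val = abs(noise.noise(world_pos, noise.types.VORONOI_F2F1)) * 3.0

            # Question 2: displace along the vertex normal by moving the
            # vertex *position* in the normal direction (rather than editing
            # the normal itself).
            new_pos = Vector(vtx.getXYZ()) + Vector(vtx.getNormal()) * val
            vtx.setXYZ(new_pos)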
https://blenderartists.org/t/a-couple-of-questions-concerning-kx-vetexproxy/1302948
CC-MAIN-2021-25
refinedweb
418
67.55
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). On 21/04/2013 at 16:20, xxxxxxxx wrote: I try to set User Data by python longUD = c4d.GetCustomDataTypeDefault(c4d.DTYPE_LONG) longUD[c4d.DESC_NAME] = "Long" add13 = obj.AddUserData(longUD) is it possible to set the type of long or the min and max values ? and is it possible to get this values by code ? thanks On 21/04/2013 at 18:53, xxxxxxxx wrote: yes it is. check lib_descriptions.h. //-------------------- DESC_NAME = 1, // name for parameter standalone use DESC_SHORT_NAME = 2, // short name (only for attribute dialog) DESC_VERSION = 3, // LONG: bitmask of the following values DESC_VERSION_xxx DESC_VERSION_DEMO = (1<<0), DESC_VERSION_XL = (1<<1), DESC_VERSION_ALL = DESC_VERSION_DEMO|DESC_VERSION_XL, DESC_CHILDREN = 4, // BaseContainer DESC_MIN = 5, // LONG/Real/Vector minimum INcluded DESC_MAX = 6, // LONG/Real/Vector maximum INcluded DESC_MINEX = 7, // Bool: TRUE == minimum EXcluded DESC_MAXEX = 8, // Bool: TRUE == maximum EXcluded DESC_STEP = 9, // LONG/Real/Vector DESC_ANIMATE = 10, // LONG DESC_ANIMATE_OFF = 0, DESC_ANIMATE_ON = 1, DESC_ANIMATE_MIX = 2, DESC_ASKOBJECT = 11, // Bool: TRUE - ask object for this parameter, FALSE - look inside container DESC_UNIT = 12, // LONG: one of the following values DESC_UNIT_xxx for DTYPE_REAL/DTYPE_VECTOR DESC_UNIT_REAL = 'frea', //FORMAT_REAL, DESC_UNIT_LONG = 'flng', //FORMAT_LONG, DESC_UNIT_PERCENT = 'fpct', //FORMAT_PERCENT, DESC_UNIT_DEGREE = 'fdgr', //FORMAT_DEGREE, DESC_UNIT_METER = 'fmet', //FORMAT_METER, DESC_UNIT_TIME = 'ffrm', //FORMAT_FRAMES, DESC_PARENTGROUP = 13, // LONG/DescID: parent id DESC_CYCLE = 14, // Container: members of cycle DESC_HIDE = 15, // Bool: indicates whether the property is hidden or not DESC_DEFAULT = 16, // default value for LONG/Real/Vector: DESC_ACCEPT = 17, // ACCEPT: for InstanceOf-Check() DESC_SEPARATORLINE = 18, DESC_REFUSE = 19, // REFUSE: for InstanceOf-Check() DESC_PARENTID = 20, // for indent and anim track can append the parent-name DESC_CUSTOMGUI = 21, // customgui for this property #define CUSTOMGUI_REAL DTYPE_REAL #define CUSTOMGUI_REALSLIDER 1000489 #define CUSTOMGUI_REALSLIDERONLY 200000006 #define CUSTOMGUI_VECTOR DTYPE_VECTOR #define CUSTOMGUI_STRING DTYPE_STRING #define CUSTOMGUI_STRINGMULTI 200000007 #define CUSTOMGUI_STATICTEXT DTYPE_STATICTEXT #define CUSTOMGUI_CYCLE 200000180 #define CUSTOMGUI_CYCLEBUTTON 200000255 #define CUSTOMGUI_LONG DTYPE_LONG #define CUSTOMGUI_LONGSLIDER 1000490 #define CUSTOMGUI_BOOL DTYPE_BOOL #define CUSTOMGUI_TIME DTYPE_TIME #define CUSTOMGUI_COLOR 1000492 #define CUSTOMGUI_MATRIX DTYPE_MATRIX #define CUSTOMGUI_BUTTON DTYPE_BUTTON #define CUSTOMGUI_POPUP DTYPE_POPUP #define CUSTOMGUI_SEPARATOR DTYPE_SEPARATOR #define CUSTOMGUI_SUBDESCRIPTION 0 #define CUSTOMGUI_PROGRESSBAR 200000265 DESC_COLUMNS = 22, // DTYPE_GROUP: number of columns DESC_LAYOUTGROUP = 23, // Bool: only for layout in columns, in layout groups are only groups allowed! 
DESC_REMOVEABLE = 24, // Bool: TRUE allows to remove this entry DESC_GUIOPEN = 25, // Bool: default open DESC_EDITABLE = 26, // Bool: TRUE allows to edit this entry DESC_MINSLIDER = 27, // LONG/Real/Vector minimum INcluded DESC_MAXSLIDER = 28, // LONG/Real/Vector maximum INcluded DESC_GROUPSCALEV = 29, // Bool: allow to scale group height DESC_SCALEH = 30, // Bool: scale element horizontal DESC_LAYOUTVERSION = 31, // LONG: layout version DESC_ALIGNLEFT = 32, // Bool: align element left DESC_FITH = 33, // Bool: fit element DESC_NEWLINE = 34, // Bool: line break DESC_TITLEBAR = 35, // Bool: main group title bar DESC_CYCLEICONS = 36, // Container: LONG icon ids for cycle DESC_CYCLESYMBOLS = 37, // Container: String identifiers for help symbol export DESC_PARENT_COLLAPSE = 38, // parent collapse id DESC_FORBID_INLINE_FOLDING = 39, // Bool: instruct AM not to allow expanding inline objects for this property DESC_FORBID_SCALING = 40, // Bool: prevent auto scaling of the parameter with the scale tool (for DESC_UNIT_METER) DESC_ANGULAR_XYZ = 41, // Bool: angular representation as XYZ vs. HPB On 25/04/2013 at 06:19, xxxxxxxx wrote: hey ferdinand, thanks for that - it was really helpful now I stuck on the problem to get the value back - i like to readout the userdata and translate them for a gui - i get all max and min and name but how can I get the Value so I need the ID - hope you understand my problem and may can help me - thanks for id, bc in op.GetUserDataContainer() : print id, bc print "name:" print bc[c4d.DESC_NAME] print "unit:" print bc[c4d.DESC_UNIT] print "gui:" print bc[c4d.DESC_CUSTOMGUI] On 25/04/2013 at 07:49, xxxxxxxx wrote: you are using GetUserDataContainer wrong. It returns a list of the user data containers and obviously not a specific container as you do not pass an id. so it returns a list of BaseContainer for which each stands for a user data element. for id, bc in op.GetUserDataContainer() : if isinstance(bc, c4d.BaseContainer) : for id, data in bc: print id, data On 25/04/2013 at 09:09, xxxxxxxx wrote: Just as a side-note: GetUserDataContainer() garuantees to return a list of (DescID, BaseContainer), so you don't need the isinstance(bc, c4d.BaseContainer) part. Instead, you might want to check for a specific user-data ID. for id_, bc in op.GetUserDataContainer() : rid = id_[id_.GetDepth() - 1].id if rid == SOME_USERDATA_ID: print bc[c4d.DESC_NAME] Best, -Niklas On 25/04/2013 at 09:35, xxxxxxxx wrote: you are right nikklas, it is just a habit of precaution i got used to before trying to invoke any methods. i generally am too lazy or bad to check if my code is efficient. more a python user here rather than a real programmer my second posting sounds a bit grumpy when i am reading it now again, which was actually not intentional. On 26/04/2013 at 02:31, xxxxxxxx wrote: Niklas and Ferdinand, thanks for the detailed replies, Im a bit confused - the Id i get back is a Integer - but with that i dont get any data or value m a bit confused - the Id i get back is a Integer - but with that i don for id_, bc in op.GetUserDataContainer() : print "id:" rig = id_[id_.GetDepth() - 1].id print bc.GetCustomDataType(rig) On 26/04/2013 at 02:32, xxxxxxxx wrote: Hi connor, Probably because the attribute is not a custom data type..? Have you tried using __getitem__() ? 
print bc[rig] On 26/04/2013 at 04:16, xxxxxxxx wrote: i try : for id_, bc in op.GetUserDataContainer() : rig = id_[id_.GetDepth() - 1].id #print bc.GetCustomDataType(rig) print bc.__getitem__(rig) print bc[rig] print "name:" print bc[c4d.DESC_NAME] print "unit:" print bc[c4d.DESC_UNIT] ig get for both None Name and Unit are correct i try to grap user data to and merge it to a gui interface On 26/04/2013 at 04:32, xxxxxxxx wrote: Lol, sorry I messed up that bc is the item description container, not the host's container. So, what are you actually trying to do? Do you want to obtainthe parameter-name, min/max, etc. or do you want to get the actual parameters value? for id_, bc in op.GetUserDataContainer() : print bc[c4d.DESC_NAME] print op[id_] # Same as op[c4d.ID_USERDATA, id_[id_.GetDepth() - 1]] On 26/04/2013 at 07:37, xxxxxxxx wrote: hey niklas, both of them i need actual values and mast of the descriptions like max or min to rebuild the interface in a gui On 26/04/2013 at 08:37, xxxxxxxx wrote: you are aware that there are no dynamic descriptions in python and rebuilding a userdata description container with a dialog resource can also become quite tricky, as dialog resources and descriptions are not the same, although both are resource based. On 26/04/2013 at 09:28, xxxxxxxx wrote: First the solution to your request. Here is some code that will change the Min attribute on a Real type UD entry: import c4d def main() : obj = doc.GetFirstObject() #The object hosting the UD ud = obj.GetUserDataContainer() #The master UD container for i, bc in ud: level = i[1].id #Get the ID# for each UD entry if level == 1: #If we find the first UD entry bc[5]= 0.2 #Set the Min value to 20% in memory only!!! obj.SetUserDataContainer(i, bc) #Set the container changes from memory c4d.EventAdd() if __name__=='__main__': main() How did I come up with this code? I had to learn about how the UD system works. It's a rather convoluted system of a master base container. With sub containers. Which hold values consisting of Reals, Longs, Vectors, and Tuples. Then I mapped it out and made a note for myself to use as a reference guide on how to get at a specific UD attribute. This is one of several notes I've written to myself about How the UD system works. And how to get at the specific built-in attributes inside of them: #This is how to get at the options in a specific UD entry #The options are different depending on what kind of UD item it is #These options are descriptions found under "cid DESC_Items" in the sdk (lib_description.h) import c4d def main() : obj = doc.GetFirstObject() #The object hosting the UD ud = obj.GetUserDataContainer() #The master UD container first = ud[0] #The first UD entry (by it's stack position...not it's ID#) #first is now a variable that holds a tuple of two base containers #first[0] holds the level ID#, and two DescLevel objects #first[1] holds the various options for the UD entry. Changed by accessing their descriptions #Here is an example of accessing those options: container = first[1] #The second tuple container of the UD entry. 
Which holds most of the attributes for i in container: print i #Prints the container ID#s, values #Examples of what some of these options are for the default UD type entry: level = first[0][1].id #The level ID# Lname = first[1][1] #The long name Sname = first[1][2] #The short name three = first[1][3] # 3 for certain UD types minValue = first[1][5] #Min value maxValue = first[1][6] #Max value step = first[1][9] #Step Value anim = first[1][10] #Animatable option meters = first[1][12] #units type tupleData =first[1][13] # (700, 5, 0) interface =first[1][21] #interface type (float=19, #float slider=1000489, #float slider noedit = 200000006, #lat/lon = 1011148) minslider =first[1][27] #The min slider value (if enabled) if __name__=='__main__': main() I hope that help demystify UD a little bit. IMHO. The UD system is rather convoluted. Which I why I wrote myself lots of helper notes & guides like this one about it. -ScottA On 07/06/2013 at 16:23, xxxxxxxx wrote: hey thanks for the many replies - i try to use DESC_PARENTGROUP to create own groups in the userdata - without success - has one of you a idea or an example how create own userdata groups Many Thanks On 07/06/2013 at 17:12, xxxxxxxx wrote: Yeah. I have an example of using UD groups. This script creates a python tag on the selected object. And then adds a UD group to it. Then it adds a couple of things into that group. import c4d def main() : obj = doc.GetActiveObject() pyTag = c4d.BaseTag(1022749) #Create a python tag to host the User Data pyTag] = False childGroup[c4d.DESC_PARENTGROUP] = pyTag.GetUserDataContainer()[0][0] pyTag] = pyTag.GetUserDataContainer()[1][0] pyTag[pyTag] = pyTag.GetUserDataContainer()[1][0] pyTag[pyTag.AddUserData(childGrp_Item2)] = 55.6 obj.InsertTag(pyTag) c4d.EventAdd() if __name__=='__main__': main() On 13/06/2013 at 07:46, xxxxxxxx wrote: thanks scottA , is it possible to get out of the default user data group to create tab-items Thanks On 13/06/2013 at 10:02, xxxxxxxx wrote: You lost me on that one. UserData is created inside the UserData tab. If you're you asking me if we can create more than one User Data tab. And name them with our own custom names. My answer is I don't know. I've never tried it. But that's a good question.
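Pulling the pieces of this thread together, a hedged sketch that walks the user data and prints both some description fields and the stored value of each entry could look like this (built from Niklas' and ScottA's snippets above; not tested against a current SDK):

import c4d

def main():
    obj = doc.GetActiveObject()          # the object hosting the user data
    if obj is None:
        return

    for descid, bc in obj.GetUserDataContainer():
        level = descid[descid.GetDepth() - 1].id    # the entry's own ID number

        name      = bc[c4d.DESC_NAME]
        min_value = bc[c4d.DESC_MIN]     # None for types without a minimum
        max_value = bc[c4d.DESC_MAX]
        gui       = bc[c4d.DESC_CUSTOMGUI]

        value = obj[c4d.ID_USERDATA, level]         # the actual stored value

        print(level, name, min_value, max_value, gui, value)

if __name__=='__main__':
    main()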
https://plugincafe.maxon.net/topic/7119/8084_add-userdata-with-details
CC-MAIN-2021-39
refinedweb
1,895
51.89
in reply to Re^5: Writing a better Modern::Perl in thread Writing a better Modern::Perl Lets ignore Moose for the time being... Why do you chose not to use autodie, or indirect, or namespace::autoclean, or even mro "c3" is it because you don't understand these modules, or that you prefer the behavior without them, or that you just don't care (and should therefore care little if the behavior is changed)? What is "diverse code"? Different code has different requirements. Where I may use autodie for a run-once program, I'm less likely to have a need for autodie for a module that sits between a webapp and a database. Whatever you come up with nextgen (or anyone else with a similar module), even if it's ideal for some of the code I write, it cannot be ideal for the majority of the code I write. As I said before, the only modules/pragmas that are used in most of my modules are 'use strict; use warnings; use 5.XXX;'. There's nothing else that I use often enough that I want it loaded by default. There's nothing else that I use often enough that I want it loaded by default. I agree, and that's why I've been reluctant to add anything but autodie to Modern::Perl. My current approach is "it enables features of the core that should be on by default and nothing
http://www.perlmonks.org/?node_id=864065
CC-MAIN-2013-48
refinedweb
244
67.38
The STL+ C++ library Andy Rushton The STLplus library uses its own TextIO subsystem as it's preferred I/O system. However, the standard C++ I/O system is IOstream. The reasons for replacing IOstream throughout the STLplus are given in the documentation for TextIO. However, there will be times when the STLplus is used in an application that does use IOstream. This package provides a pair of TextIO wrapper classes that convert IOstream device classes to TextIO device classes. For example, consider the error handler subsystem. It takes a TextIO Output device as an argument: class error_handler { public: error_handler(otext& device,unsigned limit = 0,bool show = true) ... }; The otext object required by the constructor is the superclass for all output devices in the TextIO subsystem. What if the user of the error handler wants to use an IOstream device with the error handler? The answer is to put a TextIO wrapper around the IOstream device using the classes defined in iostreamio.hpp. For example: #include <fstream> #include "iostreamio.hpp" using namespace std; ... // create and open the IOstream device ofstream output_stream("errors.log", ios::binary); // create and initialise the TextIO wrapper device oiotext output(output_stream); // now initialise the error handler error_handler errors(output); The oiotext object is a subclass of otext. Any text output to this device will be routed to the underlying ofstream class (a subclass of ostream). The TextIO device stores a reference to the IOstream device. Therefore, as usual in C++, the IOstream device must remain in scope throughout the lifetime of the TextIO device. The above example achieves this by declaring both objects in the same scope. Furthermore, closing the TextIO device does not close the IOstream device, it simply disconnects it. Note that the iostream device is opened in binary mode. Unfortunately this has to be your responsibility - it doesn't seem to be possible to change the mode of a file after opening it, so it wasn't possible to set it to binary mode within the TextIO device's constructor. The reason for setting the iostream device to binary mode is so that the TextIO device can take responsibility for line-end handling. For example, if the TextIO device is in DOS line-end mode, then all newlines will be passed to the iostream device as cr/lf pairs. Since the iostream is in binary mode, it will be written to the file in that format. If both TextIO and iostream devices are in text mode, the two subsytems can end up creating a mess (such as double line-ends). You could if you prefer put the iostream device in text mode and the TextIO device into binary mode instead. The interface to the output device class is: class oiotext : public otext { public: oiotext(std::ostream&); void open(std::ostream&); std::ostream& get_stream(void); const std::ostream& get_stream(void) const; }; The name of the class is constructed as follows: oiotext = (o)utput (io)stream (text)io device This follows the normal convention for TextIO devices: an (o) for output, followed by one or more characters representing the subclass and then the word (text) which is common to all TextIO device classes. The constructor and the open method do the same thing - they associate the ostream (or a subclass of ostream) with the oiotext device. Any subclass of ostream can be used, not just file objects of class ofstream. The get_stream methods allow the attached stream to be accessed directly. This will fail catastrophically if there is no device attached (i.e. 
if the device has not been opened or if it has been closed).

Once an ostream has been attached to the oiotext device, it can be used like any other otext device. For example:

    // create and open the IOstream device
    ofstream output_stream("output.txt", ios::binary);
    // create and initialise the TextIO wrapper device
    oiotext output(output_stream);
    ...
    // now use the device
    output << "Hello World!" << endl;

This will redirect the text "Hello World!" and a newline to the underlying ostream object (in the example it is called output_stream). This in turn prints the text to the file output.txt.

The interface to the input device class is:

    class iiotext : public itext
    {
    public:
      iiotext(std::istream&);
      void open(std::istream&);
      std::istream& get_stream(void);
      const std::istream& get_stream(void) const;
    };

The name of the class is constructed as follows:

    iiotext = (i)nput (io)stream (text)io device

This again follows the normal convention for TextIO devices.

The constructor and the open method associate the istream (or a subclass of istream) with the iiotext device. Again, any subclass of istream can be used. The get_stream methods allow the attached stream to be accessed directly. They will fail catastrophically if there is no device attached.

Once an istream has been attached to the iiotext device, it can be used like any other itext device. For example:

    // create and open the IOstream device
    ifstream input_stream("data.txt", ios::binary);
    // create and initialise the TextIO wrapper device
    iiotext input(input_stream);
    // now read a series of floats, one per line of the input file
    while(input)
    {
      float data = 0.0;
      input >> data >> skipendl;
      ...
    }

This example reads the data file data.txt, which consists of one floating-point value per line. Note that, although the file is opened as an IOstream device (specifically, an ifstream), it is read using TextIO functions (for example, the skipendl manipulator).
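Putting the two wrappers together - this is an editorial sketch rather than part of the original page; it assumes the STLplus TextIO headers are on the include path, that floats can be inserted into an otext device just as they are extracted from an itext device, and the results.txt file name is invented - a small program copying the floats from data.txt to a second file looks roughly like this:

    #include <fstream>
    #include "iostreamio.hpp"
    using namespace std;

    int main(void)
    {
      // IOstream devices, opened in binary mode as recommended above
      ifstream input_stream("data.txt", ios::binary);
      ofstream output_stream("results.txt", ios::binary);

      // TextIO wrappers around the IOstream devices
      iiotext input(input_stream);
      oiotext output(output_stream);

      // copy one float per line, using the loop idiom from the example above
      while(input)
      {
        float data = 0.0;
        input >> data >> skipendl;
        output << data << endl;
      }
      return 0;
    }

Both underlying file streams stay in scope for the lifetime of their wrappers, which is the main lifetime rule the documentation insists on.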
http://stlplus.sourceforge.net/stlplus/docs/iostreamio.html
crawl-001
refinedweb
895
63.19
FieldFlags Enumeration

[namespace: Serenity.Data.Mapping] - [assembly: Serenity.Data]

Serenity has a set of field flags that control field behavior.

    public enum FieldFlags
    {
        None = 0,
        Insertable = 1,
        Updatable = 2,
        NotNull = 4,
        PrimaryKey = 8,
        AutoIncrement = 16,
        Foreign = 32,
        Calculated = 64,
        Reflective = 128,
        NotMapped = 256,
        Trim = 512,
        TrimToEmpty = 512 + 1024,
        DenyFiltering = 2048,
        Unique = 4096,
        Default = Insertable | Updatable | Trim,
        Required = Default | NotNull,
        Identity = PrimaryKey | AutoIncrement | NotNull
    }

An ordinary table field has the Insertable, Updatable and Trim flags set by default, which corresponds to the Default combination flag.

Insertable Flag

The Insertable flag controls whether the field is editable in new record mode. By default, all ordinary fields are considered to be insertable.

Some fields might not be insertable in the database table, e.g. identity columns shouldn't have this flag set. When a field doesn't have this flag, it won't be editable in forms in new record mode. This is also validated in services at the repository level.

Sometimes there might be internal fields that are perfectly valid in SQL INSERT statements, but shouldn't be edited in forms. One example might be an InsertedByUserId field, which should be set at the service level, not by the end user. If we let the end user edit it in forms, this would be a security hole. Such fields shouldn't have the Insertable flag set either. This means field flags don't have to match database table settings.

Insertable Attribute

To turn off the Insertable flag for a field, put an [Insertable(false)] attribute on it:

    [Insertable(false)]
    public string MyField
    {
        get { return Fields.MyField[this]; }
        set { Fields.MyField[this] = value; }
    }

Use Insertable(true) to turn it on.

Non-insertable fields are not hidden. They are just readonly. If you want to hide them, use the [HideOnInsert] attribute (Serenity 1.9.8+) or write something like form.MyField.GetGridField().Toggle(IsNew) by overriding the UpdateInterface method of your dialog.

Updatable Flag

This flag is just like the Insertable flag, but controls edit record mode in forms and update operations in services. By default, all ordinary fields are considered to be updatable.

Updatable Attribute

To turn off the Updatable flag for a field, put an [Updatable(false)] attribute on it:

    [Updatable(false)]
    public string MyField
    {
        get { return Fields.MyField[this]; }
        set { Fields.MyField[this] = value; }
    }

Use Updatable(true) to turn it on.

Non-updatable fields are not hidden in dialogs. They are just readonly. If you want to hide them, use the [HideOnUpdate] attribute (Serenity 1.9.8+) or write something like form.MyField.GetGridField().Toggle(!IsNew) by overriding the UpdateInterface method of your dialog.

Trim Flag

This flag is only meaningful for string typed fields and controls whether their value should be trimmed before save. All string fields have this flag on by default. When a field value is an empty string or whitespace only, it is trimmed to null.

TrimToEmpty Flag

Use this flag if you prefer to trim string fields to an empty string instead of null. When a field value is null or whitespace only, it is trimmed to an empty string.

SetFieldFlags Attribute

This attribute can be used on fields to include or exclude a set of flags. It takes a first required parameter to include flags, and a second optional parameter to exclude flags.
To turn on the TrimToEmpty flag on a field, we use it like this:

    [SetFieldFlags(FieldFlags.TrimToEmpty)]
    public string MyField
    {
        get { return Fields.MyField[this]; }
        set { Fields.MyField[this] = value; }
    }

To turn off the Trim flag:

    [SetFieldFlags(FieldFlags.None, FieldFlags.TrimToEmpty)]
    public string MyField
    {
        get { return Fields.MyField[this]; }
        set { Fields.MyField[this] = value; }
    }

To include TrimToEmpty and Updatable but remove Insertable:

    [SetFieldFlags(
        FieldFlags.Updatable | FieldFlags.TrimToEmpty,
        FieldFlags.Insertable)]
    public string MyField
    {
        get { return Fields.MyField[this]; }
        set { Fields.MyField[this] = value; }
    }

The Insertable and Updatable attributes are subclasses of the SetFieldFlags attribute.

NotNull Flag

Use this flag to set fields as not nullable. By default, this flag is set for fields that are not nullable in the database, using the NotNull attribute. When a field is not nullable, its corresponding label in forms has a red asterisk and it is required to be entered.

NotNullable Attribute

This sets the NotNull flag on a field to ON. Remove the attribute to turn it off. You may also use [Required(false)] to make a field not required in forms, even if it is not nullable in the database. This doesn't clear the NotNull flag.

Required Flag

This is a combination of the Default and NotNull flags. It has no relation to the [Required] attribute, which controls validation in forms.

PrimaryKey Flag and PrimaryKey Attribute

Set this for primary key fields in the table. Primary key fields are selected in the Key column selection mode of List and Retrieve request handlers. The [PrimaryKey] attribute sets this flag ON.

AutoIncrement Flag and AutoIncrement Attribute

Set this for fields that are auto-incremented on the server side, e.g. identity columns, or columns using a generator.

Identity Flag and Identity Attribute

This is a combination of the PrimaryKey, AutoIncrement and NotNull flags, which is common for identity columns.

Foreign Flag

This flag is set for foreign view fields that originate from other tables through a join. It is automatically set for fields with expressions containing table aliases other than T0. For example, if a field has an attribute like [Expression("jCountry.CountryName")] it will have this flag. This has no relation to the ForeignKey attribute.

Calculated Flag

If a field has an expression involving more than one field or some mathematical operations, it will have this flag. This could also be set for fields that are calculated on the SQL server side.

NotMapped Flag and NotMapped Attribute

Corresponds to an unmapped field in Serenity entities. These fields don't have a corresponding field in the database table. They can be used for temporary calculation, storage and transfer on the client and service layers.

Reflective Flag

This is used for an advanced form of unmapped fields that don't have storage of their own in the row, but reflect the value of another field in a different form. For example, a field that displays the absolute value of an integer field that can be negative. This should only be used in rare cases for such unmapped fields.

DenyFiltering Flag

If set, denies filtering operations on a sensitive field. This can be useful for secret fields like PasswordHash that shouldn't be allowed to be selected or filtered by the client side.

Unique Flag and Unique Attribute

When a field has this flag, its value is checked against existing values in the database to be unique.
You can turn on this flag with the Unique attribute and determine whether this constraint should be checked at the service level (before the database-level check, to avoid cryptic constraint errors).
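As an illustration of how the flags above are typically combined - a hypothetical row property, not taken from the guide; the field and helper names are made up - an audit field that only the service layer should write might be declared like this:

    // Hypothetical audit field: valid in SQL INSERT/UPDATE statements issued by the
    // service layer, but not editable by end users in either insert or update forms.
    [Insertable(false), Updatable(false)]
    public Int32? InsertedByUserId
    {
        get { return Fields.InsertedByUserId[this]; }
        set { Fields.InsertedByUserId[this] = value; }
    }

The field keeps the rest of the Default flags (Trim doesn't apply to an integer), while the two attributes clear Insertable and Updatable so forms treat it as readonly.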
https://volkanceylan.gitbooks.io/serenity-guide/entities/fieldflags_enumeration.html
CC-MAIN-2019-18
refinedweb
1,103
58.48
Is there a way to let PhpStorm include the needed namespace in the header automatically?

I've seen this in Eclipse and would love to do this with PhpStorm too. E.g. if I write this code:

    $foo = new Ragtek\Blog\Entity\Baz();

it changes the code automatically to:

    use Ragtek\Blog\Entity;
    ....
    $foo = new Baz();

Hi there,

Please try Settings | Editor | Auto Import | Automatically add 'use' statements.

Was this removed in PhpStorm PS-124.373? I can't find this setting anymore.

EDIT: the code inspector showed "unnecessary fully qualified name" and then I was able to use "import class" (it sounds strange^^)
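For reference - an editorial sketch, not part of the thread - the file the auto-import setting (or the "import class" quick-fix) produces would look roughly like this, with the class itself imported rather than its parent namespace:

    <?php
    // roughly what "Automatically add 'use' statements" / "import class" leaves behind
    use Ragtek\Blog\Entity\Baz;

    $foo = new Baz();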
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206365939-create-automatic-use-namespace-
CC-MAIN-2019-26
refinedweb
120
64
Python - instantiating classes from other files

Hey guys, I just started learning Python (I usually use Java/C). This has got me stumped as it's not mentioned in the documentation (unless I'm skimming it every time). How does one instantiate a class from another file? I thought it would be

    import file.py
    thisFile = ClassName()

but it's not, and it's not

    thisFile = filename.ClassName()

I'm stuck. Cheers, dave

Suppose you have a file called foo.py with class Foo in it (I'm not sure if file is a good name for your module, as it clashes with the built-in function file):

    class Foo:
        def __init__(self, x):
            self.x = x
            print 'Foo constructed with x =', x

Then, in another module, you can write:

    import foo
    my_object = foo.Foo(42)

This way you'll have to prefix all symbols from the foo.py module with "foo." as above. Another option is to import a single symbol:

    from foo import Foo
    my_object = Foo(42)

or to import everything with a wildcard:

    from foo import *
    my_object = Foo(42)

Hope this helps. (And yes, you are skimming it every time :)
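One further variant the thread doesn't mention, added here for completeness and assuming the same foo.py as above: the module can be aliased on import, which keeps the qualified style without the long prefix.

    # alias the module at import time
    import foo as f

    my_object = f.Foo(42)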
http://www.linuxforums.org/forum/programming-scripting/44379-python-instanciating-classes-other-files.html
CC-MAIN-2014-41
refinedweb
250
77.37
Python Programming, news on the Voidspace Python Projects and all things techie.

itemgetter and attrgetter

I've just discovered the attrgetter and itemgetter functions from the Python standard library operator module (both functions new in Python 2.4 with added functionality in Python 2.5). I wish I'd discovered them earlier as although they do very simple jobs they can make code cleaner and more readable. Both of them return functions that will fetch a specified item or attribute from objects.

One place I could make use of attrgetter is in property definitions. It is a fairly common pattern to have a property with custom behaviour when set, but that merely returns an underlying instance attribute when fetched. As an example:

    def __init__(self):
        self._document = None

    def _set_document(self, document):
        self._document = document
        # do other stuff
        ...

    document = property(lambda self: self._document, _set_document)

The 'document' setter method is _set_document, but the getter is merely a lambda that returns self._document. We can improve this by using attrgetter instead:

    ...
    document = property(attrgetter('_document'), _set_document)

attrgetter is called with a string and returns a function that fetches that attribute. Anything that helps eliminate lambdas has to be good, right? It is roughly the equivalent of:

    def attrgetter(attribute):
        def getter(thing):
            return getattr(thing, attribute)
        return getter

In Python 2.5 you can call attrgetter with multiple attributes and the getter it returns will fetch you a tuple of all the attributes. As an added bonus, if you are using CPython, it is nice and fast.

itemgetter is very similar, but instead of fetching attributes it fetches items from sequences or mappings. One place this comes in handy is as a key function when sorting lists.

A common pattern when needing a custom sort order for lists is 'decorate-sort-undecorate' (also known as the Schwartzian Transform from its Perl origin). This involves transforming the list (decorate) into one that can be sorted using the built-in sorted (which returns a new list) or using the list sort method (which sorts in place). You then transform your newly sorted list back (the undecorate).

This pattern is now built in to Python. Both sorted and sort take a key function to transform each item. The list is sorted on the transformed items, saving you the effort of having to do it yourself. As an example, suppose we have a list of tuples like (first_name, last_name), and we want to sort on last name. We can achieve this by passing in a key function to sort that returns the second item of the tuple:

    sorted_items = sorted(items, key=lambda item: item[1])

I'm sure you can see what's coming. We can use itemgetter to eliminate the lambda:

    ...
    sorted_items = sorted(items, key=itemgetter(1))

itemgetter is roughly the equivalent of:

    def itemgetter(item):
        def getter(thing):
            return thing[item]
        return getter

As with attrgetter it is nice and fast, and the Python 2.5 version can take multiple items and the getter will then return you a tuple. It doesn't just work with sequences, but can also be used with dictionaries.

I actually discovered this when Christian was playing with Raymond Hettinger's Named Tuple Recipe (one of the awesome things coming in Python 2.6) on IronPython. Its use of itemgetter triggered a very obscure bug in IronPython 2 Beta 4 that has thankfully gone away with the Beta 5 release. Named tuples really are awesome, and we will start using them in Resolver One.
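To round the post off with a concrete illustration of those last two points - this example is an editorial addition, not part of the original post, and the sample data is made up - itemgetter works on dictionaries and, from Python 2.5 onwards, can fetch several keys at once:

    from operator import itemgetter

    people = [{'name': 'Ada', 'age': 36}, {'name': 'Grace', 'age': 45}]

    # itemgetter works on mappings too: sort the dictionaries by their 'age' key
    by_age = sorted(people, key=itemgetter('age'))

    # Python 2.5+: ask for several items and get a tuple back
    name_and_age = itemgetter('name', 'age')
    print name_and_age(people[0])    # ('Ada', 36)  (Python 2 print statement, matching the era of the post)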
Posted by Fuzzyman on 2008-09-24 12:54:48 | Categories: Python, Hacking | Tags: standard library, operator module

Python: Two Phase Object Creation

I've started watching a talk by Alex Martelli on Python and design patterns. It's very interesting. Alex points out that design patterns can't be taken out of the context of the language (the technology) in which they are being used. For example, the iterator pattern is now built in to most high-level languages. With first class types and functions, most of the object creational patterns are effectively built in to Python (class and function factories are trivially easy to write).

I'm only half way through the video, but I liked his explanation of object creation in Python. Python has two different 'magic-methods' (protocol methods) for object construction [1], and it is probably not until you have been programming in Python for a while (become a 'journeyman') that you need to understand this.

The method usually described as the 'constructor' is __init__. If you define a class with a 'dunder-init' [2] method, then it is called when a new object is created [3]:

    def __init__(self, *args, **keywargs):
        ...

When you create an instance - SomeClass(*args, **keywargs) - the arguments you use in the call are passed to the __init__ method. Notice that __init__ is an instance method - it receives the instance self as the first argument. This means that it is really an object initialiser; the instance has already been created when it is called.

The method responsible for creating objects is a class method called __new__. This receives the class as the first argument and should return an instance:

    def __new__(cls, *args, **keywargs):
        ...
        return object.__new__(cls)

dunder-new can actually return anything; there is no restriction requiring it to be an instance of the class. It can return an instance of a sub-class picked by the arguments passed in, it can return None or a pre-created instance (useful for the Singleton pattern). If dunder-new does return an instance of the class then dunder-init is called with the same arguments that were passed to dunder-new - the arguments used in the call to construct the object.

The vast majority of classes you write won't need a custom implementation of dunder-new. So when might you want to use __new__? We've already mentioned using your class as a factory that can return an instance of a subclass. Another reason is for creating immutable objects.

Because dunder-init is an instance method it can be called again on an instance. If object state is set up in dunder-init then calling it again with new arguments could change the state of the object:

    instance.__init__(*newargs, **newkeywargs)

If you set up the object state in dunder-new instead then calling it again creates a new instance rather than mutating the instance. You'll need to do this if you sub-class the built-in immutable types like strings, numbers or tuples.

Even if you only want to write a new dunder-init method that takes new arguments, you will still need to override dunder-new. Any arguments passed to object creation will also get passed to dunder-new, and the default one will barf on extra arguments. For example, in a subclass of int:

    def __new__(cls, value, arg):
        # ignore 'arg'
        return int.__new__(cls, value)

    def __init__(self, value, arg):
        self.arg = arg

Creating instances from classes is actually done by the metaclass (yes - they are in the language for a reason other than making things complicated). You can make objects callable in Python by defining a __call__ method.
If a class defines dunder-call, then instances of the class are callable. In fact functions and methods in Python are just examples of callable objects. Object creation is done by 'calling classes'. So if calling an object results in a call to dunder-call defined on its class, what happens when you call a class? The same thing... the class of a class is its metaclass - and when you call a class, __call__ on the metaclass is called.

For classes that inherit from object (new-style classes), the default metaclass is type. The two phase object construction [4] we have been discussing is implemented in type.__call__, and it is roughly equivalent to:

    instance = cls.__new__(cls, *args, **keywargs)
    if isinstance(instance, cls):
        cls.__init__(instance, *args, **keywargs)
    return instance

So you can customize what happens when a class is 'called' by implementing a custom metaclass that overrides __call__.

Posted by Fuzzyman on 2008-09-21 13:21:57 | Categories: Python, Hacking | Tags: metaclasses, oop, objects

This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
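As a worked example of the point about returning a pre-created instance from dunder-new - this sketch is an editorial addition rather than part of the post, and the class name is invented - the Singleton pattern falls out almost for free:

    class Config(object):
        _instance = None

        def __new__(cls, *args, **keywargs):
            # return the pre-created instance if there is one,
            # otherwise create it and cache it on the class
            if cls._instance is None:
                cls._instance = object.__new__(cls)
            return cls._instance

    a = Config()
    b = Config()
    assert a is b    # both names refer to the same instance

Because dunder-new returns an instance of the class, dunder-init (the default one here) still runs on every call, which is one of the quirks of this approach worth keeping in mind.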
http://www.voidspace.org.uk/python/weblog/arch_d7_2008_09_20.shtml
CC-MAIN-2013-48
refinedweb
1,385
64
Intro

If you read my last post, you should know what a Widget is; if you haven't already, do it now, I'll await.

    await Reader.read('Widgets and components');

Now you know what Widgets should be. I tried to keep it simple for the first implementation, but I'm really turning this around in my head. The thing is, React is a pure JavaScript framework, and JSX allows us to do things that we can't do with plain HTML templates; nevertheless, TypeScript allows us things that JavaScript does not, or at the very least it seems to. Like private properties.

With that in mind, I'll try to explain why you should use private properties inside a Widget and how to manage the data entry in a better way.

The situation

Following the comparison with React from the last post: when the props are sent to a component, most of the time we receive them following the destructuring pattern, which allows us to do things like:

    interface IUser {
      firstname: string;
      lastname: string;
      age: number;
    }

    export function usersList({ users = [] }: { users: IUser[] }) {
      return (<ul>
        {users.map(user => userDetails(user))}
      </ul>);
    }

    function userDetails({ firstname, lastname, age }: IUser) {
      return (<li><span>{firstname} {lastname}</span> has {age} years old</li>);
    }

I think this looks good, because it allows us to be explicit about what we want to show, and how to show it.

In Angular, we just @Input the data that we want to get from the outside.

    // user.ts
    interface IUser {
      firstname: string;
      lastname: string;
      age: number;
    }

    // user-list.widget.ts
    import { Component, Input } from '@angular/core';

    @Component({
      selector: 'my-user-list',
      template: `
        <ul>
          <li *ngFor="let user of users">
            <my-user-details [user]="user"></my-user-details>
          </li>
        </ul>
      `
    })
    export class UserListWidget {
      @Input() public users: IUser[];
    }

    // user-details.widget.ts
    import { Component, Input } from '@angular/core';

    @Component({
      selector: 'my-user-details',
      template: `<span>{{user.firstname}} {{user.lastname}}</span> has {{user.age}} years old`,
    })
    export class UserDetailsWidget {
      @Input() public user: IUser;
    }

This is right, as Widgets are supposed to display data, and their only concern should be how to display that data. But we need a better way to organize how we show that data correctly in the template. I like the approach from React in that sense. Don't get me wrong, I love Angular. But JSX allows us to do something nicer than dotted properties: destructuring assignment.

Solution proposal

Now that it is clear that React has a slight advantage over us here, we as Angular developers need to propose a solution to even things out, using features brought to us by TypeScript: private properties. I know that TypeScript is compiled to JavaScript, and JavaScript doesn't have such things, but we'll act as if we don't care. Also, we will use the ES6 set (setter) feature. This allows us to have properties that can be set, but cannot be read.

Introducing @Input set

    import { Component, Input } from '@angular/core';

    @Component({
      selector: 'my-user-details',
      template: `<span>{{firstname}} {{lastname}}</span> has {{age}} years old`,
    })
    export class UserDetailsWidget {
      private _user: IUser;

      @Input()
      public set user(value: IUser) {
        this._user = value;
      }
    }

OK. We learned we can set to private properties, but that does not quite allow us to show the properties in the template. Technically JavaScript only has properties, so we could bind to the private property, but we should never bind a private property to the template, as the AOT compiler would throw.
Introducing template get

    @Component({
      selector: 'my-user-details',
      template: `<span>{{firstname}} {{lastname}}</span> has {{age}} years old`,
    })
    export class UserDetailsWidget {
      private _user: IUser;

      @Input()
      public set user(value: IUser) {
        this._user = value;
      }

      public get firstname(): string {
        return this._user.firstname;
      }

      public get lastname(): string {
        return this._user.lastname;
      }

      public get age(): number {
        return this._user.age;
      }
    }

This actually works as expected, allowing better control over the data being displayed in the template, and over how to expose it. It would even allow us to do certain things to maintain the code in a better way.

    @Component({
      selector: 'my-user-details',
      template: `<span>{{fullname}}</span> has {{age}} years old`,
    })
    export class UserDetailsWidget {
      private _user: IUser;

      @Input()
      public set user(value: IUser) {
        this._user = value;
      }

      public get fullname(): string {
        return `${this._user.firstname} ${this._user.lastname}`;
      }

      public get age(): number {
        return this._user.age;
      }
    }

However, template get as shown here has certain issues, but I'll explain those issues properly in another post.

Example

For this, I made a little Plunker that exposes the use case: Here.

Thank you

As always, thank you for reading this far. I hope you like it, and I am really looking forward to reading your thoughts in the comments section.
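To show how the finished widget would be consumed - this host component is my own illustration, not part of the article, and its names are invented - the @Input setter is fed through ordinary property binding:

    import { Component } from '@angular/core';

    // Hypothetical host component that feeds the widget through its @Input setter.
    @Component({
      selector: 'my-user-page',
      template: `<my-user-details [user]="currentUser"></my-user-details>`,
    })
    export class UserPageComponent {
      public currentUser: IUser = { firstname: 'Ada', lastname: 'Lovelace', age: 36 };
    }

From the host's point of view nothing changes: it binds a plain IUser object, while the widget decides internally which pieces of it to expose to its own template.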
https://dev.to/michaeljota/input-set-pattern-for-widgets-in-angular
CC-MAIN-2020-50
refinedweb
780
53.61
Is Visual Basic a Good Beginner's Language? 1100 Austin Milbarge asks: "Ever since the .NET framework came along a few years ago, Microsoft had promised VB developers that their language would finally be taken seriously. To be honest, I never understood why some non-VB developers thought of VB as a 'toy' language, but that is for another article. Anyways, Microsoft made good on their promise and transformed VB from an easy to learn language into an object oriented power house, with lots of OOP functionality thrown in. The old VB has been discontinued, and the new VB is no longer a simple language. With all the fancy changes, is VB still the great beginner's language it once was? Would you recommend it to a beginner over C#?" still C (Score:4, Interesting) Re:still C (Score:5, Funny) % Segmentation Fault % Re:still C (Score:3, Funny) #include int main() { printf("Hello world\n"); } $ gcc -o test test.c $ test $ Re:still C (Score:5, Funny) #include int main() { //Begin Main function printf("Hello world\n"); } //End Main Function Re:still C (Score:5, Funny) ...or at least that was the way my first Hello World program was graded (although that was in C++). Re:still C (Score:3, Informative) If you don't know this yet: Use < for < and > for > Re:still C (Score:5, Funny) #include <stdio.h> int main() { printf("Segmentation Fault\n"); return 1; } Never improve on perfection :) (Score:4, Funny) That should return EXIT_FAILURE or something, not 1. Re:still C (Score:5, Interesting) On the contrary, C is one of the worst languages you can use as your first language. I've tried teaching C to first year students. You end up teaching the syntax and vagaries of C and not very much programming or computer science. One of the first things that beginning programmers want to do is do stuff with strings. Do you really want to explain C strings to people who have never programmed before? (There's a reason that microcomputer BASICs got peoples imaginations working. You can do stuff in it straight away!) Amusingly, about the only worse mainstream language I can think of for this purpose is VB. C's syntax and semantics at least have the advantage of being consistent. (In its defence, VB wasn't exactly designed to be the way it is. It was a fairly inextensible language which nonetheless got extended as the years progressed. "Congealed" might be a better word than "designed".) Save C for semester 2. Semester 1 should use a language which emphasises the basics. A functional language (say, Scheme or Haskell, but not ML; ML is a cool family of languages, but the syntax is way too arcane for a first language) would be my first choice, but even Java is a better choice than C. Hell, even C++ is a better choice. The syntax is at least as quirky as that of C, but at least it has a decent standard library which lets you do stuff straight away. Visual Basic is horrible; use Python (Score:4, Insightful) Re:Visual Basic is horrible; use Python (Score:5, Interesting) Re:Visual Basic is horrible; use Python (Score:4, Informative) Re:Visual Basic is horrible; use Python (Score:5, Insightful) Why would you want to start with "the hard way"? Bringing students into a EECS program might benefit from starting out with C or even assembler, but for anything else it is overkill. Python teaches variables, functions, structure, and control flow as well, so I don't see the disadvantage of starting with Python. The main thing that Python does not teach is memory management and some of the more interesting bugs that C can produce. 
The main reason to introduce a student to more than one language is to begin without the more complex parts of languages like object oriented code, and even structured code. The nice thing about python is that you can start with procedural code, continue through structured code, and end up at object oriented code, introducing only a few concepts at a time. With C, you have the problem of explaining the main() function or doing handwaving before you get around to explaining functions. With Java, you have the problem of classes even before you get to a main function. When even the most basic program requires structured or object oriented code, you have a problem teaching it to beginners. Take the following examples: Java: Student asks: What is a class?, What is that funny looking bracket?, What is public?, What is static?, What is void for?, What is main?, What are the parenthesis for?, What is a String?, What is args?, How come there are funny square brackets?, What is system?, What does the dot do?, What is out?, What is println?, Why are there quotes there?, What does the semicolon do?, How come it's all indented like that?. C: Student asks: What is #include?, What are the greater than and less than signs doing there?, What is stdio.h?, What is main? What are the parenthesis for?, What is the funny bracket for?, What is printf?, Why is hello world in quotes?, What is the backslash-N doing at the end?, What is the semicolon for? Python: Student asks: What is print?, Why is hello world in quotes? Get the picture? Re:Visual Basic is horrible; use Python (Score:4, Funny) And in other questions... (Score:5, Funny) Dear Osama Bin Laden: Would you like to come to my bar mitzvah? Dear Eagles fans: Would you be willing to sign Terrell Owens again? Re:And in other questions... (Score:3, Funny) Come on, quit holding back. How do you really feel about her? Not really. (Score:5, Insightful) Re:Not really. (Score:5, Insightful) Go with Ruby. It's such a lovely language. Good Beginner's Language (Score:4, Insightful) Re:Good Beginner's Language (Score:5, Insightful) An important point to note here, is that most programmers nowadays don't need to be aware of this. At least not to the extent they used to. More and more, business and industry needs software to simply automate simple tasks. As processing power increases, as memory space grows, it's not necessarily the case that the basic tasks people need done will grow to match them. Thus it becomes less important for a programmer to optimise or worry about optimisation. In addition, although more programmers will be required to create software, they will not be required to delve into programmings basic complexities. It's not required you know about opcodes and memory addresses to write HTML, and soon it will be th ecase that you won't need to know when coding business apps. At all. In short, the future legions of greasemonkey coders will be using Ruby on Rails, not C and FORTRAN. When the job gets too big, complex and nasty for them or a hardware upgrade to handle, they'll call in the high priced consultants who can still code in low level procedural languages. C != Good Beginner's Language (Score:5, Insightful) I agree that bells and whistles increase a languages' barrier to entry, but if they can be ignored (like a lot of the Java library) then it is a moot point. C is not, nor ever will be a newbie language. 
By the train of thought that it is best "to [know] what goes on under the covers", then the logical conclusion of that is to teach a simple assembly language, quickly followed by a compilers and systems course. In math, we typically teach younger students how to use a function or expression before we teach them how to prove it (consider it to be the process of giving them the specifications). Disclaimer: I have helped and witnessed many students learn C. I'd go with C# (Score:5, Insightful) If I may make a suggestion... (Score:5, Insightful) And yes, despite being a Linux hacker now I once did use Visual Basic, and I have to say it took way longer to learn VB than it did Python. I'm a fan of Java (Score:3, Insightful) Re:I'm a fan of Java (Score:5, Funny) You really need to get out more.. -jcr Good ... for what? (Score:5, Insightful) I guess this depends on what you qualify as "good"... I'm sure there are other reasons to consider VB to be a "good" language. Since I don't do VB anymore (thank God), I have lost track of those reasons. I think I'll stick with C and PHP, this way when I get a customer that wants something that'll work on Solaris or QNX or AIX or HP/UX, I have half a chance of success! Bad idea (Score:4, Interesting) Short Answer: Are you out of your bleeding mind? Long Answer: Visual Basic is riddled with problems for those who are new to programming. The first problem that hits someone looking to learn programming is that he/she sees a pretty layout manager, but no code. It's quite possible to build an interface without ever writing a single line of code. When the entire point of the exercise is to learn coding, this is NOT a good thing. The second problem is that Visual Basic doesn't clearly introduce the "programmer" to concepts like functions, interpreters, and compilers. Most of the functions in VB are automatically generated, giving the impression that these are magic incantations that shouldn't be touched by a "programmer". VB Studio has an interpreter, but it isn't interactive in the same way as BASIC interpreters. This makes it useless as a learning tool. The compiler is mostly a matter of setting a file name and hitting a button to produce an EXE. So the new programmer gains no understanding of how code gets translated into an executable. Concepts like linking, for example, are completely glossed over. One of my personal beefs with the older versions of VB (which have been corrected in VB also loses major points for failing to include typed variables. The automatic conversions between numbers, strings, and other types only serves to confuse a new programmer, especially when the auto-cast does the wrong thing. A new programmer should be taught to understand how data is represented by computers, not abstracted away so far that they can't understand how to fix problems. Beyond that, VB tends to do a lot of confusing things that are not easily explainable. The lack of useful documentation and/or a good documentation browser only serves to increase confusion. To be honest, I never understood why some non-VB developers thought of VB as a 'toy' language, VB had/has its uses, but it's still just a RAD tool. As soon as you run into situations that the RAD tool can't handle, you should be using a real language rather than trying to hack it. Re:Bad idea (Score:5, Insightful) Your first problem is that you're mashing VB6 and VB.NET. They, for all the similarities in syntax, are really completely different languages with a completely different runtime going underneath. 
Now, since this is a question about a "beginners language", it's unlikely that someone would mistakenly rant off about VB6, since it has been largely deprecated. Anyone starting with "VB" now would use VB.NET, with or without the pretty IDE. I think that's clear enough from most of the posts I've seen so far in this article. Some of your points are valid vis-a-vis VB6. It was completely tied to the IDE (the preprocessor infact was the IDE) and it supported a semi-OO model, which is like saying "a little bit pregnant", but regardless, most of these limitations were related to the fact that VB6 was essentially a COM server and consumer platform. The lack of implementation inheritance is a good example of that - since COM is a binary spec, it does not support it. Polymorphism and aggregation OTOH, which permeate COM, were. So pre-VB.NET, "Visual Basic" was both hobbled and all the better off for being tied to intrinsically to the COM spec. VB6 didn't behave like it did because someone at Microsoft didn't have anything better to do, it did because it had to play by the rules - the rules of COM. You could either understand these limitations (if they were to you) and live with them, or just use C++. By the time Microsoft released ATL, COM-centric coding in C++ became extremely easy - I always chuckle at the quitessential "yeah I know C++ and VB sucks, but I don't know a replacement for the GetObject() function and my life suxx0rz" claim from people who think it's really cool to bash VB because it has a large following of hobby developers that know nothing about software design, as if it was impossible to do anything meaningful with it. But I digress. Along comes VB.NET, which is essentially the VB6 syntax ported to the .NET CLR. Like the other "mainstream" languages that target the CLR/CLI, VB.NET is essentially a full OO implementation, unless you're willing to call Python or Java "toys" because they don't support multiple inheritance or the concept of friend classed as implemented by C++. So you have a fully OO language (for all practical purposes) with generics, operator overloading, partial classes, etc. that can be easily decoupled from the IDE - all you need is a text editor and the compiler, though most people prefer the IDE route. It just happens to look like BASIC. Other than that, I think it's a good beginner's language. Wouldn't you agree? That's as far as VB currently goes... the rest of your rant is just the usual bashing a platform that is no longer supported or in active development, nor understood (obviously) by people like you. dont learn vb (Score:4, Insightful) If you really want to start with a OO language, pick Java or C#.. But be warned, those are dynamic languages (Java, C#, Perl, PHP, Python, Javascript, etc) and they have some differences compared to "hard-compiled" languages like C. C forces you to understand how the computer works, and it will always help afterwards to know that. Python is also a good beginner's language, its clear, clean, easy to learn, easy to use. Stay away from Perl and PHP, they are very easy to use.. but they teach bad habits. And VB is badly considered not because the language sucks (and it did suck last time I used it.. but that was many years ago), but because most VB programmers suck and are not very good. Often not formally trained and they dont really understand many important concepts. Its fine if you want to cook for you familly, but that's not how you cook for a large restaurant. 
A good formally trained programmer should be able to pick up any not-to-weird language in very little time (since they all have basicly the same concepts)... VB programmers most often can't. Where I work, I have to handle C, C++, Java, Perl, PHP, having a good base is important. The concepts are important, the syntax is just a tool. Get a good tool, dump VB. In the next episode of Ask Slashdot... (Score:5, Insightful) "Is kicking puppies still a great way of attracting women, or do you recommend kittens these days?" VB was never a great beginner's language. It's wrong all over. The only thing that got it a reputation for being a "great beginner's language" was that you could draw the GUI in later versions * before you actually learnt how to write code, so you could get visually pleasing results immediately, whereas the competition at the time meant you actually had to learn how to use a GUI API (and consequently, how to write code) first. You want a good beginners language, look at Python. It's been used successfully in teaching environments for a while now. It enforces good practices like indentation and prohibits easy sources of bugs, like if foo = bar: O'Reilly have an article [oreilly.com] about Python for teaching programming that you might be interested in. * Yeah, the first versions of Visual Basic ran on DOS and didn't have the GUI builders that later versions did. I'm not quite sure what qualified them as basic of the "visual" variety, it's not like you had to type your code in with your eyes shut in other basics. Re:In the next episode of Ask Slashdot... (Score:3, Informative) VisualBasic 1.0 was most certainly Windows, I belive you are thinking of QuickBasic. I actually started way back when on QuickBasic and jumped to assembler (this was all on a 286), but that was all hobby stuff. When I began programming professionally Re:In the next episode of Ask Slashdot... (Score:3, Informative) No, there was such a thing as Visual Basic for DOS. I wouldn't blame you for repressing any memory you have of it :). It's mentioned in this Microsoft Knowledge Base article [microsoft.com] about preparing Visual Basic applications for Y2K. I quote: D'oh! (Score:5, Insightful) Private Sub Command1_Click() End Sub What they will do in the process is to go out and grab a bunch of someone else's code, paste it in there, and change the names of a few things. It really bothers me that the product of this process is even called software. At best shouldn't it be called 'macro-gramming?' Sorry to be such a stickler, but does that programmer have any idea what really goes on when that button is pushed? When the end users need a change that is not an exposed property or method of the pre-packaged object, what can they do? They probably have more creative skills when it comes to making excuses than they do at actually programming. Hell, we've all done it. It seemed like a good idea at the time to just slap together a few goodies, make it look pretty and ship it out the door. But what you end up doing is letting someone else make all the really important decisions for you. If you're lucky enough to be able to satisfy all the demands you encounter that way then more power to you. In order to learn the principles of computer programming, less is more in my book. The more computer science you know, the less dependent on any particular set of tools you become. When code is dear and time consuming to write debug test and maintain, you will be absolutely amazed on how little of it you can get by on. 
Take the same algorithm and implement it in a couple different formats, languages, compilers, etc. See how many instructions it actually becomes when it gets run. See where different efficiencies of speed or size become important. Try some Python to see what can really be done in an interpreted environment. Try a C compiler. Try looking for a couple of algorithms and see which one performs better and be able to describe why. Then, no matter what tools you end up using, you will have a much better idea of what is going on, how to make it both secure and efficient from the start. Free Pascal (Score:4, Informative) Object Pascal is a good language for beginners. It has strong typing and object-oriented features, but the typing isn't strict to the point of being obnoxious like in Java. It is lower level, so you will deal some with pointers and memory management but it is harder to make a mess with than C/C++. You can also visually design the UI of your application, but the language isn't a disaster like VB (and doesn't run in a VM like C# or Java, so it's quick). Yes ... and all the others too. (Score:3, Insightful) Every language I've learned has been useful on the various projects I've worked on and provided a perspective for evaluating what methods to use for new development. Learn every language you can. You'll probably be surprised to find that you don't just get broad shallow experience, but each language actually gives you more in-depth knowledge of the others (and what they may do behind the scenes). VB(A) is the scripting language built into many Microsoft products. Whether or not you harbor loathing for Microsoft, knowing VB(A) will be very helpful for many tasks and may be necessary to get a paycheck from many places. It's decent. (Score:5, Insightful) The long and the short of it is this: VB ain't bad. People will say that Visual Basic is "unstructured," and they're clueless. People will say that Visual Basic is slow, and they're one step up from clueless (VB5 and VB6 compiled to native code and could, when used correctly, rival Win32 C++ applications for speed; VB.NET compiles to the same CLR the rest of the My personal view of the Win32 API is that the inventor didn't like people. Window creation is needlessly masochistic. VB takes that hassle away. I've written applications where the entire backend of the program is in C++ and used the VB interface just to call C++ DLL functions. It's doable. It works pretty well. Basically--VB is a viable language if you want to get something done *now* and don't care all that much about whether it's pretty. Would I use it for game programming? No (once was enough, a 2D RPG for a school project in sophomore year of high school). Would I use it to write something quick and dirty that I need immediately? Sure, and I'll be done before a C++ coder even has a window up and running. VB also has some pretty nice features that YFTL lacks. You can run the program without compiling it, in interpreted mode--very useful for bug-ferreting. Its class system pre VB.NET was baroque at best, but its built-in garbage collection/memory allocation on-the-fly and the fact that all arrays could be dynamic without external references made it fun to mess with. ~Ed Yes. Just to be different. (Score:3, Insightful) Next up. Right tool for the job. If you're interested in embedded applications, coding on linux, or high performance apps, going And finally, to refute some pundits. 
VB.Net is a syntax option for coding in CLR, the same as C#, J#, PHP.Net and all the other screwy variations of *.Net. Vb.Net is every bit as Object Oriented as C# or Java. VB.Net by default has explicit and strict options off, turning those two options on makes its compiler just as strict as the C# compiler. VB.Net also has almost all of the functionality from C# (I have heard that there are some obscure pointer functions that are not in VB.Net's syntax, but I have never run into them, or the lack there of). Another one of those "What were they thinking?" items though, VB.Net has a "Hide Advanced Methods" option on by default that hides a lot of methods from the autocomplete lists, turning it off allows you to see all of the same functionality as in C#. The only substantial differences from VB.Net to C# is syntax Things like: VB.Net: Private VarName as String C#: Private String VarName VB.Net: If Var1 = Var2 Then 'Code here End If C#: If Var1 == Var2 { } -Rick VB.NET is just C# with different keywords (Score:5, Informative) If you're learning to code using the -Stephen And now for something completely different... (Score:5, Interesting) Developing in an IDE like VS obfuscates and distances the programmer from the code. It's a necessary evil for developing some things. But throwing a learning user at the bubbly GUI to figure out the wizards for him/herself is akin to putting a new pilot in the seat of a 747. There is just too much there that would seem confusing. For these three reasons I would suggest python: 1. All you need is a free (as in speech) interpreter and your favorite text editor. 2. Documentation, howtos, sample code is easily available (there are plenty of good VB help sites out there, but I have found many many many fantastic samples of python). 3. The syntax of VB and python would seem similar enough to a beginner. Where to begin? (Score:5, Insightful) This question is wrong in so many ways... Good beginners languages are: I would say Common Lisp is the best, but if you start programming using Lisp you'll never truly appreciate it because you assume all languages are that well-designed. Do you really want to learn how to program? (Score:4, Insightful) Or you just want to get some work done? If you really want to learn something, you should do it with PASCAL. Some people told you do study C, but after trying to teach it to a few people, I am fully convinced that C is not a beginer's language. PASCAL is different because you won't need to know about pointers to do quotidiane stuff, but still have manual memory allocation to study. To learn how to program, use PASCAL on a CLI. Don't worry about the time investment, you'll learn VB much faster after you know what you are doing. Just to finish, I'd like to put here a very true quote from Dijkstra: Nope, start with Pascal (Score:5, Insightful) Visual BASIC as a beginner language (Score:4, Informative) Visual BASIC.NET was a rewrite of Classic Visual BASIC, which added in C++ type error trapping, objects, and other things that many have criticised Classic Visual BASIC for not having. Many VB developers want Microsoft to continue to support Visual BASIC 6.0 or Classic Visual BASIC, but Microsoft wants to move on. BASIC stands for Beginners All Symbolic Instruction Code, the first word is for beginners. It was not designed to be anything but a learning tool, like Pascal, Pilot, and many other languages were designed to be. 
Microsoft used it for early Microcomputers, and then made a GW-BASIC version of it for MS-DOS and then later QBASIC or Quick BASIC for MS-DOS 5.0 and above. Many considered GW-BASIC and QBASIC to be free versions of BASIC and developed for them. Microsoft released Visual BASIC 1.0 and many BASIC developers adapted to it. I recall learning MS-Access 1.0 and using a form of Visual BASIC for applications for it, which they called Access BASIC or something. Borland picked up the Pascal craze, in colleges they taught Pascal for data structures courses. There was UCSD Pascal, but Borland came out with Turbo Pascal and it worked faster than most Pascal compilers. Object Pascal became Delphi by Borland, and it is still popular and a competitor to Visual BASIC. Free Pascal tries to use Object Pascal to be more like Delphi and the Lazarus project uses an IDE with Free Pascal to work like Delphi or Visual BASIC. I think there is an XBASIC out there that works like Classic Visual BASIC. Someone made a GNOME BASIC. The Novell Mono project has a Visual BASIC.NET language which is used on Windows, Linux, Mac OSX, *BSD Unix, etc. The whole argument against Visual BASIC is now moot. Classic Visual BASIC lacked proper OOP, but Visual BASIC.NET fixes that, but at the cost of learning new programming methods and syntax for Classic Visual BASIC developers. While designed for beginners, Visual BASIC has extended itself. Visual BASIC.NET uses a compiler very much designed like C# or C++ to compile into IL (Interprited Language) code (which is like assembly language) to run on the You will be shocked to find that most businesses use Visual BASIC.NET for the same reasons that they used to use COBOL, it is easy to learn, uses English words, and almost anyone can learn it. Still don't discount C#, C++, Java, Python, Perl, and many others, they can interface with Visual BASIC via the Is it a good beginner's language? (Score:4, Insightful) Personally, I think it's a toy language because it separates the programmer from the bare metal of the machine, with too many layers of abstraction, confining the programmer to a "digital playpen" much as you would confine an infant. I have similar feelings about C#. I started with C64 BASIC, moved on to C, then C++, then I learned MSVC and VB at about the same time, and after that I picked up ASM. I really think I learned a lot by following that path, and I'm glad I learned how much work went into writing a GUI long before I dragged and dropped my first VB app. Your SECOND language is the important one! (Score:4, Interesting) * learn several languages * learn languages with widely differing characteristics * learn them well enough, i.e. you don't know a language until you've used it for at least one non-trivial task * take a data structures and analysis of algorithms course, after you know at least two languages Most of the people I would consider bad programmers know only one language, or know one well and others very superficially, like the engineers who can write Fortran in any language*. To show what it's possible to overcome, I started out with BASIC in high school. BASIC does not cause permanent brain damage, if you limit your exposure to it. Before college I had moved out to assembly (PDP8); in college I was exposed to COBOL, FORTRAN, PL/I and 360 assembler. In graduate school I moved up to Pascal and C, but I also finally took a decent algorithms and data structures course - and learned Lisp. 
Those last two things were probably as important as all the previous experience in making me the hacker I am today. * This is not meant to be a slur on all engineers who program when necessary - just the ones who do it badly, over and over again. Personal Experience Speaking... (Score:4, Insightful) VB is pretty good at teaching programming or getting people started in programming. Being a modern 'basic' it can allow people to get the initial concepts of variables, and put them to use in a syntax that reads like common english language, yet not leave them making a turtle follow lines around a screen. The simplicity is also good to find the 'clicks' or points where people get it. When not teaching this stuff you forget these clicks, even explaining concepts as variables is something that is hard for some people to catch, even if they understand algebra. VB also can do some fairly advanced things now, especially with the current A person could start with no programming background, do the hello world, and stick with VB and make a career from it producing ok software. Pascal is also another easy to understand language (designed to be a learning language even), and it with Borland's support can be almost as powerful as C/C++. So it is another good starter language that a career can be made from - especially Europe, Delphi does quite well there in comparison to the US. I have taken a couple of roads with people, using either VB or Pascal as the 'get it' starting language. Then I progress them to some advanced levels in each language, and along the way contrast in another language, C is the poster child here for the contrast. It can show complexity and also levels of creativity not normally used in the other languages. Useful comparisons to stuff they are currently learning as well as 'wows' like a line of C code that is very complex and recursive, but performs as much as an advanced program. This lets them 'click' along the way, and will hopefully keep perspective and the certain 'creative' element that syntax complexity of C draws out of people. The 'creative' complexity has to be nurtured, even if you are keeping people in Pascal or VB for their career (or they are not going past that). It was the creative of 'how to make it work' concepts that are so dominate in C that define 'good programmers', because in the old days, we had to make it work. Yes it is nice to drag a button on the screen and have the IDE do the work for you, but without some of the 'creative' what ifs, and 'how can we' questions, programmers won't be more than glorified form designers, and that is sad for them to invest time in learning something and not fully getting it from both angles. (The logical syntax and function and the creative inspiration of thinking outside the box.) Programming is one area of expertise that definately benefits from bridge-brain individuals. Creative Logic at its finest... And sadly if the person you are teaching don't fully click in either direction (logic or creative) then you lead them down the road they are good at, and let them pair with a person or team that fills in the other side... Re:No. (Score:3, Insightful) Re:No. (Score:3, Funny) Or run, screaming in mental agony from the building as their virgin eyes behold the Java "Hello World" app. Re:No. (Score:3, Insightful) If you have to go to such trouble, why did you leave C++ in the first place? Java is a language designed for people who already know how to program. 
It starts off, from day one, with object orientation concepts, scope, namespaces, system calls, and in some Hello world cases, typecasting. All this verses a one liner in other lang Re:No. (Score:5, Insightful) 1. (Insert favorite *simple* language here): simple meaning that you can't do much with it, and what you can do is very easy and obvious. Qbasic comes to mind, as that's what I learned with, but there are many which fit the bill. 2. C: Turn the language from a magical tool that does what you want (poorly) into something that actually reflects the underlying architecture. Helps the programmer understand why their previous language performed so badly. Widely used (and mimicked), so they know a useful language. 3. C++: Now that they understand what's going on under the hood, teaches them good coding habits - objects which clean up their own memory, const variables, object oriented programming, generic programming, etc. Widely used (and mimicked) 4. Any other language(s) here: they already know the basic concepts, so it's just implementation details. You *could* have them branch out at any other stage, assuming that they've already learned the prerequisites. For example, once they've just learned to program (#1), they could do basic python. They won't be able to write maintainable or fast code, however. If that's not a problem, power to them! I haven't used VB since Windows 3.1. I understand it's really changed Re:No. (Score:3, Insightful) I only wish Re:No. (Score:3, Interesting) Re:No. (Score:5, Interesting) All fresher Engineers here (Cambridge, UK) have to learn to program 8086 assembler. Except they don't get an assembler, they have to enter the program using a hex keypad on the front panel. Yes, in 2006. And no, I'm not joking, I did it last term. Re:No. (Score:4, Insightful) at the command prompt type: python BTW I think Python is without a doubt the best language to teach a begginer with. Re:No. (Score:5, Insightful) I disagree. This is the wrong way to introduce OOP, as it treats it as some sort of high level way of managing code rather than as a fundamental technique that can be used at all levels. My view is that the best way is to teach something like Smalltalk or Ruby initially in a procedural style, and then show that everything in the language is an object, with methods and properties. Then, perhaps, the compromises made in a language like Java can be explained. One thing that should definitely be avoided is C - for goodness sake teach a safe language like Pascal instead. Beginners should not be dealing with pointers to memory (most developers never need to anyway). OOP needs to be taught at the start, not as an optional add-on. Re:No. (Score:3, Insightful) Re:No. (Score:3, Insightful) Re:No! (Score:3, Insightful) Re:No! (Score:5, Insightful) The only language a beginner should be using is C, C++, or assembly. Re:No! (Score:3, Insightful) Re:No! (Score:3, Insightful) A consistent object model with a real base object. No pointer/reference weirdness. Java has range checking on arrays. C++ is a good production language when you need the speed. Java is a "safer" language. STL really helps c++ a lot but it still isn't safe or friendly. However a person that does learn to program well in c++ will probably be a very good programmer. Re:why? 
(Score:3, Interesting) Well, I'm no fan of java for my own use (I like C++ for "C type" stuff, and I'm far more fond of e.g., lisp family languages, for "GC'd no worries" stuff), but having helped people use it for college programming courses, it does seem to have appreciably fewer sharp edges in many ways than C++, without most of the bogosities of something like VB. Some reasons: Re:No. (Score:4, Insightful) I sympathise with your sentiments, but VB is a turing machine just like all the rest of them [sourceforge.net] Re:Is Visual Basic a good beginner's language (Score:5, Funny) Re:Bad idea- compilers (Score:3, Interesting) Oh yeah, and completely agreed. I'm a professional programmer who learned VB after college- and I can always tell the difference in code between a real programmer and Visual Studio Wizards. Re:Bad idea- compilers (Score:5, Funny) So can I. The #Region " Windows Form Designer generated code " seems to be a bit of a giveaway, no? Re:Bad idea- compilers (Score:3, Insightful) No... right clicking and selecting "New Form" is a nicety, and far from making someone a non-programmer. Eclipse and other IDEs have wizards as well - a developer using and IDE does not a non-developer make. Re:Bad idea- compilers (Score:4, Interesting) If more people learned real languages before jumped-up assembly languages like C and pseudo-OO languages like C++ then we might see a bit more innovation in the language design community. Oh, and all three of the languages on my list run in an introspective environment. Re:Bad idea- compilers (Score:5, Insightful) I learned Logo in middle school. I learned QBASIC (ugh), Pascal (ugh), COBOL (ugh++), and RPG (!) in high school. I learned C, C++, VB, and Java in college. Those landed me a job doing CAD drawings for a small company. Eventually, I learned PHP on my own. That gave me enough "experience" to get a PHP job. So what have I learned? - All the languages I was told were going to be useful "in real life" have turned out to be mostly worthless (perhaps I haven't reached the level of the C++ stuff yet... I'm reserving judgement on that one). - Concepts are best learned from pseudocode, not from any particular language. - Comfortable syntax is learned from languages that are built around a particular concept. - Databases are the real reason OOP is a necessity. Data objects are your friends. - Most programmers are not architects/designers. They're too impatient. They jump right in and code a plate of spaghetti before thinking about how long they'll have to support that code. Some of them do fairly well at making things efficient, though, so you can't fault them all. I don't know ASM, so I tend to disagree with the hardcore "I coded in ASM uphill both ways naked in the snow blah blah blah bring me my cane, sonny" crowd. It's time to pull the plug, gramps. I also disagree with the academics that sip lattes, listen to jazz, wear berets, and say that everyone should learn and use [insert obscure language here] and piss and moan that it's not happening. Man up, nancy. The real world uses real tools for real work. Your toy languages are not going to be used. So take your Smalltalk, LISP, and Prolog back to your local Starbucks where you can "ooh" and "aah" about how "advanced" they are. If you're going to teach concepts, do so. Don't use a language as a crutch. Teach in pseudocode. Give examples of "how-to" in multiple languages. If you're going to teach a language, don't teach concepts. Teach what that tool is supposed to be used for. 
PHP is for dynamic web pages. C++ is for, well, damn near anything, but not dynamic web pages. Java is kinda like C++, but slower (unless you fuss with compiling natively), and can be multi-platform with minor changes. Perl is great for a quick, unreadable script. VB is nice if you want to spend lots of money for the ability to build piddly-shit apps that only you will use. And remember that not everyone learns things the same way. Someone who "just gets it" with C, C++, Java, PHP, and similar-looking languages might have an aneurysm just looking at code in Objective C. (I did.) Sometimes a familiar syntax matters. And yet, that same person (now bleeding out on the floor) might have no trouble at all deciphering Visual Basic or Pascal even though they're different. (Again, me.) That should tell the designers of the aneurysm language that the syntax is annoying, shitty, and induces aneurysms. (Go Smalltalk and Obj-C!) Re:Bad idea (Score:3, Interesting) I agree that clicking on a wizard is not programming, but for someone who's just starting, built-in IDE tools (like wizards) can really help. As an experienced user, I have no problem manually typing private void Button1_Click(object sender, EventArgs e) { } Re:Bad idea (Score:5, Informative) I totally disagree. Difficult? Complicated? Sure, but not mean. People who have learned assembler are the ones who understand why one of two conceptually identical snippets will usually run faster than the other. There are many things that seem perfectly reasonable in high-level languages that turn out to be a really bad idea once you learn what's going on in the hardware. I'm sure it's possible to learn that stuff without hitting the metal a few times, but I've never, not one single time, ever met someone who's done so. Re:Maybe, maybe not. (Score:5, Insightful) It probably wouldn't in a language like C, since it is very difficult to diagnose side-effects. If, however, you pick a (functional, or functional-style) language that supports a foreach statement then you could say something like: foreach({x,y} in image) -> do_something(x,y). Your compiler / runtime would then pick an optimal number of concurrent threads to do this with for your target environment. A lot of the time, going to a lower level is a bad idea because: Re:Bad idea (Score:4, Insightful) Thank you -- I'm going to try to keep that in mind. I appreciate the tip. But if they tried to teach that stuff in programming 101 (regardless of the language), the students would leave at the end of the semester complaining that they didn't get to do anything fun, and are less likely to want to continue. A new student will have a hard enough time getting a project working. If you teach complex subjects first, they're more likely to make simple mistakes, rather like the typo you had in your initial example. Let them get passionate about the subject by completing small tasks right away, then move on to the heavy material. For that matter, in your example, the first code was not a "bad idea", it was simply less efficient. It would have been less efficient regardless of the language complexity, and the concept of cache misses can be taught regardless of language. Re:Bad idea (Score:5, Insightful) Bad idea. A new programmer should start with small command-line programs, and grow into coding bigger things *by hand* at first. Only when they understand exactly what the wizard does should they start using them as time savers.
That, I think is the point of wizards - to save you time, not to do for you things you don't understand. When new users start using wizards, bad code WILL result. Re:Bad idea (Score:3, Interesting) Now that's just silly. Does it make any difference if the function is GetDate() or getdate() or getDate()? Why spend valuable memory, attention span, and compile time dealing with silly nonsense like that? I'd much rather have my programmers know a hundred functions but not be 100% clear on the casing than have them know 10 and know the casing 100%. Re:Bad idea (Score:3, Interesting) I'll second that notion! Computer Science, as a generalization, has three types of people: People who only learned a high-level language, then learned algorithmics, and now produce "elegant" code that would take 2^27 times the current age of the universe to finish its task on any physically-possible hardware; People who took CS and expected to come away knowing how to pro Re:Try COBOL (Score:3, Interesting) Re:Try COBOL (Score:4, Insightful) when I say programmer I mean someone who designs and writes applications. Re:Bad idea (Score:3, Insightful) Re:Yes. (Score:4, Insightful) This doesn't even make sense. Java is a general-purpose programming language -- you can write absolutely anything in it. Client software, server software, command-line tools, graphical tools, compilers, games, anything. What do widgets have to do with things? Re:Yes. (Score:5, Insightful) I'm not a programmer because I love to program, I program to do a function, to make some part of my job easier. This is a truly key statement in his post. You have to ask yourself what you want to do with programming. If you want to write software that'll do interesting things for your own personal use, then VB is probably about right. It won't produce elegant code, but it will produce simple functionality fairly quickly, and you can build your own tools with it. In a society where computer illiteracy is becoming as problematic as written illiteracy, this kind of programming language definitely has a place. On the other hand, if you want to produce programs for OTHER people to use, you shouldn't flinch at spending a year learning how to make a programming language do what you want it to do. It's like mechanical or civil engineering. If you want to build a shed out back or a trebuchet then go ahead and pickup some parts at Home Depot and start nailing things together. If you want to design anything that ANYONE ELSE is going to use, like an office building or an automobile, then you had better figure out how to use something a little more sophisticated than 2x4's. A lot of people will come back with the argument that there should be something easier to learn than C or C++ for the beginners, but in my experience that's a flawed argument. Learning a language is an investment in time, and most people are unwilling to discard that investment. Instead, they've bolted on afterthoughts to the programming languages to make them more functional. For that reason, VB6 was always a horse designed by a committee. If you want to learn how to program like a professional then start with a professional language. The one exception is Assembly Language. Every time I try to teach people how to program I start by teaching them the Twelve Instruction Programming System (TwIPS), which is a simplified subset of assembly. With this they learn the bare bones of what any piece of software does, how algorithms function inside a computer, and what the instructions are really doing. 
And when they get around to learning C++ they find it considerably less tedious than if they had hit it directly. Re:Visual Studio and Visual Basic (Score:5, Interesting) Yeah, because programming boilerplate includes, class, main, and event handling code (which does nothing on its own) is really going to get someone hooked. Screw that. Give that code to a new programmer for free and let them add in something that does something fun, obvious, and interesting right away. That's how you'll get 'em hooked. Look at it this way... no-one got hooked on pot because they liked making bongs. Re:Why not both? (Score:5, Informative) There is a lot of devil in the details of that almost, however. C# has developer API comments yet VB.NET does not. VB.NET has more support for shadowing than C#. C# has useful convenience functions like using that VB.NET does not. VB.NET has convenience functions for late binding (considered harmful) and case-insensitive string comparisons that C# does not. C# has more object oriented features such as operator overloading that VB.NET does not. The list goes on and on. Re:Why not both? (Score:5, Informative) Re:Why not both? (Score:5, Interesting) I hate to jump on the Java/VB/C/C++ lovefest, but the question was about a teaching language. Why not something simple - like Pascal? Basic? A step up, but COBOL? Yes, there are advantages to learning a language like C++ that you will end up using, but it's not necessarily the best approach. One of the biggest complaints I have about a lot of the VB and C++ programmers I've been exposed to is a complete lack of fundamentals. Code that works, but sucks because they never bother to think of the background stuff ... memory, performance to name two. Multithreading is not a topic to start beginners on. OTOH, how many people here can take the Nth root of a number by hand? At some point, you have to accept that automation is here stay. The first time I hit VB, after doing a large project in CICS, I was really happy. Tons of things that were a royal pain became relatively painless. Then the downside ... performance sucks compared to CICS. Database interface sucked. Debugging really sucked. Generally, I like having my code in one place, not scattered over screens, buttons, rollovers ... Things are better now. SQL/SQL Server work really well together. A proper front end in some tools can be done FAST and painlessly (e.g. Brio (Now Hyperion Intelligence, to better play with the faster database around!) I've seen a couple of multi-hundred-million dollar projects die, because of innappropriate choice of languages (VB for one, VC++ the other). But management bought into the argument New=Better. Personally, I think C++ is a horrible choice as a starter language. Then again, I started on a virtual assembler language, designed strictly for teaching. Kinda like Pascal. Re:Why not both? (Score:5, Informative) Scheme and this book [mit.edu]. Re:Why not both? (Score:5, Insightful) Basically, I'm saying that students of computer science shouldn't start off with VisualBasic. But if you're a hobbyist or a network engineering type who needs to be able to write working scripts and stuff like that, sure, VisualBasic is as good as any other ultra-high-level language, I suppose. Re:Why not both? 
(Score:5, Interesting) If you want to learn the *fundamentals* of programming, Structure and Interpretation of Computer Programs is the best book I've seen, and it deals not at all with "memory management", "efficient use of resources", or other archane crap (since it uses Scheme Not that Scheme itself is used much beyond the academic. But a lot if its ideas are showing up in modern programming platforms that are much in use (memory management, closures, etc). I wouldn't tout scheme as a "beginning programming language" but I'd tout it as a "programming language to learn the fundamentals of programs for a potential CS major". I'd tout C for neither. Re:Why not both? (Score:4, Insightful) Let alone an efficient operating system. You're not going to learn the fundamentals of programming from a language that does it all for you. Java, Scheme (apparently), Visual Basic.. There are *concepts* that you need to take away from things like manual memory management. Do you think these features magically appear in a language? Learning assembly went a long way towards my being able to understand how to optimize code in a way that makes sense, since it's going to be converted to assembly at some point. If I had never used a language lower-level than C, at best I might have understood this after years of trial-and-error. I shudder to imagine trying to learn to optimize code from using something like Java. Your basic argument seems to be that the fundamentals of computer science aren't relevant anymore because there are programming languages that abstract the user from them; this is akin to saying that you don't need to know how addition and subtraction work because we have calculators. Seems silly when you put it in terms that you're familiar with, doesn't it? And besides that, someone has to make the calculators. This is crap. (Score:5, Informative) Sorry, but this is a very wrong view of what computer science or programming really are. There are three things being mixed up here which are largely separate bodies of knowledge and any decent computer science program separates them out as such. Algorithms - This is the core of Computer Science; learning to think like a programmer and to break problems down into logical chunks is tantamount to becoming a computer scientist. With this at the core, a language should then be chosen that most facilitates this. When I started college 10 years ago we used Pascal in our lab for our algorithms courses (which notably were just about implementing the theory we covered in the course), and that at the time was a very sane choice. Java's a pretty sane choice these days. Lots of things are really, but something like C forces people trying to learn how to think in algorithms to be side tracked by all of the tedious low level junk. (For reference, I'm a low-level C systems programmer at a large software company, so this isn't some "C sucks" wankery.) Computer Organization - This is usually cross listed in electrical or computer engineering, and for good reason. This is where you figure out how hardware works. C and assembler (RISC works fine here) are appropriate in such a course. As this course naturally follows introductory algorithms courses, you can here put the theoretical constructs learned there in context. Operating Systems - Memory management doesn't belong in either of the above and certainly saying that you learn "memory management" with C is pretty silly. You learn how to malloc and free stuff. Whoopee. 
"Memory management", in any sort of interesting way, is better treated in an Operating Systems course where you can track what exactly is happening down from the programming language, into the OS and finally at the hardware side. It can be put in context of what actually happens when you call malloc and what that means. Fundamentally, you don't understand anything more about memory management from a basic C course than if somebody tells you in a Java course "When you use 'new' some memory will be allocated, and when you're done with that object there's a thing called a garbage collector that will eventually come and give that memory back." Memory management is a non-trivial topic and one that certainly goes deeper than simple allocation. So, is VB suitable for any of this? Not really. VB is kind of orthogonal. Like you said, it's fine for someone who needs to solve certain sets of tasks, but doesn't want or need to bother with really understanding deeper concepts. choose a good teacher first (Score:4, Interesting) I have found that in programming, taking a class will cut down on the time spent banging your head against the wall because there's someone to answer your questions, even if they're stupid newbie questions. Programming teachers are usually far more responsive than other teachers (systems analysis, database, e.g.) because it's more practical. If you're just learning how to program, I wouldn't worry about pointers immediately. Visual Basic is powerful in that you can write applications quickly and learn really fast. Visual Basic: Schneider [amazon.com] Java: Barker [amazon.com] C#: Barker [amazon.com] Whatever your choice, there are free IDE's for all this now from Sun [sun.com] and Microsoft [microsoft.com], and part of learning will be learning how to navigate the IDE. It's a great time to learn to program. Where I live, people can't find enough VB or C# programmers, and not enough Java programmers with a security clearance. Before you buy the hype of the next great programming language, check out the want ads on Monster or Dice and see what people need now. And remember, the highest-paid programmers (not team leaders)still write COBOL for Mainframes, because nobody else knows how to do this, and the big companies still can't get all their systems off of them. Java snobs? (Score:5, Funny) Please don't associate those Java users with us C++ (and C for the procedural of us) users. *shudders* I feel so dirty. Re:Java snobs? (Score:4, Interesting) And anyone who's a serious C++/Java programmer doesn't think that C# is for morons, it's just a bit different, but still the same thing. The overall technique to write in the language is still the same, there are just some lexical and some structural differences, but they are not as different as basic and lisp. It's just the disgusting aftertaste of "pay microsoft a lot to run this faster than mono can do" that quite many people don't like. Mono is on the right path with it's evolution, but it's still not comparable in the terms of speed to microsoft's own platform or to java on any other system. C++ is not being interpreted and jitted so we can skip it in this section. My vision is that they are all usable languages, you should just use them dependantly from the context, i prefer java because of the cost and portability for now. But if C# get's faster on *nix platforms and matures a bit, i'm sure a lot of people will use it and nobody will call them morons. 
I hate the ignorant snobs who think that people who use more portable solutions automatically define C#'rs as morons. Re:Why not both? (Score:4, Interesting) BUT. If you want an easy-to-learn, object-oriented language, I VERY highly recommend Python. It's not incredibly speedy (there are libraries out there to speed it up), since it's an interpreted language, but getting real things done very soon is extremely easy. I am a gaming and simulation major at a small university, and the first language the freshmen learn is Python, then the second quarter they learn C. Python spoils them, quite frankly. Thankfully, I learned C first, then C++, Java, and then Python, but I have to say, given a choice as to which language I would use for any given console application (I have yet to do GUI with Python), hands down it would be Python. I was the only one in my algorithms class that knew Python when I took the class. Other students were extremely jealous that I could more or less type in the pseudocode off the board and have a working program, whereas it would take them hours to get an algorithm functioning properly in C or Java. Re:Why not both? (Score:5, Interesting) The primary issue with such tools is that they tend to fail spectacularly as soon as they get outside their intended area of use. Visual Basic, for example, came along just in time to be abused for Client/Server development. Since VB wasn't designed with networking in mind, it was often faster and easier to do the code in C (and later Java). VB's life as a GUI front-end was extended thanks to the ability to link in COM+/ActiveX controls for more complex tasks, but GUIs eventually morphed into far more complex variants that the GUI Builder couldn't easily support. (Ever notice how you can spot a Visual Basic application visually?) At that point, most serious programmers realized that they were taking longer to hack VB to do what they wanted rather than just coding it from scratch in another language. So they gave up on it and moved back to C/C++/Delphi and the new-kid-on-the-block, Java. Re:Why not both? (Score:3, Interesting) Agreed. However, it's a part of history that is important for him to understand if he wants to know about the rise and fall of RAD tools and 4GL languages. VB.Net is useful for far more RAD. I'd actually argue with that, but not because VB.NET is incapable. VB has been completely overhauled to be compatible with C#. (Which is to say, that it's C# with a new faceplate.) So if you're going to be using the Re:Why not both? (Score:4, Interesting) He seemed to think that hiring Visual Basic programmers was a complete waste of money. Re:Why not both? (Score:5, Insightful) I may sound old-school, and maybe I am, but "programming" and "writing a program" seem like two different things to me. If all you want to do is write programs, then I think just about any high-level language could be appropriate, because programs can be written in any language and high-level ones hide all the ugly computer parts from the programmer. However, if you want to learn to program, then you need some serious commitment, and you need to learn (or at least understand) assembly language, and then work with C or C++ or a language that actually lets you play with bits and bytes. One of the lead computer people at one of the major oil companies told me once that all that their Visual Basic programmers do is write meaningless little programs that no one ever uses. VB is a quite high-level language, and is easy to learn (or at least fiddle with).
That led to a whole bunch of VB coders who pretend they are programmers because they can write programs. However, all they do is write line after line of VB code (and most of it is *click* *click* *click* through the UI), with no understanding whatsoever of what is really going on when the program runs. It is really nice when, with little effort, a program can be made and performs the desired operation. However, when a bug arises, those coders that don't understand the low-level stuff might not understand the source of the bug (and then sometimes blame it on someone else), and therefore can't debug their own application. Every programming language is a tool, and when a job needs to be done, one should use the best tool for the job. I suppose there are some jobs for which VB is the best tool. However, when someone claims to be a programmer and only knows VB, chances are he doesn't program, he just knows where to click to create a dialog with some buttons. If he knows VB and C++, Java, PHP, Perl, Python, etc., he's more likely to understand what he does, and will probably write a good VB program if he needs to. Know your needs and know your wants. If you want to learn how to program, don't choose a language that will hide all the ugly stuff from you, because you need to know about the ugly stuff. Worked for me. (Score:3, Funny) Worked for me. Re:still a toy (Score:3, Interesting) Here's [regdeveloper.co.uk] a reason not to choose Delphi. Another reason is that it's not free in any sense of the word. I'd recommend using Java on Eclipse (with the GUI builder if necessary). Fedora 4 includes an Eclipse built on top of a 100% free Java stack. Aside from cost, the reason I wouldn't recommend VB as a starter language is that the syntax is very different from C, C++, C#, Java and many others. And oh yeah, if you want to teach programming to young children, here's [cox.net] a little IDE I threw together
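Several comments in this thread contrast the ceremony of a Java "Hello World" with the one-liner a beginner would write in an older BASIC or in Python. For concreteness, the comparison those posters have in mind is roughly the following - a generic illustration, not code quoted from any poster:

// Java: a class, a static method signature, and a qualified call,
// all before anything appears on the screen.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}

// versus the entire program in a beginner-friendly language, e.g. Python:
//   print("Hello, world!")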
http://developers.slashdot.org/story/06/03/07/2046258/is-visual-basic-a-good-beginners-language?sdsrc=next
CC-MAIN-2014-15
refinedweb
11,001
71.95
On Wed, Jan 28, 2009 at 12:37:05AM +0100, Ivan Schreter wrote:
> Stefano Sabatini wrote:
> > On date Tuesday 2009-01-27 20:14:00 +0100, Ivan Schreter encoded:
> > > ).
> > >
> >
> > Add the field just at the end of the struct, this way you're not going
> > to change the offsets of all the other fields, this way you save ABI
> > compatibility.
> >
> Sure. But this is a struct, which is used by both libavformat and
> libavcodec. I assume that it's allocated inside of libavcodec (didn't
> check yet). If the versions of libavformat and libavcodec don't match,
> then it will most probably break.
>
> I suppose that the version of libavformat must be less than or equal to
> version of libavcodec in order to guarantee compatibility. In that case
> (and provided structures are allocated in libavcodec) this should work.
> Am I correct?

#if no major bump yet
if(avcodec_version() > 123)
#endif
    use some new: <>
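The closing reply sketches the usual way out: gate use of a newly appended struct member on the runtime library version until the next major bump. A minimal C illustration of that pattern follows; the member name new_field and the exact version threshold are placeholders, not values taken from the thread:

#include <libavcodec/avcodec.h>
#include <libavutil/avutil.h>

static void use_new_field(AVCodecContext *ctx)
{
    /* Only touch the member appended at the end of the struct when the
     * libavcodec actually linked at runtime is new enough to contain it,
     * so an older library's smaller struct is never overrun. */
    if (avcodec_version() >= AV_VERSION_INT(52, 123, 0)) {
        /* ctx->new_field = 0;   hypothetical member, guarded until a major bump */
    }
    (void)ctx;
}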
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-January/068459.html
CC-MAIN-2014-52
refinedweb
153
72.36
Hey, Scripting Guy! Working with Access Databases in Windows PowerShell

The Microsoft Scripting Guys

Databases are mysterious pieces of software. In their simplest form they are nothing more than filing cabinets for storing information. The real magic begins with the application of this stored information. Of course, the most beautifully designed database without any records is nothing more than an academic exercise. It is the data that makes the database. Whenever we hear about someone who has a huge database, people react with awe. Not due to the database, but for the data it contains. How does all that data get into a database? Manual entry of data into a database is for the birds, and it went out with key punch cards. To build a database large enough to impress your friends and provide the potential to unlock the mysteries of your network, you must automate. Today, that means Windows PowerShell, and in this article, that's what we will use to collect some data about the local computer and write it to an Office Access database called ComputerData.mdb. This database can be created by hand, or you can use the script found in the article "How Can I Create a Database with More Than One Table?" We will call our script WriteToAccessDatabase.ps1 so we will know what it does.

We'll start by creating the Check-Path function, which will be used to ensure that the database exists. To create the function, we use the Function keyword, give the function a name, and define any input variables it may need to receive. The first thing Check-Path does is use the Test-Path cmdlet to see if the directory in the database path exists. To do this, it uses the Split-Path cmdlet to break the path into a parent portion and a child portion. We only need the parent portion of the path to verify the directory's existence; Split-Path retrieves it with the -parent switch (the exact call appears in the Check-Path excerpt a little further down, and in the complete script at the end of this article). Instead of checking for the presence of the path, we use the Not operator (!) to look for its absence. If the folder does not exist, the Throw keyword is used to raise an error. Even if the folder exists, the database file might be missing. We use the ELSE keyword to introduce that alternate condition. Once again, we use the IF statement to look for the presence of the database file, and the Throw keyword to raise an error if it doesn't exist. We really don't need to use the IF…ELSE construction to verify the existence of the database. A simple call to the cmdlet Test-Path using the -path parameter would work. However, using IF…ELSE provides a higher level of feedback. We want to know if the directory exists and, if so, does the file exist? It is certainly possible the database might be missing from the folder, but it is also possible the folder itself could be missing. This gives more granular feedback and can aid in troubleshooting.

When we have ensured the database exists, we create the Get-Bios function to obtain the BIOS information from the Win32_Bios WMI class. Get-Bios is a one-line wrapper around the Get-WmiObject cmdlet (it, too, appears in the complete script at the end of this article). By encapsulating the WMI call into a function, we gain the ability to easily change the function, such as adding remote capability or accepting credentials. The modification could be made here without impacting the rest of the script. In fact, from a testing perspective, if it doesn't work, you simply comment out the function code and the remaining script continues to work.
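For reference while reading, the path-validation logic just described is the heart of the Check-Path function; the excerpt below is drawn from the complete WriteToAccessDatabase.ps1 listing shown at the end of this article:

If(!(Test-Path -path (Split-Path -path $Db -parent)))
  {
   Throw "$(Split-Path -path $Db -parent) Does not Exist"
  }
ELSE
  {
   If(!(Test-Path -Path $Db))
     {
      Throw "$db does not exist"
     }
  }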
For help in finding information related to the WMI classes, you can use the Windows PowerShell Scriptomatic shown in Figure 1. This tool lets you easily explore WMI namespaces and classes, and even creates the Windows PowerShell script to retrieve the information. Figure 1 Windows PowerShell version of the Scriptomatic utility Next, we'll create the Get-Video function to retrieve video information from the Win32_VideoController WMI class. As you can see, this function is similar to the Get-Bios function: Now we need to make a connection to the database. To do this, we use the Connect-Database function. We create two input parameters for the Connect-Database function: –DB and –Tables whose values are stored in the $Db and $Tables variables inside the function. The first thing we do inside the Connect-Database function is to assign values to a couple of variables that are used to control the way the RecordSet is opened. The Open method of the RecordSet object can accept up to five different parameters, as follows: The first is the source parameter, which evaluates to a valid command object, a SQL statement, a table name, a stored procedure call, a URL, or the name of a file or stream object containing a persistently stored Recordset. The second parameter is the ActiveConnection, a string that evaluates to a valid connection object, or a string that contains connectionstring parameters. The CursorType parameter is used to determine the type of cursor that will be used when opening the RecordSet. Allowable values for the cursor type are shown in Figure 2. The LockType parameter is used to govern the type of lock to be used when updating records, and the options parameter is used to tell the provider how to evaluate the source parameter. The allowable values for the LockType parameter are shown in Figure 3. All five of the parameters for the Open method of the RecordSet object are optional; generally, we use only the first four. After we have assigned values to use for the cursor enumeration and the lock type, we use the New-Object cmdlet to create a new ADODB.Connection object that we store in the $connection variable. We then use the Open method from the Connection object, which needs the provider name and the data source. We then call the Update-Records function and pass the $Tables variable. Here's the Connect-DataBase function: In the Update-Records function, the first thing we do is create an instance of the ADODB.RecordSet object. We use the New-Object cmdlet to do this and store the newly created RecordSet object in the $RecordSet variable. Next, we use the For Each statement to walk through our array of tables. The table names are stored in the $Tables variable and are assigned at the start of the script. Inside the ForEach loop, we first create our query, which is a rather generic Select * from $Table. The advantage of using a variable for the table name is we only need to write the code once; the table name in the query gets changed each time we loop through the array of table names. We now come to the open method of the RecordSet object. We specify the query that is stored in the $Query variable, the connection object in the $Connection variable, the $OpenStatic value, and the $LockOptimistic value to control the way the RecordSet is opened. We then use the Invoke-Expression cmdlet to execute the value of a string. We do this because we have created two functions that are designed to update the different database tables. We named the functions after the tables that they update. 
We are not allowed to call a function name when half of it is a variable, so we need to resolve the variable and then call the function. But that does not work either—at least not directly. What we want to do is to treat the function name as if it were a string and not a command. But we want to execute it like a command. To do this, we use Invoke-Expression. This cmdlet calls each of the different update functions. Inside the loop that goes through the array of table names, we close each of the RecordSet objects, then return to the next item in the array of table names, create a new query, open a new RecordSet object, and call a new function. This continues for each of the table names in the array of tables, like so: After the records are updated, we can close the connection. To do this, we use the Close method from the Connection object: The Update-Records function calls two support functions, Update-Bios and Update-Video, which are designed to update the appropriate fields in the respective database table. If you were to add additional tables to your database, you would need to add an additional Update* function to update the new tables. As a best practice, we recommend keeping the database field names the same as the WMI property names. It makes things much easier to keep track of. When writing a script to update an existing database, you may want to look at the database schema for the tables, columns, and data types contained in the fields. The database schema for the ComputerData database is shown in Figure 4. This view was generated by the script from the article "How Can I Tell Which Tables and Columns Are in a Database without Opening It?" Figure 4 The database schema for the ComputerData database In the Update-Bios function, we first post a message stating we are updating the BIOS information. We then call the Get-Bios function and store the returned WMI Win32_Bios object in the variable $BiosInfo. Now we need to add a record to the database table. To do this, we call the AddNew method from the RecordSet object. After we have a new record, we add information to each of the fields in the table. When all the fields have been updated, we call the Update method to commit the record to the table. The complete Update-Bios function is shown here: When the BIOS table has been updated, we need to update the video table. To do this, we can call the Update-Video function, which is exactly the same as the Update-Bios function. We present a message stating we are updating the video, call the Get-Video function to retrieve the video information, call the AddNew method to add a new record to the Video table, and write all of the information to the appropriate fields. When we are done, we call the Update method. A potential issue in collecting the video information is the number of video controllers on the computer. My personal computer has a daughter card and reports multiple video controllers. To handle this eventuality, we use the ForEach statement to iterate through a collection of Win32_VideoControllers. If you are not interested in the daughter card configuration information or if your video card is dual channel and reports the same information twice, you could remove the ForEach loop and select $VideoInfo[0] to index directly into the first record that is reported. 
The problem with this approach is that if the query returns a singleton, you will generate an error because you cannot index into a single record. The entry point to the script points to the database, lists the tables, and then calls the Connect-DataBase function, as shown in the listing below. After the script is run, new records are written to the ComputerData.mdb database as shown in Figure 5. The complete WriteToAccessDatabase.ps1 script can be seen in Figure 6.

Figure 5 New records added to the ComputerData.mdb database

Function Check-Path($Db)
{
 If(!(Test-Path -path (Split-Path -path $Db -parent)))
   {
    Throw "$(Split-Path -path $Db -parent) Does not Exist"
   }
 ELSE
   {
    If(!(Test-Path -Path $Db))
      {
       Throw "$db does not exist"
      }
   }
} #End Check-Path

Function Get-Bios
{
 Get-WmiObject -Class Win32_Bios
} #End Get-Bios

Function Get-Video
{
 Get-WmiObject -Class Win32_VideoController
} #End Get-Video

Function Connect-Database($Db, $Tables)
{
 $OpenStatic = 3
 $LockOptimistic = 3
 $connection = New-Object -ComObject ADODB.Connection
 $connection.Open("Provider = Microsoft.Jet.OLEDB.4.0;Data Source=$Db" )
 Update-Records($Tables)
} #End Connect-DataBase

Function Update-Records($Tables)
{
 $RecordSet = new-object -ComObject ADODB.Recordset
 ForEach($Table in $Tables)
  {
   $Query = "Select * from $Table"
   $RecordSet.Open($Query, $Connection, $OpenStatic, $LockOptimistic)
   Invoke-Expression "Update-$Table"
   $RecordSet.Close()
  }
 $connection.Close()
} #End Update-Records

# *** Entry Point to Script ***

$Db = "C:\FSO\ComputerData.mdb"
$Tables = "Bios","Video"
Check-Path -db $Db
Connect-DataBase -db $Db -tables $Tables

If you would like to learn more about working with Office Access databases from within Windows PowerShell, check out the "Hey, Scripting Guy!" archive for the week of February 20, 2009. Also, the 2009 Summer Scripting Games are coming soon! Visit scriptingguys.com for more information.

Ed Wilson, a well-known scripting expert, is the author of eight books, including Windows PowerShell Scripting Guide (2008) and Microsoft Windows PowerShell Step by Step.
https://technet.microsoft.com/en-us/library/2009.05.scriptingguys.aspx
CC-MAIN-2018-17
refinedweb
2,125
61.26
cloudant 0.5.6

Asynchronous Cloudant / CouchDB Interface

An effortless Cloudant / CouchDB interface for Python.

Install

pip install cloudant

Usage

Cloudant-Python is a wrapper around Python Requests for interacting with CouchDB or Cloudant instances. Check it out:

import cloudant

# connect to your account
# in this case,
USERNAME = 'garbados'
account = cloudant.Account(USERNAME)

# login, so we can make changes
login = account.login(USERNAME, PASSWORD)
assert login.status_code == 200

# create a database object
db = account.database('test')

# now, create the database on the server
response = db.put()
print response.json()
# {'ok': True}

HTTP requests return Response objects, right from Requests. Cloudant-Python can also make asynchronous requests by passing async=True to an object's constructor, like so:

import cloudant

# connect to your account
# in this case,
USERNAME = 'garbados'
account = cloudant.Account(USERNAME, async=True)

# login, so we can make changes
future = account.login(USERNAME, PASSWORD)

# block until we get the response body
login = future.result()
assert login.status_code == 200

Asynchronous HTTP requests return Future objects, which will await the return of the HTTP response. Call result() to get the Response object.

See the API reference for all the details you could ever want.

Philosophy

Cloudant-Python is minimal, performant, and effortless. Check it out:

Pythonisms

Cloudant and CouchDB expose REST APIs that map easily into native Python objects. As much as possible, Cloudant-Python uses native Python objects as shortcuts to the raw API, so that such convenience never obscures what's going on underneath. For example:

import cloudant

account = cloudant.Account('garbados')
db = account.database('test')
same_db = account['test']
assert db.uri == same_db.uri
# True

Cloudant-Python exposes raw interactions – HTTP requests, etc. – through special methods, so we provide syntactical sugar without obscuring the underlying API. Built-ins, such as __getitem__, act as Pythonic shortcuts to those methods. For example:

import cloudant

account = cloudant.Account('garbados')
db_name = 'test'
db = account.database(db_name)
doc = db.document('test_doc')

# create the document
resp = doc.put(params={
  '_id': 'hello_world',
  'herp': 'derp'
  })

# delete the document
rev = resp.json()['_rev']
doc.delete(rev).raise_for_status()

# but this also creates a document
db['hello_world'] = {'herp': 'derp'}

# and this deletes the database
del account[db_name]

Iterate over Indexes

Indexes, such as views and Cloudant's search indexes, act as iterators. Check it out:

import cloudant

account = cloudant.Account('garbados')
db = account.database('test')
view = db.all_docs() # returns all docs in the database

for doc in db:
  # iterates over every doc in the database
  pass
for doc in view:
  # and so does this!
  pass
for doc in view.iter(descending=True):
  # use `iter` to pass options to a view and then iterate over them
  pass

Behind the scenes, Cloudant-Python yields documents only as you consume them, so you only load into memory the documents you're using.

Special Endpoints

If CouchDB has a special endpoint for something, it's in Cloudant-Python as a special method, so any special circumstances are taken care of automagically. As a rule, any endpoint like _METHOD is in Cloudant-Python as Object.METHOD.
For example:
- -> Account('garbados').all_dbs
- -> Account().database(DB).all_docs()
- -> Account().database(DB).design(DOC).view(INDEX)

Asynchronous

If you instantiate an object with the async=True option, its HTTP request methods (such as get and post) will return Future objects, which represent an eventual response. This allows your code to keep executing while the request is off doing its business in cyberspace. To get the Response object (waiting until it arrives if necessary) use the result method, like so:

import cloudant

account = cloudant.Account(async=True)
db = account['test']
future = db.put()
response = future.result()

print db.get().result().json()
# {'db_name': 'test', ...}

As a result, any methods which must make an HTTP request return a Future object.

Option Inheritance

If you use one object to create another, the child will inherit the parents' settings. So, you can create a Database object explicitly, or use Account.database to inherit cookies and other settings from the Account object. For example:

import cloudant

account = cloudant.Account('garbados')
db = account.database('test')
doc = db.document('test_doc')

url = ''
path = '/test/test_doc'
otherdoc = cloudant.Document(url + path)

assert doc.uri == otherdoc.uri
# True

Testing

To run Cloudant-Python's tests, just do:

python setup.py test

Documentation

The API reference is automatically generated from the docstrings of each class and its methods. To install Cloudant-Python with the necessary extensions to build the docs, do this:

pip install -e cloudant[docs]

Then, in Cloudant-Python's root directory, do this:

python docs

Note: docstrings are in Markdown.

- Downloads (All Versions):
  - 81 downloads in the last day
  - 507 downloads in the last week
  - 575 downloads in the last month
- Author: Max Thayer
- License: MIT
- Categories
  - Intended Audience :: Developers
  - License :: OSI Approved :: MIT License
  - Natural Language :: English
  - Programming Language :: Python
  - Programming Language :: Python :: 2
  - Programming Language :: Python :: 2.7
  - Programming Language :: Python :: 3
  - Programming Language :: Python :: 3.2
  - Programming Language :: Python :: 3.3
- Package Index Owner: BigBlueHat, Max.Thayer
- DOAP record: cloudant-0.5.6.xml
https://pypi.python.org/pypi/cloudant/0.5.6
CC-MAIN-2015-11
refinedweb
829
51.65
x:Code Intrinsic XAML Type Allows placement of code within a XAML production. Such code can either be compiled by any XAML processor implementation that compiles XAML, or left in the XAML production for later uses such as interpretation by a runtime. The code within the x:Code XAML directive element is still interpreted within the general XML namespace and the XAML namespaces provided. Therefore, it is usually necessary to enclose the code used for x:Code inside a CDATA segment. x:Code is not permitted for all possible deployment mechanisms of a XAML production. In specific frameworks (for example WPF) the code must be compiled. In other frameworks, x:Code usage might be generally disallowed. For frameworks that permit managed x:Code content, the correct language compiler to use for x:Code content is determined by settings and targets of the containing project that is used to compile the application. WPF Usage Notes Code declared within x:Code for WPF has several notable limitations: The x:Code directive element must be an immediate child element of the root element of the XAML production. x:Class Directive must be provided on the parent root element. (nesting is allowed, but it is not typical because nested classes cannot be referenced in XAML). CLR namespaces other than the namespace that is used for the existing partial class cannot be defined or added to. References to code entities outside the partial class CLR namespace must all be fully qualified. If members being declared are overrides to the partial class overridable members, this must be specified with the language-specific override keyword. If members declared in x:Code scope conflict with members of the partial class created out of the XAML, in such a way that the compiler reports the conflict, the XAML file cannot compile or load.
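The following fragment is a hedged sketch of typical x:Code usage in a WPF page; the class name, element names and handler body are illustrative placeholders rather than code taken from this documentation:

<Page
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    x:Class="ExampleNamespace.ExamplePage">
  <StackPanel>
    <Button Name="button1" Click="Clicked">Click Me</Button>
  </StackPanel>
  <x:Code><![CDATA[
    // Compiled into the partial class generated for ExamplePage;
    // types outside that class's CLR namespace are fully qualified.
    void Clicked(object sender, System.Windows.RoutedEventArgs e)
    {
        button1.Content = "Hello";
    }
  ]]></x:Code>
</Page>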
http://msdn.microsoft.com/en-us/library/vstudio/ms750494(v=vs.100)
CC-MAIN-2013-20
refinedweb
304
51.68
I'm currently working hard on learning C++ and I would appreciate any help with this problem. I want this program to check if the rectangle's area equals 100 and then take an action based on that check. I want to do the check in a separate function and have the function return 1 if it equals 100 and 0 if it doesn't. The "rectangle" class works fine with all other programs so that is not the problem. By the way, this program isn't any serious one. I just wrote it to illustrate my problem.

Code:
#include <iostream>
#include <rectangle.h>

using namespace std;

int checkarea(rectangle *recta);

int main()
{
    rectangle rect;
    cout<<"The area is: "<<rect.GetArea()<<endl;

    if(checkarea(&rect)!=0);
    {
        return 0;
    }
    cout<<"End of main()."<<endl;
    return 0;
}

int checkarea (rectangle *recta)
{
    if (recta->GetArea()==100)                   //I didn't really know
    {                                            //if I should have used
        cout<<"recta->GetArea() is 100."<<endl;  //the dot
        return 1;                                //operator instead of ->
    }                                            //here.
    cout<<"End of checkarea()."<<endl;
    return 0;
}
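For what it's worth, the behaviour described is exactly what the stray semicolon after the if in main() produces: the line "if(checkarea(&rect)!=0);" is a complete, empty statement, so the braced block after it always runs and main() returns early no matter what checkarea() reports. (Using -> inside checkarea() is correct, since recta is a pointer; the dot operator would be wrong there.) A sketch of the corrected call site, keeping the poster's names but with made-up messages, might look like this:

    if (checkarea(&rect) != 0)   // note: no semicolon before the block
    {
        cout << "Area is 100, taking the special action." << endl;
    }
    else
    {
        cout << "Area is not 100." << endl;
    }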
http://cboard.cprogramming.com/cplusplus-programming/20711-help-needed-passing-objects-reference.html
CC-MAIN-2015-14
refinedweb
189
77.13
MissAuditore

Hi, I am trying to plot a 3D surface plot. I have a list of x,y,z values. They are all of different dimensions.

     y1   y2   y3   y4  ...
x1   z11  z12  z13  z14
x2   z21  z22  z23  ...
x3   ...
x4
x5
.
.
.
etc.

import csv
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import time
import pylab as p
import mpl_toolkits.mplot3d.axes3d as p3

fname = "C:\Users\Desktop\Temperature.csv"

depth_list = []
tempr_list = []

with open(fname) as fin:
    for ix, line in enumerate(fin):
        line = line.rstrip()
        line_list = line.split(';')
        if ix == 0:
            time_list = line_list[1:]
        else:
            depth_list.append(float(line_list[0]))
            tempr_list.append([float(t) for t in line_list[1:]])

plt.plot(depth_list, tempr_list)

fig = plt.figure()
ax = fig.gca(projection='3d')
#ax.plot_trisurf(time_list, depth_list, tempr_list, cmap=cm.jet, linewidth=0.2)
plt.show()

timelist = []
for i in range(len(time_list)):
    timelist = time.mktime(time.strptime(time_list[i], "%d.%m.%Y %H:%M:%S"))

ax.plot3D(depth_list, tempr_list, timelist)

The error I get, I'll just paste the entire thing: ValueError: setting an array element with a sequence. I tried converting the timelist to string, but it's not working. Not quite sure what to do :(

You'll have better luck posting on the matplotlib forums. Something tells me that time_list[i] is not actually a single value, but is likely an iterable (tuple or list). I'd also recommend using pandas, importing the data into your dataframe, and using a TimeIndex as the label axis. If you're working with timeseries data, it will really give you a lot more power than just basic Python.
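Following up on the reply above, one way to get an actual surface out of these lists is to turn them into 2-D arrays with numpy.meshgrid and hand those to plot_surface. The sketch below assumes time_list, depth_list and tempr_list are exactly as built in the question (one row of temperatures per depth) and that the timestamp format string from the question is correct; the axis labels are made up:

import time
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection
from matplotlib import cm

# Convert the header timestamps to seconds since the epoch
# (note: build a list, rather than overwriting the variable each iteration).
time_vals = [time.mktime(time.strptime(t, "%d.%m.%Y %H:%M:%S")) for t in time_list]

# Build 2-D coordinate grids that match the temperature matrix:
# one row per depth, one column per timestamp.
T, D = np.meshgrid(time_vals, depth_list)
Z = np.array(tempr_list)   # shape: (len(depth_list), len(time_vals))

fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(T, D, Z, cmap=cm.jet, linewidth=0)
ax.set_xlabel('time (seconds since epoch)')
ax.set_ylabel('depth')
ax.set_zlabel('temperature')
plt.show()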
https://www.daniweb.com/software-development/python/threads/441679/3d-plot-need-help-on-arrays-of-different-size
CC-MAIN-2015-11
refinedweb
302
70.7
ACPI Device Tree - Representation of ACPI Namespace

© 2013, Intel Corporation

- Author Lv Zheng <[email protected]>
- Credit Thanks for the help from Zhang Rui <[email protected]> and Rafael J. Wysocki <[email protected]>.

Abstract

The Linux ACPI subsystem converts ACPI namespace objects into a Linux device tree under the /sys/devices/LNXSYSTEM:00 and updates it upon receiving ACPI hotplug notification events. For each device object in this hierarchy there is a corresponding symbolic link in the /sys/bus/acpi/devices. This document illustrates the structure of the ACPI device tree.

ACPI Definition Blocks

The ACPI firmware sets up RSDP (Root System Description Pointer) in the system memory address space pointing to the XSDT (Extended System Description Table). The XSDT always points to the FADT (Fixed ACPI Description Table) using its first entry, the data within the FADT includes various fixed-length entries that describe fixed ACPI features of the hardware. The FADT contains a pointer to the DSDT (Differentiated System Description Table). The XSDT also contains entries pointing to possibly multiple SSDTs (Secondary System Description Table). The DSDT and SSDT data is organized in data structures called definition blocks that contain definitions of various objects, including ACPI control methods, encoded in AML (ACPI Machine Language). The data block of the DSDT along with the contents of SSDTs represents a hierarchical data structure called the ACPI namespace whose topology reflects the structure of the underlying hardware platform.

The relationships between ACPI System Definition Tables described above are illustrated in the following diagram:

+---------+    +-------+    +--------+    +------------------------+
|  RSDP   | +->| XSDT  | +->|  FADT  |    |  +-------------------+ |
+---------+ |  +-------+ |  +--------+  +-|->|       DSDT        | |
| Pointer | |  | Entry |-+  | ...... |  | |  +-------------------+ |
+---------+ |  +-------+    | X_DSDT |--+ |  | Definition Blocks | |
| Pointer |-+  | ..... |    | ...... |    |  +-------------------+ |
+---------+    +-------+    +--------+    |  +-------------------+ |
               | Entry |------------------|->|       SSDT        | |
               +- - - -+                  |  +-------------------| |
               | Entry | - - - - - - - -+ |  | Definition Blocks | |
               +- - - -+                | |  +-------------------+ |
                                        | |  +- - - - - - - - - -+ |
                                        +-|->|       SSDT        | |
                                          |  +-------------------+ |
                                          |  | Definition Blocks | |
                                          |  +- - - - - - - - - -+ |
                                          +------------------------+
                                                       |
                                                  OSPM Loading
                                                       |
                                                      \|/
                                             +----------------+
                                             | ACPI Namespace |
                                             +----------------+

Figure 1. ACPI Definition Blocks

Note: RSDP can also contain a pointer to the RSDT (Root System Description Table). Platforms provide RSDT to enable compatibility with ACPI 1.0 operating systems. The OS is expected to use XSDT, if present.

Example ACPI Namespace

All definition blocks are loaded into a single namespace. The namespace is a hierarchy of objects identified by names and paths. The following naming conventions apply to object names in the ACPI namespace:
- All names are 32 bits long.
- The first byte of a name must be one of 'A' - 'Z', '_'.
- Each of the remaining bytes of a name must be one of 'A' - 'Z', '0' - '9', '_'.
- Names starting with '_' are reserved by the ACPI specification.
- The '\' symbol represents the root of the namespace (i.e. names prepended with '\' are relative to the namespace root).
- The '^' symbol represents the parent of the current namespace node (i.e. names prepended with '^' are relative to the parent of the current namespace node).
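Before looking at the example namespace below, a small ASL fragment makes the last two conventions concrete. This is an illustrative sketch, not a definition block quoted from any real platform's firmware:

Scope (\_SB)                    // '\' : an absolute path from the namespace root
{
    Device (LID0)
    {
        Name (_HID, "PNP0C0D")
        Name (STAT, One)
        Method (_STA, 0)
        {
            Return (^STAT)      // '^' : the parent node's STAT, i.e. \_SB.LID0.STAT
        }
    }
}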
The figure below shows an example ACPI namespace: +------+ | \ | Root +------+ | | +------+ +-| _PR | Scope(_PR): the processor namespace | +------+ | | | | +------+ | +-| CPU0 | Processor(CPU0): the first processor | +------+ | | +------+ +-| _SB | Scope(_SB): the system bus namespace | +------+ | | | | +------+ | +-| LID0 | Device(LID0); the lid device | | +------+ | | | | | | +------+ | | +-| _HID | Name(_HID, "PNP0C0D"): the hardware ID | | | +------+ | | | | | | +------+ | | +-| _STA | Method(_STA): the status control method | | +------+ | | | | +------+ | +-| PCI0 | Device(PCI0); the PCI root bridge | +------+ | | | | +------+ | +-| _HID | Name(_HID, "PNP0A08"): the hardware ID | | +------+ | | | | +------+ | +-| _CID | Name(_CID, "PNP0A03"): the compatible ID | | +------+ | | | | +------+ | +-| RP03 | Scope(RP03): the PCI0 power scope | | +------+ | | | | | | +------+ | | +-| PXP3 | PowerResource(PXP3): the PCI0 power resource | | +------+ | | | | +------+ | +-| GFX0 | Device(GFX0): the graphics adapter | +------+ | | | | +------+ | +-| _ADR | Name(_ADR, 0x00020000): the PCI bus address | | +------+ | | | | +------+ | +-| DD01 | Device(DD01): the LCD output device | +------+ | | | | +------+ | +-| _BCL | Method(_BCL): the backlight control method | +------+ | | +------+ +-| _TZ | Scope(_TZ): the thermal zone namespace | +------+ | | | | +------+ | +-| FN00 | PowerResource(FN00): the FAN0 power resource | | +------+ | | | | +------+ | +-| FAN0 | Device(FAN0): the FAN0 cooling device | | +------+ | | | | | | +------+ | | +-| _HID | Name(_HID, "PNP0A0B"): the hardware ID | | +------+ | | | | +------+ | +-| TZ00 | ThermalZone(TZ00); the FAN thermal zone | +------+ | | +------+ +-| _GPE | Scope(_GPE): the GPE namespace +------+ Figure 2. Example ACPI Namespace Linux ACPI Device Objects¶ The Linux kernel's core ACPI subsystem creates struct acpi_device objects for ACPI namespace objects representing devices, power resources processors, thermal zones. Those objects are exported to user space via sysfs as directories in the subtree under /sys/devices/LNXSYSTM:00. The format of their names is <bus_id:instance>, where 'bus_id' refers to the ACPI namespace representation of the given object and 'instance' is used for distinguishing different object of the same 'bus_id' (it is two-digit decimal representation of an unsigned integer). The value of 'bus_id' depends on the type of the object whose name it is part of as listed in the table below: +---+-----------------+-------+----------+ | | Object/Feature | Table | bus_id | +---+-----------------+-------+----------+ | N | Root | xSDT | LNXSYSTM | +---+-----------------+-------+----------+ | N | Device | xSDT | _HID | +---+-----------------+-------+----------+ | N | Processor | xSDT | LNXCPU | +---+-----------------+-------+----------+ | N | ThermalZone | xSDT | LNXTHERM | +---+-----------------+-------+----------+ | N | PowerResource | xSDT | LNXPOWER | +---+-----------------+-------+----------+ | N | Other Devices | xSDT | device | +---+-----------------+-------+----------+ | F | PWR_BUTTON | FADT | LNXPWRBN | +---+-----------------+-------+----------+ | F | SLP_BUTTON | FADT | LNXSLPBN | +---+-----------------+-------+----------+ | M | Video Extension | xSDT | LNXVIDEO | +---+-----------------+-------+----------+ | M | ATA Controller | xSDT | LNXIOBAY | +---+-----------------+-------+----------+ | M | Docking Station | xSDT | LNXDOCK | +---+-----------------+-------+----------+ Table 1. 
ACPI Namespace Objects Mapping

The following rules apply when creating struct acpi_device objects on the basis of the contents of ACPI System Description Tables (as indicated by the letter in the first column and the notation in the second column of the table above):

N:
    The object's source is an ACPI namespace node (as indicated by the named object's type in the second column). In that case the object's directory in sysfs will contain the 'path' attribute whose value is the full path to the node from the namespace root.

F:
    The struct acpi_device object is created for a fixed hardware feature (as indicated by the fixed feature flag's name in the second column), so its sysfs directory will not contain the 'path' attribute.

M:
    The struct acpi_device object is created for an ACPI namespace node with specific control methods (as indicated by the ACPI defined device's type in the second column). The 'path' attribute containing its namespace path will be present in its sysfs directory. For example, if the _BCL method is present for an ACPI namespace node, a struct acpi_device object with LNXVIDEO 'bus_id' will be created for it.

The third column of the above table indicates which ACPI System Description Tables contain information used for the creation of the struct acpi_device objects represented by the given row (xSDT means DSDT or SSDT).

The fourth column of the above table indicates the 'bus_id' generation rule of the struct acpi_device object:

_HID:
    _HID in the last column of the table means that the object's bus_id is derived from the _HID/_CID identification objects present under the corresponding ACPI namespace node. The object's sysfs directory will then contain the 'hid' and 'modalias' attributes that can be used to retrieve the _HID and _CIDs of that object.

LNXxxxxx:
    The 'modalias' attribute is also present for struct acpi_device objects having a bus_id of the "LNXxxxxx" form (pseudo devices), in which case it contains the bus_id string itself.

device:
    'device' in the last column of the table indicates that the object's bus_id cannot be determined from _HID/_CID of the corresponding ACPI namespace node, although that object represents a device (for example, it may be a PCI device with _ADR defined and without _HID or _CID). In that case the string 'device' will be used as the object's bus_id.

Linux ACPI Physical Device Glue

ACPI device (i.e. struct acpi_device) objects may be linked to other objects in the Linux device hierarchy that represent "physical" devices (for example, devices on the PCI bus). If that happens, it means that the ACPI device object is a "companion" of a device otherwise represented in a different way and is used (1) to provide configuration information on that device which cannot be obtained by other means and (2) to do specific things to the device with the help of its ACPI control methods. One ACPI device object may be linked this way to multiple "physical" devices.

If an ACPI device object is linked to a "physical" device, its sysfs directory contains the "physical_node" symbolic link to the sysfs directory of the target device object. In turn, the target device's sysfs directory will then contain the "firmware_node" symbolic link to the sysfs directory of the companion ACPI device object.

The linking mechanism relies on device identification provided by the ACPI namespace. For example, if there's an ACPI namespace object representing a PCI device (i.e.
a device object under an ACPI namespace object representing a PCI bridge) whose _ADR returns 0x00020000 and the bus number of the parent PCI bridge is 0, the sysfs directory representing the struct acpi_device object created for that ACPI namespace object will contain the 'physical_node' symbolic link to the /sys/devices/pci0000:00/0000:00:02:0/ sysfs directory of the corresponding PCI device. The linking mechanism is generally bus-specific. The core of its implementation is located in the drivers/acpi/glue.c file, but there are complementary parts depending on the bus types in question located elsewhere. For example, the PCI-specific part of it is located in drivers/pci/pci-acpi.c. Example Linux ACPI Device Tree¶ The sysfs hierarchy of struct acpi_device objects corresponding to the example ACPI namespace illustrated in Figure 2 with the addition of fixed PWR_BUTTON/SLP_BUTTON devices is shown below: +--------------+---+-----------------+ | LNXSYSTEM:00 | \ | acpi:LNXSYSTEM: | +--------------+---+-----------------+ | | +-------------+-----+----------------+ +-| LNXPWRBN:00 | N/A | acpi:LNXPWRBN: | | +-------------+-----+----------------+ | | +-------------+-----+----------------+ +-| LNXSLPBN:00 | N/A | acpi:LNXSLPBN: | | +-------------+-----+----------------+ | | +-----------+------------+--------------+ +-| LNXCPU:00 | \_PR_.CPU0 | acpi:LNXCPU: | | +-----------+------------+--------------+ | | +-------------+-------+----------------+ +-| LNXSYBUS:00 | \_SB_ | acpi:LNXSYBUS: | | +-------------+-------+----------------+ | | | | +- - - - - - - +- - - - - - +- - - - - - - -+ | +-| PNP0C0D:00 | \_SB_.LID0 | acpi:PNP0C0D: | | | +- - - - - - - +- - - - - - +- - - - - - - -+ | | | | +------------+------------+-----------------------+ | +-| PNP0A08:00 | \_SB_.PCI0 | acpi:PNP0A08:PNP0A03: | | +------------+------------+-----------------------+ | | | | +-----------+-----------------+-----+ | +-| device:00 | \_SB_.PCI0.RP03 | N/A | | | +-----------+-----------------+-----+ | | | | | | +-------------+----------------------+----------------+ | | +-| LNXPOWER:00 | \_SB_.PCI0.RP03.PXP3 | acpi:LNXPOWER: | | | +-------------+----------------------+----------------+ | | | | +-------------+-----------------+----------------+ | +-| LNXVIDEO:00 | \_SB_.PCI0.GFX0 | acpi:LNXVIDEO: | | +-------------+-----------------+----------------+ | | | | +-----------+-----------------+-----+ | +-| device:01 | \_SB_.PCI0.DD01 | N/A | | +-----------+-----------------+-----+ | | +-------------+-------+----------------+ +-| LNXSYBUS:01 | \_TZ_ | acpi:LNXSYBUS: | +-------------+-------+----------------+ | | +-------------+------------+----------------+ +-| LNXPOWER:0a | \_TZ_.FN00 | acpi:LNXPOWER: | | +-------------+------------+----------------+ | | +------------+------------+---------------+ +-| PNP0C0B:00 | \_TZ_.FAN0 | acpi:PNP0C0B: | | +------------+------------+---------------+ | | +-------------+------------+----------------+ +-| LNXTHERM:00 | \_TZ_.TZ00 | acpi:LNXTHERM: | +-------------+------------+----------------+ Figure 3. Example Linux ACPI Device Tree 注釈 Each node is represented as "object/path/modalias", where: 'object' is the name of the object's directory in sysfs. 'path' is the ACPI namespace path of the corresponding ACPI namespace object, as returned by the object's 'path' sysfs attribute. 'modalias' is the value of the object's 'modalias' sysfs attribute (as described earlier in this document). 注釈 N/A indicates the device object does not have the 'path' or the 'modalias' attribute.
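To poke at the resulting device tree on a running system, a few lines of Python over sysfs are enough. This is only an illustration, not part of the kernel documentation: it walks /sys/bus/acpi/devices and prints the 'path' and 'modalias' attributes plus the 'physical_node' link where they exist:

    import os

    ACPI_DEVICES = "/sys/bus/acpi/devices"

    def read_attr(dev_dir, attr):
        # Not every acpi_device exposes every attribute (see the N/F/M rules above).
        try:
            with open(os.path.join(dev_dir, attr)) as f:
                return f.read().strip()
        except OSError:
            return "N/A"

    for dev in sorted(os.listdir(ACPI_DEVICES)):
        dev_dir = os.path.join(ACPI_DEVICES, dev)
        path = read_attr(dev_dir, "path")          # ACPI namespace path
        modalias = read_attr(dev_dir, "modalias")
        phys = os.path.join(dev_dir, "physical_node")
        target = os.path.realpath(phys) if os.path.islink(phys) else "none"
        print(f"{dev:<16} {path:<24} {modalias:<32} {target}")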
https://doc.kusakata.com/firmware-guide/acpi/namespace.html
CC-MAIN-2022-33
refinedweb
1,666
50.46
Support meetings/20080210 From OLPC Sunday, Feb 10 2008, 4-6PM EST Attendees Community Support Volunteers confirmed: (please add your name if we missed you!) - John Webster (Arizona) - Alan Claver (Pennsylvania) - Seth Woodworth (New York / Oregon) - Aaron Konstam (Texas) - Kate Davis (Middletown, CT) - Mel Chua (New Jersey, NYC) (only on irc) - ixo (Bellingham, WA) - Ian Daniher (Ohio) - Sebastian Silva (Lima, Peru) - FFM (Virginia) - Sandy Culver (Massachusetts) - Greg Babbing (New Hampshire) - Sebastian (Peru) - Others ? 1CC - Adam Holt (1CC, Support Gangster in Chief) - Henry Hardy (1CC) - Kim Quirk (1CC) - SJ Klein (1CC) - Greg Babbin (1CC) - Arjun Sarwal (1CC) - Andriani Ferti (1CC) Not confirmed: - Katherine Elliott (Massachusetts) TOPIC: Guest Speaker Henry Hardy, new employee from Ohio/Michigan - "The History of the Net", his 1993 Masters thesis: (which is mentioned in !) - What he (OLPC's new SysAdmin) intends to bring to OLPC - Why OLPC genuinely need a Historian. How _you_ might help. - Aaron Konstam mentions SAGE. TOPIC: Billing/Shipping True Love. - Kim and Adam summarized where we're at, and how we're approaching the March finishing line. TOPIC: Paper Letters. - Why our team need only answer those paper letters that include email addresses. TOPIC: OLPC Re-org - Reorg into 4 working groups - 1 leader for each group, still in flux, leaders being sought. - (rest of notes lost) TOPIC: pushing out update.1 (build 656).... - Urgent need, fix for battery issues... - likely resolve alot of reported battery issues out there. - Best before 30 day warranty expires. - Warning: will uninstall non-Sugar Activity, addins, etc..... " - What's the best method for letting users know ? - ffm/Phil have an install flash script which may help. - Kate mentions, a new release of olpc-update, may resolve 'non-standard' addons, and search path. - [Action item] ffm will update wiki flash page, to install flash 'best method'. - Kim confirmed, auto updates , will create alt image.... for revert.... Boot-O image available - If you want to participate, with auto-update, - update streams info... TOPIC: Peru implementation - Sebastian talking about Peru, Walter Bender will be down there next week... - support-gang is a good resource for issue triage / supplementary support... - Kim suggestion: gather together grassroots people from surrounding area, - During the visit, and bring people together... to help create local support structure. - OLPC can't provide resources for coordinating Peru grassroots or people. - Encourage people in the area to come together and create something. - Possible forum could be setup by volunteers in US, for Peru... - Idea: setup a separate email address alongside of 'help@' (ayuda@laptop ?), for keeping Spanish / South American FAQ/RFTM straight and not-confusing. - Thoughts about wiki and language integration - Is separate Wiki needed for Peru (or language?) -- most people agree - NO. (only if demand requires it) - Continue with current translation efforts. - Suggestion: Sebastian (Peru) and Xavi (Argentina) coordinate together some espanol resources for both countries. - wiki/language discussion taken off line for later discussion between ffm, Kim, etc.. TOPIC: RT Feeping Creaturism - Need review of automated reply template that SJ edited and Adam reverted - Suggestions: - Adding more info on first page, like tracking #, etc . . . - Try avoiding promising too much to customers, already promised too much - More than one category on ticket.." 
- Work on quality, avoid speeding through as many as you can... - Keep an eye on RMA tickets, make sure less than 30 days. TOPIC: Speak Their Minds - What was YOUR toughest challenge this week? - Mchua: Speaking my mind: Toughest challenge == lack of transcript in this call, hard for me to get information otherwise ;) - Holt: Last week's 1-time experiment with live transcript failed dramatically. Despite extensive precautions on my part and others'. - SJ: toughest challenge: finding out what /everyone else/ was up to - SJ: suggestion.. encourage others to do Week in Review..... nice record... of what's happening in small bits.. - looks great and self acknowledgement of what you've been working on. - Still clarifing exact format or location, but under your own User namespace page is good start. - Encourage others for clear communication on what's going on and where. - ixo: would be nice to see some stats/numbers on volunteers active, and how many hours donated to OLPC - Idea: Each volunteer login centrally, track hours in week/month donated, descriptions generic, details optional - Good method to also be able to ack volunteer work and stats. TOPIC: Donor receipts for Laptops. - Can call Donor services for official receipt. TOPIC: Weekly Zine launch: - ixo: Current update, is that first issue is almost ready to go. Last articles, put in today. - 'public soft release' monday morning in english, with translations soon behind it within 24/36 hours. - Seth / isforinsects, and others have pulled together a great collection of items, and fleshing out quite nicely. - ffm: olpczine.org is not quite ready to go, still waiting on a few configuration settings. - MChua: Some futher ideas on structure for future issues. TOPIC: Documentation update - mchua: Template:Grassroots group and Template:Project have been made, pls use - Chris Carrick from Olin has a team that's looking at OLPC wiki usability for their class... - ..crazy-chris and I and possibly others are doing a hackathon session on organizing pilots-related wiki stuff this thursday, - ..possibly trying to start a "how to run a pilot" handbook and finding existing pilots to help fill it in TOPIC: Funds Development. - Stay posted. Thanks for all your great ideas this week! TOPIC: Vesna/Holt/SJ on "Social Cartography" (skimmed quickly over) - how 60+ of us here can each get to know each other Much Better? - How can we improve ? - Has everyone created a teamwiki account? Start here if not: Minutes THANKS to - ixo (last minute secretary, typing one handed while holding the phone w/oher) - ffm (briefly filled in, while ixo took a break) Briefly edited for clearity by Holt.
http://wiki.laptop.org/go/Support_meetings/20080210
CC-MAIN-2015-06
refinedweb
957
57.57
Python allows you to dynamically compile things at the module level. That's why the compile() and exec() built-ins accept source code and dictionaries of globals and locals, but don't provide a direct way to call a fragment of dynamic (read: textual) source code with arguments ("xyz(arg1, arg2)"). You also can't directly invoke compiled code as a generator (where a function that uses "yield" is interpreted as a generator rather than just a function).

However, there's a loophole, and it's very elegant and consistent with Python. You simply have to wrap your code in a function definition, and then pull the function from the local scope. You can then call it as desired:

    import hashlib
    import random

    def _compile(arg_names, code):
        name = "(lambda compile)"
        # The generated name needs to be a valid identifier: start with a
        # letter and drop the '.' from the float representation.
        id_ = 'a' + str(random.random()).replace('.', '')
        code = "def " + id_ + "(" + ', '.join(arg_names) + "):\n" + \
               '\n'.join((' ' + line) for line in code.replace('\r', '').split('\n')) + '\n'
        c = compile(code, name, 'exec')
        locals_ = {}
        exec(c, globals(), locals_)
        return locals_[id_]

    code = """
    return a * b * c
    """

    c = _compile(['a', 'b', 'c'], code)
    print(c(1, 2, 3))
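The same trick also covers the generator case mentioned above. This follow-up is my own illustration rather than part of the original post, and it reuses the _compile() helper defined there:

    gen_code = """
    for i in range(n):
        yield i * i
    """

    # Because the text is wrapped in a real "def", the "yield" makes the
    # compiled object a genuine generator function.
    squares = _compile(['n'], gen_code)
    print(list(squares(5)))   # [0, 1, 4, 9, 16]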
https://dustinoprea.com/2014/08/10/dynamically-compiling-and-implementing-a-function/
CC-MAIN-2017-26
refinedweb
185
56.45
Version-Based Optimistic Concurrency Control in JPA/Hibernate

This article is an introduction to version-based optimistic concurrency control in Hibernate and JPA. The concept is fairly old and much has been written on it, but anyway I have seen it reinvented, misunderstood and misused. I'm writing it just to spread knowledge and hopefully spark interest in the subject of concurrency control and locking.

Use Cases

Let's say we have a system used by multiple users, where each entity can be modified by more than one user. We want to prevent situations where two persons load some information, make some decision based on what they see, and update the state at the same time. We don't want to lose changes made by the user who first clicked "save" by overwriting them in the following transaction.

It can also happen in a server environment – multiple transactions can modify a shared entity, and we want to prevent scenarios like this:

- Transaction 1 loads data
- Transaction 2 updates that data and commits
- Using the state loaded in step 1 (which is no longer current), transaction 1 performs some calculations and updates the state

In some ways it's comparable to non-repeatable reads.

Solution: Versioning

Hibernate and JPA implement the concept of version-based concurrency control for this reason. Here's how it works.

You can mark a simple property with @Version or <version> (numeric or timestamp). It's going to be a special column in the database. Our mapping can look like:

    @Entity
    @Table(name = "orders")
    public class Order {
        @Id
        private long id;

        @Version
        private int version;

        private String description;

        private String status;

        // ... mutators
    }

When such an entity is persisted, the version property is set to a starting value. Whenever it's updated, Hibernate executes a query like:

    update orders
    set description=?, status=?, version=?
    where id=? and version=?

Note that in the last line, the WHERE clause now includes version. This value is always set to the "old" value, so that it will only update a row if it has the expected version.

Let's say two users load an order at version 1 and take a while looking at it in the GUI. Anne decides to approve the order and executes that action. Status is updated in the database, everything works as expected. Versions passed to the update statement look like:

    update orders
    set description=?, status=?, version=2
    where id=? and version=1

As you can see, while persisting that update the persistence layer increments the version counter to 2.

In her GUI, Betty still has the old version (number 1). When she decides to perform an update on the order, the statement looks like:

    update orders
    set description=?, status=?, version=2
    where id=? and version=1

At this point, after Anne's update, the row's version in the database is 2. So this second update affects 0 rows (nothing matches the WHERE clause). Hibernate detects that and throws an org.hibernate.StaleObjectStateException (wrapped in a javax.persistence.OptimisticLockException). As a result, the second user cannot perform any updates unless he refreshes the view. For proper user experience we need some clean exception handling, but I'll leave that out.

Configuration

There is little to customize here. The @Version property can be a number or a timestamp. A number is artificial, but typically occupies fewer bytes in memory and in the database. A timestamp is larger, but it is always updated to the current timestamp, so you can actually use it to determine when the entity was updated.

Why?

So why would we use it?
- It provides a convenient and automated way to maintain consistency in scenarios like those described above. It means that each action can only be performed once, and it guarantees that the user or server process saw up-to-date state while making a business decision. - It takes very little work to set up. - Thanks to its optimistic nature, it’s fast. There is no locking anywhere, only one more field added to the same queries. - In a way it guarantees repeatable reads even with read committed transaction isolation level. It would end with an exception, but at least it’s not possible to create inconsistent state. - It works well with very long conversations, including those that span multiple transactions. - It’s perfectly consistent in all possible scenarios and race conditions on ACID databases. The updates must be sequential, an update involves a row lock and the “second” one will always affect 0 rows and fail. Demo To demonstrate this, I created a very simple web application. It wires together Spring and Hibernate (behind JPA API), but it would work in other settings as well: Pure Hibernate (no JPA), JPA with different implementation, non-webapp, non-Spring etc. The application keeps one Order with schema similar to above and shows it in a web form where you can update description and status. To experiment with concurrency control, open the page in two tabs, do different modifications and save. Try the same thing without @Version. It uses an embedded database, so it needs minimal setup (only a web container) and only takes a restart to start with a fresh database. It’s pretty simplistic – accesses EntityManager in a @Transactional @Controller and backs the form directly with JPA-mapped entity. May not be the best way to do things for less trivial projects, but at least it gathers all code in one place and is very easy to grasp. Full source code as Eclipse project can be found at my GitHub repository.
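For readers outside the Java ecosystem, the pattern itself fits in a few lines of any language. The sketch below is not how Hibernate is implemented; it just expresses the same version-check idea with Python and sqlite3:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, version INTEGER)")
    db.execute("INSERT INTO orders VALUES (1, 'NEW', 1)")

    def update_status(conn, order_id, new_status, expected_version):
        cur = conn.execute(
            "UPDATE orders SET status = ?, version = ? WHERE id = ? AND version = ?",
            (new_status, expected_version + 1, order_id, expected_version))
        if cur.rowcount == 0:
            # This is the point where Hibernate throws StaleObjectStateException.
            raise RuntimeError("stale data: the row was updated by someone else")

    update_status(db, 1, "APPROVED", 1)   # Anne: succeeds, version becomes 2
    update_status(db, 1, "CANCELLED", 1)  # Betty: matches 0 rows, raises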
https://dzone.com/articles/version-based-optimistic
CC-MAIN-2015-40
refinedweb
914
55.24
Understanding XML in SQL Server This topic outlines the reasons why you should use XML in SQL Server. It also provides guidelines for choosing between native XML storage and XML view technology, and gives data modeling suggestions.: Your data is sparse or you do not know the structure of the data, or the structure of your data may change significantly in the future. Your data represents containment hierarchy, instead of references among entities, and may be recursive. Order is inherent in your data. You want to query into the data or update parts of it, based on its structure. If none of these conditions is met, you should use the relational data model. For example, if your data is in XML format but your application just uses the database to store and retrieve the data, an [n]varchar(max) column is all you require. Storing the data in an XML column has additional benefits. This includes having the engine determine that the data is well formed or valid, and also includes support for fine-grained query and updates into the XML data. Following are some of the reasons to use native XML features in SQL Server instead of managing your XML data in the file system: You want to share, query, and modify your XML data in an efficient and transacted way. Fine-grained data access is important to your application. For example, you may want to extract some of the sections within an XML document, or you may want to insert a new section without replacing your whole document. You have relational data and XML data and you want interoperability between both relational and XML data within your application. You need language support for query and data modification for cross-domain applications. You want the server to guarantee that the data is well formed and also optionally validate your data according to XML schemas. You want indexing of XML data for efficient query processing and good scalability, and the use of a first-rate query optimizer. You want SOAP, ADO.NET, and OLE DB access to XML data. You want to use administrative functionality of the database server for managing your XML data. For example, this would be backup, recovery, and replication. If none of these conditions is satisfied, it may be better to store your data as a non-XML, large object type, such as [n]varchar(max) or varbinary(max). The storage options for XML in SQL Server include the following: Native storage as xml data type The data is stored in an internal representation that preserves the XML content of the data. This internal representation includes information about the containment hierarchy, document order, and element and attribute values. Specifically, the InfoSet content of the XML data is preserved. For more information about InfoSet, visit. The InfoSet content may not be an identical copy of the text XML, because the following information is not retained: insignificant white spaces, order of attributes, namespace prefixes, and XML declaration. For typed xml data type, an xml data type bound to XML schemas, the post-schema validation InfoSet (PSVI) adds type information to the InfoSet and is encoded in the internal representation. This improves parsing speed significantly. For more information, see the W3C XML Schema specifications at and. Mapping between XML and relational storage By using an annotated schema (AXSD), the XML is decomposed into columns in one or more tables. This preserves fidelity of the data at the relational level. 
As a result, the hierarchical structure is preserved although order among elements is ignored. The schema cannot be recursive. Large object storage, [n]varchar(max) and varbinary(max) An identical copy of the data is stored. This is useful for special-purpose applications such as legal documents. Most applications do not require an exact copy and are satisfied with the XML content (InfoSet fidelity). Generally, you may have to use a combination of these approaches. For example, you may want to store your XML data in an xml data type column and promote properties from it into relational columns. Or, you may want to use mapping technology to store nonrecursive parts in non-XML columns and only the recursive parts in xml data type columns. Choice of XML Technology The choice of XML technology, native XML versus XML view, generally depends upon the following factors: Storage options Your XML data may be more appropriate for large object storage (for example, a product manual), or more amenable to storage in relational columns (for example, a line item converted to XML). Each storage option preserves document fidelity to a different extent. Query capabilities You may find one storage option more appropriate than another, based on the nature of your queries and on the extent to which you query your XML data. Fine-grained query of your XML data, for example, predicate evaluation on XML nodes, is supported to varying degrees in the two storage options. Indexing XML data You may want to index the XML data to speed up XML query performance. Indexing options vary with the storage options; you have to make the appropriate choice to optimize your workload. Data modification capabilities Some workloads involve fine-grained modification of XML data. For example, this can include adding a new section within a document, while other workloads, such as Web content, do not. Data modification language support may be important for your application. Schema support Your XML data may be described by a schema that may or may not be an XML schema document. The support for schema-bound XML depends upon the XML technology. Different choices also have different performance characteristics. Native XML Storage You can store your XML data in an xml data type column at the server. This is an appropriate choice if the following applies: You want a straightforward way to store your XML data at the server and, at the same time, preserve document order and document structure. You may or may not have a schema for your XML data. You want to query and modify your XML data. You want to index the XML data for faster query processing. Your application needs system catalog views to administer your XML data and XML schemas. Native XML storage is useful when you have XML documents that have a range of structures, or you have XML documents that conform to different or complex schemas that are too hard to map to relational structures. Example: Modeling XML Data Using the xml Data Type Consider a product manual in XML format that is made up of a separate chapter for each topic and that has multiple sections within each chapter. A section can contain subsections. As a result, <section> is a recursive element. Product manuals contain a large amount of mixed content, diagrams, and technical material; the data is semi-structured. Users may want to perform a contextual search for topics of interest such as searching for the section on "clustered index" within the chapter on "indexing", and query technical quantities. 
An appropriate storage model for your XML documents is an xml data type column. This preserves the InfoSet content of your XML data. Indexing the XML column benefits query performance. Example: Retaining Exact Copies of XML Data For illustration, assume that government regulations require you to retain exact textual copies of your XML documents. For example, these could include signed documents, legal documents, or stock transaction orders. You may want to store your documents in a [n]varchar(max) column. For querying, convert the data to xml data type at run time and execute Xquery on it. The run-time conversion may be costly, especially when the document is large. If you query frequently,. XML View Technology By defining a mapping between your XML schemas and the tables in a database, you create an "XML view" of your persistent data. XML bulk load can be used to populate the underlying tables by using the XML view. You can query the XML view by using XPath version 1.0; the query is translated to SQL queries on the tables. Similarly, updates are also propagated to those tables. This technology is useful in the following situations: You want to have an XML-centric programming model using XML views over your existing relational data. You have a schema (XSD, XDR) for your XML data that an external partner may have provided. Order is not important in your data, or your query table data is not recursive, or the maximal recursion depth is known in advance. You want to query and modify the data through the XML view by using XPath version 1.0. You want to bulk load XML data and decompose them into the underlying tables by using the XML view. Examples include relational data exposed as XML for data exchange and Web services, and XML data with fixed schema. For more information, see the MSDN Online Library. Example: Modeling Data Using an Annotated XML Schema (AXSD) For illustration, assume that you have existing relational data, such as customers, orders, and line items, that you want to handle as XML. Define an XML view by using AXSD over the relational data. The XML view allows you to bulk load XML data into your tables and query and update the relational data by using the XML view. This model is useful if you have to exchange data that contains XML markup with other applications while your SQL applications work uninterrupted. Hybrid Model Frequently, a combination of relational and xml data type columns is appropriate for data modeling. Some of the values from your XML data can be stored in relational columns, and the rest, or the whole XML value stored in an XML column. This may yield better performance in that you have more control over the indexes created on the relational columns and locking characteristics. The values to store in relational columns depend on your workload. For example, if you retrieve all the XML values based on the path expression, /Customer/@CustId, promoting the value of the CustId attribute into a relational column and indexing it may yield faster query performance. On the other hand, if your XML data is extensively and nonredundantly decomposed into relational columns, the re-assembly cost may be significant. For highly structured XML data, for example, the content of a table has been converted into XML; you can map all values to relational columns, and possibly use XML view technology. The granularity of the XML data stored in an XML column is very important for locking and, to a lesser degree, it is also important for updates. 
SQL Server uses the same locking mechanism for both XML and non-XML data. Therefore, row-level locking causes all XML instances in the row to be locked. When the granularity is large, locking large XML instances for updates causes throughput to decline in a multiuser scenario. On the other hand, severe decomposition loses object encapsulation and increases reassembly cost. A balance between data modeling requirements and locking and update characteristics is important for good design. However, in SQL Server, the size of actual stored XML instances is not as critical. For example, updates to an XML instance are performed by using new support for partial binary large object (BLOB) and partial index updates in which the existing stored XML instance is compared to its updated version. Partial binary large object (BLOB) update performs a differential comparison between the two XML instances and updates only the differences. Partial index updates modify only those rows that must be changed in the XML index.
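As a rough illustration of the mapping idea (this is not the AXSD mechanism itself, and the element names are invented), shredding order XML into relational rows can be prototyped with the Python standard library:

    import sqlite3
    import xml.etree.ElementTree as ET

    doc = """<Order CustId="C42">
      <Item Sku="A1" Qty="2"/>
      <Item Sku="B7" Qty="1"/>
    </Order>"""

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE line_items (cust_id TEXT, sku TEXT, qty INTEGER)")

    root = ET.fromstring(doc)
    cust = root.get("CustId")              # promoted property, as in the hybrid model
    for item in root.findall("Item"):
        db.execute("INSERT INTO line_items VALUES (?, ?, ?)",
                   (cust, item.get("Sku"), int(item.get("Qty"))))

    print(db.execute("SELECT * FROM line_items").fetchall())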
http://technet.microsoft.com/en-us/library/bb522493(v=sql.100).aspx
CC-MAIN-2014-35
refinedweb
1,932
53.31
#include <wx/splash.h> wxSplashScreen shows a window with a thin border, displaying a bitmap describing your application. Show it in application initialisation, and then either explicitly destroy it or let it time-out. Example usage: Construct the splash screen passing a bitmap, a style, a timeout, a window id, optional position and size, and a window style. splashStyle is a bitlist of some of the following: milliseconds is the timeout in milliseconds. Destroys the splash screen. Returns the splash style (see wxSplashScreen() for details). Returns the window used to display the bitmap. Returns the timeout in milliseconds. Reimplement this event handler if you want to set an application variable on window destruction, for example.
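A rough equivalent in wxPython (Phoenix) looks like the sketch below; the bitmap file name is a placeholder and this is an illustration rather than the canonical C++ sample:

    import wx
    import wx.adv

    app = wx.App()
    bitmap = wx.Bitmap("splash.png", wx.BITMAP_TYPE_PNG)   # placeholder image
    wx.adv.SplashScreen(
        bitmap,
        wx.adv.SPLASH_CENTRE_ON_SCREEN | wx.adv.SPLASH_TIMEOUT,
        6000,    # timeout in milliseconds
        None)    # no parent window
    app.MainLoop()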
https://docs.wxwidgets.org/3.1.2/classwx_splash_screen.html
CC-MAIN-2019-09
refinedweb
115
50.43
go to bug id or search bugs for Description: ------------ This is basically the same as PHP bug #48129. Yes, I have read it "won't fix" My opinion on this is "won't fix" is not an option because it _is_ a bug and not fixing bugs does not work: 1) It is common practice in OO languages (including PHP) to give classes case sensitive names. Even the classes of PHP itself are case sensitive and usually start with capital letters (eg. DateTime, Exception, ...). PHP related projects like PEAR, Zend Framework etc. do the same. 2) In order to get a proper 1:1 mapping from class name to the file containing the PHP class definition, projects like PEAR or Zend Framework use the case sensitive class name, eg. System.php contains the class System. Again, this is common practice in other OO languages like C++. 3) What happens when the file system is case sensitive? See example: the script fails because the PEAR class System will be looked for in a file named system.php which does not exist because it is called System.php The workaround is using SPL_autoload_suxx instead. But look at the code: there are several compatibility issues (include_path separator : vs. ;), it does work but is not at all convenient. 4) What would happen if spl_autoload() wouldn't lowercase the class name when looking for a class definition? a) Filesystem is case sensitive It would work! The spl_autoload() would look for a file called System.php which exists, thus will be require'd b) Filesystem is not case sensitive It would still work! The spl_autoload() would look for a file called System.php Because the file system is case insensitive, it would use either System.php or system.php (or sYSTEM.PHP - you got the point?). Because on case insentive filesystems both files "System.php" and "system.php" are not allowed in the same directory, there is _no_ issue with backward compatibility. The only circumstances where it would break backwards compatibility would be on filesystem which is case insensitive but does not allow capital letters. Any real live examples of such a file system? Conclusion: The current specification of spl_autoload() with implicit lowercasing is excactly wrong. There has been, is and never will be any gain in this 'feature' since the class name itself inside PHP is case sensitive. Reproduce code: --------------- <?php /** * Demonstration of the current incompatibility * Make sure you have PEAR inside your PHP include_path */ // this should work but doesn't spl_autoload_register('spl_autoload'); // this does work //spl_autoload_register('SPL_autoload_suxx'); /** * Does the same as spl_autoload, but without lowercasing */ function SPL_autoload_suxx($name) { $rc = FALSE; $exts = explode(',', spl_autoload_extensions()); $sep = (substr(PHP_OS, 0, 3) == 'Win') ? ';' : ':'; $paths = explode($sep, ini_get('include_path')); foreach($paths as $path) { foreach($exts as $ext) { $file = $path . DIRECTORY_SEPARATOR . $name . $ext; if(is_readable($file)) { require_once $file; $rc = $file; break; } } } return $rc; } $binaries = array( 'mysql' => System::which('mysql'), 'mysqlbinlog' => System::which('mysqlbinlog'), 'php' => System::which('php') ); print_r($binaries); ?> Expected result: ---------------- Array ( [mysql] => /usr/bin/mysql [mysqlbinlog] => /usr/bin/mysqlbinlog [php] => /usr/local/bin/php ) Actual result: -------------- PHP Fatal error: Class 'System' not found in /srv/www/vhosts/ on line 38 Add a Patch Add a Pull Request Thank you for your bug report. 
Wontfix means: we agree that there is a bug, but there are reasons not to fix it. The reason here is that is spl_autoload becomes case sensitive, it will break scripts which depend on spl_autoload being case insensitive. >The reason here is that is spl_autoload becomes case >sensitive, it will break scripts which depend on spl_autoload being >case insensitive. spl_autoload() was introduced in PHP 5.1.2 which is case sensitive concerning class names. This implies that if an operation on an unknown class is done, spl_autoload() is triggered and executed with the case sensitive name of the class. Thus we have 4 different possibilities: 1) The class name all lower case, the file containing the class definition is all lower case (eg. $foo = system::bar(); system.php) This will work independent wether spl_autoload() is lowercasing or not, since all is lowercased. Note that if the class defined in the file system.php is actually named System it wouldn't have ever worked because the class system is still not defined, which would trigger an error. 2) The class name all lower case, the file containing the class definition is uppercased (eg. $foo = system::bar(); System.php) This wouldn't work anymore on file systems which are case sensitive if spl_autoload() would skip lowercasing. Note that this would only have worked if the file system is case insensitive and the class definition in System.php would define a class "system". 3) The class name contains upper case letters, the file containing the class definition is lowercased (eg. $foo = System::bar(); system.php) This is what currently isn't working at all but would work at least for case insensitive file systems if lowercasing would be dropped. Note that if the class defined in the file system.php is actually named system it wouldn't have ever worked because the class System is still not defined. 4) The class name contains upper case letters, the file containing the class definition is uppercased (eg. $foo = System::bar(); System.php) This is what should (and would) work, but currently doesn't. Conclusion: The only problem might be (2): Class name: sample Filename: Sample.php Class definition in Sample.php: class sample { ... } Note: this does work on case insensitive file systems only. I really can't see any reason for maintaining the "Worse is better" principle here, I really doubt that there is much code around relying on the tolowercase feature/bug of spl_autoload(). As a compromise I propose the following: 1) spl_autoload() additionally tries to find a file _not_ lowercased. 2) Throw a E_DEPRECATED in case the filename had to be lowercased to match. Until then: I really don't know why this lowercasing thing was introduced into slp_autoload() to begin with, all it ever did was preventing classes to be named with upper case letters on file systems which are case sensitive. In other words: the only compatibility issue is that code which currently works on platforms like Windows only would suddenly work on UN*X like platforms too. Pls confirm if this is the compatibility issue you are talking about. Trying both lowercased and original case could solve this without breaking backwards compatibility. However, you could as well supply your own autoload function defined in PHP to solve this. Why is this bug marked as bogus? Even if spl_autoload itself isn't fixed, at the very least a version that does it correctly could be added (although in this case it seriously could just be fixed by trying the correct case first). 
Implementing one in PHP is all very well, but that means that it's non-standard and likely incompatible with what each programmer might expect. It's also slower. I agree with Simon. There is absolutely no reason not to fix this, while keeping backwards compatibility. Two years ago the reason for not fixing it was "breaking BC is not an option". There are plenty of alternatives, and to be honest, PHP has broken BC alot of times in the last few versions (which is a good thing in my opinion, as long as the language becomes cleaner/stricter). Having all the files in lowercase makes them alot harder to read. Having a custom autoloader function is slower and more complicated to get right, and just makes code more ugly and harder to understand. At the very least case sensitivity of the SPL autoloader should be configurable, or available by the use of an extra suffix. I would love to see this in the new 5.4 release; shouldn't take more than a few lines of code. I agree, this bug is not bogus. In fact, it's actually quite serious. Just because spl_autoload was designed poorly doesn't mean it has to remain broken. Several fixes have been proposed, please consider implementing one of them to make this function cross-platform, and therefore useful. Please fix this After having lost 2 days over this, I agree, this should be fixed. At the very least, it should be documented that spl_autoload lower cases filenames. I spent hours trying to register an autoload class that would fix this, but in vain. How can I know that "dataloader" should be mapped to "DataLoader"? If it had been documented that filenames were lowercased, I would not have spent hours banging my head against a brick wall... This seems like a 20 min fix to me, and I've never looked the sql_autoload code before. I spent a good deal of time spinning my wheels on this. I don't see why anyone should lose time of this very obvious bug. However it's better to patch that to bitch, as I always say. I'll submit my patch and see what happens. It's not big deal if this is not fixed, since it's so easy to fix I can keep fixing it with each release. Here is a diff, I'll submit the patch with the "Add a Patch" link. 225d224 < char *lc_class_file; 227d225 < int lc_class_file_len; 229d226 < int mixed_case = 0; 235,236c232 < lc_class_file_len = spprintf(&lc_class_file, 0, "%s%s", lc_name, file_extension); < class_file_len = spprintf(&class_file, 0, "%s%s", class_name, file_extension); --- > class_file_len = spprintf(&class_file, 0, "%s%s", lc_name, file_extension); 252,261c248 < mixed_case = 1; < } < < /* fall back to lowercase file name. should issue deprecated warning. */ < if (ret != SUCCESS) { < ret = php_stream_open_for_zend_ex(lc_class_file, &file_handle, ENFORCE_SAFE_MODE|USE_PATH|STREAM_OPEN_FOR_INCLUDE TSRMLS_CC); < } < < if (ret == SUCCESS) { < if (!file_handle.opened_path && mixed_case == 1) { --- > if (!file_handle.opened_path) { 263,264d249 < } else if(!file_handle.opened_path && mixed_case == 0) { < file_handle.opened_path = estrndup(lc_class_file, lc_class_file_len); 290d274 < efree(lc_class_file); 295d278 < efree(lc_class_file); 331d313 < PS: I really hate this bug with a passion. Status: Not a bug Bullshit. IS A BUG. When will this be recognised as such by you php devs and added to the list of bugs to be fixed! Since I have an extensive codebase relying on classes defined with uppercase starting letter and I saw the 'tip' in the documentation (see below) , I wanted to switch. 
To my surprise I bumped into this issue with spl_autoload_register (needless to say that it works on a Windows box as a charm and breaks terribly on a Linux box). It definitely is a bug which should be mentioned clearly in the documentation. In addition I would strongly suggest the __autoload function will not be deprecated until this is fixed. -- From the documentation : Tip spl_autoload_register() provides a more flexible alternative for autoloading classes. For this reason, using __autoload() is discouraged and may be deprecated or removed in the future. -- Bumping this bug, at least add a boolean parameter to it for case-sensitivity. This should be literally a 5-minute job for the devs, why has nobody fixed it for 3 years and counting? Have you no shame? Also add my support to FIX THIS BUG!!! Just tried adding the Ebay Trading API to my existing site, which is using spl_autoload. Because the ebay team (like it seems every other sensible dev on the entire planet) uses CamelCased filenames, I now have 3 choices: 1: Do not use spl_autoload (Not an option because this would break our existing site). 2: Do not use the Ebay Trading API, or write my own implementation of this from scratch. (Also not an option for obvious reasons). 3: Rename over 1000 files, and replace every CamelCase instance in each and every file with lower case names. 3 is looking to be my only option, which of course would need repeating each and every time Ebay update their API, so is really NOT a viable option. Like so many others I also think this is a much too obvious bug, unexpected behaviour, etc ... you name it. It's a poor implementation that needs to be fixed even if it means breaking compatibility with PHP code that relyies on it, code + files which are poorly cased anyway (!). However you can maintain compatibility by introducing a new function which alters a case sensitivity flag. Just like you already have spl_autoload_extensions() to hint/restrict the extensions for spl_autoload(), you can have a function spl_autoload_case() and call it once, e.g.: spl_autoload_case(false); // for the broken lowercase spl_autoload() spl_autoload_case(true); // to respect case sensitivity You'd call this before spl_autoload() gets called. You can even make the default to be lowercase, like you so insist on having. This way you don't break compatibility -- although you should(too many aspects of PHP encourage bad coding already). There's a few things I'd like to add (actually a lot but it's probably best if I keep most of it to myself): [email protected]: "it will break scripts which depend on spl_autoload being case insensitive." This suggest that right now spl_autoload is in fact case insensitive, which it is not. A case insensitive system should find Core.php when asking for Core.php, just like a case sensitive system would. The difference is that it would ALSO find core.php which would be fine by me. Now it fails to find Core.php making it case destructive at best. wim at asgc dot be: "In addition I would strongly suggest the __autoload function will not be deprecated until this is fixed." Thank god I love irony, however, this won't actually be a problem as you can still use custom auto loaders. All you need to do is register it using spl_autoload_register(). 
And finally, when using namespaces it is quite easy to get around this problem using a short autoloader function: function SPL_autoload_suxx($class) { include \str_replace('\\', '/', $class) .'.php'; } \spl_autoload_register(__NAMESPACE__ .'\SPL_autoload_suxx'); All you have to do is copy, paste and mop up the river that you've cried. :-( I just spent 2 hours to get the naming conventions inline with all the other OOP languages out there (meaning CamelCase filenames) only to find out that spl_autload is broken. This needs to be fixed. It is completely unexpected behaviour (if it tries to load "Class" it should be able to locate "Class.php". If backwards compatibility is such a big problem, then please add some kind of flag for it. It's ridiculous that you have to work around this "feature". One thing everyone here seems to be missing here is that php classes are not case sensitive: This code will execute correctly: class FooBar { } new foobar; new FooBar; new FOOBAR; new fooBar; However, add autoloading into the mix and you create action at a distance that isn't immediately obvious: if the class is in FooBar.php and you have a case-sensitive autoloader then this executes: new FooBar; new foobar; but this does not: new foobar; new FooBar; Should the order of operations here have any affect on whether this script runs successfully or not? Even worse, lets say you have: function a() { new FooBar; } function b(){ new foobar; } function c() { a(); b(); } What's even less obvious is that if the implementation of the a function changes to no longer need the FooBar class, the c function stops working. To the developer working on the a function, the implementation has changed but the API has not so should have no negative effect on anything external, yet another part of the application now breaks for no obvious reason. The PHP Developers are correct that this is not a bug and it's more sensible for the autoloader to be case insensitive. One of two things need to happen: 1) All autoloaders should be case-insensitive OR 2) PHP enforces case sensitivity on all class names. The correct answer of course being solution 2: enforce case sensitivity on all class names (and functions too, whilst we are on the subject), like any sane language. This is not hard to fix. Add a deprecation warning for anyone who uses a class name that doesn't exactly match the case it was defined in, then at some point in the future change the functionality. Anyone who didn't pay attention to the deprecation warnings, or worse, didn't have warnings enabled, deserves everything they get. This is a long slow process to fix, which should have been started years ago. Why you just don't make two functions or a flag which tells php if it should do it's job case sensitive or case insensitive? That can't be too hard! Or add some option like: spl_autoload_register(null, null, null, true); to autoload files in a case sensitive manner. The power of PHP comes by leveraging the C implementation of things. This should be possible natively in combination with an elegant project folder/class name structure.
https://bugs.php.net/bug.php?id=49625
CC-MAIN-2016-30
refinedweb
2,790
64
How to capture stdout in real-time with Python

This solution is thanks to this article.

    import subprocess

    def myrun(cmd):
        """from """
        p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
        stdout = []
        while True:
            line = p.stdout.readline()
            stdout.append(line)
            print line,
            if line == '' and p.poll() != None:
                break
        return ''.join(stdout)

Related posts

- How to use the bash shell with Python's subprocess module instead of /bin/sh — posted 2011-04-13
- How to get stdout and stderr using Python's subprocess module — posted 2008-09-23
- How to use python and popen4 to capture stdout and stderr from a command — posted 2007-03-12

Comments

- I know you posted this two years ago, but just wanted to say THANKS. Two hours of hunting for a solution on various discussion boards, getting vague answers... This hit the nail on the head for my problem.
- Hello. After a huge amount of googling I found your solution and I want to appreciate your blog. Thank you very much!
- Hi guys, I couldn't capture the iftop output stream with the above code. Please help; I get two lines and then it waits forever.
- Hey, this helped me a lot. I didn't use the exact implementation, but it gave me a good idea. Cheers
- Doesn't work. Blocks on readline().
- Better yet, use a thread to listen for output:...
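For Python 3, the same idea is usually written by iterating readline() until EOF, which makes the explicit poll() check unnecessary. This variant is an untested sketch along those lines, not part of the original post:

    import subprocess

    def myrun3(cmd):
        p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT, text=True, bufsize=1)
        lines = []
        for line in iter(p.stdout.readline, ''):
            print(line, end='')
            lines.append(line)
        p.stdout.close()
        p.wait()
        return ''.join(lines)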
https://www.saltycrane.com/blog/2009/10/how-capture-stdout-in-real-time-python/
CC-MAIN-2019-47
refinedweb
228
64.3
Skylark Language Specification Skylark is a dialect of Python intended for use as a configuration language. A Skylark interpreter is typically embedded within a larger application, and this application may define additional domain-specific functions and data types beyond those provided by the core language. For example, Skylark is embedded within (and was originally developed for) the Bazel build tool, and Bazel’s build language is based on Skylark. Another implementation in Go can be found here: This document was derived from the description of the Go implementation of Skylark. It was influenced by the Python specification, Copyright 1990–2017, Python Software Foundation, and the Go specification, Copyright 2009–2017, The Go Authors. It is now maintained by the Bazel team. The name “Skylark” is a code name of the Bazel project. We plan to rename the language soon to reflect its applicability to projects unrelated to Bazel. Overview Skylark is an untyped dynamic language with high-level data types, first-class functions with lexical scope, and automatic memory management or garbage collection. Skylark is strongly influenced by Python, and is almost a subset of that language. In particular, its data types and syntax for statements and expressions will be very familiar to any Python programmer. However, Skylark is intended not for writing applications but for expressing configuration: its programs are short-lived and have no external side effects and their main result is structured data or side effects on the host application. Skylark is intended to be simple. There are no user-defined types, no inheritance, no reflection, no exceptions, no explicit memory management. Execution is finite. The language does not allow recursion or unbounded loops. Skylark is suitable for use in highly parallel applications. An application may invoke the Skylark interpreter concurrently from many threads, without the possibility of a data race, because shared data structures become immutable due to freezing. The language is deterministic and hermetic. Executing the same file with the same interpreter leads to the same result. By default, user code cannot interact with the environment. Lexical elements A Skylark program consists of one or more modules. Each module is defined by a single UTF-8-encoded text file. Skylark grammar is introduced gradually throughout this document as shown below, and a complete Skylark grammar reference is provided at the end. Grammar notation: - lowercase and 'quoted' items are lexical tokens. - Capitalized names denote grammar productions. - (...) implies grouping. - x | y means either x or y. - [x] means x is optional. - {x} means x is repeated zero or more times. - The end of each declaration is marked with a period. The contents of a Skylark file are broken into a sequence of tokens of five kinds: white space, punctuation, keywords, identifiers, and literals. Each token is formed from the longest sequence of characters that would form a valid token of each kind. File = {Statement | newline} eof . White space consists of spaces (U+0020), tabs (U+0009), carriage returns (U+000D), and newlines (U+000A). Within a line, white space has no effect other than to delimit the previous token, but newlines, and spaces at the start of a line, are significant tokens. #) appearing outside of a string literal marks the start of a comment; the comment extends to the end of the line, not including the newline character. 
Punctuation: The following punctuation characters or sequences of characters are tokens: + - * // % ** . , = ; : ( ) [ ] { } < > >= <= == != += -= *= //= %= Keywords: The following tokens are keywords and may not be used as identifiers: and else load break for not continue if or def in pass elif return The tokens below also may not be used as identifiers although they do not appear in the grammar; they are reserved as possible future keywords: as is assert lambda class nonlocal del raise except try finally while from with global yield import Identifiers: an identifier is a sequence of Unicode letters, decimal digits, and underscores ( _), not starting with a digit. Identifiers are used as names for values. Examples: None True len x index starts_with arg0 Literals: literals are tokens that denote specific values. Skylark has string and integer literals. 0 # int 123 # decimal int 0x7f # hexadecimal int 0o755 # octal int "hello" 'hello' # string '''hello''' """hello""" # triple-quoted string r'hello' r"hello" # raw string literal Integer literal tokens are defined by the following grammar: int = decimal_lit | octal_lit | hex_lit | 0 . decimal_lit = ('1' … '9') {decimal_digit} . octal_lit = '0' ('o' | 'O') octal_digit {octal_digit} . hex_lit = '0' ('x' | 'X') hex_digit {hex_digit} . decimal_digit = '0' … '9' . octal_digit = '0' … '7' . hex_digit = '0' … '9' | 'A' … 'F' | 'a' … 'f' . TODO: define string_lit, indent, outdent, semicolon, newline, eof Data types These are the main data types built in to the interpreter: NoneType # the type of None bool # True or False int # a signed integer string # a byte string list # a fixed-length sequence of values tuple # a fixed-length sequence of values, unmodifiable dict # a mapping from values to values function # a function Some functions, such as the range function, return instances of special-purpose types that don’t appear in this list. Additional data types may be defined by the host application into which the interpreter is embedded, and those data types may participate in basic operations of the language such as arithmetic, comparison, indexing, and function calls. Some operations can be applied to any Skylark value. For example, every value has a type string that can be obtained with the expression type(x), and any value may be converted to a string using the expression str(x), or to a Boolean truth value using the expression bool(x). Other operations apply only to certain types. For example, the indexing operation a[i] works only with strings, lists, and tuples, and any application-defined types that are indexable. The value concepts section explains the groupings of types by the operators they support. None None is a distinguished value used to indicate the absence of any other value. For example, the result of a call to a function that contains no return statement is None. None is equal only to itself. Its type is "NoneType". The truth value of None is False. Booleans There are two Boolean values, True and False, representing the truth or falsehood of a predicate. The type of a Boolean is "bool". Boolean values are typically used as conditions in if-statements, although any Skylark value used as a condition is implicitly interpreted as a Boolean. For example, the values None, 0, and the empty sequences "", (), [], and {} have a truth value of False, whereas non-zero numbers and non-empty sequences have a truth value of True. Application-defined types determine their own truth value. 
Any value may be explicitly converted to a Boolean using the built-in bool function. 1 + 1 == 2 # True 2 + 2 == 5 # False if 1 + 1: print("True") else: print("False") Integers The Skylark integer type represents integers. Its type is "int". Integers may be positive or negative. The precision is implementation-dependent. It is a dynamic error if a result is outside the supported range. Integers are totally ordered; comparisons follow mathematical tradition. The + and - operators perform addition and subtraction, respectively. The * operator performs multiplication. The // and % operations on integers compute floored division and remainder of floored division, respectively. If the signs of the operands differ, the sign of the remainder x % y matches that of the dividend, x. For all finite x and y (y ≠ 0), (x // y) * y + (x % y) == x. Any bool, number, or string may be interpreted as an integer by using the int built-in function. An integer used in a Boolean context is considered true if it is non-zero. 100 // 5 * 9 + 32 # 212 3 // 2 # 1 111111111 * 111111111 # 12345678987654321 int("0xffff", 16) # 65535 Strings A string represents an immutable sequence of bytes. The type of a string is "string". Strings can represent arbitrary binary data, including zero bytes, but most strings contain text, encoded by convention using UTF-8. The built-in len function returns the number of bytes in a string. Strings may be concatenated with the + operator. The substring expression s[i:j] returns the substring of s from index i up to index j. The index expression s[i] returns the 1-byte substring s[i:i+1]. Strings are hashable, and thus may be used as keys in a dictionary. Strings are totally ordered lexicographically, so strings may be compared using operators such as == and <. Strings are not iterable sequences, so they cannot be used as the operand of a for-loop, list comprehension, or any other operation than requires an iterable sequence. Any value may formatted as a string using the str or repr built-in functions, the str % tuple operator, or the str.format method. A string used in a Boolean context is considered true if it is non-empty. Strings have several built-in methods: capitalize count endswith find format index isalnum isalpha isdigit islower isspace istitle isupper join lower lstrip partition replace rfind rindex rpartition rsplit rstrip split splitlines startswith strip title upper Lists A list is a mutable sequence of values. The type of a list is "list". Lists are indexable sequences: the elements of a list may be iterated over by for-loops, list comprehensions, and various built-in functions. List may be constructed using bracketed list notation: [] # an empty list [1] # a 1-element list [1, 2] # a 2-element list Lists can also be constructed from any iterable sequence by using the built-in list function. The built-in len function applied to a list returns the number of elements. The index expression list[i] returns the element at index i, and the slice expression list[i:j] returns a new list consisting of the elements at indices from i to j. List elements may be added using the append or extend methods, removed using the remove method, or reordered by assignments such as list[i] = list[j]. The concatenation operation x + y yields a new list containing all the elements of the two lists x and y. For most types, x += y is equivalent to x = x + y, except that it evaluates x only once, that is, it allocates a new list to hold the concatenation of x and y. 
However, if x refers to a list, the statement does not allocate a new list but instead mutates the original list in place, similar to x.extend(y). Lists are not hashable, so may not be used in the keys of a dictionary. A list used in a Boolean context is considered true if it is non-empty. A list comprehension creates a new list whose elements are the result of some expression applied to each element of another sequence. [x*x for x in [1, 2, 3, 4]] # [1, 4, 9, 16] A list value has these methods: Tuples A tuple is an immutable sequence of values. The type of a tuple is "tuple". Tuples are constructed using parenthesized list notation: () # the empty tuple (1,) # a 1-tuple (1, 2) # a 2-tuple ("pair") (1, 2, 3) # a 3-tuple Observe that for the 1-tuple, the trailing comma is necessary to distinguish it from the parenthesized expression (1). 1-tuples are seldom used. Skylark, unlike Python, does not permit a trailing comma to appear in an unparenthesized tuple expression: for k, v, in dict.items(): pass # syntax error at 'in' _ = [(v, k) for k, v, in dict.items()] # syntax error at 'in' sorted(3, 1, 4, 1,) # ok [1, 2, 3, ] # ok {1: 2, 3:4, } # ok Any iterable sequence may be converted to a tuple by using the built-in tuple function. Like lists, tuples are indexed sequences, so they may be indexed and sliced. The index expression tuple[i] returns the tuple element at index i, and the slice expression tuple[i:j] returns a subsequence of a tuple. Tuples are iterable sequences, so they may be used as the operand of a for-loop, a list comprehension, or various built-in functions. Unlike lists, tuples cannot be modified. However, the mutable elements of a tuple may be modified. Tuples are hashable (assuming their elements are hashable), so they may be used as keys of a dictionary. Tuples may be concatenated using the + operator. A tuple used in a Boolean context is considered true if it is non-empty. Dictionaries A dictionary is a mutable mapping from keys to values. The type of a dictionary is "dict". Dictionaries provide constant-time operations to insert an element, to look up the value for a key, or to remove an element. Dictionaries are implemented using hash tables, so keys must be hashable. Hashable values include None, Booleans, numbers, and strings, and tuples composed from hashable values. Most mutable values, such as lists, and dictionaries, are not hashable, unless they are frozen. Attempting to use a non-hashable value as a key in a dictionary results in a dynamic error, as does passing one to the built-in hash function. A dictionary expression specifies a dictionary as a set of key/value pairs enclosed in braces: coins = { "penny": 1, "nickel": 5, "dime": 10, "quarter": 25, } The expression d[k], where d is a dictionary and k is a key, retrieves the value associated with the key. If the dictionary contains no such item, the operation fails: coins["penny"] # 1 coins["dime"] # 10 coins["silver dollar"] # error: key not found The number of items in a dictionary d is given by len(d). A key/value item may be added to a dictionary, or updated if the key is already present, by using d[k] on the left side of an assignment: len(coins) # 4 coins["shilling"] = 20 len(coins) # 5, item was inserted coins["shilling"] = 5 len(coins) # 5, existing item was updated A dictionary can also be constructed using a dictionary comprehension, which evaluates a pair of expressions, the key and the value, for every element of another iterable such as a list. 
This example builds a mapping from each word to its length in bytes: words = ["able", "baker", "charlie"] {x: len(x) for x in words} # {"charlie": 7, "baker": 5, "able": 4} Dictionaries are iterable sequences, so they may be used as the operand of a for-loop, a list comprehension, or various built-in functions. Iteration yields the dictionary’s keys in the order in which they were inserted; updating the value associated with an existing key does not affect the iteration order. x = dict([("a", 1), ("b", 2)]) # {"a": 1, "b": 2} x.update([("a", 3), ("c", 4)]) # {"a": 3, "b": 2, "c": 4} for name in coins: print(name, coins[name]) # prints "quarter 25", "dime 10", ... Like all mutable values in Skylark, a dictionary can be frozen, and once frozen, all subsequent operations that attempt to update it will fail. A dictionary used in a Boolean context is considered true if it is non-empty. The binary + operation may be applied to two dictionaries. It yields a new dictionary whose elements are the union of the two operands. If a key is present in both operands, the result contains the value from the right operand. Note: this feature is deprecated. Use the dict.update method instead. Dictionaries may be compared for equality using == and !=. Two dictionaries compare equal if they contain the same number of items and each key/value item (k, v) found in one dictionary is also present in the other. Dictionaries are not ordered; it is an error to compare two dictionaries with <. A dictionary value has these methods: Functions A function value represents a function defined in Skylark. Its type is "function". A function value used in a Boolean context is always considered true. Function definitions may not be nested. A function definition defines zero or more named parameters. Skylark has a rich mechanism for passing arguments to functions. The example below shows a definition and call of a function of two required parameters, x and y. def idiv(x, y): return x // y idiv(6, 3) # 2 A call may provide arguments to function parameters either by position, as in the example above, or by name, as in first two calls below, or by a mixture of the two forms, as in the third call below. All the positional arguments must precede all the named arguments. Named arguments may improve clarity, especially in functions of several parameters. idiv(x=6, y=3) # 2 idiv(y=3, x=6) # 2 idiv(6, y=3) # 2 Optional parameters: A parameter declaration may specify a default value using name=value syntax; such a parameter is optional. The default value expression is evaluated during execution of the def statement, and the default value forms part of the function value. All optional parameters must follow all non-optional parameters. A function call may omit arguments for any suffix of the optional parameters; the effective values of those arguments are supplied by the function’s parameter defaults. def f(x, y=3): return x, y f(1, 2) # (1, 2) f(1) # (1, 3) If a function parameter’s default value is a mutable expression, modifications to the value during one call may be observed by subsequent calls. Beware of this when using lists or dicts as default values. If the function becomes frozen, its parameters’ default values become frozen too. # module a.sky def f(x, list=[]): list.append(x) return list f(4, [1,2,3]) # [1, 2, 3, 4] f(1) # [1] f(2) # [1, 2], not [2]! # module b.sky load("a.sky", "f") f(3) # error: cannot append to frozen list Variadic functions: Some functions allow callers to provide an arbitrary number of arguments. 
After all required and optional parameters, a function definition may specify a variadic arguments or varargs parameter, indicated by a star preceding the parameter name: *args. Any surplus positional arguments provided by the caller are formed into a tuple and assigned to the args parameter. def f(x, y, *args): return x, y, args f(1, 2) # (1, 2, ()) f(1, 2, 3, 4) # (1, 2, (3, 4)) Keyword-variadic functions: Some functions allow callers to provide an arbitrary sequence of name=value keyword arguments. A function definition may include a final keyworded arguments or kwargs parameter, indicated by a double-star preceding the parameter name: **kwargs. Any surplus named arguments that do not correspond to named parameters are collected in a new dictionary and assigned to the kwargs parameter: def f(x, y, **kwargs): return x, y, kwargs f(1, 2) # (1, 2, {}) f(x=2, y=1) # (2, 1, {}) f(x=2, y=1, z=3) # (2, 1, {"z": 3}) It is a static error if any two parameters of a function have the same name. Just as a function definition may accept an arbitrary number of positional or keyworded arguments, a function call may provide an arbitrary number of positional or keyworded arguments supplied by a list or dictionary: def f(a, b, c=5): return a * b + c f(*[2, 3]) # 11 f(*[2, 3, 7]) # 13 f(*[2]) # error: f takes at least 2 arguments (1 given) f(**dict(b=3, a=2)) # 11 f(**dict(c=7, a=2, b=3)) # 13 f(**dict(a=2)) # error: f takes at least 2 arguments (1 given) f(**dict(d=4)) # error: f got unexpected keyword argument "d" Once the parameters have been successfully bound to the arguments supplied by the call, the sequence of statements that comprise the function body is executed. A function call completes normally after the execution of either a return statement, or of the last statement in the function body. The result of the function call is the value of the return statement’s operand, or None if the return statement had no operand or if the function completeted without executing a return statement. def f(x): if x == 0: return if x < 0: return -x print(x) f(1) # returns None after printing "1" f(0) # returns None without printing f(-1) # returns 1 without printing It is a dynamic error for a function to call itself or another function value with the same declaration. def fib(x): if x < 2: return x return fib(x-2) + fib(x-1) # dynamic error: function fib called recursively fib(5) This rule, combined with the invariant that all loops are iterations over finite sequences, implies that Skylark programs are not Turing-complete. Built-in functions A built-in function is a function or method implemented by the interpreter or the application into which the interpreter is embedded. A built-in function value used in a Boolean context is always considered true. Many built-in functions are defined in the “universe” block of the environment (see Name Resolution), and are thus available to all Skylark programs. Except where noted, built-in functions accept only positional arguments. Name binding and variables After a Skylark file is parsed, but before its execution begins, the Skylark interpreter checks statically that the program is well formed. For example, break and continue statements may appear only within a loop; if, for, and return statements may appear only within a function; and load statements may appear only outside any function. Name resolution is the static checking process that resolves names to variable bindings. During execution, names refer to variables. 
Statically, names denote places in the code where variables are created; these places are called bindings. A name may denote different bindings at different places in the program. The region of text in which a particular name refers to the same binding is called that binding’s scope. Four Skylark constructs bind names, as illustrated in the example below: load statements ( a and b), def statements ( c), function parameters ( d), and assignments ( e, h, including the augmented assignment e += h). Variables may be assigned or re-assigned explicitly ( e, h), or implicitly, as in a for-loop ( f) or comprehension ( g, i). load("lib.sky", "a", b="B") def c(d): e = 0 for f in d: print([True for g in f]) e += 1 h = [2*i for i in a] The environment of a Skylark program is structured as a tree of lexical blocks, each of which may contain name bindings. The tree of blocks is parallel to the syntax tree. Blocks are of four kinds. At the root of the tree is the universe block, which binds constant values such as None, True, and False, and built-in functions such as len, list, and so on. Skylark programs cannot change the set of universe bindings. Because the universe block is shared by all files (modules), all values bound in it must be immutable and stateless from the perspective of the Skylark program. Nested beneath the universe block is the module block, which contains the bindings of the current file. Bindings in the module block (such as a, b, c, and h in the example) are called global. The module block is typically empty at the start of the file and is populated by top-level binding statements, but an application may pre-bind one or more global names, to provide domain-specific functions to that file, for example. A module block contains a function block for each top-level function, and a comprehension block for each top-level comprehension. Bindings inside either of these kinds of block are called local. Additional functions and comprehensions, and their blocks, may be nested in any order, to any depth. If name is bound anywhere within a block, all uses of the name within the block are treated as references to that binding, even uses that appear before the binding. The binding of y on the last line of the example below makes y local to the function hello, so the use of y in the print statement also refers to the local y, even though it appears earlier. y = "goodbye" def hello(): for x in (1, 2): if x == 2: print(y) # prints "hello" if x == 1: y = "hello" It is a dynamic error to evaluate a reference to a local variable before it has been bound: def f(): print(x) # dynamic error: local variable x referenced before assignment x = "hello" The same is true for global variables: print(x) # dynamic error: global variable x referenced before assignment x = "hello" It is a static error to bind a global variable already explicitly bound in the file: x = 1 x = 2 # static error: cannot reassign global x declared on line 1 If a name was pre-bound by the application, the Skylark program may explicitly bind it, but only once. An augmented assignment statement such as x += 1 is considered a binding of x. It is therefore a static error to use it on a global variable. A name appearing after a dot, such as split in get_filename().split('/'), is not resolved statically. The dot expression .split is a dynamic operation on the value returned by get_filename(). Value concepts Skylark has eleven core data types. 
An application that embeds the Skylark interpreter may define additional types that behave like Skylark values. All values, whether core or application-defined, implement a few basic behaviors: str(x) -- return a string representation of x type(x) -- return a string describing the type of x bool(x) -- convert x to a Boolean truth value hash(x) -- return a hash code for x Identity and mutation Skylark is an imperative language: programs consist of sequences of statements executed for their side effects. For example, an assignment statement updates the value held by a variable, and calls to some built-in functions such as print change the state of the application that embeds the interpreter. Values of some data types, such as NoneType, bool, int, and string, are immutable; they can never change. Immutable values have no notion of identity: it is impossible for a Skylark program to tell whether two integers, for instance, are represented by the same object; it can tell only whether they are equal. Values of other data types, such as list and dict, are mutable: they may be modified by a statement such as a[i] = 0 or items.clear(). Although tuple and function values are not directly mutable, they may refer to mutable values indirectly, so for this reason we consider them mutable too. Skylark values of these types are actually references to variables. Copying a reference to a variable, using an assignment statement for instance, creates an alias for the variable, and the effects of operations applied to the variable through one alias are visible through all others. x = [] # x refers to a new empty list variable y = x # y becomes an alias for x x.append(1) # changes the variable referred to by x print(y) # "[1]"; y observes the mutation Skylark uses call-by-value parameter passing: in a function call, argument values are assigned to function parameters as if by assignment statements. If the values are references, the caller and callee may refer to the same variables, so if the called function changes the variable referred to by a parameter, the effect may also be observed by the caller: def f(y): y.append(1) # changes the variable referred to by x x = [] # x refers to a new empty list variable f(x) # f's parameter y becomes an alias for x print(x) # "[1]"; x observes the mutation As in all imperative languages, understanding aliasing, the relationship between reference values and the variables to which they refer, is crucial to writing correct programs. Freezing a value Skylark has a feature unusual among imperative programming languages: a mutable value may be frozen so that all subsequent attempts to mutate it fail with a dynamic error; the value, and all other values reachable from it, become immutable. Immediately after execution of a Skylark module, all values in its top-level environment are frozen. Because all the global variables of an initialized Skylark module are immutable, the module may be published to and used by other threads in a parallel program without the need for locks. For example, the Bazel build system loads and executes BUILD and .bzl files in parallel, and two modules being executed concurrently may freely access variables or call functions from a third without the possibility of a race condition. Hashing The dict data type is implemented using hash tables, so only hashable values are suitable as keys of a dict. Attempting to use a non-hashable value as the key in a hash table, or as the operand of the hash built-in function, results in a dynamic error.
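For example, a brief sketch of which values may serve as dictionary keys (the variable name routes is invented for illustration):

routes = {}
routes[("eth0", 0)] = "default"   # a tuple of hashable values is itself hashable
routes[None] = "drop"             # None, Booleans, numbers, and strings are hashable
routes[[10, 0, 0, 1]] = "lan"     # dynamic error: a list is not hashable
hash([10, 0, 0, 1])               # dynamic error, for the same reason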
The hash of a value is an unspecified integer chosen so that two equal values have the same hash, in other words, x == y => hash(x) == hash(y). A hashable value has the same hash throughout its lifetime. Values of the types NoneType, bool, int, and string, which are all immutable, are hashable. Values of mutable types such as list and dict are not hashable, unless they have become immutable due to freezing. A tuple value is hashable only if all its elements are hashable. Thus ("localhost", 80) is hashable but ([127, 0, 0, 1], 80) is not. Values of type function are also hashable. Although functions are not necessarily immutable, as they may be closures that refer to mutable variables, instances of these types are compared by reference identity (see Comparisons), so their hash values are derived from their identity. Sequence types Many Skylark data types represent a sequence of values: lists and tuples are sequences of arbitrary values, and in many contexts dictionaries act like a sequence of their keys. We can classify different kinds of sequence types based on the operations they support. Iterable: an iterable value lets us process each of its elements in a fixed order. Examples: dict, list, tuple, but not string. Sequence: a sequence of known length lets us know how many elements it contains without processing them. Examples: dict, list, tuple, but not string. Indexable: an indexed type has a fixed length and provides efficient random access to its elements, which are identified by integer indices. Examples: string, tuple, and list. SetIndexable: a settable indexed type additionally allows us to modify the element at a given integer index. Example: list. Mapping: a mapping is an association of keys to values. Example: dict. Although all of Skylark’s core data types for sequences implement at least the Sequence contract, it’s possible for an an application that embeds the Skylark interpreter to define additional data types representing sequences of unknown length that implement only the Iterable contract. Strings are not iterable, though they do support the len(s) and s[i] operations. Skylark deviates from Python here to avoid common pitfall in which a string is used by mistake where a list containing a single string was intended, resulting in its interpretation as a sequence of bytes. Most Skylark operators and built-in functions that need a sequence of values will accept any iterable. It is a dynamic error to mutate a sequence such as a list or a dictionary while iterating over it. def increment_values(dict): for k in dict: dict[k] += 1 # error: cannot insert into hash table during iteration dict = {"one": 1, "two": 2} increment_values(dict) Indexing Many Skylark operators and functions require an index operand i, such as a[i] or list.insert(i, x). Others require two indices i and j that indicate the start and end of a subsequence, such as a[i:j], list.index(x, i, j), or string.find(x, i, j). All such operations follow similar conventions, described here. Indexing in Skylark is zero-based. The first element of a string or list has index 0, the next 1, and so on. The last element of a sequence of length n has index n-1. "hello"[0] # "h" "hello"[4] # "o" "hello"[5] # error: index out of range For subsequence operations that require two indices, the first is inclusive and the second exclusive. Thus a[i:j] indicates the sequence starting with element i up to but not including element j. The length of this subsequence is j-i. This convention is known as half-open indexing. 
"hello"[1:4] # "ell" Either or both of the index operands may be omitted. If omitted, the first is treated equivalent to 0 and the second is equivalent to the length of the sequence: "hello"[1:] # "ello" "hello"[:4] # "hell" It is permissible to supply a negative integer to an indexing operation. The effective index is computed from the supplied value by the following two-step procedure. First, if the value is negative, the length of the sequence is added to it. This provides a convenient way to address the final elements of the sequence: "hello"[-1] # "o", like "hello"[4] "hello"[-3:-1] # "ll", like "hello"[2:4] Second, for subsequence operations, if the value is still negative, it is replaced by zero, or if it is greater than the length n of the sequence, it is replaced by n. In effect, the index is “truncated” to the nearest value in the range [0:n]. "hello"[-1000:1000] # "hello" This truncation step does not apply to indices of individual elements: "hello"[-6] # error: index out of range "hello"[-5] # "h" "hello"[4] # "o" "hello"[5] # error: index out of range Expressions An expression specifies the computation of a value. The Skylark grammar defines several categories of expression. An operand is an expression consisting of a single token (such as an identifier or a literal), or a bracketed expression. Operands are self-delimiting. An operand may be followed by any number of dot, call, or slice suffixes, to form a primary expression. In some places in the Skylark grammar where an expression is expected, it is legal to provide a comma-separated list of expressions denoting a tuple. The grammar uses Expression where a multiple-component expression is allowed, and Test where it accepts an expression of only a single component. Expression = Test {',' Test} . Test = IfExpr | PrimaryExpr | UnaryExpr | BinaryExpr . PrimaryExpr = Operand | PrimaryExpr DotSuffix | PrimaryExpr CallSuffix | PrimaryExpr SliceSuffix . Operand = identifier | int | string | ListExpr | ListComp | DictExpr | DictComp | '(' [Expression] [,] ')' | '-' PrimaryExpr . DotSuffix = '.' identifier . CallSuffix = '(' [Arguments [',']] ')' . SliceSuffix = '[' [Expression] [':' Test [':' Test]] ']' . Identifiers Primary = identifier An identifier is a name that identifies a value. Lookup of locals and globals may fail if not yet defined. Literals Skylark supports string literals of three different kinds: Primary = int | string Evaluation of a literal yields a value of the given type (string, int) with the given value. See Literals for details. Parenthesized expressions Primary = '(' [Expression] ')' A single expression enclosed in parentheses yields the result of that expression. Explicit parentheses may be used for clarity, or to override the default association of subexpressions. 1 + 2 * 3 + 4 # 11 (1 + 2) * (3 + 4) # 21 If the parentheses are empty, or contain a single expression followed by a comma, or contain two or more expressions, the expression yields a tuple. () # (), the empty tuple (1,) # (1,), a tuple of length 1 (1, 2) # (1, 2), a 2-tuple or pair (1, 2, 3) # (1, 2, 3), a 3-tuple or triple In some contexts, such as a return or assignment statement or the operand of a for statement, a tuple may be expressed without parentheses. 
x, y = 1, 2 return 1, 2 for x in 1, 2: print(x) Skylark (like Python 3) does not accept an unparenthesized tuple expression as the operand of a list comprehension: [2*x for x in 1, 2, 3] # parse error: unexpected ',' Dictionary expressions A dictionary expression is a comma-separated list of colon-separated key/value expression pairs, enclosed in curly brackets, and it yields a new dictionary object. An optional comma may follow the final pair. DictExpr = '{' [Entries [',']] '}' . Entries = Entry {',' Entry} . Entry = Test ':' Test . Examples: {} {"one": 1} {"one": 1, "two": 2,} The key and value expressions are evaluated in left-to-right order. Evaluation fails if the same key is used multiple times. Only hashable values may be used as the keys of a dictionary. List expressions A list expression is a comma-separated list of element expressions, enclosed in square brackets, and it yields a new list object. An optional comma may follow the last element expression. ListExpr = '[' [Expression [',']] ']' . Element expressions are evaluated in left-to-right order. Examples: [] # [], empty list [1] # [1], a 1-element list [1, 2, 3,] # [1, 2, 3], a 3-element list Unary operators There are two unary operators, both appearing before their operand: - and not. UnaryExpr = '-' Test | 'not' Test . - number unary negation (number) not x logical negation (any type) The - operator returns the negation of its number operand. if x > 0: return 1 elif x < 0: return -1 else: return 0 The not operator returns the negation of the truth value of its operand. not True # False not False # True not [1, 2, 3] # False not "" # True not 0 # True Binary operators Skylark has the following binary operators, arranged in order of increasing precedence: or and not == != < > <= >= in not in - + * / // % Comparison operators, in, and not in are non-associative, so the parser will not accept 0 <= i < n. All other binary operators of equal precedence associate to the left. BinaryExpr = Test {Binop Test} . Binop = 'or' | 'and' | '==' | '!=' | '<' | '>' | '<=' | '>=' | 'in' | 'not' 'in' | '-' | '+' | '*' | '%' | '/' | '//' . or and and The or and and operators yield, respectively, the logical disjunction and conjunction of their arguments, which need not be Booleans. The expression x or y yields the value of x if its truth value is True, or the value of y otherwise. False or False # False False or True # True True or False # True True or True # True 0 or "hello" # "hello" 1 or "hello" # 1 Similarly, x and y yields the value of x if its truth value is False, or the value of y otherwise. False and False # False False and True # False True and False # False True and True # True 0 and "hello" # 0 1 and "hello" # "hello" These operators use “short circuit” evaluation, so the second expression is not evaluated if the value of the first expression has already determined the result, allowing constructions like these: len(x) > 0 and x[0] == 1 # x[0] is not evaluated if x is empty x and x[0] == 1 len(x) == 0 or x[0] == "" not x or not x[0] Comparisons The == operator reports whether its operands are equal; the != operator is its negation. The operators <, >, <=, and >= perform an ordered comparison of their operands. It is an error to apply these operators to operands of unequal type.
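A few illustrative comparisons, consistent with the rules above (these are sketches, not normative examples):

"abc" == "abc"     # True
[1, 2] == [1, 2]   # True; sequences compare element by element
(1, 2) < (1, 3)    # True; lexicographical comparison
"10" < "9"         # True; strings compare lexicographically, not numerically
1 < "one"          # error: operands of unequal type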
Of the built-in types, only the following support ordered comparison, using the ordering relation shown: NoneType # None <= None bool # False < True int # mathematical string # lexicographical tuple # lexicographical list # lexicographical Applications may define additional types that support ordered comparison. The remaining built-in types support only equality comparisons. Values of type dict compare equal if their elements compare equal, and values of type function are equal only to themselves. dict # equal contents function # identity Arithmetic operations The following table summarizes the binary arithmetic operations available for built-in types: Arithmetic number + number # addition number - number # subtraction number * number # multiplication number // number # floored division number % number # remainder of floored division Concatenation string + string list + list tuple + tuple dict + dict # (deprecated) Repetition (string/list/tuple) int * sequence sequence * int String interpolation string % any # see String Interpolation The operands of the arithmetic operators +, -, *, //, and % must both be int. The type of the result has type int. The + operator may be applied to non-numeric operands of the same type, such as two lists, two tuples, or two strings, in which case it computes the concatenation of the two operands and yields a new value of the same type. "Hello, " + "world" # "Hello, world" (1, 2) + (3, 4) # (1, 2, 3, 4) [1, 2] + [3, 4] # [1, 2, 3, 4] The * operator may be applied to an integer n and a value of type string, list, or tuple, in which case it yields a new value of the same sequence type consisting of n repetitions of the original sequence. The order of the operands is immaterial. Negative values of n behave like zero. 'mur' * 2 # 'murmur' 3 * range(3) # [0, 1, 2, 0, 1, 2, 0, 1, 2] Applications may define additional types that support any subset of these operators. Membership tests any in sequence (list, tuple, dict, string) any not in sequence The in operator reports whether its first operand is a member of its second operand, which must be a list, tuple, dict, or string. The not in operator is its negation. Both return a Boolean. The meaning of membership varies by the type of the second operand: the members of a list or tuple are its elements; the members of a dict are its keys; the members of a string are all its substrings. 1 in [1, 2, 3] # True 4 not in (1, 2, 3) # True d = {"one": 1, "two": 2} "one" in d # True "three" in d # False 1 in d # False "nasty" in "dynasty" # True "a" in "banana" # True "f" not in "way" # True String interpolation The expression format % args performs string interpolation, a simple form of template expansion. The format string is interpreted as a sequence of literal portions and conversions. Each conversion, which starts with a % character, is replaced by its corresponding value from args. The characters following % in each conversion determine which argument it uses and how to convert it to a string. Each % character marks the start of a conversion specifier, unless it is immediately followed by another %, in which cases both characters together denote a single literal percent sign. The conversion’s operand is the next element of args, which must be a tuple with exactly one component per conversion, unless the format string contains only a single conversion, in which case args itself is its operand. Skylark does not support the flag, width, and padding specifiers supported by Python’s % and other variants of C’s printf. 
After the % comes a single letter indicating what operand types are valid and how to convert the operand x to a string: % none literal percent sign s any as if by str(x) r any as if by repr(x) d number signed integer decimal It is an error if the argument does not have the type required by the conversion specifier. A Boolean argument is not considered a number. Examples: "Hello %s" % "Bob" # "Hello Bob" "Hello %s, your score is %d" % ("Bob", 75) # "Hello Bob, your score is 75" ) One subtlety: to use a tuple as the operand of a conversion in format string containing only a single conversion, you must wrap the tuple in a singleton tuple: "coordinates=%s" % (40, -74) # error: too many arguments for format string "coordinates=%s" % ((40, -74),) # "coordinates=(40, -74)" Conditional expressions A conditional expression has the form a if cond else b. It first evaluates the condition cond. If it’s true, it evaluates a and yields its value; otherwise it yields the value of b. IfExpr = Test 'if' Test 'else' Test . Example: "yes" if enabled else "no" Comprehensions A comprehension constructs new list or dictionary value by looping over one or more iterables and evaluating a body expression that produces successive elements of the result. A list comprehension consists of a single expression followed by one or more clauses, the first of which must be a for clause. Each for clause resembles a for statement, and specifies an iterable operand and a set of variables to be assigned by successive values of the iterable. An if cause resembles an if statement, and specifies a condition that must be met for the body expression to be evaluated. A sequence of for and if clauses acts like a nested sequence of for and if statements. ListComp = '[' Test {CompClause} ']'. DictComp = '{' Entry {CompClause} '}' . CompClause = 'for' LoopVariables 'in' Test | 'if' Test . LoopVariables = PrimaryExpr {',' PrimaryExpr} . Examples: [x*x for x in range(5)] # [0, 1, 4, 9, 16] [x*x for x in range(5) if x%2 == 0] # [0, 4, 16] [(x, y) for x in range(5) if x%2 == 0 for y in range(5) if y > x] # [(0, 1), (0, 2), (0, 3), (0, 4), (2, 3), (2, 4)] A dict comprehension resembles a list comprehension, but its body is a pair of expressions, key: value, separated by a colon, and its result is a dictionary containing the key/value pairs for which the body expression was evaluated. Evaluation fails if the value of any key is unhashable. As with a for loop, the loop variables may exploit compound assignment: [x*y+z for (x, y), z in [((2, 3), 5), (("o", 2), "!")]] # [11, 'oo!'] Skylark, following Python 3, does not accept an unparenthesized tuple as the operand of a for clause: [x*x for x in 1, 2, 3] # parse error: unexpected comma Comprehensions in Skylark, again following Python 3, define a new lexical block, so assignments to loop variables have no effect on variables of the same name in an enclosing block: x = 1 _ = [x for x in [2]] # new variable x is local to the comprehension print(x) # 1 Function and method calls CallSuffix = '(' [Arguments [',']] ')' . Arguments = Argument {',' Argument} . Argument = Test | identifier '=' Test | '*' Test | '**' Test . A value f of type function may be called using the expression f(...). Applications may define additional types whose values may be called in the same way. A method call such as filename.endswith(".sky") is the composition of two operations, m = filename.endswith and m(".sky"). 
The first, a dot operation, yields a bound method, a function value that pairs a receiver value (the filename string) with a choice of method (string·endswith). Only built-in or application-defined types may have methods. See Functions for an explanation of function parameter passing. Dot expressions A dot expression x.f selects the attribute f (a field or method) of the value x. Fields are possessed by none of the main Skylark data types, but some application-defined types have them. Methods belong to the built-in types string, list, and dict, and to many application-defined types. DotSuffix = '.' identifier . A dot expression fails if the value does not have an attribute of the specified name. Use the built-in function hasattr(x, "f") to ascertain whether a value has a specific attribute, or dir(x) to enumerate all its attributes. The getattr(x, "f") function can be used to select an attribute when the name "f" is not known statically. A dot expression that selects a method typically appears within a call expression, as in these examples: ["able", "baker", "charlie"].index("baker") # 1 "banana".count("a") # 3 "banana".reverse() # error: string has no .reverse field or method But when not called immediately, the dot expression evaluates to a bound method, that is, a method coupled to a specific receiver value. A bound method can be called like an ordinary function, without a receiver argument: f = "banana".count f # <built-in method count of string value> f("a") # 3 f("n") # 2 Implementation note: The Java implementation does not currently allow a method to be selected but not immediately called. See Google Issue b/21392896. Index expressions An index expression a[i] yields the ith element of an indexable type such as a string, tuple, or list. The index i must be an int value in the range - n ≤ i < n, where n is len(a); any other index results in an error. SliceSuffix = '[' [Expression] [':' Test [':' Test]] ']' . A valid negative index i behaves like the non-negative index n+i, allowing for convenient indexing relative to the end of the sequence. "abc"[0] # "a" "abc"[1] # "b" "abc"[-1] # "c" ("zero", "one", "two")[0] # "zero" ("zero", "one", "two")[1] # "one" ("zero", "one", "two")[-1] # "two" An index expression d[key] may also be applied to a dictionary d, to obtain the value associated with the specified key. It is an error if the dictionary contains no such key. An index expression appearing on the left side of an assignment causes the specified list or dictionary element to be updated: a = range(3) # a == [0, 1, 2] a[2] = 7 # a == [0, 1, 7] coins["suzie b"] = 100 It is a dynamic error to attempt to update an element of an immutable type, such as a tuple or string, or a frozen value of a mutable type. Slice expressions A slice expression a[start:stop:stride] yields a new value containing a subsequence of a, which must be a string, tuple, or list. SliceSuffix = '[' [Expression] [':' Test [':' Test]] ']' . Each of the start, stop, and stride operands is optional; if present, and not None, each must be an integer. The stride value defaults to 1. If the stride is not specified, the colon preceding it may be omitted too. It is an error to specify a stride of zero. Conceptually, these operands specify a sequence of values i starting at start and successively adding stride until i reaches or passes stop. The result consists of the concatenation of values of a[i] for which i is valid.` The effective start and stop indices are computed from the three operands as follows. 
Let n be the length of the sequence. If the stride is positive: 0 to n, inclusive. If the stride is negative: -1 to n-1, inclusive. "abc"[1:] # "bc" (remove first element) "abc"[:-1] # "ab" (remove last element) "abc"[1:-1] # "b" (remove first and last element) "banana"[1::2] # "aaa" (select alternate elements starting at index 1) "banana"[4::-2] # "nnb" (select alternate elements in reverse, starting at index 4) Unlike Python, Skylark does not allow a slice expression on the left side of an assignment. Slicing a tuple or string may be more efficient than slicing a list because tuples and strings are immutable, so the result of the operation can share the underlying representation of the original operand (when the stride is 1). By contrast, slicing a list requires the creation of a new list and copying of the necessary elements. Statements Statement = DefStmt | IfStmt | ForStmt | SimpleStmt . SimpleStmt = SmallStmt {';' SmallStmt} [';'] '\n' . SmallStmt = ReturnStmt | BreakStmt | ContinueStmt | PassStmt | AssignStmt | ExprStmt | LoadStmt . Pass statements A pass statement does nothing. Use a pass statement when the syntax requires a statement but no behavior is required, such as the body of a function that does nothing. PassStmt = 'pass' . Example: def noop(): pass def list_to_dict(items): # Convert list of tuples to dict m = {} for k, m[k] in items: pass return m Assignments An assignment statement has the form lhs = rhs. It evaluates the expression on the right-hand side then assigns its value (or values) to the variable (or variables) on the left-hand side. AssignStmt = Expression '=' Expression . The expression on the left-hand side is called a target. The simplest target is the name of a variable, but a target may also have the form of an index expression, to update the element of a list or dictionary, to update the field of an object: k = 1 a[i] = v m.f = "" Compound targets may consist of a comma-separated list of subtargets, optionally surrounded by parentheses or square brackets, and targets may be nested arbitarily in this way. An assignment to a compound target checks that the right-hand value is a sequence with the same number of elements as the target. Each element of the sequence is then assigned to the corresponding element of the target, recursively applying the same logic. It is a static error if the sequence is empty. a, b = 2, 3 (x, y) = f() [zero, one, two] = range(3) [(a, b), (c, d)] = ("ab", "cd") The same process for assigning a value to a target expression is used in for loops and in comprehensions. Augmented assignments An augmented assignment, which has the form lhs op= rhs updates the variable lhs by applying a binary arithmetic operator op (one of +, -, *, /, //, %) to the previous value of lhs and the value of rhs. AssignStmt = Expression ('=' | '+=' | '-=' | '*=' | '/=' | '//=' | '%=') Expression . The left-hand side must be a simple target: a name, an index expression, or a dot expression. x -= 1 x.filename += ".sky" a[index()] *= 2 Any subexpressions in the target on the left-hand side are evaluated exactly once, before the evaluation of rhs. The first two assignments above are thus equivalent to: x = x - 1 x.filename = x.filename + ".sky" and the third assignment is similar in effect to the following two statements but does not declare a new temporary variable i: i = index() a[i] = a[i] * 2 Function definitions A def statement creates a named function and assigns it to a variable. DefStmt = 'def' identifier '(' [Parameters [',']] ')' ':' Suite . 
Example: def twice(x): return x * 2 str(twice) # "<function twice>" twice(2) # 4 twice("two") # "twotwo" The function’s name is preceded by the def keyword and followed by the parameter list (which is enclosed in parentheses), a colon, and then an indented block of statements which form the body of the function. The parameter list is a comma-separated list whose elements are of four kinds. First come zero or more required parameters, which are simple identifiers; all calls must provide an argument value for these parameters. The required parameters are followed by zero or more optional parameters, of the form name=expression. The expression specifies the default value for the parameter for use in calls that do not provide an argument value for it. The optional parameters may in turn be followed by a single parameter name preceded by a *. This is called the varargs parameter, and it accumulates surplus positional arguments specified by a call. Finally, there may be an optional parameter name preceded by **. This is called the keyword arguments parameter, and accumulates in a dictionary any surplus name=value arguments that do not match a prior parameter. Here are some example parameter lists: def f(): pass def f(a, b, c): pass def f(a, b, c=1): pass def f(a, b, c=1, *args): pass def f(a, b, c=1, *args, **kwargs): pass def f(**kwargs): pass Execution of a def statement creates a new function object. The function object contains: the syntax of the function body; the default value for each optional parameter; the value of each free variable referenced within the function body; and the global dictionary of the current module. Return statements A return statement ends the execution of a function and returns a value to the caller of the function. ReturnStmt = 'return' [Expression] . A return statement may have zero, one, or more result expressions separated by commas. With no expressions, the function has the result None. With a single expression, the function’s result is the value of that expression. With multiple expressions, the function’s result is a tuple. return # returns None return 1 # returns 1 return 1, 2 # returns (1, 2) Expression statements An expression statement evaluates an expression and discards its result. ExprStmt = Expression . Any expression may be used as a statement, but an expression statement is most often used to call a function for its side effects. list.append(1) If statements An if statement evaluates an expression (the condition), then, if the truth value of the condition is True, executes a list of statements. IfStmt = 'if' Test ':' Suite {'elif' Test ':' Suite} ['else' ':' Suite] . Example: if score >= 100: print("You win!") return An if statement may have an else block defining a second list of statements to be executed if the condition is false. if score >= 100: print("You win!") return else: print("Keep trying...") continue It is common for the else block to contain another if statement. To avoid increasing the nesting depth unnecessarily, the else and following if may be combined as elif: if x > 0: result = 1 elif x < 0: result = -1 else: result = 0 An if statement is permitted only within a function definition. An if statement at top level results in a static error. For loops A for loop evaluates its operand, which must be an iterable value. Then, for each element of the iterable’s sequence, the loop assigns the successive element values to one or more variables and executes a list of statements, the loop body.
ForStmt = 'for' LoopVariables 'in' Expression ':' Suite . Example: for x in range(10): print(10) The assignment of each value to the loop variables follows the same rules as an ordinary assignment. In this example, two-element lists are repeatedly assigned to the pair of variables (a, i): for a, i in [["a", 1], ["b", 2], ["c", 3]]: print(a, i) # prints "a 1", "b 2", "c 3" Because Skylark loops always iterate over a finite sequence, they are guaranteed to terminate, unlike loops in most languages which can execute an arbitrary and perhaps unbounded number of iterations. Within the body of a for loop, break and continue statements may be used to stop the execution of the loop or advance to the next iteration. In Skylark, a for loop is permitted only within a function definition. A for loop at top level results in a static error. Break and Continue The break and continue statements terminate the current iteration of a for loop. Whereas the continue statement resumes the loop at the next iteration, a break statement terminates the entire loop. BreakStmt = 'break' . ContinueStmt = 'continue' . Example: for x in range(10): if x%2 == 1: continue # skip odd numbers if x > 7: break # stop at 8 print(x) # prints "0", "2", "4", "6" Both statements affect only the innermost lexically enclosing loop. It is a static error to use a break or continue statement outside a loop. Load statements The load statement loads another Skylark module, extracts one or more values from it, and binds them to names in the current module. Syntactically, a load statement looks like a function call load(...). LoadStmt = 'load' '(' string {',' [identifier '='] string} [','] ')' . A load statement requires at least two “arguments”. The first must be a literal string; it identifies the module to load. Its interpretation is determined by the application into which the Skylark interpreter is embedded, and is not specified here. During execution, the application determines what action to take for a load statement. A typical implementation locates and executes a Skylark file, populating a cache of files executed so far to avoid duplicate work, to obtain a module, which is a mapping from global names to values. The remaining arguments are a mixture of literal strings, such as "x", or named literal strings, such as y="x". The literal string ( "x"), which must denote a valid identifier not starting with _, specifies the name to extract from the loaded module. In effect, names starting with _ are not exported. The name ( y) specifies the local name; if no name is given, the local name matches the quoted name. load("module.sky", "x", "y", "z") # assigns x, y, and z load("module.sky", "x", y2="y", "z") # assigns x, y2, and z A load statement within a function is a static error. Module execution Each Skylark file defines a module, which is a mapping from the names of global variables to their values. When a Skylark file is executed, whether directly by the application or indirectly through a load statement, a new Skylark thread is created, and this thread executes all the top-level statements in the file. Because if-statements and for-loops cannot appear outside of a function, control flows from top to bottom. If execution reaches the end of the file, module initialization is successful. At that point, the value of each of the module’s global variables is frozen, rendering subsequent mutation impossible. The module is then ready for use by another Skylark thread, such as one executing a load statement. 
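As an illustrative sketch (the file name config.sky and its globals are invented), here is a module whose top-level statements compute its exported values before they are frozen:

# config.sky
def _ports(n):
    return [8000 + i for i in range(n)]

PORTS = _ports(3)   # [8000, 8001, 8002], computed during module initialization
NAME = "demo"
# After initialization completes, PORTS and NAME are frozen;
# a later PORTS.append(8003) in a loading module would be a dynamic error.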
A thread executing a load statement may access values or call functions defined in the loaded module. A Skylark thread may carry state on behalf of the application into which it is embedded, and application-defined functions may behave differently depending on this thread state. Because module initialization always occurs in a new thread, thread state is never carried from a higher-level module into a lower-level one. The initialization behavior of a module is thus independent of whichever module triggered its initialization. If a Skylark thread encounters an error, execution stops and the error is reported to the application, along with a backtrace showing the stack of active function calls at the time of the error. If an error occurs during initialization of a Skylark module, any active load statements waiting for initialization of the module also fail. Skylark provides no mechanism by which errors can be handled within the language. Built-in constants and functions The outermost block of the Skylark environment is known as the “universe” block. It defines a number of fundamental values and functions needed by all Skylark programs, such as None, True, False, and len. These names are not reserved words so Skylark programs are free to redefine them in a smaller block such as a function body or even at the top level of a module. However, doing so may be confusing to the reader. Nonetheless, this rule permits names to be added to the universe block in later versions of the language without breaking existing programs. None None is the distinguished value of the type NoneType. True and False True and False are the two values of type bool. any any(x) returns True if any element of the iterable sequence x is true. If the iterable is empty, it returns False. all all(x) returns False if any element of the iterable sequence x is false. If the iterable is empty, it returns True. bool bool(x) interprets x as a Boolean value, True or False. With no argument, bool() returns False. dict dict creates a dictionary. It accepts up to one positional argument, which is interpreted as an iterable of two-element sequences (pairs), each specifying a key/value pair in the resulting dictionary. dict also accepts any number of keyword arguments, each of which specifies a key/value pair in the resulting dictionary; each keyword is treated as a string. dict() # {}, empty dictionary dict([(1, 2), (3, 4)]) # {1: 2, 3: 4} dict([(1, 2), ["a", "b"]]) # {1: 2, "a": "b"} dict(one=1, two=2) # {"one": 1, "two": 2} dict([(1, 2)], x=3) # {1: 2, "x": 3} With no arguments, dict() returns a new empty dictionary. dict(x) where x is a dictionary returns a new copy of x. dir dir(x) returns a list of the names of the attributes (fields and methods) of its operand. The attributes of a value x are the names f such that x.f is a valid expression. For example, dir("hello") # ['capitalize', 'count', ...], the methods of a string Several types known to the interpreter, such as list, string, and dict, have methods, but none have fields. However, an application may define types with fields that may be read or set by statements such as these: y = x.f x.f = y enumerate enumerate(x) returns a list of (index, value) pairs, each containing successive values of the iterable sequence x and the index of the value within the sequence. The optional second parameter, start, specifies an integer value to add to each index.
enumerate(["zero", "one", "two"]) # [(0, "zero"), (1, "one"), (2, "two")] enumerate(["one", "two"], 1) # [(1, "one"), (2, "two")] getattr getattr(x, name) returns the value of the attribute (field or method) of x named name. It is a dynamic error if x has no such attribute. getattr(x, "f") is equivalent to x.f. getattr("banana", "split")("a") # ["b", "n", "n", ""], equivalent to "banana".split("a") hasattr hasattr(x, name) reports whether x has an attribute (field or method) named name. hash hash(x) returns an integer hash value for a string x such that x == y implies hash(x) == hash(y). int int(x[, base]) interprets its argument as an integer. If x is an int, the result is x. If x is a bool, the result is 0 for False or 1 for True. If x is a string, it is interpreted as a sequence of digits in the specified base, decimal by default. If base is zero, x is interpreted like an integer literal, the base being inferred from an optional base marker such as 0b, 0o, or 0x preceding the first digit. These markers may also be used if base is the corresponding base. Irrespective of base, the string may start with an optional + or int("21") # 21 int("1234", 16) # 4660 int("0x1234", 16) # 4660 int("0x1234", 0) # 4660 int("0x1234") # error (invalid base 10 number) len len(x) returns the number of elements in its argument. It is a dynamic error if its argument is not a sequence. list list constructs a list. list(x) returns a new list containing the elements of the iterable sequence x. With no argument, list() returns a new empty list. max max(x) returns the greatest element in the iterable sequence x. It is an error if any element does not support ordered comparison, or if the sequence is empty. The optional named parameter key specifies a function to be applied to each element prior to comparison. max([3, 1, 4, 1, 5, 9]) # 9 max("two", "three", "four") # "two", the lexicographically greatest max("two", "three", "four", key=len) # "three", the longest min min(x) returns the least element in the iterable sequence x. It is an error if any element does not support ordered comparison, or if the sequence is empty. min([3, 1, 4, 1, 5, 9]) # 1 min("two", "three", "four") # "four", the lexicographically least min("two", "three", "four", key=len) # "two", the shortest print(*args, **kwargs) prints its arguments, followed by a newline. Arguments are formatted as if by str(x) and separated with a space. Keyword arguments are preceded by their name. Example: print(1, "hi", x=3) # "1 hi x=3\n" Typically the formatted string is printed to the standard error file, but the exact behavior is a property of the Skylark thread and is determined by the host application. range range returns an immutable sequence of integers defined by the specified interval and stride. range(stop) # equivalent to range(0, stop) range(start, stop) # equivalent to range(start, stop, 1) range(start, stop, step) range requires between one and three integer arguments. With one argument, range(stop) returns the ascending sequence of non-negative integers less than stop. With two arguments, range(start, stop) returns only integers not less than start. With three arguments, range(start, stop, step) returns integers formed by successively adding step to start until the value meets or passes stop. A call to range fails if the value of step is zero. A call to range does not materialize the entire sequence, but returns a fixed-size value of type "range" that represents the parameters that define the sequence. 
The range value is iterable and may be indexed efficiently. list(range(10)) # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] list(range(3, 10)) # [3, 4, 5, 6, 7, 8, 9] list(range(3, 10, 2)) # [3, 5, 7, 9] list(range(10, 3, -2)) # [10, 8, 6, 4] The len function applied to a range value returns its length. The truth value of a range value is True if its length is non-zero. Range values are comparable: two range values compare equal if they denote the same sequence of integers, even if they were created using different parameters. Range values are not hashable. The str function applied to a range value yields a string of the form range(10), range(1, 10), or range(1, 10, 2). The x in y operator, where y is a range, reports whether x is equal to some member of the sequence y; the operation fails unless x is a number. repr repr(x) formats its argument as a string. All strings in the result are double-quoted. repr(1) # '1' repr("x") # '"x"' repr([1, "x"]) # '[1, "x"]' reversed reversed(x) returns a new list containing the elements of the iterable sequence x in reverse order. reversed(range(5)) # [4, 3, 2, 1, 0] reversed({"one": 1, "two": 2}.keys()) # ["two", "one"] sorted sorted(x) returns a new list containing the elements of the iterable sequence x, in sorted order. The sort algorithm is stable. sorted([3, 1, 4, 1, 5, 9]) # [1, 1, 3, 4, 5, 9] sorted(["two", "three", "four"]) # ["three", "two", "four"] str str(x) formats its argument as a string. If x is a string, the result is x (without quotation). All other strings, such as elements of a list of strings, are double-quoted. str(1) # '1' str("x") # 'x' str([1, "x"]) # '[1, "x"]' tuple tuple(x) returns a tuple containing the elements of the iterable x. With no arguments, tuple() returns the empty tuple. type type(x) returns a string describing the type of its operand. type(None) # "NoneType" type(0) # "int" zip zip() returns a new list of n-tuples formed from corresponding elements of each of the n iterable sequences provided as arguments to zip. That is, the first tuple contains the first element of each of the sequences, the second element contains the second element of each of the sequences, and so on. The result list is only as long as the shortest of the input sequences. zip() # [] zip(range(5)) # [(0,), (1,), (2,), (3,), (4,)] zip(range(5), "abc") # [(0, "a"), (1, "b"), (2, "c")] Built-in methods This section lists the methods of built-in types. Methods are selected using dot expressions. For example, strings have a count method that counts occurrences of a substring; "banana".count("a") yields 3. As with built-in functions, built-in methods accept only positional arguments except where noted. The parameter names serve merely as documentation. dict·get D.get(key[, default]) returns the dictionary value corresponding to the given key. If the dictionary contains no such value, get returns None, or the value of the optional default parameter if present. get fails if key is unhashable, or the dictionary is frozen or has active iterators. x = {"one": 1, "two": 2} x.get("one") # 1 x.get("three") # None x.get("three", 0) # 0 dict·items D.items() returns a new list of key/value pairs, one per element in dictionary D, in the same order as they would be returned by a for loop. x = {"one": 1, "two": 2} x.items() # [("one", 1), ("two", 2)] dict·keys D.keys() returns a new list containing the keys of dictionary D, in the same order as they would be returned by a for loop. 
x = {"one": 1, "two": 2} x.keys() # ["one", "two"] dict·pop D.pop(key[, default]) returns the value corresponding to the specified key, and removes it from the dictionary. If the dictionary contains no such value, and the optional default parameter is present, pop returns that value; otherwise, it fails. pop fails if key is unhashable, or the dictionary is frozen or has active iterators. x = {"one": 1, "two": 2} x.pop("one") # 1 x # {"two": 2} x.pop("three", 0) # 0 x.pop("four") # error: missing key dict·popitem D.popitem() returns the first key/value pair, removing it from the dictionary. popitem fails if the dictionary is empty, frozen, or has active iterators. x = {"one": 1, "two": 2} x.popitem() # ("one", 1) x.popitem() # ("two", 2) x.popitem() # error: empty dict dict·setdefault D.setdefault(key[, default]) returns the dictionary value corresponding to the given key. If the dictionary contains no such value, setdefault, like get, returns None or the value of the optional default parameter if present; setdefault additionally inserts the new key/value entry into the dictionary. setdefault fails if the key is unhashable, or if the dictionary is frozen or has active iterators. x = {"one": 1, "two": 2} x.setdefault("one") # 1 x.setdefault("three", 0) # 0 x # {"one": 1, "two": 2, "three": 0} x.setdefault("four") # None x # {"one": 1, "two": 2, "three": None} dict·update D.update([pairs][, name=value[, ...]) makes a sequence of key/value insertions into dictionary D, then returns None. If the positional argument pairs is present, it must be None, another dict, or some other iterable. If it is another dict, then its key/value pairs are inserted into D. If it is an iterable, it must provide a sequence of pairs (or other iterables of length 2), each of which is treated as a key/value pair to be inserted into D. For each name=value argument present, the name is converted to a string and used as the key for an insertion into D, with its corresponding value being value. update fails if the dictionary is frozen or has active iterators. x = {} x.update([("a", 1), ("b", 2)], c=3) x.update({"d": 4}) x.update(e=5) x # {"a": 1, "b": "2", "c": 3, "d": 4, "e": 5} dict·values D.values() returns a new list containing the dictionary’s values, in the same order as they would be returned by a for loop over the dictionary. x = {"one": 1, "two": 2} x.values() # [1, 2] list·append L.append(x) appends x to the list L, and returns None. append fails if the list is frozen or has active iterators. x = [] x.append(1) # None x.append(2) # None x.append(3) # None x # [1, 2, 3] list·clear L.clear() removes all the elements of the list L and returns None. It fails if the list is frozen or if there are active iterators. x = [1, 2, 3] x.clear() # None x # [] list·extend L.extend(x) appends the elements of x, which must be iterable, to the list L, and returns None. extend fails if x is not iterable, or if the list L is frozen or has active iterators. x = [] x.extend([1, 2, 3]) # None x.extend(["foo"]) # None x # [1, 2, 3, "foo"] list·index L.insert(x[, start[, end]]) finds x within the list L and returns its index. The optional start and end parameters restrict the portion of list L that is inspected. If provided and not None, they must be list indices of type int. If an index is negative, len(L) is effectively added to it, then if the index is outside the range [0:len(L)], the nearest value within that range is used; see Indexing. insert fails if x is not found in L, or if start or end is not a valid index ( int or None). 
x = ["b", "a", "n", "a", "n", "a"] x.index("a") # 1 (bAnana) x.index("a", 2) # 3 (banAna) x.index("a", -2) # 5 (bananA) list·insert L.insert(i, x) inserts the value x in the list L at index i, moving higher-numbered elements along by one. It returns None. As usual, the index i must be an int. If its value is negative, the length of the list is added, then its value is clamped to the nearest value in the range [0:len(L)] to yield the effective index. insert fails if the list is frozen or has active iterators. x = ["b", "c", "e"] x.insert(0, "a") # None x.insert(-1, "d") # None x # ["a", "b", "c", "d", "e"] list·pop L.pop([index]) removes and returns the last element of the list L, or, if the optional index is provided, at that index. insert fails if the index is negative or not less than the length of the list, of if the list is frozen or has active iterators. x = [1, 2, 3] x.pop() # 3 x.pop() # 2 x # [1] list·remove L.remove(x) removes the first occurrence of the value x from the list L, and returns None. remove fails if the list does not contain x, is frozen, or has active iterators. x = [1, 2, 3, 2] x.remove(2) # None (x == [1, 3, 2]) x.remove(2) # None (x == [1, 3]) x.remove(2) # error: element not found string·capitalize S.capitalize() returns a copy of string S with all Unicode letters that begin words changed to their title case. "hello, world!".capitalize() # "Hello, World!" string·count S.count(sub[, start[, end]]) returns the number of occcurences of sub within the string S, or, if the optional substring indices start and end are provided, within the designated substring of S. They are interpreted according to Skylark’s indexing conventions. "hello, world!".count("o") # 2 "hello, world!".count("o", 7, 12) # 1 (in "world") string·endswith S.endswith(suffix) reports whether the string S has the specified suffix. "filename.sky".endswith(".sky") # True string·find S.find(sub[, start[, end]]) returns the index of the first occurrence of the substring sub within S. If either or both of start or end are specified, they specify a subrange of S to which the search should be restricted. They are interpreted according to Skylark’s indexing conventions. If no occurrence is found, found returns -1. "bonbon".find("on") # 1 "bonbon".find("on", 2) # 4 "bonbon".find("on", 2, 5) # -1 string·format S.format(*args, **kwargs) returns a version of the format string S in which bracketed portions {...} are replaced by arguments from args and kwargs. Within the format string, a pair of braces `` is treated as a literal open or close brace. Each unpaired open brace must be matched by a close brace }. The optional text between corresponding open and close braces specifies which argument to use. {} {field} The field name may be either a decimal number or a keyword. A number is interpreted as the index of a positional argument; a keyword specifies the value of a keyword argument. If all the numeric field names form the sequence 0, 1, 2, and so on, they may be omitted and those values will be implied; however, the explicit and implicit forms may not be mixed. "a{x}b{y}c{}".format(1, x=2, y=3) # "a2b3c1" "a{}b{}c".format(1, 2) # "a1b2c" "({1}, {0})".format("zero", "one") # "(one, zero)" string·index S.index(sub[, start[, end]]) returns the index of the first occurrence of the substring sub within S, like S.find, except that if the substring is not found, the operation fails. 
"bonbon".index("on") # 1 "bonbon".index("on", 2) # 4 "bonbon".index("on", 2, 5) # error: substring not found (in "nbo") string·isalnum S.isalnum() reports whether the string S is non-empty and consists only Unicode letters and digits. "base64".isalnum() # True "Catch-22".isalnum() # False string·isalpha S.isalpha() reports whether the string S is non-empty and consists only of Unicode letters. "ABC".isalpha() # True "Catch-22".isalpha() # False "".isalpha() # False string·isdigit S.isdigit() reports whether the string S is non-empty and consists only of Unicode digits. "123".isdigit() # True "Catch-22".isdigit() # False "".isdigit() # False string·islower S.islower() reports whether the string S contains at least one cased Unicode letter, and all such letters are lowercase. "hello, world".islower() # True "Catch-22".islower() # False "123".islower() # False string·isspace S.isspace() reports whether the string S is non-empty and consists only of Unicode spaces. " ".isspace() # True "\r\t\n".isspace() # True "".isspace() # False string·istitle S.istitle() reports whether the string S contains at least one cased Unicode letter, and all such letters that begin a word are in title case. "Hello, World!".istitle() # True "Catch-22".istitle() # True "HAL-9000".istitle() # False "123".istitle() # False string·isupper S.isupper() reports whether the string S contains at least one cased Unicode letter, and all such letters are uppercase. "HAL-9000".isupper() # True "Catch-22".isupper() # False "123".isupper() # False string·join S.join(iterable) returns the string formed by concatenating each element of its argument, with a copy of the string S between successive elements. The argument must be an iterable whose elements are strings. ", ".join(["one", "two", "three"]) # "one, two, three" "a".join("ctmrn") # "catamaran" string·lower S.lower() returns a copy of the string S with letters converted to lowercase. "Hello, World!".lower() # "hello, world!" string·lstrip S.lstrip() returns a copy of the string S with leading whitespace removed. " hello ".lstrip() # " hello" string·partition S.partition(x) splits string S into three parts and returns them as a tuple: the portion before the first occurrence of string x, x itself, and the portion following it. If S does not contain x, partition returns (S, "", ""). partition fails if x is not a string, or is the empty string. "one/two/three".partition("/") # ("one", "/", "two/three") string·replace S.replace(old, new[, count]) returns a copy of string S with all occurrences of substring old replaced by new. If the optional argument count, which must be an int, is non-negative, it specifies a maximum number of occurrences to replace. "banana".replace("a", "o") # "bonono" "banana".replace("a", "o", 2) # "bonona" string·rfind S.rfind(sub[, start[, end]]) returns the index of the substring sub within S, like S.find, except that rfind returns the index of the substring’s last occurrence. "bonbon".rfind("on") # 4 "bonbon".rfind("on", None, 5) # 1 "bonbon".rfind("on", 2, 5) # -1 string·rindex S.rindex(sub[, start[, end]]) returns the index of the substring sub within S, like S.index, except that rindex returns the index of the substring’s last occurrence. "bonbon".rindex("on") # 4 "bonbon".rindex("on", None, 5) # 1 (in "bonbo") "bonbon".rindex("on", 2, 5) # error: substring not found (in "nbo") string·rpartition S.rpartition(x) is like partition, but splits S at the last occurrence of x. 
"one/two/three".partition("/") # ("one/two", "/", "three") string·rsplit S.rsplit([sep[, maxsplit]]) splits a string into substrings like S.split, except that when a maximum number of splits is specified, rsplit chooses the rightmost splits. "banana".rsplit("n") # ["ba", "a", "a"] "banana".rsplit("n", 1) # ["bana", "a"] "one two three".rsplit(None, 1) # ["one two", "three"] string·rstrip S.rstrip() returns a copy of the string S with trailing whitespace removed. " hello ".rstrip() # "hello " string·split S.split([sep [, maxsplit]]) returns the list of substrings of S, splitting at occurrences of the delimiter string sep. Consecutive occurrences of sep are considered to delimit empty strings, so 'food'.split('o') returns ['f', '', 'd']. Splitting an empty string with a specified separator returns ['']. If sep is the empty string, split fails. If sep is not specified or is None, split uses a different algorithm: it removes all leading spaces from S (or trailing spaces in the case of rsplit), then splits the string around each consecutive non-empty sequence of Unicode white space characters. If S consists only of white space, split returns the empty list. If maxsplit is given and non-negative, it specifies a maximum number of splits. "one two three".split() # ["one", "two", "three"] "one two three".split(" ") # ["one", "two", "", "three"] "one two three".split(None, 1) # ["one", "two three"] "banana".split("n") # ["ba", "a", "a"] "banana".split("n", 1) # ["ba", "ana"] string·splitlines S.splitlines([keepends]) returns a list whose elements are the successive lines of S, that is, the strings formed by splitting S at line terminators (currently assumed to be a single newline, \n, regardless of platform). The optional argument, keepends, is interpreted as a Boolean. If true, line terminators are preserved in the result, though the final element does not necessarily end with a line terminator. "one\n\ntwo".splitlines() # ["one", "", "two"] "one\n\ntwo".splitlines(True) # ["one\n", "\n", "two"] string·startswith S.startswith(suffix) reports whether the string S has the specified prefix. "filename.sky".startswith("filename") # True string·strip S.strip() returns a copy of the string S with leading and trailing whitespace removed. " hello ".strip() # "hello" string·title S.title() returns a copy of the string S with letters converted to titlecase. Letters are converted to uppercase at the start of words, lowercase elsewhere. "hElLo, WoRlD!".title() # "Hello, World!" string·upper S.upper() returns a copy of the string S with letters converted to uppercase. "Hello, World!".upper() # "HELLO, WORLD!" Grammar reference File = {Statement | newline} eof . Statement = DefStmt | IfStmt | ForStmt | SimpleStmt .' | '<=' | '>=' | 'in' | 'not' 'in' | '|' | '&' | '-' | '+' | '*' | '%' | '/' | '//' . Expression = Test {',' Test} . # NOTE: trailing comma permitted only when within [...] or (...). LoopVariables = PrimaryExpr {',' PrimaryExpr} . Tokens: - spaces: newline, eof, indent, outdent. - identifier. - literals: string, int. - plus all quoted tokens such as ‘+=’, ‘return’. Notes: - Ambiguity is resolved using operator precedence. - The grammar does not enforce the legal order of params and args, nor that the first compclause must be a ‘for’.
https://docs.bazel.build/versions/0.17.2/skylark/spec.html
CC-MAIN-2020-24
refinedweb
14,099
54.63
I’m currently working on some logging code that supposed to – among other things – print information about the calling function. This should be relatively easy, standard C++ has a type_info class. This contains the name of the typeid’d class/function/etc. but it’s mangled. It’s not very useful. I.e. typeid(std::vector<int>).name() returns St6vectorIiSaIiEE. Is there a way to produce something useful from this? Like std::vector<int> for the above example. If it only works for non-template classes, that’s fine too. The solution should work for gcc, but it would be better if I could port it. It’s for logging so it’s not so important that it can’t be turned off, but it should be helpful for debugging. Answer Given the attention this question / answer receives, and the valuable feedback from GManNickG, I have cleaned up the code a little bit. Two versions are given: one with C++11 features and another one with only C++98 features. In file type.hpp #ifndef TYPE_HPP #define TYPE_HPP #include <string> #include <typeinfo> std::string demangle(const char* name); template <class T> std::string type(const T& t) { return demangle(typeid(t).name()); } #endif In file type.cpp (requires C++11) #include "type.hpp" #ifdef __GNUG__ #include <cstdlib> #include <memory> #include <cxxabi.h> std::string demangle(const char* name) { int status = -4; // some arbitrary value to eliminate the compiler warning // enable c++11 by passing the flag -std=c++11 to g++ std::unique_ptr<char, void(*)(void*)> res { abi::__cxa_demangle(name, NULL, NULL, &status), std::free }; return (status==0) ? res.get() : name ; } #else // does nothing if not g++ std::string demangle(const char* name) { return name; } #endif Usage: #include <iostream> #include "type.hpp" struct Base { virtual ~Base() {} }; struct Derived : public Base { }; int main() { Base* ptr_base = new Derived(); // Please use smart pointers in YOUR code! std::cout << "Type of ptr_base: " << type(ptr_base) << std::endl; std::cout << "Type of pointee: " << type(*ptr_base) << std::endl; delete ptr_base; } It prints: Type of ptr_base: Base* Type of pointee: Derived Tested with g++ 4.7.2, g++ 4.9.0 20140302 (experimental), clang++ 3.4 (trunk 184647), clang 3.5 (trunk 202594) on Linux 64 bit and g++ 4.7.2 (Mingw32, Win32 XP SP2). If you cannot use C++11 features, here is how it can be done in C++98, the file type.cpp is now: #include "type.hpp" #ifdef __GNUG__ #include <cstdlib> #include <memory> #include <cxxabi.h> struct handle { char* p; handle(char* ptr) : p(ptr) { } ~handle() { std::free(p); } }; std::string demangle(const char* name) { int status = -4; // some arbitrary value to eliminate the compiler warning handle result( abi::__cxa_demangle(name, NULL, NULL, &status) ); return (status==0) ? result.p : name ; } #else // does nothing if not g++ std::string demangle(const char* name) { return name; } #endif (Update from Sep 8, 2013) The accepted answer (as of Sep 7, 2013), when the call to abi::__cxa_demangle() is successful, returns a pointer to a local, stack allocated array… ouch! Also note that if you provide a buffer, abi::__cxa_demangle() assumes it to be allocated on the heap. Allocating the buffer on the stack is a bug (from the gnu doc): “If output_buffer is not long enough, it is expanded using realloc.” Calling realloc() on a pointer to the stack… ouch! (See also Igor Skochinsky‘s kind comment.) 
You can easily verify both of these bugs: just reduce the buffer size in the accepted answer (as of Sep 7, 2013) from 1024 to something smaller, for example 16, and give it something with a name not longer than 15 (so realloc() is not called). Still, depending on your system and the compiler optimizations, the output will be: garbage / nothing / program crash. To verify the second bug: set the buffer size to 1 and call it with something whose name is longer than 1 character. When you run it, the program almost assuredly crashes as it attempts to call realloc() with a pointer to the stack. (The old answer from Dec 27, 2010) Important changes made to KeithB’s code: the buffer has to be either allocated by malloc or specified as NULL. Do NOT allocate it on the stack. It’s wise to check that status as well. I failed to find HAVE_CXA_DEMANGLE. I check __GNUG__ although that does not guarantee that the code will even compile. Anyone has a better idea? #include <cxxabi.h> const string demangle(const char* name) { int status = -4; char* res = abi::__cxa_demangle(name, NULL, NULL, &status); const char* const demangled_name = (status==0)?res:name; string ret_val(demangled_name); free(res); return ret_val; }
https://www.tutorialguruji.com/cpp/unmangling-the-result-of-stdtype_infoname/
CC-MAIN-2021-17
refinedweb
770
65.62
For other versions, see the Versioned plugin docs. For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in Github. For the list of Elastic supported plugins, please consult the Elastic Support Matrix. Read messages as events over the network via udp. The only required configuration item is port, which specifies the udp port logstash will listen on for event streams. This plugin adds a field containing the source IP address of the UDP packet. By default, the IP address is stored in the host field. When Elastic Common Schema (ECS) is enabled (in ecs_compatibility), the source IP address is stored in the [host][ip] field. You can customize the field name using the source_ip_fieldname. See ecs_compatibility for more information. This plugin supports the following configuration options plus the Common Options described later. Also see Common Options for a list of options supported by all input plugins. The maximum packet size to read from the network - Value type is string Supported values are: disabled: unstructured connection metadata added at root level v1: structured connection metadata added under ECS compliant namespaces. Table 1. Metadata Location by ecs_compatibility value The port which logstash will listen on. Remember that ports less than 1024 (privileged ports) may require root or elevated privileges to use. This is the number of unprocessed UDP packets you can hold in memory before packets will start dropping. The socket receive buffer size in bytes. If option is not set, the operating system default is used. The operating system will use the max allowed value if receive_buffer_bytes is larger than allowed. Consult your operating system documentation if you need to increase this max allowed value. - Value type is string - Default value could be "host"or [host][ip]depending on the value of ecs_compatibility The name of the field where the source IP address will be stored. See Event Metadata and the Elastic Common Schema (ECS) for more information on how ECS compatibility settings affect these defaults. Example: input { udp { source_ip_fieldname => "[appliance][monitoring][ip]" } } inputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs. input { udp {.
https://www.elastic.co/guide/en/logstash/7.15/plugins-inputs-udp.html
CC-MAIN-2022-05
refinedweb
363
56.96
A Swift-Baked Lightweight HTTP Networking Library in addition with the json parsing facility with the help of ObjectMapper. It frees you from the stress of json parsing – Every API call returns you an object or an array of objects. Less Coding – More Efficiency. Requirements - iOS 8.0+ - Xcode 8.3+ - Swift 3.0+ Dependencies The foremost reason is to call it a Lightweight Networking Library is that it’s built on the core Swift Libraries except the JSON parsing thing. I have used ObjectMapper to map JSON data to User-Defined Model Objects. One more Library is Reachability. This library will check if the host is reachable or not and moreover if it’s reachable through Wifi or Cellular Data. - ObjectMapper – The JSON to Object Mapping Library - Reachability.swift – Reachability.swift is a replacement for Apple’s Reachability sample, re-written in Swift with closures. Installation CocoaPods CocoaPods is a dependency manager for Cocoa projects. You can install it with the following command: $ gem install cocoapods CocoaPods 1.3+ is required to build ASNet. To integrate Alamofire into your Xcode project using CocoaPods, specify it in your Podfile: source '' platform :ios, '10.0' use_frameworks! target '<Your Target Name>' do pod 'ASNet', '~> 1.0' end Then, run the following command: $ pod install Updates in new version There are Two major updates in this version. - Add Custom Header (HTTPHeader) Usage Initialize import ASNet ASNet.shared.initialize(withHost: "", andBaseURL: "") The host parameter is to check if the host domain is reachable or not. And the baseURL is for the common partial url that’s going to be used ahead of all the api calls in a single app. You can initialize it in AppDelegate and then use it throughout the whole app. Sample Request There are only two methods to request to the Rest API and getting response - One for Json Object Response - The other for Json Array Response Json Object Response import ASNet // For Object Response asNet.fetchAPIDataWithJsonObjectResponse(endpointURL: apiUrl, httpMethod: .get, httpHeader: nil, parameters: nil, isMultiPart: false, filesWhenMultipart: nil, returningType: Lead.self) { (result) in switch result { case .success(let lead, let json): // lead is an object of Lead model. // json is the the json response of the request. this response is not needed unless you want to do something special with this. The library is parsing your data itself and returning you the object of the class type you are sending through parameters you want. break case .error(let errorTitle, let errorText): // you can edit this errortitle and errortext. break } } Json Array Response import ASNet // for array response asNet.fetchAPIDataWithJsonArrayResponse(endpointURL: apiUrl, httpMethod: .get, httpHeader: nil, parameters: nil, isMultiPart: false, filesWhenMultipart: nil, returningType: Lead.self) { (result) in switch result { case .success(let leads, let json): // leads is an array of Lead model Objects. // json is the the json response of the request. this response is not needed unless you want to do something special with this. The library is parsing your data itself and returning you the array of object of the class type you are sending through parameters you want. break case .error(let errorTitle, let errorText): // you can edit this errortitle and errortext. break } } Parameters in Depth - Endpoint URL – The parameter name defines it pretty well. This is the endpoint url we are gonna append with the base url. - Http Method – It’s pretty straightforward as well. 
You can make a post or get call by choosing one. It’s an Enum that’s defined as follows – public enum HTTPMethod: String { case get = "GET" case post = "POST" case put = "PUT" case delete = "DELETE" } - HTTPHeader – It’s a nullable(You can send nil instead of empty dictionary) Dictionary to send parameters – either in json or in post form pattern. public typealias HTTPHeader = [String: String] - Parameters – It’s a nullable(You can send nil instead of empty dictionary) Dictionary to send parameters – either in json or in post form pattern. public typealias Parameters = [String: Any] - isMultiPart – It’s boolean argument to indicate if the request includes any multipart file to send or not. It’s false by default. - filesWhenMultipart – It’s a parameter of ImageFileArraytype. ImageFileArray is an array of ImageFiletype. If you have to send one or more images, you have to make an array of ImageFile type as follows – var imageFileList: ImageFileArray = [] let imageFile: = ImageFile(image: profieImageView.image!, fileKey: "photo", fileName: "profile_image", mimeType: .jpg) imageFileList.append(imageFile) // append as many ImageFile as you want and then send it through filesWhenMultipart in the methods. ImageFilehas some more supporting objects. Those are as follows – public typealias ImageFileArray = [ImageFile] open class ImageFile: NSObject { public var imageData: Data public var fileKey: String public var fileName: String public var mimeType: MimeType public init?(image: UIImage, fileKey: String, fileName: String, mimeType: MimeType) { self.fileKey = fileKey self.fileName = fileName self.mimeType = mimeType switch mimeType { case .jpg: guard let data = UIImageJPEGRepresentation(image, 0.7) else { return nil } self.imageData = data break case .png: guard let data = UIImagePNGRepresentation(image) else { return nil } self.imageData = data break } super.init() } } ```Swift public enum MimeType: String { case jpg = "image/jpg" case png = "image/png" } - returningType – Here comes the necessity of ObjectMapper. To use this library you have to let your model class inherit from the class Mappable (From ObjectMapper). My library will automatically map the response json result to your desired model object. I am providing a simple example here – [ { "id": 1, "title": "English" }, { "id": 2, "title": "Chinese" }, { "id": 3, "title": "Malay" }, { "id": 4, "title": "Hindi" }, { "id": 5, "title": "Tamil" }, { "id": 6, "title": "Korean" }, { "id": 7, "title": "Japanese" } ] To parse the json above you need to write a model as follows – import ObjectMapper class Language: Mappable { private struct Key { let id_key = "id" let title_key = "title" } internal var id: Int? internal var title: String? required internal init(map: Map) { mapping(map: map) } internal func mapping(map: Map) { let key = Key.init() id <- map[key.id_key] title <- map[key.title_key] } } Now call either of the api methods discussed above according to your need. You must have the query about the `result` thingy inside the closure of the method. Right? Actually it's an Enum that helps to accumulate the Successful and Failed response in a single object. ```Swift public enum JsonObjectResult<T> { case success(T, Any) case error(String, String) } public enum JsonArrayResult<T> { case success([T], Any) case error(String, String) } Load Image from Image URL You can load images from url through this library as well. 
import ASNet // To get an image from url asNet.loadImage(fromUrl: imageUrl, usingCache: true, onSuccess: { (image) in // do whatever you want with the image }, onError: { // Error downloading image. Show any alert or something. }) You have to make usingCache true if you need it to get from the cache. That’s it. Pretty straightforward! Some More Customizable Stuff TimeOut Interval You can change the TimeOut Interval for a request. import ASNet ASNet.shared.networkService.timeoutIntervalForRequest = 60 Error Texts open class ASNetErrorTexts { public var requestErrorTitle = "Bad Request!" public var responseErrorTitle = "Parse Error!" public var serverErrorTitle = "Server Error!" public var networkErrorTitle = "Network Error!" public var timeoutErrorTitle = "Network Error!" public var customErrorTitle = "Error!" public var parsingErrorTitle = "Parsing Error!" public var requestErrorText = "Please, check your request parameters!" public var responseErrorText = "Something went wrong with parsing response data!" public var serverErrorText = "Something went wrong on the Server Side." public var networkErrorText = "Network is not connected!" public var timeoutErrorText = "Request timed out! Please check your internet connection!" public var parsingErrorText = "Data parsing failed because of invalid format!" public init() { } } You can change the Error Texts that you are getting through the result object. import ASNet ASNet.shared.networkService.errorTexts.customErrorTitle = "Yo! It's a custom error title!" Upcoming Versions There are a lot to improve in this library. It’s just the beginning. The upcoming version will have the facility of sending headers through the methods. Keep your eyes on that. Keep exploring this version till then. Author Amit Sen – The Whole Work – Ronstorm License This project is licensed under the MIT License – see the LICENSE.md file for details Acknowledgments - Thanks to Hearst Digital Innovation Group (DIG) for the huge help providing their library called ObjectMapper. - Thanks to Stackoverflow to help me solve a lot of complex stuff while making this library. - Thanks to the colleagues and friends I worked with for supporting me. - Finally Thanks to Github. Latest podspec { "name": "ASNet", "version": "1.0", "summary": "A swift-baked networking library wrapped over URLSession thingy.", "description": "A Swift-Baked lightweight network library in addition with the json parsing facility with the help of ObjectMapper. It frees you from the stress of json parsing - Every API call returns you an object or an array of objects. Less Coding - More Efficiency. Happy coding.", "homepage": "", "license": { "type": "MIT", "file": "LICENSE" }, "authors": { "Amit Sen": "[email protected]" }, "source": { "git": "", "tag": "1.0" }, "pod_target_xcconfig": { "SWIFT_VERSION": "4.0" }, "platforms": { "ios": "11.0" }, "source_files": "Source/*.swift", "dependencies": { "ObjectMapper": [] }, "pushed_with_swift_version": "3.0n4.0" } Sun, 12 Nov 2017 11:00:21 +0000
https://tryexcept.com/articles/cocoapod/asnet
CC-MAIN-2018-47
refinedweb
1,482
59.7
A tool for finding files in the filesystem Project description ff About ff is a tool for finding files in the filesystem. NOTE: ff is in the early stages of development, expect things to break and syntax to change. Summary ff is a tool for finding files in the filesystem that all share a set of common features. Its scope is similar to find(1) and fd(1) but it aims at being more accessible and easier to use than find and more versatile and powerful than fd. It is written in Python >= 3.6. Features - Search by file attributes. - Search in a wide variety of file metadata. - Simple yet powerful expression syntax. - Flexible output options. - Flexible sort options. - Extendable by user plugins. - Parallel search and processing. - Usable in scripts with a Python API. Examples To build and install ff simply type: $ python setup.py install or $ pip install find-ff Building with Cython is also supported. Cython >= 3.0 is required. Depending on the set of arguments this may offer a significant speed-up. $ python setup-cython.py install Python API You can use ff's query capabilities in your own scripts: from ff import Find for entry in Find("type=f git.tracked=yes", directories=["/home/user/project"], sort=["path"]): print(entry["relpath"]) Developing plugins and debug mode There is a template for new plugins to start from ( plugin_template.py) with exhaustive instructions and comments, so you can develop plugins for your own needs. Useful in that regard is ff's debug mode. It can be activated by executing the ff module. $ python -m ff --debug info,cache ... Debug mode produces lots of messages which can be limited to certain categories using the --debug category1,category2,... option. On top of that, debug mode activates many internal checks using assert(). Therefore, it is advisable to use debug mode during plugin development. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/find-ff/
CC-MAIN-2021-43
refinedweb
341
60.01
In the December Long Challenge there were a few questions that I got only partially correct because my code was not time-efficient and hit TLE. I want to know how to write code that avoids TLE and I need help; any kind of suggestion or advice is welcome. I am sharing my code for one particular problem. It gets TLE because of the two for loops inside the while loop, but that is all I know, and I want to know how to improve it. Here is one problem (SUBSPLAY):

my solution:

// headers required for cin/cout, std::string and std::count
#include <iostream>
#include <string>
#include <algorithm>
using namespace std;

int main()
{
    int t;
    //cout<<"enter total number of test cases ";
    cin >> t;
    while (t--)
    {
        int N;
        // cout<<"enter string length ";
        cin >> N;
        string S;
        // cout<<"enter string ";
        cin >> S;
        int i, j, c, K = 0, d = 0, min, found;   // found is declared but never used
        min = N;                                  // storing the size of the string in min
        for (i = 0; i < S.length(); ++i)          // loop from the first character of the string to the end
        {
            c = count(S.begin(), S.end(), S[i]);  // counting the characters equal to S[i]
            if (c == N)                           // if all characters of the string are the same
            {
                min = 1;
            }
            if (c > 1 && c != N)                  // character is not unique and not all characters are the same
            {
                for (j = i + 1; j < N; j++)       // another loop from i+1 to N-1 to find the minimum difference of indices
                {
                    if (S[i] == S[j])             // checking for a matching character
                    {
                        d = j - i;                // difference of indices
                        if (min > d)
                        {
                            min = d;
                            break;
                        }
                    }
                }
            }
        }
        // K will be the length of the string minus the minimum difference of indices of equal characters
        K = N - min;
        cout << K << endl;
    }
    return 0;
}
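Since the nested loop over j is what makes this quadratic, one common way to bring it down to a single pass is to remember the last index at which each character was seen and take the minimum gap on the fly. The sketch below is only an illustration of that idea, not an official editorial solution; it assumes the strings contain only lowercase letters 'a' to 'z' (the constraints are not quoted in this post) and it reproduces the same answer formula as the code above, K = N minus the minimum distance between equal characters.

#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    int t;
    cin >> t;
    while (t--)
    {
        int N;
        string S;
        cin >> N >> S;
        vector<int> last(26, -1);   // last index where each letter occurred, -1 = not seen yet
        int best = N;               // minimum distance between two equal characters (stays N if nothing repeats)
        for (int i = 0; i < N; ++i)
        {
            int c = S[i] - 'a';
            if (last[c] != -1 && i - last[c] < best)
                best = i - last[c]; // only the nearest previous occurrence can improve the minimum
            last[c] = i;
        }
        cout << N - best << endl;   // same formula as before: K = N - minimum gap
    }
    return 0;
}

This does O(N) work per test case instead of O(N^2), so the total work is proportional to the sum of the string lengths, which is usually what is needed to get rid of the TLE.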
https://discuss.codechef.com/t/how-to-create-time-efficient-code-to-avoid-tle/46868
CC-MAIN-2020-40
refinedweb
262
62.72
It was decided by the powers that we take an existing in-house application and release it externally. This application used a proprietary class library, which could not to be included. However, the application would need the same functionality in a new class library when released externally. Furthermore, the existing in-house class library was scheduled to be re-architected in the near future. The challenge was to maintain a single code base for the application to support both the existing and the new DLLs, and be flexible enough to support the yet-to-be-designed rewrite of the existing DLL. This seemed a perfect scenario to use the System.Reflection namespace and the GetInterface() method. This would allow me to create swappable class libraries for each implementation option, and access them in a type safe manner. The plan was to create an interface to describe the properties, methods, and events that will be exposed from the external class library. Any external dynamic linked library must implement this interface. We will then create a helper class library implementing the same interface. The helper class library will contain all the logic needed to load an external DLL implementing the interface. The main application will simply link to the helper class library, and remain oblivious to the logic needed to link specific external class libraries. We will use VB.NET 2005 to create a solution of several fairly generic projects. These projects will serve no useful purpose other than to demonstrate this concept. The first thing we need to do is create an interface that describes our swappable library. We will create a class library project named InterfaceDLL. This project contains a single interface named ILibrary. Public Interface ILibrary Function GetName() As String End Interface This interface states that any DLL to which our application intends to link must expose a GetName() function that returns a String data type. It is important to notice that we created a separate project for the interface. This is because the fully qualified name of the interface includes the project namespace. When other projects refer to this interface, they will refer to <InterfaceDLL.>ILibrary. If we just linked the interface across projects, it would still belong to the individual projects, and thus, not be the same interface. Next, we will create the first class library to use this interface. This will be the RedDLL project. It will contain the following code. Imports InterfaceDLL Public Class clsMe Implements ILibrary Public Function GetName() As String Implements ILibrary.GetName Return "RedDLL" End Function End Class This code imports the InterfaceDLL namespace, and instructs this class to implement the ILibrary interface. All we are required to implement is the GetName() function. Here, we are returning the library name, “RedDLL”. We will create a nearly duplicate BlueDLL project, using the same interface. In our test case, it is the same code, with the exception of returning “BlueDLL”. It is included in the sample solution. Next, we will create the helper class library project, in this case, ColorDLL. This project contains all the .NET magic for this example. First, just like the previous two class libraries, this one imports the InterfaceDLL, and exposes the GetName() function. But, this function is a little different from the others. This one makes use of the System.Reflection namespace that we have imported. 
We can use LoadFrom() from that namespace to get the assembly from a given class library (specified, in this case, in the app.config). We will then wade through the assembly, looking for an object that implements the ILibrary interface. When we find it, we call its GetName() function. Imports InterfaceDLL Imports System.Reflection Public Class clsMe Implements ILibrary ' Grab the name from main application's app.config Private strColorDLL As String = _ System.Configuration.ConfigurationSettings.AppSettings("ColorDLL") Public Function GetName() As String Implements ILibrary.GetName Dim objAssembly As Reflection.Assembly Dim clsColor As ILibrary = Nothing ' We're assuming the library is in the bin directory for this application Dim strColorDLLPath As String = _ System.IO.Path.Combine(My.Application.Info.DirectoryPath, strColorDLL) Dim objTypes() As Type Dim objFound As Type Dim strRV As String = String.Empty ' Load the assembly from the DLL objAssembly = Reflection.Assembly.LoadFrom(strColorDLLPath) ' March through the types in the assembly objTypes = objAssembly.GetTypes For Each objItem As Type In objTypes ' Looking for our Interface objFound = objItem.GetInterface("ILibrary") If objFound IsNot Nothing Then ' We wouldn't be here if it wasn't the right type, so we can DirectCast clsColor = DirectCast(objAssembly.CreateInstance(objItem.FullName), _ ILibrary) Exit For End If Next ' If we've got it, call our function If clsColor IsNot Nothing Then strRV = clsColor.GetName End If ' Let 'em have it GetName = strRV End Function End Class For each project containing a class importing the ILibrary interface, you will need to add a project reference to the InterfaceDLL project. This is done by opening the References tab in My Project, and clicking the Add button. Open the Projects tab in the Add Reference form, and find the InterfaceDLL project. Finally, we will create the main application project, in this case, creatively named MainExe. For this example, we will just drop the code into the Load event for Form1. Public Class Form1 Dim clsColor As New ColorDLL.clsMe Private Sub Form1_Load(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles MyBase.Load Me.lblLibrary.Text = clsColor.GetName End Sub End Class Form1_Load() simply calls the GetName() function from the helper class. MainExe knows nothing of the internals of RedDLL or BlueDLL, and doesn’t care. As long as RedDLL and BlueDLL implement the ILibrary interface, the helper library will quietly call them. Now, a couple things are happening behind the scenes here. First, when the developer builds the MainExe project, a reference will have to be added to the appropriate RedDLL or BlueDLL project or DLL, as well as a reference to the helper class. Additionally, we will add a line to the app.config to specify which DLL will be loaded by the helper class library. Remember, the app.config key is referenced in the helper class library, ColorDLL. It is not referenced in the main application. <appSettings> <!--It is assumed that the library is in the bin directory, we only need its name--> <add key="ColorDLL" value="BlueDLL.dll"/> </appSettings> Now, using this method, you can create a system for plugging in interchangeable libraries, without having multiple versions of your main application. You are also ready for future libraries that implement the same interface. Finally, you do not have to stray from your type safe development. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/DLL/StronglyTypedDLLSwap.aspx
crawl-002
refinedweb
1,112
58.08
Multiple Decorators Now that we see how a single decorator works, what about multiple decorators? It could be that we’d like to decorate our CoolButtons with another decoration— say, a diagonal red line. This is only slightly more complicated because we just need to enclose the CoolDecorator inside yet another Decorator panel for more decoration to occur. The only real change is that we not only need the instance of the panel we are wrapping in another but also the central object (here a button) being decorated, since we have to attach our paint routines to that central object’s paint method. So we need to create a constructor for our Decorator that has both the enclosing panel and the button as controls. public class CoolDecorator :Panel, Decorator { protected Control contl; protected Pen bPen, wPen, gPen; private bool mouse_over; protected float x1, y1, x2, y2; //---------------------------------- public CoolDecorator(Control c, Control baseC) { //the first control is the one laid out //the base control is the one whose paint method we extend //this allows for nesting of decorators contl = c; this.Controls.AddRange(new Control[] {contl} ); Then when we add the event handlers, the paint event handler must be attached to the base control. //paint handler catches button's paint baseC.Paint += new PaintEventHandler( paint); We make the paint method virtual so we can override it as we see here. public virtual void paint(object sender, PaintEventArgs e){ //draw over button to change its outline Graphics g = e.Graphics; It turns out that the easiest way to write our SlashDecorator, which draws that diagonal red line, is to derive it from CoolDecorator directly. We can reuse all the base methods and extend only the paint method from the CoolDecorator and save a lot of effort. public class SlashDeco:CoolDecorator { private Pen rPen; //---------------- public SlashDeco(Control c, Control bc):base(c, bc) { rPen = new Pen(Color.Red , 2); } //---------------- public override void paint(object sender, PaintEventArgs e){ Graphics g = e.Graphics ; x1=0; y1=0; x2=this.Size.Width ; y2=this.Size.Height ; g.DrawLine (rPen, x1, y1, x2, y2); } } This gives us a final program that displays the two buttons, as shown in Figure 17-2. The class diagram is shown in Figure 17-3. Figure 17-2. The A CoolButton is also decorated with a SlashDecorator. Figure 17-3. The UML class diagram for Decorators and two specific Decorator implementations
http://www.informit.com/articles/article.aspx?p=31350&seqNum=3
CC-MAIN-2019-30
refinedweb
399
51.78
Have you ever wondered what harm does the flag Debug=True brings to your application? Besides hurting your application's performance in production house, it turns website lifecycle into a vicious circle. Here is my understanding of using <compilation debug="true" /> on your application's web.config file: When <compilation debug=”true” /> is switched on within the application’s web.config file, it causes a number of non-optimal things to happen including: 1) You will see a lot more files in Temporary ASP.NET files folder when you use debug=true. 2) Your pages will not timeout when you have debug=“true” which is no an ideal production scenario. 3) Batch compilation will be disabled even if the batch attribute is true in the <compilation> element. 4) The System.Diagnostics.DebuggableAttribute gets added to all generated code which causes performance degradation. Basically, when you have debug=true, the DebuggableAttribute gets added to all generated code. 5)). 6)). 7) When the debug attribute in the Web.config file is set to true, it generates symbolic information(.pdb information) every time the compiler compiles your .aspx pages as well as disables code optimization. If the application is not being debugged, you should change this attribute to false. While the debugger is attached and in break mode, no requests to pages in the application execute. Therefore, you should not debug an application on a production Web server while it is in use. Only one debugger can be attached to the Aspnet_wp.exe process at a time. The CLR Debugger only debugs managed code, and it only debugs applications on the computer on which the CLR Debugger is running. If you need to debug remotely or if you need to debug code other than managed code, you should use the Visual Studio .NET debugger. Please refer: Now, let's see how can you turn OFF this flag at the server level ? As obvious as this, please use the following configuration feature on the machine.config file: <configuration> <system.web> <deployment retail=”true”/> </system.web> </configuration> You will disable the <compilation debug=”true”/> switch and this will disable the ability to). For more information on this, please refer: And, now let us see in what scenario you might want to turn on Debug=True: ASP.NET supports compiling applications in a special debug mode that facilitates developer troubleshooting. Debug mode causes ASP.NET to compile applications with extra information that enables a debugger to closely monitor and control the execution of an application. Applications that are compiled in debug mode execute as expected. However, the performance of the application is affected. To avoid the effect on performance, it is a good idea to enable debugging only when a developer is doing interactive troubleshooting. By default, debugging is disabled, and although debugging is frequently enabled to troubleshoot a problem, it is also frequently not disabled again after the problem is resolved. So, the best option would be to turn debug=“true,” fix your problem and turn debug=“false” back again. Please refer following article: Because many parts of an ASP.NET application (such as .aspx, .asmx and .ascx pages) are dynamically compiled at run-time, you need to configure the ASP.NET run-time process to compile the application with symbolic information before the application can be debugged. 
To do this, set the debug attribute in the configuration section of the Web.config file that is located in the root of the application folder to true, as follows: <configuration> <compile debug=true/> </configuration> Alternatively, you can set the Debug attribute of the Page directive to true in your .aspx pages, as follows: <%@ Page Debug="true" %> Let's do a small exercise to illustrate why do we really need to set debug=true in your application's configuration. Add the following code snippet in Page_Load method of Default.aspx of your sample web application. This code will cause a divide-by-zero(as expected) error message whenever you request the page. Code Snippet: protected void Page_Load(Object sender, EventArgs e) ( int32 i = 0; i = i/0; ) Check the output if you have debug="false" (first) and the same page with debug="true" (second) in Code above. Output when debug="false" Output when debug="true" [OverflowException: Arithmetic operation resulted in an overflow.] _Default.Page_Load(Object sender, EventArgs e) in C:\My Documents\Visual Studio 2010 C:\My Documents\Visual Studio 2010 Websites\DebugTrue\Default.aspx.vb:8. If you are a debugger freak and have some knoweldge about WinDbg dump analysis, you should agree with me in my following notes: To find out if any of the applications on your server run with debug=true you can run a nifty command in sos.dll called !finddebugtrue which will list out all applications where debug=true in the web.config 0:016> !finddebugtrue Debug set to true for Runtime: 61b48dc, AppDomain: /Sample... Sample. ScottGu has written an excellent blog on this issue, have a look at the below blog from him: Have a happy read and share your thoughts. Let me know if you have any questions and issues on this, I will be happy to answer them. I noticed that when you set compilateion debug = false, then the menu display at the top will ignore security settings and display all mlen items freely to all users regardless of role or login status. Any comment about this? Hi, I believe that this setting <compilation> debug flag has nothing to do with UI rendering issues. This attribute belongs to the system.web namespace which solely governs the compilation life-cycle of your application. What i can imagine would be happening in your case is, if you are testing that in IE, then it may be your IE settings or CSS which would be doing that.. Hope this helps, let me know if you find a solution for this problem.. _Default.Page_Load(Object sender, EventArgs e) in C:My DocumentsVisual Studio 2010WebSitesDebugTrueDefault thank you, this is very informative.
https://blogs.msdn.microsoft.com/prashant_upadhyay/2011/07/14/why-debugfalse-in-asp-net-applications-in-production-environment/
CC-MAIN-2017-34
refinedweb
998
56.25
In this hands-on lab we will be deploying microservices to our Minikube cluster. We will deploy the robot-shop example application into its own name space, and then ensure that all of the services are started and running. Once that is done, we will access the robot-shop application via proxy and ensure that it is working as expected. Learning Objectives Successfully complete this lab by achieving the following learning objectives: - Start the Minikube Cluster Using the Correct Driver Run the command: sudo minikube start --vm-driver none - Deploy Robot-Shop in Its Own Namespace In the user’s home directory, there is a robot-shopdirectory, and within it a K8ssubdirectory. Let’s get into it: cd ~/robot-shop/K8s Now that we’re there, we can create a namespace for the resources: sudo kubectl create namespace robot-shop Then we can create the resources in the namespace: sudo kubectl -n robot-shop apply -f ./descriptors/ Let’s monitor the pods to ensure that they come up: sudo kubectl -n robot-shop get po -w Once all of the pods have come up this task is complete. - Edit the Web Service, Configure the Proxy, and Test the Application Let’s take a closer look at the webservice: sudo kubectl -n robot-shop get svc web Its TYPE is currently LoadBalancer, and that’s not what we want here. We need to edit the web service and change its type to NodePort: sudo kubectl -n robot-shop edit svc web We’ll land in a vimsession. The specsection should look like this (minus the comments) when we’re done: spec: ports: - name: "8080" port: 8080 protocol: TCP targetPort: 8080 nodePort: 30080 <-- ensure that the nodePort it set to this value selector: service: web sessionAffinity: None type: NodePort <---- change from LoadBalancer Now let’s take another look at the webservice: sudo kubectl -n robot-shop get svc web Its TYPE should be NodePort now. Now let’s get the URL of the webservice: sudo minikube service list We need the TARGET_PORT of the webservice for this next step. We’ve got to edit the Nginx configuration and set the port forwarding to the NodePort URL: sudo vim /etc/nginx/sites-enabled/default location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. proxy_pass http://<minikube IP>:<svc port>; Once that’s done, we can restart Nginx: sudo systemctl restart nginx Finally, we can test in a web browser. If we exitin the terminal, we’ll get the server’s public IP. We can also get it from the hands-on lab overview page. But browse to it, and we should see the robot-shopapplication up and running. Feel free to tool around and look at the different robots. If you can, it means we’re through setting things up.
https://acloudguru.com/hands-on-labs/minikube-deploying-microservices
CC-MAIN-2021-31
refinedweb
476
53.75
Problems with an extended module of the partners

We have extended the module res.partner. We have added a field after the email field, and added an on_change method on this new field. In the on_change method we create an instance of a TransientModel class, and when it calls self.env['test.class'].create({}), Odoo sets the fields related to retail Accounting to null. Can you help me understand the reason for this behavior?

code:

# -*- coding: utf-8 -*-
from openerp import models, fields, api


class res_partner(models.Model):
    _inherit = 'res.partner'

    test = fields.Char(string='Test')

    @api.onchange('test')
    @api.one
    def onchange_test(self):
        p = self.env['test.class'].create({})
        p.a = "test"
        return


class test_class(models.TransientModel):
    _name = "test.class"

    a = fields.Char()
    b = fields.Char()
https://www.odoo.com/forum/help-1/question/problems-on-extended-module-of-the-partners-96253
CC-MAIN-2018-22
refinedweb
182
62.54
-- | IO operations, part of the "Useful" module.
module Useful.IO where

import Useful.General
import System.Random

-- | repeats an IO function n number of times.
replicateM :: (Monad m) => Int -> m a -> m [a]
replicateM n x = sequence (replicate n x)

-- | Like replicateM but discards the returns
replicateM_ :: (Monad m) => Int -> m a -> m ()
replicateM_ n x = sequence_ (replicate n x)

-- | repeats an IO function for every member of a list, using the list item as an argument
--
-- > $ do foreach [1..3] print
-- > 1
-- > 2
-- > 3
-- > $ do foreach "asjkdnsd" putChar
-- > asjkdnsd
foreach :: Monad m => [a] -> (a -> m b) -> m ()
foreach = flip mapM_

-- | repeats an IO action while an IO Bool is true
--
-- NOTE: Be careful with this function! Better to use recursion. Testing against an item created in the loop will not work.
while :: (Monad m) => m Bool -> m a -> m ()
while test action = do val <- test
                       if val then do {action; while test action} else return ()

-- | like putStr or putChar but works on any type with \"show\" defined in a similar way to how print does. Can be thought of as \"print\" without the trailing linebreak.
--
-- NOTE: This means it will print strings with quotes around them. To print strings without quotes use putStr or putStrLn
put :: Show a => a -> IO ()
put x = putStr (show x)

-- | Alias of put
write :: Show a => a -> IO ()
write = put

-- | Alias of print
writeln :: (Show a) => a -> IO ()
writeln = print

-- | Alias of print
putln :: (Show a) => a -> IO ()
putln = print

-- maps an IO function in depth N to the given list. Also versions without _ for storing of the returns.
--
-- Again there are also mapM_3, mapM_4 and mapM_5 defined (as well as versions without underscores)
--
-- > $ mapM_2 write [[1,2,3,4,5],[1,2]]
-- > 1234512
mapM_2 :: (Monad m) => (a -> m b) -> [[a]] -> m ()
mapM_2 f x = mapM_ (mapM_ f) x

mapM_3 f x = mapM_ (mapM_ (mapM_ f)) x
mapM_4 f x = mapM_ (mapM_ (mapM_ (mapM_ f))) x
mapM_5 f x = mapM_ (mapM_ (mapM_ (mapM_ (mapM_ f)))) x

mapM2 f x = mapM (mapM f) x
mapM3 f x = mapM (mapM (mapM f)) x
mapM4 f x = mapM (mapM (mapM (mapM f))) x
mapM5 f x = mapM (mapM (mapM (mapM (mapM f)))) x

-- | takes a list and returns a random element from that list
--
-- > $ rand [1..5]
-- > 5
-- > $ rand "hello there people"
-- > 'l'
rand :: [a] -> IO a
rand xs = do i <- getStdRandom (randomR (0, (len xs) - 1))
             return (xs !! i)
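A short usage sketch (not part of the package source) showing a couple of these helpers in action:

import Useful.IO

main :: IO ()
main = do
  -- print each list element on its own line
  foreach [1 .. 3 :: Int] print
  -- pick a random element and print it without a trailing newline
  c <- rand "hello there people"
  put c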
http://hackage.haskell.org/package/Useful-0.0.1/docs/src/Useful-IO.html
CC-MAIN-2016-07
refinedweb
404
70.47
Algorithm Fourth Edition: 1.1.27 binomial distribution, from 10 to 10 billion

Recursive method to realize the binomial distribution operation, O(2^N)

The recursive algorithm is simple and well suited to small parameters, but the problem is also obvious: it is already very slow once N * k > 100, and when N * k > 1000 the program effectively grinds to a halt. The algorithm's efficiency is O(2^N), so it cannot be used in practical development.

double binomial(int N, int k, double p) {
    if (N == 0 && k == 0) {
        return 1;
    }
    if (N < 0 || k < 0) {
        return 0;
    }
    return (1.0 - p) * binomial(N - 1, k, p) + p * binomial(N - 1, k - 1, p);
}

Optimize by saving the calculation results to an array, O(N)

This version is still recursive, but saving and reusing results that have already been calculated immediately reduces the complexity to O(N). Much larger inputs can now be handled, for example N * k = 100,000,000, so it can be used in practice, although by itself that is of limited significance.

struct bino {
    double qbinomial(int N, int k, double p) {
        if (N == 0 && k == 0) {
            return 1;
        }
        if (N < 0 || k < 0) {
            return 0;
        }
        vd = std::vector<std::vector<double>>((N + 1), std::vector<double>((k + 1), -1));
        return qbinomial(0, N, k, p);
    }

private:
    double qbinomial(int f, int N, int k, double p) {
        if (N == 0 && k == 0) {
            return 1;
        }
        if (N < 0 || k < 0) {
            return 0;
        }
        if (vd[N][k] < 0) {
            vd[N][k] = (1.0 - p) * qbinomial(0, N - 1, k, p) + p * qbinomial(0, N - 1, k - 1, p);
        }
        double result = vd[N][k];
        vd.clear();
        return result;
    }

    std::vector<std::vector<double>> vd;
};

Pursue the best and seek a breakthrough

With the optimized recursion, adding one more order of magnitude makes the program crash again. To improve stability we abandon recursion and use a loop instead, which requires a deeper understanding of the algorithm. Recursion pushes back and forth, which still costs too much computation; filling the array directly with a forward loop improves stability. At the level of N * k = 500 million it no longer collapses, but the speed does not increase significantly.

double qqbinomial(int N, int k, double p) //Fast binomial distribution; building a matrix data structure
{
    if (N == 0 && k == 0) {
        return 1;
    }
    if (N < 0 || k < 0) {
        return 0;
    }
    std::vector<std::vector<double>> qvd =
        std::vector<std::vector<double>>((N + 1), std::vector<double>((k + 1), 0));
    double q = 1;
    for (size_t i = 0; i != N + 1; ++i) {
        qvd[i][0] = q;
        q *= 1 - p;
    }
    double pp = 1 - p;
    for (size_t i = 1; i != N + 1; ++i) {
        if (i < k) //For row i, entries with j > i are 0, so skip them to avoid repeated calculation
        {
            for (size_t j = 1; j != i + 1; ++j) {
                qvd[i][j] = pp * qvd[i - 1][j] + p * qvd[i - 1][j - 1];
            }
        } else {
            for (size_t j = 1; j != k + 1; ++j) {
                qvd[i][j] = pp * qvd[i - 1][j] + p * qvd[i - 1][j - 1];
            }
        }
    }
    double result = qvd[N][k];
    qvd.clear();
    return result;
}

I want ultimate stability

How far can I push this toward the int limit of C++? When N * k = 1 billion there is not enough memory and the program still crashes, yet this is nowhere near the upper limit of an int. Can I reach it? Probably not, but let's try. Reviewing the loop-based algorithm, the array holds N * k elements; once that exceeds 1 billion, 16 GB of memory is exhausted and the program simply crashes. But do I really need such a large array? The problem lies in the data structure. To compute a value, the algorithm only needs the two numbers [N - 1][k] and [N - 1][k - 1], so only about k numbers from the previous row are required. Therefore, a std::deque<std::vector<double>> data structure can be adopted.

Every time a new row is calculated and pushed onto the queue, the previous row is popped from the front, so the memory in use never exceeds roughly k doubles. In addition, according to the algorithm, when k > N the result is 0, which lets the function return immediately; this keeps the efficiency at O(N) (or O(1) in that special case) while the memory footprint stays no larger than k doubles. Now the order of magnitude of N * k can reach 10 billion, theoretically breaking through the int upper limit (about 2 billion), although that has no practical meaning. Here is the code:

double binomial(int N, int k, double p) //Stable binomial distribution, built on a queue
{
    if (N == 0 && k == 0) {
        return 1;
    }
    if (N < 0 || k < 0) {
        return 0;
    }
    if (k > N) {
        double result = 0;
        return result;
    }
    std::deque<std::vector<double>> deqVecDou;
    deqVecDou.push_back(std::vector<double>{1, 0});
    double q = 1;
    std::vector<double> lieOne(N + 1);
    for (size_t i = 0; i != N + 1; ++i) {
        lieOne[i] = q;
        q *= 1 - p;
    }
    double pp = 1 - p;
    for (size_t i = 1; i != N + 1; ++i) {
        if (i < k) {
            deqVecDou.push_back(std::vector<double>(i + 2, 0));
            deqVecDou[1][0] = lieOne[i];
            for (size_t j = 1; j != i + 1; ++j) {
                deqVecDou[1][j] = pp * deqVecDou[0][j] + p * deqVecDou[0][j - 1];
            }
            deqVecDou.pop_front();
        } else {
            deqVecDou.push_back(std::vector<double>(k + 2, 0));
            deqVecDou[1][0] = lieOne[i];
            for (size_t j = 1; j != k + 1; ++j) {
                deqVecDou[1][j] = pp * deqVecDou[0][j] + p * deqVecDou[0][j - 1];
            }
            deqVecDou.pop_front();
        }
    }
    double result = deqVecDou[0][k];
    deqVecDou.clear();
    lieOne.clear();
    return result;
}

Without the right data structure, an algorithm is basically a castle in the air. I spent two days thinking about and improving this small exercise from Section 1.1 of Algorithms, Fourth Edition. Even though the final versions go far beyond any actual need, and the precision of double is left far behind, it was still meaningful. Advancing from O(2^N) to O(N) requires only the most basic array; cutting the memory from N^2 down to N only requires keeping one row of the array in a queue and deleting it from the front on each pass of the loop. I came away deeply impressed by how overwhelming the effect of the algorithm is: C++, famous for its pursuit of efficiency, is basically a joke without the blessing of a good algorithm. For the array-based improvement of the recursive algorithm, see here: The recursive algorithm of binomial distribution (1.1.27 Binomial distribution) is improved based on array
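A quick usage sketch for the final queue-based version above, assuming its definition is in the same translation unit; the argument values are arbitrary examples:

#include <cstdio>
#include <deque>
#include <vector>

// binomial(N, k, p): probability of exactly k successes in N independent
// trials with success probability p, as defined above.
double binomial(int N, int k, double p);

int main() {
    // P(exactly 5 heads in 10 fair coin flips) is roughly 0.246
    std::printf("%f\n", binomial(10, 5, 0.5));
    // k > N is handled explicitly and returns 0
    std::printf("%f\n", binomial(10, 20, 0.5));
    return 0;
}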
https://programmer.group/1.1.27-binomial-distribution-from-10-to-10-billion.html
CC-MAIN-2022-40
refinedweb
1,079
54.42
I've been trying to match the following string:

string = "TEMPLATES = ( ('index.html', 'home'), ('base.html', 'base'))"

re.match("\(w*\)", string)

Try this:

import re

w = "TEMPLATES = ( ('index.html', 'home'), ('base.html', 'base'))"

# find outer parens
outer = re.compile("\((.+)\)")
m = outer.search(w)
inner_str = m.group(1)

# find inner pairs
innerre = re.compile("\('([^']+)', '([^']+)'\)")
results = innerre.findall(inner_str)

for x, y in results:
    print "%s <-> %s" % (x, y)

Output:

index.html <-> home
base.html <-> base

Explanation: outer matches the first-starting group of parentheses using \( and \); by default search finds the longest match, giving us the outermost ( ) pair. The match m contains exactly what's between those outer parentheses; its content corresponds to the .+ bit of outer. innerre matches exactly one of your ('a', 'b') pairs, again using \( and \) to match the content parens in your input string, and using two groups inside the ' ' to match the strings inside of those single quotes. Then, we use findall (rather than search or match) to get all matches for innerre (rather than just one). At this point results is a list of pairs, as demonstrated by the print loop.

Update: To match the whole thing, you could try something like this:

rx = re.compile("^TEMPLATES = \(.+\)")
rx.match(w)
https://codedump.io/share/oIgjFFuFEDwn/1/python-regex-matching-a-parenthesis-within-parenthesis
CC-MAIN-2017-09
refinedweb
205
78.25
Posts: 5989 Registered: 08-00 Gez said: The following recipe should work, I think: Note 4: I don't think Eternity actually implements the ACS namespace (since it's something originally invented by ZDoom for its script libraries). This is merely a convenience thing as it allows SLADE to automatically perform the export-text-lump,-compile-import-bytecode-lump rigamarole, so it's handy. But since EE will look in the global namespace, it's important that the text lumps come first so what it sees during its backward search is the compiled lumps. Posts: 1981 Registered: 05-00 Posts: 6509 Registered: 01-02 Posts: 11037 Registered: 07-07 Quasar said: I can add this; the technical argument for it is compelling enough. Posts: 597 Registered: 08-09 Quasar said: I could see ZDoom going that route with Doomscript because it's a bespoke compiler that they have complete control over. I doubt we'd be trying to do that with Aeon, which could possibly end up being based on TraceMonkey or JaegerMonkey at this point (now that C++ code is easily available to us). Ever looked at their code? Yikes :P tempun said: I think you didn't understand. DaniJ talked about making a simple translator from ACS bytecode to Javascript (or Doomsday Script, in case of Doomsday) (like those from Brainfuck to C) and then feeding the result to JavaScipt engine. You don't need to know what the engine does with it. And BTW, SpiderMonkey isn't the only open source JS engine. Posts: 2075 Registered: 08-03 Quasar said: I did understand, it's just that such high-level translation is unlikely to actually work. For example ACS supports a "suspend" operation and there is no such thing in JavaScript. DaniJ said: Naturally you would have to select or design a language with at least the same level of runtime functionality as the ACS interpreter or it obviously won't work. I didn't think I needed to point that out. Why JavaScript? If it were me planning a complete design change I'd probably go for something like Lua :P Posts: 8811 Registered: 06-06 DaniJ said: I believe we are at crossed purposes here. The discussion was about ACS and which other script engines one could lean on to replicate that functionality. I wasn't actually presuming that Lua was suitable for your purposes (which would seem to be a little different given your emphasis on the performance of complex object construction and/or exchange). My facetious point was that it is a different design philosophy but that it would still meet the overarching requirement of applicability to ACS. As for JavaScript, I don't really have anything against it personally. I was genuinely interested in your reasons for selecting it as a potential candidate alongside an ACS bytecode interpreter, given the context of the SMALL "false-start". printz said: Is the new language planned to be interpreted, unlike Small? Posts: 21 Registered: 06-09 cybermind said: I'm not found any comments in SLADE about Hexen or ZDoom (using 3.0.1) Posts: 289 Registered: 04-10 Quasar said: I don't really know why JS has such a bad reputation except that people are only accustomed to it via the DOM, which is not a particularly well-designed JS class library if you ask me. The DOM is more like the kind of design that would come randomly out of one of those tornadoes-in-a-junk-yard that creationists are always trying to use to explain away evolution. 
code: for (property_thing in object_thing) { /* do some stuff */ } code: for (x in new String("BAHAHAHAHAHAHAHA")) { alert(x) }; code: for (var i = 0; i < x.length; i++) { /* do stuff */ } code: for each (x in new String("BAHAHAHAHAHA")) { /* do stuff here */ } Quasar said: Frankly very little of it should matter for EE's purposes. tempun said: Even the lack of proper integers? Posts: 7708 Registered: 01-03 Posts: 4433 Registered: 03-04 code:class ThisIsAnObject(object): def __init__(self, arg1, arg2): self.arg1 = arg1 self.arg2 = arg2 my_object = ThisIsAnObject() code:function ThisIsAnObject(arg1, arg2) { this.arg1 = arg1; this.arg2 = arg2; } my_object = new ThisIsAnObject(); code:this_is_an_array = ['apple', 'banana', 'cherry'] for fruit in this_is_an_array: print fruit code:this_is_an_array = ['apple', 'banana', 'cherry']; for (var i = 0; i < this_is_an_array.length; i++) { print(this_is_an_array[i]); // [CG] Whatever 'print()' is in JS } code:class Rectangle(object): def __init__(self, width=1, height=1): self.__width = 1 self.__height = 1 self.width = width self.height = height def _get_width(self): return self.__width def _set_width(self, new_width): if new_width <= 0: raise ValueError('Width must be > 0') self.__width = new_width def _get_height(self): return self.__height def _set_height(self, new_height): if new_height <= 0: raise ValueError('Height must be > 0') self.__height = new_height width = property(_get_width, _set_width) height = property(_get_height, _set_height) @property def area(self): return self.width * self.height @property def perimeter(self): return (self.width * 2) + (self.height * 2) >
http://www.doomworld.com/vb/eternity/54760-acs-availability-increased/2/
CC-MAIN-2014-35
refinedweb
826
62.98
Object Construction

Constructing an Object

One of the more confusing aspects of moving from structured programming to object-oriented programming is the concept of constructing an object. Perhaps the reason for this is that there is really no comparable concept in structured programming. Object-oriented programming languages, at least the ones I have used, have a feature called a constructor. This is a physical piece of code (in fact, it is a special type of method) that is used to construct and initialize objects. Whereas it is true that these tasks can be accomplished in structured code in various ways (obviously, you can initialize variables in a structured program), the fact that constructors are built into the language makes constructing objects much safer: safe in the sense that the compiler and runtime environment can do a lot of the safety checks for you. This topic of built-in functionality is a major theme throughout this series. This is because built-in functionality is a major design goal of object-oriented technologies. You saw this in the last column when the compiler was able to enforce the contract regarding the abstract methods. If you design your system properly, you can use the features of the object-oriented languages, compilers, and runtimes to assist you in building safe, reusable objects.

Constructors

Constructors are a new concept for people doing structured programming. Constructors do not normally exist in non-O-O languages such as C and Basic. Earlier, you learned about special methods that are used to construct objects. In Java and C++, as well as other O-O languages, constructors are methods that share the same name as the class and have no return type. For example, a constructor for the Cabbie class would look like this:

public class Cabbie {

    public Cabbie(){
        /* code to construct the object */
    }

}

The compiler will recognize that the method name is identical to the class name and consider the method a constructor. Note that a constructor does not have a return value. If you provide a return value, the compiler will not treat the method as a constructor; it will simply be treated as just another method.

When Is a Constructor Called?

When a new object is created, one of the first things to occur is that the constructor is called. Check out the following code:

class Test {
    public static void main(String args[]) {
        Cabbie myCabbie = new Cabbie();
    }
}

The new keyword indicates that a new instance of the Cabbie class is constructed, thus allocating the required memory. At this point, the constructor is called, passing the arguments in the parameter list (in this case there aren't any). The developer must do the appropriate initialization within the constructor; you will learn about this later. The constructor is where the object is put into its initial, safe state. For example, if you have a counter object with an attribute called count, you need to set count to zero in the constructor:

public class Cabbie {

    int count;

    public Cabbie(){
        count = 0;
    }

}

Initializing attributes is a common, and vital, function performed within a constructor.

The Default Constructor

If you write a class and do not include a constructor, the class will still compile and you can still use it.

public class Cabbie {

    int count;

    // no explicit constructor provided by programmer!!

    public void method01(){
        count = 0;
    }

}

If the class provides no explicit constructor, such as in C++ and Java, the compiler will provide a default constructor, which would look like this:

public class Cabbie {

    int count;

    public Cabbie(){
        // this is not in the source code
        // no code - it is really empty
    }

}

It is important to realize that this code is only inserted in the bytecodes generated by the compiler. The default constructor does not show up at all in the source code. In fact, you would need to decompile the bytecodes to see it. After the memory is allocated, the default constructor calls the constructor of the superclass, if there is one (actually, there is always a superclass: every written Java class ultimately will implicitly inherit from the Object class). You will observe this point in much more detail next month. For example, if a constructor is not provided for the Cabbie class, the following default constructor is inserted:

public class Cabbie {

    int count;

    public Cabbie(){
        // this is not in the source code
        super(); // there is an implicit call to super()
    }

}

Perhaps the default constructor may be sufficient in some cases, but in most cases some initialization is needed. The rule of thumb is that you should always provide a constructor, even if you do not plan on doing anything inside it. You can provide a constructor with nothing in it and then add to it later. While there is technically nothing wrong with using the default constructor provided by the compiler, it is always nice to know exactly what your code looks like. You will explore this statement in much more detail next month.
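As a simple illustration of that rule of thumb, a Cabbie class could provide an explicit constructor that accepts an initial value; this short sketch is only an example, not code from the column:

public class Cabbie {

    int count;

    // An explicit constructor: the object always starts in a known state.
    public Cabbie(int startingCount) {
        count = startingCount;
    }

    public static void main(String[] args) {
        Cabbie myCabbie = new Cabbie(0);
        System.out.println(myCabbie.count);
    }
}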
http://www.developer.com/design/article.php/3516911
CC-MAIN-2017-22
refinedweb
799
57.71
I am writing an application for Arch Linux. Actually, I already wrote it, but I wanted to do it a little differently than the other one. The original one uses the goto statement. Here is the code:

#include <iostream>
#include <string>

using namespace std;

int main()
{
    cout << "\tAurDown v2.1\n"
         << "\tLicense: GPL3\n\n"
         << "Menu\n"
         << "1. Search for package\n"
         << "2. Download a package\n";
    cout << "Enter Selection\t";
    cin >> nSel;

    while((nSel != 1) || (nSel != 2))
    {
        cout << "Please Enter a valid selection"\n;
        cin >> nSel;
    }
}

I want it to print "Please enter a valid selection" as many times as somebody enters anything other than 1 or 2, and if the person enters 1 or 2 after they get the error "Please enter a valid...." it will resume. Sorry, this is a newbie question. I know it has to be a simple problem, but I can't find it. Thanks
https://www.daniweb.com/programming/software-development/threads/344723/app-and-avoiding-goto
CC-MAIN-2018-43
refinedweb
151
74.19
Type Error using Init()

On 15/12/2015 at 07:24, xxxxxxxx wrote:

Hi! I'm completely new to plugin programming and am stumbling my way through the docs and examples, but I'm stuck on an error. Maybe you guys can help me out. I'm trying to initialize description attributes from a .res file with the Init()/InitAttr() functions. The example from the doc is:

import c4d

def Init(self, node):
    # please note, the desc element has to be a constant (id of your container element)
    self.InitAttr(host=node, type=float, desc=[c4d.PY_TUBEOBJECT_RAD])
    self.InitAttr(host=node, type=float, desc=[c4d.PY_TUBEOBJECT_IRADX])

    node[c4d.PY_TUBEOBJECT_RAD] = 200.0
    node[c4d.PY_TUBEOBJECT_IRADX] = 50.0

When I follow the syntax I get the error:

TypeError: Required argument 'id' (pos 3) not found

I assume that the example is wrong and the 'desc' argument should be 'id', but I might be wrong about that. When I change it to 'id' the attributes are set correctly, but I get the error:

TypeError: Init expected bool, not None

This might be unrelated, but the Init() function also seems to be called every time I edit an attribute in the description. Is it supposed to do that?

On 15/12/2015 at 08:31, xxxxxxxx wrote:

Hi and welcome to the Plugin Cafe! I'm sorry, the example for NodeData.InitAttr() is wrong. The desc parameter in the example is in fact called id, as you noticed. This will be fixed. You can find the Py-RoundedTube ObjectData example (this one runs and is fixed) in the examples folder of the Python SDK documentation archive.

Originally posted by xxxxxxxx: This might be unrelated, but the Init() function also seems to be called every time I edit an attribute in the description. Is it supposed to do that?

Yes, this is normal. I won't go into details, but this is because of the undo mechanism of Cinema.

On 15/12/2015 at 08:54, xxxxxxxx wrote:

Thank you for the quick response! I'm definitely going to check out the examples! Thanks again.
https://plugincafe.maxon.net/topic/9259/12316_type-error-using-init
CC-MAIN-2019-13
refinedweb
352
66.03
Implementing a Role Provider

ASP.NET role management enables you to easily use a number of different providers for your ASP.NET applications. You can use the supplied role providers that are included with the .NET Framework, or you can implement your own provider.

There are two primary reasons for creating a custom role provider.

- You need to store role information in a data source that is not supported by the role providers included with the .NET Framework, such as a FoxPro database, an Oracle database, or other data source.
- You need to manage role information using a database schema that is different from the database schema used by the providers that ship with the .NET Framework. A common example of this would be authorization data that already exists in a SQL Server database for a company or Web site.

Required Classes

To implement a role provider, you create a class that inherits the RoleProvider abstract class from the System.Web.Security namespace. The RoleProvider abstract class inherits the ProviderBase abstract class from the System.Configuration.Provider namespace. As a result, you must implement the required members of the ProviderBase class as well. The following tables list the required properties and methods that you must implement from the ProviderBase and RoleProvider abstract classes and a description of each. To review an implementation of each member, see the code supplied for the Sample Role-Provider Implementation.

ProviderBase Members

RoleProvider Members

ApplicationName

Role providers store role information uniquely for each application. This enables multiple ASP.NET applications to use the same data source without running into a conflict if duplicate user names are used. Alternatively, multiple ASP.NET applications can use the same role data source by specifying the same ApplicationName.

Because role providers store role information uniquely for each application, you will need to ensure that your data schema includes the application name and that queries and updates also include the application name. For example, a command that retrieves a role name from a database must ensure that the ApplicationName is included in the query, as sketched below.

Thread Safety

See Also

Concepts: Sample Role-Provider Implementation, Securing Roles
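For illustration, such an ApplicationName-scoped lookup might look like the following sketch. The table and column names here are examples only, not the schema used by the built-in SqlRoleProvider:

-- Illustrative schema: the point is that ApplicationName is part of every
-- lookup so multiple applications can safely share one role store.
SELECT RoleName
FROM dbo.Roles
WHERE RoleName = @RoleName
  AND ApplicationName = @ApplicationName;

The same rule applies to inserts and updates: every statement should filter on, or write, the ApplicationName value.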
http://msdn.microsoft.com/en-us/library/8fw7xh74(v=vs.80).aspx?cs-save-lang=1&cs-lang=vb
CC-MAIN-2014-41
refinedweb
357
55.44
Hi, I'm working on using a header file with 2 .cpp files to find the volume of a cylinder. Here's what I have for the header file; the name for the header file is cylinder.cpp:

Code:
class cylinder
{
private:
    double r, l, v;
public:
    double cylvol();
    cylinder();
    void get_data();
};

::this is cylinder.cpp::

Code:
#include "cylinder.h"
#include <iostream>
using namespace std;

void cylinder::get_data()
{
    cout << "Please enter the radius and length of the Cylinder" << endl;
    cin >> r >> l;
    return;
};

double cylinder::cylvol()
{
    const double PI = 3.14157;
    v = PI * r * r * l;
    return v;
};

cylinder::cylinder()
{
    r = 0.0;
    l = 0.0;
    v = 0.0;
};

And last but not least the main program called foo.cpp:

Code:
#include "cylinder.cpp"

int main()
{
    cylinder junk;
    junk.get_data();
    cout << "The volume of the cylinder is: " << junk.cylvol() << endl;
    return 0;
};

I get 3 link errors, please tell me what I'm doing wrong.
http://cboard.cprogramming.com/cplusplus-programming/63341-having-problems-using-multiple-files.html
CC-MAIN-2014-42
refinedweb
181
78.65
Overview - Get familiar with Hadoop Distributed File System (HDFS) - Understand the Components of HDFS Introduction In contemporary times, it is commonplace to deal with massive amounts of data. From your next WhatsApp message to your next Tweet, you are creating data at every step when you interact with technology. Now multiply that by 4.5 billion people on the internet – the math is simply mind-boggling! But ever wondered how to handle such data? Is it stored on a single machine? What if the machine fails? Will you lose your lovely 3 AM tweets *cough*? The answer is No. I am pretty sure you are already thinking about Hadoop. Hadoop is an amazing framework. With Hadoop by your side, you can leverage the amazing powers of Hadoop Distributed File System (HDFS)-the storage component of Hadoop. It is probably the most important component of Hadoop and demands a detailed explanation. So, in this article, we will learn what Hadoop Distributed File System (HDFS) really is and about its various components. Also, we will see what makes HDFS tick – that is what makes it so special. Let’s find out! Table of Contents - What is Hadoop Distributed File System (HDFS)? - What are the components of HDFS? - Blocks in HDFS? - Namenode in HDFS - Datanodes in HDFS - Secondary Node in HDFS - Replication Management - Replication of Blocks - What is a Rack in Hadoop? - Rack Awareness What is Hadoop Distributed File System(HDFS)? It is difficult to maintain huge volumes of data in a single machine. Therefore, it becomes necessary to break down the data into smaller chunks and store it on multiple machines. Filesystems that manage the storage across a network of machines are called distributed file systems. Hadoop Distributed File System (HDFS) is the storage component of Hadoop. All data stored on Hadoop is stored in a distributed manner across a cluster of machines. But it has a few properties that define its existence. - Huge volumes – Being a distributed file system, it is highly capable of storing petabytes of data without any glitches. - Data access – It is based on the philosophy that “the most effective data processing pattern is write-once, the read-many-times pattern”. - Cost-effective – HDFS runs on a cluster of commodity hardware. These are inexpensive machines that can be bought from any vendor. What are the components of the Hadoop Distributed File System(HDFS)? HDFS has two main components, broadly speaking, – data blocks and nodes storing those data blocks. But there is more to it than meets the eye. So, let’s look at this one by one to get a better understanding. HDFS Blocks HDFS breaks down a file into smaller units. Each of these units is stored on different machines in the cluster. This, however, is transparent to the user working on HDFS. To them, it seems like storing all the data onto a single machine. These smaller units are the blocks in HDFS. The size of each of these blocks is 128MB by default, you can easily change it according to requirement. So, if you had a file of size 512MB, it would be divided into 4 blocks storing 128MB each. If, however, you had a file of size 524MB, then, it would be divided into 5 blocks. 4 of these would store 128MB each, amounting to 512MB. And the 5th would store the remaining 12MB. That’s right! This last block won’t take up the complete 128MB on the disk. But, you must be wondering, why such a huge amount in a single block? Why not multiple blocks of 10KB each? 
Well, the amount of data we generally deal with in Hadoop is usually on the order of petabytes or higher. Therefore, if we create blocks of small size, we would end up with a colossal number of blocks. This would mean we would have to deal with equally large metadata regarding the location of the blocks, which would just create a lot of overhead. And we don't really want that!

There are several perks to storing data in blocks rather than saving the complete file.

- The file itself would be too large to store on any single disk alone. Therefore, it is prudent to spread it across different machines on the cluster.
- It would also enable a proper spread of the workload and prevent the choke of a single machine by taking advantage of parallelism.

Now, you must be wondering, what about the machines in the cluster? How do they store the blocks and where is the metadata stored? Let's find out.

Namenode in HDFS

HDFS operates in a master-worker architecture; this means that there is one master node and several worker nodes in the cluster. The master node is the Namenode.

Namenode is the master node that runs on a separate node in the cluster.

- Manages the filesystem namespace, which is the filesystem tree or hierarchy of the files and directories.
- Stores information like owners of files, file permissions, etc. for all the files.
- It is also aware of the locations of all the blocks of a file and their size.

All this information is maintained persistently on the local disk in the form of two files: Fsimage and Edit Log.

- Fsimage stores the information about the files and directories in the filesystem. For files, it stores the replication level, modification and access times, access permissions, blocks the file is made up of, and their sizes. For directories, it stores the modification time and permissions.
- Edit log, on the other hand, keeps track of all the write operations that the client performs. This is regularly applied to the in-memory metadata to serve the read requests.

Whenever a client wants to write information to HDFS or read information from HDFS, it connects with the Namenode. The Namenode returns the location of the blocks to the client and the operation is carried out. Yes, that's right, the Namenode does not store the blocks. For that, we have separate nodes.

Datanodes in HDFS

Datanodes are the worker nodes. They are inexpensive commodity hardware that can be easily added to the cluster. Datanodes are responsible for storing, retrieving, replicating, deleting, etc. blocks when asked by the Namenode. They periodically send heartbeats to the Namenode so that it is aware of their health. With that, a DataNode also sends a list of blocks that are stored on it so that the Namenode can maintain the mapping of blocks to Datanodes in its memory.

But in addition to these two types of nodes in the cluster, there is also another node called the Secondary Namenode. Let's look at what that is.
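Before moving on to the Secondary Namenode, note that the block size discussed above and the replication factor the article turns to below are ordinary HDFS configuration properties. A minimal hdfs-site.xml sketch (the property names are the standard ones; the values simply restate the defaults):

<configuration>
  <!-- Block size in bytes: 134217728 bytes = 128 MB, the default -->
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
  <!-- Number of replicas kept for each block; 3 is the default -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>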
Secondary Namenode in HDFS

Suppose we need to restart the Namenode, which can happen in case of a failure. That would mean copying the Fsimage from disk back into memory and then replaying every transaction recorded in the Edit log on top of it. If the Namenode has been running for a long time, the Edit log can grow very large, and applying all of those transactions can take a long time, during which the filesystem would be offline.

Therefore, to solve this problem, we bring in the Secondary Namenode. The Secondary Namenode is another node present in the cluster whose main task is to regularly merge the Edit log with the Fsimage and produce checkpoints of the primary's in-memory file system metadata. This is also referred to as Checkpointing.

But the checkpointing procedure is computationally very expensive and requires a lot of memory, which is why the Secondary Namenode runs on a separate node in the cluster.

However, despite its name, the Secondary Namenode does not act as a Namenode. It is merely there for Checkpointing and keeping a copy of the latest Fsimage.

Replication Management in HDFS

Now, one of the best features of HDFS is the replication of blocks, which makes it very reliable. But how does it replicate the blocks and where does it store them? Let's answer those questions now.

Replication of blocks

HDFS is a reliable storage component of Hadoop. This is because every block stored in the filesystem is replicated on different Data Nodes in the cluster. This makes HDFS fault-tolerant.

The default replication factor in HDFS is 3. This means that every block will have two more copies of it, each stored on separate DataNodes in the cluster. However, this number is configurable.

But you must be wondering, doesn't that mean that we are taking up too much storage? For instance, if we have 5 blocks of 128MB each, that amounts to 5*128*3 = 1920 MB. True. But then these nodes are commodity hardware. We can easily scale the cluster to add more of these machines. The cost of buying machines is much lower than the cost of losing the data!

Now, you must be wondering, how does the Namenode decide which Datanode to store the replicas on? Well, before answering that question, we need to have a look at what a Rack in Hadoop is.

What is a Rack in Hadoop?

A Rack is a collection of machines (30-40 in Hadoop) that are stored in the same physical location. There are multiple racks in a Hadoop cluster, all connected through switches.

Rack awareness

Replica storage is a tradeoff between reliability and read/write bandwidth. To increase reliability, we need to store block replicas on different racks and Datanodes to increase fault tolerance, while the write bandwidth is lowest when replicas are stored on the same node. Therefore, Hadoop has a default strategy to deal with this conundrum, also known as the Rack Awareness algorithm.

For example, if the replication factor for a block is 3, then the first replica is stored on the same Datanode on which the client writes. The second replica is stored on a different Datanode on a different rack, chosen randomly. The third replica is stored on the same rack as the second but on a different Datanode, again chosen randomly. If, however, the replication factor were higher, then the subsequent replicas would be stored on random Data Nodes in the cluster.

Endnotes

I hope by now you have got a solid understanding of what Hadoop Distributed File System (HDFS) is, what its important components are, and how it stores the data. There are, however, still a few more concepts that we need to cover with respect to Hadoop Distributed File System (HDFS), but that is a story for another article. For now, I recommend you go through the following articles to get a better understanding of Hadoop and this Big Data world!

Last but not the least, I recommend reading Hadoop: The Definitive Guide by Tom White. This article was highly inspired by it.
https://www.analyticsvidhya.com/blog/2020/10/hadoop-distributed-file-system-hdfs-architecture-a-guide-to-hdfs-for-every-data-engineer/?utm_source=skills-data-science-ram&utm_medium=blog&utm_campaign=blackbelt
CC-MAIN-2021-25
refinedweb
1,832
74.08
Hi, this might be an easy question: I want to use scandir()

Code:
int scandir(const char *dir, struct dirent ***namelist,
            int(*filter)(const struct dirent *),
            int(*compar)(const struct dirent **, const struct dirent **));

My program simply reads the entries in ".". I know how to get it to run when my file_select function comes after main() and in main there's

Code:
int file_select();

declared.

My first question is:

Code:
#include <sys/dir.h>
#include <stdio.h>
#include <stdlib.h>    //exit()
#include <string.h>    //strcmp()

extern int alphasort();

int main()
{
    int file_select();
    int count, i;
    struct dirent **files;

    count = scandir(".", &files, file_select, alphasort);
    //count = scandir(".", &files, &file_select, alphasort);

    if (count <= 0){
        printf("No files in this directory\n");
        exit(0);
    }
    printf("Number of files = %d\n", count);
    for (i = 1; i < count + 1; ++i)
        printf("%s\n", files[i-1]->d_name);
    return 0;
}

int file_select(struct dirent *entry)
{
    if ((strcmp(entry->d_name, ".") == 0) || (strcmp(entry->d_name, "..") == 0))
        return (0);
    else
        return (1);
}

So why is there no difference if I call scandir() with &file_select or just file_select?

My second issue is: I want the function file_select to come before main. Then I run into a warning:

warning: passing argument 3 of 'scandir' from incompatible pointer type

Ok, I tried it with count = scandir(".", &files, &file_select, alphasort); and count = scandir(".", &files, file_select, alphasort); and also with the int file_select() declaration in main() and without... but it just doesn't work. I guess it has something to do with pointers and implicit declarations. However, I have no idea what the difference is. Thanks, TurboToJo
http://cboard.cprogramming.com/c-programming/91925-argument-3-confusion-scandir.html
CC-MAIN-2015-48
refinedweb
281
57.98
timer_create()

Create a timer

Synopsis:

#include <signal.h>
#include <time.h>

int timer_create( clockid_t clock_id,
                  struct sigevent * evp,
                  timer_t * timerid );

Since: BlackBerry 10.0.0

Arguments:

- clock_id - The clock source that you want to use.
- evp - NULL, or a pointer to a sigevent structure containing the event that you want to deliver when the timer fires.
- timerid - A pointer to a timer_t object where the function stores the ID of the new timer.

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The timer_create() function creates a per-process timer using the specified clock source, clock_id, as the timing base.

- This function fails if the clock ID corresponds to the CPU-time clock of a process or thread different from the process or thread invoking the function.
- In order to create a timer that sends a pulse, your process must have the PROCMGR_AID_TIMER ability enabled. For more information, see procmgr_ability().

The event to deliver on expiry, given in evp, can be one of the following types:

- SIGEV_SIGNAL
- SIGEV_SIGNAL_CODE
- SIGEV_SIGNAL_THREAD
- SIGEV_PULSE

If the evp argument is NULL, a SIGALRM signal is sent to your process when the timer expires. To specify a handler for this signal, call sigaction().

Returns:

0 on success, or -1 if an error occurred (errno is set).

Errors:

- EAGAIN - All timers are in use. You'll have to wait for a process to release one.
- EINVAL - The clock_id isn't one of the valid CLOCK_* constants.
- EPERM - The calling process doesn't have the required permission; see procmgr_ability().

Examples:

/*
 * Demonstrate how to set up a timer that, on expiry,
 * sends us a pulse. This example sets the first
 * expiry to 1.5 seconds and the repetition interval
 * to 1.5 seconds.
 */

#include <stdio.h>
#include <time.h>
#include <sys/netmgr.h>
#include <sys/neutrino.h>

#define MY_PULSE_CODE _PULSE_CODE_MINAVAIL

typedef union {
    struct _pulse pulse;
    /* your other message structures would go here too */
} my_message_t;

main() {
    struct sigevent event;
    struct itimerspec itime;
    timer_t timer_id;
    int chid;
    int rcvid;
    my_message_t msg;

    chid = ChannelCreate(0);

    event.sigev_notify = SIGEV_PULSE;
    event.sigev_coid = ConnectAttach(ND_LOCAL_NODE, 0, chid, _NTO_SIDE_CHANNEL, 0);
    event.sigev_priority = getprio(0);
    event.sigev_code = MY_PULSE_CODE;
    timer_create(CLOCK_REALTIME, ... */
}
}

Classification:

Last modified: 2014-06-24
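The example above is cut off after the timer_create() call. In practice, a timer created this way is armed with a follow-up timer_settime() call; the sketch below illustrates that step with standard POSIX calls and is not the remainder of the original example:

#include <time.h>

/* Assumes timer_id was filled in by a successful timer_create() call. */
void arm_timer(timer_t timer_id)
{
    struct itimerspec itime;

    itime.it_value.tv_sec = 1;            /* first expiry after 1.5 seconds */
    itime.it_value.tv_nsec = 500000000;
    itime.it_interval.tv_sec = 1;         /* then repeat every 1.5 seconds */
    itime.it_interval.tv_nsec = 500000000;

    timer_settime(timer_id, 0, &itime, NULL);
}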
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/t/timer_create.html
CC-MAIN-2017-34
refinedweb
359
69.07
Play again function help? Dj Lee Greenhorn Joined: Apr 14, 2011 Posts: 3 posted Apr 14, 2011 16:03:18 0 I've made a hi-lo guessing game and I'm having trouble with the play again function using the loop. Here's my code: import java.util.Scanner; import java.util.Random; public class GuessingGAME { public static void main (String[] args) { int answer, guess; int trials = 1; boolean PLAY=true; String YorN; Scanner scan = new Scanner (System.in); Random generator = new Random(); answer = generator.nextInt(100) + 1; System.out.println ("For testing & programming purposes, # is " +answer); System.out.println ("A random number has been selected from 1 - 100."); System.out.println ("Guess the number(-1 to exit): "); guess = scan.nextInt(); do { if (guess == -1) { System.exit(0); } else if (guess > 100 || guess < 1) { System.out.println ("Guess a number BETWEEN 1 to 100(-1 to exit): "); guess = scan.nextInt(); } else if (guess > answer) { System.out.println ("Your guess was higher than the number. Try again(-1 to exit): "); guess = scan.nextInt(); trials++; } else if (guess < answer) { System.out.println ("Your guess was lower than the number. Try again(-1 to exit): "); guess = scan.nextInt(); trials++; } } while (guess != answer); if (guess == answer) { System.out.println ("YES! YOU GOT IT! The number is "+answer); System.out.println ("It took you " + trials +" time(s) to guess the number."); do { System.out.println ("Play again? Y or N?"); YorN = scan.nextLine(); if (YorN.equals("Y")||YorN.equals("y")) { PLAY=true; } else { PLAY=false; } } while (PLAY=true); } } } line 59 to 73 is where I inserted the play again function. If the user correctly guesses the number right, the program prints "Play again? Y or N?" twice... and I'm allowed to enter an input but it just loops through "play again? y or n?" instead of actually starting the program again. What am I doing wrong? Greg Brannon Bartender Joined: Oct 24, 2010 Posts: 557 posted Apr 14, 2011 16:29:10 0 I was surprised to see the second 'do' loop. With each prompt in the main do loop, you've already instructed the user to enter -1 if he/she wants to quit. That by itself could terminate the game. Then, your second do loop - just as you said - continues to loop, asking the player if they want to continue to play. That's all the loop does. There's no way for a true answer to get the logic back to the main do loop. So, figure out what you meant to do with the -1 selection in the main do loop. Or, if you wan to ask the user if he/she wants to continue, that would be a line at the end of the main do loop that sets the do loop flag to continue or exit, similar to what you've done in the second do loop. You don't need both exit paths, just one of them in the main do loop. do { // guess, display result // determine if user wants to quit or continue } while ( userWantsToContinue ); Learning Java using Eclipse on OpenSUSE 11.2 Linux user#: 501795 I agree. Here's the link: subject: Play again function help? Similar Threads Help with number guessing game Need help ending number guessing game and asking if user wants to play again 99 bottles of beer song....... Trouble with Guessing Game Unable to Return Variable All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/534456/java/java/Play-function
CC-MAIN-2014-15
refinedweb
583
77.43
Struct containing details of a relation expression for client callbacks from the Adam parser. #include <adam_parser.hpp> Every relation declared in an Adam expression is parsed and packed into this struct for the client code. Definition at line 144 of file adam_parser.hpp. Trailing comment for the relation. Definition at line 150 of file adam_parser.hpp. Leading comment for the relation. Definition at line 149 of file adam_parser.hpp. Tokenized expression array of the relation. Definition at line 148 of file adam_parser.hpp. Set of named cells being operated upon by this relation. Definition at line 146 of file adam_parser.hpp. Position in the parse of the relation declaration. Definition at line 147 of file adam_parser.hpp.
http://stlab.adobe.com/structadobe_1_1adam__callback__suite__t_1_1relation__t.html
CC-MAIN-2017-51
refinedweb
142
54.49
Now you have understood all the concepts related to XHTML, so you should be able to rewrite your HTML documents as well-formed XHTML documents and produce a cleaner version of your site. Here, again, are the important steps to follow if you want to convert your existing HTML site into an XHTML site. To convert your existing content, you must first decide which DTD you're going to adhere to and include the document type definition at the top of the document. Make sure you have all other required elements. These include a root element <html> that indicates an XML namespace, a <head> element, a <title> element contained within the <head> element, and a <body> element. Convert all element keywords and attribute names to lowercase. Ensure that all attributes are in a name="value" format. Make sure that all container elements have closing tags. Place a forward slash inside all standalone elements. For example, all <br> elements should be rewritten as <br />. Designate client-side script code and style sheet code as CDATA sections. A small document that follows these rules is shown at the end of this page. XHTML is still being improved, and its next version, XHTML 1.1, has been drafted. We have discussed this in detail in the XHTML Version 1.1 chapter. XHTML tags, characters, and entities are the same as in HTML, so if you already know HTML you do not need to put in extra effort to learn these subjects specifically for XHTML. We have listed out all the HTML material along with this XHTML tutorial as well, because it is applicable to XHTML too. We have listed out various resources for XHTML and HTML, so if you are interested and have the time, I would recommend that you go through these resources to enhance your understanding of XHTML. Otherwise, this tutorial should have given you enough knowledge to write your web pages using XHTML. Please send me your feedback at [email protected] Thanks. Webmaster, Tutorialspoint.com
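For reference, a minimal document that follows the steps above might look like this (XHTML 1.0 Strict is used here; the title and body content are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
  <head>
    <title>Sample XHTML Page</title>
  </head>
  <body>
    <p>All tags are lowercase, every attribute has a quoted value,<br />
       and standalone elements such as br end with a slash.</p>
  </body>
</html>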
http://www.tutorialspoint.com/cgi-bin/printversion.cgi?tutorial=xhtml&file=xhtml_summary.htm
CC-MAIN-2014-35
refinedweb
311
64.51
Code Inspection: NUnit. Test case Result property is obsolete Starting from NUnit 2.6.2, the named parameter Result of the TestCase Attribute, which you can use to specify the expected result to be returned from the test method, becomes deprecated in favor of the ExpectedResult parameter, which serves the same purpose. Starting from NUnit 3.0, Result is not supported anymore and any tests that use this parameter will not compile with NUnit 3.0. This inspection detects usages of the Result parameter and suggests replacing them with ExpectedResult. // 'Result' is obsolete and should be replaced with 'ExpectedResult' [TestCase(12, 3, Result = 4)] public int DivideTest(int n, int d) { return (n / d); } Last modified: 08 March 2021
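Under NUnit 3, the same test can be written with the ExpectedResult named parameter instead:

// 'ExpectedResult' replaces the removed 'Result' parameter
[TestCase(12, 3, ExpectedResult = 4)]
public int DivideTest(int n, int d)
{
    return (n / d);
}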
https://www.jetbrains.com/help/resharper/NUnit.TestCaseResultPropertyIsObsolete.html
CC-MAIN-2022-21
refinedweb
119
55.13
Current as of April 11, 2022 fxa-auth-server All of the code for sending email lives in the fxa-auth-server. FxA uses nodemailer and AWS SES to send its emails. Emails are sent by calling methods on the mailer object that are passed around the codebase. The methods are defined by a reducer in lib/senders/index.js and have names like sendVerifyEmail, sendNewDeviceLoginEmail and sendPasswordResetEmail, but those are really just thin wrappers that do a little bit of argument marshalling before handing off to other methods that actually do the work. Those are defined in lib/senders/email.js and don’t have send in the name, so verifyEmail, newDeviceLoginEmail and passwordResetEmail would be counterparts to the above. Ultimately each of these methods calls send, which in turn calls the nuts-and-bolts methods localize, render, selectEmailServices and sendMail. Triggering emails locally Check out creating an account locally as well as our local emails with MailDev docs. Almost every email can be triggered locally by walking through the flow locally to trigger an email, but certain environment variables may be different across staging and production than they are locally. If you want to run CAD (connect another device) emails locally, update dev.json to something like this: "cadReminders": { "firstInterval": "1s", "secondInterval": "2s" }, Then, run: NODE_ENV=dev ./scripts/verification-reminders.js Templates FxA emails use EJS to allow logic and conditional rendering without additional helper methods, MJML to shift the burden of maintaining solutions for email quirks off of FxA engineers, SCSS with Tailwind-like classes compiled down to inline CSS for easy maintenance and consistency, and Fluent for localization. We also create Storybook stories to preview emails and for documentation purposes. See this ADR for more details on why this stack was chosen. A small example of how template variables are passed down from the mailer object and consumed in the templates: Mailer.prototype.verifyLoginCodeEmail = async function (message) { // ... Logging for metrics, setting headers, and assigning other variables return this.send({ // always send appropriate headers headers, // always specify the name of the template template: 'verifyLoginCode', templateValues: { code: message.code, // ... all other templateValues } layout defaults to fxa. Pass in layout: 'subscription' at the same level headers and template is at (outside of the templateValues object) for the SubPlat layout. Corresponding MJML/EJS: <mj-section> <mj-column> // other MJML elements and strings <mj-text<%- code %></mj-text> // other MJML elements and strings </mj-column> </mj-section> Emails have a "campaign" assigned to them in a key/value map in _versions.json file for metrics. Every email has a rich HTML version as well as a plaintext version for users that prefer to see plaintext emails instead. Sometimes, the HTML version and plaintext version differ very slightly, but as a rule of thumb they should generally match for a consistent experience. lib/senders/emails contains layouts, partials, and templates. Layouts contain outer scaffolding that’s common across emails, like headers and footers. Partials contain reusable components that are common to many emails. Templates are the actual body of emails, and their names map to conventions described at the beginning of this doc, without the words 'send' or 'email', e.g. verify, newDeviceLogin and passwordReset etc. 
Previewing Emails and Storybook You can quickly preview all emails here, or alternatively run yarn storybook in the auth-server. We maintain Storybook for all FxA emails as a single source of truth for documentation; every email should have a clear description noting when and why we send the email, and all email states should be accounted for when updating or creating a new email template. Check out our docs on Storybook deploys with CircleCI for details on how to preview changes in PRs or to send a link out to anyone who may need to see FxA email copy or documentation. note Emails previewed in HTML are meant to be a rough representation of what an email will look like in an email client. They're essentially identical and MJML helps us with consistency, but keep in mind you're previewing in a browser when emails may be viewed in email clients in practice. We couple Storybook with the merge-ftl grunttask to display strings from our en.ftl files (see the l10n section for more details). At the time of writing, Storybook is our way to preview or manually check the English strings we ultimately pass to translators, and our tests cover the English fallback copy. In addition to running locally, Storybook can be built into a static format. This is ultimately what happens in our CI. This static format is then deployed and manual QA may be conducted against it. To test what this will look like, run yarn build-storybook then navigate to the storybook-static folder and run http-server . -p 8081. From there, simply navigate to and you will see the resulting static build. (If you don't have http-server installed, simply run npm install -g http-server) info The previews in Storybook are generated using the mjml-browser module and not the de facto mjml module. The difference is subtle but important. As the name indicates, mjml-browser is designed to render from a browser context, whereas mjml uses a nodejs context. Ideally these modules would be identical, but at the time of writing this, there are a couple of minor differences. Our code introduces a couple work arounds to achieve parity with the way templates would be rendered using the mjml module. If styles in sent out emails ever seem off, consider this discrepancy as an unlikely but potential source of error. write-emails-to-disk.js Before Storybook, the only way to preview emails was to run yarn write-emails which runs the write-emails-to-disk.js script. This runs through each Mailer.prototype.templateNameEmail function and writes its output to disk. This script has a lot of limitations that Storybook makes up for, but is still around because it actually creates an instance of our Mailer and gives us a more production-like output with real links generated from the server with UTM parameters. Because of this, it's also useful sometimes for debugging. At the time of writing, we are maintaining use of the script until it can be phased out. Styles We use scss stylesheets compiled to CSS and inlined by MJML for maximum mail client compatibility. We maintain shared stylesheets with common styles and variables, and template or partial-specific stylesheets scoped to that template or partial. FxA created a styleguide (click on a recent commit, and go into "fxa-settings"; also see "styling components" in fxa-settings) for engineers to reference convenience classes provided by Tailwind, which we use in other major front-end packages. 
While we would love to use Tailwind itself, it would have added even more complexity to the build pipeline and was not immediately compatible with MJML. Instead, we use class name conventions and styles that mirror Tailwind classes as using the closest px value to the design guide for consistency across FxA's CSS. Use SCSS variables set to mirror Tailwind's values, like for colors, margin, and padding, typically in the global file. Try to use existing, or create new, helper utility classes, when needed, that are similar to Tailwind's classes. $s-2: 8px; $s-5: 20px; .mt { &-2 { margin-top: $s-2 !important; } &-5 { margin-top: $s-5 !important; } ... Use @extend and placeholder selectors where appropriate. .font-sans { font-family: $font-sans !important; } .link-blue { @extend .text-blue-400, .font-sans; text-decoration: none; } %text-body-common { @extend .font-sans; } Then use the placeable: .text-body-no-margin div { @extend %text-body-common; } .text-body div { @extend %text-body-common; @extend .mb-5; } If it makes more sense to scope changes to a certain layout file or component file, do so. You'll have to import any files that have references you need to use, and reference any variables with global. in front. @use '../../global.scss'; @use '../../layouts/fxa/index.scss'; .text-body-grey-no-margin div { @extend %text-body-common; color: global.$grey-500 !important; } .text-body-grey { @extend .text-body-grey-no-margin; div { @extend .mb-6; } } MJML styling caveats MJML internally adds some default styling to their elements and use !important for better coverage of mail client quirks. This means if we add custom styles to overwrite theirs, we inevitably also need to use !important. With that in mind, we should not use it unless we explicitely need to. Sometimes, styling classes is not always how it seems due to how MJML compiles into HTML, and you may need to add div or td after the class to target that specific element. Every template has either an includes.json file or an includes.ts file where email subjects and optionally, actions, are housed. These are pulled in and localized before the email is rendered because 1) these values are needed in layout files and aren't easily localized since "subject" goes inside an mj-title and "action" goes in a script in metadata.mjml (where would we insert the Fluent IDs in the DOM?) and 2) we need to return a localized "subject" back to the Mailer anyway. Use includes.json unless logic is required to determine the subject or action like so: { "subject": { "id": "verify-subject", "message": "Finish creating your account" }, "action": { "id": "verify-action", "message": "Confirm email" } } Then in the corresponding FTL file: verify-subject = Finish creating your account verify-action = Confirm email If you need logic, you must create a function that is returned at the import step after checking for the template name. An example of includes.ts: import { GlobalTemplateValues } from '../../../renderer'; const getSubject = (numberRemaining: number) => numberRemaining === 1 ? 
'1 recovery code remaining' : '<%= numberRemaining %> recovery codes remaining'; export const getIncludes = (numberRemaining: number): GlobalTemplateValues => ({ subject: { id: 'lowRecoveryCodes-subject', message: getSubject(numberRemaining), }, action: { id: 'lowRecoveryCodes-action', message: 'Confirm email', }, }); export default getIncludes; The corresponding FTL file: lowRecoveryCodes-subject = { $numberRemaining -> [one] 1 recovery code remaining *[other] { $numberRemaining } recovery codes remaining } And then, in the Renderer, check for the template and dynamically import the required file with the argument it expects: if (context.template === 'lowRecoveryCodes') { return ( await require('../emails/templates/lowRecoveryCodes/includes') ).getIncludes(context.numberRemaining); } Localization (L10n) note See also this deep dive on Localization. Strings are automatically extracted to the fxa-content-server-l10n repo where they reach Pontoon for translations to occur by our l10n team and contributors. This is achieved by concatenating all of our .ftl (Fluent) files into a single auth.ftl file with the merge-ftl grunttask, and the extract-and-pull-request.sh script that runs in fxa-content-server-l10n on a weekly cadence. Non-email strings that must be translated are placed directly in lib/l10n/auth.ftl, under any brands we have set to message references. Email strings for translation are placed in a nearby ( templates/[templateName]/en.ftl or partials/[partialName]/en.ftl). Fluent requires a Fluent ID to find the translated string in other languages, but MJML doesn't support custom attributes since an MJML element may produce many HTML elements. We pass our email templates into @fluent/dom and provide Fluent an FTL ID by ensuring strings wrapped in a DOM element, like a span, where we can supply the ID via data-l10n-id. We don't have a hard rule for FTL ID naming but generally we start the ID matching the template, partial, or variable name, or a shortened version of it which may be snake-case or camelCase, followed by a short, snake case summary of the text. tip You must use curly quotes for strings in our MJML, plaintext, and FTL files, except in comments. They're considered more proper for copy and the l10n team will push back against straight quotes. subscriptionSupportContact MJML partial: <mj-text <span data- Thank you for subscribing to <%- productName %>. If you have any questions about your subscription or need more information about <%- productName %>, please <a data-contact us</a>. </span> </mj-text> We use JSON.stringify to ensure all values are JSON strings as expected. Note that at the time of writing, we have a spike open for l10n improvements across FxA (FXA-4477), including not needing to specify data-l10n-args. We also use Fluent to localize our plaintext. Strings in plaintext should follow a fluent-id = "default value provided" pattern where value of fluent-id is same as data-l10n-id attribute of the corresponding markup element. If fluent-id is present in Fluent bundle, the text will be localized, else it will be replaced with the fallback value present. In cases where we don't need localization, like for directly outputting a variable, use EJS instead and it will be rendered as-is. subscriptionSupportContact plaintext partial: subscriptionSupportContact-plaintext = "Thank you for subscribing to <%- productName %>. 
If you have any questions about your subscription or need more information about <%- productName %>, please contact us:" <%- subscriptionSupportUrl %> Every variable should have a comment to help translators with context. It can also be helpful to let translators know what a word's intent is if it can be ambiguous, or to let them know if something is followed by a link. subscriptionSupportContact FTL: # Variables # $productName (String) - The name of the subscribed product, e.g. Mozilla VPN subscriptionSupportContact = Thank you for subscribing to { $productName }. If you have any questions about your subscription or need more information about { $productName }, please <a data-contact us</a>. # After the colon, there's a link to subscriptionSupportContact-plaintext = Thank you for subscribing to { $productName }. If you have any questions about your subscription or need more information about { $productName }, please contact us: If the element you need translated is already wrapped in a non-MJML tag, like b, or li, supply the data-l10n-id on that element instead of creating an extra span DOM element. <b data-Payment details:</b> You do not need a data-l10n-id on strings that only contains a variable since the variable won't be localized. <mj-text<%- code %></mj-text> Fluent will overlay the translation onto the source fragment preserving attributes like class and href from the source and adding translations for the elements inside. warning If you need to change a string, you must also update the Fluent ID. Generally speaking, we just append a -2 or -v2 to the string if it's a rewording or we create a new ID entirely if the copy is significantly different. We must do this because IDs are saved in Pontoon and tied to translations for the original string. If you change a variable name and not the string text around it, technically you also need a new ID since the string is not identical. However, to not lose existing translations, you can also find-and-replace the variable name in that ID across locales in the l10n repo directly before or after your PR in fxa is merged. You must coordinate with the l10n team if you plan to do this. See a PR where we did this. At the time of writing, Storybook is our way to preview or manually check the English strings we ultimately pass to translators, and our tests cover the English fallback copy. Images and localizing alt text You must provide a width on mj-image tags. Otherwise, the parent width will be used in at least MacOS' native Mail app, resulting in large, 100% width images. See this PR for more details. Since we must pass Fluent an FTL ID for each string to be localized, localizing alt text for images is tricky. We use mj-html-attributes to add a custom attribute where we need them, and all of our images are given these HTML attributes in an images.mjml file pulled into every layout file: <mj-selector <mj-html-attributesubplat-footer-mozilla-logo</mj-html-attribute> </mj-selector> In MJML layout: <!--- remember to always provide a width ---> <mj-image </mj-image> In corresponding FTL files: subplat-footer-mozilla-logo = <img data- We could have technically localized strings this way as well rather than wrap text elements in spans, but that would have been significantly messier and confusing. Bounces and complaints SES delivery, bounce and complaint notifications are published to SQS queues. As well as emitting metrics (see below), we also store bounce records in the auth db whenever a bounce or complaint occurs. 
The email service then checks those records against thresholds defined in the config, and if any threshold is violated, sending will fail. The bounce and complaint handling code is in lib/email/bounces.js, but long-term we want to migrate to the email service's implementation instead, which was written some time ago but has not been deployed yet. The motivation for moving is partly semantic, because bounce records don't really belong in the auth db, but also security, because the email service should not need access to the auth db.

Bounce types

Source: Bounce types | Amazon SNS notification contents for Amazon SES

Metrics

We emit metrics when emails are sent, when they are delivered, and when they bounce or complaints are received. A regrettable decision was made to treat complaints as a type of bounce when the metrics were first implemented, which means some of the numbers are unintuitive. Specifically, you might expect that count sent = count delivered + count bounced. That's not true. Instead, count sent = count delivered + count bounced - count complained. (For example, if 100 emails are sent, 95 are delivered, 5 hard-bounce, and 3 of the delivered messages draw complaints, the bounce count is reported as 8, and 95 + 8 - 3 = 100.)

The metrics code is slightly confusing because, as mentioned in the previous section, we haven't finished migrating to the email service yet. That means there's metrics code we're using right now and other stuff we're not using yet, which is waiting for the email service deployment. The stuff we're using right now is in lib/email/delivery.js and lib/email/bounces.js. These modules receive events directly from SES and should be pretty straightforward to understand. The stuff we're not using yet is in lib/email/notifications.js, which receives events from the email service (using the same format as SES for consistency). That queue won't receive any events until the above-linked PR is deployed. The story behind it is we want to keep the metrics code in the auth server, even though bounce and complaint handling is moving away. The two handlers are designed to co-exist, so it's fine for them both to be in operation when the email service stuff gets deployed; they'll just compete for events without duplicating any metrics. When we're happy that the email service is behaving correctly, we should remove the old SQS handlers from the auth server.

Tests

Historically the email tests have been a burden to maintain because there are so many of them, and it's easy for such a large volume of tests to obscure the presence of bugs. The main test cases are in test/local/senders/email.js, declared as data in maps called TESTS and COMMON_TESTS. Taking a declarative approach like this made it easier to get a feel for the test coverage and spot gaps in it. It also ensured we had common test cases that are applied to every email, e.g. we can make sure there are no HTML character entities in the plain text emails and that required headers are always set. There is also a "partials" test section for testing stateful partials.

To test values other than what's in the MESSAGE const at the top of the file, use the updateTemplateValues helper function. In this example, productName will be updated and then the tests above it will be run with the new value, undefined.

['templateNameEmail', new Map<string, Test | any>([
  ['subject', { test: 'equal', expected: 'Expected subject' }],
  ['headers', new Map([/* header tests */])],
  ['html', [/* html tests */]],
  ['text', [/* plaintext tests */]],
]), {updateTemplateValues: templateValues => ({...templateValues, productName: undefined})}],

We also have a functional-tests package containing end-to-end tests for our emails.
You may need to update or add to them depending on your changes.

Bulk mailer

There have been unfortunate occasions in the past where it became necessary to manually send email out to large subsets of our user base. We have scripts/bulk-mailer.js for that purpose. Run node scripts/bulk-mailer --help to see usage information.

How do I…

...change an existing template?
- Find the template you want to change in lib/senders/emails/templates.
- Make sure you update the HTML, plaintext, FTL, and Storybook forms of your template if applicable. You will need a new Fluent ID for new strings; see the l10n section for more info.
- If you need to make changes in layouts or partials, ensure you don't break other templates.
- Change or add test data in test/local/senders/emails.ts.
- Bump the template version(s) in lib/senders/emails/templates/_versions.json, so that metrics can attribute any changes to the template change.
- If you're changing strings, make sure you've updated the FTL ID; more details above.
- Be sure to run Storybook and make any needed changes there as well.

...add a new template?
- Add HTML (index.mjml), plaintext (index.txt), FTL (en.ftl), includes (includes.json), and index.stories.ts in a new directory, lib/senders/templates/[templateName].
- Use MJML (HTML) and EJS (HTML, plaintext, includes.json) to create your new template. Ensure the rich HTML and plaintext versions render as expected, the FTL file is properly filled out, and that Storybook includes documentation as well as displaying all states.
- Add a version property to lib/senders/templates/_versions.json. Set it to 1.
- Add a corresponding method to lib/senders/email.js. Make sure that method calls send with the subject and any template data you need.
- Invoke your method from the code as mailer.send...Email.
- Add new test data in test/local/senders/email.js.
- If all the pieces are hooked up at the time (the endpoint, etc.), add the end-to-end test to functional-tests/lib/email.ts. Otherwise, make sure to file a follow-up issue.

...change an email subject?
- Make the change in the relevant includes.json or includes.ts file, updating the FTL ID as well.
- Update the test data in test/local/senders/email.js.

...view rendered templates locally?
- Run yarn storybook and see all emails, HTML and plaintext, with all states, alongside documentation.
- Optionally run node scripts/write-emails-to-disk then open the .mail_output directory in your browser. A rendered copy of every* template will be there.
- Note this is not a substitute for testing changes in a real mail client. Email rendering is famously unreliable.

*Some email methods render two templates based on conditional logic. Using yarn write-emails only runs through each method once, providing the Mailer the message constant set in the script. You won't see every single template or state this way.
https://mozilla.github.io/ecosystem-platform/reference/emails
CC-MAIN-2022-27
refinedweb
3,809
55.64
Backends: Well Done, With a Side of Load Balancing - Part II

Varnish HTTP cache software uses directors to provide highly configurable load balancing and failover.

This allows us to route requests to the correct server, but I'm pretty sure none of you, smart cookies that you are, would be content with just one backend per domain. So much can go wrong, from floods to aliens to the intern unplugging the server to recharge his phone (Pokémon GO is such a battery hog!). Hence you will want to add some redundancy to the mix, so add two servers:

backend alpha2 { .host = "192.168.0.201"; }
backend bravo2 { .host = "192.168.0.202"; }

And you'll plan on doing some round-robin with each pair (alpha/alpha2 and bravo/bravo2). But how can we do that? Just for the sake of it, let's have a look at one possible VCL implementation (using vmod-var). It's probably not the best one, and it's also not a true round-robin because of concurrency issues, but close enough:

import std;
import var;

backend alpha1 { .host = "192.168.0.101"; }
backend alpha2 { .host = "192.168.0.201"; }
backend bravo1 { .host = "192.168.0.102"; }
backend bravo2 { .host = "192.168.0.202"; }

sub vcl_init {
    var.global_set("alpha_rr", "1");
    var.global_set("bravo_rr", "1");
}

sub vcl_recv {
    if (req.http.host == "alpha.example.com") {
        if (var.global_get("alpha_rr") == "1") {
            set req.backend_hint = alpha1;
            var.global_set("alpha_rr", "2");
        } else {
            set req.backend_hint = alpha2;
            var.global_set("alpha_rr", "1");
        }
    } else if (req.http.host == "bravo.example.com") {
        if (var.global_get("bravo_rr") == "1") {
            set req.backend_hint = bravo1;
            var.global_set("bravo_rr", "2");
        } else {
            set req.backend_hint = bravo2;
            var.global_set("bravo_rr", "1");
        }
    } else {
        return (synth(404, "Host not found"));
    }
    if (!std.healthy(req.backend_hint)) {
        return (synth(503, "No healthy backend"));
    }
}

So, yeah, that works, but is it really satisfying? I mean, we are talking about a simple round-robin with two servers here, and the code is already becoming unwieldy; plus, think about adding a third or fourth server to the clusters. That code will soon become unmaintainable. Fortunately, we have a better way: directors! And the first one you will see is the round-robin director. A director is an object, provided by a VMOD, describing a set of backends combined with a selection policy. In the round-robin case, the policy is... well... round-robin. That was kind of obvious.
With it, our example becomes:

import std;
import directors;

backend alpha1 { .host = "192.168.0.101"; }
backend alpha2 { .host = "192.168.0.201"; }
backend bravo1 { .host = "192.168.0.102"; }
backend bravo2 { .host = "192.168.0.202"; }

sub vcl_init {
    # first we must create the director
    new alpha_rr = directors.round_robin();
    # then add the backends to it
    alpha_rr.add_backend(alpha1);
    alpha_rr.add_backend(alpha2);
    # do the same thing for bravo
    new bravo_rr = directors.round_robin();
    bravo_rr.add_backend(bravo1);
    bravo_rr.add_backend(bravo2);
}

sub vcl_recv {
    # pick a backend
    if (req.http.host == "alpha.example.com") {
        set req.backend_hint = alpha_rr.backend();
    } else if (req.http.host == "bravo.example.com") {
        set req.backend_hint = bravo_rr.backend();
    } else {
        return (synth(404, "Host not found"));
    }
    if (!std.healthy(req.backend_hint)) {
        return (synth(503, "No healthy backend"));
    }
}

Much better! Now, adding a server to a cluster is only a matter of adding a new line with ".add_backend()" in vcl_init, one point for maintainability. We also reap one extra benefit from using the round-robin director: it will filter out all the unhealthy backends in its pool before selecting one, meaning it won't return a sick backend, and that's something that wasn't done in your VCL implementation. If all its backends are sick, it will return a big nothing ("NULL", actually), and in this case, in our code, std.healthy() will return false, correctly triggering the error. And all of it is done automatically because we have configured our probes correctly in our previous article.

Let Me Weigh in on Something

Acute readers will have noticed that I wrote about simple round-robin as opposed to the "weighted" variant, and the reason for this is simple: Varnish doesn't have one! My take on this is that we don't really need it, because we have the random director in vmod-directors. It allows you to add backends to it, exactly as before, but with one extra parameter: the weight:

import directors;

backend server1 { .host = "127.0.0.1"; }
backend server2 { .host = "127.0.0.2"; }
backend server3 { .host = "127.0.0.3"; }

sub vcl_init {
    new rand = directors.random();
    rand.add_backend(server1, 4);
    rand.add_backend(server2, 3);
    rand.add_backend(server3, 3);
}

Then, when rand.backend() is run, it will roll a 10-sided die. Check out the result:
- 1-4: return server1.
- 5-7: return server2.
- 8-10: return server3.

So, on average, server1 gets 40% of the requests, and server2 and server3 30% each. The difference with a weighted round-robin (wrr) policy is that there's no global context, so it's cleaner and easier to code, removing concurrency issues while maintaining the average spread across servers. The negative point is that often, people fear that the PRNG (aka the dice-roller) will go all Debian on them and, in a streak of stubbornness, will send 1000 consecutive requests to the same server, killing it with extreme prejudice. To be fair, that's a mathematical possibility; however, in the few years I've been working with Varnish, it has never been an issue. I know we live in a cloudy age where you can spawn thousands of identical servers with just one command, but if you ever need a weighted content spread, please give this little guy a try; you won't regret it.

Failure Is Not an Option, but It's Good to Have a Backup Plan Anyway

Round-robin and random directors are fine for an active-active setup, but what if you need an active-passive one?
You could have an "error backend" used to deliver pretty error pages, instead of a synthetic page as we are currently doing in our VCL. And when I say "pretty", you can assume "large", so that's not something you want hardcoded in your VCL; plus, it's easier to just let the web designers have their own backend and just point Varnish to it.

Turns out vmod-directors also has the answer to this (in case you doubted it, everything is staged here) with the fallback director: this one will keep the backends in a list and simply offer the first healthy one, so if the first one is always healthy, it always gets picked. At this point, I'm sure you can figure out how to use it, so let's mix things up and learn something new with this example:

import directors;

backend alpha1 { .host = "192.168.0.101"; }
backend alpha2 { .host = "192.168.0.201"; }
backend bravo1 { .host = "192.168.0.102"; }
backend bravo2 { .host = "192.168.0.202"; }
backend err    { .host = "192.168.0.150"; }

sub vcl_init {
    new alpha_rr = directors.round_robin();
    alpha_rr.add_backend(alpha1);
    alpha_rr.add_backend(alpha2);

    new bravo_rr = directors.round_robin();
    bravo_rr.add_backend(bravo1);
    bravo_rr.add_backend(bravo2);

    new alpha_fb = directors.fallback();
    alpha_fb.add_backend(alpha_rr.backend());
    alpha_fb.add_backend(err);

    new bravo_fb = directors.fallback();
    bravo_fb.add_backend(bravo_rr.backend());
    bravo_fb.add_backend(err);
}

sub vcl_recv {
    if (req.http.host == "alpha.example.com") {
        set req.backend_hint = alpha_fb.backend();
    } else if (req.http.host == "bravo.example.com") {
        set req.backend_hint = bravo_fb.backend();
    } else {
        return (synth(404, "Host not found"));
    }
}

THE CROWD GOES WILD! We are adding a director to a director! And the thing is, it works exactly as you think it would.

To explain how it works, I have to let you in on a little secret: internally, backends and directors are the same object, the difference being in a few methods implemented by one but not the other and vice-versa. So, we can easily define the health of a director recursively: if any of its sub-directors or backends is healthy, then the director is healthy. And as said earlier, directors will filter out sick backends, and likewise sick sub-directors, before a selection, hence the behavior is exactly what our intuition predicted.

Also, note that the err backend is defined once, but added to two directors, enabling some pretty crazy setups. Of course, the probe belongs to the backend, so a backend won't get probed twice as much because it's in two directors. That may seem an unnecessary precision, but I've been asked about it, and I'm sure that can benefit others, so, here you go.

Advanced Load-Balancing: More Loaded, More Balanced

Up until now, we only dealt with backends with fixed content: alphaX and bravoX contained respectively the alpha.example.com and bravo.example.com domains and err contained the error pages. But in what we'll see next, we'll let the director decide what backend to use, based on the request, so that we can consistently go fetch the same content on the same machine. Unsurprisingly, the methods will look like:

new hash = directors.hash();
hash.add_backend(alpha1, 3);
hash.add_backend(alpha2, 4);

set req.backend_hint = hash.backend(req.url);

Two points worth noting here:
- this director is also weighted, pushing more requests to the heavier backends.
- the .backend() method takes a string as input (here I gave the request's URL, but it can be any string), and given the same set of healthy backends, with the same weights, the same string input will return the same backend.

You're probably thinking "Ok, but for a given string, how do I know what backend will be returned?", and while it's a very valid question, I only have a disappointing answer for you: you don't, and you don't need to. It's the same concept as for databases: you don't need to know the address where the information is stored, you just need to know that you can access it again by using the same key. Sadly, we can't use this to split our traffic between alpha and bravo machines, but we can still do something pretty sexy. Let's look at a simplified setup:

A layer-4 (TCP level) load balancer is often used because it is fast, simple to put in place, and you get Direct Server Return, which is neat when you need to deliver high volumes. However, it works at the connection level, so, as is the case here, you can end up requesting content that is already cached, just not on the selected server, because the load balancer is oblivious to the HTTP layer. It's probably not too bad for Varnish1 and Varnish2, but it means we hit the backend more than we "should".

The solution is to replace the L4 balancer with Varnish and to use the hash director (nb: there's no point in using the round-robin director because, again, the two /foo requests would end up on two different Varnish servers). That Varnish server has no ambition to cache any content, and will just pass requests (meaning: no need for a lot of RAM or disk, just some good NICs), leaving the caching to Varnish1 and Varnish2. That's more like it! The connection grouping of requests is not an issue anymore as only the url is important, and as planned, the two /foo requests go to the same server. Please note that almost all distributions are possible; I only put /bar and /baz together for symmetry. For example, all requests could have been hashed to the same server, the only constant being that the two /foo requests have the same hash and so go to the same caching server.

Note: There's a funny parallel/contradiction to show here with the random director: mathematically, there's a chance that you receive a thousand requests that all get hashed to the same server, but somehow, that doesn't prevent people from using it.

Diminishing backend pressure is only one of the advantages of using Varnish in front of the caching layer. As we removed duplication of content, we effectively doubled our cache capacity! And every new server added will give us more capacity in addition to more bandwidth, whereas in a round-robin setup we'd only gain bandwidth (instead of adding servers to expand storage, you can also look at the Massive Storage Engine, just sayin'). But (there's always a "but"), we lost something along the way, and I'm not talking about freckles or sense of wonder. No, I mean redundancy. If either Varnish1 or Varnish2 goes down, the cached objects in it are gone, because, as we wanted, there's no content replication anymore (which was the whole point of the setup, so we can't really complain).

Looks Like You Can't Have Your Cake and Eat It

Or can you? Remember that you can stack directors? Let's do just that: by hashing toward a round-robin pool, we get to have increased caching storage, AND keep the redundancy. By adding more round-robin pools we gain cache capacity, and by adding servers in them, we increase redundancy.
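The original article illustrates this stacked setup with a diagram; as a stand-in, here is a small, purely illustrative Python sketch (not VCL, and not how Varnish implements its directors) that models the two-level routing just described. The pool and server names are invented for the example.

from itertools import cycle
from zlib import crc32

# Two round-robin pools of two caching Varnish servers each (names are made up).
pools = {
    "pool1": cycle(["varnish1a", "varnish1b"]),
    "pool2": cycle(["varnish2a", "varnish2b"]),
}
pool_names = sorted(pools)

def pick_backend(url: str) -> str:
    # Hash step: the same URL always maps to the same pool,
    # mimicking what the hash director guarantees for a given key.
    pool = pool_names[crc32(url.encode()) % len(pool_names)]
    # Round-robin step inside the chosen pool, mimicking the round-robin director.
    return pool + "/" + next(pools[pool])

for url in ["/foo", "/bar", "/foo", "/baz", "/foo"]:
    print(url, "->", pick_backend(url))

Running it shows every request for /foo landing in the same pool while alternating between that pool's two caches, which is exactly the capacity-plus-redundancy trade-off being described.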
Buuuuuuuuuuuut (see? I told you), we are back to the initial backend pressure, since we'll cache the same content on both Varnish servers inside a pool. That can be an issue, indeed, but fortunately, we have a tool for this: Varnish High Availability can replicate content smartly without bothering the backends, allowing for retention of the cache capacity and redundancy without straining the backends. So you CAN have your cake, eat it, and not even gain weight!

Note: the number of Varnish servers inside the round-robins is the only factor in computing the backend strain; adding more pools won't help, or make it worse (but they'll expand your storage capacity).

What Lies Beyond

Time to wrap up! We are done with our introduction to directors and load balancing in Varnish, and I hope this article gave you a few ideas on how you can leverage all this in your particular setup. If you have questions or remarks about how to better use directors, please reach out to me on IRC or Twitter and share your experience!

Also, consider that we only looked at a very small part of the landscape, i.e., the directors that are readily available with Varnish, but since directors are VMODs (since 4.0), they are easy to create and use, so if your own need isn't fulfilled, have a look around GitHub, for example; the VMOD you want may already exist.

Published at DZone with permission of Guillaume Quintard, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/backends-well-done-with-a-side-of-load-balancing-p-1
CC-MAIN-2018-30
refinedweb
2,511
64.3
Facebook messenger bot framework

Contents

Introduction

koslab.messengerbot makes writing Facebook Messenger Bots easier by providing a framework that handles and abstracts the Bot API. It was originally developed using Morepath as the web request processor and the default hub implementation is on Morepath, but this library should work with any Python web framework.

Example: Writing An Echo Bot

Let's install koslab.messengerbot

pip install koslab.messengerbot

Now let's write our EchoBot in echobot.py

from koslab.messengerbot.bot import BaseMessengerBot

# bot implementation
class EchoBot(BaseMessengerBot):
    GREETING_TEXT = 'Hello!. EchoBot, at your service!'
    STARTUP_MESSAGE = {'text': 'Hi!, lets get started!' }

    def message_hook(self, event):
        text = event['message'].get('text', '')
        self.send(recipient=event['sender'], message={'text': text})

And now let's write a hub config file, config.yml.

webhook: webhook
use_message_queue: false
message_queue: amqp://guest:guest@localhost:5672//
hub_verify_token: <MY-VERIFY-TOKEN>
bots:
  - page_id: <PAGE-ID>
    title: EchoBot
    class: echobot:EchoBot
    access_token: <PAGE-ACCESS-TOKEN>

Start the bot

messengerbot_hub config.yml

Finally, proceed to follow the Messenger Platform Getting Started guide to get your bot configured and registered in Facebook.

Bot Configuration

- POSTBACK_HANDLERS Dictionary mapping of payload to name of the object method that will handle the payload. Default value is: POSTBACK_HANDLERS = {} Example:

POSTBACK_HANDLERS = { 'mypostback': 'mypostback_hook' }

def mypostback_hook(self, event):
    ...

- GREETING_TEXT Greeting text for new threads. Default value is: GREETING_TEXT = 'Hello World!'
- STARTUP_MESSAGE Message object to be sent when the Get Started menu is clicked. Default value is: STARTUP_MESSAGE = { 'text' : 'Hello World!' }
- PERSISTENT_MENU Persistent menu call_for_action buttons configuration. Default value is: PERSISTENT_MENU = [{ 'type': 'postback', 'title': 'Get Started', 'payload': 'messengerbot.get_started' }]

Bot Hooks

Following is the list of hooks that can be implemented on the bot:

- message_hook - Handles Message Received and Message Echo events.
- postback_hook - Handles the Postback Received event. This hook has a default implementation which triggers methods based on the payload value. To define the mapping, configure the POSTBACK_HANDLERS class variable.
- authentication_hook - Handles the Authentication event.
- account_linking_hook - Handles the Account Linking event.
- message_delivered_hook - Handles the Message Delivered event.
- message_read_hook - Handles the Message Read event.

Send API

The BaseMessengerBot class provides a send method to send responses to the Facebook Messenger Bot service. Parameters are:

- recipient - Recipient object. Eg: { 'id': '12345678'}
- message - Message object. Refer to the Facebook Send API reference for supported messages.
- sender_action - Sender actions. Supported values: mark_seen, typing_on, typing_off

Note: If message is defined, the sender_action value will be ignored.

A convenience method reply can also be used to send a response. Parameters are:

- event - Event object
- message - Accepts a string, callable or message object. Strings are automatically converted into a message object. A callable will be called with the event object as its parameter.

Postback Payload

Postback values may be a JSON object or a string. In the case of a postback in JSON object format, an event key is required for routing postbacks to the right handler by postback_hook. For string postback values, the whole string is treated as the event key.
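To make that concrete, here is a small, hypothetical sketch (not part of the package's documentation) of a bot that routes its own postbacks by overriding postback_hook instead of relying on the default POSTBACK_HANDLERS mapping. It assumes the raw Messenger webhook event shape, i.e. that the button payload is available at event['postback']['payload']; only BaseMessengerBot and reply come from the library as described above.

import json

from koslab.messengerbot.bot import BaseMessengerBot

class OrderBot(BaseMessengerBot):
    GREETING_TEXT = 'Hi! Ask me about your orders.'

    def postback_hook(self, event):
        # Assumed webhook shape: the payload configured on the Messenger button.
        payload = event['postback']['payload']
        try:
            data = json.loads(payload)
            # JSON payloads carry an 'event' key used for routing.
            event_key = data.get('event')
        except (ValueError, TypeError):
            data = {}
            # Plain string payloads are treated as the event key themselves.
            event_key = payload

        if event_key == 'order.status':
            self.reply(event, 'Your order is on its way!')
        else:
            self.reply(event, "Sorry, I don't know that button.")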
Session

Session Management is provided through a thin wrapper around Beaker Cache. The current conversation session may be acquired through the get_session method on the BaseMessengerBot class. The session object is dict-like and may be treated as such.

def message_hook(self, event):
    session = self.get_session(event)

Messenger Bot with AMQP

AMQP queuing is supported by the hub process. To use this, in config.yml simply set use_message_queue to true and configure the transport URI of the message queue in the message_queue setting. The queue is implemented using Kombu, so you may also use other transports that are supported by Kombu.

use_message_queue: true
message_queue: amqp://guest:guest@localhost:5672//

Conversation API

NOTE: This is a draft spec. Not yet implemented. Inputs are welcomed.

Spec

conversation: myconversation
steps:
  - message: What is your name?
    type: text
    store: name
  - message: Please share your photo
    type: image-attachment
    store: photo
  - message: Please share your location
    type: location-attachment
    store: location
  - message:
      - type: generic-template
        elements:
          - title: Summary
            subtitle: Summary
            image_url: ${data['photo']['url']}
            buttons:
              - type: postback
                title: Save
                payload: myconversation.save

Contributors

Note: place names and roles of the people who contribute to this package in this file, one to a line, like so:

- Mohd Izhar Firdaus Bin Ismail, Original Author

Changelog

1.0b5 (2016-08-09)
- Rearrange priority of event handlers so that quick reply postbacks are caught [Izhar Firdaus]

1.0b4 (2016-08-08)
- Ignore CLI when starting up hub [Izhar Firdaus]

1.0b3 (2016-08-08)
- Ensure that all child processes are killed when parent is terminated. [Izhar Firdaus]

1.0b2 (2016-07-13)
- Bug with page_id mapping. Ensure it is read as string now instead of integer [Izhar Firdaus]

1.0b1 (2016-07-13)
- Initial fully functional bot framework with hub implementation [Izhar Firdaus]
https://pypi.org/project/koslab.messengerbot/
CC-MAIN-2018-05
refinedweb
776
51.34
Introduction: Shimmering Chameleon (smart)Skirt

~ I love to sew and I'm on the LED bandwagon, oh, and it's fashion show season. This would be a unique Prom Outfit, for!

Step 1: The Code

#include <Wire.h>
#include <Adafruit_TCS34725.h>
#include <Adafruit_LSM303.h>
#include <Adafruit_NeoPixel.h> // needed for the NeoPixel strip used below

#define NUM_PIXELS 4
Adafruit_NeoPixel strip = Adafruit_NeoPixel(NUM_PIXELS, 6, NEO_GRB + NEO_KHZ800);
Adafruit_TCS34725 color_sensor = Adafruit_TCS34725(TCS34725_INTEGRATIONTIME_50MS, TCS34725_GAIN_4X);
Adafruit_LSM303 accel;

#define STILL_LIGHT // define if light is to be on when no movement. Otherwise dark

// our RGB -> eye-recognized gamma color
byte gammatable[256];
int g_red, g_green, g_blue; // global colors read by color sensor
int j;

// mess with this number to adjust TWINklitude :)
// lower number = more sensitive
#define MOVE_THRESHOLD 45
#define FADE_RATE 5

int led = 7;
double newVector;

void flash(int times) {
  for (int i = 0; i < times; i++) {
    digitalWrite(led, HIGH); // turn the LED on (HIGH is the voltage level)
    delay(150); // wait for a second
    digitalWrite(led, LOW); // turn the LED off by making the voltage LOW
    delay(150);
  }
}

float r, g, b;
double storedVector;

void setup() {
  pinMode(led, OUTPUT);
  // Try to initialise and warn if we couldn't detect the chip
  if (!accel.begin()) {
    Serial.println("Oops ... unable to initialize the LSM303. Check your wiring!");
    while (1) { flash(4); delay(1000); };
  }
  strip.begin();
  strip.show(); // Initialize all pixels to 'off'
  if (!color_sensor.begin()) {
    Serial.println("No TCS34725 found ... check your connections");
    while (1) { flash(3); delay(1000); };
  }
  // thanks PhilB for this gamma table!
  // it helps convert RGB colors to what humans see
  for (int i = 0; i < 256; i++) {
    float x = i;
    x /= 255;
    x = pow(x, 2.5);
    x *= 255;
    gammatable[i] = x;
  }
  //this sequence flashes the first pixel three times
  // as a countdown to the color reading.
  for (int i = 0; i < 3; i++) {
    //white, but dimmer-- 255 for all three values makes it blinding!
strip.setPixelColor(0, strip.Color(188, 188, 188)); strip.show(); delay(1000); strip.setPixelColor(0, strip.Color(0, 0, 0)); strip.show(); delay(500); } uint16_t clear, red, green, blue; color_sensor.setInterrupt(false); // turn on LED delay(60); // takes 50ms to read color_sensor.getRawData(&red, &green, &blue, &clear); color_sensor.setInterrupt(true); // turn off LED // Figure out some basic hex code for visualization uint32_t sum = red; sum += green; sum += blue; sum = clear; r = red; r /= sum; g = green; g /= sum; b = blue; b /= sum; r *= 256; g *= 256; b *= 256; g_red = gammatable[(int)r]; g_green = gammatable[(int)g]; g_blue = gammatable[(int)b]; // Get the magnitude (length) of the 3 axis vector // accel.read(); storedVector = accel.accelData.x*accel.accelData.x; storedVector += accel.accelData.y*accel.accelData.y; storedVector += accel.accelData.z*accel.accelData.z; storedVector = sqrt(storedVector); } void loop() { // get new data accel.read(); double newVector = accel.accelData.x*accel.accelData.x; newVector += accel.accelData.y*accel.accelData.y; newVector += accel.accelData.z*accel.accelData.z; newVector = sqrt(newVector); // are we moving if (abs(newVector - storedVector) > MOVE_THRESHOLD) { colorWipe(strip.Color(0, 0, 0), 0); flashRandom(10, 25); // first number is 'wait' delay, // shorter num == shorter twinkle // second number is how many neopixels to // simultaneously light up } #ifdef STILL_LIGHT else { colorWipe(strip.Color(gammatable[(int)r], gammatable[(int)g], gammatable[(int)b]), 0); storedVector = newVector; } #endif } void flashRandom(int wait, uint8_t howmany) { for (uint16_t i = 0; i < howmany; i++) { for (int simul_pixels = 0; simul_pixels < 8; simul_pixels++) { // get a random pixel from the list j = random(strip.numPixels()); strip.setPixelColor(j, strip.Color(g_red, g_green, g_blue)); } strip.show(); delay(wait); colorWipe(strip.Color(0, 0, 0), 0); // now we will 'fade' it in FADE_RATE steps for (int x = 0; x < FADE_RATE; x++) { int r = g_red * (x + 1); r /= FADE_RATE; int g = g_green * (x + 1); g /= FADE_RATE; int b = g_blue * (x + 1); b /= FADE_RATE; strip.setPixelColor(j, strip.Color(r, g, b)); strip.show(); delay(wait); } // & fade out for (int x = FADE_RATE; x >= 0; x--) { int r = g_red * x; r /= FADE_RATE; int g = g_green * x; g /= FADE_RATE; int b = g_blue * x; b /= FADE_RATE; strip.setPixelColor(j, strip.Color(r, g, b)); strip.show(); delay(wait); } } #ifdef STILL_LIGHT colorWipe(strip.Color(gammatable[(int)r], gammatable[(int)g], gammatable[(int)b]), 0); #endif } // Fill the dots one after the other with a color void colorWipe(uint32_t c, uint8_t wait) { for (uint16_t i = 0; i < strip.numPixels(); i++) { strip.setPixelColor(i, c); strip.show(); delay(wait); } } Step 2: The Test First I needed to conduct some tests for what I wanted to accomplish, using the materials I had. I had to test the resistance of the conductive thread for the distances I was going. I made a mock-up on the sewing machine, creating 3 rows of conductive thread, doubled up. Stainless steel is very strong, but it has a lot of resistance and can shift and become loose around pin-outs. I wanted to see the max distance the info could travel along this set-up. I set everything up with alligator clips to test the thread and make sure the code still worked as it was supposed to. Step 3: Supplies and Fashioning First I made a skirt out of a stretchy spandex fabric that was futuristic silver, and just used the wrong side. 
I had some leftover gold stretchy silk from when I hemmed a couture wedding dress and got to keep the cutoff. This allows me to have the skirt fit a range of sizes, BUT also means that I have to do a lot of hand-sewing. I chose to put the Party of the skirt at the bottom, which has a large enough circumference to be unaffected by the stretch limitations. Most of the electronics are from Adafruit. The color-tip glove I made from a pair of beautiful gloves from the 40's. I love mixing old with new! I marked with disappearing ink where I wanted all the neopixels to go. I then sewed them on with a small amount of regular and conductive thread. The real connections would come later.

Step 4: The Brain

This combo is the brain of the system. It is a Flora, attached to the Color Sensor attached to the Accelerometer/compass. I sewed everything with the 2-ply conductive thread, making solid connections with plenty of passes and spacing lines sufficiently apart. For the pin-out connections of data, power and ground that would be traveling down the length of the skirt, I used beefier, insulated wire. You could just use the 3-ply conductive thread, but you'd need to have at least 6 wires bundled together for each of the three lines. I then sewed the patch onto the skirt, feeding the battery wires through a small opening and into a pocket I created for the LiPo. The battery is 1200mAh, 3.7V.

Step 5: My Work Is Cut Out for Me

The first pic is the first neopixel of the 8 neopixel train. The rest would all be connected with 3-ply stainless steel conductive thread, strung with semi-precious stones and beads. These would all add a nice weight and rigidity, plus look more intentional than just dotted lines of wire connecting the neopixels. The trick was finding a needle eye big enough to accept the steel thread but small enough to fit through the stones/beads. There were a lot of beads that didn't make the cut. : D

Step 6: The Underneath

I wanted to cover all the exposed thread so as to minimize any chance of shorts. I used iron-on interfacing, clipping to allow curves, and ironed on with a single layer of cheesecloth between the skirt and iron. The initial wires I just wove in between the serged seam.

Step 7: In Action!

I have entered this into the Code Creations Contest and would love a vote if you think I deserve one! : D ~ Cynthia

Runner Up in the Coded Creations
https://www.instructables.com/Shimmering-Chameleon-smartSkirt-/
CC-MAIN-2021-49
refinedweb
1,380
65.22
TropSoils/NCSU Technical Report for 1986-87
Draft, July 1987
Tropical Soils Research Program
Department of Soil Science
North Carolina State University
Raleigh, NC 27695-7619

Supported by TropSoils, the Soil Management Collaborative Research Support Program, under a grant from the United States Agency for International Development

Edited by Pedro A. Sanchez and Cynthia L. Garver

TABLE OF CONTENTS

1. INTRODUCTION
   Highlights ................................................... 1
   Personnel .................................................... 2
2. RESEARCH NETWORK ............................................. 7
   Yurimaguas Workshop .......................................... 8
   Network Development in Latin America: RISTROP ................ 16
   Network Development in Africa and Asia ....................... 19
3. LEGUME-BASED PASTURES ........................................ 24
   Persistence of Grass-Legume Mixtures under Grazing ........... 27
   Evaluation of Animal Preference in Small Plots ............... 42
   Nitrogen Contribution of Legumes in Mixed Pastures ........... 44
   Potassium Dynamics in Legume-Based Pastures .................. 50
   Sulfur Accumulation in Grazed Pastures ....................... 56
   Pasture Reclamation in Degraded Steeplands ................... 58
   Pasture Reclamation via Herbicides ........................... 63
   Legume Shade Tolerance ....................................... 67
   Extrapolation in Farmer Fields ............................... 69
4. LOW-INPUT SYSTEMS ............................................ 72
   Central Experiment: Transition to Other Technologies ......... 73
   Weed Control in Low-Input Cropping Systems ................... 88
5. AGROFORESTRY SYSTEMS ......................................... 96
   Multipurpose Tree Selection for Alleycropping ................ 97
   Mulch Quality and Nitrogen Cycling ........................... 104
   Alleycropping in Ultisols .................................... 112
   Alleycropping in Alluvial Soils .............................. 114
   Inga Rice Interface in Alluvial Soils ........................ 121
   Improved Fallows ............................................. 127
   Legume Cover Crops for Peach Palm ............................ 129
   Mycorrhizae Inoculation in Tropical Palm Nurseries ........... 132
   Nutritional Requirements of Peach Palm ....................... 134
   Nutritional Requirements of Other Amazonian Fruit Trees ...... 137
   Nutritional Requirements of Gmelina arborea .................. 140
6. CONTINUOUS CULTIVATION ....................................... 143
   Conservation Tillage under Continuous Cropping ............... 144
   Conventional Tillage under Continuous Cropping ............... 156
   Central Continuous Cropping Experiment ....................... 160
   Nitrogen Carryover in Rotation and Strip Intercropping Systems 164
   Maintenance of Phosphorus Fertility under Continuous Cropping  173
   Potassium Buffering in Yurimaguas Ultisols ................... 180
   Weed Population Shifts in Continuous Cropping ................ 186
7. COMPARATIVE SOIL DYNAMICS .................................... 191
   Comparative Soil Dynamics under Different Management Options . 192
   Soil Organic Matter Pools as Affected by Management Options .. 206
   Root Production and Turnover in Different Management Options . 210
   Root Distribution in Pastures and Alleycropping Options ...... 216
   Occurrence of Mycorrhizae in Crops, Pastures, and Tree Species 219
   Effect of Different Management Options in Mycorrhizal Infection 222
   Rhizobium Nodulation in Grain Legumes ........................ 227
   Soil Macrofauna as Affected by Management Practices .......... 229
   Nitrogen Mineralization and Leaching as Affected by Management Practices 233
   Nitrogen Mineralization and Soil Moisture Content ............ 241
8. SOIL CHARACTERIZATION AND INTERPRETATION ..................... 243
   Utilization of the Fertility Capability Classification System  244
   Alluvial Soils of the Amazon Basin ........................... 252
   Volcanic Ash Influence in Transmigration Areas of Sumatra .... 271
9. SOIL FERTILITY MANAGEMENT IN OXISOLS OF MANAUS ............... 278
   Nutrient Dynamics ............................................ 279
   Phosphorus Fertilizer Placement and Profitability ............ 281
   Lime and Gypsum Applications ................................. 284
   Nitrogen Management .......................................... 290
10. BRASILIA: IMPROVING AND MODELING SOIL TEST INTERPRETATIONS .. 295
   Comparison of Mehlich-1, Mehlich-3, Bray-1, and Anion Exchange Resin P Soil Test 296
   Effects of Soil Texture, Zinc, and pH on Corn Yield and Plant Zinc Concentration 301
   A P-test Interpretation Model for Kaolinitic Soils Using Mehlich-1 and Clay Content 307
   A P-test Interpretation Model for Oxisols Using Mehlich-3, Resin 310
11. HIGH JUNGLE EXTRAPOLATION: PICHIS AND ALTO HUALLAGA VALLEYS . 314
   Runoff and Erosion Process in a Primary Forest Catchment of the Humid Tropical Steeplands 315
   Establishing a Plant Canopy in Eroded Ultisol Steeplands ..... 319
   Rainfed Low-Input Crop Rotation Patterns in Alluvial Soils Subject to Seasonal Flooding 323
   Evaluation of Pasture Germplasm under a Perudic Rainfall Regime 329
   Recuperation of Degraded Pastures Dominated by Homolepsis aturensis 334
12. SITIUNG: EXTRAPOLATION TO TRANSMIGRATION AREAS OF INDONESIA . 336
   Response of Upland Crops to Potassium and Lime Applications .. 337
   Agroforestry Research Needs .................................. 349
13. PUBLICATIONS ................................................. 359

HIGHLIGHTS FOR 1986

This is the fifteenth continuous year of operations of North Carolina State University's Tropical Soils Program and the fifth as part of TropSoils, the Soil Management Collaborative Research Support Program financed mainly by the U.S. Agency for International Development and collaborating host institutions: INIPA in Peru, EMBRAPA in Brazil, and AARD in Indonesia. The year 1986 was simultaneously very difficult but also very rewarding. We received a 25% budget cut from our donor agency in February 1986, which resulted in the discontinuation of our field research in Indonesia in August 1986 and in major adjustments in research activities in Peru, Brazil, and on campus. For the first time in the program's history, we were unable to offer new graduate assistantships. But 1986 was also a very rewarding and productive year, partly due to the momentum of all our projects in full operation and partly by the implementation of two major initiatives: the soil management research network, and comparative soil dynamics with emphasis on tropical soil biology.

INIPA's building program at Yurimaguas is almost complete and the station was officially inaugurated by the Minister of Agriculture. Although modest by world standards, the Yurimaguas Experiment Station now has sufficient office, laboratory, and computer facilities to support long-term experiments and a 30-person training and conference center for on-the-job training. Three major international workshops were held at Yurimaguas in 1986: a Latin American Agroforestry Workshop in cooperation with ICRAF; the Third Tropical Soil Biology and Fertility Workshop; and the Latin American Soil Management Workshop, which trained 31 professionals from the "planting stick to the computer" in 3 weeks and helped launch the network. Extrapolation activities were also facilitated in tropical Asia and Africa through IBSRAM's Acid Tropical Soils Network. These network activities have paved the way for meaningful technology validation and transfer for 37 countries throughout Latin America, Asia, and Africa. Work reported herein is carried out in close collaboration with INIPA in Peru, EMBRAPA in Brazil, AARD in Indonesia, several international centers, USAID Missions, and our sister TropSoils universities.

PERSONNEL

NORTH CAROLINA STATE UNIVERSITY

Faculty
Robert H. Miller - Department Head
Pedro A. Sanchez - Program Coordinator
Dale E. Bandy - Chief, NCSU Mission to Peru
T. Jot Smyth - Assistant Professor of Tropical Soils
José R. Benites - Project Leader, Yurimaguas
Michael K. Wade* - Project Leader, Indonesia
Julio C. Alegre - Assistant Professor of Soil Physics, Yurimaguas
Dennis del Castillo - Project Leader, Pichis Palcazu
Stanley W. Buol - Professor of Soil Genesis and Classification
D. Keith Cassel - Professor of Soil Physics
Fred R. Cox - Professor of Soil Fertility
Charles B. Davey - Professor of Forestry and Soil Science
Eugene J. Kamprath - Professor of Soil Fertility
Robert E. McCollum - Associate Professor of Soil Fertility
Kenneth Reategui - Research Associate, Pichis Palcazu
Robert J. Scholes - Post Doctoral Fellow in Plant Ecology (Yurimaguas)
Mary C. Scholes - Post Doctoral Fellow in Soil Biology (Yurimaguas)
* Resigned during the year.
George C. Naderman, Jr. Staff Bertha I. Monar Mariela Gonzalez Sue Florindez* Patricia Gowland Tonya K.
Forbes Olinda Ayre Rafael Roman Valeria Medeiros* Elizabeth Phillips Amparo Ayarza Extension Soil Management Specialist Program Administrator Administrative Assistant, Lima Office Administrative Assistant, Yurimaguas Research Technician Research Technician Soil Analysis Laboratory, Yurimaguas Plant Analysis Laboratory, Yurimaguas Bilingual Secretary Bilingual Secretary Translator, Yurimaguas Graduate Students (with degree candidacy and nationality) Miguel A. Ara Soil-pastures, Pucallpa (PhD-Peru) Miguel A. Ayarza Soil-pastures, Yurimaguas (PhD- Colombia) Dan W. Gill Soil fertility, Indonesia (PhD-USA) Ricardo J. Melgar Soil fertility, Manaus (PhD-Argentina) Ibere D. G. Lins Soil fertility, Brasilia (PhD-Brazil) Amilcar Ubiera Soil mineralogy, Raleigh (PhD-Dominican Republic) Helmut Elsenbeer Soil physics, Pichis (PhD-Germany) Jose R. Davelouis Soil fertility, Yurimaguas (PhD-Peru) * Resigned during the year. Cheryl A. Palm Lawrence T. Szott Erick C. M. Fernandes Carlos Castilla Christopher W. Smith Hadjrosuboto Subagjo Victor Ngachie Eleazar Salazar Eduardo Uribe Jane Mt. Pleasant Jonathan Hooper Marisa R. Fontes Abdul Karim Makarim** Mwenja P. Laurie J. Robert H. Gichuru** Newman** Hoag** Soil agroforestry, Yurimaguas (PhD-USA) Soil agroforestry, Yurimaguas (PhD-USA) Soil agroforestry, Raleigh (PhD-Kenya) Soil pastures, Raleigh (PhD-Colombia) Soil classification, Raleigh (PhD-USA) Soil classification, Indonesia (PhD- Indonesia) Soil fertility, Raleigh (MS-Cameroon) Soil classification, Raleigh (MS- Venezuela) Soil fertility, Raleigh (PhD-Colombia) Weed control, Raleigh (PhD-USA) Soil classification Raleigh (MS-USA) Soil chemistry, Raleigh (PhD-Brazil) Soil fertility, Indonesia (PhD- Indonesia) Soil fertility, Yurimaguas (PhD-Kenya) Soil classification, Raleigh (MS-USA) Soil classification, Raleigh (PhD-USA) INSTITUTE NATIONAL DE INVESTIGATION Y PROMOCION AGROPECUARIA (INIPA) Victor Palma* Lander Pacora Head of INIPA (and TropSoils Board Member) Head of INIPA (and TropSoils Board Member) * Resigned during the year. ** Completed degree in 1986. Manuel Villavicencio Angel Salazar Jorge M. Perez Pedro 0. Ruiz Luis A. Arevalo* Beto Pashanasi Miguel Bustamante Rolando Dextre* Daysi Lara Marco Galvez Andres Aznaran Cesar Tepe Alfredo Racchumi Jonathan L6pez Mercedes Escobar Wilfredo Guillen Jorge W. Vela Luis Zufiga Jos4 Merino Hemilce Ivazeta Marta Gallo Rodolfo Schaus Miguel Flores * Resigned during the Director, Yurimaguas Station Agroforestry, Yurimaguas Agrogorestry, Yurimaguas Mycorrhiza specialist, Yurimaguas Soil fertility specialist, Yurimaguas Soil zoology, Yurimaguas Training Officer, Yurimaguas Pastures specialist, Yurimaguas Pastures specialist, Yurimaguas Corn-weed control specialist, Yurimaguas Tillage specialist, Yurimaguas Paddy rice specialist, Yurimaguas Upland rice specialist, Yurimaguas Corn and sorghum specialist, Yurimaguas Economist, Yurimaguas Grain legumes specialist, Yurimaguas Pastures specialist, Pucallpa Soil physicist, Pichis Head, La Esperanza Station, Pichis Valley Pastures specialist, Tulumayo, Tingo Maria Soil specialist, Tulumayo, Tingo Maria Pastures specialist, Pucallpa Farming systems specialist, Tulumayo, Tingo Maria year. Jorge Fuigueroa Juan Lermo R. Ruiz G. Cantera E. Acuia Farming systems specialist, Tulumayo, Tingo Maria Agronomist, EMPRESA BRASILEIRA DE PESQUISA AGROPECUARIA (EMBRAPA) Erci de Moraes Manoel S. Cravo Chief UEPAE de Manaus Station Soil fertility specialist, Manaus CENTER FOR SOILS RESEARCH, AARD, Indonesia Mohammed Sudjadi I. P. G. 
CENTER FOR SOILS RESEARCH, AARD, Indonesia
Mohammed Sudjadi - Director, CSR, Bogor
I. P. G. Widjaya Adhi - Soil fertility, Country Coordinator, Bogor
Sri Adiningsih - Head, Soil Fertility Division, Bogor
Fahmuddin Agus - Soil physicist, Sitiung
M. Heryadi - Soil fertility, Sitiung
Al-Jabri - Soil fertility, Bogor
Putu Wegena - Soil fertility, Jambi

RESEARCH NETWORK

Tropical soils research has progressed to the point that several management options for sustainable productivity in agronomic and ecological terms are ready to be widely tested by national research institutions. The different options for the humid tropics constitute the model for networking (see first report). A training workshop, held in Spanish at Yurimaguas during September 1986, provided on-the-job training for 31 front-line professionals from the planting stick to the computer. The workshop participants created RISTROP (Red de Investigación de Suelos Tropicales) with core experiments on low-input systems, agroforestry, continuous cropping, legume-based pastures, and paddy rice. This network is now operating in 11 Latin American countries. Continuing technical backstopping is being provided to IBSRAM (International Board for Soil Research and Management) in the establishment of two Acid Tropical Soils Networks in Asia and Africa. Given the striking similarities in soil constraints between tropical America and tropical Africa, a similar workshop is being planned in coordination with IBSRAM to train African soil specialists at Yurimaguas and establish a viable link between soil management technologies generated in Latin America and their potential users in Africa.

Yurimaguas Workshop
T. Jot Smyth, N. C. State University, Raleigh
Jose R. Benites, N. C. State University, Yurimaguas, Peru
Dale E. Bandy, N. C. State University, Yurimaguas, Peru
Pedro A. Sanchez, N. C. State University, Raleigh

Tropical soils research has progressed to the point of grouping promising alternatives into soil management options that account for differences in physical and socioeconomic conditions within this ecosystem (Figure 1). Many of the key components for these technologies can now be transferred to national research institutions, allowing local investigators to adapt soil management options to their specific conditions. A concerted validation and extrapolation effort across tropical soil ecosystems in Latin America would not only encourage interaction among participating institutions but also identify refinements and modifications that should be pursued to improve existing management options. Such an effort requires the identification and training of capable on-site personnel at collaborating national institutions.

In September 1986, North Carolina State University's TropSoils conducted a 21-day workshop on tropical soils management at its primary research site in Yurimaguas, Peru, in cooperation with INIPA, USAID/Lima, and the Interamerican Institute for Cooperation in Agriculture (IICA). Purposes of the workshop were to acquaint key Latin American scientists with the most recent techniques in tropical soil characterization and management and to identify common interests and establish a soil research network. Criteria for selection of participants were based on the national research institutes': (a) interest and capabilities in pursuing soil management research in tropical ecosystems and (b) designation of workshop candidates, with at least a B.S. in agronomy or equivalent training, as the personnel responsible for conducting network investigations resulting from this workshop.
The 31 participants who attended the workshop represented a total of 23 potential research sites distributed among 15 national institutions in 10 different countries (Table 1). A detailed description of the workshop and the experiments developed for the research network can be found in a report to the U.S. Agency for International Development available in English and Spanish. Activities were organized around the two workshop objectives and occurred simultaneously during the 3-week schedule. The sequence of topics covered during the instructional component included: characterization of tropical ecosystems; diversity, classification, and taxonomy of tropical soils; soil physics in relation to land clearing and tillage systems; and fundamental aspects of soil chemistry, soil fertility, soil testing, and plant analysis. Emphasis was placed on field activities that gave participants hands-on experience with the most recent field and laboratory techniques and computer software used in soil science research. Technology developed for humid tropical soil management was discussed as five distinct packages: mechanized high-input continuous crop production, low-input crop production with acid-tolerant species, agroforestry systems, paddy rice production on alluvial soils, and legume-based pastures. Approximately one full day was devoted to field tours of experiments for each management option. During their 3 weeks in Yurimaguas, participants installed a low-input experiment on a secondary forest site. This activity acquainted the group with the procedures, measurements, and decisions to be made during the processes of site selection, forest clearing, soil and vegetation characterization, plot establishment, and planting the initial crop. The group performed nondestructive forest biomass measurements, burned the slashed vegetation, collected ash and post-burn soil samples, and analyzed the ash and soil for nutrient content in the laboratories. Before departing, participants were able to observe their experiment with an initial stand of upland rice. Considerations for network development were initiated by participant presentations geared to provide opportunities for interaction and identification of common interests. Each member described the ecosystem, facilities, research program, and limitations of the experimental site where they performed studies for their national institutes. Comparative data among the participants' research sites (Table 2) indicate a broad spectrum of research thrusts and ecosystems. Toward the end of the workshop, participants voluntarily chose to participate in individual working groups on each of the five soil management options. Each working group was requested to (a) identify soil management factors that should be investigated in an extrapolation and validation network, (b) design experiments with clearly defined objectives, and (c) define experimental methodologies and basic requirements of equipment and facilities. A brief description of the network studies developed by each working group follows. Low-input option Objectives 1. Compare the sequence in which nutrient constraints appear in acid soils, under different ecosystems, under upland rice-cowpea production; 2. Compare the effects of nutrient additions by ash from burning different types of standing vegetation in different climatic regimes; 3. Establish soil nutrient levels to aid in formulating minimum fertilizer recommendations for sustained upland rice-cowpea production in acid soils. 
Experimental approach A total of 13 fertilization treatments, in a randomized complete block design with four replications, were developed as common to all network sites. Both udic and ustic moisture regimes exist among sites, and vegetation varies from primary rainforest to savanna. Post-clearing management will be constant among all sites. High-input option Objectives 1. Evaluate, under continuous cultivation, crop responses to increasing rates of K fertilization; 2. Evaluate the effects of crop residues on soil K dynamics; 3. Evaluate crop response to residual fertilizer K; 4. Determine the influence of K rates on K interactions with Ca and Mg in soils and plants. Experimental approach Crop rotations will be corn-soybean or corn-cowpea. Yield systems. Annual crops will be excluded from the experiment for steep topography. Paddy rice option Objectives 1. Investigate alluvial soil management systems for paddy rice production; 2. Evaluate over time changes in soil nutrient availability in alluvial soils under paddy rice production. Experimental approach Paddy rice production has not been practiced in large areas of the Amazon. Local expertise, therefore, is almost nonexistent. Farmer acceptance of the system in the lower Amazon Basin is believed to depend on the demonstration that (a) labor-intensive dike formation persists after seasonal river flooding, (b) constant flooding of rice paddies reduces weeding, and (c) broadcast seeding is a viable alternative to labor-intensive transplanting. After reviewing the proposals, participants were asked to identify, on a priority basis, the top three (if any) experiments they considered most applicable to their research station activities. Responses suggested primary interest in the low-input and agroforestry soil management options. This workshop activity provided valuable feedback information to the TropSoils program in Latin America, especially because these opinions were provided by professional field scientists after an intensive review of the primary research site program. Participants agreed that a common goal of the network would be the transferral and validation of improved soil management technologies on acid humid tropical soils in Latin America. The group requested that North Carolina State University coordinate information exchange and technical backstopping among network participants. The quality of the projects developed by the group suggests that the workshop was successful in enhancing the soil management research capabilities of collaborating country personnel. Workshop program evaluations by the participants highlighted the feasibility of using the response to K will be evaluated over six K rates, ranging from 0 to 250 kg K/ha. Crop residue effects and interactions with Mg will be evaluated in four additional treatments. Potassium will be monitored in both the soil profile and plant tissue. Improved pastures option Objectives 1. Determine the appropriate method for renewing pasture productivity through legume incorporation; 2. Determine the effects of P fertilization on legume establishment in pastures; 3. Evaluate the persistence of legume-grass associations as a function of establishment treatments. Experimental approach The experiment will be conducted in degraded pastures and will be composed of two distinct phases: (a) legume establishment as a function of tillage and P fertilization and (b) P dynamics and legume persistence as a function of animal grazing. 
The initial phase will be conducted in a network experiment with three P rates, two tillage methods, and four legume species in a split- split plot design with three replications. Agroforestry option Objective Improve soil fertility and control soil erosion and weed incidence by improved fallows and tree crops in agroforestry systems. Experimental approach The group chose to develop one experiment for improved fallows and two experiments for tree crops, with a distinction in the latter for steep and flat topography. The improved fallow study will compare the effects of a selected tree + groundcover legume short- term (3-5 years) fallow to a traditional secondary forest fallow (5-10 years) on the control of weeds, soil erosion, and soil fertility replenishment for subsequent crop production. The tree crop experiments will evaluate the productivity, nutrient distribution, and use of selected perennial crops in multistrata Yurimaguas Experiment Station research program to provide scientists in the humid tropics with on-site exposure to knowledge of how to manage soils. Table 1. Distribution of participants by country, national institutions, and experiment stations at the Tropical Soil Management Workshop, Yurimaguas, Aug. 31-Sept. 21, 1986. National Research Number of Country institution site participants ---------------------------------------------------""'" Bolivia Brazil Colombia Costa Rica Dominican Republic Ecuador Guatemala IBTA CEPLAC EMBRAPA: UNIBAN CATIE Univ. Costa Rica ISA INIAP PRONAREG ICTA Chapare Itabuna Bel6m Manaus Porto Velho Rio Branco Urabg Turrialba Rio Frfo Santiago Pichilingue Quito Izabal Min. Rec. Naturales IDIAP IIAP INIPA: La Molina Univ. Amaz. Peruana Danlf Calabacito Iquitos Moyobamba Iquitos Pichis Puerto Maldonado Tarapoto Tingo Maria Satipo Iquitos Honduras Panama Peru -- - - -- - - - --------------------" " Table 2. CQparative data aimg potential network research sites. Bolivia Brazil Colaobia Costa Rica Dom. Repub. Ecuador Guateiala arnuras Panara Peru IIAP INIPA Country Instituti Iquitos Pichis Rierto EMaldondo Mayobarba Tirgp Maria 2700 2800 2200 1230 2900 Systa under inVestition Sail DInant laboratories scientists sails Pastures TIee corps Food crops available available IBTA CELAC EMBRABT UNIBAN CATIE Ihiv. Costa Rica ISA INIAP ICrA Min. Rec. Nat. IDIAP Annual location rainfall nn apare 2500-4500 Itabuna 1300 Beln 2200 MMaus 2300 PBrto Vel, 2400 Rio Banco 1800 Ura 20004000 Tnrrialba 2500 Rio Etfo 4000 Santiago 2800 Pichilitnie 2160 Izabel EaniL 1000 Calabacito 2500-3300 Entisols- Inceptisals Qdsols- Utisols Qdsols- Utisols Qdsols- ULtisols Qdsols- ULtisols Ultisols Entisols- Inceptisals Inceptisols ULtisols- Inceptisals Entisols- Inceptisols Ineptisols Entisols- Ineptisals ULtisols- Alfisols- Incptisols Ultisols- Inceptisols ULtisols Iltisols Ultisols Alfisols ULtisols Note on other institutions: IUHT SEG/IEI OR conducts soil surveys and land resource evaluations thxogh the entire country. UNIV. IA MUNA/PERU oducts potato fertilization trials ard M. Sc. this research in agrcncy in the Ieruvian Aaman. University campus is located in Lima. 1. Plant alyse are performed at the MBAPA stations in Mamus and/or BelMn. 2. Soil and plant analyses are performed at another NIAP station iwthin 200 kn distance. 3. Soil ard plant analyses are performed at laboratories in Tegrcigalpa. 4. Sail and plant analyses are performed at central laboratories in Lina. 
eunpercial cammrcial namercial native fruits amerintal & native fruits commercial banana cammercial no fuelwood camerial no camercial native fruits cxmercial & native fruits commercial amercial camnierial none sail-plant soil-plant soil-plant scill soil soil-plant soil-plant soil-plant soil D2 soil-plant M3 sail-plant M4 n4 n4 n4 ED Paddy Rice Continuous Cropping Low-Input Cropping Pastures Agroforestry Forest/Farming Mosaic Regenerating Slopes __ Alluvial Soils Acid Soils Young Soils Figure 1. Soil management options for sustainable production in the humid tropics used in the research network. *4 Network Development in Latin America: RISTROP T. Jot Smyth, N. C. State University, Raleigh Participants requested that NCSU provide the central coordination for the network, which they chose to name RISTROP (Red de Investigaci6n de Suelos Tropicales). Assistance was requested for research site selection and characterization, support services for analyses and interpretation of resulting data, and information exchange between participating national institutions. Funding was not available to sponsor each participant's research activities in the network. Participants were therefore individually responsible for obtaining approval and funding from their national institutes for their network activities. The limited budget available for network coordination also led to the stipulation that support services from NCSU's Tropical Soils Program would only be initiated for a network participant upon confirmation of the national institute's approval of network activities. The network coordination has received positive responses from participants in 8 of 11 countries, for a total of 27 initiated or planned experiments, encompassing all management options presented during the Yurimaguas workshop (Table 1). With the exception of collaborators in Bolivia, participants have limited their commitments in 1987 to initiating network experiments, which they identified as first priority during the Yurimaguas workshop. Participants from IBTA/Bolivia intend to initiate experiments identified as both first and second priority for their institutional programs. INIPA stations in Puerto Maldonado, Moyobamba, and Iquitos plan to incorporate the low-input and agroforestry studies into experiments on native fruit tree production systems for peach palm, Brazil nut, and camu-camu. A similar approach is planned for the same network experiments at EMBRAPA/Manaus in guarana production systems. The Peruvian government recently established lime management as top research priority for the Selva region. High-input annual crop experiments at Tingo Maria and Moyobamba, therefore, will be directed toward comparisons of yield responses to locally available lime sources. RISTROP collaborators in Ecuador recommended delaying network initiatives in their country until funding is available through the recently established agricultural research foundation. Despite several inquiries, no response has been received on the network status in the Dominican Republic. A similar situation in Honduras was transformed into a positive commitment after discussions with the participant's superiors during recent travel to Central America. Additional correspondence with Colombian scientists, who were unable to attend the workshop, may result in implementation of two studies in that country. 
Venezuela also was not represented at the Yurimaguas workshop; however, after reviewing the workshop report, collaborators from the Universidad Central de Venezuela notified RISTROP coordination of their intention to participate in three network experiments. Since the completion of the Yurimaguas workshop, NCSU support activities for the network have centered on technical visits to participating institutions during implementation of field experiments. In addition to assisting participants in adjusting methodologies and field plans to local conditions, this action has also provided an opportunity to obtain on-site familiarity with the research programs of the national institutions. Extensive discussions with participating network scientists and travel conducted to date have provided the following observations: The specific needs for technical backstopping by national institutions and the capacity for NCSU's Tropical Soils Program to provide this expertise extends beyond the existing scope of the budget and/or conceptual development of the research network. The type of required technical support varies among institutions from assistance in establishing functional soil testing laboratories to the identification of research priorities through interpretation of existing soils information. Such limitations often impede participants' abilities to implement knowledge gained during the Yurimaguas workshop on a broader institutional scale. Ongoing national institute research programs, in some of the visited countries, are often unrelated and nonsupportive to the USAID Mission agricultural development programs. Quite often network participants and their superiors have indicated unfamiliarity with ongoing USAID Mission programs. Synchronization of national institute and USAID Mission programs would capitalize on the investment made, thus far, to transfer soil management technology to the network participants. Although it is not anticipated that soil science expertise in national institutions will be fully implemented through participation in RISTROP, it is fitting to consider supportive measures which will ensure that experiences gained in the network will be maintained and used in future national research endeavors. Table 1. Current stage of developments for RISTROP experiments in each participating country. Annual crops Low High Agrofor- Improved Paddy Country Institution input input estry pastures rice Guatemala ICTA I Honduras Min. Recursos Naturales I Costa Rica Univ. C. Rica/CATIE I Panama IDIAP A I Dom. Repub. ISA P Ecuador INIAP/PRONAREG P P Peru INIPA/Puerto Maldonado A A INIPA/Tingo Maria A INIPA/Iquitos A A INIPA/Moyobamba A A A INIPA/Yurimaguas I Bolivia IBTA/Chapare I I I Brazil CEPLAC A EMBRAPA/Manaus A A I EMBRAPA/Porto Velho I Colombia ICA P Com. Esp. Guaviare P Venezuela Univ. Central Venezuela I I I Total: Initiated (I) 6 2 1 2 2 Approved Plan (A) 6 2 4 Potential (P) 3 2 ------------------------------------------------------------------------- Network Development in Africa and Asia Pedro A. Sanchez, N. C. State University, Raleigh T. Jot Smyth, N. C. State University, Raleigh Stanley W. Buol, N. C. State University, Raleigh TropSoils and IBSRAM signed a memorandum of understanding in which both institutions formally agreed to work together toward the development of an Acid Tropical Soils Network on a worldwide basis. TropSoils input has concentrated on providing technical leadership through the Network Coordinating Committees. 
The Inaugural Workshop of the Acid Tropical Soils Network was held in Yurimaguas, Peru, and in Manaus and Brasilia, Brazil, from April 24 to May 3, 1985. It was organized by TropSoils/NCSU and co-sponsored by INIPA and EMBRAPA with support from various international organizations and donor agencies. After several days of observing ongoing long-term research in the humid tropics and acid savannas of South America, representatives from 13 developing countries (Brazil, Cameroon, China, Congo, Ivory Coast, Madagascar, Malaysia, Mexico, Panama, Peru, Thailand, Venezuela, and Zambia) decided to form the Acid Tropical Soils Network. The participants identified a defined target area, six principal research-validation topics, and several support services. The Inaugural Workshop Proceedings provide a state-of-the-art review on management of acid tropical soils and its publication, edited by TropSoils/NCSU, is expected in early 1987. The proceedings may serve as the conceptual base of the network. The first IBSRAM African regional workshop was held in Douala, Cameroon, on January 21-27, 1986. The five original African countries represented in the Inaugural Workshop were joined by Rwanda, Burundi, and Nigeria. A total of 55 individuals from 15 countries participated, under the sponsorship of the Cameroonian Ministry of Higher Education and Research, with several donor inputs. The five original countries have all initiated activities without waiting for additional funds. Sites have been selected for new experiment stations in Cameroon and Congo, where several of the humid tropical soil management options seen in Yurimaguas are planned to be implemented. Zambia began implementing many of the ideas gathered after visiting Yurimaguas, Manaus, and Brasilia. Proposals for new sites were identified by representatives for Madagascar, Ivory Coast, Nigeria, Rwanda, and Burundi. Common methodologies for evaluating edaphic parameters were agreed upon at the Cameroon Workshop and are shown in Table 1. To assist in project development, a list of equipment, supplies, and reagents for a fully- operational laboratory to analyze the agreed-upon edaphic parameters was prepared by TropSoils/NCSU and submitted to IBSRAM headquarters. A second African meeting scheduled for April 1987 in Lusaka, Zambia, is expected to produce concrete experimental designs. The first IBSRAM Regional Workshop on Soil Management under Humid Conditions in Asia was held at Khon Kaen and Phitsanulok, Thailand, from October 13 to 20, 1986, with 82 participants from 17 countries present. It was co-hosted by Thailand's Ministry of Agriculture and Cooperatives and IBSRAM, and was funded primarily by the Asian Development Bank (ADB) and the Australian Council for International Agricultural Research (ACIAR). A total of eight countries expressed interest in joining the Acid Soils Network. Included in the list are the three Asian participants at the Inaugural Workshop--China, Malaysia, and Thailand--and five additional countries-- India, Indonesia, Philippines, Vietnam, and Western Samoa. The common theme was the limited knowledge base on how to produce food crops on acid upland soils of tropical Asia. The participants agreed on two overall types of research activities, "core" experiment and component research, in addition to site characterization, a common activity of all networks. 
The core experiment is designed to compare current acid upland soil management practices with (1) low-input systems based on acid-tolerant cultivars, low levels of added P, and no change in the soil's acidity and (2) an intensive system with liming to neutralize exchangeable Al and appropriate fertilizer practices for cropping systems based on corn, soybean, or cotton. Although the specific cropping system will vary with site, the common thread will be the monitoring of soil dynamics as proposed at the Inaugural Workshop. Component research projects focus on (1) liming, (2) screening of acid- tolerant species and varieties, (3) residual effects of P fertilization, (4) organic inputs, initially focusing on determining the nutrient contents of composts, green manures, or animal manures, and (5) Fertility Capability Classification. Outlook Participants in the African network have expressed to IBSRAM particular concern about the need to increase their expertise in acid tropical soil management techniques among their front-line scientists. Based on the recent success of a similar activity for RISTROP participants, and on the African network emphasis on technologies developed in the TropSoils program, IBSRAM has requested NCSU's assistance with on-the-job training of soil scientists who will be doing the work in the Acid Soil Management Network. The experience accumulated in Latin America by the TropSoils program and the commonality of interests with IBSRAM in transferring such knowledge to African scientists makes it fitting that assistance be provided toward network implementation through a workshop/training course at the primary research site in Yurimaguas, Peru. Acid tropical soils comprise approximately 1.7 billion ha of land area in 72 developing countries. Their geographical concentration is primarily in least-developed Third World regions, many of which are currently undergoing social unrest and face several food shortages by the next decade. Thirty- seven of these countries are now involved in RISTROP and IBSRAM and tropical soils networks (Figure 1). The network support activities offer the opportunity for concerted worldwide efforts in disseminating existing information and developing local expertise on management of this fragile ecosystem in needy countries. Feedback from validation and extrapolation of existing information would enhance TropSoils' role in the discrimination of agronomically and economically sound acid soil management technologies through the identification of refinements and modifications for ongoing research. Table 1. Edaphic parameters to be measured in IBSRAM Acid Tropical Soils Network Experiments, as agreed in the Cameroon Seminar, January, 1986. Depth Parameter 0-10 10-20 20-50 Method ---------cm---------- Yes Yes pH (H20) pH (KC1) Exch. Al Exch. Ca Exch. Mg Exch. K Avail. P Avail. Zi Avail. F< Avail. Ci Avail. Mi Yes Yes Yes Yes Yes Yes Opt.* Opt.* Opt.* Opt.* 1:2.5 H20 IN KC1 Yes Yes Yes Yes Yes Yes Opt.* Opt.* Opt.* Opt.* Modified Ir It II It Olsen it ECEC Al sat. P sorption Org. C pH (NaF) ZPC (Zero point of charge) Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Slope Bulk density 1/3 bar H20 Yes Yes Yes Yes Exch. Exch. Al + Ca + Exch. Mg + Exch. K Exch. Al = ECEC x 100 Fox and Kamprath, Juo and Fox Walkley Black If allophane is suspect If subhorizon approaches acric properties At planting time optional where deficiencies or toxicities suspected. --------------------------------------- -------------- *Optional. Mn not ------r-F------------------ Figure 1. 
Countries involved in acid tropical soils research networks supported by IBSRAM and TropSoils. LEGUME-BASED PASTURES Cattle grazing for beef and milk production is one of the major land use activities of cleared rainforest areas in Latin America. We continue to find that when they are well managed, legume-based pastures protect productivity. Several million hectares of rainforest have been cleared for pastures, only to be abandoned as the pastures became degraded by overgrazing, soil compaction, and erosion. Pastures research at Yurimaguas and Pucallpa, Peru, has been closely integrated with the Tropical Pastures Program of the Centro Internacional de Agriculture Tropical (CIAT) and with INIPA's National Selva Program, which is now conducting most of the agronomic studies. Research reported here is based on an overall long-term strategy shown in Figure 1. All but the last stage are fully implemented. Long-term grazing studies show that high animal production levels can be sustained with three widely differing pastures in acid Ultisols with low inputs: a mixture of two stoloniferous grass and legume species (Brachiaria humidicola/Desmodium ovalifolium), a mixture of two erect grass and legume species (Andropogon gayanus/Stylosanthes guianensis), and one pure legume pasture of Centrosema pubescens. Other mixtures are failing because of low quality problems. After 4 to 6 years of continuous grazing, soil physical properties remain good and chemical properties have improved, because more than 80% of the P, K, and Ca applied as fertilizer is recycled to the soil. This year we obtained the first estimate of the N contribution of legumes to the associated grass under grazing in highly acid soil: the legume contributed an equivalent of 150 kg N/ha of urea nitrogen. Potassium dynamics, a Key issue for pasture persistence, continues to be quantified, and S accumulation in Ultisol subsoils was confirmed. Perhaps the most exciting accomplishment of this year was the successful transformation of degraded steepland pastures into highly productive ones by establishing grass and legume species with minimum tillage and phosphate rock in slopes ranging from 20 to 80%. In areas where herbicides are available, the proper combination of tillage and herbicide was determined to eliminate undesirable species and plant improved ones. ADAPTATION TRIALS Y-301, 305, 306 MOST PROMISING SPECIES PRODUCTIVITY AND PERSISTENCE UNDER GRAZING Y-302 MOST PROMISING SPECIES Figure 1. Strategy for developing the pasture soil management option for the humid tropics. LAND MANAGEMENT SYSTEMS - INTEGRATION OF ANNUAL CROPS WITH PASTURES - AGROFORESTRY SYSTEMS - EXTRAPOLATION TRIALS Persistence of Grass-Legume Mixtures under Grazing Miguel A. Ayarza, N. C. State University, Yurimaguas, Peru Rolando Dextre, INIPA, Yurimaguas, Peru Pedro A. Sanchez, N. C. State University, Raleigh The central experiment for the legume-based pasture soil management option is now in its sixth year of grazing. Its objectives are (1) to measure pasture and animal productivity in different associations, in terms of daily weight gain and annual liveweight production, (2) to evaluate the compatibility and the persistence of the different grass-legume mixtures under grazing, and (3) to evaluate changes in soil properties as a consequence of long-term pasture production. 
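A worked example may help connect the productivity measures named in these objectives. Annual liveweight production per hectare is, to a first approximation, the product of individual daily gain, stocking rate, and days of grazing. The sketch below uses the 1985 values reported for C. pubescens 438 in Table 2 (342 g/animal/day, 4.4 animals/ha, grazing from Jan. 14 to Dec. 19, roughly 340 days); it is only an approximation, since stocking rates were adjusted between semesters.

\[
\text{Annual liveweight gain (kg/ha/yr)} \approx \frac{\text{daily gain (g/animal/day)} \times \text{stocking rate (animals/ha)} \times \text{grazing days}}{1000}
\]
\[
\frac{342 \times 4.4 \times 340}{1000} \approx 512\ \text{kg/ha/yr},
\]
which is close to the 510 kg/ha/yr reported for that pasture.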
Four associations remain unchanged, but during the 4 years the project has been in progress, Panicum maximum + Pueraria phaseoloides was replaced by Andropogon gayanus + Centrosema macrocarpum 5056 in October 1984. A sixth association, Brachiaria dyctioneura + Desmodium ovalifolium, was established at a separate location with grazing initiated in March 1986. The species previously reported as Centrosema hybrid 438 was reclassified by plant taxonomists as Centrosema pubescens 438. The main features of the experiment are shown in Table 1. Annual Production and Botanical Composition 1985-1986 Annual liveweight gains per hectare (the measure of overall pasture productivity), individual animal daily gains (an estimate of pasture quality), legume content, actual grazing periods and stocking rates used are shown in Table 2 for 1985 and Table 3 for 1986. Annual liveweight gains were generally better in 1985 than in 1986 due to a more favorable rainfall pattern in 1985 and a 3-month delay in grazing initiation in 1986. Stocking rates were adjusted according to forage availability and divided into two semesters (January to June as the wetter period, and July to December as the less wet period). Brachiaria humidicola/D. ovalifolium mixtures produced very high annual liveweight gains (843 kg/ha), way above the other mixtures. This mixture produced high available forage throughout the year (Figure 1), which permitted a higher stocking rate. Liveweight gains decreased in 1986, in spite of legume content similar to that in 1985. The B. decumbens/D. ovalifolium mixture performed quite inferiorly, producing about half the daily animal gains than the previous mixture in spite of using identical stocking rates (Tables 2 and 3). Although forage availability was higher with B. decumbens than with B. humidicola (Figure 1), sharp animal weight losses were observed with the B. decumbens/D. ovalifolium mixture during the second half of the year. Individual animal gains dropped drastically from June 1985 and continued dropping until December when the animals lost more than 300 g/day (Figure 2). The same situation occurred at the same time in 1986, which was reflected in low individual animal gains. This problem may be due to photosensitivity, which sometimes occurs with B. decumbens, or to an unknown nutritional problem during the drier part of the year. Its solution is likely to require the collaboration of animal nutrition specialists. The pure legume pasture C. pubescens 438 maintained good levels of animal production in both years, in spite of the lower amount of forage on offer than the mixed pastures (Figure 2), which resulted in no weight losses during the drier period. Since forage quantity is lower than in the previous mixtures, the higher quality of C. pubescens 438 over D. ovalifolium (Table 4) is considered responsible for its good performance. The mixture of erect species A. gayanus with Stylosanthes guianensis performed remarkably well in 1985 and 1986, particularly in terms of daily gains, suggesting a higher quality of the legume component (Table 4). Promising results were obtained in A. gayanus + C. macrocarpum 5065. Excellent individual animal gains were recorded during 1985 and 1986. Since this pasture has only been grazed since May 1985, it is not possible to compare it with the older mixtures, but it certainly shows good promise. In sharp contrast, the new mixture established in 1985 which began grazing in March 1986 (B. dyctioneura/D. 
ovalifolium) performed very poorly even during the first year (Table 3). This mixture showed an excessively high content of legume in the forage on offer. This may significantly reduce animal performance due to the low quality of the legume, in comparison with the Centrosema species (Table 4). This mixture was eliminated from the trial in 1986. This is disappointing because this species is being considered an alternative to the other Brachiarias in the ustic tropics. Apparently this is not the case in Yurimaguas with a udic soil moisture regime. Long-term Trends in Persistence and Productivity A summary of the 6 years of grazing was presented at the American Society of Agronomy meetings in November 1986. Its summary gives for the first time a vision of pasture persistence. Only the four most promising mixtures are included in this analysis. The overall summary (Table 5) indicates that there are several widely different options, all with very high animal productivity potential (annual liveweight gains on the order of 500 to 600 kg/ha/yr). Considering that cattle production from unimproved pastures is of the order of 50-100 kg/ha/yr, each mixture is very attractive. The quality problem of B. decumbens/D. ovalifolium definitely puts it at a disadvantage in relation to the other three. Each of the remaining three represents a different approach: a mixture of creeper species (B. humidicola/D. ovalifolium), a mixture of erect species (A. gayanus/S. guianensis), and a pure, high-quality legume pasture (C. pubescens 438). The yearly fluctuation in the above parameters gives a feel for pasture stability. Annual liveweight gains fluctuated considerably, but the greatest fluctuations were observed in the erect mixture (Figure 3). The legume content tended to fluctuate less than liveweight gains (Figure 4) and showed little relationship with annual liveweight gains. Legume content seldom dropped below 20% of the forage on offer, a level below which is considered undesirable. Daily liveweight gains (Figure 5) showed less yearly fluctuations than the previous two parameters. Nevertheless, these yearly averages include sharp weight losses in the B. decumbens/D. ovalifolium mixture during the last 2 years. Overall, Figures 3, 4, and 5 suggest a reasonable degree of stability in the other three mixtures. Long-term Changes in Soil Properties Infiltration Measurements taken in December 1985 indicated a marked decrease in water infiltration rates in most of the pastures as a result of soil compaction produced by animals after 5 years of grazing (Table 6). Lowest values were found in A. gayanus/S. guianensis and A. gayanus/C. macrocarpum. This may be associated with the erect growth of these species so that they do not cover the soil as do the other pastures. Statistical analysis, however, showed that infiltration differences among pastures were not significant, due to the high variability of the double ring infiltrometer measurement. The infiltration rate prior to the start of grazing in November 1980 was 12.7 cm/hr with a range of 6.3 to 19.8. Five years later the average infiltration was 4.1 cm/hr with a range of 1.0 to 10.4 (Table 6). There is no question, therefore, that 5 years of trampling have decreased infiltration rates in this sandy loam Ultisol. The magnitude of the decrease has not produced visible runoff or erosion. 
This is because rainfall intensity values average much less than 41 mm/hr (see Continuous Cropping section) and also because these pasture provide a year-round full plant canopy that protects the soil from raindrop impact. Soil Organic Matter Topsoil (0-20 cm) organic matter and total N for each pasture were determined in December 1985, and values were compared to those obtained 5 years before the initiation of grazing (Table 7). These data compare the overall levels of the experiment prior to grazing with the effects of individual pastures, and not on a plot-per-plot basis. In spite of this limitation, it appears that organic matter and total N contents either increased or remained the same during 5 years of grazing. The differences between pastures cannot be explained in terms of legume content, since both the highest and lowest levels were observed in mixtures with D. ovalifolium. The lowest levels were observed with B. humidicola/D. ovalifolium, the most productive pasture. Topsoil organic matter contents were maintained under this management option (Table 7). Soil Fertility Paramenters Status of soil chemical properties in the 0-20 cm depth is reported in Table 8. Topsoil pH and P values were higher in 1985 than in 1980 in most of the pastures. Acidity decreased greatly in topsoils under B. decumbens/D. ovalifolium. Additional soil characterization at the 0-5 and 5-20 cm depths on B. humidicola/D. ovalifolium showed that most changes observed in topsoils were restricted to the 0-5 cm soil surface layer (Figures 6, 7, 8, 9). This may indicate that nutrient accumulation and cycling occurs mainly at the soil 0-5 m layer. Nutrient Cycling The improvements in soil properties in the top 20 cm can probably be attributed to the original and annual maintenance fertilization schedule shown in Table 1. It is noteworthy to observe that the annual maintenance fertilization was not applied in November 1983. The total amounts of P, K, and Ca added as fertilizers and lime during the 5-year period were compared by the amount calculated to be removed by animals in terms of annual liveweight gains. The results, shown in Table 9, indicate that 80% of the P, 98% of the K, and 92% of the Ca added as fertilizer was recycled to the soil in the B. decumbens/D. ovalifolium mixture. This high level of recycling is one of the real advantages of grazed grass-legume pastures, where the nutrients exported on the hoof are very low. Table 1. Main features of the central, legume-based pastures experiment (Y-302). Design: Randomized complete block, 2 replications Plot size: 0.45 ha Pasture established: Sept. 1979 Stocking rate: 3.3-5.5 an/ha (150-kg Cebu steers) Grazing management: Continuous from Nov. 1980 to July 1981 Alternate 28-35 day cycles since July 1981 Fertilization: 22 kg P/ha as SSP (yearly) 42 kg K/ha as KC1 (yearly) 10 kg Mg/ha as MgS04 (yearly) 500 kg lime/ha (once) Soil: Typic Paleudult, fine loamy, siliceous, isohyperthermic Initial topsoil (0-20 cm) properties. Sept. 1979: Clay: 13% Al sat.: 78% O.M.: 1.85% ECEC : 3.87 cmol/L pH : 4.3 Avail P : 2.2 pg/g ---------------------------------------------------------------------- Table 2. Animal production, grazing periods, and percentage of legume in five pastures under alternate grazing in 1985. Liveweight gains Stocking rates Grazing Grazing Rainy Dry Total Indiv. Legume Pasture years period period period prod. prod. content ----an/ha---- kg/ha/yr g/an/day % C. pubescens 438 4 Jan. 14- 4.4 4.4 510 342 100 Dec. 19 B. humidicola + 3 Jan. 
14- D.ovalifolium 350 Dec. 19 5.5 4.4 843 482 30 B. decumbens + 5 Jan. 14- D. ovalifolium 350 Dec. 19 5.5 4.4 469 259 26 A. gayanus + 5 April. 1- S. guianensis 134-186 Dec. 3 3.3 3.3 363 594 49 A. gayanus + 2 April 23- C. macrocarpum 5016 Nov. 20 3.3 3.3 502 775 13 Table 3. Animal production, grazing periods, and percentage of legume in five pastures under alternate grazing in 1986. Liveweight gains Stocking rates Grazing Grazing Rainy Dry Total Indiv. Legume Pasture years period period period prod. prod. content ---an/ha---- kg/ha/yr g/an/day % C. pubescens 438 5 Mar. 18- 3.3 3.3 498 636 100 Dec. 4 B. humidicola + 4 Mar. 18- D. ovalifolium 350 Dec. 22 5.5 4.4 460 336 34 B. decumbens + 6 Mar. 18- D. ovalifolium 350 Dec. 10 5.5 4.4 223 170 22 A. gayanus + 6 Mar. 18- S. guianensis 136-184 Dec. 10 3.3 3.3 436 544 34 A. gayanus + 3 Mar. 18- C. macrocarpum 5056 Dec. 22 3.3 3.3 632 755 33 B. dyctioneura + 1 Mar. 18- D. ovalifolium 350 Oct. 10 4.4 4.4 131 149 52 Table 4. Nutrient quality of leaf blades (Dec. 1985). -------------------------------------------------------------- Crude Species protein P Tanninsa ---- ---------------------------------------------------------- -----------------_% --------------------- Legumes D. ovalifolium 12.4 0.22 21.0 C. pubescens 24.8 0.27 2.5 S. guianensis 21.4 0.35 4.0 Grasses B. decumbens 11.3 0.22 B. humidicola 10.6 0.22 A. gayanus 10.6 0.27 a. Analyzed in 1983. Vanillin-HCL method. -------------------------------------------------------------- Table 5. Average annual productivity of legume-based pastures in grazing experiment at Yurimaguas (1980-86). the central Mean Liveweight gains Pasture Years of Stocking Total Indiv. Legume mixture grazing rate prod. prod. content an/ha kg/ha/yr g/an/day-------------------------------- an/ha kg/ha/yr g/an/day % B. humidicola/D.ovalifolium B. decumbens/D. ovalifolium C. pubescens 438 A. gayanus/S. guianensis 4.6 4.7 3.8 3.2 671 532 573 477 ------------------------------------------ ----------------- -- Table 6. Infiltration values in five pastures under grazing in Yurimaguas for several years (mean of two replications and four observations per replication of each pasture). Years of Pasture grazing Infiltrationa S.E. x cm/hr B. decumbens + D. ovalifolium 5 4.71 a 1.20 B. humidicola + D. ovalifolium 3 2.00 a 0.58 C. pubescens 4 10.44 a 2.60 A. gayanus + C. macrocarpum 1 1.96 a 2.82 A. gayanus + S. guianensis 5 1.00 a 0.19 LSD0.05 = 10.4 a. Numbers followed by the same letter are not statistically significant at P = 0.05. Table 7. Changes in organic matter total nitrogen in the topsoil (0-20 cm) of five pastures under grazing after 5 years (1980-1985). -------------------------------------------------------------------------- Total N Pasture Organic matter SDa Amountb -------------------------------------------------------------------------- --------%-------------- kg/ha Before grazing (Nov 1980) 1.93+0.4 0.0707+0.016 1837 B. decumbens + D. ovalifolium 3.35+0.4 0.0895 0.015 2327 B. humidicola + D. ovalifolium 1.35+0.6 0.0705+0.412 1085 C. pubescens 1.95+0.2 0.0856+0.0025 2225 A. gayanus + S. guianensis 2.02+0.2 0.0770+0.005 2002 A. gayanus + C. macrocarpum 2.14+0.2 0.0757+0.009 1969 -------------------------------------------------------------------------- a. Mean of two composite samples (10 subsamples each). Rest of pasture is the mean of two replications (10 subsamples each). b. Value calculated assuming 1.4 g/cc in the 0-20 cm depth. Mean of three composite samples (10 subsamples each). Mean +standard deviation. 
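To make footnote b of Table 7 explicit: the kilograms-per-hectare figures are obtained from the measured N concentration, an assumed bulk density, and the 0-20 cm sampling depth. The arithmetic below is only an illustration using the stated 1.4 g/cc; small differences from the tabulated values presumably reflect rounding of the N concentrations or a slightly lower effective bulk density.

\[
\text{Total N (kg/ha)} = \frac{\%N}{100} \times \rho_b\ (\text{g/cm}^3) \times d\ (\text{cm}) \times 10^{5}
\]

For the pre-grazing soil, \(0.0707/100 \times 1.4 \times 20 \times 10^{5} \approx 1980\) kg N/ha, of the same order as the 1837 kg/ha shown in Table 7.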
Table 8. Status of soil fertility in the topsoil (0-20 cm) of five pastures after 5 years of grazing in Yurimaguas.

Pasture                          Year   pH   O.M.   Olsen P   Ca+Mg    K     Al    Al sat.
                                              %       ppm     -----cmol/L-----       %
B. decumbens + D. ovalifolium    1980   4.2   1.9     3.9      1.7     -    2.7     55
                                 1985   4.6   3.3     8.3      1.0   0.10   0.9     53
B. humidicola + D. ovalifolium   1980   5.2   1.7     2.2      2.4     -    1.7     42
                                 1985   5.0   1.3     6.4      1.2   0.12   1.3     49
C. pubescens                     1980   4.9   2.2     8.6      2.1     -    2.5     54
                                 1985   5.0   1.9     6.5      1.8   0.18   2.4     54
A. gayanus + S. guianensis       1980   4.6   2.1     5.8      1.7     -    2.4     56
                                 1985   5.3   2.0     7.9      1.6   0.11   1.8     51
A. gayanus + C. macrocarpum(a)   1980   4.3   1.8     4.3      1.5     -    2.4     60
                                 1985   5.1   2.1     1.0      1.1   0.07   2.2     65
a. A. gayanus + Pueraria phaseoloides for the first 3 years.

Table 9. Nutrient balance in B. decumbens/D. ovalifolium during 5 years of grazing.

Balance (1980-1985)                P      K      Ca
                                 --------kg/ha--------
Added as fertilizers and lime    112    160     505
Removed by animals                22      6      39
Balance                           90    154     466
% "recycled"                      80     98      92

Figure 1. Seasonal changes in available forage on offer of three pastures (B. decumbens + D. ovalifolium, B. humidicola + D. ovalifolium, and C. pubescens), by grazing period, 1985.
Figure 2. Seasonal changes in individual liveweight gains of the same three pastures, by grazing period, 1985.
Figure 3. Yearly fluctuations in liveweight gains in four rotationally grazed pastures at Yurimaguas. Bd = B. decumbens; Do = D. ovalifolium; Bh = B. humidicola; Cp = C. pubescens; Ag = A. gayanus; Sg = S. guianensis.
Figure 4. Yearly fluctuation in average legume composition of the mixture. Bd = B. decumbens; Do = D. ovalifolium; Bh = B. humidicola; Cp = C. pubescens; Ag = A. gayanus; Sg = S. guianensis.
Figure 5. Yearly fluctuations in daily liveweight gains. Bd = B. decumbens; Do = D. ovalifolium; Bh = B. humidicola; Cp = C. pubescens; Ag = A. gayanus; Sg = S. guianensis.
Figure 6. Available P profile 5 years after grazing a B. humidicola/D. ovalifolium mixture.
Figure 7. Exchangeable Al profile 5 years after grazing a B. humidicola/D. ovalifolium mixture.
Figure 8. Exchangeable Ca and Mg profiles 5 years after grazing a B. humidicola/D. ovalifolium pasture.
Figure 9. Exchangeable K profile 5 years after grazing a B. humidicola/D. ovalifolium pasture.

Evaluation of Animal Preference in Small Plots
Miguel A. Ayarza, N. C. State University, Yurimaguas, Peru
Rolando Dextre, INIPA, Yurimaguas, Peru

Observation of animal preference at early stages of evaluating new forage accessions is considered an important tool in selecting desirable species from both the agronomic and the animal standpoints. Palatability is crucial in the acid-tolerant tropical legumes we are working with. Factors dealing with quality and preference can hardly be assessed without animals.

Objective
To test whether differences exist in animal preference for 13 legume accessions at Yurimaguas.
Procedures An old regional Trial B, for which agronomic evaluation concluded in 1985, was used in this work. The experiment was fenced in March and animals were introduced in May 1986. Accessions were arranged in randomized complete blocks with 3 x 4 m plot size per species. Three animals (180 kg liveweight) were observed 6 hr/day for 3 days in each replication. Position of the animals and time spent grazing a particular accession were recorded every 15 minutes. Before entering the next replication, animals were kept out of the experiment on a pure stand of B. humidicola for 3 days in an attempt to diminish preference effects between replications. Results The first evaluation showed average grazing time per species differed even within a genus. Centrosema macrocarpum 5065 was preferred over the other Centrosema accessions (Figure 1), and Centrosema macrocarpum 5052 was hardly consumed at all. Desmodium ovalifolium 366 was preferred over D. ovalifolium 350. Implications These results indicate that differences in animal preference for legumes exist, even within the same genus and species. Although Centrosema species are known to be palatable, there are important differences in palatability. The preference for D. ovalifolium 366 is probably associated with its lower tannin content than D. ovalifolium 350, as has been reported by CIAT. A simple animal preference test can be included at the earliest stages of germplasm evaluation to obtain a comparative idea of animal preference with known species. This procedure could save significant time and effort by eliminating undesirable accessions before starting a grazing trial. 3000 2000 1000 Centrosema Centrosema Desmodium macrocarpum pubescens oval i fol ium Figure 1. Animal preference for 12 legumes at Yurimaguas. Zornia sp. Nitrogen Contribution of Legumes in Mixed Pastures Miguel A. Ara, N. C. State University, Pucallpa, Peru Jorge W. Vela, INIPA, Pucallpa, Peru Pedro A. Sanchez, N. C. State University, Raleigh Mixed grass-legume pastures are rare in the humid tropics. Most of the successful mixtures occur in the ustic soil moisture regimes. The success of some mixed pastures in such areas is often related to the ability of the legume to provide protein during the strong dry season when grasses cannot. In the humid tropics, where the grasses remain green throughout the year, the role of the legume is different. The purpose of this work is to ascertain the role of legumes under humid tropical conditions. A long-term grazing experiment was established at the IVITA Principal Tropical Station west of Pucallpa, Peru, in collaboration with INIPA and IVITA. Objectives 1. To estimate the N contribution of two adapted pasture legumes (Centrosema pubescens and Desmodium ovalifolium) to their respective mixtures with Brachiaria decumbens in terms of N yield, N availability to the grazing animals, legume litter accumulation, and N release; 2. To evaluate the effect of different grazing pressures on the mixture B. decumbens/D. ovalifolium in terms of N availability to grazing animals, legume litter accumulation, and N release. Procedures A 6-ha degraded pasture was planted to improved pastures to compare the N-supplying capability of two alternative N sources (legumes vs. different levels of N-fertilizer) on a grazed Brachiaria decumbens pasture, in terms of N yield of the standing biomass, N availability to the grazing animals, legume leaf litter accumulation, and N release. 
An additional variable, high grazing pressure, was included in the legume mixture (normal = 4.5 kg available forage dry matter per 100 kg animal liveweight; high = twice the pressure). The experimental design was randomized complete blocks with five treatments and three replications. Grazing was initiated in January 1986. Replications were utilized as paddocks of a 15-day grazing, 30-day rest rotation. Brown Swiss and Holstein-Cebu halfbreeds were used. Two esophagus-fistulated steers were always kept in the same treatment; intact animals were used as grazers. Animals were weighed before entering each replication, but only to adjust the grazing pressure. Slope was used as a blocking criterion. Replication I was located on 0- 5% slope, and replications II and III were located on 75% slopes. Soil was a Pucallpa series Paleudult. Selected soil properties at the initiation of the experiment are given in Table 1. Serious problems in establishing the Centrosema pubescens/Brachiaria decumbens mixture forced us to discard this treatment. Results Forage Availability Dry matter availability and botanical composition were evaluated using a double sampling dry weight rank procedure with 100 samples per 0.33-ha paddock just before grazing began. Dry matter yield and botanical composition are shown in Table 2 and the corresponding ANOVA for dry matter available to the treatments is shown in Table 3. No response was obtained for dry matter availability to the treatments, although a trend of N response is evident. Legume contents are adequate, but hover at high grazing pressure, as expected. Nitrogen Content and Nitrogen Yield This year we were able to install a micro-Kjeldahl unit on the IVITA soils laboratory. Results of N content and yield are the first from this laboratory. Unlike dry matter availability, treatment effects on N content and N yield of Brachiaria decumbens were well expressed (Table 2). The N fertilizer effect on N content and N yield was linear with no quadratic component (Figures 1 and 2). The presence of the legume significantly increased the N content of the grass by an average of 44% (1.06 vs. 1.53% N). The effect of grazing pressure of mixed pastures on grass N content was not significant. On the average, mixtures gave a higher grass N yield than grass alone with no N fertilizer, equivalent to 151 kg of N (from the regression equation). This amount is contributed by 28% of the legume component in the high grazing pressure treatment and 35% in the normal grazing pressure treatment. These numbers do not include the N content of the legume per se. Consequently, they provide an early indication that legumes contribute the equivalent of about 150 kg N/ha/yr to the associated grass during the first 10 months of grazing. Leaf Litter N Accumulation Four 0.5 x 0.5 m squares were selected in each legume paddock to cover the range of legume composition. The squares were fixed with iron stakes, and the Desmodium ovalifolium leaf litter was collected, dried, and weighed. Leaf litter dry matter accumulation averaged 14 g/0.5 m2/18 months for the high grazing pressure treatment and 29 g/0.5m2/18 months for the normal grazing pressure treatment. Nitrogen content of the leaf litter averaged 1.60%. Nitrogen accumulation in litter form during this 18-month period averaged 4.6 kg N/ha for the high grazing pressure treatment and 10.1 kg N/ha for the normal grazing pressure treatment. Consequently, the lower the grazing pressure, the more important will be the N transfer through the litter. 
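Two of the calculations referred to above can be written out. First, the fertilizer-N equivalence of the legume: the sketch below is a back-of-the-envelope version using the Table 2 means rather than the fitted regression itself, taking the slope of the grass N yield response to fertilizer N and asking how much fertilizer would have been needed to match the N yield of the mixtures. Second, the litter N figures follow from the quadrat masses and the litter N concentration. Both are illustrations of the arithmetic, not a re-analysis.

\[
\text{slope} \approx \frac{308 - 140}{300 - 0} = 0.56\ \text{kg N yield per kg fertilizer N};\qquad
\text{equivalence} \approx \frac{(218 + 230)/2 - 140}{0.56} \approx 150\ \text{kg N/ha},
\]
in agreement with the 151 kg N/ha quoted from the regression. For litter under normal grazing pressure,
\[
29\ \text{g per } 0.5\ \text{m}^2 = 580\ \text{kg DM/ha};\qquad 580 \times 0.016 \approx 9\ \text{kg N/ha},
\]
consistent with the 10.1 kg N/ha reported; the 4.6 kg N/ha for the high grazing pressure treatment follows the same way from 14 g per 0.5 m².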
This effect is being measured concurrently with N intake by animals and in excreta to fully calculate the N contribution of legumes to mixed pastures.

Conclusions to Date
1. There is a strong linear response of Brachiaria decumbens grazed pastures to N fertilization as high as 300 kg N/ha/yr.
2. Including D. ovalifolium in the mixture contributes about 151 kg N/ha/yr to the grass, in addition to the legume's N content.
3. Leaf litter N accumulation in mixed pastures decreases with increasing grazing pressure.

Implications
These first-year data show the obvious need for some sort of N input in Brachiaria decumbens pastures established on land that previously had degraded pastures. Addition of a legume to the mixture apparently makes a large contribution to meeting this need. Continuing to gather data will demonstrate the effect on animal production and elucidate some of the N transfer processes involved.

Table 1. Selected soil properties at the initiation of the experiment (0-15 cm, December 1985).

Rep    Sand   Clay   pH    P        Al     Ca     Mg     K      O.M.   N      Al sat.
       -----%-----         (ug/ml)  --------cmol/L--------       -----%-----   (%)
I      25.4   25.2   4.6   5.4      2.3    1.83   0.66   0.17   2.33   0.09   48
II     44.3   25.2   4.5   5.7      1.8    1.90   0.66   0.14   2.34   0.10   40
III    51.9   22.2   4.7   4.1      2.1    1.68   0.54   0.11   2.19   0.08   45

Table 2. Forage on offer and botanical composition of B. decumbens pastures fertilized with N or mixed with D. ovalifolium at two grazing pressures. Mean of three replications.

N source       Grazing     Forage dry matter(a)   Legume   Grass N content   Grass N
(kg N/ha/yr)   pressure    (t DM/ha/cycle)        (%)      (% N)             (kg/ha/cycle)
0              Normal      1.34                   -        1.06 a            140
150            Normal      1.56                   -        1.42 b            222
300            Normal      1.66                   -        1.85 c            308
Legume         Normal      1.42                   35       1.38 b            218
Legume         High        1.38                   28       1.58 b            230

a. Mean of seven grazing cycles of 45 days duration each.

Table 3. ANOVA for forage dry matter available.

                     Degrees of freedom    Mean squares
Replications                 2               46.3047 ns
Treatments                   4              140.8177 ns
  N linear                   1              398.53 ns
  N quadratic                1               21.12 ns
  Mixtures vs. N             1               18.40 ns
  HGP vs. LGP                1                5.23 ns

Figure 1. Effect of fertilizer N and legume presence on N content of Brachiaria decumbens pasture during the first grazing year in Pucallpa, Peru (fitted line for fertilizer N: Y = 1.0483 + 0.0026x, r = 0.99**).

Figure 2. Apparent N contribution of legumes in relation to N accumulation by fertilized Brachiaria decumbens pastures in Pucallpa, Peru (fitted line: Y = 0.8967 + 0.0028x, r = 0.99**).

Potassium Dynamics in Legume-Based Pastures
Miguel A. Ayarza, N. C. State University, Yurimaguas, Peru
Pedro A. Sanchez, N. C. State University, Raleigh

Tropical pastures on acid soils are stable and productive only when nutrients are sufficient to sustain a vigorous forage crop. Maintaining this fertility requires a management method that takes into account the nutrient leaching common in areas of high rainfall, as well as the cycling of nutrients among soil, forage, and animals. This study, which was conducted at the Yurimaguas Experiment Station, concentrated on one nutrient, potassium, and one of our most promising mixtures (Brachiaria humidicola/Desmodium ovalifolium).

Objectives
1. To quantify leaching losses of K in pastures under clipping and grazing;
2. To monitor the effect of K levels on the productivity of the pasture and on the dynamics of K in the soil;
3. To estimate the effect of K return by animal excretions;
4.
To compare estimated K losses from pastures with losses from crops grown in the same area.

Procedures
The grazing experiment was a factorial of three rates of K fertilization (0, 50, and 100 kg K/ha, applied only once) by two stocking rates (3.3 and 6.6 animals/ha), with three replications. Two additional experiments were established on 3 x 4 m plots with K rates of 0, 25, 50, 75, and 100 kg K/ha/yr. The first, a clipping experiment in which some plots had clippings removed while others had clippings returned, provides a comparison of the effect of grazing on K dynamics. The second was a bare-plot experiment designed to account for soil chemical and physical properties related to K leaching and to estimate the effect of plant growth on K dynamics. Four hectares were planted with a mixture of Brachiaria humidicola and Desmodium ovalifolium in December 1984. Potassium treatments were applied on May 13, 1985; grazing began on July 4 of that year and terminated 2 years later.

Results
Soil characterization of the area showed very low levels of exchangeable K in the soil profile (0.05-0.06 cmol/L), except for the 0-5 cm layer, which averaged 0.16 cmol/L. Soil texture was classified as sandy loam with a topsoil clay content of 17%. Application of K treatments produced a significant increase in exchangeable K in the 0-5 cm layer of the small-plot experiment. Applied K, however, moved down the profile as a function of precipitation and K rates. There were significant changes in the 0-5 and 5-20 cm depths after 970 and 1678 mm of cumulative rainfall, especially at the 300 kg K rate (Figure 1). The presence of plants significantly reduced the levels of exchangeable K in the soil, and this effect increased with the level applied (Figure 2). Preliminary results on the effect of K rates and plant residues on the productivity of the association indicated a positive effect of the addition of residues on yields (Table 1). There was also a response to K in the cumulative yields under both residue treatments, although the differences in this effect were not statistically significant. The effect of plant residue return on soil properties after 1 year is shown in Table 2. The effects of K and stocking rates on aboveground biomass available for grazing after 255 days of grazing are summarized in Table 3. Forage dry matter increased in response to the treatments; however, the increase was not significant, probably due to differences among grazing cycles. On-site studies of the effect of plant cover on soil moisture were started in September 1986. In addition, K in the soil solution is being measured in the small-plot experiment.

Conclusions to Date
1. There was movement of applied K in the soil used in the experiment; however, the degree of movement depended on the rates applied and on rainfall.
2. Plants seemed to be able to significantly reduce the amounts of K susceptible to leaching by absorbing most of the available K in the soil.
3. Plant residues appear to be an important component in the stability of pastures.

Table 1. Effect of the return of plant residues on the cumulative yields of a mixture of B. humidicola + D. ovalifolium (sum of six cuttings).

Applied K     No residue return     Residue returned     Increase
(kg/ha)       ---------t dry matter/ha---------          (%)
0                   9.75                 14.87              34
50                 11.74                 15.34              23
100                13.30                 17.93              26
300                15.96                 18.05              11

Table 2. Effect of plants and residue return on topsoil (0-5 cm) properties under a pasture of B. humidicola + D.
ovalifolium after 1 year (mean of three replications).(a)

Plots                              P        Al       Ca       Mg       K
                                   (ppm)    --------------cmol/L--------------
Bare                               10.5 b   2.77 a   0.54 a   0.20 a   0.08 a
Planted, no residues returned       8.9 a   2.53 a   0.49 a   0.09 b   0.06 a
Planted, residues returned          7.8 a   2.63 a   0.54 a   0.15 a   0.07 a

a. Within columns, values followed by the same letter are not significantly different at P = 0.05.

Table 3. Effect of two stocking rates and three potassium rates on levels of available forage before grazing (mean of five grazing cycles).

Stocking rate     K applied     Green forage dry matter     Grass
(animals/ha)      (kg/ha)       (t/ha/cycle)                (%)
3.3                   0             3.04                     46
                     50             3.73                     51
                    100             3.85                     50
6.6                   0             3.26                     50
                     50             4.20                     54
                    100             4.48                     58

Figure 1. Effect of potassium additions (50, 100, and 300 kg K/ha) and cumulative rainfall (25, 970, and 1687 mm) on the distribution of exchangeable K (cmol/L) with depth in bare plots.

Figure 2. Effect of plants on the distribution of exchangeable K (cmol/L) in the profile at three potassium rates (50, 100, and 300 kg K/ha); mean of seven sampling dates.

Sulfur Accumulation in Grazed Pastures
Miguel A. Ayarza, N. C. State University, Yurimaguas, Peru
Pedro A. Sanchez, N. C. State University, Raleigh

Ultisols have increasing clay contents with depth and often accumulate S in their subsoil because of their higher S sorption capacity. This situation is well documented in the Southeastern United States and is therefore expected to occur in Ultisols of Yurimaguas.

Procedures
Soil samples were taken from a Brachiaria decumbens/Desmodium ovalifolium pasture under grazing for 3 years, which had received a total of 116 kg S/ha from yearly additions of ordinary superphosphate and magnesium sulfate. The topsoil of a degraded, unfertilized nearby pasture was also sampled to serve as a control. Twenty subsamples per site were composited and extracted with 0.01M Ca(H2PO4)2 for SO4 determination.

Results
Sulfur distribution in the profile of the fertilized pasture is shown in Table 1. Applied S apparently moved down and accumulated in the subsoil as expected. Nevertheless, S concentrations in the topsoil of the fertilized pasture were significantly higher than those in the degraded pasture. Responses to S applications were observed in a pasture of D. ovalifolium grown in an Oxisol in Carimagua, Colombia, when extracted SO4 was 12 ppm. Thus, if S assessment of the fertilized pasture were based solely on the S status of the topsoil, a probable response would be predicted. This should not be the case when the potential contribution from sorbed S in the subsoil is taken into account. In addition, the grass and legume components of this pasture are Al-tolerant species with extensive root systems in the B horizon.
Use of the weighted profile mean to make better predictions of S responses in soils has been suggested, and a critical level of 4 ppm of SO4, below which S responses should be expected, was established by Australian researchers. (For the fertilized profile in Table 1, for example, the depth-weighted mean is roughly 18 ppm SO4, well above that critical level.) Sulfur requirements in Ultisols of the humid tropics should be established using this approach.

Table 1. Extractable sulfur in an Ultisol under pasture in Yurimaguas.

Site               Sampling depth     SO4(a)
                   (cm)               (ppm)
Not fertilized      0-20              10.30 d
Fertilized          0-20              13.20 c
                   20-40              26.58 a
                   40-60              21.42 b
                   60-100             14.06 c

a. Figures with the same letter are not significantly different at P = 0.05.

Pasture Reclamation in Degraded Steeplands
Miguel A. Ayarza, N. C. State University, Yurimaguas, Peru
Rolando Dextre, INIPA, Yurimaguas, Peru

There are several million hectares of degraded, unproductive pastures in the Amazon, often on steep slopes. The purpose of this project is to develop a simple technique for reclaiming degraded pastures in Ultisol steeplands, using different establishment techniques.

Procedures
A two-factor experiment was installed in a degraded pasture occupying a 5.18-ha watershed with sideslopes of 20 to 50%. Treatments were established in an amphitheater fashion, following slope contours, with tillage methods as main plots and improved pasture species as subplots. The only fertilizer was Bayovar rock phosphate, applied at the rate of 12 kg P/ha in the hole or furrow. The species included Brachiaria decumbens, Brachiaria humidicola, Desmodium ovalifolium 350, and Centrosema pubescens 438. The species were planted in rows 2 m apart. Initial soil chemical and physical properties are supplied in Table 1. The main degraded pasture species were of the torourco complex. The experiment was initiated in June 1986. Improved grasses were planted using vegetative propagules and legumes using several seeds per planting point. After 1 month, a standardization cut was done on the grasses and a minimum handweeding on the zero-tillage plots. Performance of the species was determined on the basis of percent cover 2 months after planting and the production of biomass 6 months after planting. Cover was measured using quadrats 4 x 1 m in size. The number of introduced species was counted and the percent cover estimated. Biomass production was determined in a 4 x 1 m area in the center of each plot. Two rows and two strips were included in the sampling area. Results were expressed as fresh weight of planted species and as a percentage of the total biomass.

Results
Soil physical properties appeared more limiting than chemical properties. Penetrometer resistance values indicated that a compacted layer was present at the 5-10 cm depth, perhaps due to overgrazing (Figure 1). Relatively high pH and Ca values (Table 1) were associated with the age of the pasture. The area had been burned and cleared 3 years earlier to favor the regrowth of natural grasses. Fresh biomass production of the four species 5 months after establishment is shown in Table 2. The two grasses were successfully established without tillage, while the two legumes responded significantly to minimum or total tillage. The ability of the improved species to take over the degraded pasture is shown in Figure 2. More than half the area was covered by the improved species in all cases except where the legumes were planted without tillage. Total tillage, done progressively in order to avoid contiguous areas of exposed surfaces, resulted in almost complete cover within 5 months.
These results suggest that little or no soil disturbance is needed to establish grasses such as those used in the experiment. The stoloniferous species are able to rapidly cover new areas and compete strongly with species already present. On the other hand, legumes require at least minimum tillage. In general, Centrosema performed better than D. ovalifolium, due to its faster growth and more aggressive tendency to cover the ground. A second phase of the study was initiated in February 1986 to determine whether the persistence of a species depends upon grazing. The experimental area was divided into three paddocks, each containing the four species and the three methods of establishment. Two 150-kg steers grazed each replication in turn. Animal management was adjusted to give the animals the chance to consume all available forage (18 days grazing and 36 days resting). Persistence was monitored by using transects across the plots for each species x treatment combination before each grazing period. Results were expressed as the percentage of counting points (every 50 cm along the transect) at which shoots of the species were present.

Results
After 6 months of grazing, the percentage of grasses increased, whereas that of legumes decreased (Figure 3). This is probably the result of the capacity of these Brachiaria species to compete with and displace existing vegetation. Although the legume population is decreasing, an excellent stand of Centrosema is present.

Conclusions
Promising methods exist to establish improved grasses and legumes in degraded pastures. The results of this experiment indicate that simple establishment methods can be successful in compacted Ultisol steeplands but that some minimum tillage is needed to establish the legumes. Although minimum tillage was performed with a hand tractor, it is possible to replace it by using animal power.

Table 1. Initial soil properties of the steepland Ultisol area used for pasture reclamation in Yurimaguas (mean of seven samples).

Depth     pH    P       Al     Ca     Mg     K      Bulk density
(cm)            (ppm)   ----------cmol/L----------  (g/cm3)
0-20      4.7   8.1     3.8    2.88   0.73   0.12   1.37
20-40     4.8   4.4     5.9    2.42   0.64   0.10

Table 2. Effect of three tillage methods on the production of green forage of two grasses and two legumes after 5 months of establishment.(a)

                               Tillage treatment
Species                    Zero        Minimum      Total
                           ---------t fresh wt/ha---------
Brachiaria decumbens       7.70 a      10.6 a       10.3 a
Brachiaria humidicola      6.67 a      10.7 a       12.4 a
Desmodium ovalifolium      0.32 a       1.78 a       4.65 b
Centrosema 438             0.76 a       4.41 b       2.72 b

a. Figures followed by the same letter are not statistically different according to Duncan's multiple range test (P = 0.05).

Figure 1. Soil mechanical resistance (kg/cm2) measured with a penetrometer in the degraded pasture prior to treatment initiation (average of 36 observations per depth).

Figure 2. Effect of tillage method (no tillage, minimum tillage, total tillage) on the percentage of biomass produced by four introduced species (B. decumbens, B. humidicola, Centrosema 438, and D. ovalifolium) invading a degraded pasture.

Figure 3. Effect of grazing on the persistence of two grasses (B. decumbens and B. humidicola) and two legumes (C. pubescens and D. ovalifolium) established with zero, minimum, or conventional tillage on a degraded steepland area in Yurimaguas.

Pasture Reclamation via Herbicides
Jorge W. Vela, INIPA, Pucallpa, Peru
Miguel A. Ara, N. C.
State University, Pucallpa, Peru

Another approach to pasture reclamation is the eradication of the unsuitable species using herbicides. This approach is important in the Pucallpa region of the Peruvian Amazon, where chemical inputs are more available than in Yurimaguas.

Objective
To obtain the optimum combination of herbicide rate and time of application after tilling for effective weed control during the establishment phase of Brachiaria decumbens and Andropogon gayanus.

Procedures
A trial was carried out from February to July 1986 at the IVITA station near Pucallpa. Factors under study were three times of herbicide application and planting after land preparation (after 30, 45, and 60 days) and three glyphosate rates (1, 2, and 4 L/ha as Round-up). The treatment structure was a 3 x 3 factorial. The experimental layout consisted of 2 x 6 m plots in a randomized complete block design with six replications. Variables measured were dry matter yield of Andropogon and Brachiaria at establishment (120 days of growth); weed reinfestation at 30, 60, 90, and 120 days after tilling; and pasture cover at the same times. The experiment was laid out on a degraded native pasture. Selected soil properties at initiation of the experiment are shown in Table 1.

Results
Predominant weeds before land preparation by rototilling and 30 days after tilling are shown in Table 2. Weed predominance changed after land preparation: grasses comprised 70% of the weeds before tilling but only 10% 30 days after tilling. Cyperaceae comprised only 1.6% in the early stage but had the highest percentage (39%) after tilling. The weed-control treatments had similar results in both Andropogon and Brachiaria, so we will discuss only Brachiaria. Treatment effects of glyphosate on weed cover 30 days after herbicide application and crop planting (reinfestation) were highly significant, both for herbicide rates and times (Table 3) and for their linear and quadratic interactions. The lowest rate (1 L/ha) applied 30 days after tilling was able to reduce weed cover by 41%, but the highest reduction was for the highest rate (4 L/ha) applied 60 days after tilling, which reduced weed cover by 85%. Higher weed covers for the lowest rate applied at both 30 and 60 days after tilling were the product of a combination of effects: reinfestation in the "30 days after" treatment and insufficient herbicide in the "60 days after" treatment. This pattern remained more or less the same at 60 and 90 days after herbicide application. At 90 days, maximum reinfestation occurred in the "30 days after" treatment (60% weed cover). After that, the weed cover reflected competition with an already developed Brachiaria more than the treatment effect (no significant effect of treatments). The effect on the pasture itself is less clear. Thirty days after herbicide application and crop planting, the percentage of Brachiaria cover did not reflect any treatment effect, but Andropogon showed a highly significant linear interaction between rates and times, which is believed to reflect seed quality and season more than the treatment itself (Table 4). At 60 and 90 days after herbicide application and planting, the Brachiaria cover slightly reflected weed-control efforts, but at 120 days no treatment effect was observed and cover exceeded 50%. This was also observed in the pasture dry matter yield at establishment, which was not affected by any treatment.
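For reference (an interpretation added here, not a statement from the original report), the reductions quoted above appear to be expressed relative to complete (100%) weed cover: the 59% cover in Table 3 for 1 L/ha applied 30 days after tilling corresponds to the 41% reduction, and the 15% cover for 4 L/ha applied 60 days after tilling corresponds to the 85% reduction.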
Conclusions
No significant differences between the treatments in dry matter yield at establishment were present, and it was concluded that the lowest rate (1 L glyphosate/ha) is as good as 4 L/ha and could be recommended. With regard to the "times after" treatments, convenience depends on the particular case, but when low grazing pressure is planned for the early establishment phase it could be advisable to wait 60 days after tilling before herbicide application and planting.

Table 1. Selected soil properties at initiation of the experiment, 0-20 cm depth.

Sand   Clay   pH    P        Al     Ca     Mg     K      O.M.   Al sat.
-----%------        (ug/ml)  --------cmol/L--------       (%)    (%)
43     20     4.7   3.8      2.2    2.07   0.53   0.15   2.73   44

Table 2. Predominant weeds before rototilling and 30 days later.

                                                Approximate coverage (%)
Local name          Scientific name             Before          30 days after
                                                rototilling     rototilling
Torourco            Homolepis, Axonopus,            70              10
                    Paspalum sp.
Pega-Pega           Desmodium sp.                    8               4
Mimosa              Mimosa pudica                    6               2
Matapasto           Pseudoelephantopus               6               3
Sinchipichana       unknown                          4              19
Cyperaceae          Cyperus sp.                      2              39
Guayaba             Psidium guajava                  3               0
Broadleaved weeds                                    0              24

Table 3. Weed cover at 30 days after glyphosate application and Brachiaria planting. Treatment means of six replications. Highly significant linear times x rates interaction.

Rate of application               Time of application
(L/ha)                     30 days      45 days      60 days
                           ----------------%----------------
1                             59           63           67
2                             43           63           34
4                             45           35           15

Table 4. Pasture cover for Brachiaria decumbens and Andropogon gayanus. Means of six replications. Brachiaria: no significance. Andropogon: highly significant linear times x rates interaction.

               Rate of application           Time of application
               (L/ha)                 30 days      45 days      60 days
                                      ----------------%----------------
Brachiaria        1                      4            3            4
                  2                      4            3            5
                  4                      4            3            4
Andropogon        1                      5            2            0.3
                  2                      2            1            0.5
                  4                      2            2            0.3

Legume Shade Tolerance
Jorge W. Vela, INIPA, Pucallpa, Peru
Miguel A. Ara, N. C. State University, Pucallpa, Peru

Acid-tolerant legumes used in the TropSoils pasture management options may play an important role in agroforestry systems where cattle grazing may take place under trees. Experience in Southeast Asia indicates a wide variability of legume response to shade.

Objective
To test our most promising legume germplasm under the shade of a mature oil palm plantation to determine its adaptability to this factor.

Procedures
This experiment was established in November 1985 under an oil palm plantation managed by CIPA XXIII at km 44 of the Pucallpa-Lima highway. Three legumes (Desmodium ovalifolium, Pueraria phaseoloides, and Stylosanthes guianensis) are being evaluated for their dry matter productivity and feed quality in the presence and absence of oil palm shade. The experimental design is randomized complete blocks with five replications. Plots are 8 x 16 m, and each plot has two oil palm trees in the same position. A similar experiment with smaller (3 x 5 m) plots was established outside the palm plantation and will be used as a reference; all the variables will be analyzed as percentages of full sunlight.

Results
Dry matter production of all three legumes was severely affected by oil palm shade. Desmodium ovalifolium and P. phaseoloides performed similarly but yielded only about 22% of the full-sunlight value, even though they outperformed S. guianensis, which gave a poor 8% of the reference value. Crude protein content values for D. ovalifolium and S. guianensis were higher than full-sunlight values, but D. ovalifolium values were highest (Table 1). Preliminary data suggest that D.
ovalifolium is the most shade-tolerant legume tested. Visual observations show that S. guianensis performs quite poorly and tends to disappear. The stands of both D. ovalifolium and kudzu were quite acceptable under shade in absolute terms. Since kudzu is a climber, periodic cutting around trees is required, whereas D. ovalifolium does not require such cutting.

Table 1. Percentage of dry matter yield and crude protein content (CPC) of the legumes under shade relative to full sunlight (means of five replications).(a)

                   S. guianensis    D. ovalifolium    P. phaseoloides
                   ----------------% of full sunlight----------------
Dry matter              7.7 b            23.3 a            21.8 a
Crude protein         113.2 b           128.4 a            84.2 b

a. Within rows, means followed by the same letter are not significantly different at P = 0.01.

Extrapolation in Farmer Fields
Miguel A. Ayarza, N. C. State University, Yurimaguas, Peru
Rolando Dextre, INIPA, Yurimaguas, Peru

After 6 years of work at the Yurimaguas Station, Brachiaria humidicola, Centrosema pubescens, Stylosanthes guianensis, and Desmodium ovalifolium have shown a high potential to increase animal production and good persistence under grazing. On the basis of the experience gained over the years, it was decided to test the potential of improved pastures to replace degraded native pastures, which are common in the area.

Procedures
A validation trial was set up to demonstrate that degraded pastures can be put back into production either by substituting for the present vegetation or by improving the present pasture through incorporation of new species. An agreement was signed with the Empresa Ganadera Amazonas to conduct a 20-ha validation trial in an area covered by degraded pastures. Four pasture renovation options were designed:
1. Introduction of a legume on a degraded pasture of Brachiaria ruziziensis;
2. Use of a crop as a precursor to establishing an improved grass-legume pasture;
3. Improvement of native pasture through introduction of a legume;
4. Reclamation of a steepland pasture using improved grasses, legumes, and trees with minimum tillage.

Work started in December 1985 with soil characterization of the area. Extremely low native fertility and a predominantly sandy texture existed in this sandy Ultisol (Table 1). For option 1, 4 ha of a 10-year-old B. ruziziensis field were disked slightly to break the soil surface, facilitate planting of C. pubescens 438, and promote grass regrowth. The legume was broadcast over the area at a rate of 1 kg/ha. Rock phosphate was applied to supply 25 kg P2O5, and dolomitic limestone was applied to supply 50 kg Ca + 10 kg Mg. Option 2 was installed on a flat area infested by weeds and unpalatable grass species. Two diskings and two rototiller passes were needed to completely till 3 ha and eliminate the existing vegetation. Cv. Africano desconocido, an upland rice variety known to be tolerant of Al toxicity, was planted in strips 12 m wide. The crop received 60 kg N, 40 kg K2O, and 50 kg P2O5 per ha. Brachiaria humidicola was planted in alternate strips 2 months later. After rice harvest, cowpea was planted in rows 50 cm apart, and B. humidicola + D. ovalifolium was planted between the rows. Option 3 was installed on a 2-ha native pasture of torourco (Axonopus compressus). Strips 4 m wide were opened every 20 m and planted with Stylosanthes guianensis accessions 136 and 184. Phosphorus was applied at a rate of 12 kg P/ha using rock phosphate. Option 4 was installed in a degraded pasture on a 20% slope. The area was tilled in strips 1 m wide and 4 m apart on the contour.
Land preparation was carried out with a 1-1/2 HP manual rototiller. Areas between rows remained untouched. Brachiaria humidicola and C. pubescens 438 were planted in alternate strips, and Erythrina poeppigiana was planted in rows every 12 m along the contour.

Results
After 8 months, all pasture renovation options were fully established. In option 1, B. ruziziensis reacted positively to tillage and C. pubescens was spreading very well over the area. Measurements of botanical composition indicated 20% Centrosema in the total biomass, a highly desirable legume content for a mixed pasture. In option 2, rice yields were affected by short dry spells during January and February 1986. In spite of these limitations, 1.5 t/ha of rice were harvested and cowpea yields reached about 800 kg/ha. Both are acceptable yields for low-input systems. After harvesting the cowpea, B. humidicola was almost fully established, although it was infested by annual weeds. In option 3, Stylosanthes established rather quickly, although some handweeding was required. Introduced pastures in the steepland covered almost the entire area in option 4. Both Centrosema and B. humidicola replaced most of the native species in the nontilled bands. On the other hand, Erythrina did not establish well. Its growth was affected by soil conditions (sandy texture and K deficiency), and only two of six rows have established. Costs of renovating the degraded pastures are presented in Table 2. Total investment varied among systems. The most expensive system was the one with crops (option 2) because of labor and machine costs. Returns from crop yields were enough to pay for pasture renovation, however, and leave some extra profit before grazing.

Conclusions
Overall results showed that renovation of degraded pastures can be accomplished with low monetary inputs in some instances. In other cases, where a complete renovation is required, crops as precursors pay for the entire cost of pasture re-establishment. A second phase calls for milk production from cows grazed on the pastures in every system. This phase will start in 1987 as work supported by INIPA and Ganadera Amazonas S.A. and will serve as a thesis for an undergraduate student from the Universidad Agraria at La Molina.

Table 1. Topsoil chemical properties of the soil used for the extrapolation work in K-17 (Ganadera Amazonas area); mean of five samples, December 1985.

pH    P       Al     Ca     Mg     K      ECEC    Al sat.   Sand   Clay
      (ppm)   -----------cmol/L-----------        (%)       (%)    (%)
4.0   3.4     1.1    0.3    0.08   0.05   1.53    72        82     14

Table 2. Cost of renovating a degraded pasture by three methods of introduction of improved species.

                       Option 1               Option 2              Option 3
                   B. ruziziensis +         Rice-cowpea-           Native pasture
                     C. pubescens             pasture              (Stylosanthes)
                   U.S.$/ha     %         U.S.$/ha     %          U.S.$/ha     %
Labor                  7       11             86      39              20      16
Machinery              9       13             65      29              43      34
Fertilizer            35       53             53      24               6       4
Seed                  12       17             16       7              50      40
Other                  3        5             11       5               6       5
Total                 66       99            231     104             125      99

LOW-INPUT SYSTEMS

A low-input cropping system, reported last year as a transition technology between shifting cultivation and permanent agriculture, collapsed after seven crops in 3 years due to P and K deficiencies and increasing weed pressure. This year we report on the nutrient cycling and economic aspects of this first cropping period, the success of a 1-year kudzu fallow in overcoming weed constraints, the successful transition to other systems, and in-depth data on weed build-ups during the first cropping period.
Nutrient cycling from above- and belowground plant residues returned to the soil more than 80% of the K and Ca accumulated by the plants, about half of the biomass produced and of the N and Mg accumulated, but only 37% of the P, largely because of grain removal. The system was highly profitable both with and without fertilizer applications: purchased chemical inputs accounted for only 8% of the total costs without fertilizer and 16% with fertilizer. After 1 year of kudzu fallow, the fields were largely devoid of weeds and showed a higher fertility status. A second low-input cropping period started with high yields, and the transition to fertilizer-based continuous cultivation was successful. Weed-control studies indicate the inability to grow more than five or six low-input crops continuously with the best combination of herbicides and manual weed control. The main problem is controlling weeds in upland rice.

Central Experiment: Transition to Other Technologies
Jose R. Benites, N.C. State University, Yurimaguas, Peru
Pedro A. Sanchez, N.C. State University, Raleigh

A low-input cropping system has been developed in Yurimaguas, Peru, to serve as a transition technology between shifting and continuous cultivation for acid soils of the humid tropics. Its principal components are (1) traditional slash-and-burn clearing of forest fallow, (2) selection of acid-tolerant cultivars capable of high yields without liming, (3) rotation of upland rice and cowpea cultivars (no tillage), removing only the grain, and (4) no fertilizers, lime, or organic inputs brought in, with soil pH remaining at about 4.5; (5) the rotation continues for 3 years, but increasing weed pressure and decreases in available P and K cause the system to collapse in agronomic and economic terms. A total of five upland rice and two cowpea crops were harvested during a 3-year period, as opposed to one rice crop under traditional cultivation. Crop yields and effects on soil properties during this first period were presented in the last TropSoils technical report. This report covers the nutrient cycling effects and economic analysis of the first cropping period and the transition phase initiated after the system collapsed at the seventh continuous harvest on June 30, 1985.

Nutrient Removal and Cycling
Low-input systems should be efficient recyclers of nutrients in order to minimize the nutritional inputs needed to replace nutrients extracted by crop harvests. The nutrient composition of the acid-tolerant rice (cv. Africano) and cowpea (cv. Vita 7) plant parts at harvest, obtained in neighboring experiments, is presented in Table 1. The calculated amount of nutrient accumulation by the seven crops is shown in Table 2. Even though only the rice grain and cowpea pods plus grain were exported from the field, the harvested products during the 3-year period represented considerable nutrient removal from the field (Table 2). The amounts of nutrients accumulated by the crops but returned to the soil as above- or belowground organic inputs were larger than the amounts removed, except for P. Crop residues plus root turnover returned to the soil 62% of the dry matter produced, 54% of the N, 59% of the Mg, 87% of the K, and 94% of the Ca, but only 37% of the P accumulated by the crops (Table 2). Root turnover, assuming 100% fine root decomposition, accounted for a relatively minor proportion of the amounts recycled (14% of dry matter, 21% of N, 25% of P, 5% of K, 18% of Ca, and 12% of Mg).
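As an illustrative recalculation (not part of the original report), these percentages and the annual rates quoted in the next sentence follow directly from the straw and root entries in Table 2. For K, for example, 565 + 32 = 597 kg/ha returned out of 684 kg/ha accumulated, or 87%, and 597 kg spread over the roughly 3-year cropping period is about 199 kg K/ha/yr; the same arithmetic for N (232 + 63 = 295 kg/ha, or 54%, about 98 kg N/ha/yr) reproduces the corresponding figures.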
The actual amounts returned, therefore, are equivalent to an annual fertilization rate of 98-7-199-33-13 kg/ha of N-P-K-Ca-Mg. A proportion of the N returned as aboveground residue, however, may be lost before it enters the soil via denitrification at the mulch-soil interface. Biological N fixation by cowpea may counteract such losses, but neither process was measured. The P, K, Ca, and Mg inputs, however, are likely to be transferred entirely to the soil. Phosphorus, therefore, appears to be the critical nutrient, since about two-thirds of the crop uptake was removed in the harvested products, giving this element the lowest percentage of recycling and the lowest absolute amounts returned to the soil among the five nutrients evaluated.

Weed Pressure
In the authors' view, increasing weed-control difficulty was the single most important factor in the instability of this low-input system during its third year. The initial weed population was mainly broadleaved, which is typical of shifting-cultivation fields in the area. With time, the weed population gradually shifted to grasses, which are more aggressive and not subject to economically sound control by commercially available herbicides. Of particular importance was the spread of Rottboelia excelsa, a non-rhizomatous grass, particularly during rice growth. Cowpea was more competitive with weeds than upland rice because cowpea covered the soil surface more thoroughly. Studies on weed control in low-input systems at Yurimaguas indicate that the absence of tillage and burning promotes weed build-up (see next report). Rice straw mulch may decrease weed growth in cowpea, but cowpea residues do not have the same effect on rice, perhaps because of the fast decomposition rate visually observed with cowpea residues.

Economics
Cost records were kept in this 1-ha experiment. The summary for the first seven crops in the plots without fertilization is shown in Table 3. Labor inputs for the first crop include land clearing; the subsequent crops thus averaged two-thirds of the first crop's labor. Returning and redistributing crop residues averaged 10 man-days/ha, or approximately U.S. $20/ha per crop. Another major labor input was bird watching near harvest time. The next major cost items were interest on crop loans from the Banco Agrario and government fees for receiving and processing rice at the mills. Shifting cultivators routinely obtain bank loans, which are used primarily as an advance on their labor. Interest charges fluctuating from 40 to 101% on an annual basis in local currency reflect the high inflation rate in Peru, which averaged 125% annually during the study period. Even in U.S. dollar terms, the indirect costs averaged about 30% of the total production costs. In contrast, the cost of purchased chemical inputs (herbicides and insecticides) and other inputs (seed, bags, thresher rent) comprised 8 and 19% of the total production costs, respectively. The low-input system without fertilizer applications was highly profitable, averaging net returns of U.S. $1144/ha per year, or a 121% return over total costs (Table 4). The low-input system with fertilizers was also quite profitable, averaging an annual net return of U.S. $1125/ha and a 100% return over total costs. Fertilizers accounted for 9% of the total cost in that system, but also resulted in additional labor, interest, thresher use, and transport costs.
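These profitability figures can be reproduced from Table 4 (an illustrative recalculation, not part of the original report): without fertilizer, net returns of U.S. $3432/ha over the 3-year period average about $1144/ha/yr, and $3432 against total costs of $2843 is about 121%; with fertilizer, $3377/3 is about $1125/ha/yr and $3377 against $3351 is roughly 100%.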
The low-input system, either with or without fertilizers, is vastly more profitable than traditional shifting cultivation (Table 4).

Transition to Other Systems
This low-input system, therefore, is a transitional technology in both agronomic and economic terms. After 3 years, the field is devoid of felled logs and most of the remaining tree stumps are sufficiently decomposed to be destroyed with a good kick. The land-clearing process is thus complete, providing several options to the farmer. One is to put the land into a managed fallow and then start a second cropping cycle. A second is to plow, lime, fertilize, and rotate crops intensively; a third is pastures, and a fourth is agroforestry. The experiment described above was modified to address some of these options after the seventh crop harvest in July 1985. The 1-ha field was divided into eight 1250 m2 plots, providing four treatments in a randomized complete block design with two replications. The replicates were located on the previously fertilized and not fertilized treatments in order to block for the residual effects. Two treatments were designed to test the weed-control factor by continuing the low-input system (cowpea-rice-cowpea), a third was planted to kudzu fallow, and the fourth was a high-input system.

Continuing the Low-Input System
Prior crop yields suggest that the system collapsed after the seventh crop. This observation was confirmed by growing three more crops with a weed-control variable: full weed removal at economically unrealistic levels vs. the conventional treatment as previously described. The full treatment consisted of eliminating weeds with a pre-emergence application of 2.25 kg/ha active ingredient of metolachlor plus 2.5 L/ha of paraquat, followed by 0.28 kg/ha active ingredient of sethoxydim, supplemented by handweeding as needed. The actual cost of this treatment was U.S. $225/ha per crop, a totally unrealistic level. The conventional treatment was a pre-emergence application of 1.5 L/ha of 2,4-D followed by 2.5 L/ha of paraquat 5 days later and no handweeding, with a total cost of $25/ha per crop. Both weed-control plots received the same application of NPK fertilizers to rice as stated previously, in order to eliminate P and K deficiencies. Grain yields of the eighth, ninth, and tenth consecutive crops under conventional weed control were low with cowpea, and practically zero with rice (Table 5). When weeds were totally removed, yields of the ninth (rice) and tenth (cowpea) crops reached acceptable levels. Consequently, it seems reasonable to assume that the collapse of the system is directly related to weed-control problems.

Kudzu Fallow and a Second Crop Cycle
Traditional shifting cultivation involves a secondary forest fallow period of 4 to 20 years, supposedly to replenish soil nutrient availability and control weeds, although the processes involved are not well understood. Farmer experience around Yurimaguas indicates that a minimum desired age of fallow is about 12 years, but population pressures effectively reduce this period to an average of 4 years. Slashing and burning young forest fallows results in faster grass weed invasion than would occur in older fallows because the weed seed pool declines with fallow age. Considering the limited likelihood of long secondary fallow periods in developing humid tropical areas, the need for an improved fallow is apparent. Following a farmer's suggestion, we studied the use of tropical kudzu (Pueraria phaseoloides) as a managed fallow.
Unlike its temperate-region counterpart (Pueraria lobata), tropical kudzu does not produce storage roots and therefore is easy to eradicate by slash-and-burn. Kudzu fallows were grown in previously cultivated fields for different durations. In the most infertile and compacted soils of Yurimaguas, kudzu is slow to establish and initially shows several classic nutrient deficiency symptoms, but within 3 months a complete canopy is attained, the kudzu leaves become dark green, and weeds are smothered. Aboveground dry matter and ash accumulation by kudzu peak at about 2 years. We observed increases in exchangeable Ca and Mg and decreases in Al saturation in the topsoil of kudzu fallow plots that had a lime and fertilizer application history. But no improvements in these topsoil chemical properties were recorded in the kudzu fallow plots that had never been limed or fertilized during a previous cropping period. Consequently, the subsoils must have some nutrients available for recycling if significant recycling by a managed fallow is to take place in such acid soils. The same kudzu ecotype was seeded in this low-input experiment on August 28, 1985, after harvesting the seventh rice crop, which was heavily infested with Rottboelia excelsa and other weeds. No fertilizers were added to the kudzu plots, but one handweeding was used to pull tall Rottboelia plants. As before, kudzu was slow in establishing, but within 3 months it had developed a complete ground cover and a surface litter layer. Kudzu was slashed with machetes on September 13, 1986; after 10 days of dry weather, it was burned in a total time of 4 minutes for the 1250 m2 plots. Ash sampled 1 day after the burn contained significant amounts of nutrients, which were incorporated into the soil by the first rains (Table 6). A clear residual effect of the previous NPK fertilization is evident in the ash composition, particularly in the P and K contents. A crop of Africano rice was planted 3 days after the kudzu burn and harvested on January 22, 1987. It received 30-22-48 kg/ha of NPK, as did all previous rice crops. Grain yields were the highest obtained to date at this site (Table 7). This is partly due to very favorable rainfall distribution for rice growth, as evidenced by similar rice yields obtained in other experiments at that time, but also due to the absence of significant weed pressure. The 1-year kudzu fallow, therefore, effectively suppressed weed growth in a way far superior to the herbicide combinations attempted to date. The plots are now growing a subsequent crop of upland rice. Changes in topsoil chemical properties in the kudzu fallow plots are shown in Table 8 at the end of the first cropping cycle, after 1 year of kudzu fallow (1 day prior to burning it), and after the first harvest of the second cropping cycle. The effect of the kudzu fallow on topsoil chemical properties includes a significant decrease in exchangeable Ca and K, presumably due to plant uptake, with no changes in acidity, Al saturation, or available P (Table 8). Differences in the last four properties are significant in terms of prior fertilization treatment, which the kudzu fallow maintains. Topsoil chemical properties after the first harvest of the second cropping cycle show fewer differences due to previous fertilization than at the end of the first cropping cycle, partly because the entire area was fertilized with the 30-22-48 NPK formula.
Nevertheless, there is an overall trend of increasing available P, K, Ca, and Mg that may be related to the nutrient content of the ash and to prior fertilization during the first cropping cycle. Topsoil properties at 54 months after burning, representing seven crop harvests, 1 year of kudzu fallow, and one crop harvest afterwards, are about as good as or better than those at 3 months after burning the original forest (see previous technical report). Topsoil total organic matter contents have increased, probably as a result of the kudzu fallow litter inputs (Table 8). It appears reasonable to speculate that organic-matter contents will decrease slightly with subsequent cropping, as they did during the first cropping cycle. The second cropping cycle continues in order to determine how long it will last, except that no fertilizer is being applied and weed control will be only at the conventional level.

High-Input Crop Production
Another option is for the low-input system to serve as a 3-year transitional period to intensive, fertilizer-based, continuous cropping systems in areas that have developed sufficient road, credit, and market infrastructure to make this possibility attractive. The fields are certainly ready for mechanized tillage, provided slopes are suitable, because most of the felled vegetation has decomposed. One treatment of this field experiment was tilled to 25 cm with a 50-HP tractor, limed with 3 t/ha of dolomitic lime, and fertilized with 25 kg/ha P as triple superphosphate, 25 kg/ha Mg as MgSO4, 1 kg Zn as ZnSO4, 1 kg Cu as CuSO4, and 1 kg B as borax. Lime and fertilizers were incorporated with tillage. "Marginal 28," an adapted corn variety, was planted in ridges at a population of 56,000 plants/ha. This crop then received 100 kg N/ha as urea and 100 kg/ha of K as KCl in three split applications. Weeds remaining after tillage were controlled with 2.25 kg/ha active ingredient of metolachlor. A second crop of corn was then planted after mechanically incorporating the corn stover. It received an application of 100-25-100 kg/ha NPK and metolachlor at the same rate. The corn was followed by soybean (cv. Jupiter), which received an application of 30-25-100 kg NPK/ha. Insecticides were used in corn and soybean as needed to control mild insect attacks on these crops. Corn yields were normal for high-input systems for the planting season (3-4 t/ha), while soybean yields were somewhat lower than normal (2.0 t/ha), partly because of unusually heavy rains (Table 9). The system appears stable and of similar productivity to long-term high-input systems grown at Yurimaguas. A combined total of 8.6 t/ha of high-value grain (corn and soybean) was produced in approximately 15 months. Total productivity of the entire sequence was 22.4 t/ha of grain in 4 years and 4 months, or 5.2 t/ha per year. In order to decrease weed infestations, using the kudzu fallow prior to shifting to high-input cropping may be advisable.

Legume-Based Pasture
The low-input system can also serve as a precursor to establishing improved, acid-tolerant pastures, beginning with the clearing of secondary forests. Income-generating food crops can be grown, and the pasture species may be planted either vegetatively or by seed under a rice canopy. Several combinations of persistent, acid-tolerant grasses and legumes produced high and sustained liveweight gains in Yurimaguas for 6 years, as reported in the legume-based pastures section. The kudzu fallow itself could be used as a pasture in rotation with grass-based pastures.
Although we have not shifted from low-input cropping to pastures in an actual experiment, the possibility appears feasible. Since weed encroachment is a major limiting factor in pasture establishment, it may be advantageous to limit the number of crops in order to minimize the weed build-up that occurred during the sixth and seventh crops. Planting a kudzu fallow, burning it after 1 year, and then establishing the pastures may be a better approach.

Agroforestry
The low-input cropping system is a good way of providing cash income and ground cover during the establishment phase of tree plantations. The decision, however, has to be made early in order to transplant or seed the tree crops at adequate spacing shortly after clearing and burning the secondary forest. Unless liming is contemplated, the choice of tree crops should be limited to acid-tolerant ones. Examples of acid-tolerant crops for industrial purposes are rubber (Hevea brasiliensis), oil palm (Elaeis guineensis), and guarana (Paullinia cupana); for food production, peach palm (Guilielma gasipaes); for alleycropping, perhaps Inga edulis. Woody species known to be sensitive to soil acidity, such as Theobroma cacao or Leucaena leucocephala, should be avoided. The low-input system has been used successfully in nearby experiments for the establishment of peach palm and multipurpose tree production systems that include fast-growing (Inga edulis) and slow-growing (Cedrelinga cataeniformis) species. For peach palm, seedlings are transplanted with the first rice crop. Within 18 months, the peach palm produces too much shade for further crop growth; kudzu is then planted as an understory.

Implications
The low-input system has several potentially positive environmental impacts. It provides a low-cost alternative to shifting cultivation on highly acid soils. In order to produce the grain yields reported for the first cropping period, a shifting cultivator would need to clear about 14 ha in 3 years, in comparison to 1 ha in this low-input system. Furthermore, the use of secondary forest fallows instead of primary forests is emphasized, although the system should work well starting from primary forest. Erosion hazards are largely eliminated by the absence of tillage and the presence of a plant canopy on the soil surface, be it slash-and-burn debris, crop canopies, crop residue mulch, or a managed fallow. Nutrient recycling is maximized, but nutrients exported as grain must be replenished by outside inputs in soils so low in nutrient reserves. Perhaps just as importantly, the low-input system does not lead the farmer into a corner; it provides a wide range of options after the first cropping cycle is complete. There are many unanswered questions about the technology just described. Although its feasibility during the first cropping cycle followed by a managed fallow period has been demonstrated, information about the second cropping cycle is limited to one crop harvest. It cannot be stated at this point that a modified form of shifting cultivation with a 3:1 crop to managed fallow ratio is feasible on a long-term basis. More in-depth knowledge of weed population shifts and fertility dynamics is needed. Zero tillage poses a major constraint to long-term weed control. Soil data have been confined to readily determined chemical parameters; soil physical and biological dynamics are now being intensively studied.
Tropical kudzu is but one of several promising species for managed fallows, and others are being investigated in terms of above- and belowground biomass accumulation, nutrient cycling, and weed suppression. A fresh look at the management of organic inputs and soil organic matter is also under way. The effect of the age of the fallow needs to be studied in greater detail. What are the trade-offs with a longer fallow? Also, can the weed problem be reduced by shortening the period between harvesting and planting? Time will tell.

Table 1. Nutrient concentrations of rice (cv. Africano desconocido) and cowpea (cv. Vita 7) grain and straw at harvest, and fine roots at anthesis, under low-input systems in Yurimaguas. Mean values of seven rice harvests and three cowpea harvests for aboveground parts, and two crop harvests each for roots.

Crop      Plant part     N      P      K      Ca     Mg
                         ----------------%----------------
Rice      Grain         1.40   0.23   0.36   0.03   0.11
          Straw         1.01   0.07   2.85   0.28   0.18
          Roots         1.19   0.09   0.93   0.21   0.05
Cowpea    Grain         3.86   0.35   1.36   0.06   0.21
          Straw         1.86   0.13   3.97   0.95   0.23
          Pods          0.70   0.06   2.17   0.17   0.22
          Roots         1.23   0.12   0.71   0.59   0.19

Table 2. Total dry matter and nutrient accumulation by five rice and two cowpea crops harvested in 34 months without fertilization, and amounts returned to the soil.(a)

Plant part             Dry matter      N      P      K      Ca     Mg
                       (t/ha)          --------------kg/ha--------------
Grain + pods             14.2         256     34     87     65     18
Straw                    18.1         232     15    565     80     35
Roots                     5.2          63      5     32     18      5
Total                    37.5         551     54    684    104     58
% returned to soil         62          54     37     87     94     59

a. Based on mean grain:straw ratios of 0.84 for rice and 0.52 for cowpea, a mean cowpea pod weight of 0.32 t/ha per crop, and fine root biomass of 0.65 and 0.97 t/ha per crop for rice and cowpea in the top 30 cm of soil.

Table 3. Labor input, production costs, and revenues incurred in the low-input system with seven crops without fertilization.

                                           Crop sequence
                               1      2      3       4      5       6      7
Input or output               Rice   Rice   Cowpea  Rice   Cowpea  Rice   Rice
Labor (man days/ha)            172     79     99      79     99      79     79
Cost (U.S. $/ha):
  Labor                        380    140    113     134    167     130     95
  Herbicides                    21     21     25      26     25      24     25
  Insecticides                   0     11     14      14     13       0      0
  Seed                          19     17     75      18     51      16     17
  Bags                          16     18      8      20      7      18     50
  Thresher rent                  0     34      0      38      0      34     80
  Transport to market           12     12     14      14     14      12     14
  Loan interest and fees       135     80     86     105    108     111    225
  Total cost                   583    333    335     369    385     345    506
Revenue:
  Grain produced (t/ha)       2.44   2.99   1.10    2.77   1.19    1.84   1.52
  Price (U.S. $/t)             321    281   1420     305   1127     265    274
  Gross revenue (U.S. $/ha)    783    840   1562     845   1341     488    416
Net return (U.S. $/ha)         200    507   1227     476    956     143    -90
Net return/cost (%)             34    152    366     129    248      41    -18

Table 4. Cumulative production costs and returns actually incurred with seven crops in 3 years, with and without fertilization, and under shifting cultivation.

                                    Low-input system                Shifting
                               Not fertilized     Fertilized        cultivation
Inputs and outputs             U.S.$/ha    %     U.S.$/ha    %     U.S.$/ha    %
Costs:
  Labor inputs                   1159     41       1185     35       380     65
  Chemical inputs:
    Fertilizers                     0      0        292      9         0      0
    Herbicides                    167      6        167      5        21      4
    Insecticides                   52      2         52      2         0      0
  Other inputs:
    Seeds                         213      7        213      6        19      3
    Bags                          137      5        140      4        16      3
    Thresher use                  186      7        189      6         0      0
    Transport to market            92      3         96      3        12      2
  Loan interest and fees          850     30       1073     38       135     23
  Total costs                    2843    100       3351    100       583    100
Gross revenues                   6275              6688              783
Net returns                      3432              3377              200
% Returns/cost                    121               100               30

Table 5. Grain yields of three additional crop harvests in the low-input system at two levels of weed control.
                                 Planting         Harvest          Weed-control level
Crop sequence                    date             date             Conventional    Full     LSD0.05
                                                                   ----------t/ha----------
8.  Cowpea cv. Vita 7            Aug. 19, '85     Oct. 31, '85         0.58        0.58       ns
9.  Rice cv. Africano(a)         Jan. 9, '86      May 9, '86           0.09        1.60       0.19
10. Cowpea cv. Vita 7            July 15, '86     Sept. 25, '86        0.43        0.82       0.26

a. Fertilized with 30-22-48 kg/ha of NPK as previously.

Table 6. Dry matter and nutrient content of 1-year-old kudzu fallow ash 1 day after burning. Mean of 12 observations in the previously not fertilized treatment and 11 in the previously fertilized treatment. Date: September 24, 1986, 51 months after clearing.

Previous fertilizer   Dry matter     N      P      K      Ca     Mg     Zn      Cu      Fe      Mn
treatment             (t/ha)         -------------------------kg/ha-------------------------
Not fertilized          1.45         21      7     42     58     14     0.17    0.08    1.07    0.82
Fertilized              2.29         26     21    103     92     20     0.29    0.12    0.99    1.34
LSD0.05                 0.85         ns      9     51     ns     ns     0.12    ns      ns      ns

Table 7. Dates of planting and harvesting the kudzu fallow and subsequent crops, and crop yield as affected by a fertilizer differential prior to planting kudzu.

                               Planting         Harvest          Grain yield by residual fertilizer level
Crop sequence                  date             date             None       Yes       LSD0.05
                                                                 ------------t/ha------------
8.  Kudzu fallow               Aug. 28, '85     Sep. 13, '86      --         --         --
9.  Rice cv. Africano          Sep. 26, '86     Jan. 22, '87     3.86       3.98       0.41
10. Rice cv. Africano          Feb. 13, '87     in progress

Table 8. Changes in selected topsoil (0-15 cm) chemical properties prior to and after the 1-year fallow, and after harvest of a subsequent crop in the kudzu fallow plots.

                      Months    Prior                                           Al      Avail.   Organic
                      after     fertili-   pH                                   sat.    P        matter
Plot status           burning   zation     (H2O)   Al     Ca     Mg     K       (%)     (mg/dm3) (%)
                                                   ------cmol/L-------
End of first            35      No         4.5     1.9    0.98   0.10   0.26    50       4       1.92
cropping cycle                  Yes        4.5     1.2    0.98   0.16   0.19    46      14       1.77
(seven crops)
After 1 year in         52      No         4.5     1.8    0.60   0.09   0.13    68       7
kudzu fallow                    Yes        4.6     1.1    0.57   0.13   0.11    57      14
(before burning)
At first harvest        54      No         4.8     1.6    1.05   0.18   0.23    52      14       2.44
of second                       Yes        4.5     1.1    0.76   0.20   0.14    49      25       2.71
cropping cycle
LSD0.05                                    0.2     0.3    0.25   0.05   0.05    11       6       ns
                                             4      25      35     32     39    21      48       15

Table 9. Grain yields of the first three intensively managed continuous crops after shifting from a low- to a high-input system.

                               Planting          Harvest          Grain yield
Crop sequence                  date              date             (t/ha)
8.  Corn cv. Marginal 28       Sept. 9, '85      Jan. 19, '86     3.86
9.  Corn cv. Marginal 28       March 22, '86     July 24, '86     2.90
10. Soybean cv. Jupiter        Sept. 11, '86     Dec. 23, '86     1.87
Totals                         15 months                          8.63

Weed Control in Low-Input Cropping Systems
Jane Mt. Pleasant, N. C. State University, Raleigh
Robert E. McCollum, N. C. State University, Raleigh

Weed control in low-input cropping systems must rely on cultural practices to increase the crop's ability to compete against weeds and thereby reduce the amount of herbicide and manual weeding inputs that are needed to maintain crop yields. Mulching, tillage, timely fertilization, increasing crop density, and the use of competitive cultivars are all examples of cultural practices that can aid in controlling weeds.

Objective
To identify cultural practices that can form the basis of a weed management program for a continuously cropped rice-cowpea rotation under low-input conditions.
Procedures The experiment was established after cutting and burning a 5- to 10- year-old secondary forest at Yurimaguas. Five consecutive crops, identified as Cycles 1 through 5 (rice-rice-cowpea-rice-cowpea), were planted. The experimental design was a split-plot, with tillage and residue management as main plots. There were three main-plot treatments: (1) rototill with previous crop residues incorporated, (2) rototill with residues mulched and, (3) no-till with residues mulched. A factorial arrangement of two crop densities (high and low) and three weed-control practices (handweeding, herbicide, and no control) comprised the subplot treatments. Oxadiazon and propanil were the herbicides used on rice, and metolachlor was applied to cowpea. Data from Cycles 2 through 5 were combined and analyzed as a single experiment. Cycle 1 was not included in this combined analysis because there was no residue management variable in the first crop that succeeded the forest fallow. The remaining four cycles were consistent in treatment through the duration of the experiment. Although the emphasis in this report is on the combined analysis, some data from individual crop cycles are also reported. Results and Discussion Crop Species Rice had more weeds and lower relative yields than cowpea (Figure 1), and failure to control weeds had a much greater effect on rice than on cowpea yields (Figure 2). Because cowpea establishes quickly, we hypothesize that it covers the row and shades out emerging weed seedlings. Rice, in contrast, is much slower in forming a canopy. With the ground left unshaded, weed seedlings in rice quickly become competitors. Time Weed infestation was also more severe the longer the field was cropped (Figure 1). Weed levels in Cycle-4 rice were much higher than in Cycle-2 rice. The same pattern was seen in cowpea: weed infestation was higher in Cycle 5 than in Cycle 3. Residue Management Mulching was ineffective as a weed control practice, but relative crop yields were higher when residue was incorporated (Figure 3). In this experiment the primary role of residue management appears to be related to nutrient release rather than weed control. We theorize that rapid decomposition of residues following incorporation releases nutrients more quickly for immediate recycling than when they are allowed to decompose on the surface. Tillage No-till plots had more weeds and lower yields than rototilled plots in the combined analysis, but the effect of tillage on weed growth changed over the course of the experiment (Figure 4). In the first cycle, rototilled plots had many more weeds than no-till plots. By Cycle 5, however, the no- till treatment had the greater weed infection. Rototilling in Cycle 1 provided an ideal seedbed for weed seed germination. With tillage, weed seeds were brought to the surface where they germinated in a flush. In contrast, weed infestation in Cycle-i no-till plots was low. Fire destroyed both standing vegetation and surface seeds. Without soil disturbance to bring buried weed seeds to the surface, weed infestation in the first crop was minimal. When the field is cropped continuously, however, lack of tillage through several cropping cycles brings increased weed problems. Soil disturbance has a positive effect in a continuous cropping system because existing vegetation can be completely eliminated between crops. While weeds in no-till plots were burned back with a pre-plant application of paraquat between crops, they quickly regrew. 
Consequently, after five consecutive no-till cropping cycles, untilled plots had a much larger weed infestation than rototilled treatments. Crop Density Increasing crop density was an effective weed control measure (Figure 5). In most cycles there were fewer weeds and higher product yields when the crops were planted at the higher density. But yields, particularly rice, increased at the higher density even when weed growth was not affected. Apparently closer-spaced rice was more efficient in intercepting sunlight for photosynthetic production, and the higher yields were independent of the effect of crop density on weed growth. Weed-Control Practice Failure to control weeds reduced product yields in both rice and cowpea, but the weed-control method (manual or chemical) had little effect on yield (Figure 5). Management and Research Implications If a viable low-input continuous cropping system is to evolve in the Peruvian Amazon, it will depend on upland rice as the central cash crop, and the rice or any associated crop will be planted without tillage. It is now apparent that we cannot control weeds through repeated cycles of rice-cowpea rotations without large and unprofitable inputs for weed control. For this reason, such a cropping system must be considered transitional. It may form a bridge between shifting cultivation and a more permanent agriculture, but it is not a stable, long-term alternative to shifting cultivation. Lack of tillage and the poor competitive ability of upland rice are the primary causes of the weed-control problem. Weed infestation in rice increases dramatically with time when a field is cropped continuously without tillage, even when rice is rotated with cowpea. Because upland rice is such a poor competitor, grasses invade vigorously and crop yields decline sharply with successive rice crops. It is possible to control weeds (either manually or with herbicides) and maintain yields, but the cost of control is prohibitive for a low-input system within the present price-profit structure in Peru. With rice yields of 2 to 3 t/ha and cowpea yields of 1 to 2 t/ha, weed-control costs represent 25 to 40% of the value of the crop. Based on our present knowledge, a realistic "lifespan" for the low-input system is probably five or six crop cycles, after which the cropping system must be interrupted by a fallow period or tillage in order to disrupt and displace the weed community. Research should now focus on developing effective and economic weed-control strategies for this transitional low- input system comprised of no more than six consecutive crop cycles. Our work suggests several avenues that may be productive. Weed-control inputs are unnecessary or minimal in the first rice crop after a forest fallow and in all cycles with cowpea. The majority of weed- control costs will be concentrated in the second, third, and fourth rice crops. Increasing the planting density of rice is an effective and cheap form of weed control. Work is needed to establish optimum planting densities for rice cultivars in this environment. As demonstrated with cowpea, an aggressive, fast-growing crop is another form of inexpensive weed control. Rice cultivars used in the low-input system should be selected for their competitive abilities. Early canopy formation, to shade out weed seedlings, is probably a critical characteristic for rice cultivars in this management system. We have shown that herbicides can provide excellent weed control in upland rice. 
Furthermore, we suggest that this control method may become economic if herbicide use can be integrated with other control practices so that the rate and number of applications can be reduced. A practical and effective weed-management program for a low-input system will combine cultural practices with chemical and manual methods of control.

(Figures 1-3: the plots themselves were lost in this copy; only the captions follow.)

Figure 1. Effect of crop species and number of cropping cycles on weed infestation in four consecutively-planted short-season crops. Yurimaguas 1984-1985. R = rice; CP = cowpea; C2 = Cycle 2; C3 = Cycle 3; C4 = Cycle 4; C5 = Cycle 5.

Figure 2. Effect of weed control on weed infestation and relative yield in rice and cowpea in four consecutively-planted short-season crops. Yurimaguas 1984-1985. NO CTRL = no weed control; ALL CTRL = treated with herbicide or handweeded.

Figure 3. Effect of residue management on weed infestation and relative yield in four consecutively-planted short-season crops. Yurimaguas 1984-1985.
http://ufdc.ufl.edu/UF00055257/00002
CC-MAIN-2016-36
refinedweb
23,214
55.03
NAME | SYNOPSIS | DESCRIPTION | ERRORS | USAGE | EXAMPLES | BUGS | ATTRIBUTES | SEE ALSO | NOTES #include <time.h>char *ctime(const time_t *clock); extern time_t timezone, altzone; extern int daylight; extern char *tzname[2];void tzset(void); cc [ flag... ] file... -D_POSIX_PTHREAD_SEMANTICS [ library... ]char *ctime_r(const time_t *clock, char *buf); The ctime() function converts the time pointed to by clock, representing the time in seconds since the Epoch (00:00:00 UTC, January 1, 1970), to local time in the form of a 26-character string, as shown below. Time zone and daylight savings corrections are made before string generation. The fields are in constant width: Fri Sep 13 00:00:00 1986\n\0 The ctime() function is equivalent to: asctime(localtime(clock)) The ctime(), asctime(), gmtime(), and localtime() functions return values in one of two static objects: a broken-down time structure and an array of char. Execution of any of the functions can overwrite the information returned in either of these objects by any of the other functions. The ctime_r() function has the same functionality as ctime() except that the caller must supply a buffer buf with length buflen to store the result; buf must be at least 26 bytes. The POSIX ctime_r() function does not take a buflen parameter. The localtime() and gmtime() functions return pointers to tm structures (see below). The localtime() function corrects for the main time zone and possible alternate (“daylight savings”) time zone; the gmtime() function converts directly to Coordinated Universal Time (UTC), which is what the UNIX system uses internally. The localtime_r() and gmtime_r() functions have the same functionality as localtime() and gmtime() respectively, except that the caller must supply a buffer res to store the result. The asctime() function converts a tm structure to a 26-character string, as shown in the previous example, and returns a pointer to the string. The asctime_r() function has the same functionality as asctime() except that the caller must supply a buffer buf with length buflen for the result to be stored. The buf argument must be at least 26 bytes. The POSIX, 61] */ /*, in seconds, between Coordinated Universal Time and the alternate time zone. The external variable timezone contains the difference, in seconds, between UTC and local standard time. The external variable daylight indicates whether time should reflect daylight savings time. Both timezone and altzone default to 0 (UTC). The external variable daylight is non-zero if an alternate time zone exists. The time zone names are contained in the external variable tzname, which by default is set to: char *tzname[2] = { "GMT", " " }; These functions know about the peculiarities of this conversion for various time periods for the U.S. (specifically, the years 1974, 1975, and 1987). They start handling the new daylight savings time starting with the first Sunday in April, 1987. The tzset() function uses the contents of the environment variable TZ to override the value of the different external variables. It is called by asctime() and can also be called by the user. See environ(5) for a description of the TZ environment variable.() change the values of the external variables timezone, altzone, daylight, and tzname. Note that in most installations, TZ is set to the correct value by default when the user logs on, using the local /etc/default/init file (see TIMEZONE(4)). 
The ctime_r() and asctime_r() functions will fail if: The length of the buffer supplied by the caller is not large enough to store the result. historial records. The tzset() function scans the contents of the environment variable and assigns the different fields to the respective variable. For example, the most complete setting for New Jersey in 1986 could be: EST5EDT4,116/2:00:00,298/2:00:00or simply EST5EDT An example of a southern hemisphere setting such as the Cook Islands could be KDT9:30KST10:00,63/5:00,302/20:00 In the longer version of the New Jersey example of TZ, tzname[0] is EST, timezone is set to 5*60*60, tzname[1] is EDT, altzone is set to 4*60*60, the starting date of the alternate time zone is the 117th day at 2 AM, the ending date of the alternate time zone is the 299th day at 2 AM (using zero-based Julian days), and daylight is set positive.() are. The zoneinfo timezone data files do not transition past Tue Jan 19 03:14:07 2038 UTC. Therefore for 64-bit applications using zoneinfo timezones, calculations beyond this date might not use the correct offset from standard time, and could return incorrect values. This affects the 64-bit version of localtime(), localtime_r(), ctime(), and ctime_r(). See attributes(5) for descriptions of the following attributes: time(2), Intro(3), getenv(3C), mktime(3C), printf(3C), putenv(3C), setlocale(3C), strftime(3C), TIMEZONE(4), attributes(5), environ(5) When compiling multithreaded programs, see Intro(3), Notes On Multithreaded Applications. multithread applications, as long as no user-defined function directly modifies one of the following variables: timezone, altzone, daylight, and tzname. These four variables are not MT-Safe to access. They are modified by the tzset() function in an MT-Safe manner. The mktime(), localtime_r(), and ctime_r() functions call tzset(). Solaris 2.4 and earlier releases provided definitions of the ctime_r(), localtime_r(), gmtime_r(), and asctime_r() functions as specified in POSIX.1c Draft 6. The final POSIX.1c standard changed the interface for ctime_r() and asctime_r(). Support for the Draft 6 interface is provided for compatibility only and might | ERRORS | USAGE | EXAMPLES | BUGS | ATTRIBUTES | SEE ALSO | NOTES
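As a worked illustration (not part of the original manual page), a minimal use of the two-argument POSIX ctime_r() shown in the SYNOPSIS, with the 26-byte buffer requirement described above, might look like this:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);    /* seconds since the Epoch */
    char buf[26];               /* ctime_r() requires at least 26 bytes */

    /* caller-supplied buffer, so no static data is shared between threads */
    if (ctime_r(&now, buf) != NULL)
        printf("local time: %s", buf);   /* the string already ends in \n */

    return 0;
}

On Solaris this is the form obtained by compiling with -D_POSIX_PTHREAD_SEMANTICS as shown in the SYNOPSIS; the older Draft 6 form takes an extra buflen argument instead.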
https://docs.oracle.com/cd/E19683-01/817-0710/6mgg8q8ep/index.html
CC-MAIN-2019-43
refinedweb
920
51.48
append a CSV file containing User-Agent strings with IsMobile, PlatformName and PlatformVersion properties. The following aspects of the API are covered: Offline processing example of using 51Degrees device detection. The example shows how to: var provider = FiftyOneDegreesPatternV3.NewProvider(dataFile) fin, err := os.Open("20000 User Agents.csv") fout, err := os.Create(outputFile) var out = [][]string{{"User-Agent"}} for i := range properties { out[0] = append(out[0], properties[i]) }) } } package main import ( "fmt" "./src/pattern" "encoding/csv" "os" "log" "io" ) // Locations of data files. const dataFile = "../data/51Degrees-LiteV3.2.dat" const inputFile = "../data/20000 User Agents.csv" const outputFile = "offlineProcessingOutput.csv" // Number of records in the CSV file to process. const numRecords = 20 // Provides access to device detection functions. var provider = FiftyOneDegreesPatternV3.NewProvider(dataFile) // Which properties to retrieve var properties = []string{"IsMobile", "PlatformName", "PlatformVersion"} func output_offline_processing() { // Set csv up reader fin, err := os.Open(inputFile) if err != nil{ log.Fatal(err) } r := csv.NewReader(fin) // Separator between records in the input csv file is a new line r.Comma = '\n' r.LazyQuotes = true // Set csv up writer fout, err := os.Create(outputFile) if err != nil { log.Fatal("Unable to create output file.") } w := csv.NewWriter(fout) // Set separator in output file to the pipe character, this is because // User-Agents can contain commas. w.Comma = '|' defer fout.Close() // Create a 2D array to append the match results to so we can write later. var out = [][]string{{"User-Agent"}} for i := range properties { out[0] = append(out[0], properties[i]) } var records []string // Read a set number records from input file. if numRecords != -1 { for i := 0; i < numRecords; i++ { record, err := r.Read() // Stop at EOF. if err == io.EOF { break } // The first part of each record is the User-Agent, append it to // records array. records = append(records, record[0]) } }else { for true{ record, err := r.Read() // Stop at EOF. if err == io.EOF { break } // The first part of each record is the User-Agent, append it to // records array. records = append(records, record[0]) } } // Process all records) } } fmt.Println("Done") // Write all processed records to the output file. w.WriteAll(out) defer w.Flush() fmt.Println() } func main() { fmt.Println("Starting Offline Prosessing") output_offline_processing() fmt.Println("Output file written to ", outputFile) } Find Profiles Strongly Typed Match Metrics Offline Processing Web App Match For Device Id
https://51degrees.com/Developers/Documentation/APIs/Go/Tutorials/Offline-Processing
CC-MAIN-2017-22
refinedweb
388
54.9
Scala supports the notion of case classes. Case classes are just regular classes that are: Here is an example for a Notification type hierarchy which consists of an abstract super class Notification and three concrete Notification types implemented with case classes SMS, and VoiceRecording. abstract class Notification case class Email(sourceEmail: String, title: String, body: String) extends Notification case class SMS(sourceNumber: String, message: String) extends Notification case class VoiceRecording(contactName: String, link: String) extends Notification Instantiating a case class is easy: (Note that we don’t need to use the new keyword) val emailFromJohn = Email("[email protected]", "Greetings From John!", "Hello World!") The constructor parameters of case classes are treated as public values and can be accessed directly. val title = emailFromJohn.title println(title) // prints "Greetings From John!" With case classes, you cannot mutate their fields directly. (unless you insert var before a field, but doing so is generally discouraged). emailFromJohn.title = "Goodbye From John!" // This is a compilation error. We cannot assign another value to val fields, which all case classes fields are by default. Instead, you make a copy using the copy method. As seen below, you can replace just some of the fields: val editedEmail = emailFromJohn.copy(title = "I am learning Scala!", body = "It's so cool!") println(emailFromJohn) // prints "Email([email protected],Greetings From John!,Hello World!)" println(editedEmail) // prints "Email([email protected],I am learning Scala,It's so cool!)" For every case class the Scala compiler generates an equals method which implements structural equality and a toString method. For instance: val firstSms = SMS("12345", "Hello!") val secondSms = SMS("12345", "Hello!") if (firstSms == secondSms) { println("They are equal!") } println("SMS is: " + firstSms) will print They are equal! SMS is: SMS(12345, Hello!) With case classes, you can utilize pattern matching to work with your data. Here’s a function that prints out different messages depending on what type of Notification is received: def showNotification(notification: Notification): String = { notification match { case Email(email, title, _) => "You got an email from " + email + " with title: " + title case SMS(number, message) => "You got an SMS from " + number + "! Message: " + message case VoiceRecording(name, link) => "you received a Voice Recording from " + name + "! Click the link to hear it: " + link } } val someSms = SMS("12345", "Are you there?") val someVoiceRecording = VoiceRecording("Tom", "voicerecording.org/id/123") println(showNotification(someSms)) println(showNotification(someVoiceRecording)) // prints: // You got an SMS from 12345! Message: Are you there? // you received a Voice Recording from Tom! Click the link to hear it: voicerecording.org/id/123 Here’s a more involved example using if guards. With the if guard, the pattern match branch will fail if the condition in the guard returns false. def showNotificationSpecial(notification: Notification, specialEmail: String, specialNumber: String): String = { notification match { case Email(email, _, _) if email == specialEmail => "You got an email from special someone!" case SMS(number, _) if number == specialNumber => "You got an SMS from special someone!" 
case other => showNotification(other) // nothing special, delegate to our original showNotification function } } val SPECIAL_NUMBER = "55555" val SPECIAL_EMAIL = "[email protected]" val someSms = SMS("12345", "Are you there?") val someVoiceRecording = VoiceRecording("Tom", "voicerecording.org/id/123") val specialEmail = Email("[email protected]", "Drinks tonight?", "I'm free after 5!") val specialSms = SMS("55555", "I'm here! Where are you?") println(showNotificationSpecial(someSms, SPECIAL_EMAIL, SPECIAL_NUMBER)) println(showNotificationSpecial(someVoiceRecording, SPECIAL_EMAIL, SPECIAL_NUMBER)) println(showNotificationSpecial(specialEmail, SPECIAL_EMAIL, SPECIAL_NUMBER)) println(showNotificationSpecial(specialSms, SPECIAL_EMAIL, SPECIAL_NUMBER)) // prints: // You got an SMS from 12345! Message: Are you there? // you received a Voice Recording from Tom! Click the link to hear it: voicerecording.org/id/123 // You got an email from special someone! // You got an SMS from special someone! When programming in Scala, it is recommended that you use case classes pervasively to model/group data as they help you to write more expressive and maintainable code: Contents
http://docs.scala-lang.org/tutorials/tour/case-classes
CC-MAIN-2017-04
refinedweb
640
52.46
This site uses strictly necessary cookies. More Information I know the logic that I want to implement, but I don't know how to actually program it. What I want to do is have it so that if you hit one of the alpha buttons (1, 2, 3) the script will enable the corresponding object in an array. Then I want to disable ever other object in the array. This way the number of weapons in the inventory can be changed. It would be really helpful for an explanation of a script as well. Thanks! this question answer system is for "problems" not "design of my code". Ask in the normal forum for this kinda question. secondly, try to search for similar tutorials on this. From what I read, it looks to me like an inventory kinda setup. Search for that in Google or in Unity forums. I believe there are even free assets in the Assets store for inventory, but I am not sure. Question, do you need to deactivate every object in the array? Is it possible to run this like a FS$$anonymous$$, simply cache the gameObject that is currently active and only disable that one? Otherwise you can use a for or foreach loop to iterate over the entire array. Answer by Lazdude17 · Jul 02, 2014 at 03:56 AM Make your List in your inventory script using System.Collections.Generic; private List<GameObject> weaponsList; Then assign the Weapons you have to it so that after you pick up a weapon it goes into your inventory. //On pick up of sword inventory.weaponsList.Add(//gameobject here like sword) Then after you equip the weapon you want like "sword" set it to the variable "currentWeapon" foreach(GameObject weapon in inventory.weaponsList) { if(gameObject.activeSelf) { gameObject.SetActive(false); } } //Now that they're all off you turn yours back on currentWeapon.SetActive(true); Now I know this is kinda rough but this is how I would start it. After a little tweaking, this works perfectly. Thanks! No problem! Glad it worked. Answer by Bolbo13 · Jul 02, 2014 at 02:41 AM You can implement a Weapon class. Then have all your weapons inherit from it. From there you can create a Weapon array that contains all your weapons. So to sum up the class list should look like this : Class Weapon Class Rifle : Weapon Class Shotgun : Weapon etc. Of course every method you call through the Weapon array needs to be defined in the Weapon class (as abstract or virtual) and the specific weapon (rifle, shotgun, whatnot) should override it if need be. In essence, that's polymorphism at work. I hope I was clear. This approach makes it very nicely OO and using Polymorphism could be logic, but from a gamedesign point of view, as they are all weapons, just making small variables pr. type and then keep a "type" variable in them would save you a LOT of work of building new classes everytime you invent a new weapon + you can make weapon upgrades on generic weapons system, you cant on polymorphism architeture, as you would have to make a class for each weapon grade too. Just imagine the amount of "IF" structures needed if you have 50 different weapons in a game? You can handle ALL your generic weapons with ONE IF structure, just check the variables and do the math. So I dont agree with this answer Bolbo13, its a rather bad answer in my opinion. Inheritance is one valid answer. In many cases changing variables is not enough to create truly unique weapons. A slingshot, knife, bazooka, machinegun, blaster pistol, portal gun, firehose, gravity ray and time warping pen all need different coding structures. 
Also, it's possible to have variables in the class that allow upgrades, exactly the same as on a single generic class.
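Pulling the thread's pieces together, a rough sketch of the alpha-key switching the question describes might look like the following (the list name, class name, and Equip method are illustrative, not taken from the thread):

using System.Collections.Generic;
using UnityEngine;

public class WeaponSwitcher : MonoBehaviour
{
    public List<GameObject> weaponsList;   // filled in the Inspector or on pickup

    void Update()
    {
        // alpha keys 1, 2, 3 map to slots 0, 1, 2 in the list
        if (Input.GetKeyDown(KeyCode.Alpha1)) Equip(0);
        if (Input.GetKeyDown(KeyCode.Alpha2)) Equip(1);
        if (Input.GetKeyDown(KeyCode.Alpha3)) Equip(2);
    }

    void Equip(int index)
    {
        if (index >= weaponsList.Count) return;   // that slot is still empty

        // disable every weapon in the list, then enable only the chosen one
        foreach (GameObject weapon in weaponsList)
            weapon.SetActive(false);
        weaponsList[index].SetActive(true);
    }
}

Deactivating everything first and then re-activating the chosen entry keeps the logic correct no matter how many weapons end up in the list.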
https://answers.unity.com/questions/739066/switching-weapons-c.html
CC-MAIN-2021-49
refinedweb
668
71.95
I'd like to see a report of how long each timing test took, and also see differences between runs. When comparing two runs I'd like to see visually which is faster. Timing information is a very important thing to time for gui applications like games, and websites. As so many machines are very different, it's useful to be able to time things as they run on different machines. A game or website can quite easily run 10-30 times slower even on machines with the same CPU. Many other factors like OS, hard drive speed, available memory, installed drivers, directx/opengl/X/frame buffer, different browser speed, different installed versions of libraries or plugins like flash. Testing all these things manually is almost impossible, testing them manually every time something changes is definitely impossible. So I want to use this tool pygame unittests specifically, and also for websites. If there's another testing framework which can do timing very well then we could change testing frame works away from unittest. So I'd like to be able to save the timing data to a file or a database, and select the runs I'd want compared against. Extra points for being able to do that automatically for different revisions of subversion. So I could tell it to checkout different revisions/branches, run the setup.py, run the tests, then show me a report. But that's something I could script later I guess. It would be nice if the tests could be separate processes, so I could do something like: def some_tests(): flags = ['1', '2', '3', '4', '5', '6', '7'] for f in flags: my_f = lambda x: os.system("python myscript.py -doit=%s" % f) do_test(name = "a test for:%s" % f, my_f) So it would just time how long each one takes to run as many times as it needs. It'd be nice if I could read what the process prints, and provide a parsing function which can extract timing information from the output. So then when something prints "33FPS" or "blit_blend_ADD 10202094.0093" I could tell the timing framework what those mean. Where script would be timed automatically, and I could see results for tests like "a test for 1", "a test for 2" etc. That way I could more easily reuse existing timing/benchmark scripts, as well as time code which is not python - like any executable. It would be nice to be able to keep the timing tests with the other tests, but be able to disable the timing tests when needed. because, it's likely some timing tests will run for a while, and do things like open many windows (which takes time). Website collection would get bonus points. So it could collect the timing information from any machine, so I can compare them all in one place. Since I want to run these timing tests on different machines, and allow random people on the internet to submit their timing tests. Since I don't have 1000's of videocard/OS/cpu/hard disk combinations - it would be useful if anyone could submit their test results. A http collection method would be useful for storing timing information from other languages, like from flash, javascript, or html too. Looking at average, mean, fastest run, and slowest run would be good too - rather than just average. Merging multiple runs together so you can see the slowest results from each run, or the fastest from each run would be good. Then you do things like combine results for 1000 different machines, then find out the machine configuration for the slowest runs. If there common slow downs then you can try and see if there are any similarities for system configurations. 
Thus allowing you to direct your optimization work better. Being able to run tests with a profiler active, then store the profiling data would be nice. This would be for different profilers like gprof, or a python profiler. So if you can see a profile report from a slow run, and find out which functions you might need to optimize at an even lower level. Allowing different system configuration detection programs to run would be good too. So you can see what HD the slow machine was using, how much free ram it had, the load on the machine at the time of running, etc,etc. All this tied in with respect for peoples privacy, so they know what data they are supplying, and how long it will be archived for. Support for series of time would be nice too. Like giving it a list of how long each frame took to render. So then I could see a graph of that, then compare it over multiple runs. Then you could look at data from many machines, and compare how they are performing, as well as compare different algorithms on different machines. Hopefully something already exists which can do some of these things? What I've written above is more a wish list, but even a portion of that functionality would allow people to more easily test for performance regressions. 3 comments: You may want to take a look at nose and its plugins. Titus wrote a stopwatch plugin as part of his pinocchio nose extensions (ha ha): Grig The part about automagically submitting the results to a website sounds particularily nice. Submitting bug reports to people is hard work, but having it done instantly is convenient. I've run the tests included with libraries before, but when they crash I always feel lazy and don't want to have to go through that whole: - explain what I was doing - cut-n-paste the test-case that failed - tell more about my system, my setup, blah blah blah .. It would also be nice if the user could optionally enter their e-mail address at the beginning in case the dev. wanted to contact them to ask them for more details about their system, or whatever. I expected to see "and a pony" by the end of the post :-)
http://renesd.blogspot.com/2007/08/timing-and-unittests-graphing-speed.html
CC-MAIN-2018-43
refinedweb
1,014
69.31
Set of tools to fetch taxonomic metadata for a list of organisms Project description taiga-bio - Package version of the TaIGa program This is a package version of the original TaIGa program. This was built to be available on Pypi and be easily installable by any Python package manager, and also to allow the user to import and make custom scripts from TaIGa's functionalities. For better information, see the repo for the original TaIGa. 1 How to run taiga-bio From a Python script of your choice (must be >=Python 3.6), do: from taiga.core import taxonomy from taiga.common import data_handlers taxon_list = taxonomy.run_taiga(input_file, email) df = data_handlers.create_df(taxon_list) data_handlers.create_output(ouput_directory, df, taxon_list) This will run TaIGa's main function, which grabs a list of names from input_file, fetches for their taxonomic information on NCBI's Taxonomy, which needs your create_df function, which receives the list of Taxon objects and returns a DataFrame. Then, it calls the create_output function, which outputs the results to the specified output_directory, which need not to be pre-created, using the DataFrame and the Taxon list. It needs the Taxon list to be able to create a file that lists Taxon objects missing critical information. 2 Arguments run_taiga(infile, email, gb_mode=0, tid=False, correction=False, retries=5, silent=False) -> List[Taxon] 2.1 Positional (required) Arguments: [input file]: This is the full path to the file you will use as an input for TaIGa. By default, TaIGa expects it to be a list of organism names separated by line in a text-like file ( .txt). You can change this behaviour so TaIGa would expect: a line separated text file with a collection of Taxon IDs; a Genbank format file with multiple records, all from the same organism; a Genbank format file with only one record; or a Genbank format file with multiple records from multiple organisms. Organism names refer to any valid taxonomic level that is available on NCBI's Taxonomy database. [user e-mail]: This is just a valid e-mail of yours. Nothing will be sent to this e-mail, and neither TaIGa itself neither me will ever use it for anything other than running TaIGa (in fact, I will never have access to this information. You may check the code yourself to confirm this). TaIGa only requires this field because it is standard procedure to pass on this information when sending requests to Entrez. This is all TaIGa will use the e-mail for. You may pass on gibberish, if you so want, but I advise you not to. TaIGa will run fine anyways, as long as you provide something to this argument field. 2.2 Optional Arguments: gb_mode [0, 1, 2, 3]: Default: 0. This changes TaIGa's default input type to instead expect a Genbank format file. This argument exepects one numeric option from the available ones. Those are: - 0: Acts the same as not passing the --gb-modeargument at all, not altering TaIGa's default behavior. - 1: A Genbank format file containing multiple records from multiple, differently named organisms (eg. Escherichia coli, Bos taurus, Mus musculus, all in the same .gbor .gbfffile). - 2: A Genbank format file containing a single record (eg. an annotation for a COX 1 gene for Homo sapiens). - 3: A Genbank format file containing multiple records for a single organism (eg. many annotations for Apis mellifera genes). 
tid: This changes TaIGa's behaviour to, instead of expecting any sort of name-based input, to expect a text file with a list of valid Taxon IDs for a collection of organisms (or taxon levels). This is incompatible with the '-c' option, as TaIGa skips the spelling correction when run with Taxon IDs. correction: This enables TaIGa's name correcting functionality. The usefulness of this is discussed below. This is incompatible with '--tid'. See '--tid' above. retries: Default: 5. This sets the maximum number of retries TaIGa will do when fetching for taxonomic information for an organism. This can be very useful as Entrez will many times return broken responses. silent: This disables TaIGa's standard verbose mode, so TaIGa will automatically generate a log file called TaIGa_run.log inside the current working directory. This log file will contain all information about that particular TaIGa run. 3 Output files To create the output files, you'll need to run the create_output function from the taiga.common.data_handlers module. It expects an output_folder, a df and a taxon_list as arguments. To create those and output your results, run: from taiga.core import taxonomy from taiga.common import data_handlers taxon_list = taxonomy.run_taiga(input_file, email) df = data_handlers.create_df(taxon_list) data_handlers.create_output(ouput_directory, df, taxon_list) The arguments are: - [output directory]: a string containing the path to the output directory, which doesn't need to be created yet. - [df]: the DataFrame returned by taiga.core.taxonomy.run_taiga(). - [taxon_list]: the list of Taxon objects returned by taiga.core.taxonomy.run_taiga(). 3.1 TaIGa_result.csv After running successfuly, TaIGa will create the output files at the provided output path. If the output folder doesn't exist (or its parent folders), TaIGa will check it and create them for you. Do note that you still need to provide a valid path for TaIGa to run successfuly. Check it twice before running TaIGa. The created file will be named TaIGa_result.csv. This is default and can only be changed on the source code, which you can surely do if you know what you're doing. It will be a .csv format file. To better visualize the results, import it to any spreadsheet viewer of yours. The file will contain a number of rows equal to the number of input organisms. Each row will be named for the corresponding taxon. Each column will be a variable for a particular taxonomic information. The first two are always the organism's Taxon ID and Genome ID (if it has one). The rest are the valid taxon rank names available on Taxonomy and their corresponding value for each organism. If there's any missing value (eg., lack of a Genome ID or lack of a tribe for Homo sapiens), the value will be N/A. 3.2 TaIGa_missing.txt TaIGa will also create a file named 'TaIGa_missing.txt'. This will be created regardless if TaIGa was able to run without issues or any missing information. If any organism happens to be missing one of the core informations TaIGa needs to be able to run (those being a valid Name or Corrected Name, Taxon ID and Classification), that organism will be outputed to this file within the correct class of missing information. 4 Other functions TaIGa's run_taiga() function calls a bunch of other functions from helpers, parsers, fetchers, retrievers and data_handlers and uses the Taxon object from data_models. 
You could import those modules with something like: from taiga.common import fetchers, data_models animal = data_models.Taxon() fetchers.fetch_taxonomic_info(email, animal, retries) All modules and functions are nicely documented and you can check their docstrings to see what they do exactly and how they do it. I won't extend further on them simply because, as of now, they're not really meant to be executed alone. They will probably work well and do their job if you execute them properly, but individually those are rather simple wrappers over some common Biopython functionalities. 5 Licensing TaIGa is licensed under the MIT license. You can check the information inside the LICENSE file. To make it short, I wanted it to be free and open, so that anyone can contribute to it. 6 Ending regards As said in the introduction, the major inspiration for TaIGa's name is the cute romance anime character Taiga, from the japanese animation ToraDora. I highly recommend it. And, as state before, this is simply a package version of the "standalone" original TaIGa python program, that you can check on the original repo. There, you will find a better documentation that, albeit a bit different, is still relevant and might be helpful for this version. Share love and knowledge and, on top of all, respect people. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/taiga-bio/
CC-MAIN-2020-16
refinedweb
1,376
56.66
Instead of declaring an array of class B objects inside class A, declare an array (or list) of pointers to B. Or declare class A as a pointer member in class B. Choose whichever is more natural. The class in which you only use a pointer doesn't need the full class definition first, but you do have to tell the compiler that the type exists:

// B.h
class A;            // forward declaration: this class exists
class B {
private:
    A* a;
};

// A.h
#include "B.h"
class A {
private:
    Array<B> b;     // dynamic template-based array approach used here
};

// A.cpp
#include "A.h"
// code here

// B.cpp
#include "B.h"
#include "A.h"      // we use A here
// code here

JMu

Just mean class A; — now you can use A in B, and declare and implement A later, as JMu told. It's the so-called forward declaration :) Sorry again :)
Best Regards
Wyn

both!
https://www.experts-exchange.com/questions/10297757/Dependencies-between-classes.html
CC-MAIN-2018-13
refinedweb
162
87.62
08 July 2010 10:05 [Source: ICIS news] SINGAPORE (ICIS news)--Asian toluene di-isocyanate (TDI) contracts settled at a lower price for the fourth month in a row in June, down $350/tonne (€276.5/tonne) from May's levels amid a supply glut and weak import demand, market players said on Thursday. TDI majors BASF, Mitsui Chemicals and Bayer Material Science (BMS) had mostly settled their June contracts last week at $2,450-2,550/tonne CFR China Main Port (CMP)/Hong Kong. Meanwhile, some small transactions were made in the low-$2,400/tonne CFR CMP/Hong Kong and high-$2,500/tonne CFR CMP/Hong Kong levels, market sources said. "Sales were barely two-thirds of what we had in May. This is no good for us," said a key producer. Since March, TDI contract prices had declined by 21%, tracking falls in spot prices due to strong supply. Spot prices for import material were discussed on Wednesday at $2,340-2,500/tonne CFR CMP/Hong Kong, down $50-60/tonne week on week, market sources said. In the Chinese domestic markets, prices were discussed at yuan (CNY) 19,200-21,000/tonne ($2,832-3,097/tonne). ($1 = €0.79 / $1 = CNY6
http://www.icis.com/Articles/2010/07/08/9374795/asia-june-tdi-contract-settlement-falls-350tonne-from-may.html
CC-MAIN-2014-23
refinedweb
208
69.21
FAQ: What are the build numbers for releases of ArcGIS Pro? Question What are the build numbers for releases of ArcGIS Pro? Answer ArcGIS Pro Version and Build Number List: Version 2.4.2 Build 2.4.2.19948 = ArcGIS Pro 2.4.2 Release Date: 10/3/2019 Version 2.4.1 Build 2.4.1.19948 = ArcGIS Pro 2.4.1 Release Date: 8/8/2019 Version 2.4 Build 2.4.0.19948 = ArcGIS Pro 2.4.0 Release Date: 6/27/2019 Version 2.3.3 Build 2.3.3.15900 = ArcGIS Pro 2.3.3 Release Date: 5/21/2019 Version 2.3.2 Build 2.3.2.15850 = ArcGIS Pro 2.3.2 Release Date: 3/28/2019 Version 2.3.1 Build 2.3.1.15800 = ArcGIS Pro 2.3.1 Release Date: 2/27/2019 Version 2.3 Build 2.3.0.15769 = ArcGIS Pro 2.3.0 Release Date: 1/24/2019 Version 2.2.3 Build 2.2.3.12813 = ArcGIS Pro 2.2.3 Release Date: 10/5/2018 Version 2.2.2 Build 2.2.2.12813 = ArcGIS Pro 2.2.2 Release Date: 9/6/2018 Note: ArcGIS Pro patch 2.2.2 has been recalled and is no longer available. Version 2.2.1 Build 2.2.1.12813 = ArcGIS Pro 2.2.1 Release Date: 7/31/2018 Version 2.2 Build 2.2.0.12813 = ArcGIS Pro 2.2 Release Date: 6/26/2018 Version 2.1 Build 2.1.0.10257 = ArcGIS Pro 2.1 Release Date: 1/17/2018 Version 2.0.1 Build 2.0.1.8933 = ArcGIS Pro 2.0.1 Release Date: 8/22/2017 Version 2.0 Build 2.0.0.8933 = ArcGIS Pro 2.0 Release Date: 6/27/2017 Version 1.4 Build 1.4.0.7198 = ArcGIS Pro 1.4 Release Date: 1/12/2017 Version 1.3.1 Build 1.3.1.5861 = ArcGIS Pro 1.3.1 Release Date: 8/23/2016 Version 1.3 Build 1.3.0.5861 = ArcGIS Pro 1.3 Release Date: 7/7/2016 Version 1.2 Build 1.2.0.5023 = ArcGIS Pro 1.2 Release Date: 3/1/2016 Version 1.1.1 Build 1.1.1.3310 = ArcGIS Pro 1.1.1 Release Date: 9/29/2015 Version 1.1 Build 1.1.0.3308 = ArcGIS Pro 1.1 Release Date: 7/16/2015 Version 1.0.2 Build 1.0.2.1810 = ArcGIS Pro 1.0.2 Release Date: 5/19/2015 Version 1.0.1 Build 1.0.1.1809 = ArcGIS Pro 1.0.1 Release Date: 4/1/2015 Version 1.0 Build 1.0.0.1808 = ArcGIS Pro 1.0 Final Release Date: 1/27/2015 How to determine the ArcGIS Pro Build Number The first three digits of the build number indicate product version, while the last five digits indicate the build number. For example, Build 2.2.3.12813 indicates it is ArcGIS Pro version 2.2.3, build 12813. There are two possible methods to view the version and build number of ArcGIS Pro: Via Windows Control Panel - In Windows, click Start > Control Panel > Programs and Features. - Search for ArcGIS Pro in the list of programs installed. - The ArcGIS Pro version and build number is listed under the Version column. Using Python Commands in ArcGIS Pro - Open ArcGIS Pro, and start a new project (a template is not required). - On the View tab in the Windows group, click the Python button to show the Python window. - Paste the following script in the Python window to display the version and build number: import arcpy print(arcpy.GetInstallInfo()['Version']) print(arcpy.GetInstallInfo()['BuildNumber']) Related Information
https://support.esri.com/en/technical-article/000012500
CC-MAIN-2019-51
refinedweb
625
83.22
OriginalGriff wrote:Common Sense and Law OriginalGriff wrote:which lawyers and judges are vehemently against as it would put them out of business. For good. OriginalGriff wrote:it requires a joining of Common Sense and Law The report of my death was an exaggeration - Mark Twain Simply Elegant Designs JimmyRopes DesignsThink inside the box! ProActive Secure Systems I'm on-line therefore I am. JimmyRopes Vivic wrote:The judge learnt enough programming to test out how complicated some of the patented routines may be and discovered they were trivial. ORDER BY *pre-emptive celebratory nipple tassle jiggle* - Sean Ewington "Mind bleach! Send me mind bleach!" - Nagy Vilmos * for humans * many clients * free access * no guarantee on content * for machines * fewer clients * limited access * content guaranteed leppie ((λ (x) `(,x ',x)) '(λ (x) `(,x ',x))) Dalek Dave wrote:I got mine yesterday and spent the night metaphorically m*ing with it. Dalek Dave wrote:several hours of playing and I have barely scratched the surface. Dalek Dave wrote:Oh, the joys, the apps, the funtionality! public class Naerling : Lazy<Person>{ public void DoWork(){ throw new NotImplementedException(); } } if (match.Draw || !match.Draw) { match.Interesting = true; } match.Interesting = true; Roger Wright wrote:I should be asking for hazardous duty pay Roger Wright wrote: I wonder if I should be asking for hazardous duty pay? General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Lounge.aspx?msg=4386505
CC-MAIN-2015-18
refinedweb
250
53.81
R is a robust language used by Analysts, Data Scientists, and Business users to perform various tasks such as statistical analysis, visualizations, and developing statistical software in multiple fields. Data Wrangling is a process reimaging the raw data to a more structured format, which will help to get better insights and make better decisions from the data. What are Tibbles? Tibbles are the core data structure of the tidyverse and is used to facilitate the display and analysis of information in a tidy format. Tibbles is a new form of data frame where data frames are the most common data structures used to store data sets in R. Advantages of Tibbles over data frames - All Tidyverse packages support Tibbles. - Tibbles print in a much cleaner format than data frames. - A data frame often converts character strings to factor and analysts often have to override the setting while Tibbles doesn’t try to make this conversion automatically. Different ways to create Tibbles - as_tibble(): The first function is as tibble function. This function is used to create a tibble from an existing data frame. Syntax: as_tibble(x, validate = NULL, …) x is either a data frame, matrix, or list. - tibble(): The second way is to use a tibble()function, which is used to create a tibble from scratch. Syntax: tibble(s…, rows = NULL) s represents a set of name-value pairs. - Import(): Finally, you can use the tidyverse’s data import packages to create Tibbles from external data sources such as databases or CSV files. Syntax: import(pkgname …) - library(): The library()function is used to load the namespace of the package. Syntax: library(package, help, pos = 2, lib.loc = NULL) Note: To find more about the functions in R, type ? followed by function name. Eg: ?tibble. Let us see some examples of how to use the above functions using Rstudio IDE. We will be using a builtin dataset (CO2) Carbon Dioxide Uptake in Grass Plants to create a tibble. This dataset consists of several variables, such as plant, type, treatment, concentration, and uptake. It is difficult to work with this type of information, so let us convert this information into a tibble. Let us create a tibble named sample_tibble from CO2 dataset using as_tibble() function. Example of as_tibble() Here we are converting a data frame (CO2) into tibble using as_tibble() function. It requires you to install tidyverse package in Rstudio. Output: Example of tibble() The second Method was to create a tibble from scratch using tibble() function so we will create few vectors such as name, marks_in_Math, marks_in_Java, Fav_color etc and pass them to tibble() function which converts them into tibble. Output: Subsetting tibbles Data analysts often extract a single variable from a tibble for further use in their analysis, which is called subsetting. When we try to subset a tibble, we extract a single variable from the Tibble in vector form. We can do this by using a few special operators. - $ Operator - [[]] Operator $ Operator The first way we can extract a variable from Tibble is by using a dollar($) sign, operator. To do this, we will be creating a tibble from scratch using a tibble() function. Output: [[]] Operator The second way you can access a single variable from Tibble is by using square braces([[]]). We will use the same tibble created previously. Output: Filtering Tibbles Filtering provides a way to help reduce the number of rows in your tibble. 
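The code listings for these examples did not survive in this copy of the article; a reconstruction consistent with the text (the exact values in the scratch-built tibble are assumptions) is:

library(tidyverse)

# Example of as_tibble(): convert the built-in CO2 data frame to a tibble
sample_tibble <- as_tibble(CO2)
sample_tibble

# Example of tibble(): build a tibble from scratch out of name-value pairs
students <- tibble(
  name          = c("Asha", "Ben", "Chen"),
  marks_in_Math = c(90, 76, 88),
  marks_in_Java = c(85, 92, 70),
  Fav_color     = c("blue", "green", "red")
)
students

# Subsetting: both forms extract a single variable as a vector
students$name
students[["name"]]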
When performing filtering, we can specify conditions or specific criteria that are used to reduce the number of rows in the dataset. filter() Function: Syntax: filter(data, conditions) The data represents the Tibble name, and conditions are used to specify an expression that returns a logical value. We will be using the student’s Tibble, which we created in the above example. Output:
https://www.geeksforgeeks.org/data-wrangling-in-r-programming-working-with-tibbles/?ref=rp
CC-MAIN-2021-25
refinedweb
630
54.52
So, here is the story. I have a GridView which contains the ASP.NET CheckBox control. Each row also holds the primary key of the database table. Now, I need to get the ID of the selected checkbox using JavaScript. I used a hidden field to store the ID of the row (the field is hidden and the user cannot see it on the display, but you can see the value if you view the page source). Now, I'd like to access the hidden field of the selected row of the GridView.

<asp:TemplateField>
    <ItemTemplate>
        <asp:CheckBox ... />      <%-- checkbox the user ticks; attributes lost in this copy --%>
        <asp:HiddenField ... />   <%-- stores the row's primary key; attributes lost in this copy --%>
    </ItemTemplate>
</asp:TemplateField>

And here is the JavaScript code:

function getSelectedID() {   // function name not preserved in the original post
    var elements = document.getElementsByTagName("INPUT");
    var ID;
    var count = 0;
    for (i = 0; i < elements.length; i++) {
        if (elements[i].type == 'checkbox') {
            if (elements[i].checked == true) {
                count = count + 1;
                ID = elements[i].nextSibling.nextSibling.value;
            }
        }
    }
    // check if more than one exam is selected. If so, then alert the user
    if (count > 1) {
        alert('Please only select a single item from the list');
        return (-1);
    }
    else {
        return ID;
    }
}

Although I am only allowing the user to select a single item, I am sure you got the basic idea.
http://geekswithblogs.net/AzamSharp/archive/2006/08/29/89697.aspx
CC-MAIN-2021-04
refinedweb
195
67.35
Hi guys, so I'm making a queue, but having a bit of trouble... The problem is with the way my pop() function works. What I want it to do is return the array's 0th element, and then increase the address the array holds by 1, which would change the 0th element to the 1st element. Unfortunately I'm not very good at explaining it, so I hope you understand what I'm trying to say, I'll give a brief example so you guys actually know what I'm trying to do. Say I have a pointer to an int: Say the 0th, 1st, 2nd elements of that array contain these values:Say the 0th, 1st, 2nd elements of that array contain these values:Code:int * pArray; pArray = new int[100]; This is my code:This is my code:Code:pArray[0] = 5; pArray[1] = 10; pArray[2] = 15; Or another way:Or another way:Code:return *this->pArray++; Anyway they both actually work, but they give crash errors. I took a screenshot, here's the link: they both actually work, but they give crash errors. I took a screenshot, here's the link: this->pArray++[0]; And here's the actual program code: Cheers for the help!Cheers for the help!Code:#include <iostream> #include <cstdio> using namespace std; class queue { public: queue(); ~queue() { delete pArray; } void push(int); int pop(); private: int * pArray; int count; }; queue::queue() { pArray = new int[100]; count = 0; } void queue::push(int val) { this->pArray[count] = val; this->count++; } int queue::pop() { return *this->pArray++; }; int main() { queue q; int val = 0; int amount = 0; cout << "Queue Program. Enter 4 numbers:\n\n"; for (int x = 0; x < 5; x++) { cout << "#" << x+1 << ": "; cin >> val; cin.ignore(); q.push(val); } cout << "How many items do you wish to pop: "; cin >> amount; if (amount > 0 && amount <= 5) { while (amount) { cout << q.pop() << endl; --amount; } } else cout << "Incorrect pop number."; }
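For context on why the posted code crashes (a note added here, not part of the original thread): pop() increments pArray itself, so by the time the destructor runs, delete pArray no longer receives the address that new[] returned — and an array allocation should be released with delete[] in any case. A sketch that keeps the original pointer untouched, using an extra index member (front is an assumption, not in the posted class):

// inside the class definition: add an extra member "int front;" (set to 0
// in the constructor) and change the destructor to use "delete [] pArray;"
int queue::pop()
{
    // return the oldest element and step past it; pArray itself is never
    // modified, so the delete [] in the destructor still gets the address
    // that new[] returned
    return pArray[front++];
}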
https://cboard.cprogramming.com/cplusplus-programming/107267-making-queue.html
CC-MAIN-2017-51
refinedweb
327
64.54
Asked by: Include GAC assemblies in helpfile - Hello I'm creating a help file for a component which uses the MailItem and CalendarItem classes from the Microsoft.Office.Interop.Outlook namespace. These Microsoft Office assemblies are stored in the GAC. Is there a way to include them in my Sandcastle help project so that information about the MailItem and CalendarItem class will be generated ? At this moment I am using the SandCastle Help File Builder to generate the files. Thanks StefanThursday, January 03, 2008 7:00 PM Question All replies In Sandcastle Help File Builder it is very simple thing to do, but I will encourage you to post this SHFB specific question on the SHFB forum. Best regards, Paul.Thursday, January 03, 2008 7:48 PM Add them to the Dependencies property as GAC references. You can find answers to this and other common questions in the FAQ in the help file builder's help file. EricFriday, January 04, 2008 2:52 AM - I have added the Microsoft.Office.Interop.Outlook assembly from the GAC to the Dependencies but it doesn't make a difference. None of the Outlook classes is being documentated. e.g. One of the properties of my component is a collection of AppointmentItems : public IEnumerable<AppointmentItem> CalendarItems { get; } SandCastle has generated a link for the IEnumerable interface to the corresponding MSDN web page, but there is no link for the AppointmentItem class. Am I missing some configuration in SHFB or SandCastle ? Or is there a problem with this specific Outlook assembly ?Friday, January 04, 2008 8:13 PM You'll only get links to online MSDN content if an entry is added to the ResolveReferenceLinks2 component in the configuration file. For that to happen, you'd have to run MRefBuilder and a couple of the doc model transforms on the Office Interop assemblies. There's also the question of whether the MSDN web service knows about stuff in the interop assemblies and can produce links to them. That's a question better answered by Anand. So, for now, you can produce a help file that will list the inherited members of the class, but there won't be links to online help for the interop stuff. EricSaturday, January 05, 2008 2:45 AM
https://social.msdn.microsoft.com/Forums/en-US/3061228b-9b78-41ee-ae83-e5467cb057f3/include-gac-assemblies-in-helpfile?forum=devdocs
CC-MAIN-2017-09
refinedweb
378
61.87
I. You could use cwd parameter, to run scriptB in its directory: import os from subprocess import check_call check_call([scriptB], cwd=os.path.dirname(scriptB)) you need to give it the full path to the script that you are trying to call, if you want to do this dynamically (and you're in the same directory), you can do: import os full_path = os.path.abspath('kvadrat.py') Try commands module for py2.3.4, note that this module has been deprecated since py2.6: Use commands.getoutput: import commands answer = commands.getoutput('./a Make sure your Lua script is sending the same HTTP headers. The important part for PHP is that the form with attached file upload is sent as "multipart/form-data", and the file must be properly embedded in the POST body of the HTTP request as a multipart mime message. I cannot see if your Lua script actually does this, but I think no. Otherwise PHP would be happy. The child process flushes its output buffers on exit but the prints from the parent are still in the parent's buffer. The solution is to flush the parent buffers before running the child: print("Starting script...") sys.stdout.flush() build.run() The way your script is written, there is no way to import it and not have the plots made. To make it so that import stumpff will work, and your script will understand C(z) and S(z), you'll need to make the plotting code such that it will only run if you are running as a script. One way to do this is to put all of it in a main() function, and then use if __name__ == '__main__': main() Alternatively, simply have all of it underneath that condition, like this: #!/usr/bin/env ipython # This program plots the Stumpff functions C(z) and S(z) import numpy as np import pylab from matplotlib.ticker import MaxNLocator def C(z): if z > 0: return (1 - np.cos(z ** 0.5)) / z elif z < 0: return (np.cosh(np.sqrt(-z)) - 1) / -z return 0.5 def S(z): if z Try pyIDL. Google for it, I'm not sure where the most recent version lives. It seems to be fairly old, you might have to do some work to convert from numarray to NumPy. Try moving ob_start() above $tmp=passthru("python serverscript1.py $query");. It appears nothing is being output after the output buffer is started. <?php class SearchCategorizationService { function searcher($query) { ob_start(); $tmp=passthru("python serverscript1.py $query"); $out=ob_get_contents(); echo print_r($out,true); } } ?> For calling Sikuli code from Selenium, my first choice would be TestAutomationEngr's suggestion of using Java, since Selenium and Sikuli both have native Java bindings. Since you want to use Python, you should try running Selenium under Jython. It's important to remember that Sikuli is Jython, which is probably why you're not able to import it. (The other reason would be that you don't have it in Jython's module path.) I have not tried this myself, but there was a bug fixed last year in Selenium which indicates that it should be fine under Jython. Note that if you call your Sikuli code directly from Jython, you need to add from sikuli.Sikuli import * to the top. This is because the Sikuli IDE implicitly adds that to all Sikuli code. Finally, your last resort is to call Sikuli fr In script1.py place this: def main(): do something if __name__ == "__main__": main() In script2.py: import script1 if condition: script1.main() It might be the case that the dev_appserver is already running and then you run it again with hudson job. The best way will be to stop the dev_appserver instance first and then pump it up again. 
The problem is that you are putting the .. dots in the wrong place. clldsystem.esa.ESAAnalyzer is the Java class that contains the main() method which is to be executed by java. java tries to find clldsystem.esa.ESAAnalyzer by looking through the classes which it loads from the jars specified in the classpath by -cp. So try replacing:

java -cp "../lib/*:esalib.jar" ../clldsystem.esa.ESAAnalyzer

with the following:

java -cp "../lib/*:../esalib.jar" clldsystem.esa.ESAAnalyzer

You can check the exit code of another process with the child error variable $?. For example:

system("perl foo.pl");
my $exit_val = $? >> 8; # now contains the exit value of the perl script

Read the documentation for more info.
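The same exit-status check can be done from Python; in this minimal sketch (scriptB.py is again a placeholder), the return value of subprocess.call plays the role of Perl's $? >> 8:

import subprocess
import sys

# the call returns the child's exit status directly
exit_status = subprocess.call([sys.executable, "scriptB.py"])
print("exit status:", exit_status)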
http://www.w3hello.com/questions/How-can-I-capture-the-result-of-a-python-script-in-calling-python-script-duplicate-
CC-MAIN-2018-17
refinedweb
748
66.23
Binary search is used to find an element among many other elements. The binary search method is faster than the linear search method. The array members must be in ascending order for binary search. If the array is unsorted, you can use the Arrays.sort(arr) method to sort it first.

Java Linear Search

A simple strategy for searching is linear search. In this method the array is scanned progressively, and each element is compared to the key until the key is found or the end of the array is reached. In real-world applications, linear search is rarely used. Because binary search is substantially faster than linear search, it is the most commonly utilized strategy.

There are three techniques to execute a binary search in Java:
- Using the iterative approach
- Using a recursive approach
- Using the Arrays.binarySearch() method.

All three techniques will be implemented and discussed in this session.

Binary search algorithm in Java

In the binary search technique, the collection is repeatedly divided in half. The key is searched for in the collection's left or right half depending on whether it is less than or greater than the collection's mid element. The following is a basic binary search algorithm:
- Calculate the collection's middle element.
- Compare the key with the middle element.
- If the key equals the middle element, return the middle index position.
- Else, if the key is greater than the mid element, the key lies in the collection's right (upper) half. As a result, repeat steps 1-3 on the right half.
- Otherwise, if the key is less than the mid element, the key lies in the collection's left (lower) half.
- As a result, the binary search must be repeated on the left half.

As you can see from the preceding stages, half of the elements in the collection are rejected after the first comparison in the binary search. The identical steps apply to both iterative and recursive binary searches: we divide the array several times and decide in which half to search for the key by comparing the key to the mid element. Binary search is highly efficient in both time and accuracy: because every comparison halves the remaining search space, it takes O(log n) comparisons in the worst case.

Java Binary Search Implementation

Let's use the strategy mentioned above to create a binary search application in Java that uses an iterative approach. We use an example array in this program and do a binary search.
import java.util.*;

class Codeunderscored {
  public static void main(String args[]){
    int numArray[] = {10,15,20,25,30,35,40};
    System.out.println(" array input: " + Arrays.toString(numArray));

    //relevant key to be searched
    int key = 35;
    System.out.println("\nSearched key=" + key);

    //set start_idx to the array's first index
    int start_idx = 0;
    //set end_idx to the last element in the array
    int end_idx = numArray.length-1;
    //calculate the array's mid_idx
    int mid_idx = ( start_idx + end_idx)/2;

    //as long as the first and last do not overlap
    while( start_idx <= end_idx ){
      //if the element at mid_idx < key, the key is in the second (right) half of the array
      if ( numArray[mid_idx] < key ){
        start_idx = mid_idx + 1;
      }else if ( numArray[mid_idx] == key ){
        //if key == element at the centre, then go ahead and print the location
        System.out.println("Item found at index: " + mid_idx);
        break;
      }else{
        // search for the key in the array's first (left) half
        end_idx = mid_idx - 1;
      }
      mid_idx = ( start_idx + end_idx)/2;
    }

    //if start_idx and end_idx overlap, then the key is not present in the array
    if ( start_idx > end_idx ){
      System.out.println("Item not found!");
    }
  }
}

The given program demonstrates an iterative binary search method. An array is declared first, followed by a key to be searched. After determining the array's central element, the key is compared to the central element. The key is then searched for in the lower or upper half of the array, based on whether the central element is greater than or less than the key.

Performing Recursive binary search in Java

You can also use the recursive approach to execute a binary search. The binary search strategy is applied here recursively until the key is located or the list is exhausted. The following is the program that implements a recursive binary search:

import java.util.*;

class Codeunderscored{

  // binary search via recursive approach
  public static int binary_Search(int intArray[], int start_idx, int end_idx, int key){
    // If the array is in order, execute a binary search on it.
    if (end_idx >= start_idx){
      //calculate mid
      int mid_idx = start_idx + (end_idx - start_idx)/2;

      //if key == intArray[mid_idx] return mid_idx
      if (intArray[mid_idx] == key){
        return mid_idx;
      }

      //if intArray[mid_idx] > key then key is in the left half of the array
      if (intArray[mid_idx] > key){
        return binary_Search(intArray, start_idx, mid_idx-1, key);// searching for the key recursively
      }else //key is present in the array's right half
      {
        return binary_Search(intArray, mid_idx+1, end_idx, key);//recursively search for key
      }
    }
    return -1;
  }

  public static void main(String args[]){
    // array and key definition
    int intArray[] = {11,21,31,41,51,61,71,81,91,101};
    System.out.println("Input List: " + Arrays.toString(intArray));

    int key = 31;
    System.out.println("\nThe key we are looking for is:" + key);

    int end_idx = intArray.length-1;

    //calling the binary search method
    int result = binary_Search(intArray,0,end_idx,key);

    //printing the result
    if (result == -1)
      System.out.println("\nKey is not available in the provided list!");
    else
      System.out.println("\nKey is available at position: "+result + " in the list");
  }
}

Using Arrays.binarySearch() function

The Arrays class in Java includes a 'binarySearch()' method that executes the binary search on the provided array. The given array and the key to be searched are sent as parameters, and the position of the key in the array is returned. The method returns a negative value if the key cannot be found. The Arrays.binarySearch() method is implemented in the example below.
import java.util.Arrays;

class Codeunderscored{
  public static void main(String args[]){
    // array definition
    int intArray[] = {20,30,40,50,60,70,80,90,100};
    System.out.println(" Array input is : " + Arrays.toString(intArray));

    //establish the key we want to search for
    int key = 60;
    System.out.println("\nThe key we are looking for is:" + key);

    //calling the binarySearch method on the provided array given the key to be searched
    int result = Arrays.binarySearch(intArray,key);

    //printing the return result
    if (result < 0)
      System.out.println("\nKey is not available in the provided array!");
    else
      System.out.println("\nKey is available at index: "+result + " in the array.");
  }
}

Java Iterative Binary Search Example

Let's look at a binary search example in Java.

class BinarySearchExample {
  public static void binarySearch(int arr[], int start_idx, int end_idx, int key){
    int mid_idx = (start_idx + end_idx)/2;
    while( start_idx <= end_idx ){
      if ( arr[mid_idx] < key ){
        start_idx = mid_idx + 1;
      }else if ( arr[mid_idx] == key ){
        System.out.println("Element is available at the index: " + mid_idx);
        break;
      }else{
        end_idx = mid_idx - 1;
      }
      mid_idx = (start_idx + end_idx)/2;
    }
    if ( start_idx > end_idx ){
      System.out.println("Element is not available!");
    }
  }

  public static void main(String args[]){
    int arr[] = {20,30,40,50, 60};
    int key = 40;
    int end_idx = arr.length-1;
    binarySearch(arr,0,end_idx,key);
  }
}

Example of a Binary Search in Java with Recursion

Let's look at an example of binary search in Java, where we'll use recursion to find an element from an array.

class BinarySearch{
  public static int binarySearch(int arr[], int start_idx, int end_idx, int key){
    if (end_idx >= start_idx){
      int mid_idx = start_idx + (end_idx - start_idx)/2;
      if (arr[mid_idx] == key){
        return mid_idx;
      }
      if (arr[mid_idx] > key){
        return binarySearch(arr, start_idx, mid_idx-1, key);//searching for the key in the left subarray
      }else{
        return binarySearch(arr, mid_idx+1, end_idx, key);//searching for the key in the right subarray
      }
    }
    return -1;
  }

  public static void main(String args[]){
    int arr[] = {20,30,40,50,60};
    int key = 40;
    int end_idx = arr.length-1;
    int result = binarySearch(arr,0,end_idx,key);
    if (result == -1)
      System.out.println("Element is not available!");
    else
      System.out.println("Element is available at the index: "+result);
  }
}

Example: Binary Search in Java using Arrays.binarySearch()

import java.util.Arrays;

class BinarySearch{
  public static void main(String args[]){
    int arr[] = {20,30,40,50, 60};
    int key = 40;
    int result = Arrays.binarySearch(arr,key);
    if (result < 0)
      System.out.println("Element is not available in the array!");
    else
      System.out.println("Element is available at the index: "+result);
  }
}

Conclusion

This tutorial has covered Binary Search and Recursive Binary Search in Java and their algorithms, implementations, and Java binary search code examples. In Java, binary search is the most commonly used search method. In Java, a binary search is a mechanism for looking for a certain value or key in a collection. It's a key-finding method that employs the "divide and conquer" strategy. The collection on which a binary search will be used to find a key must be sorted in ascending order first. Typically, most programming languages support linear search, binary search, and hashing strategies for searching for data in a collection.
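All of these techniques assume the input is already sorted in ascending order, as the tutorial stresses at the start. A short sketch (with made-up example values) showing Arrays.sort() being applied before Arrays.binarySearch():

import java.util.Arrays;

class SortThenSearch {
  public static void main(String args[]){
    // an unsorted array; binary search requires ascending order
    int arr[] = {50, 20, 90, 10, 60};
    int key = 60;

    Arrays.sort(arr); // sort first
    System.out.println("Sorted: " + Arrays.toString(arr));

    int result = Arrays.binarySearch(arr, key);
    if (result < 0)
      System.out.println("Key is not available in the array!");
    else
      System.out.println("Key is available at index: " + result);
  }
}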
https://www.codeunderscored.com/binary-search-in-java-explained-with-examples/
CC-MAIN-2022-21
refinedweb
1,502
54.83
import java.io.PrintStream;
import java.util.Scanner;

public class TriangleArea {
    static Scanner sc = new Scanner(System.in);
    public static void main(String[] args) {
        System.out.println("Enter the base length of the triangle:");
        double base = sc.nextDouble();
        System.out.print("Enter the height length of the triangle:");
        double height = sc.nextDouble();
        double preCalculation = base * height;
        double Area = preCalculation / 2.0;
        System.out.println("The Area of your triangle is: " +Area);
    }
}

import java.io.PrintStream;
import java.util.Scanner;

public class Rectangle {
    static Scanner sc = new Scanner(System.in);
    public static void main(String[] args) {
        System.out.println("Enter one of the side lengths:");
        double length1 = sc.nextDouble();
        System.out.println("Enter the other side length");
        double length2 = sc.nextDouble();
        double Area = length1 * length2;
        System.out.println("The area of your rectangle is: " +Area);
    }
}

import java.io.PrintStream;
import java.util.Scanner;

public class CircleArea {
    static Scanner sc = new Scanner(System.in);
    public static void main(String[] args) {
        System.out.println("Enter the radius of your circle:");
        double radius = sc.nextDouble();
        double Area = 3.1415926535 * radius * radius;
        System.out.println("The area of your circle is: " +Area);
    }
}

import java.io.PrintStream;
import java.util.Scanner;

public class MainClass {
    public static void main(String[] args) {
        System.out.println("Would you like to calculate the area of a Triangle, Rectangle, or Circle?:");
    }
}

if(args[0].equals("Triangle"))

import java.io.PrintStream;
import java.util.Scanner;

public class MainClass {
    public static void main(String[] args) {
        System.out.println("Press 1 for triangle, 2 for rectangle, and 3 for circle:");
    }
}

Quote from VoteLobster I just started learning java... I need the code :huh.gif: And so it doesn't seem to lazy, it wouldn't hurt if you put some explanation of it...

if (typeinput == 1){
    circle();
} if else (typeinput == 2){
    square();
} if else (typeinput == 3){
    rectangle();
}

public void circle(){
}

import java.util.Scanner;

public class Main {
    static Scanner typeinput = new Scanner(System.in);
    public static void main(String[] args) {
        System.out.println("Type 1 for triangle, 2 for rectangle, 3 for circle.");
        if (typeinput == 1){
        }
    }
}

Quote from VoteLobster(stupid decision; I got a little ahead of myself there) Quote from VoteLobster So... there would be a 'get area' class, with each of the formulae in it, and the separate shapes have their own class, such as 'triangleClass'? And that shape class will tell the 'get area' class which formula to use?
public abstract class shape { private ArrayList<double> points;//define the polygon's points public shape() { } abstract public double getArea(); //put accessors and mutators in, of course } //------------pretend this is a different file public class triangle extends shape { public triangle() { } @Override public double getArea() { //put the code to calculate area for a triangle here } //put accessors and mutators in, of course } //--------------pretend this is a different file again public class circle extends shape { private double radius; public circle() { } @Override public double getArea() { //put the code to calculate area for a circle here } //put accessors and mutators in, of course } shape roundThing = new Circle(); roundThing.setRadius(5); double roundArea = roundThing.getArea();//will give you the area of the circle public abstract class Shape { //This is an abstract class, meaning it never directly instantiated. Only subclass/child classes can of this class can be instantiated protected double area; private String colour; //No constructor, so default empty constructor will be used public double calcArea() { //This will never actually be called, but to override it in subclasses/child classes we need to have it defined here return 0; } public String getColour() { //When no method of the same name is found in a subclass/child class, java will look to this (the superclass/parent class) and run this one return colour.toString(); } public void setColour(String colour) { this.colour = colour; } } public class Rectangle extends Shape { private double length; private double breadth; //constructor public Rectangle(double length, double breadth) { this.length = length; this.breadth = breadth; } public double calcArea() { //Override calcArea method return length * breadth; } } public class Triangle extends Shape { private double base; private double height; //constructor public Triangle(double base, double height) { this.base = base; this.height = height; area = 0; } public double calcArea() { //Override calArea method return (base * height) / 2; } } public class AreaCalculator { public static void main(String[] args) { Shape shape; shape = new Rectangle(10, 5); shape.setColour("Red"); System.out.println("RECTANGLE"); System.out.println("*********"); System.out.println("Area: " + shape.calcArea()); System.out.println("Colour: " + shape.getColour() + "\n"); shape = new Triangle(10, 5); shape.setColour("Green"); System.out.println("TRIANGLE"); System.out.println("********"); System.out.println("Area: " + shape.calcArea()); System.out.println("Colour: " + shape.getColour() + "\n"); shape = new Circle(5); shape.setColour("Blue"); System.out.println("CIRCLE"); System.out.println("******"); System.out.println("Area: " + shape.calcArea()); System.out.println("Colour: " + shape.getColour() + "\n"); } } Public class Main{ Static int textinput = new Scanner(system.in) System.out.println("1 for triangle, 2 for rectangle, 3 for circle"); if(textinput .equals 1){ //then from there it goes into the calculator} public int getInteger() ArrayList<int> thingy = new ArrayList(); if(thingy.toString.matches("woop")) { // do something meaningless here } So now I more understand what a method is; so a method is a function/ within the Main method that is run with the rest of the class? Do they only exist in Main classes? or are they put in other classes? So just making sure. A double is a value (does it have to be an integer?) that is used with the class to perform a function? 
(triangle) (rectangle) (and circle) I need a way to have a class determine which other class to use. I have this so far: I want to have it ask you which one, and by text input (triangle, rectangle, or circle) I need a way to let it know which class to run. But where do I go from there? I'm confused. I keep getting bracket errors. I wont give you the code that would make it too easy. But have the main method have it output press 1 for triangle 2 for rectangle or 3 for circle. Then have an if else for 1 2 and 3 then in the else make it output invalid number and call the main method or you could stick it all in a giant loop. And so it doesn't seem to lazy, it wouldn't hurt if you put some explanation of it... and then? Superclasses Shape Subclasses Triangle Rectangle Circle Other Classes ShapeArea (what you've called MainClass, I just don't like that name, for reasons I won't go into, unless you really want me to) And what is the superclass Shape? If you just copy paste you don't learn ill put some code but I will tell you how to do it. Ok create 1 class you don't need a new class for every thing. Have it print Please type 1 for Triangle 2 for circle 3 rectangle. Then your main method in it put a scanner to input into a byte Byte is primitive data type. just call it typeinput Then make If else statements. Use == to see if something is = if you just put = you are telling it to be equal to it. That would be put in the main method. Now after that you would create a method called circle. Do that for circle triangle and square and copy and paste the code you did earlier out of the circle class for the circle method etc. I think you should give Xaanos' method a fair shot. If you can get it working, or can show that you've given it a fair shot, I will show you how to implement it using inheritance, and explain it to the best of my ability. I checked the classes for the individual shapes I made earlier; I put static Scanner ___ = new Scanner(System.in); in the same place, but at the If line it recognizes 'typeinput' but still gives me an error It's not really that stupid of a decision. As orbit79 said, this is a great place to use polymorphism. You had the right idea, you just didn't quite know what you were doing yet. this might be a bit out of your league right now, but very quick breakdown of inheritance and polymorphism: When a class inherits from another class, the class being inherited from is called the 'parent' class, while the class inheriting from the parent is known as the 'child' class. The child class receives all the fields and functions of the base class, which you can then add on to in the class definition. The relationship between a parent class and a child class is known as an "is a" relationship. Take the example of a parent class "shape" and its child class of "triangle". Triangle inherits from shape, therefore triangle is a shape. Polymorphism is treating a set of child classes that all inherit from a common parent class as if they were that parent class. Often times, the parent class is abstract, meaning that you cannot ever encounter an instance that is purely that class. If we put this into the perspective of the previous example, if shape was defined as abstract, you will never encounter a object of type shape. You can still encounter triangles, which are shapes, as well as any other class that inherits from the abstract class 'shape', but never shape all by itself. 
So if you had an abstract class, shape, with the abstract function getArea(), you could then define a bunch of classes that inherit from your shape class (such as triangle, circle, rectangle, etc.) and override the getArea() function in those classes to do the specific calculation for that particular shape. What you can then do is tell the class that makes the decision to expect an object of type shape as a parameter to one of its functions and then call the getArea() function. You can then pass it any class that inherits from shape, and the getArea() function that will be called is the function of the child class that was passed to it. It says "give me a shape", you say "here's a triangle", and it's okay with that. It says "give me a shape", you say "here's a circle", and it's fine with that. From what I understand so far, You would have a Main class. You run it, and it asks you which shape you want. When you give it your input, say, it will go to 'get area' class and that will refer to your shape class (be it triangle) then it performs the calculations, sends it to the main, and spits it out for you? Code wise, not sure how to do that, but kinda makes sense to me. Somewhat. No, there'd be a bunch of types of shape, and each type would have its own class. Each of those classes will have a 'get area' function that overrides the one declared in the parent 'shape' class. Each of the shapes will know how to calculate their own area. Polymorphism allows us to tell those shapes to use that function even if we don't know exactly what kind of shape it is. It'd look something like this: So with that, you could then do something like this: So even though roundThing is a shape, you can put a circle in its place because a circle is a shape. Then, when you use the getArea function, it gives you the correct area for the specific shape you put in it. I hope that helps, thanks. PS. I have included the colour stuff to demonstrate how a subclass will call the method from the superclass in the event that a copy within the subclass is not defined. Shape.java Rectangle.java Triangle.java Circle AreaCalculator.java 1- With the 'public String getColor' command, I see that you set the colors for the shapes in AreaCalculator.java. From what I see, is this just another way of labeling the different shapes, and the color refers back to the AreaCalculator class to know which Triangle/Rectangle/Circle class to use? Not have to label them individually? 2-Can you explain what the AreaCalculator class and Shape class are doing, in a nutshell? By the way, I used completely different calculation codes for the shapes, but I don't imagine that would affect it much? 3-What exactly is a Double? An integer value? Also, I talked to my friend about it today. He said to do this: (this was for merging all of the shape classes into one big class) then from there it goes into at the end he said to put a 'return;' . Is a return; just for ending if statements? Return to the main code, per say? I also tried this but when I ran it, it didn't ask for text-input. So when the AreaCalculator class tells each shape to use its getArea() function, each shape will call its own getArea function. The functions defined in the child classes are used because those functions override the getArea function found in the shape class. They are all shapes, so they each have a getArea function, but each one has its own specific method of finding its area. 
On the other hand, when the AreaCalculator class tells each shape to use its getColor() function, each shape will use the getColor function from the shape class. In this case, the function from the shape class is used because none of the child classes have overridden it with their own functions. Though they may be different from the base shape class in many ways, they are all still shapes. 3. A double is a type of floating point value. It's called a double because it uses twice as much memory as a float. It honestly doesn't make much of a difference these days, but it did some 20-30 years ago. Stick to using doubles unless you need to make many millions of them and memory use actually does become an issue. 4. The return statement informs the program to exit the function and send back a particular value. This value must be of the type laid out in the function's declaration. When you see a function like this: Then that function must return an integer value. In Java and many other languages, the function call itself can be treated as the type it returns. This is true to the point that you can chain function calls based upon what each function returns. So here I'm calling the .toString() function, which is common to all objects in the Java libraries. Since it returns a String, I'm capable of calling the .matches() function, which is a member function of the String class, on that function call. Putting a return statement in the main method causes the program to exit. Also note that I keep using the word "function" even though I should technically be saying "method". A method is a function that is part of a class, a "member function" if you will. In Java, functions that aren't members of classes don't exist, so they're technically all methods. So just making sure. A double is a value (does it have to be an integer?) that is used with the class to perform a function? About half-way through I remembered that I hated java. Or perhaps hatred was renewed when, instead of being able to follow a sane approach and, you know, be able to enumerate Types/classes in a package, I was forced to essentially write a god damned ClassLoader. But I digress. That, and the result would have in no way helped the OP.... A method is a function on an object. To calculate the area of a shape, for example, you'd call a method. When you write to the output stream (using system.out.println) you are calling the println method of the OutputStream class. When you read a double from the Scanner, you are calling the getDouble() method. You've got two types of methods, basically (well, in java, let's not complicate things). You have methods that belong to the object instance and methods that belong to the object class. A Class essentially defines the template for each instance. The build in "String" type is a class, and you can call "String.ValueOf" and get the string representation of a value. there you aren't accessing a string, but rather the class itself- you don't need an instance. Whereas if you have a string variable, it is an "instance" of a string and any and all strings will have the instance methods of string. The "Main" method is only special by convention; by convention the way a Java class is "started" is by the java class loader loading up the class, finding the Main Method, and invoking it. it is static because by virtue it doesn't need to have a loaded instance of the class it is contained in, and if the routine needs an instance it can create one anyway. double is a data type. 
The return statement returns to the caller. In the case of the Main method, this will return to the Java class loader, or whatever loaded up your program. What it is doesn't really matter, but basically when you return from the main method your program is finished. Within other methods, it returns to whatever called it.
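Putting the pieces of this thread together, a rough sketch of the menu the original poster was after could look like the following. It assumes the Shape, Rectangle, Triangle and Circle classes posted earlier (with their constructors and calcArea() methods) are compiled alongside it:

import java.util.Scanner;

public class AreaMenu {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Press 1 for triangle, 2 for rectangle, or 3 for circle:");
        int choice = sc.nextInt();

        Shape shape;
        if (choice == 1) {
            System.out.println("Enter the base and height:");
            shape = new Triangle(sc.nextDouble(), sc.nextDouble());
        } else if (choice == 2) {
            System.out.println("Enter the two side lengths:");
            shape = new Rectangle(sc.nextDouble(), sc.nextDouble());
        } else if (choice == 3) {
            System.out.println("Enter the radius:");
            shape = new Circle(sc.nextDouble());
        } else {
            System.out.println("Invalid number");
            return;
        }

        // polymorphism: whichever subclass was constructed, its own calcArea() runs
        System.out.println("The area is: " + shape.calcArea());
    }
}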
http://www.minecraftforum.net/forums/off-topic/computer-science-and-technology/481546-question-about-java
CC-MAIN-2017-22
refinedweb
2,946
72.46
HubSpot E-Commerce: Integrate a Shopping Cart to Your SiteJune 30, 2016 Since we launched Snipcart, we've had many interesting integration suggestions for our e-commerce solution. One of them recently caught my eye. It went: Demo request: enable HubSpot e-commerce by integrating Snipcart. After researching the e-commerce + HubSpot topic for a while, I realized there weren't a whole lot of solutions to sell on HubSpot marketing sites. A bit like with Unbounce's landing page builder. So I thought I'd come up with a post to guide and inspire HubSpot users wanting to sell products on their site. After playing around a bit with their marketing software, I came up with a decent demo for the blog. So today, I'm going to show you how to integrate our shopping cart platform to HubSpot's Marketing software. Following the next 5 steps will allow you to easily add buy buttons and products to your site. 1. Create free Snipcart and HubSpot accounts You can sign up free for Snipcart and for HubSpot's free 30 days trial. While in test mode, Snipcart will remain forever free. Note that you'll need a valid domain name to start your free trial of both our e-commerce solution and their marketing software. For the following tutorial, we're going to assume you already have a website set up with HubSpot. If you don't, here are some helpful getting started resources. 2. Add our shopping cart platform to your HubSpot e-commerce site Go to your Snipcart merchant dashboard. Click on the user icon, top right corner. Under Account, click on API Keys. Copy the snippet of code you find there, without the jquery line. We won't need to add jQuery since HubSpot already has it in its site themes. Note: As you can see in the screenshot here, we're using Snipcart in Test mode. Because the API key included in the snippet is the Test one, we won't be able to make real transactions on the HubSpot site just yet. To go Live with Snipcart, please read this documentation section. Let's jump to HubSpot's marketing admin panel now. Go to Content > Content Settings > Page Publishing. Scroll down a bit and paste the Snipcart code snippet in the Site Header HTML section: Click on the blue Save changes button further down the screen. 3. Create a custom HubSpot module for your online store's products What we need to do now is create a re-usable module for your Snipcart products. Thanks to HubSpot's custom modules, this can be achieved pretty easily, with a bit of simple code. Go to Content > Design Manager > Custom Modules. Hit the blue New custom module button, select Custom Module, and hit Create. Name your new module Snipcart products. You should see a bunch of pre-made fields and a few demo lines of code in the editor: Get rid of all pre-made fields by hitting Arrange fields & Delete on each of them. Wipe the demo code in your editor also. Now, create Snipcart's required product attributes using the Add new field button (more details on how Snipcart's product definition works). 
You'll need to add the following: - Name: a TEXT field labeled product_name - Product image: an IMAGE field labeled product_image - Product ID: a TEXT field labeled product_id - Product price: a TEXT field labeled product_price Using HubSpot's basic HubL templating language, we'll need to code a custom module containing all of the required Snipcart product attributes (the fields defined above + the product URL): <button class="snipcart-add-item" data-item-name="{{ widget.product_name }}" data-item-url="{{ content.absolute_url }}" data-item-id="{{ widget.product_id }}" data-item-price="{{ widget.product_price }}" {% if widget.product_image.src %} data-item-image="{{ widget.product_image.src }}" {% endif %} >Buy now</button> Your editor should end up looking something like this: Hit the Update button. The code and settings above are in fact a re-usable "buy now" button for Snipcart products. You'll be able to add this module to site page templates, and use it wherever you'd like on your site. Before we do that, however, let's style the button a bit. 4. Style your products "buy now" buttons with a custom CSS rule Go to Content > Design Manager > Coded Files > Custom > system > css. Open your HubSpot theme's stylesheet (I picked the Stratus theme while creating my HubSpot site). You'll need to add a custom CSS rule to define your Snipcart buy buttons' look and feel. You can use this website to generate one quickly. Don't forget to change the name of the rule to snipcart-add-item to make it match with our products custom module. Here's an example: .snipcart-add-item { background-color:#DE0F29; -moz-border-radius:28px; -webkit-border-radius:28px; border-radius:28px; border:1px solid #DE0F29; display:table; margin:4em auto 0; cursor:pointer; color:#ffffff; font-family:Arial; font-size:17px; padding:16px 31px; text-decoration:none; text-shadow:0px 1px 0px #DE0F29; } .snipcart-add-item:hover { background-color:#cc3300; } .snipcart-add-item:active { position:relative; top:1px; } You can copy your rule towards the end of your theme's stylesheet: Hit Publish Changes before moving on. Note: depending on your theme and visual preferences, you'll most likely have to adjust this CSS rule to better fit your needs. 5. Add products to your HubSpot e-commerce site Now let's add a buy button to our homepage as an example. Go to Content > Design Manager > Templates > Custom > Web Page Basic. Open up the Homepage. There, add the Snipcart products custom module where you wish to insert a buy button: Now, head to Content > Website Pages > Home. Select Edit. Click on your buy now button custom module. Fill the fields we created earlier with the appropriate product info, and hit Save further down. You can repeat the process above with any type of templates in your HubSpot e-commerce site where you'd like to be able to add products. Now for the fun part. Update your changes, and open your site. Go ahead and try adding a Snipcart product! And there you have it: a fully functioning online store right on your HubSpot site! Once you're ready to sell, you'll need to go live with HubSpot. You'll also be able to manage all e-commerce operations in your Snipcart merchant dashboard. Conclusion Enabling e-commerce on HubSpot with our shopping cart platform is fast and straightforward. In this demo, we used HubL custom modules and a bit of code to facilitate product creation. However, you could've injected products using pre-made custom HTML modules with Snipcart product definitions. 
You could also customize Snipcart's shopping cart to make it fit with your HubSpot theme. HubSpot Marketing is a great software solution to optimize your business' inbound marketing results. And we'd sure love to see more e-commerce integration of it with Snipcart! If you enjoyed this post and found it valuable, could you send a tweet our way? I'd love that! Got any questions regarding HubSpot e-commerce & Snipcart? Feel free to hit the comments. :)
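For reference, a plain HTML buy button equivalent to the HubL module above could look roughly like this. The product values are invented for illustration; only the class name and the data-item-* attribute names come from the module shown earlier:

<button class="snipcart-add-item"
        data-item-name="Sample T-Shirt"
        data-item-id="sample-t-shirt"
        data-item-price="25.00"
        data-item-url="https://www.example.com/products"
        data-item-image="https://www.example.com/images/t-shirt.png">
  Buy now
</button>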
https://snipcart.com/blog/hubspot-ecommerce-shopping-cart-integration
CC-MAIN-2017-04
refinedweb
1,207
65.22
Dynamically Import All Python Files Hi, I'm trying to import dynamically import python files in a directory. For example if the directory is: utility ------init.py ------file1.py ------file2.py ------file3.py The code to import them would be from utility import file1 from utility import file2 from utility import file3 I want this line of code to be executed dynamically. So if I add a file4.py, I don't want to write from utility import import file4. That's why I want them to import dynamically. However, my code below gives an error of ImportError: cannot import name 'filename'where it treats the filename as an actual string and not as a variable. Is there a way around this? Here is the current code: import utility for f in files: if f.endswith(".py") and not f.startswith('__init__'): filename = f.split('.') from utility import filename[0] P.S. There are some solutions in the stack overflow like this one: But again, it requires to be executed every time I add a python file in the directory (i.e. no dynamic) Hi, for clarification: I think you do not actually mean importing modules dynamically (which would imply importing them on a "as needed basis" depending on the branching of your code. From what I do understand you want to implicitly import all modules located in a package (folder). I assume you are aware that python has the wildcard syntax for that. from package import a, b, c, d # is equivalent to (assuming there is only a, b, c, d) from package import * To make this somewhat useful you will have to define in your __init__.pywhat you want to be imported (as described in your linked post). However to import from packages Python has to know where to look for them. If you have your folder utilityin your c4d plugin folder, this won't work out of the box. There are multiple ways to achieve this and people will probably boo me for telling you this, but the easiest way is to just add to (or pollute as some people would say) the sys path. _path = os.path.dirname(__file__) if _path not in sys.path: sys.path.insert(0, _path) #assuming foo is a folder/package in the dir of __file__ from foo import bar One of the drawbacks is that you have to do this: _path = os.path.dirname(__file__) if _path not in sys.path: sys.path.insert(0, _path) from foo import bar if SOME_SORT_OF_DEBUG_CONDITION_IS_TRUE: reload(bar) If you want to reload that module on a running Python VM. For example when you hit "reload python plugins" in c4d. Otherwise you have to restart c4d for each code change to apply. Cheers zipit @zipit Thanks for the response. Yes, I ended up "polluting" the sys.path. I don't mind though. It works as expected. Here is the working code for me without using the __init__file and it imports dynamically. import c4d import os import sys rigDir = "C:/Users/Luke/Dropbox/Scripts/c4d" if not rigDir in sys.path: sys.path.append(rigDir) dir_path = "C:/Users/Luke/Dropbox/Scripts/c4d/utility" files = os.listdir(dir_path) import utility from utility import hello_world for f in files: if f.endswith(".py") and not f.startswith('__init__'): filename = f.split('.') #from utility import filename[0] exec('from utility import %s' % filename[0]) exec('reload(%s)'% filename[0]) The only caveat so far is to convert all scripts to if __name__=='__main__': main() Because at this point, all scripts get executed when imported hahaha. Should have listed to everyone telling me to use the if __name__=='__main__'from the start. 
While @zipit has almost said everything, I would like to add a few things:

- if __name__=='__main__' is really needed, since it ensures that your code is not run when it is imported from anywhere. Performance-wise it's also very important: even if it's not a big issue in your plugin logic, if your code is executed each time you import stuff, think of the user and his performance.
- It's recommended to do a local import and not a global import (as you do), since if you add a module named xyz to the sys.path from your plugin A, and from plugin B you try to import another module named xyz, it will load the one from plugin A (the first one registered), and the other plugin will not work.

For more information read Best Practise for import, and especially the great localimport module from @NiklasR that is mentioned in the Best Practise for Import topic.

Cheers, Maxime.
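For completeness, the same implicit package import can be done without exec by using importlib and pkgutil from the standard library. This is only a sketch: the utility package and hello_world module names are the ones from this thread, and it assumes the folder containing the package is already on sys.path:

import importlib
import pkgutil

import utility

# import every module found inside the utility package, keyed by module name
modules = {}
for _finder, name, _is_pkg in pkgutil.iter_modules(utility.__path__):
    modules[name] = importlib.import_module("utility." + name)

# e.g. modules["hello_world"].main()   # assuming hello_world defines main()
# on Python 3, importlib.reload(modules[name]) can replace the exec reload calls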
https://plugincafe.maxon.net/topic/11790/dynamically-import-all-python-files
CC-MAIN-2021-17
refinedweb
767
63.9
Code. Collaborate. Organize. No Limits. Try it Today. Web services are still new and growing up, it will definitely have an important role to play in the future of distributed computation. In this paper, I introduce a tool that allows you to search web services via UDDI registry servers. The point of my tool is to deliver business value, which in this case means easy discovery of web services and letting you view their WSDL on a tree. Prior to making this tool, I fumbled through MSDN and CodeProject, to learn about how to discover a web service from UDDI, parsing a WSDL and XML schema. Along with some of my previous works on text processing, the program was born with the following features: If you are not familiar with WSDL, XML schema, XML namespace and UDDI, please do some reading to get a feel of this problem. First, I want to start with a short introduction to web services. Web services consist of a service provider and multiple consumers based on the client-server architecture. Each web service uses a custom communication protocol for the clients to access the servers. The most common access pattern for a web service consists of requests and responses. The client sends a request message that specifies the operation to be performed and all the relevant information to perform the operation, to the server. The server performs the specified operation and replies with a response message. The actions carried out by the server might result in permanent changes to the sate of the server. Essentially, web services provide RPC like interfaces to the client. For example, MyInfor, is a web service that allows users to maintain access information such as names, addresses and phone numbers of their contacts. The MyInfor web service exports operations to insert, delete, replace and query portions of this contact information. Each of these operations takes input parameters (the query string) and produce an output (query response or success status) while making permanent state changes at the server. Each web service provides its own custom interface that could be vastly different from those provided by other web services. For example, a travel web service would provide operations to search for airfares, reserve and buy tickets and look-up itinerary. Before going further, you should have a basic understanding of WSDL, XML namespaces and XML schema. WSDL is a document written in XML. It provides a way for service providers to describe the basic format of web service requests over different protocols. The WSDL specification defines the following major elements of networked XML-based services: Typically, information common to a certain category of business services such as message formats, port types, and protocol bindings, are included in the reusable portion, while information pertaining to a particular service endpoint is included in the service implementation definition portion. XML schema is an XML based alternative to DTD. An XML schema describes the structure of an XML document, and it is used to validate whether an XML document conforms to the definition structure. The XML schema is used to define the data type structure in a WSDL. A UDDI server is a web services registry, it contains the following information: Looking at the UDDI mechanism we find that it is a web service, which exposes information about other web services. One of the major purposes of a UDDI is to provide an API for publishing and retrieving information about web services. 
The operations can be invoked by a SOAP call to the exposed methods of a certain web service. The common uses of a UDDI are: Public UDDI hosted by large companies like Microsoft, IBM. Anyone can get an account in these servers and look for a web service that they want to invoke for their development. Companies that have built web services most likely use these public services. Protected Industries that expose their own UDDI servers for performance or security reasons. For example: chemical sector, or finance. Private Some large companies may choose to run a UDDI server on their Intranet so that generic building blocks for corporate applications can be exposed throughout the company. For more details about UDDI please refer to UDDI specifications. The problem of searching for a web service involves two steps: At the basic level, the UDDI API provides only a simple keyword search on the "web service name" (or TModel name) advertised in UDDI registries. In fact, some valuable information may not be included in the name. The information about a web service is comprised in the advertised UDDI description, the description in the web service itself, method etc. Unfortunately, the result returned by the UDDI server may be huge. There are a lot of web services that can be found with the associated access point like "...". This is useless information. Users need to visit hundreds of entry points to find the appropriate services. In addition to the simple keyword processing, I now provide the post-filtering query feature to help find out a relevant advertisement among the currently available ones. In the first part of the work, I used the data directly from the UDDI registry, and did not utilize WSDL files as a source evidence for the searching task. But this is a must in the next step because these XML files carry all the information that is needed to describe a web service.[*] This task is made easier with the availability of a local repository of service advertisement information and descriptions. The filtering algorithm is based on the vector space model which was proposed by Salton[1988]. The major idea behind this algorithm is that documents and query are represented on a K-dimensional vector. K is the number of distinct words which are extracted from the document collection. Each word is assigned a weight, it reflects the importance of a word within the document. This value is calculated based on its frequency and its distribution across a collection of documents. The idea behind IDF weighting is that people usually express their opinion by using frequently used words. The similarity of two documents is calculated based on TF-IDF and the cosine similarity between the angle of two vectors which represent the documents. This value is then normalized 0 through 1, and is used to rank the search results. For more detail about TF-IDF, please refer to this. Here, I consider the terms in a UDDI description of the retrieved advertisement services (the concatenation of all advertised information about a web service) as a bag of words and use the TF-IDF measure to compute the similarity between two such bags. The pre-processing step includes: word stemming (suffixes removing), and stop word removal (removing frequent and insignificant words). This could help improve the accuracy of ranking. Due to the descriptions that are highly compact, I decided to use n-gram text method to extract the vocabulary collection (distinction terms). 
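As an illustration of the ranking step described above (not the tool's actual code), a minimal TF-IDF weighting and cosine similarity in C# could look like this, assuming term counts and document frequencies have already been collected for the query and each service description:

using System;
using System.Collections.Generic;

static class Ranker
{
    // weight = term frequency * a smoothed inverse document frequency
    public static Dictionary<string, double> TfIdf(
        Dictionary<string, int> termCounts,
        Dictionary<string, int> docFrequency,
        int totalDocuments)
    {
        var vector = new Dictionary<string, double>();
        foreach (var term in termCounts)
        {
            int df;
            docFrequency.TryGetValue(term.Key, out df);
            double idf = Math.Log(1.0 + (double)totalDocuments / (df + 1));
            vector[term.Key] = term.Value * idf;
        }
        return vector;
    }

    // cosine of the angle between two sparse vectors; 0..1 for non-negative weights
    public static double Cosine(Dictionary<string, double> a, Dictionary<string, double> b)
    {
        double dot = 0, normA = 0, normB = 0;
        foreach (var pair in a)
        {
            double other;
            if (b.TryGetValue(pair.Key, out other))
                dot += pair.Value * other;
            normA += pair.Value * pair.Value;
        }
        foreach (var value in b.Values)
            normB += value * value;
        return (normA == 0 || normB == 0) ? 0 : dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
    }
}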
N-grams are a language-neutral representation, it works better for languages other than English where the rules based stemming algorithm (e.g. English-Porter stemming) have not shown to work well. The N-grams have also shown to work well in short text matching, including spelling errors, acronym, name matching etc. The disadvantage of N-grams based tokens system are slower running speed, and it incurs disk usage more than the word based tokens system. Typically, 2 or 3-grams occur in documents, but in some documents it is 6 or 7-grams. After the pre-processing step, a description is split up in n-grams, instead of words like in other information retrieval systems. In my preliminary evaluation, the tri(3)-gram, and quad(4)-gram based systems have shown to return better results than the word-based tokens system. Each web service has an associated service description (WSDL) that describes its abstract interface and the concrete implementation functionality. The service description will be parsed for all major content elements like the type definitions, elements, operations etc. These elements will be modeled on a tree. The application starts parsing from the implementation level (service, port, binding) of a WSDL and goes up to the interface level (portType, message, operation). The information about the service, port, binding elements will be used to determine the form style of the parsed message. A web service node contains the PortType collection, a PortType (like a class) corresponds to a set of one or more operations, where an operation (like a method) defines a specific input/output that must correspond to the name of a message that was defined earlier in the WSDL document. If an operation specifies just an input, it is a one-way operation. An output followed by an input is a solicit-response operation, and a single input is a notification. PortType The input/output parameters of an operation use a message as their type. An operation may either have an input or output or both. The message part uses the XML schema to define their part's type. A binding corresponds to a PortType implemented using a particular protocol such as SOAP or CORBA. The type attribute of the binding must correspond to the name of a PortType that was defined earlier in the WSDL document. If the service supports more than one protocol, the WSDL includes a binding for each. The input/output parameter trees of an operation are created based on the operation's style (like Document or RPC) that follows the SOAP binding rules. These trees are a kind of XML schema trees and look like the XML SOAP messages of the consuming web service. The WSDL(1.1) SOAP binding rules are: <soap:body> Each element or attribute of the schema is translated into a tree node. This implementation supports all XML schema elements, and extensible elements (array) that come from the SOAP definition. Parsing XML schema is a complex recursive operation that walks on every particle of an XML schema. Modeling XML schema as a tree requires an exhausting operation due to the complexity of the XML schema structure. During this task, the constructs in the XML schema are always taken care of to ensure that a schema is modeled as a full tree. i.e. : some complexType derives from another complexType, extension or restriction... This also takes into account the following definitions: Reference definition is a mechanism to make schema simple through the sharing of common segments/types. 
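A minimal sketch of the character n-gram splitting described above (illustrative only, not the tool's implementation):

using System.Collections.Generic;

static class NGramTokenizer
{
    // split a lower-cased description into overlapping character n-grams,
    // e.g. Extract("web service", 3) -> "web", "eb ", "b s", " se", "ser", ...
    public static List<string> Extract(string text, int n)
    {
        var grams = new List<string>();
        if (string.IsNullOrEmpty(text) || text.Length < n)
            return grams;

        string normalised = text.ToLowerInvariant();
        for (int i = 0; i <= normalised.Length - n; i++)
            grams.Add(normalised.Substring(i, n));
        return grams;
    }
}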
In the process of transforming this structure into a tree, I chose to duplicate the shared segment under the node that refers to it. Therefore, you don't need to care about the reference types. This happens when a leaf element refers to one of its ancestors (i.e.: a class has a member as an instance of it). This structure definition will also break the tree structure and it has to be solved differently from the way of solving reference types, otherwise it falls into an infinite recursive loop. In this case, it just shows the node (which refers to its ancestors) with a predefined depth. Each node will be associated with a prefix if the schema definition specifies that the form of this node (element/attribute) is qualified. The elements which appear more than once in an XML document will be displayed duplicated on the tree. The XML schema tree looks very much like an XML instance document of that XML schema. This tool is built on .NET 2.0, and references the Microsoft UDDI SDK package. This form features an advance searching of web service via UDDI servers. It may help your application find a potential partner. I have started modifying the original sample class "Discovery web service via UDDI" from MSDN. Building from a simple sample, I chose to create the new features like extension search parameters, searching thread, parsing and viewing WSDL and XML schema. There are some well-known UDDI servers such as Microsoft or IBM... that are already at the UDDI server URL repository. In my opinion, these parameters are the most efficient things that help you boost the search effectively. The search parameters now include: This optional collection of string values represent one or more partial services names qualified with the xml:lang attributes. Any BusinessService data contained in the specified BusinessEntity with a matching partial name value gets returned. xml:lang BusinessService BusinessEntity Names of the business providers have been registered in the selected UDDI server. If more than one business provider matches the name, it will return all the services that match the service name which belongs to each of them. (logical OR). If the textbox is null this means that it will search for all the providers, i.e. search with business name "simpleTron", at this URL, blank service name, blank TModel. null The names of the TModels have been registered by the services in the selected UDDI server. It will search for all the business structures that contain BindingTemplate structures with fingerprint information that matches the TModel name. If more than one TModel matches the name, then the BusinessService structures that contain BindingTemplate structures with fingerprint information that matches all of their TModel keys specified will be used to filter the services (logical AND only). BindingTemplate These are used to sort the results and to control the keyword matching: case sensitive/insensitive, use of wildcards and exact match. Sometimes it may take a long time for the search for whatever reason via the server; in that case you can stop the searching thread by clicking on the stop button. The search result lists the web services that match the criteria. Each result is a web service. A web service is translated into a node that has two children: After getting the results from the UDDI server, you can input a new query (at the web service name text box) and click on the "Filter" button to sort the list. 
The ranking module will then take the query, and sort the result in the descending order of their estimated similarity rating when compared to the given query. It is expected that this order will show the list from the most relevant results. WSDL view displays the selected WSDL on a tree. View displays service properties, such as name, location (URL), documentation (annotation), the Port types, the collection of operations that are offered by the service and their parameters. Depending on the complexity of the WSDL and the server usage status, tree view may take a few seconds to render. Move your cursor over any operation element to view the input/output parameters associated with it. I hope to have more chances to intensively study this topic. By the next step, I would like to utilize the WordNet dictionary to enrich the semantic of the search. Building a web service (s) corpus to store the search and computation results (TF-IDF weighting, vector space), that may be helpful for some extension works. I also would like to study DAML-S, that is said to be fairly similar to WSDL, but it supports the specification of semantic information in RDF format. Migrating to the Mono framework will also be an interesting task. It also requires to test carefully on performance, and a precision/recall rating. This tool is provided free for using. None of the source copyright notes and author lines should be replaced. This article is short because I have not explained the methods in detail, so if you have any comments or questions regarding this tool please drop me a line, Thank you. This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here Inquire.AuthenticationMode=AuthenticationMode.None ; WebProxy proxy=new WebProxy(proxyServerAddress, proxyPort); GlobalProxySelection.Select=proxy; Uri uri = proxy.Address; CredentialCache cache = new CredentialCache(); cache.Add(uri, "NTLM", new NetworkCredential(yourUserName, yourPassword)); proxy.Credentials = cache; Inquire.HttpClient.Proxy = proxy; General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/11267/UDDI-Explorer-Tool-for-searching-web-services?fid=206386&df=90&mpp=25&sort=Position&spc=Relaxed&tid=1310968
CC-MAIN-2014-23
refinedweb
2,708
53.1
[Solved] Gsoap with nokia Qt SDK problems

hi, ... (12 replies)

Yes, there's an alternative to gsoap, with much better Qt integration: KDSoap. It features a WSDL to C++ code generator, qmake integration, and many other things (sync and async calls, RPC and document mode, etc.)

• also add stdsoap2.h and stdsoap2.cpp from the gsoap directory as header and source in your project.

Hey there. I've followed all those steps, and whenever I try to include the gsoap headers in a class other than main.cpp I get this error when running the project in Qt Creator:

mwldsym2.exe: Multiply defined symbol: struct Namespace * namespaces (?namespaces@@3PAUNamespace@@A) in
mwldsym2.exe: files moc_maxintervaltimer.o, maxintervaltimer.o
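A common cause of this particular "Multiply defined symbol: struct Namespace * namespaces" error is that the generated namespace table gets compiled into more than one translation unit. As a hedged sketch (the file names here are only examples), the generated .nsmap file should be included in exactly one .cpp file, and any other file that needs the table should only see a declaration:

// webservice.cpp - the ONLY file that includes the generated namespace table
#include "soapH.h"           // generated gsoap declarations
#include "MyService.nsmap"   // defines: struct Namespace namespaces[] = { ... };

// maxintervaltimer.cpp - any other file that uses gsoap
#include "soapH.h"
extern struct Namespace namespaces[];   // declaration only, no second definition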
http://qt-project.org/forums/viewthread/1553
CC-MAIN-2013-20
refinedweb
132
70.5
Windows IoT Core is a young project – and whereas there are already a lot of good tutorials and examples on the internet, and there’s a lot more work to be done before the libraries available can compare with the work done by the Arduino community. I’ve managed to make servo motors work in the past with Arduino – there’s already a servo project packaged with the Arduino development environment, and that just works out of the box. I was interested to see if I could do this in C# with the Raspberry Pi 3, and I couldn’t find any simple pre-existing code for this. Since I like an interesting technical challenge, I thought this would be a good problem to solve. First – how do servos work? A servo is more than just a simple motor which accepts a power supply and spins round – it’s possible to precisely control how much a servo turns. It’s able to do this because the servo is basically made up of a motor, a potentiometer, and a controller. A very simple explanation of how it works is: - The controller chip is given a signal – for example, turn the motor to the 90 degree position; - The motor’s output spindle is connected to a potentiometer – since the controller chip is able to measure the resitance between terminals of the potentiometer, therefore it’s able to infer the current position of the motor; - The controller only powers the motor until the resistance of the potentiometer matches the value it expects when the spindle is at the 90 degree position. So this explains the mechanical operation of a servo – but what about the signal which is sent to the controller chip? How do we tell it to move to 0 degrees, 90 degrees, or 180 degrees? It turns out there’s quite a simple answer to this – we send a series of pulses to the controller, which have different widths for different motor positions – this works like this: - The controller chip expects a series of digital pulses at a particular frequency; - The frequency describes how many pulses are sent per second – so for example, if the time between pulses starting needs to be 20ms, then we’d need to send 50 per second (50 cycles x 20ms = 1000ms). - There for the frequency is 50 pulses per second – also called 50Hz. - Each signal is made up of two logic states – logic 1 (5 volts) and logic 0 (0 volts); - The ratio of time in each cycle spent at logic 1 to the total length of the cycle is called the duty cycle. - For example, if the time between pulses starting is 20ms, and the pulse is 2ms at logic 1, then the duty cycle is 10% (2ms/20ms x 100%); My research suggested that most servos expect pulses at a frequency of 50Hz. They will move to: - 0 degree position with a duty cycle of 5% (1ms of logic 1 in a 20ms pulse); - 180 degree position with a duty cycle of 10% (2ms of logic 1 in a 20ms pulse); So my challenge was to find a way for the Raspberry Pi to generate a series of variable width pulses. This technique of changing the width of pulses is is known as pulse width modulation (PWM). This is easier said than done with the Raspberry Pi. Whereas the Arduino has several pins that output PWM signals, there are no pins in the Raspberry Pi that obviously output PWM. Next – can I simulate PWM using C# code? Well…I gave it a go. My theory was that I could set a pin to logic 1, and then wait for a certain number of milliseconds to pass before setting the pin back to logic zero. I connected the three wires of the servo to my Raspberry Pi – the 5v wire to Pin 2, the ground wire to Pin 39, and the control wire went to Pin 29 (which was GPIO 5). 
In order to develop a Windows app for the Raspberry Pi, I created a blank Windows UWP app, and added a reference to the Windows IoT Extensions. I then added the code below to the MainPage.xaml.cs file. This code didn't work – I'm just including this as part of the path that I followed.

var gpioController = GpioController.GetDefault();
var gpioPin = gpioController.OpenPin(5);
gpioPin.SetDriveMode(GpioPinDriveMode.Output);

var _stopwatch = new Stopwatch();
_stopwatch.Start();

// number of system ticks in a single millisecond
var ticksPerMs = (ulong)(Stopwatch.Frequency) / 1000;

// length of pulse is 20ms (which equates to a frequency of 50Hz)
var pulseDuration = 20;

// let the pin sit at logic 1 until 2ms have passed
var logicOneDuration = 2;

while (true)
{
    var ticks = _stopwatch.ElapsedTicks;
    gpioPin.Write(GpioPinValue.High);
    while (true)
    {
        var timePassed = _stopwatch.ElapsedTicks - ticks;
        if ((ulong)(timePassed) >= logicOneDuration * ticksPerMs)
        {
            break;
        }
    }
    gpioPin.Write(GpioPinValue.Low);
    while (true)
    {
        var timePassed = _stopwatch.ElapsedTicks - ticks;
        if ((ulong)(timePassed) >= pulseDuration * ticksPerMs)
        {
            break;
        }
    }
}

This experiment wasn't really successful – theoretically it was sound, but practically I don't think this method of "bitbanging" is really good enough to give the accuracy necessary for a servo controller. I found this made the servo twitch, but not much else.

I tried a different way – rather than looping until a certain time passed, I thought I'd try blocking the thread for a number of milliseconds after setting the GPIO pin to high or low… this didn't really work either, giving more-or-less the same results as the original code (i.e. the servo twitched, but didn't consistently move in the way I expected it to).

public MainPage()
{
    this.InitializeComponent();

    var gpioController = GpioController.GetDefault();
    var gpioPin = gpioController.OpenPin(5);
    gpioPin.SetDriveMode(GpioPinDriveMode.Output);

    while (true)
    {
        gpioPin.Write(GpioPinValue.High);
        Task.Delay(2).Wait();
        gpioPin.Write(GpioPinValue.Low);
        Task.Delay(18).Wait();
    }
}

I needed to find another way to generate PWM from a Raspberry Pi 3. Fortunately, Microsoft have provided a technology which solves this problem.

Using Microsoft Lightning Providers to generate PWM

Lightning is new software from Microsoft that implements some new functions, including SPI and PWM support. It's pretty easy to enable this software – there are a few simple steps. This learning process was helped by the set-up guide from Microsoft and from the blog post from Lee P. Richardson here.

Change the default controller driver

I opened the online administrative interface for the Pi at, and navigated to the Devices tab of this interface. This has a dropdown at the top of the page showing the "Default Controller Driver", which was set to "Inbox Driver". I opened this dropdown, and selected the second value, which is "Direct Memory Mapped Driver". Once I selected this, I clicked on the button titled "Update Driver", and was prompted to reboot my Pi. When I rebooted the Pi, I looked at the Devices tab of the interface again, and saw that my option was selected.

Download the Lightning providers from NuGet

I right clicked on the Windows app project in VS2015, and selected "Manage Nuget Packages…". This opened the NuGet package manager, and I searched for "Microsoft.IoT.Lightning".
This returned two packages:
- Microsoft.IoT.Lightning (presently v1.0.4), and
- Microsoft.IoT.Lightning.Providers (presently v1.0.0).

Change the package.appxmanifest file to add the new capabilities

I had to make a couple more changes to enable device capabilities, and these changes were to the package.appxmanifest file. I needed to make these changes directly to the XML, so I right clicked on the file in VS2015, and selected "View Code".

First, add the IOT property to the Package node, and add "iot" to the ignorable namespaces.

<Package xmlns="" xmlns:

Next, add the new iot and DeviceCapabilities.

<Capabilities>
  <Capability Name="internetClient" />
  <iot:Capability Name="lowLevelDevices" />
  <DeviceCapability Name="109b86ad-f53d-4b76-aa5f-821e2ddf2141" />
</Capabilities>

Add the PWM code for a servo

I found the code worked well – obviously this is proof of concept code, but I found it moved the servo from 0 degrees, to 90 degrees, and then to 180 degrees.

public MainPage()
{
    this.InitializeComponent();
    Servo();
}

private async void Servo()
{
    // Get the PWM controllers exposed by the Lightning provider
    var pwmControllers = await PwmController.GetControllersAsync(LightningPwmProvider.GetPwmProvider());

    if (pwmControllers != null)
    {
        // use the on-device controller
        var pwmController = pwmControllers[1];

        // Set the frequency, defaulted to 50Hz
        pwmController.SetDesiredFrequency(50);

        // Open pin 5 for pulse width modulation
        var servoGpioPin = pwmController.OpenPin(5);

        // Set the Duty Cycle - 0.05 will set the servo to its 0 degree position
        servoGpioPin.SetActiveDutyCyclePercentage(0.05);

        // Start PWM from pin 5, and give the servo a second to move to position
        servoGpioPin.Start();
        Task.Delay(1000).Wait();
        servoGpioPin.Stop();

        // Set the Duty Cycle - 0.1 will set the servo to its 180 degree position
        servoGpioPin.SetActiveDutyCyclePercentage(0.1);

        // Start PWM from pin 5, and give the servo a second to move to position
        servoGpioPin.Start();
        Task.Delay(1000).Wait();
        servoGpioPin.Stop();
    }
}

In Part 2, I'll design an interface for the servo library and refine the implementation code.
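To give an idea of what such an interface needs to do, here is a rough sketch of the angle-to-duty-cycle mapping, again in Python purely to keep the illustrations in this post consistent. The 0.05 to 0.10 range mirrors the SetActiveDutyCyclePercentage values used in the snippet above; the names are illustrative and not taken from any existing library.

# Map a servo angle to the duty-cycle fraction passed to SetActiveDutyCyclePercentage,
# assuming the same 5% (0 degrees) to 10% (180 degrees) range as the code above.
MIN_DUTY = 0.05
MAX_DUTY = 0.10

def duty_for_angle(angle_degrees):
    """Clamp the angle to 0-180 and interpolate the duty-cycle fraction."""
    angle = max(0.0, min(180.0, angle_degrees))
    return MIN_DUTY + (angle / 180.0) * (MAX_DUTY - MIN_DUTY)

for angle in (0, 45, 90, 135, 180):
    print(f"{angle:3d} degrees -> duty cycle {duty_for_angle(angle):.3f}")
# 0 -> 0.050, 90 -> 0.075, 180 -> 0.100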
https://jeremylindsayni.wordpress.com/2016/05/08/a-servo-library-in-c-for-raspberry-pi-3-part-1-implementing-pwm/
CC-MAIN-2017-26
refinedweb
1,487
52.9
We are given an input text stream and a word, and the task is to count the occurrences of anagrams of that word in the text stream. An anagram is formed by rearranging the letters of a word or phrase into a different word or phrase – for example, the letters of "New York Times" can be rearranged to form "Monkeys write".

Input: String: "workitwrokoffowkr", word = "work"
Output: Count of occurrences of anagrams in the string is: 3
Explanation: The possible anagrams of the word "work" are work, wrok, rowk, owkr, etc. The anagrams present in the given string are work, wrok and owkr, therefore the count is 3.

Input: String: "expresshycool", word = "zen"
Output: Count of occurrences of anagrams in the string is: 0
Explanation: The possible anagrams of the word "zen" are nez, ezn, enz, zne, nze, zen, etc. None of them appear in the given string, therefore the count is 0.

import java.io.*;
import java.util.*;

public class testClass {
   // Returns true if s1 and s2 are anagrams of each other
   static boolean arrangeAna(String s1, String s2) {
      char[] c1 = s1.toCharArray();
      char[] c2 = s2.toCharArray();
      Arrays.sort(c1);
      Arrays.sort(c2);
      return Arrays.equals(c1, c2);
   }
   // Checks every substring of length w.length() against the word
   static int countAna(String stream, String w) {
      int count = 0;
      for (int i = 0; i <= stream.length() - w.length(); i++) {
         if (arrangeAna(w, stream.substring(i, i + w.length()))) {
            count++;
         }
      }
      return count;
   }
   public static void main(String args[]) {
      Scanner scan = new Scanner(System.in);
      String stream = scan.next(); // workitwrokoffowkr
      String w = scan.next();      // work
      System.out.print("Count of occurrences of anagrams in the string is: " + countAna(stream, w));
   }
}

If we run the above code with the first example's input, it will generate the following output −

Count of occurrences of anagrams in the string is: 3
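The approach above sorts a fresh substring for every window, which costs O(n·k log k) for a stream of length n and a word of length k. A common refinement is to slide a character-frequency window along the stream instead, so each step is a constant number of count updates plus a comparison. A sketch of that idea, shown in Python for brevity and assuming plain lowercase strings as in the examples:

from collections import Counter

def count_anagrams(stream, word):
    """Count substrings of `stream` that are anagrams of `word`,
    using a sliding character-frequency window instead of sorting."""
    k = len(word)
    if k == 0 or k > len(stream):
        return 0
    target = Counter(word)
    window = Counter(stream[:k])
    count = 1 if window == target else 0
    for i in range(k, len(stream)):
        window[stream[i]] += 1            # character entering the window
        window[stream[i - k]] -= 1        # character leaving the window
        if window[stream[i - k]] == 0:
            del window[stream[i - k]]     # drop zero counts so equality stays exact
        if window == target:
            count += 1
    return count

print(count_anagrams("workitwrokoffowkr", "work"))   # 3, as in the first example
print(count_anagrams("expresshycool", "zen"))        # 0, as in the second example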
https://www.tutorialspoint.com/count-occurrences-of-anagrams-in-cplusplus
CC-MAIN-2021-43
refinedweb
290
72.76
The following code compiles just fine and I'm not sure why. Can someone please explain to me why this is legal? I am using g++ (Debian 6.1.1-10) 6.1.1 20160724 to compile.

#include <iostream>

int sum(int x, int y)
{
    return x + y;
}

int main(int argc, char *argv[])
{
    using std::cout;
    int (*) (int, int) = &sum;
    cout << "what" << '\n';
}

int main()
{
    int (*) = 20;
}

It's very likely to be related to this bug reported by Zack Weinberg:

Bug 68265 - Arbitrary syntactic nonsense silently accepted after 'int (*){}' until the next close brace

(From Why does this invalid-looking code compile successfully on g++ 6.0? :)

The C++ compiler fails to diagnose ill-formed constructs such as

int main()
{
    int (*) {}
    any amount of syntactic nonsense
    on multiple lines, with *punctuation* and ++operators++ even...
    will be silently discarded until the next close brace
}

With -pedantic -std=c++98 you do get "warning: extended initializer lists only available with -std=c++11 or -std=gnu++11", but with -std=c++11, not a peep. If any one (or more) of the tokens 'int ( * ) { }' are removed, you do get an error. Also, the C compiler does not have the same bug. Of course, if you try int (*) (int, int) {} or other variants, it erroneously compiles.

The interesting thing is that the difference between this and the previous duplicate/bug reports is that int (*) (int, int) = asdf requires asdf to be a name in scope. But I highly doubt that the bugs are different in nature, since the core issue is that GCC is allowing you to omit a declarator-id.

[n4567 §7/8]: "Each init-declarator in the init-declarator-list contains exactly one declarator-id, which is the name declared by that init-declarator and hence one of the names declared by the declaration."

Here's an oddity: int (*) (int, int) = main; In this specific scenario, GCC doesn't complain about taking the address of main (like arrays, &main is equivalent to main).
https://codedump.io/share/u8Kw9VQdIvKZ/1/why-am-i-able-to-assign-a-function-reference-to-an-anonymous-function-pointer-variable
CC-MAIN-2016-50
refinedweb
333
66.47
GCM push notifications android I am currently working on a blog series about typical problems that developers are facing when using Qt for mobile app development. I think that might be a good topic for the next blog post. :) - SGaist Lifetime Qt Champion @ekkescorner sur there is, but usually specific to their own applications except in the case of V-Play that provides the support through their framework. @Schluchti In the categories of stuff that can help: - for Qt related libraries - qt-pods project @SGaist V-Play is not a solution for me - I'm using Qt because it's Open Source and I want to use Open Source for any plugins / extensions / libraries. V-Play is a great SDK, but V-Play PlugIns are not OSS Mobile Apps are special. Developing mobile Apps with Qt there are many common things missed: PushServices, Access to Contacts, Phone, Android Intents, iOS App Extensions, Battery state, SignaturePad, ... all those stuff you usually need, but not found from Qt Classes. And you need it for Android and iOS and probably W10. All mobile devs together could develop a community market place. Not for Qt in general - for Qt mobile Apps. Current solution is that devs have to learn HowTo create the native Android / iOS stuff and HowTo call this from Qt. I'm sure there are many devs doing the same again and again. Would be cool to have a common place where devs can provide solutions and take a look at other devs solutions HowTo handle such kind of stuff for Android and iOS. take a look at PhoneGab / Cordova Plugins: and or per ex. a SignaturePad for ReactNative: I found from. Try to find a Signature Pad for QQC2 apps running on Android and iOS. there was one Qt Blog about Intents with Qt for Android, part 1 and nothing else. also no info HowTo solve similar use-case für iOS. There are also some great blogs from KDAB. Now with QQC2 we have the UI Controls we need for mobile, but I'm really missing a central place from Qt from where you can find all those important extensions/libraries for mobile app developers. ... seems I'm running out of scope from this thread ;-) . - ekkescorner Qt Champions 2016 @Schluchti said in GCM push notifications android: . great idea ;-) best would be a small example project at gitHub together with blogpost Hi, As already mentioned by @SGaist, V-Play provides support for Push Notifications via GCM and OneSignal for Android and iOS. Also, local notifications are possible with V-Play. It's really convenient to add, check out this code sample for OneSignal: import QtQuick 2.1 import VPlayPlugins 1.0 OneSignal { id: onesignal appId: "<ONESIGNAL-APP-ID>" onNotificationReceived: { console.debug("Received notification with message:", message) } } That's all the code you'll need! As pointed out by @ekkescorner, you will need a V-Play Indie- or Enterprise license starting from $49 / month to use these features. By the way, support for Firebase is currently in the pipeline ;) Cheers, Lorenz @ekkescorner Yeah, that's what I was thinking about ;-). Hope to get the first article done by the end of this week. :) - m.kuncevicius Thank you for answers guys! I knew before about V-Play, but I'm not interested in using it since it is not free. @Schluchti it would be great if you don't mind sharing your code! First part is now available. I hope it covers everything. @Schluchti great article - thx providing your experiences and sample app. Second part is now done: Hope I didn't miss anything (implemented the notifications a while ago and now tried to remember as much as I could ;-)). 
If something is missing, please let me know. @Schluchti cool - blog look great :) probably next 2 weeks I'll test it. just still optimizing all my apps for Qt 5.8 This post is deleted!? @Schluchti said in GCM push notifications android:? Hi @Schluchti, I deleted my last post because after some investigations, I realiced Qt creator has a bug which makes impossible to use the last gradle version (or 3.3 in my case). QT-BUG I will give it a try once qt creator 4.3 is out. Anyway, thanks again for your blog. I hope you keep doing these amazing series. Regards, @Schluchti Finally I make it work. The python scripts need to be updated to make it works with python3 since apns2 it's only compatible with it at version 0.52. But everything works like a charm.. Thank you a lot. PD: I can pull the python script modification to github if you want. @Atr0n I am glad that it worked :) It would be great if you could create a pull request for it, many thanks! @Schluchti Nice, thanks! I have a final question for you: How do you stop notifications? For example, if the user log out. I was thinking on moving MyGcmListenerService to the main Activity and start it from there. Then I can start and stop the service when I want. What do you think? Regards, @Atr0n I haven't done that, but I would probably try a different approach first. It's just a personal preference, but moving such a central component to a different place would be to risky for me. (I would be afraid to break something and miss messages in certain occasions). Instead I would try to store the state (user logged in) somewhere and only show the notification when the user is logged in. One could do that either server-side (probably not that comfortable if the user logs in from different devices) or client-side. For example: You could store a boolean flag somewhere (e.q in SharedPreferences, so you can easily access it from the Java part) that saves the state (logged in/logged out). If you receive a new notification you first check if you are logged in. If you aren't, you discard the receive message. But I am not an android expert...so please take that advice with a grain of salt ;-). P.S: The disadvantage of that approach is, that you always receive messages although they aren't displayed. So if you are sending a lot of messages this could produce unnecessary load. Another thing is, that if you have a app, where users can explicitly disable push notifications, you are kind of screwing them over with that approach. (because messages are received anyhow) In that case I would probably go the way you described and look for ways to disable that feature completely (or even better: don't send notifications from the server in the first place) @Schluchti Thanks again for your time! Finally, I just want to share with you a piece of code that could help the people follow your tutorials. It's a notification sender using Qt code. I am not a Github guy, so I post it here in case you want to use it (or anyone else). Note that if you are using Windows, you need to install openssl 1.02 in your computer. Edit: This is android only. 
http.cpp

#include "http.h"
#include <QUrl>
#include <QNetworkRequest>
#include <QDebug>

http::http(QObject *parent) : QObject(parent)
{
    m_manager = new QNetworkAccessManager(this);

    QUrl url("");
    QNetworkRequest request(url);
    request.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");
    request.setRawHeader(QByteArray("Authorization"), QByteArray("key=your key here"));

    QByteArray data = "{\"data\" : { \"message\": \"Hello world\"}, \"to\" : \"your target here\" }";

    connect(m_manager, &QNetworkAccessManager::finished, this, &http::finished);
    m_manager->post(request, data);
    qDebug() << data;
}

void http::finished(QNetworkReply *reply)
{
    qDebug() << reply->error() << reply->errorString();
    m_manager->deleteLater();
    m_manager = 0;
}

http.h

#ifndef HTTP_H
#define HTTP_H

#include <QObject>
#include <QNetworkAccessManager>
#include <QNetworkReply>

class http : public QObject
{
    Q_OBJECT
public:
    explicit http(QObject *parent = nullptr);

signals:

public slots:
    void finished(QNetworkReply *reply);

private:
    QNetworkAccessManager* m_manager;
};

#endif // HTTP_H
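If you just want to trigger a notification while testing the Android side, the same downstream message can be posted with any HTTP client. Below is a minimal Python sketch using the requests library; the endpoint shown is the legacy GCM/FCM HTTP endpoint, and the key and token placeholders are assumptions you must replace with your own server key and device registration token, exactly as in the Qt code above.

import json
import requests

# Assumptions: the legacy FCM/GCM HTTP endpoint and a server key from the
# Firebase console; the device token is the one obtained by the app's
# registration step. Both placeholders below must be replaced.
FCM_ENDPOINT = "https://fcm.googleapis.com/fcm/send"
SERVER_KEY = "your key here"
DEVICE_TOKEN = "your target here"

payload = {
    "data": {"message": "Hello world"},   # same JSON body as the Qt sender above
    "to": DEVICE_TOKEN,
}

response = requests.post(
    FCM_ENDPOINT,
    headers={
        "Content-Type": "application/json",
        "Authorization": "key=" + SERVER_KEY,
    },
    data=json.dumps(payload),
)
print(response.status_code, response.text)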
https://forum.qt.io/topic/76826/gcm-push-notifications-android/23
CC-MAIN-2019-09
refinedweb
1,294
65.52
Scripting A simple form of scripting is to just write marcel commands in a text file. From an ordinary shell, you can execute this script by redirecting it to the marcel executable, e.g. marcel < script.marcel. A more powerful form of scripting can be done from Python, using the marcel.api module. With this module, you have access to the operators of marcel, neatly integrated into Python. For example, here is the "recent files" example in Python: import os from marcel.api import * for file in (ls(os.getcwd(), file=True, recursive=True) | select(lambda f: now() - f.mtime < days(1))): print(file) ls(os.getcwd(), file=True, recursive=True) invokes the ls operator as a function, passing in the current directory, requesting only files (file=True), and recursive exploration of the directory (recursive=True). The resulting Files are passed to the select function, which checks for Files modified in the past day. The shell part of the command (ls ... | select ...) yields a Python iterator, so that the resulting Files can be accessed using a for loop.
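Since the pipeline is an ordinary Python expression, it can also be wrapped in a function and parameterized. The sketch below reuses only the operators shown above (ls, select, now, days); the assumption that days() accepts values other than 1, and the example directories, are illustrative rather than taken from the marcel documentation.

import os
from marcel.api import *

def recent_files(directory=None, max_age_days=1):
    """Return a pipeline yielding files under `directory` modified within
    the last `max_age_days` days, reusing the ls/select pattern shown above."""
    directory = directory or os.getcwd()
    return (ls(directory, file=True, recursive=True)
            | select(lambda f: now() - f.mtime < days(max_age_days)))

# Print yesterday's files in the current directory, then a wider sweep of the home directory.
for f in recent_files():
    print(f)

for f in recent_files(os.path.expanduser("~"), max_age_days=7):
    print(f)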
https://www.marceltheshell.org/scripting-1
CC-MAIN-2021-04
refinedweb
178
57.98
Android Gaming • Advanced Imaging Sensors • Android Wifi Stack ODROID Year Three Issue #35 Nov 2016 Magazine Ultra-HD 4K ODROID Ambilight Create a synchronized video light display • Linux Gaming: Get Serious with the SeriousEngine • Android Nougat: OpenJDK-based Java and a new graphics API What we stand for.. We are now shipping the ODROID-C2 and XU4 devices to EU countries! Come and visit our online store to shop! Address: Max-Pollin-Straße 1 85104 Pförring Germany Telephone & Fax phone: +49 (0) 8403 / 920-920 email: [email protected] Our ODROID products can be found at EDITORIAL O ne of the most versatile peripherals for the ODROID is the Arduino, which can be programmed as a standalone controller for many projects, from robots to home automation. A simple project to get started with the Arduino is to create an Ambilight system, which is a stunning background light display that synchronizes itself with live video. The engineers at Hardkernel demonstrated it at ARM TechCon 2016, and wrote a guide for you to easily create the same stunning light show in your own home. To further enhance your viewing experience, we present a tutorial on setting up a MythTV front end as well as an article on enabling accelerated video playback in an ODROID-C2 web browser. For more experienced DIY enthusiasts, Miltiadis presents his lights controller with SMS notifier project that can be adapted and expanded to any IoT application, and Jörg shows us how to set up an alarm system with window sensors. Andy expands upon our previous Docker series with up-to-date information, Tobias introduces us to the Serious gaming engine, Nanik describes the Android WiFi stack, and Bruno has fun with Ancestor, a visual stunning Android game with amazing gameplay. ODROID Magazine, published monthly at, is your source for all things ODROIDian. Hard Kernel, Ltd. • 704 Anyang K-Center, Gwanyang, Dongan, Anyang, Gyeonggi, South Korea, 431-815 Hardkernel manufactures the ODROID family of quad-core development boards and the world’s first ARM big.LITTLE single board computer. For information on submitting articles, contact [email protected], or visit. You can join the growing ODROID community with members from over 135 countries at. Explore the new technologies offered by Hardkernel at. OUR AMAZING ODROIDIAN STAFF: Rob Roy, Chief Editor I’m a computer programmer in San Francisco, CA, designing and building web applications for local clients on my network cluster of ODROIDs. My primary languages are jQuery, Angular JS and HTML5/CSS3. I also develop prebuilt. Bruno Doiche, Senior Art Editor Bruno lately is fiddling with 2 of his ODROIDs, playing games and being amazed by the responsiveness of his machines with this new and amazing system. He is making sure that he never runs out of gaming column ideas for the readers that discover new games along with him! Manuel Adamuz, Spanish Editor. Nicole Scott, Art Editor Nicole is a Digital Strategist and Transmedia Producer specializing in online optimization and inbound marketing strategies, social media management, and media production for print, web, video, and film. Managing multiple accounts with agencies and filmmakers, from web design and programming, Analytics and Adwords, to video editing and DVD authoring, Nicole helps clients with the all aspects of online visibility. Nicole owns anODROID-U2, and a number of ODROID-U3’s and looks forward to using the latest technologies for both personal and business endeavors. Nicole’s web site can be found at. 
James LeFevour, Art Editor I’mrew Ruggeri, Assistant Editor. Venkat Bommakanti, Assistant Editorosh Sherman, Assistant Editor I’m from the New York area, and volunteer my time as a writer and editor for ODROID Magazine. I tinker with computers of all shapes and sizes: tearing apart tablets, turning Raspberry Pis into PlayStations, and experimenting with ODROIDs and other SoCs. I love getting into the nitty gritty in order to learn more, and enjoy teaching others by writing stories and guides about Linux, ARM, and other fun experimental projects. INDEX IoT device - 6 advanced imaging sensors - 15 alarm central - 17 Ancenstor - 22 Ambilight - 23 docker - 26 linux gaming - 31 android development - 33 myth tv - 36 android nOUGAt - 39 video helper - 40 meet an odroidian - 44 IoT DEVICE ODROID-C2 BUILDING AN IoT device USING AN ODROID-C2 Street and home lights controller with SMS notifier by Miltiadis Melissas I t is common knowledge that cities consume a lot of energy operating their street-lighting infrastructure. Individual users face similar situations for controlling the lighting in their homes efficiently and effectively. The IoT lighting solution presented in this article, which is based on Hardkernel’s ODROIDC2, an excellent 64-bit quad-core single board computer (SBC) (. ly/2bWxgrK), can help create a safe, energy-efficient environment with smart capabilites. Smart street-lighting and home lighting sensors can easily be connected to the network as Internet of Things (Iot). The sensor, which is a photoresistor in this project, can turn lights on and off upon successful reading in order to ensure the lowest energy consumption and proper operation. Moreover, home users can be notified from an IoT device by means of SMS messages sent to mobile phones. The SMS messages can notify the users of the exact timing the lighting is set to on/off back in their home and reporting possible malfunctions. This is the third project in my series of tutorials regarding Internet of Things (IoT) using an ODROID-C2. This is also the first time we make use of a photoresistor/photocell sensor. Our previous projects were built and operated using only actuators, such as LEDs and servos. This article will guide you on how to drive such an electronic component, controlling it as an input, by using the WiringPi library The assembled lighting solution using inside Python programming language, and thus setting the an ODROID-C2 and C Tinkering Kit basis for our next IoT project: Wine Preserver and Notifier. The IoT device works under the normal light conditions during typical daily exposure, and the photoresistor keeps the LED off under these circumstances. However, when it get dark, the photoresistor triggers the ODROID-C2 and the LED turns on and blinks, simulating the operation of the street/home lights at night. The interesting thing is that when this happens, the ODROID-C2 notifies the user that this operation has started successfully by sending an SMS message to his/her mobile phone or tablet. This is a complete IoT device that makes use of a sensor (photoresistor), an actuator (LED) and a cloud service (SMS messaging). Building the circuit We will use a breadboard in order to avoid any soldering and the hassle of designing a PCB. We will connect various circuit components with the ODROID-C2 GPIO pins using Dupont Jumper Wires, as shown in this page. 
ODROID MAGAZINE 6 IoT DEVICE ODROID-C2 Hardware ODROID-C2 running Ubuntu USB power supply 5V/2A and cable, use the right one provided by Hardkernel’s store () Breadboard and Dupont Jumpers (male to female) 1 X Photocell/Photoresistor 1 X 1uF Capacitor 1 X LED 1 X 220 Ohm Resistor Software Ubuntu 16.04 v2.0 available from Hardkernel at () Python language for programming. Fortunately for us Ubuntu 16.04 v2.0 from Hardkernel comes pre-installed with this programming tool WiringPi Library for controlling ODROID-C2 GPIO pins. For instructions on how to install this go to Hardkernel’s excellent setup guide available at () Python language for programming the IoT device Building our IoT device As mentioned previously, we will use a breadboard to build our IoT device with the electronic components and Dupont jumper wires. It’s a good idea to disconnect the power supply from the ODROID-C2 before connecting anything on its pins, because you can destroy it with a short circuit if you make a wrong connection accidentally. Double check with the schematic in this article, and make the correct connections before you power it up. Circuit diagram For connections, we used the male to female Dupont wires. The female side of this kind of jumper connects to the male header of the ODROID-C2 and the other one -male- connects to the holes of the Breadboard. Please refer to Hardkernel’s pin layout schematic at next page as you create the connections, which is also available at: Physical Pin1 provides the VCC (3.3V) to our circuit, and we connect it on the second vertical line of our Breadboard. Since we are going to use Pin6 as the common Ground, we connect that to the second vertical line of our Breadboard, near the edge. The photoresistor/photocell is connected to physical pin18 on one of its side, the other one goes to VCC (3.3V). Please note that this red Dupont wire/jumper connected to the vertical line of our Breadboard. Kindly refer again to our schematic in Figure 2 for the correct connections. Extra care must be given to the polarity of the capacitor (1uF), since we need to connect its negative side marked by (-) symbol with the common Ground. The positive side of the capacitor is connected to the photoresistor through the yellow Dupont wire and from there to physical pin18. We will explain the role of ODROID MAGAZINE 7 IoT DEVICE ODROID-C2 capacitor (1uF) in the next paragraph. Finally, the LED is connected to physical pin7 for its anode (+) while the cathode (-) is connected of course to the common Ground. That’s it! All of our physical wiring is now connected. The role of the resistor and capacitor For this circuit, we need to use the 3.3v out from the ODROID-C2 Pin 1, as well as Ground (GND) of course. We connected these from the ODROID-C2 to the Breadboard. The operational LED is connected to pin7 through a 220 Ohm resistor in order to limit the amount of current that flows through the LED. The presence of the resistor ensures that the LED components will be keeping safe under an accidental very large current. The role of capacitor is different, however. This is because we need the capacitor to act like a bucket and the photoresistor like a thin pipe. To fill a bucket up with a very thin pipe takes enough time that ODROID-C2 Pin Layout you can figure out how wide the pipe is by timing how long it takes to fill the bucket up halfway. In this case, our bucket is a 1uF capacitor. 
So, the photo resistor is connected through the 1μF capacitor to pin18 of the ODROID-C2 and the negative side of the capacitor is connected to the common Ground. Since the hardware setup is now done, let us see how we can send SMS message from our IoT device. Using Twilio Twilio is a Python package that sends text messaging (SMS). Twilio is not part of the standard Python library, but it’s one of the thousands of external Python packages that are available for us to download and use. Python developers usually use one of the two common utilities to automatically download and setup necessary folders and files: “easy-install” and “pip”. “easy-install” comes with the setuptools Python library, which is standard for Python and pip comes with the “pip” library. “easy_install” and “pip” are executed in the terminal that can be used to install Python packages. Since the ODROID-C2 Linux image () comes pre-installed with Python, it is very easy to install Twilio on our IoT device. The only step we need to take is starting up the Terminal application and type in: $ sudo easy_install twilio Enter the administrator password to give easy-install permission to write to our system folders, which is “odroid” on the official Hardkernel Linux image. Alternatively, if you want to install Twilio with pip, which is the installer for Python, you have to first install pip on ODROID-C2: ODROID MAGAZINE 8 IoT DEVICE ODROID-C2 $ sudo easy_install pip $ sudo pip install twilio If you want to check whether Twilio was properly installed, enter the Python command to enter the Python interpreter in the Command Prompt or Terminal application and type in these 2 commands: > import twilio > print(twilio._version_) If it prints a version number of Twilio, the setup has been completed properly. Twilio registration Go to the Twilio registration page at, and sign up for free, as show in Figure 4. Twilio needs your mobile number, so provide it in the appropriate field. Twilio will then send you a verification code on the phone that you have previously registered in order to verify that you are not a software bot. Enter the code in appropriate the box, and Twilio will give you a phone number. Make a note of the phone number, and continue with the registration process. Finally, you will land on a page with lots of activities, including make a call, send an SMS message, receive a call, and receive an SMS message. From this page, we need Twilio’s API authorization token. Look for the button titled “Go to your account” and click it. On the dashboard page is the account SID and the authorization token, as shown in Figure 5. Next, you have to copy and paste it to our program. Note that your screen may look a bit different if it is your first time you are logging to Twilio. In my case, my account SID authorization token were at the top of the page (blanked out). Yours may be somewhere at the bottom. Twilio signup web page Twilio dashboard with API token ODROID MAGAZINE 9 IoT DEVICE ODROID-C2 Explaining Twilio code You can find the sample code below from the Twilio Python Helper Library available at. All of the source code relevant to the Twilio service is available from the GitHub repository at. 
from twilio.rest import TwilioRestClient # Your Account SID from account_sid = “xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx” # Your Auth Token from auth_token = “xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx” client = TwilioRestClient(account_sid, auth_token) message = client.messages.create(body=”Hello from ODROID-C2”, to=”+306972438526”, # Replace with your phone number from_=”+12052364839”) # Replace with your Twilio number print(message.sid) You will notice that inside Twilio, there is a folder called “rest”, and inside that folder is a class called “twiliorestclient”. We make use of that class in the following code snippet: <from Twilio.rest import Twiliorestclient> In the following line of code, this line of code, we assign a variable client to twilioRestClient for verification: <client=twilioRestClient (account_sid, auth_token)> Finally, with the following line, we create the message and print it or actually sending it to our mobile phone: <message=client.sms.messages.create> Sending the SMS message First, copy and paste the account SID and auth-token to your program. Next, change the body of the text message to something like: “Hello from ODROIDC2” as I did in the below example. In the field called “to”, change it to your phone’s mobile number. In the field called “from_”, you have to fill in your Twilio number: this is the number Twilio gave you upon registration. If you have not written it down, go back to your Twilio account ( ) and on the top of the page find the numbers tab and click it. You will get your phone numbers from Twilio, as shown in Figure 6. Now save and run the program and see if it works: from twilio.rest import TwilioRestClient # Your Account SID from account_sid = “xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx” # Your Auth Token from ODROID MAGAZINE 10 IoT DEVICE ODROID-C2 auth_token = “xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx” client = TwilioRestClient(account_sid, auth_token) message = client.messages.create(body=”Hello from ODROID-C2”, to=”+30XXXXXXXXXX”, # Replace with your phone number from_=”+120XXXXXXXXX”) # Replace with your Twilio number print(message.sid) Your phone should notify you that you have gotten an SMS message. The final step is to connect this SMS code with our photoresistor and make our IoT device a smart one. Twilio setup screen showing the phone number used in the code snippet Connecting Twilio to the photoresistor Now that we know how Twilio is working, let’s see how to connect it with our photoresistor code. The tricky part is how to calibrate the photoresistor according to our light conditions in the room. There is always a threshold that we need to find out by some trial and error in order to trigger the IoT device, such as the blinking of the LED and the sending of the SMS message to the user at the same time. Remember that the blinking of the LED simulates the normal operation of lights during the night, in the street or at home, and that the SMS sent to user confirms normal operation. Please study the code below and then follow along as I explain line by line what is happening. 
#!/usr/bin/env python # Example for RC timing reading for ODROID-C2 # Must be used with wiringpi2 import wiringpi2 as odroid, time from twilio.rest import TwilioRestClient ODROID MAGAZINE 11 IoT DEVICE ODROID-C2 DEBUG = 1 odroid.wiringPiSetup() LEDpin = 7 odroid.pinMode(LEDpin,1) def RCtime(RCpin): reading = 0 odroid.pinMode(RCpin,1) odroid.digitalWrite(RCpin,0) time.sleep(0.1) odroid.pinMode(RCpin,0) # This takes about 1 millisecond per loop cycle while (odroid.digitalRead(RCpin) == 0): reading += 1 return reading def Send_SMS(): account_sid = “XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX” # Your Account SID from auth_token = “XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX” # Your Auth Token from client = TwilioRestClient(account_sid, auth_token) message = client.messages.create(body=”Hello from Python”, to=”+30XXXXXXXXXX”, # Replace with your phone number from_=”+120XXXXXXXXX”) # Replace with your Twilio number print(message.sid) while True: print RCtime(5) # Read RC timing using physical pin #18 time.sleep(300) if (RCtime(5)>2500): Send_SMS() for i in range (0,300): if (RCtime(5)>5500): odroid.digitalWrite(LEDpin,1) time.sleep(0.02) odroid.digitalWrite(LEDpin,0) time.sleep(0.02) First, we import the wiringpi2 module and we create the object “odroid” because we want to control the GPIO pins of our ODROID-C2: <import wiringpi2 as odroid> There is a detailed tutorial from Hardkernel’s excellent support page on how to ODROID MAGAZINE 12 IoT DEVICE ODROID-C2 download and install WiringPi to your ODROID-C2 running Ubuntu at http:// bit.ly/2ba6h8o. Next, we import TwilioRestClient from twilio.rest, something that we have explained in detail in the previous paragraph: <from twilio.rest import TwilioRestClient> Then, with the following line of code, we reference the GPIO wiring according to the table provided by Hardkernel for ODROID-C2, as shown in Figure 2: <odroid.wiringPiSetup()> We assign the pin7 for the LED: <LEDpin=7> Immediately after that, we set it as an output <odroid.pinMode(LEDpin,1)> In the next section of code, we define the function called RCtime. This is a very important function for measuring up the levels of light in the room. We keep track of these levels with a counter: <reading = 0> Next, we setup the relevant pin i.e pin5 (physical pin18) as an output: <odroid.pinMode(RCpin,1)> We then write to that pin: <odroid.digitalWrite(RCpin,0)> We alternate state almost immediately after that using time.sleep(0.1): <odroid.pinMode(RCpin,0)> Pin5 is set now to low, and on the next line of code we read its state, adding +1 to our counter: <while (odroid.digitalRead(RCpin) == 0):> <reading += 1> Finally, we check if the level of darkness is under its threshold: <RCtime(5)>2500> The number 2500 is calculated with some trial and error endeavours looking for the right threshold according to our light conditions in the room. If the light conditions are above this limit, we call the function Send_SMS and we sent the SMS message via Twilio at the same time that we make the LED blink. Please ODROID MAGAZINE 13 IoT DEVICE ODROID-C2 note that the LED blinks for an interval equal to the time that we set for our IoT device to check for the right light conditions in the room. In this example, that interval is 300 seconds, or every 5 minutes. Of course you can set your own intervals for checking. <time.sleep(300)=for i in range (0,300)> Those time intervals must be the same to ensure the right timing. 
In order to make the LED blink, the pin is set to high and then low with a short time interval of 0.02 milliseconds: for i in range (0,300): ….. <odroid.digitalWrite(LEDpin,1> <time.sleep(0.02> <odroid.digitalWrite(LEDpin,0> <time.sleep(0.02> Bringing it all together Now it’s time to run and test our code. Copy and paste the entire code to your Python IDLE document (Integrated Development and Learning Environment) under the name OdroidSMS.py. Remember that all Python scripts have the extension *.py. You can start IDLE from your Ubuntu desktop by simply clicking the Applications Menu (Applications -> Programming -> IDLE). After the file has been saved, run it with sudo privileges from a command prompt after navigating to the directory where the file resides: $ sudo python OdroidSMS.py This is the basic idea of a smart device, and if I can do it, you can do it too. What will be your next step? Take this guide, study it carefully and then expand upon it by creating another, even a more sophisticated IoT device with your ODROID-C2. ODROID MAGAZINE 14 ADVANCED IMAGING SENSORS How to Get Fancy Color Images from A Simple Sensor Image Using the Bayer Pattern to Create an RGB Color Image By [email protected] I t is well known that the color of a pixel can be represented by a mixture of three primary colors: red, green, and blue. For this, many people might think that a single pixel in a camera’s sensor also has three colors: red, green, and blue. For example, in a 1024 x 1024 image, it is generally assumed that there is the same amount of pixels, 1024 x 1024, of red, green and blue colors. However from a manufacturing point of view, it is very complicated and expensive to put three different types of color sensors in one location. Therefore, a beam splitter is usually used to light up the sensors on different sensor panels. As a result, this approach is prohibitively complex, bulky and expensive. A more practical and feasible alternative is to have monochrome sensors with an accompanying color filter. Here, the filter has the same number of cells as the image pixels. For example, in a 1024 x 1024 image, we use 1024 x 1024 monochrome sensors with a color filter of 1024 x 1024 cells with three colors: red, green and blue. Figure 1 shows two diagrams the the leftmost is of multi-sensors with beam-splitting. The rightmost diagram is of monochrome sensors with color filter array, or CFA. Although various patterns can be used for a CFA, the Bayer pattern is the most common. In the next column we show the basic 2 x 2 form of the Bayer pattern which has two greens, one red, and one blue filter. We use more green Two different structures of color sensor: multi-color sensors(left) and single monochrome sensors(right) subpixels to mimic the sensitivity of human eyes which are more tuned to detect intensities of the color green. The Bayer pattern can be thought of the combination of red, green and Basic form of the Bayer pattern: The Bayer pattern is the combination of red, green and blue patterns. blue patterns as shown below. For example, the first row of a 1024 x 1024 image is made up of 512 red pixels and 512 green pixels. Similarly, the second row is made up of 512 green pixels and 512 blue pixels. Therefore, if we use one byte of data for each pixel, the total data size of 1MB of a 1024 x 1024 image is composed of 0.25MB red data, 0.25MB of blue, and 0.5MB of green data. This is a great size reduction for image data. 
The data size of an image from a setup with three different color sensor panels would be 3MB or three times the size compared the 1MB image from the Bayer pattern. However, we inevitably lose detailed color information by using the Bayer pattern color sensor. For example, if we look at the top left pixel in figure 3, we only get the intensity of green. Therefore, we have to “guess” the other color values for this pixel. Generally, the interpolation is used to estimate the missing values. One of the simplest methods is the Pixel Doubling Interpolation. Using the nomenclature used in Figure 4, we get the full RGB color intensities for each pixel using the following formula: ODROID MAGAZINE 15 ADVANCED IMAGING SENSORS Top left pixel: (R, G, B) = (R1, G2, B4) Top right pixel: (R, G, B) = (R1, G2, B4) Bottom left pixel: (R, G, B) = (R1, G3, B4) Bottom right pixel: (R, G, B) = (R1, G3, B4) Although we 2 x 2 Bayer pattern block for the Pixel Double Interpolation. get the full color data with the least amount of calculation using this technique, we also get the worst quality image. To enhance the image quality, more pixels in the neighborhood of the pixel being filled in are used, in addition to using a more complicated formula. One example of this is the Bilinear Interpolation method. article titled “Understanding oCam’s Global Shutter” (http:// bit.ly/2ee4sJ9, many ODROID users requested a global shutter camera after the release of the oCam-1MGN-U, the monochrome global shutter camera. However, no global shutter color camera has been provided to be used with ODROIDs because there is no color sensor with both a global shutter and ISP functionality. A lot of effort has been given to solve this problem through the use of software, instead of waiting for appropriate sensor hardware to become available. Fortunately, a new type of global shutter color camera has been developed for ODROIDs using a proprietary algorithm. This new camera, the oCam1CGN-U, will be available around December 2016. Figure 6 shows the striking improvement in the color quality. 6 x 6 Bayer pattern block for the Bilinear Interpolation. Original color image obtained by using Bilinear Interpolation of Bayer image (left) and the color image enhanced through a proprietary enhancement algorithm (right) applied to the original image. Pixel R33: (R, G, B) = (R33, (G23+G34+G32+G43)/4, (B22+B24+B42+B44)/4) Pixel G34: (R, G, B) = ((R33+R35)/2, G34, (B24+B44)/2) Pixel G43: (R, G, B) = ((R33+R53)/2, G43, (B42+B44)/2) Pixel B44: (R, G, B) = ((R33+R35+R53+R55)/4, (G34+G43+G45+G54)/4, B44) For more information about the various interpolation techniques, you can refer to “Image Demosaicing: A Systematic Survey by Xin Li, Bahadir Gunturk, and Lei Zhang (. ly/2eHnGGm). The problem with Bilinear Interpolation is the poor color quality. To overcome this limitation, many CMOS sensor manufacturers use a special processor known as an Image Signal Processor, or ISP. This further enhances the image obtained by interpolating the Bayer pattern image. 
Although the usefulness of a global shutter camera is well known, as discussed in the August 2016 ODROID Magazine ODROID MAGAZINE 16 The new camera will have the following specifications: Sensor: OnSemi AR0134 Bayer Color CMOS image sensor Lens: Standard M12 lens (changeable) Image sensor size: 1/3 inch Image resolution: 1280 x 960 Shutter: Electric global shutter Interface: USB 3.0 super-speed In next month’s article, an interesting example will be developed using this new global color shutter camera with the ODROID platform. ALARM CENTRAL Alarm Central Part 1 - RF24 Window Sensor and Mirf Library by Jörg Wolff T his is the first part of my series about my Alarm Central project that uses an ODROID-C1 running Android. The project consists of the Alarm Central Android app, ultra low power window sensors, and ultra low power motion sensors, which are not yet ready. The sensors communicate with the ODROID-C1 using Nordic Semiconductor nRF24L01 2.4Ghz modules. This article details the ultra low power window sensors and the communication library, mirf, which I ported to Android. During my first tests, I decided to use the ODROID-C1 instead the ODROID-C2, because the latter does not have a native SPI interface and the bitbang SPI driver for the ODROID-C2 is too slow to use with the nRF24L01 modules. The development of other parts the system, such as the door lock and fingerprint sensor, is ongoing. The Alarm Central project is not yet installed in my house but, as soon as I finish the case for the ODROID-VU7+ and the ODROIDC1, I will install it. In case of an alarm, the app will send a short message through the internet to a smartphone. Studio, and the components were hand soldered, which took about 20 to 30 minutes per board. Figure 2 Nrf24 Window Sensor Window Sensor pcb Partlist Alarm Central Home Window sensor The window sensors are based on an ATTiny84 processor. I designed a small 24mm x 60mm board which contains a reed contact, a connector for the nRF24L01, an ISP connector for flashing the processor, a holder for a CR2450, and some additional parts. The PCB was ordered from Itead Printed board Attiny 84A-SSU SO-14 NRF24L01 module Battery Holder HU2450 Renata Reed Contact NO 13x2.0 Resistor 5M1 SMD1206 Capacitor 22u/16V 4.3x4.3 Capacitor 10n/50V 3.2x1.6 Pin Strip 2x3 2.54 Female strip 2x4 2.54 Neodym Magnet 10x1 (Dimensions are in mm) ODROID MAGAZINE 17 ALARM CENTRAL It would be possible to design the board to be smaller, but the larger size accommodates a 650mAh CR2450 battery that provides a long battery life. The ATtiny is designed to sleep for 4 seconds, then wake up and send a 20 bytes message to the Alarm Center. If the reed contact changes state, the ATtiny wakes up and sends a message to the ODROID-C1. During sleep mode, the total current consumption of all components is about 6µA. When the ATtiny is awake, the current jumps for a short period to around a couple of mA. The overall average total current is about 17µA. With a battery capacity of 650mAh this give us a battery life of 3 to 4 years. Without message encryption, the average current would be even less and the battery life would be about 5 years. To reach this low current in sleep mode, the Brown Out Detection is disabled, this makes it impossible to store data in the EEPROM. Occasionally, due to power cycling, there is some data loss such as node number or the AES key. This made me implement data storage in the flash. With data being stored in flash, there is no longer any data loss when power cycling. 
On the sensor board’s first boot, it sends its data unencrypted with the node number, 255. The Alarm Central receives this message and does auto node numbering and returns the AES key. This only happens when Alarm Central is offline and the user has made authentication. For a short time, communication is open. The code for the sensor can be found on Github at. Circuit diagram I could not find a small plastic case that fit my sensor. So, I used a plastic U-profile 15 mm x 15 mm (9/16” x 9/16”) strip, and cut two 68mm (2 5/8”) pieces and glued them together. The most complicated part seemed to be soldering all the small SMD parts, but with the right technique and a little practice, it goes smoothly. It’s best to begin with the ATtiny and only solder one pin so you can adjust the position a bit, then the other pins. It works great if you solder a few of the pins together, since a solder bridge can be removed with desoldering braid and a little soldering flux. You should not ODROID MAGAZINE 18 Sensor and Case forget to clean any excess soldering flux off the components with acetone or a universal cleaner. To reduce the height of the PCB and components, the quartz can be desoldered from top and soldered to the bottom of the nRF24L01 board. Also, the female 2x4 strip can be wetted down 1 or 2 mm (1/16”) and the pins of the 2x4 strip can be cut about 2 mm (1/16”). The total height should be about 13 mm (½”) to make them fit into the case. The KiCad project can be found at. Sensor and nRF24L01 Mirf Library The mirf library is responsible for wireless communication, and was ported to Android. Basically, it is the same code that is used on the ATtiny and the ODROID-C1. The two key differences in the code are with the SPI interface code and a C++ wrapper on the ODROID code. To build the library of ODROID, first install the Android NDK. Next, build the library from jni folder by running the following command: $ ../../ndk-build -B ALARM CENTRAL You can find the source code on Github at. ly/2eiANjl. To use this library in an Android app, it needs a wrapper library such as this: #define LOG_F(fn_name) __android_log_write(ANDROID_ LOG_DEBUG, LOG_TAG, “Called : “ fn_name ) static JavaVM *java_vm; mirf* receiver; /* Model of Mirf wrapping library ported to ODROIDC1 / Android jint JNI_OnLoad(JavaVM* vm, void* reserved) { Copyright (C) <2016> JNIEnv* env; <Jörg Wolff> if (vm->GetEnv(reinterpret_cast<void**>(&env), JNI_VERSION_1_6) != JNI_OK) { return -1; This program is free software: you can redistrib} ute it and/or modify it under the terms of the GNU General Public License as published by // Get jclass with env->FindClass. the Free Software Foundation, either version 3 of // Register methods with env->RegisterNatives. the License, or (at your option) any later version. This program is distributed in the hope that it system(“insmod /system/lib/modules/spicc.ko”); will be useful, system(“insmod /system/lib/modules/spidev.ko”); but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. return JNI_VERSION_1_6; See the GNU General Public License for more details. } You should have received a copy of the GNU General Public License along with this program. If not, see <http:// //mirf(uint8_t _cepin, uint32_t _freq, uint8_t _spi_>. 
channel, uint8_t _payload_size, uint8_t _mirf_CH); */ JNIEXPORT void JNICALL Java_path_to_your_app_MirfSetup(JNIEnv * env, jobject obj, uint8_t ce, uint32_t speed, uint8_t spi_channel, #include <jni.h> uint8_t size, uint8_t mirf_channel) { receiver = new mirf(ce, speed, spi_channel, size, #include <stdio.h> #include <stdlib.h> mirf_channel); //LOG_D(“Setup”); #include <android/log.h> #include <mirf.h> } #ifdef __cplusplus //void config(void); extern “C” { JNIEXPORT void JNICALL #endif Java_path_to_your_app_MirfConfig(JNIEnv* env, jobject obj) { if (receiver != NULL) receiver->config(); #define LOG_TAG “com.jw.mirf” } #define LOG_D(...) __android_log_print(ANDROID_LOG_DE- //void reconfig_rx(void); BUG, LOG_TAG, __VA_ARGS__) JNIEXPORT void JNICALL ODROID MAGAZINE 19 ALARM CENTRAL Java_path_to_your_app_MirfReConfigRx(JNIEnv* env, job- jobject obj, jbyteArray array) { if (receiver != NULL) { ject obj) { jbyte *buf = env->GetByteArrayElements(array, if (receiver != NULL) receiver->reconfig_rx(); NULL); } receiver->transmit_data(buf); env->ReleaseByteArrayElements(array, buf, 0); } //void reconfig_tx(void); JNIEXPORT void JNICALL } Java_path_to_your_app_MirfReConfigTx(JNIEnv* env, jobject obj) { if (receiver != NULL) receiver->reconfig_tx(); //uint8_t status(void); } //uint8_t max_rt_reached(void); //void set_address(uint8_t pos, uint8_t* address); //uint8_t data_ready(void); JNIEXPORT void JNICALL JNIEXPORT int JNICALL Java_path_to_your_app_MirfSetAddress(JNIEnv* env, Java_path_to_your_app_MirfDataReady(JNIEnv* env, job- jobject obj, jbyte pos, jstring address) { ject obj) { if (receiver != NULL) return receiver->data_ if (receiver != NULL){ const char *nativeString = env- ready(); LOG_D(“MirfDataReady:return 0”); >GetStringUTFChars(address, 0); return 0; receiver->set_address(pos, (uint8_t*)nativeString); } LOG_D(“SetAddress: %s”, nativeString); env->ReleaseStringUTFChars(address, nativeString); //uint8_t read_register(uint8_t reg, uint8_t* buf, uint8_t len); } //uint8_t read_register(uint8_t reg); } //uint8_t write_register(uint8_t reg, const uint8_t* buf, uint8_t len); //uint8_t receive_data(void* buf); //uint8_t write_register(uint8_t reg, uint8_t value); JNIEXPORT jbyteArray JNICALL //void config_register(uint8_t reg, uint8_t value); Java_path_to_your_app_MirfReceiveData(JNIEnv* env, //uint8_t get_data(void* buf); jobject obj, jbyte size) { //uint8_t send_data(void* buf); if (receiver != NULL) { jbyte *data=(jbyte *) malloc(size*sizeof(jbyte)); //void power_up_rx(void); receiver->receive_data(data); JNIEXPORT void JNICALL jbyteArray result=env->NewByteArray(size); Java_path_to_your_app_MirfPowerUpRx(JNIEnv* env, job- env->SetByteArrayRegion(result, 0, size, ject obj) { if (receiver != NULL) receiver->power_up_rx(); data); delete[] data; } return result; } return 0; } //void power_up_tx(void); JNIEXPORT void JNICALL Java_path_to_your_app_MirfPowerUpTx(JNIEnv* env, jobject obj) { if (receiver != NULL) receiver->power_up_tx(); //uint8_t transmit_data(void* buf); JNIEXPORT void JNICALL Java_path_to_your_app_MirfTransmitData(JNIEnv* env, ODROID MAGAZINE 20 } ALARM CENTRAL //void power_down(void); //uint8_t flush_rx(void); The Mirf library and the functions need some glue code, so the Java app can call the C++ library: static { System.loadLibrary(“mirf_android”); JNIEXPORT int JNICALL Java_path_to_your_app_MirfFlushRx(JNIEnv* env, job- } ject obj) { if (receiver != NULL) return receiver->flush_rx(); return 0; public native int MirfSetup( byte ce, int speed, byte spi_channel, byte size, byte 
mirf_channel); } public native void MirfConfig(); public native void MirfReConfigTx(); //uint8_t flush_tx(void); public native void MirfReConfigRx(); JNIEXPORT int JNICALL public native void MirfPowerUpRx(); Java_path_to_your_app_MirfFlushTx(JNIEnv* env, job- public native void MirfPowerUpTx(); ject obj) { public native void MirfSetAddress(byte pos, String if (receiver != NULL) return receiver->flush_tx(); address); return 0; public native byte[] MirfReceiveData(int size); public native void MirfTransmitData(byte[] data); } public native int MirfDataReady(); public native void MirfStartListening(); //void start_listening(void); public native void MirfStopListening(); JNIEXPORT void JNICALL public native int MirfFlushRx(); Java_path_to_your_app_MirfStartListening(JNIEnv* env, public native void MirfDelayMicroSeconds(int us); jobject obj) { if (receiver != NULL) receiver->start_listening(); } And to create a mirf object, this code as example in the onCreate() function: //void stop_listening(void); /* JNIEXPORT void JNICALL * Some needed constants for the mirf object Java_path_to_your_app_MirfStopListening(JNIEnv* env, */ jobject obj) { byte pin_ce = 6; //Header pin 22 if (receiver != NULL) receiver->stop_listening(); } byte spi_channel = 0; byte length_payload = 20; byte mirf_channel = 5; int spi_speed = 4000000; //void delay_us(unsigned int howLong); /* JNIEXPORT void JNICALL * Setup the mirf communication Java_path_to_your_app_MirfDelayMicroSeconds(JNIEnv* */ env, jobject obj, int us) { MirfSetup(pin_ce, spi_speed, spi_channel, length_pay- if (receiver != NULL) receiver->delay_us(us); } #ifdef __cplusplus } load, mirf_channel); MirfConfig(); In a loop,or a HandlerThread, the messages can be read from the sensors, as shown in the following code snippet: #endif ODROID MAGAZINE 21 ANDROID GAMING Ancestor ALARM CENTRAL while (true) { MirfPowerUpRx(); A game full of fun with perfect gameplay, visuals and details MirfFlushRx(); MirfStartListening(); while (MirfDataReady() == 0) { MirfDelayMicroSeconds(250); } MirfStopListening(); by Bruno Doiche W hile we and our beloved ODROIDS are still far from emulating the Playstation 3, where we can enjoy playing Journey to the point of exhaustion, we can get a game with similar visuals, puzzles and the added bonus of being an endless runner game with bosses! Ancestor was written by a brother and sister production team that absolutely loved Mass Effect. They teamed up with their father, who is a programmer, and the rest is up to you to figure out how far you can go in this absolutely enjoyable game. Grab your joystick and think fast! inbuffer = Arrays.copyOf(MirfReceiveData(length_ payload), 16); //Do something with inbuffer. try { Thread.sleep(5L); } catch (InterruptedException e) { e.printStackTrace(); } }. supermegaquest.ancestor The wiring of the nRF24L01 module to the ODROIDC1 is as follow: C1.Header.19 – nRF24L01.6 C1.Header.21 – nRF24L01.7 C1.Header.23 – nRF24L01.5 C1.Header.22 – nRF24L01.3 C1.Header.24 – nRF24L01.4 C1.Header.1 – nRF24L01.2 C1.Header.6 – nRF24L01.1 (MOSI) (MISO) (SCK) (CE) (CSN) (VCC) (GND) The look is similar to Journey (PS3), but the gameplay is frantic In the next part of this series, I will share more information on the RF24 motion sensor, the Alarm Central App itself, and a nice handmade case for the ODROID-VU7+. 
In Ancestor, you solve puzzles to progress and have fun

AMBILIGHT
Ultra-HD 4K Ambilight
Create a spectacular synchronized visual background for your home theater
by Charles Park and Brian Kim

Ambilight, short for "ambient lighting", is a lighting system for television developed by Philips, in which lighting effects are created around the TV that correspond to the video content. You can achieve a similar effect by using a strip of RGB LEDs and software that samples the image on the screen, then colors each individual LED in a different shade accordingly. In this article, we shall see how to use an ODROID to achieve this.

Figure 1 - Ambilight on ODROID-C2

Requirements
We used an Arduino for controlling the LEDs and an ODROID-C2 to run the Kodi media player as the interface for the video content. The hardware components are listed below:
• ODROID-C2
• Arduino UNO
• USB cable Type A – Type B
• 5V/6A power supply
• 32GB eMMC Module C2 Linux
• WS2801 LEDs
• DC Plug Cable Assembly 2.5mm
• 4-pin connector cable x 3

We used the Arduino to control the LEDs and the ODROID-C2 to do everything else, such as playing the media file, capturing the video frame, and sending the LED color data via the USB serial interface. The Arduino receives the color data from the ODROID-C2, and then sets the color of each LED.

Figure 2 - Hardware components for Ambilight

Hardware setup
The first step is to set up the wiring on the Arduino board. There are two wiring areas: the 4-pin LED connector, and the ODROID-C2 DC plug cable. The 4-pin LED connector connects to the WS2801 LEDs, which can be controlled via the SPI interface.

4-pin LED connector cable
• Red: VCC
• Black: Ground
• Blue: SCK (13)
• Green: MOSI (11)

DC Plug Cable
• Red: VCC
• Black: Ground

Figure 3 - Arduino Wiring
Figure 4 - LED Wiring

The WS2801 is a constant current LED driver, and is designed for indoor/outdoor LED displays and decorative LED lighting systems. In order to mount the LEDs onto the TV, we cut the LED roll into lengths matching the TV's height and width, and each cut segment needs to be connected with the 4-pin connector cables.

Software setup
There are three main pieces of software for a DIY Ambilight: the Arduino LED control firmware, Hyperion, and the Kodi media player. The Arduino is connected to the ODROID-C2 via a USB serial device (ttyACM0). We can easily develop the Arduino firmware natively in Linux on the ODROID-C2 using the Arduino IDE:

$ sudo apt-get update
$ sudo apt-get install arduino
$ cd /usr/share/arduino/libraries/
$ sudo git clone \
https://github.com/adafruit/Adafruit_NeoPixel.git

The Adafruit NeoPixel library is for controlling LEDs. The LED control firmware source code is available for download at:

$ wget -O odlight.ino
$ arduino
(Ctrl + O) -> (Select /home/odroid/odlight.ino file)
(Ctrl + R) Verify / Compile
(Ctrl + U) Upload

Of course, the Arduino needs to be connected to the ODROID-C2 during the firmware upload. You can get more information about the Arduino IDE software on the Arduino home page at http://bit.ly/212hc7p.

To describe the Ambilight behavior: background software captures the video frame while the media file is playing, and then sends the RGB data to the LED control device. Hyperion is an open source Ambilight implementation that runs on many platforms, and it uses the video capture driver of the ODROID-C2 to get the video frame data. However, the video capture driver allocates an 8 megabyte DMA area, so the ODROID-C2 Linux kernel needs more contiguous memory:

$ sudo apt-get update
$ sudo apt-get install git build-essential
$ git clone --depth 1 \
https://github.com/hardkernel/linux.git -b odroidc2-3.14.y
$ git clone --depth 1 \
c2_aml_libs.git
$ cd c2_aml_libs
$ sudo make
$ sudo make install
$ cd linux
$ make odroidc2_defconfig
$ make menuconfig
Device Drivers --->
  Amlogic Device Drivers --->
    Video Decoders --->
      [*] Amlogic Video Capture support
Generic Driver Options --->
  *** Default contiguous memory area size: ***
  (12) Size in Mega Bytes
$ make -j4
$ sudo make modules_install
$ sudo mv /media/boot/Image /media/boot/Image.back
$ sudo mv /media/boot/meson64_odroidc2.dtb \
/media/boot/meson64_odroidc2.dtb.back
$ sudo cp arch/arm64/boot/Image /media/boot/
$ sudo cp arch/arm64/boot/dts/meson64_odroidc2.dtb \
/media/boot/
$ sudo sync
$ sudo reboot

Figure 5 - Ambilight mounted on the TV

Hyperion is a good choice for LED color control software because it requires less processing power, works quickly and effectively, and also provides an easy configuration. Furthermore, Hyperion is compatible with the Amlogic platform on the ODROID-C2, even though it does not officially support it yet. However, it is not complicated to add support for the ODROID-C2 in Hyperion:

$ sudo apt-get update
$ sudo apt-get install cmake libqt4-dev \
libusb-1.0-0-dev python-dev libxrender-dev \
python libasound2-dev zlib1g-dev
$ git clone --recursive \
bkrepo/hyperion.git
$ cd hyperion
$ mkdir build
$ cd build
$ cmake -DENABLE_DISPMANX=OFF \
-DENABLE_SPIDEV=OFF -DENABLE_AMLOGIC=ON \
-DCMAKE_BUILD_TYPE=Release -Wno-dev ..
$ make -j4
$ sudo make install

Hyperion needs a configuration file, which can easily be generated by the Hypercon configuration program. Even if Hypercon cannot generate a complete configuration file for the ODROID-C2, the program is useful for LED position setting. There are three options in order to set the LED positions: LEDs horizontal, LEDs Left, and LEDs Right. The first LED offset option is for adjusting the LED starting point, and we can also set the direction to clockwise or counterclockwise. To get the JSON format configuration file after finishing the LED position setting, just click the Create Hyperion Configuration button.

Figure 6 - Hypercon
Figure 7 - Ambilight Running

The "leds" option in the generated JSON configuration file from Hypercon needs to be copied into the default configuration file. All of the other options in the default JSON configuration file can remain set to the default values:

$ wget -O /etc/hyperion/hyperion.config.json

Open the generated configuration file via Hypercon, then overwrite the "leds" option from the generated configuration file, and write it to the file /etc/hyperion/hyperion.config.json. For more information, please refer to the Adalight Project (http://bit.ly/1EnG6zZ), our previous Ambilight article in ODROID Magazine (http://bit.ly/2dOPWsk), and our ODROID forum threads.
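Before moving on, it can help to confirm that the Arduino actually enumerated as the USB serial device that Hyperion will talk to (the article assumes ttyACM0). This quick check is not from the original article, just a generic sanity test:

$ dmesg | grep -i ttyACM
$ ls -l /dev/ttyACM0

If the device shows up under a different name, adjust the Hyperion configuration accordingly.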
Playing a movie To play Ambilight on the ODROIDC2, run the Hyperion daemon in the background, then play the movie in Kodi: $ hyperiond /etc/hyperion/hyper- Figure 8 - Ambilight closeup ODROID Talk Subreddit ion.config.json & $ kodi ODROID-C2 also supports 4K H265 movies, which requires setting the double_write_mode option, as detailed at: $ echo 1 | sudo tee \ /sys/module/amvdec_h265/parameters/double_write_mode ODROID MAGAZINE 25 DOCKER Docker 101 Part 1 - Why Docker? by Andy Yuen T he article “Why People user Docker” (), sums up Docker nicely.. Docker uses a virtualization technology called containers. Containers use the Linux kernel feature “cgroups” to isolate between containers and other processes running on the host, and “namespaces” to make available a set of system resources and present them to a process as if they are dedicated to that process. Container technology is different from virtual machine technology in that it does not need a hypervisor nor a guest operating system. Containers only Figure 1 - Virtual Machine versus Container require an application packaged with dependent binaries and libraries in an image. For virtual machines, the virtualization is all the way down to the device level, while that for containers is down to the operating system level only. Compared to virtual machines, containers are smaller and consume less resources while operating with improved performance. Since your application is isolated in a Docker image together with its dependencies (binaries and libraries), you can build it once and run it on any Docker host with the same computer architecture the image was built on. This means that Docker images built for Intel x86 architecture will not run on ARM64 machines and vice-versa. There are many more reasons for enterprises to use including Docker CI/ CD, DevOps, blue/green deployment, rolling updates, etc. I just don’t have space in this article to cover them all. An interesting fact about Docker is that anytime you use Google, such as searching, Gmail, or Google Docs, you are being issued a new container. Using Docker Figure 2 shows the flow of working with Docker. The list below details the flow: 1. Someone creates a Dockerfile 2. The Dockerfile is used to build ODROID MAGAZINE 26 an image, which is your application with its dependent binaries and libraries. 3. The image is pushed to Docker Hub, which serves as a central repository for the Docker images. You can find a large number of Docker images that you can download and use if they suit your purpose. 4. A user pulls an image from Docker Hub and runs the image in a container. Multiple replicas of the application can be run on the same Docker host or on a Docker cluster on demand. Overview In this tutorial, I will cover all of the activities listed above, and is split into 2 parts. The first part covers the classic Docker commands to start and stop containers and let them communicate with each other. The second part covFigure 2 - Using Docker DOCKER ers Docker swarm mode, which is new in Docker version 1.12. Swarm mode is all about orchestration, which means clustering and scheduling of where to run the containers, and how many replicas should be started in a cluster. ODROID Magazine published several Docker articles back in early 2015, so what has changed since then? For one thing, kernels that support Docker were not that common in stock Operating Systems for ODROID. In order to run Docker, one has to tinker with kernel builds which, for most people, myself included, is too much trouble. 
However, kernel features that are required for Docker have been incorporated in many operating systems. Another thing to note is that Docker has been designed and built for 64-bit machines. Back in 2015, all ODROIDs were based on 32bit architecture only. The ODROIDC2 is the first 64-bit ODROID to date. Although Docker can be, and has been, adapted to run on 32-bit ARM architecture, this is the first 64-bit implementation. Let’s explore its capabilities in this tutorial. Prerequisites In order to follow this tutorial, you must have the following in place: 1. ODROID-C2 running an OS with kernel features required by Docker enabled I use Armbian Xenial server, which is based on Ubuntu. How I decided on using the Armbian Xenial server is documented in my blog at. ly/2dyTUGr. I had a look at the Armbian website at recently, and noticed that only the Jessie server, which is based on Debian, is available for download. 2. ODROID-C2 with Docker engine installed To install the Docker engine on your machine, issue the following commands: $ docker run -d -p 3306:3306 \ --name mysql \ -e MYSQL_USER=fishuser \ -e MYSQL_PASSWORD=fish456 \ -e MYSQL_DATABASE=fish \ -v /media/sata/fish-mysql:/u01/ my3306/data \ mrdreambot/arm64-mysql If you are using a different OS from mine, you have to use your OS’s package management command such as yum, dnf, or apt-get. If Docker is not available in your OS’s repository, try the Docker v1.12.1 binaries that I used along with an installation script at. These binaries have been tested on Armbian Xenial and Jessie servers. For Part 1 of the tutorial, either docker.io v1.10, v1.11 or v1.12 will work fine. Part 2 will require v1.12. In order to avoid having to add “sudo” in front of every Docker command you issue during the tutorial, you should add your user name (login) to the “docker” group as follows, then reboot the system and then log back in: Figure 3 - Checking the Docker version Figure 4 - Finding out more about Docker $ docker run -d \ -p 8080:8080 --name fish \ -e MYSQL_SERVER=192.168.1.100 \ -e MYSQL_PORT=3306 \ mrdreambot/arm64-fish 3. ODROID-C2 with a working Internet connection Your ODROID-C2 will need an internet connection to pull down images from the Docker Hub during this tutorial. Figure 5 - Getting Help Basic Docker commands In the tutorial that follows, for simple commands whose output is self-explanatory, I shall just include the screenshots for the command and output. For more complicated commands, I shall also explain the command syntax and what the options mean. If you want help for the options available to a certain command, just enter the command and append --help as shown in Figure 6. Figure 6 - Docker Build Help Running your first container Make sure you have an Internet connection, and issue the following command: $ docker run -d -p 80:80 --name ODROID MAGAZINE 27 DOCKER httpd \ mrdreambot/arm64-busybox-httpd Options: -d means run in the daemon mode -p 80:80 means map port 80 of the container to the host’s port 80 so that you can access the application on your ODROID-C2’s port 80 --name gives a name to the container you just started. This is optional, although I highly recommend you to always give it a name so that you can refer to your container easily. You can identify running containers even without using the --name option by issuing the “docker ps” command, which is described later. Since the image “mrdreambot/arm64busybox-httpd” is not already on your Docker host, Docker will download it from the Docker hub. 
Navigate your web browser on your PC to your Docker host, which is your ODROID-C2 machine. You should get the page shown in Figure 7, which is our Docker “Hello World” program. In this case, we want to start the command shell We can run any command available to the command shell. In this example, we run the ping command. When we are done, we just type “exit” to exit the container. The container is still running after you exited from the interactive session. Stopping and removing your active running containers To list the running containers issue the following command: $ docker ps To stop and remove the httpd container we started, issue the following commands: $ docker stop httpd $ docker rm httpd Figure 8 - Docker Exec We could have started the container in the interactive mode when we started the container. For example, instead of using the following command: Sometimes when you run a container in an interaction session and exit, you will not see the container in the “docker ps” command. You have to use the following command: $ docker ps -a $ docker run -d -p 80:80 --name httpd \ mrdreambot/arm64-busybox-httpd We could have used this command: $ docker run -it -rm \ mrdreambot/arm64-busybox-httpd bin/sh Figure 7 - ODROID Docker test page Connecting to your running container You can connect to your running container by issuing the following command: $ docker exec -it httpd /bin/sh The options are as follows: -it means Interactive tty mode httpd is the name we gave the container when we started it /bin/sh is the command we want to run when we connect to the container. ODROID MAGAZINE 28 Figure 10 - docker ps -a and docker rm command outputs The options used are: -it means interactive tty mode -rm means remove the container when we exit from the shell /bin/sh means replace the default entry point (starting the httpd server) with the /bin/sh command. Figure 9 - Docker PS command output I did not give a name to the interactive session. I have to identify the session and issue a “docker rm” command as shown where d4e0029be98e is the container Id identified in the “docker ps -a” command. Figure 11 - Docker images DOCKER Managing images You can list the images available on your Docker host using the “docker images” command as shown in Figure 11. As described earlier, if you do not have the image on your Docker host when you start a container, Docker will try to download it from the Docker Hub. However, you can download the image beforehand using the “docker pull” command: $ docker pull mrdreambot/arm64busybox-httpd You can remove an image from your Docker host using the following command, where adda0ff62710 is the image ID listed in the “docker images” command: $ docker rmi adda0ff62710 In Figure 11, adda0ff62710 refers to the mrdreambot/arm64-fish image. Searching for images Figure 12 shows two examples of searching for arm64 images. The first command searches for all images with “arm64” in the image name and limits the maximum of results to 10. Figure 12 - Docker search Figure 13 - image details The second command searches for all images with “arm64” in its name and have a star rating of 5 or above. You can use the -- help option to explore other options for the search command: $ docker search --help You can also do a search using your browser by navigating to. docker.com. You can find out more information such as how to use the image you are interested in on the Docker hub, as shown in Figure 13. 
Managing persistent storage If you save data inside your Docker container, the data will be gone once the container is removed using the “docker rm” command. There are different ways of making your data persistent such as using Data Volumes or Data Containers. However, the easier way is to use the “-v” option as shown below: $ docker run -d -p 3306:3306 \ -- name mysql \ -e MYSQL_USER=fishuser \ -e MYSQL_PASSWORD=fish456 \ -e MYSQL_DATABASE=fish \ -v /media/sata/fish-mysql:/u01/ my3306/data \ mrdreambot/arm64-mysql -e MYSQL_USER=fishuser sets the environment variable to tell the container to set the MySQL user to fishuser -e MYSQL_PASSWORD=fish456 sets the environment variable to tell the container to set the MySQL password to fish456 -e MYSQL_DATABASE=fish sets the environment variable to tell the container to set the database to use fish -v /media/sata/fish-mysql:/u01/ my3306/data bind mounts the /media/ sata/fish-mysql volume on the container’s :/u01/my3306/data directory which is the MySQL database data directory. This means that MySQL will save all data onto your host directory /media/ sata/fish-mysql. Although we have provided the storage for the database, we have not yet initialized the database content. The next section shows how this is done. Using multiple Docker containers In this section, I am demonstrating a more common deployment scenario in which an application has a web front end and a MySQL database backend. This means that the web front end and the database are running in their own container. For this tutorial, I am using the WEB4J sample application called the “Fish and Chips Club”. From now on, I am going to refer to this application as “Fish”. This application includes features to: • • • • • • • • edit club members edit local restaurants edit ratings of each restaurant add new lunches (a given restaurant on a given day) RSVP for each upcoming lunch interact using a simple discussion board produce simple reports provide a simple search page Fish uses 3 databases running on MySQL. You can find out more about how to configure this application at. Running the Fish application To start the Fish container, issue the following command: $ docker run -d \ -p 8080:8080 \ --name fish \ -e MYSQL_SERVER=192.168.1.100 \ -e MYSQL_PORT=3306 \ mrdreambot/arm64-fish The option -e MYSQL_SERVER= ODROID MAGAZINE 29 DOCKER 192.168.1.100 sets the environment variable to tell the container the name/ IP address of the MySQL server -e MYSQL_PORT=3306 sets the environment variable to tell the container the port number to use to access the MySQL server Notice that up to now, we’ve not set up the fish databases with content yet. We are going to do that in the next section. Figure 15 - Fish and Chips login Setting up the database To set up the Fish databases, issue the following commands: $ docker exec -it fish /bin/bash $ cd /fish/WEB-INF/datastore/ mysql/ $ mysql -u fishuser -p -h 192.168.1.100 < CreateALL.SQL $ mysql -u fishuser -p -h 192.168.1.100 $ show databases; $ exit $ exit The SQL script to set up the Fish databases is in the Fish container’s /fish/ WEB-INF/datastore/mysql/ directory. We get into the running Fish container using “docker exec” and proceed to run the MySQL client to initialize the MySQL database which runs on a separate container. After that, we use the MySQL client again to verify that the databases have been created using the “show databases;” command, as shown in Figure 14. 
Figure 14 - Create database Figure 16 - Fish and Chips club Test-driving “Fish” Now that we have set up the databases required for Fish with proper content, we can point a web browser to the Fish application at fish. Replace 192.168.1.100 with your ODROID-C2’s IP address. You will be asked to login. Use “testeA” and “testtest” as username and password respectively. After logging in, you will see the Fish home page. Creating a Docker image So far, we have used images already created by me. How do we create an image in the first place? Here is a really quick look at how this is done. First, clone my simple mrdreambot/arm64busybox-httpd on Github at. ly/2eWnLdb. Here is the content of the Dockerfile which tells Docker how the images is to be created: FROM arm64el/busybox-arm64el MAINTAINER MrDreamBot COPY www /www ENTRYPOINT [“bin/busybox”] CMD [“httpd, “-f”, “-p”, “80”, “-h”, “/www”] FROM - specifies that this images is based on the arm64el/busybox-arm64el ODROID MAGAZINE 30 images MAINTAINER - specifies the author of the Dockerfile COPY - copies the www directory in the current directory to the image’s / www directory ENTRYPOINT - set default command and argument to start the container CMD - sets additional defaults that are more likely to be changed. If mrdreambot/arm64-busyboxhttpd still shows up when you issue the “docker images” command. Use the “docker rmi” command to remove the images before you carry out the next step. Then, change to the directory where you cloned my project and issue the following command, as shown in Figure 17: $ docker build -t arm64-busyboxhttpd . $ docker images Now you have created your first Docker image called arm64-busyboxhttpd, which can be deployed in the same way that I showed you in the “Running your first container” section. Just replace “mrdreambot/arm64-busybox-httpd” with “arm64-busybox-httpd” in the “docker run” command and point your browser to it to see the same ODROID-C2 image. What’s next? In this tutorial, I’ve outlined all the classic Docker commands that you will need to run and manage applications running in containers. The Docker Figure 17 - Build Image DOCKER commands that I showed you will only work on your local Docker host, which in this case is your ODROID-C2. You will realize soon there is a limit to the number of containers that you can run on a single machine. What about enterprises that use Docker in a production environment? Surely a single machine will not be able to run all their production workload! This is where Docker Swarm Mode comes in. Swarm mode is new to Docker 1.12. It has a built-in orchestration engine, which means that, in this context, it handles clustering, scheduling of workload, and state management. Clustering is the use of a set of machines to work together and act like a single machine. Scheduling and state management means deciding where to run the containers among the machines that make up a cluster, and how many replicas of the containers should be run. Before Docker 1.12, you have to do lots of extra setup work before you can achieve Docker orchestration. In Part 2, I will show you how to use swarm mode commands for orchestration. LINUX GAMING Linux Gaming Get Serious with the Serious-Engine by Tobias Schaaf I n this article, I want to talk about a guy that is nearly as much of a renegade as Duke Nukem himself. His name is Sam, and he’s very serious! A while ago, I saw that forum user @ptitSeb from the OpenPandora forums was working on one of his many awesome game ports. 
This time, it was the “Serious-Engine”, which is an open source engine for the game Serious Sam – The First Encounter and Serious Sam: The Second Encounter. I remember this game well, and I spent many hours with a friend fighting wave after wave of monsters. I immediately started trying to compile the games for the ODROID platform with moderate success. I got them to work, but since there was no installer with it and the structure of the game files and libraries was somewhat strange, I temporarily abandoned the project. @ptitSeb kept improving his version, as well as GLshim, which he used to run the engine, and now the game runs very well. I took the time again to compile and test the game, and was finally able to create an installer to take care of the different requirements of the game, as well as make it easier for users to add the missing game data.t Installation As usual, you can install this game from my repository, the installation steps of which are detailed at. ly/2eOG92v. Serious Sam is in my jessie/main package list, both games Serious Sam – The First Encounter (TFE) and Serious Sam – The Second Encounter (TSE) can be installed separately depending on which game(s) you own: $ apt-get install ssam-tfe-odroid $ apt-get install ssam-tse-odroid The Serious-Engine requires OpenGL, and since ODROIDs don’t have OpenGL but only OpenGL ES, we have to use GLshim to run the game, which is also provided by @ptitseb. You will also need the original game data files, specifically the “Data”, “Levels” and “Demo” folders, as well as all of the “.gro” files. These files have to be placed in the game folder, which is in your home folder of the current user and named either “.tfe” or “.tse”. Copy your game files in that folder and you’re ready to play. Known issues Although the game is working very well, there are some issues that I encountered during my tests, which I hope can be fixed sometime in the future. For example, there seem to be some graphical glitches on the Exynos boards (ODROID-X, X2, U2, U3, XU3, and XU4) which show strange colors in different places. These issues are not everyODROID MAGAZINE 31 LINUX GAMING Figure 1 - ODROID XU3/XU4 (top) vs ODROID C2 (bottom), showing some of the color glitches that can be seen on Exynos devices where, and can probably be ignored, but it’s slightly annoying. I couldn’t see the same glitches on the C2, so apparently it has something to do with the Exynos drivers. Although they are not pretty, these glitches don’t hinder you in playing the game. They do not appear too often on the screen, and in later levels they disappear completely, so this bug can probably be ignored. I also found that no music is working currently at this time, but I think that’s an issue that can be solved, and it might already be fixed by the time this article is released. Also, it seems that the mul- Figure 4 and 5 - I love the great sharp shooting action in Serious Sam tiplayer modes don’t yet work. You can’t join an Internet game, although there are plenty of servers available. Apparently, even LAN games are not currently working correctly, so I hope this can be fixed as well. I also tried split screen, which seems to work, and you can play with up to four people with different controllers, or as a single player using mouse and keyboard. However, the C2 is not powerful enough to handle two players simultaneously, and on the XU4, I saw some graphical glitches where the picture started to flicker, which looks like a vertical sync issue. 
I hope that LAN gaming can be fixed, since this game is really awesome in multiplayer. There seems to be another issue in Serious Sam – The Second Encounter as well, where you slip underneath the surface and are stuck between the level and the ground. This happens randomly and only on a few locations. If it happens, you can’t do anything except reload the game and restart it from the last save point. @ptitSeb is working hard on fixing some of these issues, and I will update Serious Sam and GLshim when there are some fixes available. Gameplay Figures 2 and 3 - Serious Sam offers a lot of nice weapons and monsters ODROID MAGAZINE 32 You might ask, with all the issues, what is actually working? Well, it seems mostly everything else. You can play Serious Sam – The First Encounter and Serious Sam – The Second Encounter in single player mode with all of its goodies and baddies. It’s really a one-of-a-kind shooter that defies all types of realism. You are a one man army, killing hundreds and hundreds of monsters. The game looks great and plays nicely on ODROIDs. As you can see from the screenshots, you can still play and enjoy the game. Have fun playing the game and keep an eye on the forums for updates and bug fixes. ANDROID DEVELOPMENT Android Development Android WiFi Stack by Nanik Tolaram N ot only are Android devices typically quite powerful and come packed with features, but they can also be quite portable and easy to carry. But what good is any device without a connection to the Internet? Of course, portability means doing this without wires, which leaves us two main ways of getting our device online: via Wi-Fi or a cellular connection. All Android devices have built-in Wi-Fi functionality, and in this article we’re going to take a look at how that Wi-Fi connectivity works internally inside the operating system. The source code that is used in this article is based on Android Open Source Project (AOSP) android-5.1.1_r38 release build. A High Level Overview Let’s start by taking a look at Figure 1. It shows a high level interaction between the different system stacks inside the Android operating system. The top layer is where our applications like YouTube and Twitter run. To make it easier to build applications, Android uses a framework to help these applications communicate with the kernel and hardwarelevel technology that makes our internet connection work. This 2nd layer is where we’re going to take a look at the Wi-Fi stack for Android and how it works between the Linux Kernel and our applications. A High Level Overview Android’s Stack Understanding wpa_supplicant In our previous article, we looked at the HAL (Hardware Abstraction Layer) inside Android to understand how applications use this framework to communicate with the hardware inside our devices. Normally most software takes advantage of this layer, but Wi-Fi doesn’t. This is because it uses an open source low-level stack to support its software-hardware communication called wpa_supplicant. The Android framework uses wpa_supplicant as a way to communicate with the Linux kernel, much like many Linux and Unix-based operating systems. This is achieved by using the wpa_supplicant client library to communicate via a socket connection to the wpa_supplicant daemon running on the device. As shown in Figure 2, commands are initiating from an application in the Android framework that uses the socket connection to the daemon in order to relay Wi-Fi commands to the hardware itself. 
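As a purely illustrative aside, and not part of the AOSP code discussed here, the same daemon/client split can be poked at from a shell on an ordinary Linux machine running wpa_supplicant by using the wpa_cli front end, which talks to the daemon over its control socket (this sketch assumes the wireless interface is named wlan0):

$ wpa_cli -i wlan0 status
$ wpa_cli -i wlan0 scan
$ wpa_cli -i wlan0 scan_results

Android performs the equivalent exchanges through the framework and the wpa_supplicant client library rather than through a command-line tool.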
The daemon helps ensure that these commands, such as enabling and disabling the Wi-Fi radio, are understood by the Linux kernel. A closer look at the Wi-Fi stack The wpa_supplicant is initialized during Android startup process, as seen in Figure 3. This snippet was taken from. ODROID MAGAZINE 33 ANDROID DEVELOPMENT Table 1 - The frameworks/opt/net/wifi directory code The wpa_supplicant daemon startup process Internal framework Android user applications have access to the Wi-Fi hardware under the Linux Kernel layer through the framework by accessing the Context.WIFI_SERVICE service using the Context object: $ Context.getSystemService(Context.WIFI_SERVICE) The user will launch an instance of the WiFiManager providing users with the ability to access Wi-Fi services such as their current active connection, re-associating with the connection, and many more features. You can see the implementation of the “actual” Wi-Fi manager in Figure 4, inside the frameworks/opt/net/wifi directory. The low level code that interfaces with the wpa_supplicant is packaged inside the libwifi-service.so binary file, which can be seen in the Android.mk Makefile as shown here. As you can see, the libwifi-service.so file uses com_android_server_wifi_WifiNative.cpp, which is loaded by the framework as shown in Figure 6 during the time while the class is being set up. A look at Android.mk State Management The Wi-Fi stack tracks every changes in the Wi-Fi connection and this is done by using state management internally. In keeping tabs with the different Wi-Fi state, this allows the framework to react accordingly and provide A look at WifiNative.java the ability to inform user application via broadcast intent. Below we outline the different classes we uses internally to understand Wi-Fi state management status. Class Figure 4 - The frameworks/opt/net/wifi folder If you’re interested in knowing the framework code that lives in each of these folders, you can review the information outlined in Table 1. ODROID MAGAZINE 34 DefaultState InitialState DriverStoppingState DriverStartingState DriverStartedState DriverStoppedState ScanmodeState SupplicantStartingState SupplicantStartedState SupplicantStoppingState VerifyingLinkState RoamingState ANDROID DEVELOPMENT L2ConnectedState WaitForP2PDisableState SoftAPRunningState WPSRunningState SoftAPStartedState ConnectedState ObtainingIPState DisconnectingState DisconnectedState ConnectModeState TetheringState TetheredState TetheringState UntheteringState The application now all ready to receive intent from the framework when the wifi state changes. Inside the framework this intent is broadcast out in the WifiStateMachine.java class as shown here. Internally, there is more code before calling the setWifiState(..) but this method is the point where the user will get information about the current Wi-Fi state. I hope this gives you some insight about the Wi-Fi stack and how your applications interact with it. List of Wifi state management intents Intents are the lifeblood of Android applications, and internally the framework uses each intent to communicate wifi status/states to user applications. Let’s take a look at an example at how the intent is used internally to inform user applications about some state. The easiest example is when applications wants to be informed about Wi-Fi status (enabled/disabled) so it can take some action accordingly. This is done using the following <intent> declaration in AndroidManifest.xml. 
Sending WIFI_STATE_CHANGED_ACTION <receiver android:name=”.WifiReceiver”> <intent-filter> <action android:name=”android.net.wifi.WIFI_ STATE_CHANGED” /> </intent-filter> </receiver> Inside the application there will be a class that extends the BroadcastReceiver, like this: public class WifiReceiver extends BroadcastReceiver { … @Override public void onReceive(final Context context, final Intent intent) { int wifiState = intent.getIntExtra(WifiManager. EXTRA_WIFI_STATE, -1); if (WifiManager.WIFI_STATE_CHANGED_ACTION. equals(intent.getAction()) && WifiManager.WIFI_STATE_ENABLED == wifiState) { … } } ODROID MAGAZINE 35 MYTH TV MythTV Running THE OPEN-SOuRCE HOME ENTERTAINMENT APPLICATION ON YOUR ODROID-C2 by @WebMaka I use several ODROID-C2s on my TVs as front-ends for watching live video streams, using MythTV as the main software driver. On the back-end is a standard computer running Ubuntu 16.04.1 LTS and using a HDHomeRun PRIME with a paired CableCard and tuning adapter as the means of converting my digital cable service into streams that I can view remotely. Since this can take a lot of time to set up the first time, here’s a how-to on creating the same setup for yourself. This guide is written for the latest LTS release of Ubuntu as of its writing, 16.04.1 LTS, but I know it works on 14.04 LTS as well and should also work on any Debian derivative. About DRM DRM is the bane of live TV users everywhere. Expect this to be your single biggest problem, and if you’re trying to wean others off live TV, such as your spouse or elderly parents, expect the biggest point of contention to be over not being able to watch DRMed content or channels. Please note that this means MythTV cannot and will not work with any liveTV programming that is flagged as “protected/copy-once” or “protected/no-copy.” There is, by design, no way around this with any open-source software. The whole point of this approach is to force ODROID MAGAZINE 36 the use of “approved” hardware and software that you have to spend money month after month to use, under the premise that this protects content from redistribution. Some cable companies mark almost everything as copy-once, but others only mark premium paid-sub channels, like HBO or Cinemax. Some channels that, in my opinion, should not be marked as copy-once, are often done because of demands to that effect on the part of the channel, not the cable company. If your cable company flags everything or nearly everything as heavily restricted, don’t bother with HDHomeRuns or any other consumer-owned equipment. You’ll have to stick with the cable box or just stop using cable television service entirely. The HDHomeRun Prime can view protected content with the right software, but if you do want to watch and stream DRM-protected content, the only sure way of doing this is to use Windows Media Center running on Windows. Keep in mind that only Windows 7 includes Windows Media Center for free. SiliconDust reportedly had an Android app that did allow watching copy-once content but the current app doesn’t do it consistently. I did hear that you can watch DRMed content from a HDHR Prime over an Nvidia Shield, but I haven’t tried this myself. Setup Here are the general steps necessary to use MythTV with your ODROID: 1. Install and update Ubuntu 16.04 LTS 2. Install HDHomeRun driver software on your Linux box 3. Install and configure the MythTV backend on your Linux box 4. Set up the clients/frontends on your ODROID 5. 
Watch live TV along with anything you can download/stream over the Internet

Install and update Ubuntu
If you're using the latest 16.04 LTS version of Ubuntu already, start off by repeating the following two commands until everything is up-to-date:

$ sudo apt-get update
$ sudo apt-get upgrade

If you're using an older version, keep in mind that upgrading the OS is usually more time-consuming and more risky than just reinstalling fresh from the latest image. These instructions should work to varying degrees with other Debian-based distros, although your experience may differ.
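One small optional step that is not in the original guide: if apt reports that some packages have been held back after the upgrade, a dist-upgrade pass will pull those in as well before you continue:

$ sudo apt-get dist-upgrade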
It’s not supposed to, but it did for me and people have complained online about this happening to them. Then, change “Security Pin (required)” to 0000 (for “allow any clients,” or to an actual pin if you need to restrict access) or MythTV won’t accept connections. For the “Master Backend” part, change “IP address” to match the static IP as well, or again it’ll only watch localhost for connections from slave backends if you use them. Keep pressing “next” until you end up back at the main menu. For the initial setup, you shouldn’t need to change anything else in the General options, and can go back later if you do. Setup the capture cards Navigate to the “Capture Cards” section. We’ll do this three times, one for each tuner on the HDHR Prime. First, click “New Capture Card”, or use the arrow keys to select and hit Enter. For the first box, “DVB Device,” open the list and choose the HDHomeRun option. If you do not have it, you missed something in step one, so backtrack accordingly. Again, if HDHomeRun Config cannot “see” your HDHR Prime, MythTV won’t either. In “Available Devices,” open the list and pick the first tuner. It’ll be an ID code for the HDHR followed by “-0”, e.g., 1234ABCD-0. “Tuner” should be “0” for obvious reasons. Click “Recorder Options.” Increase “Signal timeout (ms)” to “10000” (read: add zero at the end) and increase “Tuning timeout (ms)” to “30000” (see previous). This will reduce timeouts from slow clients or during overloads on busy non-gigabit networks. Reduce “Max recordings” to “1” as the HDHR Prime can only support one action at a time on any one tuner. Click “Next” and “Finish” until you see the list of capture cards, and repeat the setup for the other two tuners on the HDHR Prime. Then, press the Escape key to get back to the main menu. Configuring the video sources In the main menu, navigate to “Video Sources.” This is where you’ll set up your channel listings and guide. Give ODROID MAGAZINE 37 MYTH TV the guide setup a suitable name in “Video Source name.” If you have an active subscription to SchedulesDirect.org for listing/guide data, change “Listings Grabber” to the SchedulesDirect option and fill in the blanks for the username and password as required. Then, click “Retrieve Lineups” to fetch what you’ve selected in your SchedulesDirect.org account for your location. Choose the appropriate entry for “Data Direct lineup”. MythTV will then fetch channel/guide data as required. Just make sure to leave “Perform EIT scan” unchecked, since EIT data will overwrite all over the SD.org guide data and your guides will be much less usable. If you don’t have an active sub to SchedulesDirect.org, consider spending the $25 a year for one, as their guides are very useful. At the very least, get a 7-day trial to see if their data will suit your needs. Otherwise, if you’re not in an area they have listings for, leave “Listings grabber” on “Transmitted guide only (EIT)” to try to pull down the cable company’s guide information or change it to “xmltv Selections” and step through the options for that if you have another grabber setup that you’d like to use. Again, click “Next” and “Finish” until you return to the list of sources. Repeat the steps if you want to set up more than one, and press the Escape key to the main menu when you’re done. Input connections Navigate to the “Input Connections” section, where you tell MythTV to use the guide source’s channel, frequency, and program data to tell the tuners what to tune to. 
Please note that we will also do this part three times, once for each tuner, but with a couple subtle but important changes. Click the first HDHR tuner on the list, or use the arrow keys and the Enter key. I left “Display name” blank but you can change it to anything so long as it’s unique for your stream. For “Video Source,” pick the grabber ODROID MAGAZINE 38 you had set up previously, such as your SchedulesDirect.org account. For “Use quick tuning,” I didn’t see where it made any difference for my HDHR Prime, so I left it on “Never.” Click the “Fetch channels from listings source” button and immediately hit the down-arrow key twice. It’ll take a few seconds or so to fetch the channel/ freq/program data, and you’ll know it’s finished when the selected control jumps suddenly to the “Next” button. When it does, click the “Next” button. This is an important step: “Schedule Order” and “Live TV Order” will be different for each tuner. Tuner 0 will be “3” for “Schedule Order” and “1” for “Live TV Order”, tuner 1 will be “2” for both, and tuner 2 will be “1” for “Schedule Order” and “3” for “Live TV order” 3 and 1, 2 and 2, and 1 and 3. If you don’t do it this way, you’ll have clients fetch tuners out of order and likely won’t have HDHR hand out access to all three tuners because MythTV will think it’s at the end of the tuner pool before it actually hands out all three. Also, if you don’t give them a reverse sequence between the two settings, MythTV will hand out tuner 2 to the first client and the HDHR Prime will tell MythTV it’s out of available tuners. This took me a lot of time to figure out, since it’s not mentioned anywhere except for a single forum post that I stumbled across. I left the input group settings set to the default values. Press “Next” and “Finish” until you return to the list of sources, and repeat the steps if you want to set up more than one. Use the Escape key to return to the main menu when you’re done. From the main menu, hit the Escape key again. MythTV should then display a dialog about refreshing the guide if you changed any channel info. Click OK or tap Enter to dismiss this and let the configuration program close. Confirm that you want to run the MythTV backend, and give it the proper username and password. Then, run “mythfilldatabase” so that it will fetch the complete channel lineup and programming guide. The backend will come up pretty quickly, which can be tested by navigating a browser to. The guide update will take approximately 20 minutes to run the first time. Set up the clients Most people that user MythTV backends are probably going to be using Kodi for the clients. I’m running Kodi 16.1 (Jarvis) on a custom build of LibreELEC for the ODROID-C2 since OpenELEC’s support for the C2 is not as robust. This is very likely going to change when OpenELEC hits its first full 7.x release, since C2 support is currently in development. Find the MythTV PVR plugin in Kodi by navigating to System -> Settings -> Add-Ons -> My Add-Ons -> PVR Addons, then finding the plugin and selecting “Config”. Give it the IP address of the backend machine, enable the add-on by following the same navigation steps and selecting “Enable”, then restart Kodi. The plugin should then fetch the channel list/lineup/guide info from the backend and the “TV” section should be visible in the main menu. Watch live TV From your device and environment of choice, select “TV,” select “Guide,” and select a channel/show. 
If everything is working properly, you should be able to enjoy your video content on the ODROID-C2. For comments, questions, or suggestions, please visit the original thread at. Android Nougat Impress YOUR FRIENDS WITH THE Latest ANDROID VERSION by @voodik example, to use a Realtek 8192cu (default), set the following parameter: wlan.modname=8192cu To use Realtek 8188eu: wlan.modname=8188eu To use Ralink RT33XX/RT35XX/ RT53XX/RT55XX: wlan.modname=rt2800usb A ndroid Nougat, which was officially released earlier this. Features • Android 7.0 Nougat Cyanogenmod 14.0 • Kernel 3.10.9 • OpenGL ES 1.1/2.0/3.0 (GPU acceleration) • OpenCL 1.1 EP (GPU acceleration) • Multi-user feature is enabled (Up to 8 users) • On board Ethernet and external USB 3.0 Gigabit Ethernet support • RTL8188CUS , RTL8191SU and Ralink Wireless USB dongle support • USB GPS dongle support • USB tethering • Portable Wi-Fi hotspot • Android native USB DAC support • USB UVC Webcam support • HDMI-CEC support • Selinux Enforce • Bluetooth and USB-3G don’t work yet If you are using a USB GPS dongle, set the correct tty port and speed in build.prop: Installation ro.kernel.android.gps=ttyACM0 To install Android Nougat onto a blank media, you need prepare your eMMC or SD card with an appropriate self-installation image, which is available at. Write the image to your eMMC or SD card using Win32DiskImager (. ly/1Vk9u4o). If you are updating from the CM 13.0 image, copy and paste the following URL into the ODROID-Updater URL section: ODROID-XU3/Android/CM-14.0/Alpha-0.1_10.10.16/update.zip The update process might take up to 20 minutes, so be patient. ro.kernel.android.gps.speed=9600 To enable the ability to shutdown the system without confirmation by long pressing the power button, add the following line: setprop persist.pwbtn.shutdown true If you want to enable the Cloud Print feature, download and install Google Cloud Print Service app from. ly/2e2YyLw. For comments, questions or suggestions, please visit the original post at. Tips To get Wifi working set the correct module name in the build.prop file. For Figure 1 - Android Nougat screenshot ODROID MAGAZINE 39 VIDEO HELPER Accelerated Video Playback for Browsing on the ODROID-C2 WATCH YOUR WEB MEDIA CONTENT IN FULL HD by Adrian Popa I f you’re using your ODROID-C2 as a desktop computer, chances are you’re also surfing the web on it. By now, you may have noticed that inbrowser video playback doesn’t have the performance that you would expect. Although you can view 720p Youtube videos in-browser, it’s choppy, so 360p is the only acceptable resolution where playback is watchable. Other sites simply report that they can’t play any videos. In order to improve video playback quality, two things are needed: Accelerated video decoding, which is handled by the aml-libs package on the C1/C2, and accelerated rendering, which should be handled by the X11 drivers. Unfortunately, Chrome does not support accelerated video decoding for aml-libs though, so playback is done with the CPU instead, causing these slow and choppy experiences on HD video streams. The goal of this article is to offload the video playback to a process which can do it using accelerated hardware and improve the browser’s video playback experience The best approach The best way to get this done is to fix the egg before we have a problem with the chicken in the first place. By this, we mean adding aml-libs support to Chrome’s source code. If you look into Chromium’s documentation (. 
ly/2eTfcww), you can see that a customODROID MAGAZINE 40 built ffmpeg package is used for video decoding. This means that as soon as will already be working. Figure 1 - A diagram of the rendering process The convoluted approach In case the “best approach” method is not yet ready, we’ll have to try something else. The technique in this guide is inspired by work done on Firefox by a friend of mine. A word of caution is that, whose name is Silviu, decided to do something about it instead of throwing money at the problem. He used Firefox and the Greasemonkey plugin to write his own mini-plugin for Firefox and managed to offload video playback to the mpv video player software. He did this all while rendering everything in the browser window itself to give the impression of a cohesive experience. This is what I wanted to try to replicate on ODROIDs at first. What he did was use Greasemonkey to load a custom 1600-line JavaScript function that overrides all HTML5 video elements and replaces them with proxy objects that forward the calls to his plugin. For this he needed to reimplement the whole HTML5 video API which is described at and. His code gets the calls the browser makes to create new video objects, set the video source, and can start, seek, and stop playback. VIDEO HELPER Upon intercepting the messages, the JavaScript code creates a request with a fake content-type which is handled by a plugin library specially designed to be the middleman in this scenario. The JavaScript code doesn’t have the right to read or execute files, but the plugins do, so the plugin is used to start up an mpv instance and communicate with it through a piped connection. The JavaScript code sends play, pause, and seek requests to the plugin, which it then converts them into mpv commands and lets the mpv process do all the heavy lifting. result of all this is that the original player shows the correct status, current position and can be used to control the player in an intuitive way. This is important because a lot of sites have built-in protections against scraping and may query periodically through JavaScript to see the player’s state and check for anything strange taking place. In terms of presentation, using a plugin allows you to create a new operating system window without titlebar decorations, and position it on top of the HTML5 video object. The fun fact is that the window is embedded inside the webpage so it gets hidden out of view when you scroll. MPV can use the --wid parameter to draw its output to an arbitrary window, which is a cool option that works in Windows and Linux alike. This makes the player look like it’s native and unobtrusive. Figure 2 - MPV plays to a Mousepad editor window Also, the plugin allows you to create multiple instances of the player, so that several tabs can play video simultaneously. Unfortunately, the solution I’ve seen was tested only on Firefox and Windows, but with a bit of work, it could be ported to Chromium as well. I have to convince my friend to release his sources so that others could join in. The resulting solution A relatively simple way to get accelerated playback from the browser is to pass the URL of the video to a player which has aml-libs support. Fortunately, after a long and very entertaining discussion on the forum at, we now have a standalone player for the C2 as well as patches to ffmpeg and mpv that could lead to a more diverse player selection in the future. 
However, for our needs, we’ll use c2play, a minimalistic player built especially for the C2 by forum user @ crashoverride (). Our plan consists of the following: 1) create a chrome plugin to send the current URL (or the video element’s URL) to a backend script, and 2) the backend script calls youtube-dl to get the video URL (if needed) and calls the player to play it. Fortunately, Chrome has a native messaging API (http:// bit.ly/1cOVcrU) that lets you communicate with external processes (which it’s kind enough to spawn for you), so this is what we’ll do. To install c2play, you will need to consult the latest instructions from the ongoing support thread at. ly/2eXOp0s, since development is progressing rapidly. But at the time of this writing, you can install it with the following commands: $ git clone -b beta1 $ sudo apt-get install libasound2-dev \ libavformat-dev libass-dev libx11-dev premake4 $ cd c2play $ premake4 gmake Figure 3 - Firefox proof-of-concept with skinned mpv controls $ make c2play-x11 $ sudo cp c2play-x11 \ /usr/local/bin/c2play-x11 You’ll also need youtube-dl, which you can get from Ubuntu’s repositories, or from. Youtube-dl is a program that takes a page URL and extracts the video element URL embedded inside. It can do this for a wide range of websites that either use HTML5 or Flash for playback (http:// bit.ly/2d9yknp). I recommend that you install it manually, because changes to sites happen often and you’ll need to be able to update it easily, which can be done with the following set of commands: ODROID MAGAZINE 41 VIDEO HELPER $ sudo curl -L \ \ -o /usr/local/bin/youtube-dl $ sudo chmod a+rx \ /usr/local/bin/youtube-dl Finally, to install the Chrome plugin that ties all of these together, follow these steps: $ git clone. c2.video.helper.git $ cd odroid.c2.video.helper $ sudo apt-get install libjson-perl \ libproc-background-perl libconfig-simple-perl To install the plugin, you will need to navigate inside Chrome to chrome://extensions/. Next, drag and drop from a file browser the extension located in odroid.c2.video.helper/ release/odroid.c2.video.helper-1.1.crx (or version 1.2, as described below) into the Chrome window, and it will prompt you to install it. player next to your address bar. If you can make a better icon, please contribute to the support thread referenced at the end of the article. After activating the plugin, when you navigate to a website that has a video embedded, you can press this button, and the tab’s URL will be passed to the backend script. The backend script will run it through youtube-dl, which can take about 5-6 seconds, and will obtain the URL of the video. It will then call the preferred player with this URL, and playback should start. A video of the installation and playback process is available at. At the time of writing, I only tested it with c2play-x11 with playback in fullscreen. In order to control it, consult the help page at. Adding windowed playback support is on @crashoverride’s todo list () and will probably become a reality in the near future. Also, @ LongChair’s mpv port could be used for playback in the future. For now I’ve tried with the stock mpv, and playback is fine for SD content, but 720p seems to lag. You can view a demo at. Figure 4 - Installing our plugin Once it is installed, continue to install the backend script: $ Figure 5 - Push the button! Figure 6 - Non-accelerated playback in mpv The backend script has a configuration file stored at ~/.odroid.c2.video.helper.conf using an ini syntax. 
The backend script has a configuration file stored at ~/.odroid.c2.video.helper.conf, which uses an ini syntax. With this file, you can disable debugging, which is on by default. You can also change the player and set custom parameters, such as quality, for the various sites supported by youtube-dl. For instance, the default config file sets the YouTube quality to 720p (-f 22) and disables playlist support. You can add new sections, which must have a name that matches the URL's domain. Also, the section name must not contain the dot character ("."), because it is not well supported by the configuration parser module. To view debugging information, you can run:

$ sudo journalctl -f

Figure 7 - A configuration sample

Playing 1080p or 4K from YouTube is a bit more problematic because, for resolutions higher than 720p, YouTube splits video and audio into two distinct streams, and their player merges them in the client browser. This doesn't happen on other sites such as Vimeo. Fortunately, @crashoverride also has a branch called dualstream that tries to support this, as described in the support thread. Preliminary tests show that he is on the right track. I was able to play 4K video with his player, but playback unfortunately stalled after several seconds. This will likely improve in the near future.

What should you expect to work, and what not? Sites supported by youtube-dl, such as YouTube, Vimeo, Facebook, IMDB, Engadget, DailyMotion, TED, Cracked, Apple Trailers, 9gag TV, and a variety of adult sites, should all work, due to how the plugin connects and processes the stream, relying on youtube-dl for site support. However, I have only tested and can guarantee YouTube and Vimeo; some other pages I tried would not play. Digging through those pages revealed why: these sites use a technique called "media-source" (http://bit.ly/2eWmfnG), which uses JavaScript to make requests and stream the video, so we can't extract a URL to pass on for playback.

Figure 8 - Normal video object vs media-source

For sites that use a video element with a real URL as the source, there is version 1.2 of the plugin, available at http://bit.ly/2eBVtCZ. All this wouldn't have been possible without the hard work of @crashoverride and @LongChair, so send them a big thank you!

Meet an ODROIDian: Joachim Althof
edited by Rob Roy

Please tell us a little about yourself.
My name is Joachim Althof, and I am 32 years old. I live near Hanover in Germany along with my wife Katrin and our two daughters. I did my studies in Mechatronics at the University of Applied Sciences in Lemgo, which is a tiny town in northwestern Germany. After finishing my studies, I worked for a company near the border with Denmark, where I wrote embedded microcontroller software for relatively large grid inverters (>100 kW) and other kinds of photovoltaic and wind-power converters. Five years later, I went back to my former hometown and started to work for a new company. I am the first, and so far only, software developer for small PMSM inverters (<10 kW), and am responsible for many other things regarding development coordination.

How did you get started with computers?
I gained my first computer experience with my father's Commodore 64 back in the early 1990s. Thinking back, this was really old-school, with an almost round-shaped greenscale monitor that flickered a lot, a reset button, and a homemade loudspeaker, along with a 9-pin parallel-port printer with gray "endless paper". At first, I had absolutely no clue about what was going on when I typed <load "$",8,1>. And honestly, I didn't really care, as long as the games started.
A short while later, I started to be more interested in understanding what was going on inside of that gray box. I even got GEOS running, which was a type of graphical OS for the C64, and managed to install the printer driver. When I was around 12, I got my first i486 computer running DOS. Due to the fast hardware development at that time, that device was slowly upgraded piece by piece and became my first "serious" computer for years. At home, my wife says that I am "responsible for everything with a cable".

What attracted you to the ODROID platform?
When I started with SBCs, the Raspberry Pi was already widespread. I wanted to join the hype, but desired something different and more powerful than the RPi. I decided to purchase a Cubieboard2, which was brand-new in those days. I learned a lot and met some really nice people in the forum. However, I was soon slightly disappointed by the company, the community and the product, so I searched for a new platform to play around with, and that's how I discovered the ODROID platform. I was impressed by the hardware specs and the reasonable price, so I bought my first ODROID, which was the C1 model. I ported all of my projects onto it, and it was fun to see what it could do, and also to discover all its quirks. I was very satisfied with the performance, because it did everything that I wanted it to do. When the C2 came out, I said to my wife, "Look at this! We need this one! More is better!". Admittedly, she didn't really care, but I bought my first C2, and a little bit later my second C2 arrived, accompanied by my wife shaking her head at me.

Joachim sitting on the roof of a wind turbine in Estonia during a commissioning trip in April 2015

I have found that the ODROID community is a very active one. Of course, there are some more and some less active members, but in general, I was happily surprised by the community. What I highly appreciate is the fact that even the administrators and Hardkernel members contribute to the forum. To me, it looks like there is an active collaboration between the Hardkernel members and the internet community, which is priceless. The best board is worthless without proper support.

How do you use your ODROIDs?
My first approach was to get all of my projects running on one single board. My original C1 was a network file server, a media server, a media player, an SVN server, a gaming machine, an Ambilight-maker, a development board, and an orbital defense cannon. In Germany, we would call it an "egg-laying woolmilk-sow"! However, every time I tinkered a little bit too hard, several things broke at the same time, which prompted a low Wife Acceptance Factor (WAF). I decided to decentralize my system, which resulted in the need for more ODROIDs, which prompted my wife to say "Wait, ... what was THIS thing for?!" Now I have my C1 as a dedicated server device, which will be ported to a V2 over time, one C2 as a media player running LibreELEC, and one C2 to play around with. Later I will use one of the ODROIDs as a retro-gaming platform again.

Joachim's home server powered by an ODROID-C1, all tightly fitted into an old ATX power supply case

Which ODROID is your favorite and why?
At the moment, I only have experience with the C1 and C2, but my favorite is the C2. It is really powerful for the price, has a small form factor, is versatile, and is well supported. In my use cases, both boards perform very well. I think there will be even more projects and possibilities once the 64-bit ARM architecture becomes more mature. I also like the I/O header of the C1 and the C2, because it makes hardware tinkering easy. Whether it is necessary to have the I/O header be Raspberry Pi-compatible is another story.

What innovations would you like to see in future Hardkernel products?
That's a good question, because many features are already there. I'd like onboard WiFi, or maybe USB 3.0, which is already available on the XU4. The most important thing to me is good support for the features, since I had to struggle with the buggy USB drivers on the C1. Also, something similar to the RPi Zero would be great. I know that Hardkernel offers the C0, but this is not comparable with the RPi Zero in many ways. I think some people could use a MIPI-CSI port as well, especially if it were compatible with the RPi. This could open up a huge new field for ODROID applications.

A homemade PCB for a 32-channel PWM, powered by an ATTiny85 via SPI and a bunch of shift registers, for a ceiling lamp in his daughter's room that makes it look like a night sky full of sparkling stars

What hobbies and interests do you have apart from computers?
I like doing sports like jogging and Freeletics. I am a fan of metal music, and play drums. My singing is also not bad, and I like to go to music concerts and festivals. During the dark and cold days of winter, I work on an embedded project with AVR microcontrollers and a lot of soldering. Most importantly, I enjoy doing nothing, as well as spending free time with friends and my family.

What advice do you have for someone wanting to learn more about programming?
It depends on the specific kind of programming. I think that learning embedded C is a difficult way to start, because it is less "visual" than writing software for a PC. You can only see or measure your programming results on hardware I/Os. However, it's worth the effort when you think about the possibilities! Start with something simple like an LED blink loop. Don't be afraid of pointers, although they're not necessary for beginning programmers. Always keep in mind how powerful software can be. The software is the soul of a device, and to bring life to a piece of dead hardware can be awesome! Also, knowing that something can explode by setting a wrong bit can be very exciting and scary. On the other hand, programming shell scripts or C# can be a good way to start, because it is more visual than embedded programming. It really depends on what the application is. Most important is to just do it! Show some self-initiative, and people will help you as you help yourself.

Joachim and his girls having some fun at the fair, although they enjoyed it more than he did
https://manualzz.com/doc/29065816/create-a-synchronized-video-light-display-%E2%80%A2-android-nougat
CC-MAIN-2020-40
refinedweb
18,536
59.74
Some Problems with MMS download and with 'packed mode'

I am using MMS sensors with Python code, but I have several problems:

The sensors have Bluetooth connection problems. Some sensors are able to stream data from up to about 1 m away from the dongle, while others lose the connection if they are moved even a few cm away. Can you tell me whether this is a problem with the specific sensors, or whether I can do something to improve the connection?

When I use the sensors in streaming mode, I would like to send the data in packets, and I saw that there is a packed mode for sending multiple samples in one Bluetooth packet. However, when I use the example code for the gyroscope in packed mode, I get a "segmentation fault" error. Can you tell me how to solve this? I would also like to know whether packed mode can be used in logging mode (I tried, but failed).

When I use the sensors in logging mode, I usually run quite long tests (more than 1 h). When downloading the data, the download takes almost twice as long as the test itself (I sample accelerometers and gyroscopes at 100 Hz). Is there a way to download the data faster? To do this, I tried to connect the sensor via USB (I saw that there is a new function for the MMS in the Python scripts). However, every time I connect the sensor it takes more than a minute to connect, and once disconnected it is no longer detected over Bluetooth (I was able to reconnect only after doing two manual resets, and this method does not always work). Also, even over USB, the data download time is similar to Bluetooth. Can you tell me if I'm doing something wrong?

Can anyone help me? Do you need more information?

from mbientlab.metawear import MetaWear, libmetawear, parse_value, create_voidp, create_voidp_int
from mbientlab.warble import *
from mbientlab.metawear.cbindings import *
from ctypes import c_void_p, cast, POINTER
from time import sleep
from threading import Event
import six
import platform
import sys
from sys import argv
import csv
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime


def retry_connection(d):
    print("---------------------------------- Trying CONNECTION ----------------------------------")
    sleep(3.0)
    try:
        d.connect()
    except:
        return retry_connection(d)


if sys.version_info[0] == 2:
    range = xrange


class State:
    def __init__(self, device):
        self.device = device
        self.samples = 0
        self.analog_samples = 0
        self.e = Event()
        self.timer = None
        self.callback = FnVoid_VoidP_DataP(self.data_handler)
        self.analog_callback = FnVoid_VoidP_DataP(self.analog_data_handler)
        self.an_signal = signal = libmetawear.mbl_mw_multi_chnl_temp_get_temperature_data_signal(
            self.device.board, MetaWearRProChannel.ON_BOARD_THERMISTOR)
        self.processor = None

    # Acc/Gyro callback
    def data_handler(self, ctx, data):
        values = parse_value(data, n_elem=2)
        with open('./Data/dataacc_' + self.device.address + '.csv', 'a+', newline='') as file:
            writer = csv.writer(file, delimiter=';')
            writer.writerow([datetime.timestamp(datetime.now()), values])
        # with open('./Data/datagyr_' + self.device.address + '.csv', 'a+', newline='') as file:
        #     writer = csv.writer(file, delimiter=';')
        #     writer.writerow([datetime.timestamp(datetime.now()), values[1].x, values[1].y, values[1].z])
        libmetawear.mbl_mw_datasignal_read(self.an_signal)
        self.samples += 1

    # Analog callback
    def analog_data_handler(self, ctx, data):
        values = parse_value(data)
        with open('./Data/analog_' + self.device.address + '.csv', 'a+', newline='') as file:
            writer = csv.writer(file, delimiter=';')
            writer.writerow([datetime.timestamp(datetime.now()), values])


# search for nearby devices
devices = None
array = sys.argv
del array[0]
print(array)

states = []

# connect
for a in array:
    device = MetaWear(a)
    device.connect()
    print("Connecting to " + device.address)
    print("CONNECTION STATE " + str(device.is_connected))
    states.append(State(device))

# configure
for s in states:
    # print("Configuring %s" % (s.device.address))
    s.setup()

# start
for s in states:
    s.start()

print("Streaming data")
stop = -1
while stop == -1:
    msg = "Press s to stop data recording: "
    selection = raw_input(msg) if platform.python_version_tuple()[0] == '2' else input(msg)
    if selection == "s":
        stop = 0

print("-----STOP STREAMING-----")
for s in states:
    if s.device.is_connected:
        print("connected to " + s.device.address)
        s.stop()
    else:
        while not s.device.is_connected:
            retry_connection(s.device)
        s.stop()

# reset
print("Resetting devices")
events = []
for s in states:
    e = Event()
    events.append(e)
    s.device.on_disconnect = lambda s: e.set()
    libmetawear.mbl_mw_debug_reset(s.device.board)
for e in events:
    e.wait()

# recap
print("Total Samples Received")
for s in states:
    print("%s -> %d" % (s.device.address, s.samples))

This is my code. Unfortunately, I can't download or stream data in packed mode to get a faster download.

@giovanni_dq Make sure that you install the latest version of MetaWear Python from PyPI to be sure your devices are connecting over USB. The example scripts have also been updated to indicate what type of connection has been established, and you may want to add something similar to your script. Here is an example:

Packed mode is meant only for streaming, and will have a detrimental effect if logged. For an application that needs to switch between modes, you can log the unpacked version and stream the packed version. It is still advisable not to stream while downloading, as it may result in packet loss, but if this is a hard requirement you can try to test it in your setup. If the total throughput is not too high, it may work okay for you, especially when operating with USB downloads.

Hi, the same thing happens to me with packed mode. When I use the packed-mode gyro streaming example code, it gives me a "segmentation fault", which prevents me from using the MMS gyro in this mode. I have checked that everything is up to date, but the error persists when I acquire the gyro, whereas everything seems to work fine when I acquire the accelerometer.
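For reference, a minimal packed-mode streaming sketch along these lines would look roughly as follows. It is only an illustration, not MbientLab's official example: the MAC address, ODR, and range values are placeholders, and it assumes the packed acceleration data signal exposed by the MetaWear C API (mbl_mw_acc_get_packed_acceleration_data_signal). Packed signals are for streaming only, so for logging you would subscribe to the regular acceleration signal instead, as suggested in the reply above.

# Minimal sketch: stream the packed accelerometer signal from one MMS board.
# Placeholder MAC address and illustrative accelerometer settings.
from mbientlab.metawear import MetaWear, libmetawear, parse_value
from mbientlab.metawear.cbindings import FnVoid_VoidP_DataP
from time import sleep

device = MetaWear("AA:BB:CC:DD:EE:FF")   # placeholder MAC
device.connect()

samples = []

def handler(ctx, data):
    # With the packed signal, each BLE packet carries three samples; as far
    # as I understand the C API, the handler is invoked once per sample.
    samples.append(parse_value(data))

callback = FnVoid_VoidP_DataP(handler)   # keep a reference so it isn't GC'd

# Configure the accelerometer (values are illustrative).
libmetawear.mbl_mw_acc_set_odr(device.board, 100.0)
libmetawear.mbl_mw_acc_set_range(device.board, 8.0)
libmetawear.mbl_mw_acc_write_acceleration_config(device.board)

# Subscribe to the packed signal (streaming only; do not log this signal).
signal = libmetawear.mbl_mw_acc_get_packed_acceleration_data_signal(device.board)
libmetawear.mbl_mw_datasignal_subscribe(signal, None, callback)

libmetawear.mbl_mw_acc_enable_acceleration_sampling(device.board)
libmetawear.mbl_mw_acc_start(device.board)
sleep(10.0)

libmetawear.mbl_mw_acc_stop(device.board)
libmetawear.mbl_mw_acc_disable_acceleration_sampling(device.board)
libmetawear.mbl_mw_datasignal_unsubscribe(signal)
device.disconnect()
print("received %d samples" % len(samples))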
https://mbientlab.com/community/discussion/4126/some-problems-with-mms-download-and-with-packed-mode
CC-MAIN-2022-40
refinedweb
954
58.08