The past couple of weeks I wrote several kinds of tests for my Spring Boot application. We started off with some integration tests for our REST service, and today I'm writing the last part of the series with some functional/integration tests for the application using Selenium.

Maven dependencies

When you ask a developer about functional testing, a lot of them will probably think "Selenium", and in this case it's no different. We will be using Selenium, but we will also use FluentLenium, a framework that acts as a wrapper around Selenium but provides a more fluent API to make it a lot easier to test your web application. Selenium is not a testing framework though; Selenium is a browser automation framework, so we will need two additional things:

- A web browser
- A testing framework

For our web browser I will be using PhantomJS, a scriptable/CLI based browser running on the same engine as Safari and previously also Google Chrome, called WebKit. Selenium works with drivers, so we need to install an additional WebDriver for PhantomJS, called Ghost Driver. The testing framework I will be choosing is AssertJ. The dependencies I have to add are the following:

<dependency>
  <groupId>org.fluentlenium</groupId>
  <artifactId>fluentlenium-assertj</artifactId>
  <version>0.10.3</version>
</dependency>
<dependency>
  <groupId>com.github.detro</groupId>
  <artifactId>phantomjsdriver</artifactId>
  <version>1.2.0</version>
</dependency>
<dependency>
  <groupId>xml-apis</groupId>
  <artifactId>xml-apis</artifactId>
  <version>1.4.01</version>
</dependency>

The following dependencies are also necessary, but if you followed any of my other recent tutorials, you probably already have these:

>1.7.0</version>
<scope>test</scope>
</dependency>

Maven plugin

To run your integration tests using PhantomJS, you can use the phantomjs-maven-plugin. I already used this plugin before in my tutorial about executing Jasmine tests using Maven, but you will also need it here.

<plugin>
  <groupId>com.github.klieber</groupId>
  <artifactId>phantomjs-maven-plugin</artifactId>
  <version>0.4</version>
  <executions>
    <execution>
      <goals>
        <goal>install</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <version>1.9.8</version>
  </configuration>
</plugin>

Preparing the integration test

I already wrote a Spring Boot integration test before, and the setup is quite similar to what I explained in that tutorial. Before actually writing the test we will need to tell the test to run our application first, which we can do by specifying some annotations on top of our test, like this:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
@IntegrationTest("server.port:0")
public class ApplicationIT extends FluentTest {
}

By adding these annotations we're telling our test that it should be run using the SpringJUnit4ClassRunner, which allows you to run your tests using a Spring context. The @SpringApplicationConfiguration annotation tells the runner how to bootstrap the application, while the other annotations are necessary for setting up the integration test. Finally, we're also inheriting from FluentTest, which provides the FluentLenium API. By using the @IntegrationTest annotation together with server.port:0, we're telling Spring Boot to use a random port on startup.
This means we have to retrieve the port number somehow, which we can do by writing the following code:

@Value("${local.server.port}")
private int serverPort;

We also have to set up our WebDriver:

private WebDriver webDriver = new PhantomJSDriver();

To use this driver inside our tests, we will have to override the getDefaultDriver() method from FluentTest:

@Override
public WebDriver getDefaultDriver() {
  return webDriver;
}

Then I'm going to set up some model objects which I can insert into our datasource. We also have to autowire the repository:

@Autowired
private ItemRepository repository;

And finally, we can insert FIRST_ITEM and SECOND_ITEM before each test:

@Before
public void setUp() {
  repository.deleteAll();
  repository.save(Arrays.asList(FIRST_ITEM, SECOND_ITEM));
  repository.flush();
}

Writing a Sele.. err… FluentLenium test

Let's start by writing an easy test first. Our application has a header called "A checklist". To verify that the header indeed exists, we can write the following test:

private String getUrl() {
  return "http://localhost:" + serverPort;
}

@Test
public void hasPageTitle() {
  goTo(getUrl());
  assertThat(find(".page-header").getText()).isEqualTo("A checklist");
}

First of all we use the goTo() method to tell Selenium where to go. We're using a separate method getUrl() for this so that we can reuse it in our other tests. Then we use the find() method, providing a CSS selector to find our header. Finally, we verify that the text is equal to the text inside our header using assertThat(). As you can see, you can write some really easy to read tests using FluentLenium.

Waiting for a response

The application I'm going to test uses AJAX to load the data. Obviously, this data is not immediately available, so we will have to wait for it. We can use await() to wait for a specific thing to happen:

@Test
public void hasTwoItems() {
  goTo(getUrl());
  await().atMost(5, TimeUnit.SECONDS).until(".checkbox").hasSize(2);
  assertThat(find(".checkbox").getTexts()).containsOnly(FIRST_ITEM_DESCRIPTION, SECOND_ITEM_DESCRIPTION);
  assertThat(find(".checkbox").first().find(":checked")).isNotEmpty();
  assertThat(find(".checkbox").get(1).find(":checked")).isEmpty();
}

What happens here is that we wait 5 seconds at most, until there are two checkbox items, which makes sense, since we inserted two records in our datasource before. We can then test whether the descriptions match, and whether the first item is checked and the second isn't.

Testing a specific action

The next thing you can do is test certain actions, for example clicking on a button. In our application we have a delete button next to each item to delete it. The delete button causes a server-side action again, so we're going to wait for it to happen as well:

@Test
public void hasOneItemAfterDeleting() {
  goTo(getUrl());
  await().atMost(5, TimeUnit.SECONDS).until(".checkbox").hasSize(2);
  find(".form-group").first().find("button").click();
  await().atMost(5, TimeUnit.SECONDS).until(".checkbox").hasSize(1);
  assertThat(find(".checkbox").getTexts()).containsOnly(SECOND_ITEM_DESCRIPTION);
  assertThat(repository.findAll()).hasSize(1);
}

So, what happens here is that we look for the first item, then click its button (which is the delete button). After clicking it, we wait for at most 5 seconds again until there is only one item left. We can then test whether the remaining item has the correct item description (so we didn't delete the wrong one), and we can even use the repository to verify that an item has actually been deleted.
A similar thing can be done for checking/unchecking an item:

@Test
public void hasTwoCheckedItemsAfterCheckingBoth() {
  goTo(getUrl());
  await().atMost(5, TimeUnit.SECONDS).until(".checkbox").hasSize(2);
  find(".checkbox").get(1).find("input[type=checkbox]").click();
  assertThat(find(".form-group :checked")).hasSize(2);
  assertThat(repository.findChecked()).hasSize(2);
}

In this case we're clicking a specific checkbox and verifying that there are now indeed two items checked. Finally we can use the repository to verify that two items are indeed checked.

Submitting a form

One specific action is to enter some data and submit it, which isn't too hard with Selenium/FluentLenium either:

@Test
public void hasThreeItemsAfterAddingOne() {
  goTo(getUrl());
  await().atMost(5, TimeUnit.SECONDS).until(".checkbox").hasSize(2);
  fill(".input-group input[type=text]").with(THIRD_ITEM_DESCRIPTION);
  submit("form");
  await().atMost(5, TimeUnit.SECONDS).until(".checkbox").hasSize(3);
  assertThat(find(".checkbox").getTexts())
      .containsOnly(FIRST_ITEM_DESCRIPTION, SECOND_ITEM_DESCRIPTION, THIRD_ITEM_DESCRIPTION);
  assertThat(repository.findAll()).hasSize(3);
}

In this case we're using the fill() method to find our textbox and fill it with THIRD_ITEM_DESCRIPTION. We can then call the submit() method to submit the form. We have only one form on the page, so we can use a simple selector. Submitting the form executes another AJAX request, but by now we already know how to react to it by using await(). In this case we're waiting until there are three items (the 2 we started with plus the 1 we just added). Then finally we can use the same selector as the one we started with to verify the text of those items and check that all three descriptions are there. Finally we can also verify that the item has been stored in our database by checking the repository.

Testing it out

The first way to test it out is by running Maven. You can use Maven profiles if you want (like I did), and then run the test. You don't have to install additional software because PhantomJS is loaded through the plugin. If you want to run the tests from within your IDE, you will have to install PhantomJS manually. You then have to add an additional VM argument; if you're using Eclipse you can do that by opening the Run Configurations… window. Then you create a new JUnit configuration and tell it which project and test class it has to run. Additionally you will have to configure a specific VM argument containing the location of your PhantomJS executable; for me it is:

-Dphantomjs.binary.path=/usr/local/phantomjs-1.9.8/bin/phantomjs

After saving it you can test your configuration out, which should give you the same result.

Achievement: Master in testing applications

If you're seeing this and you read my other tutorials about testing, then you can call yourself a real testing master. If you're interested in the full code example, you can find it on GitHub. If you want to try out the code yourself, you can download an archive from GitHub.
http://g00glen00b.be/spring-boot-selenium/
CC-MAIN-2017-26
en
refinedweb
I'm falling a little behind in my Java class (pun not intended) and I'm slightly confused about secondary methods (if that's what they're called). It's an online class so there is minimal help from the teacher. Anyways, we're creating a Rock Paper Scissors game in an RPS class (all code in one file). To quote the assignment:

main() will be part of the RPS class, fairly small, and contain only a loop that asks the user if they want to play "Rock, Paper, Scissors". If they say yes, it calls the static method play() of the RPS class. If not, the program ends. play() has no parameters, and no return value. Since everything is in a single class, it should separate code into methods for organization.

You will need the main method, "public static void main(...)"; however, if you want to make any extra methods that are not contained within another (instance) class, then those methods need to be declared as static as well. So it would look something like this...

public class RPS {
    public static void main(String[] args) {
        play();
    }

    private static void play() {...}
}
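For illustration, here is a minimal, self-contained sketch of how that skeleton could be filled in. The prompt wording, the Scanner-based input handling, and the moves array are my own assumptions, not part of the assignment:

import java.util.Random;
import java.util.Scanner;

public class RPS {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        // Loop until the user answers anything other than "yes"
        while (true) {
            System.out.print("Do you want to play Rock, Paper, Scissors? (yes/no) ");
            if (!in.nextLine().trim().equalsIgnoreCase("yes")) break;
            play();
        }
    }

    // Static, so it can be called from main() without creating an RPS instance
    private static void play() {
        String[] moves = {"Rock", "Paper", "Scissors"};
        String computer = moves[new Random().nextInt(moves.length)];
        System.out.println("Computer picked: " + computer);
        // ... read the player's move and decide the winner here ...
    }
}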
https://codedump.io/share/wGmR0lEoxmoq/1/calling-a-static-method-play-java
CC-MAIN-2017-26
en
refinedweb
MonoTouch.UIKit.UIPageControl.UIPageControlAppearance Class

Appearance class for objects of type UIPageControl.

See Also: UIPageControl+UIPageControlAppearance

Syntax

public class UIPageControl.UIPageControlAppearance : UIControl+UIControlAppearance

Remarks

This appearance class is a strongly typed subclass of UIAppearance that is intended to be used with objects of class UIPageControl. You can obtain an instance of this class by either accessing the static UIPageControl.Appearance property on the UIPageControl class or by calling the UIPageControl.AppearanceWhenContainedIn method to obtain an appearance proxy for instances contained in a given view hierarchy. The members of UIPageControl.UIPageControlAppearance are listed below.

See Also: UIControl+UIControlAppearance
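As a hedged illustration (this snippet is not part of the original reference page, and the tint colors are arbitrary), setting properties on the static Appearance proxy typically looks like this:

using MonoTouch.UIKit;

public static class PageControlStyling
{
    public static void Apply()
    {
        // Properties set on the appearance proxy apply to every
        // UIPageControl instantiated afterwards.
        UIPageControl.Appearance.PageIndicatorTintColor = UIColor.LightGray;
        UIPageControl.Appearance.CurrentPageIndicatorTintColor = UIColor.Black;
    }
}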
https://developer.xamarin.com/api/type/MonoTouch.UIKit.UIPageControl+UIPageControlAppearance/
CC-MAIN-2017-26
en
refinedweb
Since a total function is a special case of a partial function, I think I should be able to return a function when I need a partial. E.g.:

def partial : PartialFunction[Any,Any] = any => any

def partial : PartialFunction[Any,Any] = { case any => any }

You could use the PartialFunction.apply method:

val partial = PartialFunction[Any,Any]{ any => any }

You could import this method if you want to make it shorter:

import PartialFunction.{apply => pf}

val partial = pf[Any,Any]{ any => any }
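A small sketch of why this works (the demo program is illustrative, not from the thread): PartialFunction.apply lifts a total Function1 into a PartialFunction that is defined for every input, so isDefinedAt always returns true:

object PartialDemo extends App {
  // Lift a total function into a PartialFunction via PartialFunction.apply
  val partial: PartialFunction[Any, Any] = PartialFunction[Any, Any] { any => any }

  println(partial.isDefinedAt(42))      // true: defined everywhere
  println(partial.isDefinedAt("text"))  // true
  println(partial(42))                  // 42
}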
https://codedump.io/share/fw9J4R03Fmm3/1/scala-total-function-as-partial-function
CC-MAIN-2017-26
en
refinedweb
How can I send only part of a file into the STDIN of another process?

# Python 3.5
import io
from subprocess import PIPE, Popen, STDOUT

fh = io.open("test0.dat", "rb")
fh.seek(10000)
p = Popen(command, stdin=fh, stdout=PIPE, stderr=STDOUT)
p.wait()

The problem is that when passed a handle, Popen tries to get its fileno and uses real OS handles, so it's not possible to fool it easily with other file-like objects. But you can set Popen's stdin to PIPE, write only the correct number of bytes to it, possibly in small chunks, at the rhythm you choose, and close it afterwards:

import io
from subprocess import PIPE, Popen, STDOUT

fh = io.open("test0.dat", "rb")
fh.seek(10000)
p = Popen(command, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
p.stdin.write(fh.read(1000))
# do something & write again
p.stdin.write(fh.read(1000))
# now it's enough: close the input
p.stdin.close()
p.wait()

Be careful though: since stdin and stdout both use PIPE, you have to consume stdout to avoid deadlocks.
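One hedged way to do that consuming is communicate(), which drains the pipes for you. This sketch reuses the file name, offset, and the undefined `command` placeholder from the question; the chunk size is arbitrary:

import io
from subprocess import PIPE, Popen, STDOUT

with io.open("test0.dat", "rb") as fh:
    fh.seek(10000)
    chunk = fh.read(2000)  # only the part of the file we want to send

p = Popen(command, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
# communicate() writes the input, reads stdout until EOF, closes the
# pipes, and waits for the process, so no deadlock can occur.
out, _ = p.communicate(input=chunk)
print(out)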
https://codedump.io/share/nhz8HchkdHfA/1/can-i-override-default-read-method-in-iobufferedreader
CC-MAIN-2017-26
en
refinedweb
NAME

connect - initiate a connection on a socket

SYNOPSIS

#include <sys/types.h> /* See NOTES */
#include <sys/socket.h>

int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen);

DESCRIPTION

The connect() system call connects the socket referred to by the file descriptor sockfd to the address specified by addr. The addrlen argument specifies the size of addr.

RETURN VALUE

If the connection or binding succeeds, zero is returned. On error, -1 is returned, and errno is set appropriately.

ERRORS

The following are general socket errors only; there may be other domain-specific error codes.

- EISCONN - The socket is already connected.
- ENETUNREACH - Network is unreachable.
- ENOTSOCK - The file descriptor is not associated with a socket.
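To make the calling convention concrete, here is a minimal hedged example of a TCP client using connect(); the server address 127.0.0.1:8080 is arbitrary, not from the man page:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd == -1) {
        perror("socket");
        return EXIT_FAILURE;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                 /* arbitrary example port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    /* connect() returns 0 on success, -1 on error with errno set */
    if (connect(sockfd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("connect");
        close(sockfd);
        return EXIT_FAILURE;
    }

    printf("connected\n");
    close(sockfd);
    return EXIT_SUCCESS;
}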
https://manpages.debian.org/jessie/manpages-dev/connect.2.en.html
CC-MAIN-2017-26
en
refinedweb
#include "nsRuleNetwork.h"
#include "nsFixedSizeAllocator.h"
#include "nsTemplateMatch.h"
#include "pldhash.h"

If the set is currently. Definition at line 209 of file nsTemplateMatchSet.h.

The set is implemented as a dual data structure. It is initially a simple array that holds storage for kMaxInlineMatches elements. Once that capacity is exceeded, the storage is re-used for a PLDHashTable header. The hashtable allocates its entries from the normal malloc() heap.

The InlineMatches structure is implemented such that its mCount variable overlaps with the PLDHashTable's `ops' member (which is a pointer to the hashtable's callback table). On a 32-bit architecture, we're safe assuming that the value for `ops' will be larger than kMaxInlineMatches when treated as an unsigned integer. And we'd have to get pretty unlucky on a 64-bit system for us to get screwed, I think.

Instrumentation (define NSTEMPLATEMATCHSET_METER) shows that almost all of the match sets contain fewer than seven elements.

Definition at line 232 of file nsTemplateMatchSet.h.
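The inline-array-then-hashtable trick described above is a form of small-size optimization. As a rough, generic C++ sketch of the idea (not the actual Mozilla code — the names and sizes are invented, and this sketch keeps the count and table pointer as separate members rather than overlapping them as the original does):

#include <cstddef>
#include <unordered_set>

// Generic sketch: store up to kMaxInline elements in a fixed array,
// and spill into a heap-allocated hash set once that fills up.
template <typename T, std::size_t kMaxInline = 8>
class SmallSet {
public:
    ~SmallSet() { delete mTable; }

    void Add(const T& value) {
        if (mTable) {
            mTable->insert(value);
        } else if (mCount < kMaxInline) {
            mInline[mCount++] = value;
        } else {
            // Capacity exceeded: move inline storage into a hash table.
            mTable = new std::unordered_set<T>(mInline, mInline + mCount);
            mTable->insert(value);
        }
    }

private:
    T mInline[kMaxInline];
    std::size_t mCount = 0;
    std::unordered_set<T>* mTable = nullptr;  // null while inline storage suffices
};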
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/ns_template_match_set_8h.html
CC-MAIN-2017-51
en
refinedweb
Hello, I am new to C++ and I am trying to understand the Subject-Observer pattern. Here is what I have been working on, based on an example from the design patterns book.

#include <vector>
#include <iostream>

class Subject;

class Observer {
public:
    virtual void notify(Subject* s) = 0;
};

class Subject {
    std::vector<Observer*> *observers;
protected:
    void notify_observers() {
        std::vector<Observer*>::iterator iter;
        for (iter = observers->begin(); iter != observers->end(); ++iter)
            (*iter)->notify(this);
    }
public:
    void register_observer(Observer* o) {
        observers->push_back(o);
    }
};

class Horn : public Observer {
public:
    virtual void notify(Subject* s) {
        std::cout << "Horn with id " << s->get_alarm_id << " is sounding\n";
    }
};

class Alarm : public Subject {
public:
    Alarm() {
        std::cout << "alarm created" << "\n";
    }
    void triggerd() {
        std::cout << "The alarm has been triggerd" << "\n";
        notify_observers();
    }
    int const get_alarm_id() {
        return 100;
    }
};

int main() {
    Alarm a = Alarm();
    Horn h = Horn();
    a.register_observer(&h);
    a.triggerd();
    return 0;
}

Basically I want to push the subject class to the observer so the observer can get information from the subject. However, when the notify method is called it gets the Subject object, not the Alarm object. Can someone tell me how to ensure that the Alarm object is getting passed?
https://www.daniweb.com/programming/software-development/threads/269006/understanding-subject-observer
CC-MAIN-2017-51
en
refinedweb
I need to write a function in C that reads a small text file, stores every character in a variable, and then stops reading when it reaches EOF. I assume I need to use fread(); I need precise info on my problem. I have been searching around, only finding which parameters go into fread(), but no luck on how to loop until end of file. Any help will be greatly appreciated.

I'm not much on programming, but here is a starter.

do {
    what ever crap you need in here
} while !EOF

that should help a little until someone else can reply, sorry it's not more

The only limit a person has, is the limit they give themselves.
Cogito ergo sum. - Descartes

I find it easier to use the fscanf function, which is similar to the scanf function used to read from standard in. First you need to open a file stream (this opens test.txt for read-only):

FILE * pFIn = fopen("test.txt", "r");

This moves the file pointer to the start of the file:

fseek(pFIn, 0L, SEEK_SET);

This scans a character from the file and stores it in the myChar variable of type char:

iVal = fscanf(pFIn, "%c", &myChar);

You would then loop while iVal does not equal EOF. iVal normally contains the number of items read, or EOF on end of file.

FILE * pFIn = fopen("test.txt", "r");
fseek(pFIn, 0L, SEEK_SET);
iVal = fscanf(pFIn, "%c", &myChar);
while (iVal != EOF) {
    // Do stuff
    iVal = fscanf(pFIn, "%c", &myChar);
}

Hope this helps!

"When you say best friends, it means friends forever" Brand New
"Best friends means I pulled the trigger / Best friends means you get what you deserve" Taking Back Sunday
Visit alastairgrant.ca

Maybe something like this would work.. I can't remember my C all that well, but something like this would work in C++ and C. Use an array of n size.

i = 0
while scanf array[i]
    i++

Ben Franklin said it best. "They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."

sry i didn't go in detail, it's 1:00 am, and i really should get some sleep

#define MAX 200 // put whatever here...
char text[MAX];
(open file)
for (t = 0; t < MAX; t++) {
    fscanf(FILE, TYPE, &text[t]);
}

<--- 10 min later -----> (edited) here's a simple program that should do what you ask

#include <fstream.h>

main()
{
    ifstream fin("Msdos.sys"); // ignore msdos.sys and replace it with the text file
    cout << "thingy's\n\n\n";
    char ch;
    int t = 0;
    char text[MAX];
    while (fin.get(ch)) {
        text[t] = ch;
        cout << ch;
        t++;
    }
    fin.close();
}

Tha all mity Rodent!

using the fgetc() function is also an option. and to make your programme more dynamic, use a linked list to store the inputs of the file:

struct some_name {
    char file_data;
    struct some_name *next;
};

fgetc may be a little tricky for him to use.. fgetc just calls a character, but he wants to change all the characters into a variable..

You also can try by using a struct of array and combine it with linked list, use some pointer. I hope you understand what I mean ok

struct bla_bla_bla {
    char data;
    int x;
    struct bla_bla_bla *next;
} *head, *curr, *tail;
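Since the thread never actually shows an fread()-based loop, here is a hedged sketch of the usual pattern: read fixed-size chunks until fread() returns 0, driving the loop off its return value rather than feof(). The buffer size and file name are arbitrary:

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("test.txt", "rb");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    char buffer[256];
    size_t n;
    /* fread() returns the number of items actually read;
       it returns 0 at end of file (or on error). */
    while ((n = fread(buffer, 1, sizeof(buffer), fp)) > 0) {
        fwrite(buffer, 1, n, stdout);  /* process the n bytes read */
    }

    fclose(fp);
    return 0;
}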
http://www.antionline.com/showthread.php?239870-help-with-fread-in-C
CC-MAIN-2017-51
en
refinedweb
public class Solution {
    public int findMin(int[] nums) {
        int result = nums[0];
        for (int i = 0; i < nums.length - 1; i++) {
            if (nums[i] > nums[i + 1]) {
                result = nums[i + 1];
            }
        }
        return result;
    }
}

No, this is not the best solution. I just want a Java solution which is much faster, since binary search is not much faster than this solution.

@wansongsong.jack, I disagree with your argument. Binary search is and should be much faster than your method. The reason you don't see a difference in running time at the OJ is only because the test cases here are not tough enough, as I mentioned here.
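For comparison, the standard O(log n) binary search on a rotated sorted array looks roughly like this (a sketch of the well-known approach, not code from the thread):

public class BinarySearchSolution {
    public int findMin(int[] nums) {
        int lo = 0, hi = nums.length - 1;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (nums[mid] > nums[hi]) {
                // The minimum must be to the right of mid
                lo = mid + 1;
            } else {
                // The minimum is at mid or to its left
                hi = mid;
            }
        }
        return nums[lo];
    }
}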
https://discuss.leetcode.com/topic/16544/is-there-a-faster-solution
CC-MAIN-2017-51
en
refinedweb
After solving several "Game Playing" questions in LeetCode, I find them to be pretty similar. Most of them can be solved using the top-down DP approach, which "brute-forcely" simulates every possible state of the game. The key part of the top-down DP strategy is that we need to avoid repeatedly solving sub-problems. Instead, we should use some strategy to "remember" the outcome of sub-problems. Then when we see them again, we instantly know their result. By doing this, we can always reduce time complexity from exponential to polynomial. (EDIT: Thanks to @billbirdh for pointing out the mistake here. For this problem, by applying the memo, we compute each subproblem at most once, and there are O(2^n) subproblems, so the complexity is O(2^n) after memoization. Without the memo, the time complexity would be more like O(n!).)

For this question, the key part is: what is the state of the game? Intuitively, to uniquely determine the result of any state, we need to know:

- The unchosen numbers
- The remaining desiredTotal to reach

A second thought reveals that 1) and 2) are actually related, because we can always get 2) by deducting the sum of the chosen numbers from the original desiredTotal. Then the problem becomes how to describe the state using 1).

In my solution, I use a boolean array to denote which numbers have been chosen, and then a question comes to mind: if we want to use a HashMap to remember the outcome of sub-problems, can we just use Map<boolean[], Boolean>? Obviously we cannot, because if we use boolean[] as a key, the reference to the boolean[] won't reveal the actual content of the boolean[].

Since the problem statement says maxChoosableInteger will not be larger than 20, the length of our boolean[] array will be less than 20. Then we can use an Integer to represent this boolean[] array. How? Say the boolean[] is {false, false, true, true, false}; then we can transfer it to an Integer with binary representation 00110. Since Integer is a perfect choice for a HashMap key, we can now "memoize" the sub-problems using Map<Integer, Boolean>.

The rest of the solution is just simulating the game process using top-down DP.

public class Solution {
    Map<Integer, Boolean> map;
    boolean[] used;

    public boolean canIWin(int maxChoosableInteger, int desiredTotal) {
        int sum = (1 + maxChoosableInteger) * maxChoosableInteger / 2;
        if (sum < desiredTotal) return false;
        if (desiredTotal <= 0) return true;

        map = new HashMap();
        used = new boolean[maxChoosableInteger + 1];
        return helper(desiredTotal);
    }

    public boolean helper(int desiredTotal) {
        if (desiredTotal <= 0) return false;
        int key = format(used);
        if (!map.containsKey(key)) {
            // try every unchosen number as next step
            for (int i = 1; i < used.length; i++) {
                if (!used[i]) {
                    used[i] = true;
                    // check whether this leads to a win (i.e. the other player loses)
                    if (!helper(desiredTotal - i)) {
                        map.put(key, true);
                        used[i] = false;
                        return true;
                    }
                    used[i] = false;
                }
            }
            map.put(key, false);
        }
        return map.get(key);
    }

    // transfer boolean[] to an Integer
    public int format(boolean[] used) {
        int num = 0;
        for (boolean b : used) {
            num <<= 1;
            if (b) num |= 1;
        }
        return num;
    }
}

Updated: Thanks to @ckcz123 for sharing the great idea. In Java, an easier way to denote a boolean[] is to use Arrays.toString(boolean[]), which will transfer a boolean[] to something like "[true, false, false, ....]". This is also not limited by how maxChoosableInteger is set, so it can be generalized to an arbitrarily large maxChoosableInteger.

Brilliant solution! I think using Arrays.toString() is better.
Here is my code:

public class Solution {
    public boolean canIWin(int maxChoosableInteger, int desiredTotal) {
        if (desiredTotal <= 0) return true;
        if (maxChoosableInteger * (maxChoosableInteger + 1) / 2 < desiredTotal) return false;
        return canIWin(desiredTotal, new int[maxChoosableInteger], new HashMap<>());
    }

    private boolean canIWin(int total, int[] state, HashMap<String, Boolean> hashMap) {
        String curr = Arrays.toString(state);
        if (hashMap.containsKey(curr)) return hashMap.get(curr);
        for (int i = 0; i < state.length; i++) {
            if (state[i] == 0) {
                state[i] = 1;
                if (total <= i + 1 || !canIWin(total - (i + 1), state, hashMap)) {
                    hashMap.put(curr, true);
                    state[i] = 0;
                    return true;
                }
                state[i] = 0;
            }
        }
        hashMap.put(curr, false);
        return false;
    }
}

Or, using int is enough.

public class Solution {
    public boolean canIWin(int maxChoosableInteger, int desiredTotal) {
        if (desiredTotal <= 0) return true;
        if (maxChoosableInteger * (maxChoosableInteger + 1) / 2 < desiredTotal) return false;
        return canIWin(desiredTotal, maxChoosableInteger, 0, new HashMap<>());
    }

    private boolean canIWin(int total, int n, int state, HashMap<Integer, Boolean> hashMap) {
        if (hashMap.containsKey(state)) return hashMap.get(state);
        for (int i = 0; i < n; i++) {
            if ((state & (1 << i)) != 0) continue;
            if (total <= i + 1 || !canIWin(total - (i + 1), n, state | (1 << i), hashMap)) {
                hashMap.put(state, true);
                return true;
            }
        }
        hashMap.put(state, false);
        return false;
    }
}

Very smart and detailed explanation. I have one quick question regarding the memoization. If we cannot use Map<boolean[], Boolean> because of the shallow copy like you said, can we simply use Map<Set<Integer>, Boolean>? The Set<Integer> is the set of chosen numbers. Thank you so so much.

@LeoM58 After some research, I think your idea is feasible, because the hashCode for a Set<Object> is the sum of the hash codes of its objects, and in this case it can uniquely determine a hash set. Here is a small example:

Map<Set<Integer>, Integer> map = new HashMap<>();
Set<Integer> set1 = new HashSet<>();
Set<Integer> set2 = new HashSet<>();
set1.add(2);
set1.add(3);
map.put(set1, 1); // put set1 into map
set2.add(2);
set2.add(3);
System.out.print(map.get(set1)); // 1
System.out.print(map.get(set2)); // 1

Thank you for your solution, can you please explain the logic of the helper function? Or point out the invariant?

@Rhodey said in Java solution using HashMap with detailed explanation:

Thank you for your solution, can you please explain the logic of the helper function? Or point out the invariant?

Sure. First, this helper function has a parameter desiredTotal, and it determines whether a player who plays first with such a desiredTotal can win. Then it comes to how to decide whether s/he can win. The strategy is to simulate every possible state. E.g., we let this player choose any unchosen number as the next step and see whether this leads to a win. If it does, then this player can guarantee a win by choosing this number. If we find that whatever number s/he chooses, s/he won't win the game, then we know that s/he is guaranteed to lose given such a state. See the explanations below:

// try every unchosen number as next step
for (int i = 1; i < used.length; i++) {
    if (!used[i]) {
        used[i] = true;
        // check whether this leads to a win, which means helper(desiredTotal-i) must return false (the other player loses)
        if (!helper(desiredTotal - i)) {
            map.put(key, true);
            used[i] = false;
            return true;
        }
        used[i] = false;
    }
}
map.put(key, false);

@leogogogo Very detailed and understandable answer! Thank you so much!

@leogogogo Thank you so much for your time and help. It means a lot.
I tried the set idea and it is too slow. I guess it's because of the frequent copying of sets and space issues. Inspired by your idea and code, I finished my version. Thank you again.

public class Solution {
    int n;

    public boolean canIWin(int newN, int target) {
        n = newN;
        if (target > n * (n + 1) / 2) {
            return false;
        }
        Map<Integer, Boolean> memo = new HashMap<>();
        return helper(0, memo, target);
    }

    private boolean helper(int visiting, Map<Integer, Boolean> memo, int target) {
        if (memo.get(visiting) != null) {
            return memo.get(visiting);
        }
        for (int i = n; i >= 1; i--) {
            int choice = 1 << i;
            if ((visiting & choice) == 0) {
                if (i >= target) {
                    memo.put(visiting, true);
                    return true;
                }
                visiting += choice;
                boolean nextWinner = helper(visiting, memo, target - i);
                visiting -= choice;
                if (!nextWinner) {
                    memo.put(visiting, true);
                    return true;
                }
            }
        }
        memo.put(visiting, false);
        return false;
    }
}

@leogogogo Thanks again for your timely reply. Can you teach me how to post code? I typed ``` and copied all my code there, but it does not seem to align automatically.

@LeoM58 Yeah, sometimes the code doesn't align automatically, so I just manually add some spaces for better appearance.

Hi @leogogogo, thank you for your post. May I ask what other "game playing" problems in LC can also be solved by top-down DP? I want to do more practice. Thank you!

@wondershow Like Flip Game II and Burst Balloons — they can all be solved with top-down DP.

@ckcz123 you can just use int state as the key, absolutely faster and shorter.

public class Solution {
    public boolean canIWin(int maxChoosableInteger, int desiredTotal) {
        if (desiredTotal <= 0) return true;
        if (maxChoosableInteger * (maxChoosableInteger + 1) / 2 < desiredTotal) return false;
        return canIWin(maxChoosableInteger, desiredTotal, 0, new HashMap<>());
    }

    private boolean canIWin(int length, int total, int state, HashMap<Integer, Boolean> hashMap) {
        if (hashMap.containsKey(state)) return hashMap.get(state);
        for (int i = 0; i < length; i++) {
            if ((1 << i & state) == 0) {
                if (total <= i + 1 || !canIWin(length, total - (i + 1), 1 << i | state, hashMap)) {
                    hashMap.put(state, true);
                    return true;
                }
            }
        }
        hashMap.put(state, false);
        return false;
    }
}

Hi, in your post you mentioned that the time complexity will be transformed from "exponential to polynomial." My thought is that there are still 2^n possible boolean[] used arrays and 2^n possible keys in the hashmap. Could you explain a little bit more about the polynomial time complexity? Thanks!

@leogogogo Hi, in the p("leet") problem, there are 4 possible subproblems: p("eet"), p("et"), p("t") and p(""). So using the memo scheme lowers the bound to polynomial (or proportional to the length of the given string). However, in this problem there are possibly 2^n boolean[] used states, so the subproblem count scales as 2^n here. The memo process lowers the cost of every subproblem to exactly O(1), so altogether it should be O(2^n). Still, the memo process lowers the complexity compared to brute force, which would cost more.

@billbirdh Well, it seems that you are right, because there are O(2^n) combinations, and by using the memo we only calculate each combination at most once. I will update my answer, thanks.

@billbirdh By the way, without the memo, what is the time complexity?
https://discuss.leetcode.com/topic/68896/java-solution-using-hashmap-with-detailed-explanation
CC-MAIN-2017-51
en
refinedweb
There are a few solutions using BSTs with worst case time complexity O(n*k), but we know k can become large. I wanted to come up with a solution that is guaranteed to run in O(n*log(n)) time. This is, in my opinion, the best solution so far. The idea is inspired by solutions to Find Median from Data Stream: use two heaps to store the numbers in the sliding window. However, there is the issue of numbers moving out of the window, and it turns out that a hash table that records these numbers just works (and is surprisingly neat). The recorded numbers are only deleted when they come to the top of the heaps.

class Solution {
public:
    vector<double> medianSlidingWindow(vector<int>& nums, int k) {
        vector<double> medians;
        unordered_map<int, int> hash;                          // count numbers to be deleted
        priority_queue<int, vector<int>> bheap;                // heap on the bottom
        priority_queue<int, vector<int>, greater<int>> theap;  // heap on the top
        int i = 0;

        // Initialize the heaps
        while (i < k) {
            bheap.push(nums[i++]);
        }
        for (int count = k / 2; count > 0; --count) {
            theap.push(bheap.top());
            bheap.pop();
        }

        while (true) {
            // Get median
            if (k % 2) medians.push_back(bheap.top());
            else medians.push_back(((double)bheap.top() + theap.top()) / 2);

            if (i == nums.size()) break;
            int m = nums[i - k], n = nums[i++], balance = 0;

            // What happens to the number m that is moving out of the window
            if (m <= bheap.top()) {
                --balance;
                if (m == bheap.top()) bheap.pop();
                else ++hash[m];
            } else {
                ++balance;
                if (m == theap.top()) theap.pop();
                else ++hash[m];
            }

            // Insert the new number n that enters the window
            if (!bheap.empty() && n <= bheap.top()) {
                ++balance;
                bheap.push(n);
            } else {
                --balance;
                theap.push(n);
            }

            // Rebalance the bottom and top heaps
            if (balance < 0) {
                bheap.push(theap.top());
                theap.pop();
            } else if (balance > 0) {
                theap.push(bheap.top());
                bheap.pop();
            }

            // Remove numbers that should be discarded at the top of the two heaps
            while (!bheap.empty() && hash[bheap.top()]) {
                --hash[bheap.top()];
                bheap.pop();
            }
            while (!theap.empty() && hash[theap.top()]) {
                --hash[theap.top()];
                theap.pop();
            }
        }
        return medians;
    }
};

Since both heaps will never have a size greater than n, the time complexity is O(n*log(n)) in the worst case.

My Python version of the above code:

import collections
from heapq import heappush, heappop, heapify

class Solution(object):
    def medianSlidingWindow(self, nums, k):
        '''Similar to the median stream problem, we maintain 2 heaps which
        represent the top and bottom halves of the window. Since deletion
        from a heap is an O(1) operation, we perform it lazily. At any time,
        if a number leaves a window, we delete it if it is at the top of the
        heap. Else, we stage it for deletion, but alter the count of this
        half of the array. When this element eventually comes to the top of
        the heap at a later instance, we perform the staged deletions.
        '''
        to_be_deleted, res = collections.defaultdict(int), []
        top_half, bottom_half = nums[:k], []

        # We first begin by heapifying the first k-window
        heapify(top_half)

        # Balancing the top and bottom halves of the k-window
        while len(top_half) - len(bottom_half) > 1:
            heappush(bottom_half, -heappop(top_half))

        for i in xrange(k, len(nums) + 1):
            median = top_half[0] if k % 2 else 0.5 * (top_half[0] - bottom_half[0])
            res.append(median)
            if i < len(nums):
                num, num_to_be_deleted = nums[i], nums[i - k]
                top_bottom_balance = 0
                # top_bottom_balance = len(top_half) - len(bottom_half)

                # If the number to be deleted is in the top half, we decrement the top_bottom_balance
                if num_to_be_deleted >= top_half[0]:
                    top_bottom_balance -= 1
                    # If the number to be deleted is at the top of the heap, we remove the entry
                    if num_to_be_deleted == top_half[0]:
                        heappop(top_half)
                    # Else, we keep track of this number for later deletion
                    else:
                        to_be_deleted[num_to_be_deleted] += 1
                else:
                    top_bottom_balance += 1
                    if num_to_be_deleted == -bottom_half[0]:
                        heappop(bottom_half)
                    else:
                        to_be_deleted[num_to_be_deleted] += 1

                # If the new number to be inserted falls into the top half,
                # we insert it there and update the top_bottom_balance
                if top_half and num >= top_half[0]:
                    top_bottom_balance += 1
                    heappush(top_half, num)
                else:
                    top_bottom_balance -= 1
                    heappush(bottom_half, -num)

                # top_bottom_balance can only be -2, 0 or +2.
                # If top_bottom_balance is -2, then we deleted num_to_be_deleted
                # from the top half AND added the new number to the bottom half.
                # We hence add the head of the bottom half to the top half to
                # balance both trees (and vice versa).
                if top_bottom_balance > 0:
                    heappush(bottom_half, -heappop(top_half))
                elif top_bottom_balance < 0:
                    heappush(top_half, -heappop(bottom_half))

                # While the head of the top_half has been staged for deletion
                # previously, remove it from the heap
                while top_half and to_be_deleted[top_half[0]]:
                    to_be_deleted[top_half[0]] -= 1
                    heappop(top_half)
                while bottom_half and to_be_deleted[-bottom_half[0]]:
                    to_be_deleted[-bottom_half[0]] -= 1
                    heappop(bottom_half)

        return map(float, res)

Your solution is great!!! But....

if (balance < 0) { bheap.push(theap.top()); theap.pop(); }
else if (balance > 0) { theap.push(bheap.top()); bheap.pop(); }

What if theap.top() or bheap.top() is not available? I cannot figure it out...

@BURIBURI balance is changed twice in the code; each time it is either ++balance or --balance. If, say, balance < 0 after the two changes, then both must have been --balance, so something has been pushed to theap on the line:

else { --balance; theap.push(n); }

This took me a while to figure out!

@ipt Got it! If we have --balance and m == bheap.top(), so bheap.pop(), and finally balance < 0, we will not visit bheap.top() at all, so no need to worry about whether it is available! Thank you very much!

Do you need to rebalance the two heaps after removing the numbers that should be discarded at the top of the two heaps? I use a while loop to rebalance and remove top numbers until there are no top numbers that need to be removed after rebalancing, but it does not give the correct result..

I have a question: why do you remove numbers that should be discarded after rebalancing? How do we guarantee it is still balanced when doing medians.push_back?

@VincentSatou Balancing here only cares about the numbers that are in the window, not those that are discarded.
https://discuss.leetcode.com/topic/74679/o-n-log-n-time-c-solution-using-two-heaps-and-a-hash-table
CC-MAIN-2017-51
en
refinedweb
Hello, I'm getting an odd error on some very simple code. I have an swf that contains a movieclip with the linkage name 'Box'. The movieclip has an animation of 30 frames, yet Flash Builder keeps erroring, saying it's not a movieclip!

public class Main extends Sprite {
    [Embed(source="../assets/box.swf", symbol="Box")]
    public static const A_Box:Class;

    public function Main() {
        var box:MovieClip = new A_Box();
    }
}

#1034: Type Coercion failed: cannot convert box_swf$161b93e3bc30cfa0cb18e1d734943c6f-1063626660@53720a1 to flash.display.MovieClip.

If I try to bring it in as a Sprite, it works just fine (var box:Sprite = new A_Box();), however then it does not animate, as it's being treated as a sprite. I've looked at around 20 online examples, the code is very simple, and I can't seem to find anything wrong with it. I've also double checked 300 times that the movieclip is set up properly inside the swf (linkage name, multiple frames, export for actionscript frame 1, etc). So... why does only Sprite work and MovieClip doesn't? Am I doing something wrong, or is this maybe a Flash Builder/AIR 13.0 problem? Thanks!

I opened my project again today to find out that it suddenly works perfectly... Something must have been cached, because I struggled with it for 6 hours yesterday, and now without changing anything it works right off the bat. I find there are so many caching issues like this in FB. Sometimes any changes I make are not reflected in the project at all (including deleting main classes) until I completely restart FB. Really wish Adobe would get their act together and release a patch to fix some of the many bugs.
https://forums.adobe.com/thread/1442309
CC-MAIN-2017-51
en
refinedweb
The program below is what I have. It compiles and runs, but the problem is: when it runs, it asks "Enter the number of employees", which is fine, and then it asks "Enter days missed", which is also fine, but instead of then calculating the average days missed, it keeps repeating the "Enter days missed" message. What do I do from here??? Please help ASAP.

#include <iostream>
#include <stdlib.h>
#include <math.h>

using namespace std;

int getnumemployees();
int getdaysmissed(int num_employees);
double getdaysmissedavg(int num_employees, int days_missed);

int main()
{
    int number_of_employees = 0;
    int totaldaysmissed = 0;
    double average_days_missed = 0;

    number_of_employees = getnumemployees();
    totaldaysmissed = getdaysmissed(number_of_employees);
    average_days_missed = getdaysmissedavg(number_of_employees, totaldaysmissed);
    return 0;
}

int getnumemployees()
{
    int numemployees = 0;
    cout << "Enter the number of employees: " << endl;
    cin >> numemployees;
    return numemployees;
}

int getdaysmissed(int num_employees)
{
    int daysmissed = 0;
    int totaldaysmissed = 0;
    while (num_employees >= 1)
    {
        cout << "Enter days missed: " << endl;
        cin >> daysmissed;
        totaldaysmissed += daysmissed;
        num_employees--;
    }
    return daysmissed;
}

double getdaysmissedavg(int num_employees, int days_missed)
{
    int numemployees = 0;
    int daysmissed = 0;
    double daysmissedavg = 0;
    daysmissedavg = numemployees / daysmissed;
    cout << "Calculate the average number of days absent " << daysmissedavg;
    system("pause");
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/134539/c-employee-average-days-missed
CC-MAIN-2017-51
en
refinedweb
I'm having trouble inserting a bubble sort into my program; any help would be appreciated.

#include <iostream>
#include <string>

using namespace std;

int main()
{
    string food[100];
    string lookup;
    int calories[100];
    int x = -1;

    do {
        x++;
        cout << "Enter a menu item (enter 'done' when finished): ";
        getline(cin, food[x]);
        if (food[x] != "done") {
            cout << "Enter the number of calories: ";
            cin >> calories[x];
            cin.ignore();
        }
    } while (food[x] != "done");

    do {
        bool found = false;
        cout << "Enter a product to look up: ";
        getline(cin, lookup);
        for (int y = 0; y < x; y++)
            if (lookup == food[y]) {
                cout << food[y] << " has " << calories[y] << " calories." << endl;
                found = true;
            }
        if ((found == false) && (lookup != "done")) {
            cout << lookup << " was not found." << endl;
        }
    } while (lookup != "done");
}
https://www.daniweb.com/programming/software-development/threads/161210/bubble-sort-issues
CC-MAIN-2017-51
en
refinedweb
Transport Interface Diagram

A copy of this UML diagram will be made available in the next source distribution.

Transport Namespace

This namespace's focus is providing developers with an extensible, robust design for the Acquire-Modify-Persist pattern. Traditionally, this pattern is used to propagate data between data stores, but the intention here is to have the pattern open-ended and flexible enough that anything can be considered an end-point.

Interfaces

IConnector

The connector interface defines the contract that an end-point must implement in order to work within this pattern. As long as the implementation provides the required functionality, anything can become an end-point for the pattern. The most obvious class inheriting this interface will be the DatabaseConnector, which will make use of the DataInterface class provided in the Nvigorate.Data namespace. Notice that the connector is not required to actually contain all the source needed to read and write data to the store; it can just be a simple wrapper around a pre-existing interface.

IFormat

IFormat plays a very critical role in this pattern because it not only defines a particular format which the data may exist in at any given time, but it also facilitates the translation between itself and other formats. The current idea is to perform this translation by having all formats capable of presenting their data in Xml. This way, each format can translate itself to and from Xml, making all formats interchangeable. I'm currently exploring alternatives to this approach simply because working with Xml as a universal format could create performance problems.

IAutomaton

IAutomaton is a generalized way to describe an object capable of automating the runtime configuration of another object. In this specific example, the suggested use is to have configuration files which the IAutomaton instance reads and then applies to the targeted object (in this case, the IConnector instance). This allows for things such as switching data stores without having to recompile, something that can not only make configuration far easier, but also make for a much more robust system.

IProcess

In order to allow the developer a simple way to have complete control over the processing which takes place on the data as it's passed through the IChannel instance, IProcess gives a unified approach to creating small, abstract processes which perform a set of steps against the data provided and pass the results back. By putting IProcess implementations into IChannel's process pipeline, the developer can change or adapt the pipeline during runtime.

IChannel

IChannel is the interface that calls are actually made against. Applications making use of IChannel implementations will act against configurable channels designed to process and move the data between end-points. IChannels tie all the previous interfaces together and make the entire thing work as one contiguous, but highly flexible, piece.
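To make the contracts concrete, here is a hedged C# sketch of how these interfaces might look. Every name and signature below is my own guess at the design being described, not actual Nvigorate code:

using System.Xml;

// Sketch of the Acquire-Modify-Persist contracts described above.
// All signatures are illustrative assumptions.
public interface IFormat
{
    // Every format can round-trip through XML, making formats interchangeable.
    XmlDocument ToXml();
    void FromXml(XmlDocument document);
}

public interface IConnector
{
    // An end-point that can produce and accept data in some format.
    IFormat Read();
    void Write(IFormat data);
}

public interface IProcess
{
    // A small, swappable processing step in the channel's pipeline.
    IFormat Execute(IFormat input);
}

public interface IChannel
{
    // Moves data between end-points, running it through the process pipeline.
    void AddProcess(IProcess process);
    void Transfer(IConnector source, IConnector destination);
}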
http://nvigorate.wikidot.com/transport
CC-MAIN-2017-51
en
refinedweb
Twice a month, we revisit some of our readers' favorite posts from throughout the history of Nettuts+. This tutorial was first published in January, 2010.

Give me an hour of your time, and I'll take you on a fly by of the Ruby on Rails framework. We'll create controllers, models, views, add admin logins, and deploy using Heroku's service in under an hour! In this article we'll create a simple bookshelf application where you can add books and write thoughts about them. Then we'll deploy the application in just a few minutes. So buckle up, because this article moves fast!

This article assumes that you may know what Ruby on Rails is, but not exactly how it works. This article doesn't describe in depth how each step works, but it does describe what we need to do, then the code to do that.

Zero

Ruby on Rails is a full stack MVC web application framework. Full stack means you get everything: a simple web server you can use to test your apps, a database layer, a testing framework, and an MVC based design. MVC stands for Model-View-Controller.

Model

A model stores information. Models are stored in the database. Rails supports MySQL, PostgreSQL, or SQLite. Each model has its own class and table. Say we want to model a "game." A game has things like number of players, a start time, end time, teams playing, and a winner. These attributes become columns in the "games" table. Table names are lowercase, underscored, and pluralized. The model's class name is Game. In Rails you create models through migrations and generators. A migration describes how to add/remove columns and tables from the database.

Controller

A controller is the manager. It takes information and does some logic like CRUD, or maybe imports some stuff from a file, adds/removes permissions--you name it, a controller can do it. Controllers are the part of your app that does. How do we call controllers? Rails uses routes. A route is a formatted url that is tied to an action with a set of parameters. Going back to the Game model, at some point we'll need to list all the games in the system. A basic REST url for this route looks like "/games". How does Rails know what controller to look for and what action to call? It looks at your routes.rb file. You may have a route that looks like this: "GET /games" {:name => 'games', :action => 'index'}. This translates to GamesController and its index method. Just like models, class names are CamelCase and file names are underscored. So our GamesController would be stored in /app/controllers/games_controller.rb. After the logic, the controller renders a view.

View

A view is the easiest part to understand. It's what you see. It's the HTML you generate to show the user what they need. Views are ERB templates. ERB stands for Embedded Ruby. You use ERB similar to how you embed php into a document. If you want to insert an instance variable @game.time into some html, write <%= @game.time %>

Ten

First install Rails. Installing Rails is very easy depending on your platform. If you are on Linux/OSX, it's no problem. Windows is more complicated and I have no experience with it. This section will give you a brief overview of installing Rails through RubyGems, the Ruby package manager. A gem is a bundle of ruby code in a package that can be used in your programs.
This process will go something like this:

# ubuntu
sudo apt-get install rubygems
# fedora
sudo yum install rubygems
# gentoo
sudo emerge rubygems
# OSX
sudo port install rubygems

# after you have rubygems installed
sudo gem install gemcutter # ruby gem hosting service
sudo gem tumble
sudo gem install rails

Here are some links to help you through the setup process:

- Instant Rails, like instant LAMP but for Rails
- Ubuntu Community Docs about Ruby on Rails
- Getting Rails running on Windows
- Snow Leopard on Rails by a guy I know
- The mandatory google link

Once you can run the "rails" command you're ready for the next step.

Fifteen

Now it's time to install database support before we get started. Rails has support for all the popular DBs, but for this example we'll use SQLite because it's lightweight. Depending on your platform (again), installing sqlite support can be easy or painful. It can be a pain since the gem has to be built against C extensions, which means the sqlite3 package has to be installed on your system. Again, the process will go something like this:

# ubuntu
sudo apt-get install sqlite3 sqlite3-devel
# fedora
sudo yum install sqlite3 sqlite3-devel
# OSX
sudo port install sqlite3

# then once you have the sqlite3 package installed
sudo gem install sqlite3-ruby

Read the previous links if you're having problems with these steps. They describe installing sqlite as well.

Twenty

Time to generate our app. The rails command creates a base application structure. All we need to do is be in a directory and run it like so:

$ cd ~/projects
$ rails bookshelf # this will create a new directory named bookshelf that holds our app
$ cd bookshelf

It's important to note that the Rails default is an SQLite based app. You may be thinking, what if I don't want that? The rails command is a generator. All it does is copy stored files into a new directory. By default it will create sqlite3 databases in /bookshelf/db/development.sqlite3, /bookshelf/db/production.sqlite3, and /bookshelf/db/testing.sqlite3. Database connection information is stored in /bookshelf/config/database.yml. You don't need to edit this file since it contains default information for an sqlite setup. It should look like this:

# SQLite version 3.x
# gem install sqlite3-ruby (not necessary on OS X Leopard)

Notice there are different environments assigned. Rails has three modes: Development, Testing, and Production. Each has different settings and databases. Development is the default environment. At this point we can start our app to make sure it's working. You can see there's a directory called /script. This directory contains ruby scripts for interacting with our application. Some important ones are /script/console and /script/server. We will use the /script/server command to start a simple server for our application:

bookshelf $ ./script/server

Rails will start a different server depending on your platform, but you should see something like this:

=> Booting Mongrel
=> Rails 2.3.5 application starting on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server

Time to visit the application. Point your browser to http://localhost:3000 and you should see the splash page: You're riding on Rails. Now that the code is working on a basic level, it's time to delete the splash page and get started with some code.

bookshelf $ rm public/index.html
Twenty Five

Our application needs data. Remember what this means? It means models. Great, but how do we generate a model? Rails comes with generators for common tasks. The generator is the file /script/generate. The generator will create our model.rb file along with a migration to add the table to the database. A migration file contains code to add/drop tables, or alter/add/remove columns from tables. Migrations are executed in sequence to create the tables. Run migrations (and various other commands) with "rake". Rake is a ruby code runner.

Before we get any further, let's start by defining some basic information for the books. A book has these attributes:

- Title : String
- Thoughts : Text

That's enough to start the application. Start by generating a model with these fields using the model generator:

bookshelf $ ./script/generate model Book title:string thoughts:text
# notice how the attributes/types are passed to the generator. This will automatically create a migration for these attributes.
# They are optional; if you leave them out, the generator will create an empty migration.
exists app/models/
exists test/unit/
exists test/fixtures/
create app/models/book.rb
create test/unit/book_test.rb
create test/fixtures/books.yml
create db/migrate
create db/migrate/20091202052507_create_books.rb

The generator created all the files we need to get our model up and running. We need to pay the most attention to these files:

# app/models/book.rb - where our code resides
# db/migrate/20091202052507_create_books.rb - code to create our books table

Open up the migration file:

class CreateBooks < ActiveRecord::Migration
  def self.up
    create_table :books do |t|
      t.string :title
      t.text :thoughts
      t.timestamps
    end
  end

  def self.down
    drop_table :books
  end
end

Notice the create_table :books block. This is where columns are created. An id primary key is created automatically. t.timestamps adds columns for created_at and updated_at. Now, run the migration using the rake task db:migrate. db:migrate applies pending migrations:

bookshelf $ rake db:migrate
== CreateBooks: migrating ====================================================
-- create_table(:books)
   -> 0.0037s
== CreateBooks: migrated (0.0038s) ===========================================

Cool, now we have a table. Let's create a dummy book just for kicks in the console. The Rails console uses IRB (interactive ruby) and loads all classes for your project, i.e. you have access to all your models. Open the console like this:

bookshelf $ ./script/console
>> # let's create a new model. You can specify a hash of assignments in the constructor to assign values like this:
>> book = Book.new :title => 'Rails is awesome!', :thoughts => 'Some sentence from a super long paragraph'
=> #<Book id: nil, title: "Rails is awesome!", thoughts: "Some sentence from a super long paragraph", created_at: nil, updated_at: nil>
>> # and ruby will display it back
>> book.save
=> true
>> # now our book is saved in the database. We can query it like this:
>> Book.all # find all books and return them in an array
=> [#<Book id: 1, title: "Rails is awesome!", thoughts: "Some sentence from a super long paragraph", created_at: "2009-12-02 06:05:38", updated_at: "2009-12-02 06:05:38">]
>> exit # now that our model is saved, let's exit the console

Now that we can create books, we need some way to show them to the user.
Thirty

Remember controllers? We need a controller to display all the books in the system. This scenario corresponds to the index action in our BooksController (books_controller.rb), which we don't have yet. Just like generating models, use a generator to create the controller:

bookshelf $ ./script/generate controller Books
exists app/controllers/
exists app/helpers/
create app/views/books
exists test/functional/
create test/unit/helpers/
create app/controllers/books_controller.rb
create test/functional/books_controller_test.rb
create app/helpers/books_helper.rb
create test/unit/helpers/books_helper_test.rb

Notice Rails created the file app/controllers/books_controller.rb? This is where we are going to define our actions, or methods, for the BooksController class. We need to define an action that finds and displays all books. How did we find all the books? Earlier we used Book.all. Our strategy is to use Book.all and assign it to an instance variable. Why an instance variable? We assign instance variables because views are rendered with the controller's binding. You're probably thinking bindings and instance variables...what's going on? Views have access to variables defined in actions, but only instance variables. Why? Because instance variables are scoped to the object and not the action. Let's see some code:

class BooksController < ApplicationController
  # notice we've defined a method called index for a BooksController instance. We tie this together with routes
  def index
    @books = Book.all # instance variables are prefixed with an @. If we said books = Book.all, we wouldn't be able to access books in the template
  end
end

Now the controller can find all the books. But how do we tie this to a url? We have to create some routes. Rails comes with some handy functions for generating RESTful routes (another Rails design principle). This will generate urls like /books and /books/1, combined with HTTP verbs, to determine what method to call in our controller. Use map.resources to create RESTful routes. Open up /config/routes.rb and change it to this:

ActionController::Routing::Routes.draw do |map|
  map.resources :books
end

routes.rb can look arcane to new users. Luckily there is a way to decipher this mess: a routes rake task that displays all your routing information. Run that now and take a peek inside:

bookshelf $ rake routes
# as you can see this command can display a lot of information. On the left column we have a helper to generate a url, then the HTTP verb associated with the url, then the url, and finally the controller and action to call.
# for example, GET /books will call BooksController#index, or
# GET /books/1 will call BooksController#show
# the url helpers are very important but we'll get to them later. For now, know that we are going to create a /books page to list all books.

Now we need to create a template to display all our books. Create a new file called /app/views/books/index.html.erb and paste this:

<% for book in @books do %>
  <h2><%=h book.title %></h2>
  <p><%= book.thoughts %></p>
<% end %>

This simple view loops over all @books and displays some HTML for each book. Notice a subtle difference: <%= is used when we need to output some text; <% is used when we aren't. If you don't follow this rule, you'll get an exception. Also notice the h before book.title. h is a method that escapes HTML entities. If you're not familiar with ruby, you can leave off ()'s on method calls if they're not needed. h text translates to: h(text). Time to run the server and see what we've got. Start the server, then go to /books.

bookshelf $ ./script/server

If all goes according to plan, you should see some basic HTML.
Thirty Five

We have one book in our system, but we need some more books to play with. There are a few ways to go about doing this. I like the forgery gem. Forgery can create random strings like names, or lorem ipsum stuff. We are going to set a gem dependency in our app, install the gem, then use it to create a rake task to populate our data. Step 1: open up /config/environment.rb and add this line:

    config.gem 'forgery'

    # now let's tell Rails to install all gem dependencies for this project
    # gem install gemcutter  # if you haven't already
    # gem tumble             # again, if you haven't already
    bookshelf $ sudo rake gems:install

Now we're going to use the Forgery classes to create some fake data. We'll use the LoremIpsum forgery to create some basic data (see the Forgery documentation for the rest). We can define our own rake tasks by creating a .rake file in /lib/tasks. So create a new file /lib/tasks/populate.rake:

    begin
      namespace :db do
        desc "Populate the development database with some fake data"
        task :populate => :environment do
          5.times do
            Book.create! :title => Forgery::LoremIpsum.sentence,
                         :thoughts => Forgery::LoremIpsum.paragraphs(3)
          end
        end
      end
    rescue LoadError
      puts "Please run: sudo gem install forgery"
    end

This rake task will create five fake books. Notice I added a begin/rescue. When you run a rake task, rake looks at all possible tasks during its initialization, so if you were to run any rake task before you installed the gem, rake would blow up. Wrapping it in a begin/rescue stops rake from aborting. Run the task to populate our db:

    bookshelf $ rake db:populate
    bookshelf $ ./script/console # let's enter the console to verify it all worked
    >> Book.all
    => [#<Book id: 1, title: "Rails is awesome!", thoughts: "Some sentence from a super long paragraph", created_at: "2009-12-02 06:05:38", updated_at: "2009-12-02 06:05:38">, #<Book id: 2, ...>, ...]
    >> exit

Start the server again and head back to the /books page. Now we have a listing of more than one book. What if we have a lot of books? We need to paginate the results. There's another gem for this: 'will_paginate'. Following the same steps as before, let's add a gem dependency for 'will_paginate' and rake gems:install:

    # in environment.rb
    config.gem 'will_paginate'

    # from terminal
    bookshelf $ sudo rake gems:install

    # then let's add more books to our db
    bookshelf $ rake db:populate
    # run this a few times to get a large sample, or change the number in the rake file

Head back to your /books page and you should be bombarded by books at this point. It's time to add pagination. Pagination works at two levels: the controller decides which books should be in @books, and the view displays the pagination links. The will_paginate helper makes this very easy. We'll use the .paginate method and the will_paginate view helper to render page links. All it takes is two lines of code:

    # books_controller.rb, change the previous line to:
    @books = Book.paginate :page => params[:page], :per_page => 10

    # index.html.erb, add this line after the loop:
    <%= will_paginate @books %>

Head back to your /books page and you should see some pagination links (given you have more than 10 books).
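Incidentally, rather than re-running rake db:populate over and over, you could let the task take a count from the environment. This is a common rake idiom, sketched here with the same Forgery calls as above; the COUNT variable is our own invention, not part of the original task:

    # /lib/tasks/populate.rake — parameterized variant
    task :populate => :environment do
      count = (ENV['COUNT'] || 5).to_i
      count.times do
        Book.create! :title => Forgery::LoremIpsum.sentence,
                     :thoughts => Forgery::LoremIpsum.paragraphs(3)
      end
    end

    # usage: bookshelf $ rake db:populate COUNT=50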
Sweet! We are movin' through this app. It's time to spruce up our page a little bit. One key Rails principle is DRY (Don't Repeat Yourself). We could work through the exercise of doing some basic CSS to get a page looking OK, or we could keep things DRY and use some code to do it for us. We are going to use Ryan Bates' nifty-generators gem to generate a layout for the site. A layout is a template your views can fill in. For example, we can use a layout to determine the overall structure of a page, then define where the views fill it in. Since this isn't a project dependency, we don't have to add it to environment.rb. We can just install it regularly:

    # console
    $ sudo gem install nifty-generators

Run the generator to create a layout file and stylesheets:

    $ ./script/generate nifty_layout
          exists  app/views/layouts
          exists  public/stylesheets
          exists  app/helpers
          create  app/views/layouts/application.html.erb # this is our layout file
          create  public/stylesheets/application.css     # css file that styles the layout
          create  app/helpers/layout_helper.rb           # view helpers needed in the generator

Take a look at the application.html.erb file and see what's inside (only the tail of the file is shown here):

    <!-- ... head, title and container markup with yield calls ... -->
      </div>
    </body>
    </html>

See those yields? That's where a view fills in the layout. The last yield has no argument; default content goes there. Yields with an argument must have content defined using content_for. Fix up the index.html.erb view to go with the new layout:

    <% title 'My Books' %>

    <% for book in @books do %>
      <h2><%=h book.title %></h2>
      <p><%= book.thoughts %></p>
    <% end %>
    <%= will_paginate @books %>

All we did was add the title method, which sets the title for a page. The title helper calls content_for :title and sets it to the argument; our view fills in the last yield. Check out the results!

Forty

Now that our application is looking better, let's add some interaction. In typical Web 2.0 style we're going to allow users to comment on our content, but we aren't going to require the user to register. We need to create a new model called Comment. A comment is going to have some text, an author, and an associated Book. How do we link these two models together? Associations. Rails provides these associations: belongs_to, has_many, has_one, and has_and_belongs_to_many. It should be easy to see that a book has_many comments, and a comment belongs_to a book. So we'll use a generator to create the comment model and migration:

    $ ./script/generate model Comment text:text author:string
          exists  app/models/
          exists  test/unit/
          exists  test/fixtures/
          create  app/models/comment.rb
          create  test/unit/comment_test.rb
          create  test/fixtures/comments.yml
          exists  db/migrate
          create  db/migrate/20091202081421_create_comments.rb

Astute readers will notice that this migration is lacking the foreign key column. We'll have to add that ourselves. Open up your create_comments.rb migration:

    class CreateComments < ActiveRecord::Migration
      def self.up
        create_table :comments do |t|
          t.text :text
          t.string :author
          t.belongs_to :book # creates a new integer column named book_id

          t.timestamps
        end
      end

      def self.down
        drop_table :comments
      end
    end

    # now migrate your database
    $ rake db:migrate
    (in /Users/adam/Code/bookshelf)
    ==  CreateComments: migrating =================================================
    -- create_table(:comments)
       -> 0.0021s
    ==  CreateComments: migrated (0.0022s) ========================================

Now it's time to associate our models using the Rails associations. We call the association method inside the model's class body, and Rails uses metaprogramming to generate the methods needed to make our association work. We'll edit our comment.rb and book.rb files:

    # book.rb
    class Book < ActiveRecord::Base
      has_many :comments
    end

    # comment.rb
    class Comment < ActiveRecord::Base
      belongs_to :book
    end

Now Book instances have a method .comments which returns an array of all their comments, and Comment instances have a method called .book that returns the associated book.
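Before we play with this in the console, note that has_many gives Book more than just the .comments reader. Two of the generated helpers are worth knowing about (standard ActiveRecord behavior; we won't rely on them below):

    book = Book.find(1)
    comment = book.comments.build(:text => 'Nice book', :author => 'Adam') # new comment with book_id preset, not yet saved
    book.comments.create(:text => 'Nice book', :author => 'Adam')          # build and save in one step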
You can also use the << operator to add objects to these collections, just as with arrays. Let's see how it works in the console:

    $ ./script/console
    >> book = Book.find(1)
    => #<Book id: 1, title: "Rails is awesome!", thoughts: "Some sentence from a super long paragraph", created_at: "2009-12-02 06:05:38", updated_at: "2009-12-02 06:05:38">
    >> comment = Comment.new :text => "This is a comment", :author => "Adam"
    => #<Comment id: nil, text: "This is a comment", author: "Adam", book_id: nil, created_at: nil, updated_at: nil>
    >> book.comments << comment
    => [#<Comment id: 1, text: "This is a comment", author: "Adam", book_id: 1, created_at: "2009-12-02 08:25:47", updated_at: "2009-12-02 08:25:47">]
    >> book.save
    => true
    >> book.comments
    => [#<Comment id: 1, text: "This is a comment", author: "Adam", book_id: 1, created_at: "2009-12-02 08:25:47", updated_at: "2009-12-02 08:25:47">]
    >> comment.book
    => #<Book id: 1, title: "Rails is awesome!", thoughts: "Some sentence from a super long paragraph", created_at: "2009-12-02 06:05:38", updated_at: "2009-12-02 06:05:38">
    >> exit

In the console session I found one of the existing books, then created a new comment. Next I added it to book.comments. Then I saved the book — the book must be saved for the association to be stored. What's next? We need to create a page where the user can view a book and all its comments. That page should also have a form where the user can add their comment. Create a new action in the books controller to show the details for a specified book; the book is found by id. Pop into books_controller.rb and add this:

    def show
      @book = Book.find params[:id]
    end

Make a new template for this action at /app/views/books/show.html.erb and paste this:

    <% title @book.title %>
    <h2><%=link_to(h(@book.title), book_path(@book)) %></h2>
    <p><%= @book.thoughts %></p>

Now let's add some links in the index view that point to the show action:

    # update index.html.erb contents to:
    <% title 'My Books' %>

    <% for book in @books do %>
      <h2><%=link_to(h(book.title), book_path(book)) %></h2>
      <p><%= book.thoughts %></p>
    <% end %>
    <%= will_paginate @books %>

Remember our url helpers from rake routes? We're using book_path to generate a url to the book controller's show action. If you don't remember, check rake routes again. link_to is a helper to generate a link tag. Now let's fire up our server and click through the app. You should have some ugly blue links. Click on your book title and it should go to /books/:id, aka BooksController#show.

Time to display some comments. Remember that console session we did a little bit back? One of our books has some comments. Let's update our controller to find the comments, and our show.html.erb to display them:

    # books_controller.rb
    def show
      @book = Book.find(params[:id])
      @comments = @book.comments
    end

    # show.html.erb
    <% title @book.title %>
    <h2><%=link_to(h(@book.title), book_path(@book)) %></h2>
    <p><%= @book.thoughts %></p>

    <% if @comments %>
      <h3>Comments</h3>
      <% for comment in @comments do %>
        <p><strong><%=h(comment.author) %></strong>: <%=h comment.text %>
      <% end %>
    <% end %>

So we assign @comments in the controller to be all the book's comments, then do a loop in the view to display them. Head over to /books/1 (1 came from Book.find(1) in the console session) and check it out.
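A small observation about that view (this is ActiveRecord semantics, not a change the article makes): @book.comments always returns a collection — it's never nil — so <% if @comments %> is always true. To hide the Comments header when a book has no comments, you'd test for emptiness instead:

    <% unless @comments.empty? %>
      <h3>Comments</h3>
      <!-- loop as before -->
    <% end %>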
Now we need the form to create a new comment. We need two things: (1) a comments controller to save the comment, and (2) a route to that action. Let's tackle the first:

    bookshelf $ ./script/generate controller Comments

We need a create action that instantiates a new comment, sets its attributes (text/author) from the submitted form data, and saves it:

    class CommentsController < ApplicationController
      def create
        book = Book.find params[:book_id]
        comment = book.comments.new params[:comment]
        comment.save
        flash[:notice] = 'Comment saved'
        redirect_to book_path(book)
      end
    end

First the code finds the book, then creates a new comment from the form data, saves it, sets a message, then redirects back to that book's page. params holds a hash of all GET/POST data sent with a request. Now we need to create a route to the controller's action. Open up routes.rb:

    ActionController::Routing::Routes.draw do |map|
      map.resources :books do |book|
        book.resources :comments, :only => :create
      end
    end

    bookshelf $ rake routes
    # We needed to add a route to create a new comment for a book. We need to know
    # what book we are creating a comment for, so we need a book_id in the route.
    # Look at the book_comments line; it is tied to our CommentsController#create:
    book_comments POST /books/:book_id/comments(.:format) {:controller=>"comments", :action=>"create"}

    # every time you modify routes.rb you'll need to restart the server:
    # kill the server process you have running with ^c (ctrl + c) and start it again

Head back to the /books page and make sure nothing has blown up. Everything should be fine and dandy. Now for constructing the form. We need a form that submits POST data to /books/:book_id/comments. Luckily Rails has the perfect helper for this: form_for. form_for takes some models and generates a route for them, and we pass it a block to create form inputs. Go ahead and paste this into the bottom of your show.html.erb:

    <h3>Post Your Comment</h3>
    <% form_for([@book, Comment.new]) do |form| %>
      <p><%= form.label :author %></p>
      <p><%= form.text_field :author %></p>
      <p><%= form.label :text, 'Comment' %></p>
      <p><%= form.text_area :text %></p>
      <%= form.submit 'Save' %>
    <% end %>

We call form_for to create a new form for the book's comment, then use the text_field/text_area helpers to create inputs for the attributes. At this point we can go ahead and make a comment. Fill in the form and voilà — you now have comments! See that green thing? That's the flash. The flash is a way to store messages between actions. It's perfect for storing little messages like this.
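One caveat about the create action above: it assumes comment.save always succeeds and redirects regardless. Once you add validations to Comment, save can return false. A slightly more defensive sketch (not part of the original article):

    def create
      book = Book.find params[:book_id]
      comment = book.comments.new params[:comment]
      if comment.save
        flash[:notice] = 'Comment saved'
      else
        flash[:error] = 'Comment could not be saved'
      end
      redirect_to book_path(book)
    end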
But what do we do if a book has too many comments? We paginate them, just like we did before. So let's make some changes to the controller and view:

    # books_controller.rb
    def show
      @book = Book.find(params[:id])
      @comments = @book.comments.paginate :page => params[:page], :per_page => 10, :order => 'created_at ASC'
    end

    # show.html.erb
    <% title @book.title %>
    <h2><%=link_to(h(@book.title), book_path(@book)) %></h2>
    <p><%= @book.thoughts %></p>
    <!-- comments section as before, with pagination links after the loop: -->
    <%= will_paginate @comments %>

Start commenting on your books and you should see some pagination. Now people can comment, and everything is paginated, but we're missing something: we have no web interface for creating books. We need to create a form for that. Also, only the admin should be allowed to create books. This means we need to create a user, log in, and check whether they can do an action.

Fifty

Now we're going to implement CRUD functionality for admins. First we'll implement actions to create, edit, and delete books. Then we'll create an admin login. Finally we'll make sure only admins can do those actions. Creating a new book requires two new actions: one named 'new' that renders a form for a new book, and a second named 'create' that takes the form parameters and saves them in the database. Open up your books_controller.rb and add these actions:

    def new
      @book = Book.new
    end

    def create
      @book = Book.new params[:book]
      if @book.save
        flash[:notice] = "#{@book.title} saved."
        redirect_to @book
      else
        render :new
      end
    end

We also need a new view that shows a form. Create a new file /app/views/books/new.html.erb and paste this:

    <% form_for(@book) do |form| %>
      <p>
        <%= form.label :title %><br/>
        <%= form.text_field :title %>
      </p>
      <p>
        <%= form.label :thoughts %><br/>
        <%= form.text_area :thoughts %>
      </p>
      <%= form.submit %>
    <% end %>

Now we're ready to create a new book. Point your browser to /books/new and you should see the form. Go ahead and create a new book. After you fill in your form you should see your new book.

Get rid of the double header in /app/views/books/show.html.erb and add some links to actions an admin can do on that book. Open up that file and set its contents to:

    <% title @book.title %>
    <p><%= @book.thoughts %></p>
    <!-- comments section unchanged -->
    <p>
      Admin Actions:
      <%= link_to 'Edit', edit_book_path(@book) %> |
      <%= link_to 'Delete', book_path(@book), :method => :delete, :confirm => "Are you sure?" %>
    </p>

Head over to a book's page to see the new links. Now that we have links to edit and delete, we can implement them. Editing a book works just about the same as creating a new one: we need an action that shows an edit form, and one to save the changes. Delete is just one action that deletes the record from the database. Open up books_controller.rb and add these actions:

    def edit
      @book = Book.find params[:id]
    end

    def update
      @book = Book.find params[:id]
      if @book.update_attributes(params[:book])
        flash[:notice] = "#{@book.title} saved."
        redirect_to @book
      else
        render :edit
      end
    end

    def destroy
      book = Book.find params[:id]
      book.destroy
      flash[:notice] = "#{book.title} deleted."
      redirect_to books_path
    end

The edit action finds the requested book from the id in the url. The update action finds the book from the id and uses the update_attributes method to set the new values from the form. destroy finds the book by id and deletes it, then redirects you back to the books listing. Next we have to create an edit form. This form is exactly the same as the create form, so we can just about duplicate new.html.erb to edit.html.erb; all we are going to do is change the title. Create a new file /app/views/books/edit.html.erb and paste this:

    <% title "Editing #{@book.title}" %>

    <% form_for(@book) do |form| %>
      <p>
        <%= form.label :title %><br/>
        <%= form.text_field :title %>
      </p>
      <p>
        <%= form.label :thoughts %><br/>
        <%= form.text_area :thoughts %>
      </p>
      <%= form.submit %>
    <% end %>

Now, from one of the book's pages, click the edit link. You should see a familiar form. Notice how Rails filled in the inputs with the saved values? Nice, huh? Go ahead and save some changes to a book. Then delete that book: you should get a confirmation dialog, and then be redirected back to /books.

Add a link to create a new book on the index page. Open up /app/views/books/index.html.erb and add this to the bottom:

    <p>
      Admin actions:
      <%= link_to 'New Book', new_book_path %>
    </p>

Now that we have CRUD functionality, we need to create our admin user.
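Before moving on, a DRY note: new.html.erb and edit.html.erb are now identical apart from the title line. The article's closing section lists partials as a next step; here is what that refactor would look like (a standard Rails idiom — the partial name is our choice):

    <!-- app/views/books/_form.html.erb: move the shared form_for block here -->

    <!-- app/views/books/new.html.erb -->
    <%= render :partial => 'form' %>

    <!-- app/views/books/edit.html.erb -->
    <% title "Editing #{@book.title}" %>
    <%= render :partial => 'form' %>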
Fifty Five

Maintaining user logins is a solved problem in Rails; you rarely have to write your own authentication system. We're going to use the authlogic gem. Authlogic provides simple mechanics to authenticate users and store sessions, which is perfect for our app. We need an admin to log in so he can create/edit/delete books. First let's start by installing the authlogic gem:

    # add config.gem 'authlogic' to environment.rb
    bookshelf $ sudo rake gems:install

Create a new model to hold the admins. Since our users are only admins, we'll name the model Admin. For now the model only needs a login attribute. Generate the model using script/generate model:

    bookshelf $ ./script/generate model Admin login:string
          exists  app/models/
          exists  test/unit/
          exists  test/fixtures/
          create  app/models/admin.rb
          create  test/unit/admin_test.rb
          create  test/fixtures/admins.yml
          exists  db/migrate
          create  db/migrate/20091204202129_create_admins.rb

Now add the authlogic-specific columns to our admin model. Open up the migration you just created and paste this into it:

    class CreateAdmins < ActiveRecord::Migration
      def self.up
        create_table :admins do |t|
          t.string :login
          t.string :crypted_password, :null => false
          t.string :password_salt, :null => false
          t.string :persistence_token, :null => false

          t.timestamps
        end
      end

      def self.down
        drop_table :admins
      end
    end

Now migrate your database:

    bookshelf $ rake db:migrate
    ==  CreateAdmins: migrating ===================================================
    -- create_table(:admins)
       -> 0.0025s
    ==  CreateAdmins: migrated (0.0026s) ==========================================

Now the admin model is created. Next we need to create an authlogic session for that admin. Authlogic includes a generator for this:

    bookshelf $ ./script/generate session admin_session
          exists  app/models/
          create  app/models/admin_session.rb

Next we need to create some routes for logging in and out. Open up routes.rb and add this line:

    map.resource :admin_session

Now we need a controller to handle the logging in and out. Generate this controller using the generator:

    bookshelf $ ./script/generate controller AdminSessions
          exists  app/controllers/
          exists  app/helpers/
          create  app/views/admin_sessions
          exists  test/functional/
          exists  test/unit/helpers/
          create  app/controllers/admin_sessions_controller.rb
          create  test/functional/admin_sessions_controller_test.rb
          create  app/helpers/admin_sessions_helper.rb
          create  test/unit/helpers/admin_sessions_helper_test.rb

Now open up /app/controllers/admin_sessions_controller.rb and paste this into it:

    class AdminSessionsController < ApplicationController
      def new
        @admin_session = AdminSession.new
      end

      def create
        @admin_session = AdminSession.new(params[:admin_session])
        if @admin_session.save
          flash[:notice] = "Login successful!"
          redirect_to books_path
        else
          render :action => :new
        end
      end

      def destroy
        current_admin_session.destroy
        flash[:notice] = "Logout successful!"
        redirect_to books_path
      end
    end

Wow! It seems like we just did a lot, but we haven't. We've just created two new models — one to hold our admins, and the other to hold admin session information — and a controller to handle logging in and out. Now we need a view to show a login form. Create a new file at /app/views/admin_sessions/new.html.erb and paste this into it:

    <% title 'Login' %>

    <% form_for @admin_session, :url => admin_session_path do |f| %>
      <%= f.error_messages %>
      <p>
        <%= f.label :login %><br />
        <%= f.text_field :login %>
      </p>
      <p>
        <%= f.label :password %><br />
        <%= f.password_field :password %>
      </p>
      <%= f.submit "Login" %>
    <% end %>

We're almost done. We still need to tell our Admin model that it uses authlogic, and add some logic to our application controller to maintain session information.
All controllers inherit from ApplicationController, so it's a good place to share methods between controllers. Open up /app/controllers/application_controller.rb and paste this:

    class ApplicationController < ActionController::Base
      helper :all # include all helpers, all the time
      protect_from_forgery # See ActionController::RequestForgeryProtection for details

      # Scrub sensitive parameters from your log
      # filter_parameter_logging :password
      filter_parameter_logging :password, :password_confirmation

      helper_method :current_admin_session, :current_admin

      private

      def current_admin_session
        return @current_admin_session if defined?(@current_admin_session)
        @current_admin_session = AdminSession.find
      end

      def current_admin
        return @current_admin if defined?(@current_admin)
        @current_admin = current_admin_session && current_admin_session.user
      end
    end

Now in /app/models/admin.rb add this line inside the class:

    # /app/models/admin.rb
    acts_as_authentic

We're finally ready to do some logging in and out. All of the stuff we did came almost purely from the authlogic documentation examples; this is a standard setup for many applications. If you want to find out more about how authlogic works, see its documentation. Here's a rundown of what we did:

- Installed the authlogic gem
- Created an Admin model to hold basic information like login/password
- Added authlogic-specific columns to the Admin table
- Generated an authlogic admin session
- Created routes for logging in and out
- Generated an AdminSessions controller to do all the work
- Created a view that shows a login form
- Added methods to ApplicationController for persisting sessions
- Told the Admin model that it uses authlogic

It's time to create the admin account. Our application is simple and only has one admin. We could easily create it in the console, but since we'll need to recreate that user later when we deploy, it doesn't make sense to do the same thing twice. Rails now has functionality for seeding the database, which is perfect for creating the initial records. There is a file, /db/seeds.rb, where you can write Ruby code to create your initial models; you run this file through rake db:seed. In order to create our admin we'll need a login, password, and password confirmation. Open up /db/seeds.rb and paste this, filling in the login with the name you want:

    Admin.create! :login => 'Adam', :password => 'nettuts', :password_confirmation => 'nettuts'

We use the create! method because it will throw an exception if the record can't be saved. Go ahead and run the rake task to seed the database:

    bookshelf $ rake db:seed

Now we should be able to log in. Restart the server to pick up the new routes, then head to /admin_session/new. Go ahead and fill in the form — you should be logged in!
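If the login form rejects you, it's worth confirming that the seed actually created the admin. A quick check in the console (find_by_login is a standard ActiveRecord dynamic finder, not something we wrote):

    bookshelf $ ./script/console
    >> Admin.find_by_login('Adam')
    => #<Admin id: 1, login: "Adam", ...>   # nil here means rake db:seed didn't run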
Now that admins can log in, we can give them access to the new/edit/delete functionality. Rails has these awesome things called filters. Filters are things you can do at points in the request lifecycle. The most popular filter is a before_filter, which gets executed before an action in the controller. We can create a before filter in the books controller that checks whether we have a logged-in admin; the filter will redirect users who aren't logged in, preventing unauthorized access. Open up books_controller.rb and add these lines:

    # first line inside the class:
    before_filter :login_required, :except => [:index, :show]

    # after all the actions:
    private

    def login_required
      unless current_admin
        flash[:error] = 'Only logged in admins can access this page.'
        redirect_to books_path
      end
    end

Now we need to update our views to show the admin links only if there's an admin logged in. That's easy enough: all we need to do is wrap them in an if.

    # show.html.erb
    <% if current_admin %>
      <p>
        Admin Actions:
        <%= link_to 'Edit', edit_book_path(@book) %> |
        <%= link_to 'Delete', book_path(@book), :method => :delete, :confirm => "Are you sure?" %>
      </p>
    <% end %>

    # index.html.erb
    <% if current_admin %>
      <p>
        Admin actions:
        <%= link_to 'New Book', new_book_path %>
      </p>
    <% end %>

We still need to add a login/logout link, and that should go on every page. An easy way to put something on every page is to add it to the layout:

    # /app/views/layouts/application.html.erb — at the bottom, before the closing tags:
        <% if current_admin %>
          <p><%= link_to 'Logout', admin_session_path(current_admin_session), :method => :delete %></p>
        <% else %>
          <p><%= link_to 'Login', new_admin_session_path %></p>
        <% end %>
      </div>
    </body>
    </html>

Now you should have login/logout links on every page, depending on whether you're logged in or out. Go ahead and click through the app. Try accessing the new book page after you've logged out — you should see an error message. You should be able to log in and out, and edit/create/delete books.

Time for the final step. Let's add some formatting to your thoughts and user comments. Rails has a helper method that will change new lines to line breaks and that sort of thing. Add that to show.html.erb:

    # <p><%= @book.thoughts %></p> becomes
    <%= simple_format @book.thoughts %>

    # do the same thing for comments
    # <p><strong><%=h(comment.author) %></strong>: <%=h comment.text %> becomes
    <p><strong><%=h(comment.author) %></strong>:</p>
    <%= simple_format comment.text %>

It doesn't make sense to put the full thoughts on the index page, so let's replace that with a preview instead of the entire text:

    # index.html.erb
    # <p><%= book.thoughts %></p> becomes
    <%= simple_format(truncate(book.thoughts, 100)) %>

Now the index page shows a preview for each book. Finally, we need to set up a route for our root page. Open up routes.rb and add this line:

    map.root :controller => 'books', :action => 'index'

Now when you go to / you'll see the book listing.
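One caveat before celebrating (a well-known gotcha with Rails apps of this vintage, not mentioned in the article): the static placeholder page that ships with every new app shadows the root route. If / still shows the default welcome page, delete it:

    bookshelf $ rm public/index.html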
Sixty

Now we are going to deploy this app in a few steps. You don't need your own server or anything like that; all you need is an account on Heroku. Heroku is a cloud Rails hosting service, and if you have a small app you can use their service for free. Once you've signed up for an account, install the heroku gem:

    $ sudo gem install heroku

Heroku works with git. Git is a distributed source control management system. In order to deploy to Heroku, all you need to do is create your app and then push your code to their server. If you haven't already installed git, instructions can be found online. Once you have heroku and git installed you are ready to deploy. The first thing we need to do is create a new git repo out of the project:

    bookshelf $ git init
    Initialized empty Git repository in /Users/adam/Code/bookshelf/.git/

It's time to do some preparation for the Heroku deployment. In order to get your application's gems installed, create a .gems file in the root project directory, with the name of one gem on each line. When you push your code to Heroku it will read the .gems file and install the gems for you. So create a .gems file and paste this into it:

    forgery
    will_paginate
    authlogic

There is a problem with authlogic on Heroku, so we need to create an initializer to require the gem for us. Create a new file /config/initializers/authlogic.rb and put this line in there:

    require 'authlogic'

Now we should be ready to deploy. The first thing to do is run heroku create. This will create a new Heroku app for you; if you're a first-time user, it will guide you through the setup process.

    bookshelf $ heroku create
    Git remote heroku added

Now we are ready to deploy. Here are the steps:

- Add all files in the project to a commit
- Commit the files
- Push our code to Heroku
- Migrate the database on Heroku
- Seed the database on Heroku
- Restart the Heroku server
- Open your running application

    bookshelf $ git add -A
    bookshelf $ git commit -m 'Initial commit'
    bookshelf $ git push heroku master
    bookshelf $ heroku rake db:migrate
    bookshelf $ heroku rake db:seed
    bookshelf $ heroku restart
    bookshelf $ heroku open

And with that, the final app is running on the world wide web.

Hit the Brakes

We've covered a lot of ground in this article, so where do we go from here? There are a few things we didn't do in this app: we didn't add any validations to the models, we didn't use partials (beyond the sketch above), and we didn't build any administration for the comments. These are things you should look into next. Here are some links to help you with the next steps:

- Completed Source Code
- Confused about the form parts? Read this
- Confused about routes? Read this
- Confused about heroku? Read this
- Confused about associations? Read this
- Confused about authlogic? Read this

Gems used in this project: forgery, will_paginate, nifty-generators, authlogic, and heroku.
https://code.tutsplus.com/tutorials/zero-to-sixty-creating-and-deploying-a-rails-app-in-under-an-hour--net-8252
CC-MAIN-2017-51
en
refinedweb
TArchiveFile

Class describing an archive file containing multiple sub-files, like a ZIP or TAR archive.

Definition at line 24 of file TArchiveFile.h.

    #include <TArchiveFile.h>

Constructor: specify the archive name and member name. The member can be a decimal number, which allows access to the n-th sub-file. This method is normally only called via TFile. Definition at line 44 of file TArchiveFile.cxx.

Destructor. Definition at line 63 of file TArchiveFile.cxx.

Member functions (source locations as in the class reference):

- Return position in archive of current member. Definition at line 71 of file TArchiveFile.cxx.
- Returns number of members in archive. Definition at line 79 of file TArchiveFile.cxx.
- Explicitly make the specified member the current member. Returns -1 in case of error, 0 otherwise. Definition at line 88 of file TArchiveFile.cxx.
- Explicitly make the member with the specified index the current member. Returns -1 in case of error, 0 otherwise. Definition at line 100 of file TArchiveFile.cxx.
- Definition at line 121 of file TArchiveFile.cxx.
- Try to determine if url contains an anchor specifying an archive member. Returns kFALSE in case of an error. Definition at line 149 of file TArchiveFile.cxx.

Inline accessors: definitions at lines 41, 51, 52, 55, 56, and 57 of file TArchiveFile.h.

Data members:

- Archive file name. Definition at line 31 of file TArchiveFile.h.
- Sub-file name. Definition at line 32 of file TArchiveFile.h.
- Index of sub-file in archive. Definition at line 33 of file TArchiveFile.h.
- File stream used to access the archive. Definition at line 34 of file TArchiveFile.h.
- Members in this archive. Definition at line 35 of file TArchiveFile.h.
- Current archive member. Definition at line 36 of file TArchiveFile.h.
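The reference above has lost its method signatures, but the intended entry point is visible in the constructor notes: you normally don't instantiate TArchiveFile yourself — TFile does it when the URL carries an archive-member anchor. A sketch of that usage (the archive and member names here are made up; the "archive#member" URL form follows the n-th-sub-file convention described above):

    // Open a sub-file stored inside a ZIP archive via TFile.
    #include "TFile.h"

    void open_archive_member() {
       // By member name: TFile parses the '#' anchor and sets up the
       // TArchiveFile machinery internally.
       TFile *f = TFile::Open("multi.zip#sub1.root");
       // By index: a decimal anchor selects the n-th sub-file.
       TFile *g = TFile::Open("multi.zip#2");
       if (f) { /* ... use f->Get(...) as with any ROOT file ... */ }
    }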
https://root.cern.ch/doc/master/classTArchiveFile.html
CC-MAIN-2018-47
en
refinedweb
notsure — Posted March 22, 2012 (edited)

Hello, I am experiencing some trouble with a With statement. It works fine on my own PC, but when I try to run the following code on another PC it crashes with the error message:

    Error: Only Object-type variables allowed in a "With" statment.

Does anyone have an idea what causes this error to occur? It works perfectly on my PC. The other PCs tested run the same OS (XP), and I also tried a Win7 PC, which gives the same problem. I deliberately made a function for "AddToList" (the code in that function was originally inside the With statement, but since the error I tried using outside variables in a function outside the statement). That doesn't work either.

    With $TableContents
        While Not .EOF
            $Name = .Fields("Name").value
            $m_ID = .Fields("m_id").value
            AddToList()
            .MoveNext
        WEnd
    EndWith

    Func AddToList()
        If StringLeft($Name, StringLen($filter)) = $filter Then
            GUICtrlCreateListViewItem($Name & "|" & $m_ID, $LVsource)
        EndIf
    EndFunc

The other PCs (2x WinXP and 1x Win7) also have AutoIt installed, so that shouldn't be the problem either. Is it true that you cannot use ANY other variable within a With statement? Also no function calls? And if so, then why does it work on my PC? Confused... Using the MySQL.au3 UDF.

Edited March 22, 2012 by notsure
https://www.autoitscript.com/forum/topic/138848-error-only-object-type-variables-allowed-in-a-with-statment/
CC-MAIN-2018-47
en
refinedweb
Agda — A dependently typed functional programming language and proof assistant

Module documentation for 2.5.4.2: - Agda - Agda.Auto - Agda.Benchmarking - Agda.Compiler - Agda.Compiler.Backend - Agda.Compiler.CallCompiler - Agda.Compiler.Common - Agda.Compiler.JS - Agda.Compiler.MAlonzo - Agda.Compiler.ToTreeless - Agda.Compiler.Treeless - Agda.Compiler.Treeless.AsPatterns - Agda.Compiler.Treeless.Builtin - Agda.Compiler.Treeless.Compare - Agda.Compiler.Treeless.EliminateDefaults - Agda.Compiler.Treeless.EliminateLiteralPatterns - Agda.Compiler.Treeless.Erase - Agda.Compiler.Treeless.GuardsToPrims - Agda.Compiler.Treeless.Identity - Agda.Compiler.Treeless.NormalizeNames - Agda.Compiler.Treeless.Pretty - Agda.Compiler.Treeless.Simplify - Agda.Compiler.Treeless.Subst - Agda.Compiler.Treeless.Uncase - Agda.Compiler.Treeless.Unused - Agda.ImpossibleTest - Agda.Interaction - Agda.Interaction.BasicOps - Agda.Interaction.CommandLine - Agda.Interaction.EmacsCommand - Agda.Interaction.EmacsTop - Agda.Interaction.FindFile - Agda.Interaction.Highlighting - Agda.Interaction.Imports - Agda.Interaction.InteractionTop - Agda.Interaction.Library - Agda.Interaction.MakeCase - Agda.Interaction.Monad - Agda.Interaction.Options - Agda.Interaction.Response - Agda.Interaction.SearchAbout - Agda.Main - Agda.Syntax - Agda.Syntax.Abstract - Agda.Syntax.Common - Agda.Syntax.Concrete - Agda.Syntax.DoNotation - Agda.Syntax.Fixity - Agda.Syntax.IdiomBrackets - Agda.Syntax.Info - Agda.Syntax.Internal - Agda.Syntax.Literal - Agda.Syntax.Notation - Agda.Syntax.Parser - Agda.Syntax.Parser.Alex - Agda.Syntax.Parser.Comments - Agda.Syntax.Parser.Layout - Agda.Syntax.Parser.LexActions - Agda.Syntax.Parser.Lexer - Agda.Syntax.Parser.Literate - Agda.Syntax.Parser.LookAhead - Agda.Syntax.Parser.Monad - Agda.Syntax.Parser.Parser - Agda.Syntax.Parser.StringLiterals - Agda.Syntax.Parser.Tokens - Agda.Syntax.Position - Agda.Syntax.Reflected - Agda.Syntax.Scope - Agda.Syntax.Translation - Agda.Syntax.Treeless - Agda.Termination - … - Agda.TypeChecking.DeadCode - Agda.TypeChecking.DisplayForm - Agda.TypeChecking.DropArgs - Agda.TypeChecking.Empty - Agda.TypeChecking.Errors - Agda.TypeChecking.EtaContract - Agda.TypeChecking.Forcing - Agda.TypeChecking.Free - Agda.TypeChecking.Functions - Agda.TypeChecking.Implicit - Agda.TypeChecking.Injectivity - Agda.TypeChecking.Inlining - … - Agda.TypeChecking.Monad.Caching - Agda.TypeChecking.Monad.Closure - Agda.TypeChecking.Monad.Constraints - Agda.TypeChecking.Monad.Context - Agda.TypeChecking.Monad.Debug - Agda.TypeChecking.Monad.Env - Agda.TypeChecking.Monad.Imports - Agda.TypeChecking.Monad.MetaVars - Agda.TypeChecking.Monad.Mutual - Agda.TypeChecking.Monad.Open - Agda.TypeChecking.Monad.Options - … - Agda.TypeChecking.ReconstructParameters - Agda.TypeChecking.RecordPatterns - Agda.TypeChecking.Records - Agda.TypeChecking.Reduce - Agda.TypeChecking.Rewriting - Agda.TypeChecking.Rules - Agda.TypeChecking.Rules.Application - Agda.TypeChecking.Rules.Builtin - Agda.TypeChecking.Rules.Data - Agda.TypeChecking.Rules.Decl - Agda.TypeChecking.Rules.Def - Agda.TypeChecking.Rules.Display - Agda.TypeChecking.Rules.LHS - Agda.TypeChecking.Rules.Record - Agda.TypeChecking.Rules.Term - Agda.TypeChecking.Serialise - Agda.TypeChecking.Serialise.Base - Agda.TypeChecking.Serialise.Instances - Agda.TypeChecking.SizedTypes - Agda.TypeChecking.Sort - Agda.TypeChecking.Substitute - Agda.TypeChecking.SyntacticEquality - Agda.TypeChecking.Telescope - Agda.TypeChecking.Unquote - Agda.TypeChecking.Warnings - Agda.TypeChecking.With - Agda.Utils -
Agda.Utils.AffineHole - Agda.Utils.AssocList - Agda.Utils.Bag - Agda.Utils.Benchmark - Agda.Utils.BiMap - Agda.Utils.Char - Agda.Utils.Cluster - Agda.Utils.Either - Agda.Utils.Empty - Agda.Utils.Environment - Agda.Utils.Except - Agda.Utils.Favorites - Agda.Utils.FileName - Agda.Utils.Float - Agda.Utils.Function - Agda.Utils.Functor - Agda.Utils.Geniplate - Agda.Utils.Graph - Agda.Utils.Graph.AdjacencyMap - Agda.Utils.Hash - Agda.Utils.HashMap - Agda.Utils.Haskell - Agda.Utils.IO - Agda.Utils.IORef - Agda.Utils.Impossible - Agda.Utils.IndexedList - Agda.Utils.IntSet - Agda.Utils.Lens - Agda.Utils.List - Agda.Utils.ListT - Agda.Utils.Map - Agda.Utils.Maybe - Agda.Utils.Memo - Agda.Utils.Monad - Agda.Utils.Monoid - Agda.Utils.NonemptyList - Agda.Utils.Null - Agda.Utils.POMonoid - Agda.Utils.Parser - Agda.Utils.PartialOrd - Agda.Utils.Permutation - Agda.Utils.Pointer - Agda.Utils.Pretty - Agda.Utils.SemiRing - Agda.Utils.Singleton - Agda.Utils.Size - Agda.Utils.String - Agda.Utils.Suffix - Agda.Utils.Three - Agda.Utils.Time - Agda.Utils.Trie - Agda.Utils.Tuple - Agda.Utils.TypeLevel - Agda.Utils.Update - Agda.Utils.VarSet - Agda.Utils.Warshall - Agda.Utils.Zipper - Agda.Version - Agda.VersionCommit

Agda 2

Note that this README is only about Agda, not its standard library. See the Agda Wiki for information about the library.

- Documentation
- Getting Started
- Hacking on Agda

Changes

Release notes for Agda version 2.5.4.2

Installation and infrastructure

- Fixed installation with some old versions of cabal-install [Issue #3225].
- Using cpp instead of cpphs as the default preprocessor [Issue #3223].
- Added support for GHC 8.4.4.

Other closed issues

For 2.5.4.2 the following issues have also been closed (see bug tracker):

Release notes for Agda version 2.5.4.1

Installation and infrastructure

Emacs mode

- Light highlighting is no longer applied continuously, but only when the file is saved [Issue #3119].

Release notes for Agda version 2.5.4

Installation and infrastructure

- Added support for GHC 8.2.2 and GHC 8.4.3. Note that GHC 8.4.* requires cabal-install ≥ 2.2.0.0.
- Removed support for GHC 7.8.4.
- Included user manual in PDF format in doc/user-manual.pdf.

Language

Call-by-need reduction. Compile-time weak-head evaluation is now call-by-need, but each weak-head reduction has a local heap, so sharing is not maintained between different reductions. The reduction machine has been rewritten from scratch and should be faster than the old one in all cases, even those not exploiting laziness.

Compile-time inlining. Simple definitions (that don't do any pattern matching) marked as INLINE are now also inlined at compile time, whereas before they were only inlined by the compiler backends. Inlining only triggers in function bodies and not in type signatures, to preserve goal types as far as possible.

Automatic inlining. Definitions satisfying the following criteria are now automatically inlined (this can be disabled using the new NOINLINE pragma):

- No pattern matching.
- Uses each argument at most once.
- Does not use all its arguments.

Automatic inlining can be turned off using the flag --no-auto-inline. This can be useful when debugging tactics that may be affected by whether or not a particular definition is being inlined.
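As a small illustration of how the inlining pragmas mentioned above attach to a definition (the definitions themselves are made up):

    id′ : {A : Set} → A → A
    id′ x = x
    {-# INLINE id′ #-}

    const′ : {A B : Set} → A → B → A
    const′ x _ = x
    {-# NOINLINE const′ #-}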
Syntax

Do-notation. There is now builtin do-notation syntax. This means that do is a reserved keyword and cannot be used as an identifier. Do-blocks support lets and pattern matching binds. If the pattern in a bind is non-exhaustive, the other patterns need to be handled in a where-clause (see the example below).

Example:

    filter : {A : Set} → (A → Bool) → List A → List A
    filter p xs = do
      x    ← xs
      true ← return (p x)
        where false → []
      return x

Do-blocks desugar to _>>=_ and _>>_ before scope checking, so whatever definitions of these two functions are in scope of the do-block will be used. More precisely:

Simple bind

    do x ← m
       m'

desugars to m >>= λ x → m'.

Pattern bind

    do p ← m
         where pᵢ → mᵢ
       m'

desugars to m >>= λ { p → m'; pᵢ → mᵢ }, where pᵢ → mᵢ is an arbitrary sequence of clauses and follows the usual layout rules for where. If p is exhaustive the where clause can be omitted.

Non-binding operation

    do m
       m'

desugars to m >> m'.

Let

    do let ds
       m

desugars to let ds in m, where ds is an arbitrary sequence of valid let-declarations.

The last statement in the do block must be a plain expression (no let or bind). Bind statements can use either ← or <-. Neither of these are reserved, so code outside do-blocks can use identifiers with these names, but inside a do-block they would need to be used qualified or under different names.

Infix let declarations. [Issue #917] Let declarations can now be defined in infix (or mixfix) style. For instance:

    f : Nat → Nat
    f n = let _!_ : Nat → Nat → Nat
              x ! y = 2 * x + y
          in n ! n

Overloaded pattern synonyms. [Issue #2787] Pattern synonyms can now be overloaded if all candidates have the same shape. Two pattern synonym definitions have the same shape if they are equal up to variable and constructor names. Shapes are checked at resolution time. For instance, the following is accepted:

    open import Agda.Builtin.Nat

    data List (A : Set) : Set where
      lnil  : List A
      lcons : A → List A → List A

    data Vec (A : Set) : Nat → Set where
      vnil  : Vec A 0
      vcons : ∀ {n} → A → Vec A n → Vec A (suc n)

    pattern []       = lnil
    pattern []       = vnil
    pattern _∷_ x xs = lcons x xs
    pattern _∷_ y ys = vcons y ys

    lmap : ∀ {A B} → (A → B) → List A → List B
    lmap f []       = []
    lmap f (x ∷ xs) = f x ∷ lmap f xs

    vmap : ∀ {A B n} → (A → B) → Vec A n → Vec B n
    vmap f []       = []
    vmap f (x ∷ xs) = f x ∷ vmap f xs

If the file has no top-level module header, the first module cannot have the same name as the file. [Issues #2808 and #1077] This means that the following file File.agda is rejected:

    -- no module header
    postulate A : Set
    module File where -- inner module with the same name as the file

Agda reports "Illegal declaration(s) before top-level module" at the postulate. This is to avoid confusing scope errors in similar situations. If a top-level module header is inserted manually, the file is accepted:

    module _ where -- user written module header
    postulate A : Set
    module File where -- inner module with the same name as the file, ok

Pattern matching

Forced constructor patterns. Constructor patterns can now be dotted to indicate that Agda should not case split on them, but rather that their value is forced by the type of the other patterns. The difference between this and a regular dot pattern is that forced constructor patterns can still bind variables in their arguments. For example:

    open import Agda.Builtin.Nat

    data Vec (A : Set) : Nat → Set where
      nil  : Vec A zero
      cons : (n : Nat) → A → Vec A n → Vec A (suc n)

    append : {A : Set} (m n : Nat) → Vec A m → Vec A n → Vec A (m + n)
    append .zero    n nil            ys = ys
    append (.suc m) n (cons .m x xs) ys = cons (m + n) x (append m n xs ys)

Inferring the type of a function based on its patterns. Agda no longer infers the type of a function based on the patterns used in its definition.
[Issue #2834] This means that the following Agda program is no longer accepted:

    open import Agda.Builtin.Nat

    f : _ → _
    f zero    = zero
    f (suc n) = n

Agda now requires the type of the argument of f to be given explicitly.

Improved constraint solving for pattern matching functions. Constraint solving for functions where each right-hand side has a distinct rigid head has been extended to also cover the case where some clauses return an argument of the function. A typical example is append on lists:

    _++_ : {A : Set} → List A → List A → List A
    []       ++ ys = ys
    (x ∷ xs) ++ ys = x ∷ (xs ++ ys)

Agda can now solve constraints like ?X ++ ys == 1 ∷ ys when ys is a neutral term.

Record expressions translated to copatterns. Definitions of the form

    f ps = record { f₁ = e₁; ..; fₙ = eₙ }

are translated internally to use copatterns:

    f ps .f₁ = e₁
    ...
    f ps .fₙ = eₙ

This means that f ps does not reduce, but thanks to η-equality the two definitions are equivalent. The change should lead to fewer big record expressions showing up in goal types, and potentially significant performance improvement in some cases. This may have a minor impact on with-abstraction and code using --rewriting, since η-equality is not used in these cases.

When using with, it is now allowed to replace any pattern from the parent clause by a variable in the with clause. For example:

    f : List ℕ → List ℕ
    f [] = []
    f (x ∷ xs) with x ≤? 10
    f xs | p = {!!}

In the with clause, xs is treated as a let-bound variable with value .x ∷ .xs (where .x : ℕ and .xs : List ℕ are out of scope) and p : Dec (.x ≤ 10). Since with-abstraction may change the type of variables, instantiations of variables in the with clause are type checked again after with-abstraction.

Builtins

Added support for built-in 64-bit machine words. These are defined in Agda.Builtin.Word and come with two primitive operations to convert to and from natural numbers:

    Word64            : Set
    primWord64ToNat   : Word64 → Nat
    primWord64FromNat : Nat → Word64

Converting to a natural number is the trivial embedding, and converting from a natural number gives you the remainder modulo 2^64. The proofs of these theorems are not primitive, but can be defined in a library using primTrustMe. Basic arithmetic operations can be defined on Word64 by converting to natural numbers, performing the operation there, and converting back; this compiles (in the GHC backend) to addition and subtraction on Data.Word.Word64.

New primitive primFloatLess, and changed semantics of primFloatNumericalLess. primFloatNumericalLess now uses standard IEEE <, so for instance NaN < x = x < NaN = false. On the other hand, primFloatLess provides a total order on Float, with -Inf < NaN < -1.0 < -0.0 < 0.0 < 1.0 < Inf.

The SIZEINF builtin is now given the name ∞ in Agda.Builtin.Size [Issue #2931]. Previously it was given the name ω.
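To make the Word64 interface above concrete, here is a sketch of defining addition through the conversion primitives (only Word64, primWord64ToNat and primWord64FromNat come from Agda.Builtin.Word; addWord is our own name, and the wrap-around behaviour falls out of primWord64FromNat taking remainders modulo 2^64):

    open import Agda.Builtin.Word
    open import Agda.Builtin.Nat

    addWord : Word64 → Word64 → Word64
    addWord a b = primWord64FromNat (primWord64ToNat a + primWord64ToNat b)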
Reflection

New TC primitive: declarePostulate. [Issue #2782]

    declarePostulate : Arg Name → Type → TC ⊤

This can be used to declare new postulates. The Visibility of the Arg must not be hidden. This feature fails when executed with the --safe flag from the command line.

Pragmas and options

The --caching option is ON by default and is also a valid pragma. Caching can (sometimes) speed up re-typechecking in --interaction mode by reusing the result of the previous typechecking for the prefix of the file that has not changed (with a granularity at the level of declarations/mutual blocks). It can be turned off by passing --no-caching to agda, or with the following at the top of your file:

    {-# OPTIONS --no-caching #-}

The --sharing and --no-sharing options have been deprecated and do nothing. Compile-time evaluation is now always call-by-need.

BUILTIN pragmas can now appear before the top-level module header and in parametrized modules. [Issue #2824]

    {-# OPTIONS --rewriting #-}
    open import Agda.Builtin.Equality
    {-# BUILTIN REWRITE _≡_ #-}     -- here
    module TopLevel (A : Set) where
    {-# BUILTIN REWRITE _≡_ #-}     -- or here

Note that it is still the case that built-ins cannot be bound if they depend on module parameters from an enclosing module. For instance, the following is illegal:

    module _ {a} {A : Set a} where
      data _≡_ (x : A) : A → Set a where
        refl : x ≡ x
      {-# BUILTIN EQUALITY _≡_ #-}

Builtins NIL and CONS have been merged with LIST. When binding the LIST builtin, NIL and CONS are bound to the appropriate constructors automatically. This means that instead of writing

    {-# BUILTIN LIST List #-}
    {-# BUILTIN NIL  []   #-}
    {-# BUILTIN CONS _∷_  #-}

you just write

    {-# BUILTIN LIST List #-}

Attempting to bind NIL or CONS results in a warning and has otherwise no effect.

The --no-unicode pragma prevents Agda from introducing unicode characters when pretty printing a term. Lambda, arrows and forall quantifiers are all replaced by their ASCII-only versions. Instead of resorting to subscript suffixes, Agda uses ASCII digit characters.

New option --inversion-max-depth=N. The depth is used to avoid looping due to inverting pattern matching for unsatisfiable constraints [Issue #431]. This option is only expected to be necessary in pathological cases.

New option --no-print-pattern-synonyms. This disables the use of pattern synonyms in output from Agda. See [Issue #2902] for situations where this might be desirable.

New fine-grained control over the warning machinery: the ability to enable or disable warnings on a one-by-one basis.

The command line option --help now takes an optional argument which allows the user to request more specific usage information about particular topics. The only one added so far is warning.

New pragma NOINLINE.

    {-# NOINLINE f #-}

Disables automatic inlining of f.

New pragma WARNING_ON_USAGE.

    {-# WARNING_ON_USAGE QName Message #-}

Prints Message whenever QName is used.
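For example, the new pragma can be used to mark deprecated names (the postulate and message here are invented):

    postulate oldLemma : Set
    {-# WARNING_ON_USAGE oldLemma "oldLemma is deprecated; use newLemma instead." #-}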
Emacs mode

Banana brackets have been added to the Agda input method:

    \((   #x2985  LEFT WHITE PARENTHESIS
    \))   #x2986  RIGHT WHITE PARENTHESIS

Result splitting will introduce the trailing hidden arguments if there is nothing else to do [Issue #2871]. Example:

    data Fun (A : Set) : Set where
      mkFun : (A → A) → Fun A

    test : {A : Set} → Fun A
    test = ?

Splitting on the result here (C-c C-c RET) will append {A} to the left hand side:

    test {A} = ?

Light highlighting is performed dynamically, even if the file is not loaded [Issue #2794]. This light highlighting is based on the token stream generated by Agda's lexer: the code is only highlighted if the file is lexically correct. If the Agda backend is not busy with something else, then the code is highlighted automatically in certain situations:

- When the file is saved.
- When Emacs has been idle, continuously, for a certain period of time (by default 0.2 s) after the last modification of the file, and the file has not been saved (or marked as being unmodified).

This functionality can be turned off, and the time period can be customised.

Highlighting of comments is no longer handled by Font Lock mode [Issue #2794].

The Emacs mode's syntax table has been changed. Previously _ was treated as punctuation. Now it is treated in the same way as most other characters: if the standard syntax table assigns it the syntax class "whitespace", "open parenthesis" or "close parenthesis", then it gets that syntax class, and otherwise it gets the syntax class "word constituent".

Compiler backends

The GHC backend now automatically compiles BUILTIN LIST to Haskell lists. This means that it's no longer necessary to give a COMPILE GHC pragma for the builtin list type. Indeed, doing so has no effect on the compilation and results in a warning.

GHC backend performance improvements. Generated Haskell code now contains approximate type signatures, which lets GHC get rid of many of the unsafeCoerces. This leads to performance improvements of up to 50% in compiled code.

The GHC backend now compiles the INFINITY, SHARP and FLAT builtins in a different way [Issue #2909]. Previously these were compiled to (basically) nothing. Now the INFINITY builtin is compiled to Infinity, available from MAlonzo.RTE:

    data Inf a            = Sharp { flat :: a }
    type Infinity level a = Inf a

The SHARP builtin is compiled to Sharp, and the FLAT builtin is (by default) compiled to a corresponding destructor. Note that code that interacts with Haskell libraries may have to be updated. As an example, here is one way to print colists of characters using the Haskell function putStr:

    open import Agda.Builtin.Char
    open import Agda.Builtin.Coinduction
    open import Agda.Builtin.IO
    open import Agda.Builtin.Unit

    data Colist {a} (A : Set a) : Set a where
      []  : Colist A
      _∷_ : A → ∞ (Colist A) → Colist A

    {-# FOREIGN GHC
      data Colist a = Nil | Cons a (MAlonzo.RTE.Inf (Colist a))
      type Colist' l a = Colist a

      fromColist :: Colist a -> [a]
      fromColist Nil         = []
      fromColist (Cons x xs) = x : fromColist (MAlonzo.RTE.flat xs)
      #-}

    {-# COMPILE GHC Colist = data Colist' (Nil | Cons) #-}

    postulate
      putStr : Colist Char → IO ⊤

    {-# COMPILE GHC putStr = putStr . fromColist #-}

COMPILE GHC pragmas have been included for the size primitives [Issue #2879].

LaTeX backend

The code environment can now take arguments [Issues #2744 and #2453]. Everything from \begin{code} to the end of the line is preserved in the generated LaTeX code, and not treated as Agda code. The default implementation of the code environment recognises one optional argument, hide, which can be used for code that should be type-checked, but not typeset:

    \begin{code}[hide]
      open import Module
    \end{code}

The AgdaHide macro has not been removed, but has been deprecated in favour of [hide].

The AgdaSuppressSpace and AgdaMultiCode environments no longer take an argument. Instead some documents need to be compiled multiple times.

The --count-clusters flag can now be given in OPTIONS pragmas.

The nofontsetup option to the LaTeX package agda was broken, and has (hopefully) been fixed [Issue #2773]. Fewer packages than before are loaded when nofontsetup is used; see agda.sty for details. Furthermore, if LuaLaTeX or XeLaTeX are not used, then the font encoding is no longer changed.

The new option noinputencodingsetup instructs the LaTeX package agda to not change the input encoding, and to not load the ucs package.

Underscores are now typeset using \AgdaUnderscore{}. The default implementation is \_ (the command that was previously generated for underscores). Note that it is possible to override this implementation.

OtherAspects (unsolved meta variables, catchall clauses, etc.) are now correctly highlighted in the LaTeX backend (and the HTML one) [Issue #2474].
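Picking up the \AgdaUnderscore note above: since it is an ordinary LaTeX command, a document should be able to override it after loading the agda package — a sketch only, with an arbitrary replacement rendering:

    \renewcommand{\AgdaUnderscore}{\textunderscore}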
HTML backend

An identifier (excluding bound variables) gets the identifier itself as an anchor, in addition to the file position [Issue #2756]. In Agda 2.5.3, the identifier anchor would replace the file position anchor [Issue #2604].

Symbolic anchors look like

    <a id="test1">
    <a id="M.bla">

while file position anchors use the character position in the file. Some definitions only get a character position anchor, for example where-bound or module-qualified right-hand sides:

    test2 : Set₁   -- Issue2604.html#test2
    test2 = bla
      where
      bla = Set    -- Only character position anchor

    test5 : Set₁   -- Only character position anchor
    test5 = M.bla

List of closed issues

For 2.5.4, the following issues have been closed (see bug tracker): - #351: Constraint solving for irrelevant metas - #421: Higher order positivity - #431: Constructor-headed function makes type-checker diverge - #437: Detect when something cannot be a function type - #488: Refining on user defined syntax mixes up the order of the subgoals - #681: Lack of visual state indicators in new Emacs mode - #689: Contradictory constraints should yield error - #708: Coverage checker not taking literal patterns into account properly - #875: Nonstrict irrelevance violated by implicit inference - #964: Allow unsolved metas in imported files - #987: --html anchors could be more informative - #1054: Inlined Agda code in LaTeX backend - #1131: Infix definitions not allowed in let definitions - #1169: Auto fails with non-terminating function - #1268: Hard to print type of variable if the type starts with an instance argument - #1384: Order of constructor arguments matters for coverage checker - #1425: Instances with relevant recursive instance arguments are not considered in irrelevant positions - #1548: Confusing error about ambiguous definition with parametrized modules - #1884: what is the format of the libraries and defaults files - #1906: Possible performance problem - #2056: Cannot instantiate meta to solution…: Pattern checking done too early in where block - #2067: Display forms in parameterised module too general - #2183: Allow splitting on dotted variables - #2226: open {{…}} gets hiding wrong - #2255: Performance issue with deeply-nested lambdas - #2306: Commands in the emacs-mode get confused if we add question marks to the file - #2384: More fine-grained blocking in constraint solver - #2401: LaTeX backend error - #2404: checkType doesn't accept a type-checking definition checked with the same type - #2420: Failed to solve level constraints in record type with hole - #2421: After emacs starts up, Agda does not process file without restart of Agda - #2436: Agda allows coinductive records with eta-equality - #2450: Irrelevant variables are pruned too eagerly - #2474: The LaTeX and HTML backends do not highlight (all) unsolved metas - #2484: Regression related to sized types - #2526: Better documentation of record modules - #2536: UTF8 parsed incorrectly for literate agda files - #2565: Options for the interaction action give to keep the overloaded literals and sections?
- #2576: Shadowing data decl by data sig produces Missing type signature error
- #2594: Valid partial cover rejected: “Cannot split on argument of non-datatype”
- #2600: Stack complains about Agda.cabal
- #2607: Instance search confused when an instance argument is sourced from a record
- #2617: Installation instructions
- #2623: Incorrect indentation when \AgdaHide is used
- #2634: Fixity declaration ignored in definitions in record
- #2636: The positivity checker complains when a new definition is added in the same where clause
- #2640: Unifier dots the relevant pattern variables when it should dot the irrelevant ones
- #2668: Changing the visibility of a module parameter breaks with
- #2728: Bad interaction between caching and the warning machinery
- #2738: Update Stackage LTS from 9.1 to version supporting Alex 3.2.3
- #2744: It should be possible to give arguments to the code environment
- #2745: Broken build with GHC 7.8.4 due to (new) version 1.2.2.0 of hashtables
- #2749: Add --no-unicode cli option to Agda
- #2751: Unsolved constraints, but no highlighting
- #2752: Mutual blocks inside instance blocks
- #2753: Unsolved constraint, related to instance arguments and sized types
- #2756: HTML backend generates broken links
- #2758: Relevant meta is instantiated with irrelevant solution
- #2759: Empty mutual blocks should be warning rather than error
- #2762: Automatically generate DISPLAY pragmas to fold pattern synonyms
- #2763: Internal Error at “src/full/Agda/TypeChecking/Abstract.hs:138”
- #2765: Inferred level expressions are often “reversed”
- #2769: Agda prints ill-formed expression, record argument dropped
- #2771: Erroneous ‘with’ error message
- #2773: The nofontsetup option does not work as advertised
- #2775: Irrelevance to be taken into account in ‘with’ abstraction.
- #2776: Dotted variable in inferred type
- #2780: Improve level constraint solving for groups of inequality constraints
- #2782: Extending Agda reflection to introduce postulates
- #2785: internal error @ ConcreteToAbstract.hs:721
- #2787: Overloaded pattern synonyms
- #2792: Safe modules can sometimes not be imported from unsafe modules
- #2794: Using \texttt{-} destroys code coloring in literate file
- #2796: Overloaded (inherited) projection resolution fails with parametrized record
- #2798: The LaTeX backend ignores the “operator” aspect
- #2802: Printing of overloaded functions broken due to eager normalization of projections
- #2803: Case splitting loses names of hidden arguments
- #2808: Confusing error when inserting declaration before top-level module
- #2810: Make --caching a pragma option
- #2811: OPTION --caching allowed in file (Issue #2810)
- #2819: Forcing analysis doesn’t consider relevance
- #2821: BUILTIN BOOL gremlin
- #2824: Allow {-# BUILTIN #-} in preamble and in parametrized modules
- #2826: Case splitting on earlier variable uses duplicate variable name
- #2827: Variables off in with-clauses. Parameter refinement?
- #2831: NO_POSITIVITY_CHECK pragma can be written before a mutual block without data or record types
- #2832: BUILTIN NIL and CONS are not needed
- #2834: Disambiguation of type based on pattern leads to non-unique meta solution
- #2836: The Emacs mode does not handle .lagda.tex files
- #2840: Internal error in positivity with modules/datatype definitions
- #2841: Opting out of idiom brackets
- #2844: Root documentation URL redirects to version 2.5.2
- #2849: Internal error at absurd pattern followed by rewrite
- #2854: Agda worries about possibly empty type of sizes even when no builtins for size are active
- #2855: Single-clause definition is both unreachable and incomplete
- #2856: Panic: unbound variable
- #2859: Error “pattern variable shadows constructor” caused by parameter refinement
- #2862: inconsistency from a mutual datatype declaration and module definition
- #2867: Give does not insert parenthesis for module parameters
- #2868: With --postfix-projections, record fields are printed preceded by a dot when working within the record
- #2870: Lexical error for - (hyphen)
- #2871: Introduce just trailing hidden arguments by result splitting
- #2873: Refinement problem in presence of overloaded constructors
- #2874: Internal error in src/full/Agda/TypeChecking/Coverage/Match.hs:312
- #2878: Support for GHC 8.4.1
- #2879: Include COMPILE GHC pragmas for size primitives
- #2881: Internal error in BasicOps
- #2883: “internal error in TypeChecking/Substitute.hs:379”
- #2884: Missing PDF user manual in the tarball
- #2888: Internal error caused by new forcing translation
- #2894: Unifier tries to eta expand non-eta record
- #2896: Unifier throws away pattern
- #2897: Internal error for local modules with refined parameters
- #2904: No tab completion for GHCNoMain
- #2906: Confusing “cannot be translated to a Haskell type” error message
- #2908: primForce is compiled away
- #2909: Agda uses newtypes incorrectly, causing wellformed programs to loop
- #2911: Inferring missing instance clause panics in refined context
- #2912: Add fine-grained control over the displayed warnings
- #2914: Slicing ignores as pragma?
- #2916: The GHC backend generates code with an incorrect number of constructor arguments
- #2917: Very slow due to unsolved size?
- #2919: Internal error in Agda.TypeChecking.Forcing
- #2921: COMPILE data for data types with erased constructor arguments
- #2923: Word.agda not included as builtin
- #2925: Allow adding the same rewrite rules multiple times
- #2927: Panic related to sized types
- #2928: Internal error in Agda.TypeChecking.Rules.LHS
- #2931: Rename Agda.Builtin.Size.ω to ∞?
- #2941: “coinductive” record inconsistent
- #2944: Regression, seemingly related to record expressions
- #2945: Inversion warning in code that used to be accepted
- #2947: Internal error in Agda.TypeChecking.Forcing
- #2952: Wrong compilation of pattern matching to Haskell
- #2953: Generated Haskell code does not typecheck
- #2954: Pattern matching on string gives unexpected unreachable clause
- #2957: Support for async 2.2.1
- #2958: as names being duplicated in buffer after with
- #2959: Repeating a successful command after revert + reload fails with caching enabled
- #2960: Uncommenting indented lines doesn’t work
- #2963: Extended lambdas bypass positivity checking in records
- #2966: Internal error in Auto
- #2968: Bad Interaction with copatterns and eta?, leads to ill-typed terms in error messages.
- #2971: Copattern split with --no-irrelevant-projections panics
- #2974: Copatterns break canonicity
- #2975: Termination checker runs too early for definitions inside record (or: positivity checker runs too late)
- #2976: Emacs mode reports errors in connection with highlighting comments
- #2978: Double solving of meta
- #2985: The termination checker accepts non-terminating code
- #2989: Internal error when checking record match in let expr
- #2990: Performance regression related to the abstract machine
- #2994: Solution accepted in hole is subsequently rejected on reload
- #2996: Internal error with -v tc.cover:20
- #2997: Internal error in Agda.TypeChecking.Rules.LHS
- #2998: Regression: With clause pattern x is not an instance of its parent pattern “eta expansion of x”
- #3002: Spurious 1 after simplification
- #3004: Agda hangs on extended lambda
- #3007: Internal error in Parser
- #3012: Internal Error at “src/full/Agda/TypeChecking/Reduce/Fast.hs:1030”
- #3014: Internal error in Rules.LHS
- #3020: Missing highlighting in record modules
- #3023: Support for GHC 8.4.2
- #3024: Postfix projection patterns not highlighted correctly with agda --latex
- #3030: [ warning ] user defined warnings
- #3031: Eta failure for record meta with irrelevant fields
- #3033: Giving and solving don’t insert parenthesis for applications in dot pattern
- #3044: Internal error in src/full/Agda/TypeChecking/Substitute/Class.hs:209
- #3045: GHC backend generates type without enough arguments
- #3046: do-notation causes parse errors in subsequent where clauses
- #3049: Positivity unsoundness
- #3050: We revert back to call-by-name during positivity checking
- #3051: Pattern synonyms should be allowed in mutual blocks
- #3052: Another recent inference change
- #3062: Literal match does not respect first-match semantics
- #3063: Internal error in Agda.TypeChecking.Forcing
- #3064: Coverage checker bogus on literals combined with copatterns
- #3065: Internal error in coverage checker triggered by literal dot pattern
- #3067: checking hangs on invalid program
- #3072: invalid section printing
- #3074: Wrong hiding causes internal error in LHS checker
- #3075: Automatic inlining and tactics
- #3078: Error building with GHC 7.10.2: Missing transformers library
- #3079: Wrong parameter hiding for instance open
- #3080: Case splitting prints out-of-scope pattern synonyms
- #3082: Emacs mode regression: a ? inserted before existing hole hijacks its interaction point
- #3083: Wrong hiding in module application
- #3084: Changes to mode line do not take effect immediately
- #3085: Postpone checking a pattern let binding when type is blocked
- #3090: Internal error in parser when using parentheses in BUILTIN pragma
- #3096: Support GHC 8.4.3

Release notes for Agda version 2.5.3

Installation and infrastructure

Added support for GHC 8.0.2 and 8.2.1. Removed support for GHC 7.6.3.

Markdown support for literate Agda [PR #2357]. Files ending in .lagda.md will be parsed as literate Markdown files.

- Code blocks start with ``` or ```agda in its own line, and end with ```, also in its own line.

Language

Pattern matching

Dot patterns. The dot in front of an inaccessible pattern can now be skipped if the pattern consists entirely of constructors or literals. For example:

    open import Agda.Builtin.Bool

    data D : Bool → Set where
      c : D true

    f : (x : Bool) → D x → Bool
    f true c = true

Before this change, you had to write f .true c = true.

With-clause patterns can be replaced by _ [Issue #2363].
Example:

    test : Nat → Set
    test zero    with zero
    test _ | _ = Nat
    test (suc x) with zero
    test _ | _ = Nat

We do not have to spell out the pattern of the parent clause (zero / suc x) in the with-clause if we do not need the pattern variables. Note that x is not in scope in the with-clause! A more elaborate example, which cannot be reduced to an ellipsis ...:

    record R : Set where
      coinductive -- disallow ...

Pattern matching lambdas (also known as extended lambdas) can now be nullary, mirroring the behaviour for ordinary function definitions. [Issue #2671] This is useful for case splitting on the result inside an expression: given

    record _×_ (A B : Set) : Set where
      field
        π₁ : A
        π₂ : B
    open _×_

one may case split on the result (C-c C-c RET) in a hole

    λ { → {!!}}

of type A × B to produce

    λ { .π₁ → {!!} ; .π₂ → {!!}}

Records with a field of an empty type are now recognized as empty by Agda. In particular, they can be matched against with an absurd pattern (). For example:

    data ⊥ : Set where

    record Empty : Set where
      field absurdity : ⊥

    magic : Empty → ⊥
    magic ()

Injective pragmas. Injective pragmas can be used to mark a definition as injective for the pattern matching unifier. This can be used as a version of --injective-type-constructors that only applies to specific datatypes. For example (the import of Agda.Builtin.Nat is added here; the example needs it for Nat and suc):

    open import Agda.Builtin.Equality
    open import Agda.Builtin.Nat

    data Fin : Nat → Set where
      zero : {n : Nat} → Fin (suc n)
      suc  : {n : Nat} → Fin n → Fin (suc n)

    {-# INJECTIVE Fin #-}

    Fin-injective : {m n : Nat} → Fin m ≡ Fin n → m ≡ n
    Fin-injective refl = refl

Aside from datatypes, this pragma can also be used to mark other definitions as being injective (for example postulates).

Metavariables can no longer be instantiated during case splitting. This means Agda will refuse to split instead of taking the first constructor it finds. For example:

    open import Agda.Builtin.Nat

    data Vec (A : Set) : Nat → Set where
      nil  : Vec A 0
      cons : {n : Nat} → A → Vec A n → Vec A (suc n)

    foo : Vec Nat _ → Nat
    foo x = {!x!}

In Agda 2.5.2, case splitting on x produced the single clause foo nil = {!!}, but now Agda refuses to split.

Reflection

New TC primitive: debugPrint.

    debugPrint : String → Nat → List ErrorPart → TC ⊤

This maps to the internal function reportSDoc. Debug output is enabled with the -v flag at the command line, or in an OPTIONS pragma. For instance, giving -v a.b.c:10 enables printing from debugPrint "a.b.c.d" 10 msg. In the Emacs mode, debug output ends up in the *Agda debug* buffer.

Built-ins

BUILTIN REFL is now superfluous, subsumed by BUILTIN EQUALITY [Issue #2389].

BUILTIN EQUALITY is now more liberal [Issue #2386]. It accepts, among others, the following new definitions of equality:

    -- Non-universe polymorphic:
    data _≡_ {A : Set} (x : A) : A → Set where
      refl : x ≡ x

    -- ... with explicit argument to refl;
    data _≡_ {A : Set} : (x y : A) → Set where
      refl : {x : A} → x ≡ x

    -- ... even visible
    data _≡_ {A : Set} : (x y : A) → Set where
      refl : (x : A) → x ≡ x

    -- Equality in a different universe than domain:
    -- (also with explicit argument to refl)
    data _≡_ {a} {A : Set a} (x : A) : A → Set where
      refl : x ≡ x

The standard definition is still:

    -- Equality in same universe as domain:
    data _≡_ {a} {A : Set a} (x : A) : A → Set a where
      refl : x ≡ x

Miscellaneous

Rule change for omitted top-level module headers. [Issue #1077] If your file is named Bla.agda, then the following content is rejected.

    foo = Set
    module Bla where
      bar = Set

Before the fix of this issue, Agda would add the missing module header module Bla where at the top of the file.
However, in this particular case it is more likely the user put the declaration foo = Set before the module start in error. Now you get the error Illegal declaration(s) before top-level module if the following conditions are met:

- There is at least one non-import declaration or non-toplevel pragma before the start of the first module.
- The module has the same name as the file.
- The module is the only module at this level (it may have submodules, of course).

If you should see this error, insert a top-level module before the illegal declarations, or move them inside the existing module.

Emacs mode

New warnings:

- Unreachable clauses give rise to a simple warning. They are highlighted in gray.
- Incomplete patterns are non-fatal warnings: it is possible to keep interacting with the file (the reduction will simply be stuck on arguments not matching any pattern). Definitions with incomplete patterns are highlighted in wheat.

Clauses which do not hold definitionally are now highlighted in white smoke.

Fewer commands have the side effect that the buffer is saved.

Aborting commands. Now one can (try to) abort an Agda command by using C-c C-x C-a or a menu entry. The effect is similar to that of restarting Agda (C-c C-x C-r), but some state is preserved, which could mean that it takes less time to reload the module.

Warning: If a command is aborted while it is writing data to disk (for instance .agdai files or Haskell files generated by the GHC backend), then the resulting files may be corrupted. Note also that external commands (like GHC) are not aborted, and their output may continue to be sent to the Emacs mode.

New bindings for the Agda input method:

- All the bold digits are now available. The naming scheme is \Bx for digit x.
- Typing \: you can now get a whole slew of colons. (The Agda input method originally only bound the standard unicode colon, which looks deceptively like the normal colon.)

Case splitting now preserves underscores. [Issue #819]

    data ⊥ : Set where

    test : {A B : Set} → A → ⊥ → B
    test _ x = {! x !}

Splitting on x yields

    test _ ()

Interactively expanding ellipsis. [Issue #2589] An ellipsis in a with-clause can be expanded by splitting on "variable" "." (dot).

    test0 : Nat → Nat
    test0 x with zero
    ... | q = {! . !} -- C-c C-c

Splitting on dot here yields:

    test0 x | q = ?

New command to check an expression against the type of the hole it is in and see what it elaborates to. [Issue #2700] This is useful to determine e.g. what solution typeclass resolution yields. The command is bound to C-c C-; and respects the C-u modifier.

    record Pointed (A : Set) : Set where
      field point : A

    it : ∀ {A : Set} {{x : A}} → A
    it {{x}} = x

    instance _ = record { point = 3 - 4 }

    _ : Pointed Nat
    _ = {! it !} -- C-u C-u C-c C-;

yields

    Goal: Pointed Nat
    Elaborates to: record { point = 0 }

If agda2-give is called with a prefix, then giving is forced, i.e., the safety checks are skipped, including positivity, termination, and double type-checking. [Issue #2730] Invoke forced giving with the key sequence C-u C-c C-SPC.

Library management

The name field in an .agda-lib file is now optional. [Issue #2708] This feature is convenient if you just want to specify the dependencies and include paths for your local project in an .agda-lib file. Naturally, libraries without names cannot be depended on.

Compiler backends

Unified compiler pragmas

The compiler pragmas (COMPILED, COMPILED_DATA, etc.)
have been unified across backends into two new pragmas:

    {-# COMPILE <Backend> <Name> <Text> #-}
    {-# FOREIGN <Backend> <Text> #-}

The old pragmas still work, but will emit a warning if used. They will be removed completely in Agda 2.6. The translation of old pragmas into new ones is as follows:

GHC Haskell backend

The COMPILED pragma (and the corresponding COMPILE GHC pragma) is now also allowed for functions. This makes it possible to have both an Agda implementation and a native Haskell runtime implementation.

The GHC file header pragmas LANGUAGE, OPTIONS_GHC, and INCLUDE inside a FOREIGN GHC pragma are recognized and printed correctly at the top of the generated Haskell file. [Issue #2712]

UHC compiler backend

The UHC backend has been moved to its own repository [] and is no longer part of the Agda distribution.

Haskell imports are no longer transitively inherited from imported modules. The (now deprecated) IMPORT and IMPORT_UHC pragmas no longer cause import statements in modules importing the module containing the pragma. The same is true for the corresponding FOREIGN pragmas.

Support for stand-alone backends. There is a new API in Agda.Compiler.Backend for creating stand-alone backends using Agda as a library. This allows prospective backend writers to experiment with new backends without having to change the Agda code base.

HTML backend

Anchors for identifiers (excluding bound variables) are now the identifiers themselves rather than just the file position [Issue #2604]. Symbolic anchors look like

    <a id="test1">
    <a id="M.bla">

while other identifiers only get character position anchors:

    -- Character position anchor
    test2 : Set₁   -- Issue2604.html#test2
    test2 = bla
      where
      bla = Set

    -- Character position anchor
    test5 : Set₁
    test5 = M.bla

Some generated HTML files now have different file names [Issue #2725]. Agda now uses an encoding that amounts to first converting the module names to UTF-8, and then percent-encoding the resulting bytes. For instance, HTML for the module Σ is placed in %CE%A3.html.

LaTeX backend

The LaTeX backend now handles indentation in a different way [Issue #1832], provided that the code environment is used in an appropriate way. If custom settings are used, for instance if \AgdaIndent is redefined, then the constraint discussed above may not be satisfied. (Note that the meaning of the \AgdaIndent command's argument has changed, and that the command is now used in a different way in the generated LaTeX files.)

Examples: Here C is indented further than B:

    postulate
      A  B
          C : Set

Here C is not (necessarily) indented further than B, because X shadows B:

    postulate
      A  B : Set
      X
        C : Set

The new rule is inspired by, but not identical to, the one used by lhs2TeX's poly mode (see Section 8.4 of the manual for lhs2TeX version 1.17).

Some spacing issues [#2353, #2441, #2733, #2740] have been fixed.

The user can now control the typesetting of (certain) individual tokens by redefining the \AgdaFormat command. The first argument is the token, and the second argument the thing to be typeset.

One can now instruct the agda package not to select any fonts. If the nofontsetup option is used, then some font packages are loaded, but specific fonts are not selected:

    \usepackage[nofontsetup]{agda}

The height of empty lines is now configurable [#2734]. The height is controlled by the length \AgdaEmptySkip, which by default is \baselineskip.

The alignment feature regards the string +̲, containing + and a combining character, as having length two.
However, it seems more reasonable to treat it as having length one, as it occupies a single column, if displayed "properly" using a monospace font. If the new flag --count-clusters is used, then the LaTeX backend does not align the two field keywords:

    record +̲ : Set₁ where
      field A : Set
      field B : Set

The --count-clusters flag is not enabled in all builds of Agda, because the implementation depends on the ICU library, the installation of which could cause extra trouble for some users. The presence of this flag is controlled by the Cabal flag enable-cluster-counting.

A faster variant of the LaTeX backend: QuickLaTeX. When this variant of the backend is used the top-level module is not type-checked, only scope-checked. This implies that some highlighting information is not available. For instance, overloaded constructors are not resolved. QuickLaTeX can be invoked from the Emacs mode, or using agda --latex --only-scope-checking. If the module has already been type-checked successfully, then this information is reused; in this case QuickLaTeX behaves like the regular LaTeX backend.

The --only-scope-checking flag can also be used independently, but it is perhaps unclear what purpose that would serve. (The flag can currently not be combined with --html, --dependency-graph or --vim.) The flag is not allowed in safe mode.

Pragmas and options

The --safe option is now a valid pragma. This makes it possible to declare a module as being part of the safe subset of the language by stating {-# OPTIONS --safe #-} at the top of the corresponding file. Incompatibilities between the --safe option and other options or language constructs are non-fatal errors.

The --no-main option is now a valid pragma. One can now suppress the compiler warning about a missing main function by putting {-# OPTIONS --no-main #-} on top of the file.

New command-line option and pragma --warning=MODE (or -W MODE) for setting the warning mode. Current options are

- warn for displaying warnings (default)
- error for turning warnings into errors
- ignore for not displaying warnings

List of fixed issues

For 2.5.3, the following issues have been fixed (see bug tracker):

- #142: Inherited dot patterns in with functions are not checked
- #623: Error message points to importing module rather than imported module
- #657: Yet another display form problem
- #668: Ability to stop, or restart, typechecking somehow
- #705: confusing error message for ambiguous datatype module name
- #719: Error message for duplicate module definition points to external module instead of internal module
- #776: Unsolvable constraints should give error
- #819: Case-splitting doesn’t preserve underscores
- #883: Rewrite loses type information
- #899: Instance search fails if there are several definitionally equal values in scope
- #1077: problem with module syntax, with parametric module import
- #1126: Port optimizations from the Epic backend
- #1175: Internal Error in Auto
- #1544: Positivity polymorphism needed for compositional positivity analysis
- #1611: Interactive splitting instantiates meta
- #1664: Add Reflection primitives to expose precedence and fixity
- #1817: Solvable size constraints reported as unsolvable
- #1832: Insufficient indentation in LaTeX-rendered Agda code
- #1834: Copattern matching: order of clauses should not matter here
- #1886: Second copies of telescopes not checked?
- #1899: Positivity checker does not treat datatypes and record types in the same way
- #1975: Type-incorrect instantiated overloaded constructor accepted in pattern
- #1976: Type-incorrect instantiated projection accepted in pattern
- #2035: Matching on string causes solver to fail with internal error
- #2146: Unicode syntax for instance arguments
- #2217: Abort Agda without losing state
- #2229: Absence or presence of top-level module header affects scope
- #2253: Wrong scope error for abstract constructors
- #2261: Internal error in Auto/CaseSplit.hs:284
- #2270: Printer does not use sections.
- #2329: Size solver does not use type Size< i to gain the necessary information
- #2354: Interaction between instance search, size solver, and ordinary constraint solver.
- #2355: Literate Agda parser does not recognize TeX comments
- #2360: With clause stripping chokes on ambiguous projection
- #2362: Printing of parent patterns when with-clause does not match
- #2363: Allow underscore in with-clause patterns
- #2366: With-clause patterns renamed in error message
- #2368: Internal error after refining a tactic @ MetaVars.hs:267
- #2371: Shadowed module parameter crashes interaction
- #2372: problems when instances are declared with inferred types
- #2374: Ambiguous projection pattern could be disambiguated by visibility
- #2376: Termination checking interacts badly with eta-contraction
- #2377: open public is useless before module header
- #2381: Search (C-c C-z) panics on pattern synonyms
- #2386: Relax requirements of BUILTIN EQUALITY
- #2389: BUILTIN REFL not needed
- #2400: LaTeX backend error on LaTeX comments
- #2402: Parameters not dropped when reporting incomplete patterns
- #2403: Termination checker should reduce arguments in structural order check
- #2405: instance search failing in parameterized module
- #2408: DLub sorts are not serialized
- #2412: Problem with checking with sized types
- #2413: Agda crashes on x@y pattern
- #2415: Size solver reports “inconsistent upper bound” even though there is a solution
- #2416: Cannot give size as computed by solver
- #2422: Overloaded inherited projections don’t resolve
- #2423: Inherited projection on lhs
- #2426: On just warning about missing cases
- #2429: Irrelevant lambda should be accepted when relevant lambda is expected
- #2430: Another regression related to parameter refinement?
- #2433: rebindLocalRewriteRules re-adds global rewrite rules
- #2434: Exact split analysis is too strict when matching on eta record constructor
- #2441: Incorrect alignement in latex using the new ACM format
- #2444: Generalising compiler pragmas
- #2445: The LaTeX backend is slow
- #2447: Cache loaded interfaces even if a type error is encountered
- #2449: Agda depends on additional C library icu
- #2451: Agda panics when attempting to rewrite a typeclass Eq
- #2456: Internal error when postulating instance
- #2458: Regression: Agda-2.5.3 loops where Agda-2.5.2 passes
- #2462: Overloaded postfix projection does not resolve
- #2464: Eta contraction for irrelevant functions breaks subject reduction
- #2466: Case split to make hidden variable visible does not work
- #2467: REWRITE without BUILTIN REWRITE crashes
- #2469: “Partial” pattern match causes segfault at runtime
- #2472: Regression related to the auto command
- #2477: Sized data type analysis brittle, does not reduce size
- #2478: Multiply defined labels on the user manual (pdf)
- #2479: “Occurs check” error in generated Haskell code
- #2480: Agda accepts incorrect (?) code, subject reduction broken
- #2482: Wrong counting of data parameters with new-style mutual blocks
- #2483: Files are sometimes truncated to a size of 201 bytes
- #2486: Imports via FOREIGN are not transitively inherited anymore
- #2488: Instance search inhibits holes for instance fields
- #2493: Regression: Agda seems to loop when expression is given
- #2494: Instance fields sometimes have incorrect goal types
- #2495: Regression: termination checker of Agda-2.5.3 seemingly loops where Agda-2.5.2 passes
- #2500: Adding fields to a record can cause Agda to reject previous definitions
- #2510: Wrong error with --no-pattern-matching
- #2517: “Not a variable error”
- #2518: CopatternReductions in TreeLess
- #2523: The documentation of --without-K is outdated
- #2529: Unable to install Agda on Windows.
- #2537: case splitting with ‘with’ creates {_} instead of replicating the arguments it found.
- #2538: Internal error when parsing as-pattern
- #2543: Case splitting with ellipsis produces spurious parentheses
- #2545: Race condition in api tests
- #2549: Rewrite rule for higher path constructor does not fire
- #2550: Internal error in Agda.TypeChecking.Substitute
- #2552: Let bindings in module telescopes crash Agda.Interaction.BasicOps
- #2553: Internal error in Agda.TypeChecking.CheckInternal
- #2554: More flexible size-assignment in successor style
- #2555: Why does the positivity checker care about non-recursive occurrences?
- #2558: Internal error in Warshall Solver
- #2560: Internal Error in Reduce.Fast
- #2564: Non-exact-split highlighting makes other highlighting disappear
- #2568: agda2-infer-type-maybe-toplevel (in hole) does not respect “single-solution” requirement of instance resolution
- #2571: Record pattern translation does not eta contract
- #2573: Rewrite rules fail depending on unrelated changes
- #2574: No link attached to module without toplevel name
- #2575: Internal error, related to caching
- #2577: deBruijn fail for higher order instance problem
- #2578: Catch-all clause face used incorrectly for parent with pattern
- #2579: Import statements with module instantiation should not trigger an error message
- #2580: Implicit absurd match is NonVariant, explicit not
- #2583: Wrong de Bruijn index introduced by absurd pattern
- #2584: Duplicate warning printing
- #2585: Definition by copatterns not modulo eta
- #2586: “λ where” with single absurd clause not parsed
- #2588: agda --latex produces invalid LaTeX when there are block comments
- #2592: Internal Error in Agda/TypeChecking/Serialise/Instances/Common.hs
- #2597: Inline record definitions confuse the reflection API
- #2602: Debug output messes up AgdaInfo buffer
- #2603: Internal error in MetaVars.hs
- #2604: Use QNames as anchors in generated HTML
- #2605: HTML backend generates anchors for whitespace
- #2606: Check that LHS of a rewrite rule doesn’t reduce is too strict
- #2612: exact-split documentation is outdated and incomplete
- #2613: Parametrised modules, with-abstraction and termination
- #2620: Internal error in auto.
- #2621: Case splitting instantiates meta
- #2626: triggered internal error with sized types in MetaVars module
- #2629: Exact splitting should not complain about absurd clauses
- #2631: docs for auto aren’t clear on how to use flags/options
- #2632: some flags to auto dont seem to work in current agda 2.5.2
- #2637: Internal error in Agda.TypeChecking.Pretty, possibly related to sized types
- #2639: Performance regression, possibly related to the size solver
- #2641: Required instance of FromNat when compiling imported files
- #2642: Records with duplicate fields
- #2644: Wrong substitution in expandRecordVar
- #2645: Agda accepts postulated fields in a record
- #2646: Only warn if fixities for undefined symbols are given
- #2649: Empty list of “previous definition” in duplicate definition error
- #2652: Added a new variant of the colon to the Agda input method
- #2653: agda-mode: “cannot refine” inside instance argument even though term to be refined typechecks there
- #2654: Internal error on result splitting without --postfix-projections
- #2664: Segmentation fault with compiled programs using mutual record
- #2665: Documentation: Record update syntax in wrong location
- #2666: Internal error at Agda/Syntax/Abstract/Name.hs:113
- #2667: Panic error on unbound variable.
- #2669: Interaction: incorrect field variable name generation
- #2671: Feature request: nullary pattern matching lambdas
- #2679: Internal error at “Typechecking/Abstract.hs:133” and “TypeChecking/Telescope.hs:68”
- #2682: What are the rules for projections of abstract records?
- #2684: Bad error message for abstract constructor
- #2686: Abstract constructors should be ignored when resolving overloading
- #2690: [regression?] Agda engages in deep search instead of immediately failing
- #2700: Add a command to check against goal type (and normalise)
- #2703: Regression: Internal error for underapplied indexed constructor
- #2705: The GHC backend might diverge in infinite file creation
- #2708: Why is the name field in .agda-lib files mandatory?
- #2710: Type checker hangs
- #2712: Compiler Pragma for headers
- #2714: Option --no-main should be allowed as file-local option
- #2717: internal error at DisplayForm.hs:197
- #2718: Interactive ‘give’ doesn’t insert enough parenthesis
- #2721: Without-K doesn’t prevent heterogeneous conflict between literals
- #2723: Unreachable clauses in definition by copattern matching trip clause compiler
- #2725: File names for generated HTML files
- #2726: Old regression related to with
- #2727: Internal errors related to rewrite
- #2729: Regression: case splitting uses variable name variants instead of the unused original names
- #2730: Command to give in spite of termination errors
- #2731: Agda fails to build with happy 1.19.6
- #2733: Avoid some uses of \AgdaIndent?
- #2734: Make height of empty lines configurable
- #2736: Segfault using Alex 3.2.2 and cpphs
- #2740: Indenting every line of code should be a no-op.

The record-type constructor now has an extra argument containing information about the record type's fields:

    data Definition : Set where
      ...
      record-type : (c : Name) (fs : List (Arg Name)) → Definition
      ...

The Agda input method supports blackboard bold letters: \bx for lowercase blackboard bold, \bX for uppercase blackboard bold, \bGx for lowercase greek blackboard bold (similar to \Gx for greeks), and \bGX for uppercase greek blackboard bold (similar to \GX).

You can obtain a list of the installed libraries by calling agda -l fjdsk Dummy.agda and looking at the error message (assuming you don't have a library called fjdsk installed).

--no-sized-types will turn off an extra (inexpensive) analysis on data types used for subtyping of sized types.

Language

Experimental feature: quoteContext. There is a new keyword quoteContext ...

The builtins ZERO and SUC have been merged with NATURAL: when binding the NATURAL builtin, ZERO and SUC are bound automatically as well.

Import directives (using, hiding, renaming and public) can now appear in arbitrary order. Multiple using/hiding/renaming directives are allowed, but you still cannot have both using and hiding (because that doesn't make sense). [Issue #493]

A new command Explain why a particular name is in scope (C-c C-w) has been added. [Issue #207]

Key bindings for controlling simplification/normalisation: if C-u is used exactly once (C-u C-c C-,), then the result is neither (explicitly) normalised nor simplified; if C-u is used twice (C-u C-u C-c C-,), then the result is normalised.

Installation

Made it possible to compile Agda with more recent versions of hashable, QuickCheck and Win32. Excluded mtl-2.1.

Release notes for Agda 2 version 2.3.

Arguments that are not actually used (except for absurd matches) are now marked 'Nonvariant'. If f's first argument is Nonvariant, then f x is definitionally equal to f y regardless of x and y. If a data or record type does not use its parameters, they are still considered as used, allowing "phantom type" techniques.

Compiler backends

-Werror is now overridable. To enable compilation of Haskell modules containing warnings, the -Werror flag for the MAlonzo backend has been made overridable. If, for example, --ghc-flag=-Wwarn is given ...

At first the buffer is highlighted in a somewhat crude way (without go-to-definition information for overloaded constructors). If the highlighting level is "interactive", then the piece of code that is currently being type-checked is highlighted as such. (The default is "non-interactive".)

Under-applied functions can now reduce. Consider the following definition:

    id : {A : Set} → A → A
    id x = x

Previously the expression id would ...

Irrelevance: this is possible if, for instance, B[x] = B′ x, with B′ : .A → Set. Dependent irrelevance allows us to define the eliminator for the Squash type.

Definitions marked with a STATIC pragma are normalised before compilation. Example usage:

    {-# STATIC power #-}

    power : ℕ → ℕ → ℕ
    power 0 x = 1
    power 1 x = x
    power (suc n) x = power n x * x

Occurrences of power 4 x will be normalised before compilation.

Each module is compiled into an ECMAScript target <DIR>/jAgda.<TOP-LEVEL MODULE NAME>.js. The compiler can also be invoked using the Emacs mode (the variable agda2-backend controls which backend is used).

The --epic-flag flag can be given multiple times; each flag is given verbatim to the Epic compiler (in the given order). The resulting executable is named after the main module and placed in the directory specified by the --compile-dir flag. The Epic compiler compiles via C, not Haskell, so the pragmas related to the Haskell FFI (IMPORT, COMPILED_DATA, ...) do not apply; instead one gives Epic code which should include the function arguments, return type and function body. As an example, iobind is an Epic function defined in the file AgdaPrelude.e.

Release notes for Agda 2 version 2.2.8

Termination checker can count. There is a new flag --termination-depth=N accepting values N >= 1 (with N = 1 being the default). Consider a function f that recurses via an auxiliary function aux, where the relation of call argument to callee parameter is computed as "unrelated" (composition of < and ?). Setting N >= 2 allows a finer analysis: n has two constructors less than suc (suc n), and suc m has one more than m, so we get the call graph:

    f --(-2)--> aux --(+1)--> f

The indirect call f --> f is now labeled with (-1), and the termination checker can recognise that the call argument is decreasing on this path. Setting the termination depth to N means that the termination checker counts decrease up to N and increase up to N-1. The default, N=1, means that no increase is counted, every increase turns to "unrelated".

If x is the name of a definition (function, datatype, record, or a constructor), quote x gives you the representation of x.

A builtin LEVELMAX for taking the maximum of two levels:

    max : Level → Level → Level
    max zero m = m
    max (suc n) zero = suc n
    max (suc n) (suc m) = suc (max n m)
    {-# BUILTIN LEVELMAX max #-}

The experimental and incomplete support for proof irrelevance has been disabled.

Release notes for Agda 2 version 2.2.4

Important changes since 2.2.2: Change to the semantics of open import: open always goes with import, and public ...
https://www.stackage.org/package/Agda
This post is the third part of the series. Assuming the dataset is named "people_wiki.csv", place the below code in another .py file (let's say indexing.py) in the same folder as the data.

    import pandas as pd
    import numpy as np
    import json
    import time
    from elasticsearch import Elasticsearch

    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

    start_time = time.time()
    data = pd.read_csv('people_wiki.csv')
    print 'Data prepared in ' + str((time.time()-start_time)/60) + ' minutes'

    json_body = data.reset_index().to_json(orient='index')
    json_body = json_body.decode('ascii','ignore').encode('utf-8','replace')
    json_parsed = json.loads(json_body)
    print np.shape(data)

    for elements in json_parsed:
        data_json = json_parsed[elements]
        id_ = data_json['URI']
        es.index(index='wiki_search', doc_type='data', id=id_, body=data_json)
        print id_ + ' indexed successfully'

    print 'Indexed in '+str((time.time()-start_time)/60)+' minutes'

Executing this script will produce streaming logs, ultimately leading to the data getting indexed in elasticsearch. That's how easy it is! Let's spend the next few lines on what actually happened.

We declare our Elasticsearch client object, configured to talk to our local machine. Once that object is initialized we will use it to index all of our data. Pandas is a Python library for loading datasets, and it works great. We use read_csv() to load our data. Once that is done we have to convert it to a JSON format to send it to elasticsearch for indexing. We do this conversion by using the json library that is shipped with the anaconda distribution. Every person in our data will be treated as a separate document. We are specifying the URI as the id of the documents as we can be sure that the URI will always be unique for a document. Once all this is in place, we can go ahead and call es.index() with the specified parameters to start indexing our documents iteratively.

Indexing in elasticsearch is the process that it goes through to understand the data beforehand. It will parse the free text, do all the pre-processing already discussed in Part 2 and store the data in shards to achieve blazing fast speed at query time.

Once this script has completed running, the elasticsearch module will be completely ready. From here on we are done with building the search engine. We just have to build a frontend using AngularJS and all the awesomeness of elasticsearch will be accessible through it. In the next parts we will concentrate on building the frontend, and we will be done with building our search engine – fuzzy and blazing fast.
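Before wiring up the frontend, it can help to sanity-check the index from Python. The snippet below is a minimal sketch, not part of the original post: it assumes the wiki_search index and data doc type created above, and that the dataset's article text lives in a column called text and the person's name in name (as in the standard people_wiki.csv); the query string is made up for illustration.

    from elasticsearch import Elasticsearch

    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

    # how many documents made it into the index?
    print es.count(index='wiki_search')['count']

    # full-text match query against the article body
    results = es.search(index='wiki_search', doc_type='data', body={
        'query': {
            'match': {
                'text': 'president of the united states'
            }
        },
        'size': 5,
    })

    # print the top hits with their relevance scores
    for hit in results['hits']['hits']:
        print hit['_score'], hit['_source']['name']

For larger datasets, indexing row by row with es.index() is slow; the elasticsearch.helpers.bulk helper can send documents in batches instead, though the one-by-one loop above is easier to follow.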
https://machinelearningblogs.com/2016/12/26/how-to-build-a-search-engine-part-3/
Important: For time-sensitive calculations that are evaluated once at run time and that you want to remain the same value throughout report processing, consider whether to use a report variable or group variable. For more information, see Report and Group Variables Collections References (Report Builder and SSRS).

Note: Be aware that during an upgrade of a report server, reports that depend on custom assemblies might require additional steps to complete the upgrade.

Note: You can create and modify paginated report definition (.rdl) files in Report Builder and in Report Designer in SQL Server Data Tools. Each authoring environment provides different ways to create, open, and save reports and related items.

Working with Custom Code in Report Builder

Including References to Commonly Used Functions

For more information, see Math, Convert, and Visual Basic Runtime Library Members on MSDN.

Including References to Less Commonly Used Functions

To include a reference to other less commonly used CLR namespaces, you must use a fully qualified reference, for example, System.Text.StringBuilder. IntelliSense is not supported in the code pane of the Expression dialog box for these less commonly used functions. For more information, see Visual Basic Runtime Library Members on MSDN.

Including References to External Assemblies

Including Embedded Code

To reference embedded code, call the Code member followed by the method name:

    =Code.ToUSD(Fields!StandardCost.Value)

To reference built-in collections in your custom code, include a reference to the built-in Report object:

    =Report.Parameters!Param1.Value

The following examples show how to define some custom constants and variables.

    Public Const MyNote = "Authored by Bob"
    Public Const NCopies As Int32 = 2
    Public Dim MyVersion As String = "123.456"
    Public Dim MyDoubleVersion As Double = 123.456

Although custom constants do not appear in the Constants category in the Expression dialog box (which only displays built-in constants), you can add references to them from any expression, as shown in the following examples. In an expression, a custom constant is treated as a Variant.

    =Code.MyNote
    =Code.NCopies
    =Code.MyVersion
    =Code.MyDoubleVersion

The following example shows a custom function, FixSpelling:

    Public Function FixSpelling(ByVal s As String) As String
        Dim strBuilder As New System.Text.StringBuilder(s)
        If s.Contains("Bike") Then
            strBuilder.Replace("Bike", "Bicycle")
            Return strBuilder.ToString()
        Else : Return s
        End If
    End Function

For more information about built-in object collections and initialization, see Built-in Globals and Users References (Report Builder and SSRS) and Initializing Custom Assembly Objects.

Including References to Parameters from Code

The following table includes examples of referencing the built-in collection Parameters from custom code:

Passing an entire global parameter collection to custom code. This function returns the value of a specific report parameter, MyParameter.

Reference in Expression:

    =Code.DisplayAParameterValue(Parameters)

Custom Code definition:

    Public Function DisplayAParameterValue(ByVal parameters as Parameters) as Object
        Return parameters("MyParameter").Value
    End Function

Passing an individual parameter to custom code.

Reference in Expression:

    =Code.ShowParameterValues(Parameters!DayOfTheWeek)

This example returns the value of the parameter passed in. If the parameter is a multivalue parameter, the return string is a concatenation of all the values.

Custom Code definition:

    Public Function ShowParameterValues(ByVal parameter as Parameter) as String
        Dim s as String
        If parameter.IsMultiValue then
            s = "Multivalue: "
            For i as integer = 0 to parameter.Count-1
                s = s + CStr(parameter.Value(i)) + " "
            Next
        Else
            s = "Single value: " + CStr(parameter.Value)
        End If
        Return s
    End Function

Including References to Code from Custom Assemblies

Static methods in a custom assembly are referenced by namespace, class, and method name:

    =CurrencyConversion.DollarCurrencyConversion.ToGBP(Fields!StandardCost.Value)

Instance-based methods are available through a globally defined Code member. You access these by referring to the Code member, followed by the instance and method name. The following example calls the instance method ToEUR, which converts the value of StandardCost from dollar to euro:

    =Code.m_myDollarConversion.ToEUR(Fields!StandardCost.Value)

Note: In Report Designer, a custom assembly is loaded once and is not unloaded until you close Visual Studio. If you preview a report, make changes to a custom assembly used in the report, and then preview the report again, the changes will not appear in the second preview. To reload the assembly, close and reopen Visual Studio and then preview the report.

For more information about accessing your code, see Accessing Custom Assemblies Through Expressions.

Passing Built-in Collections into Custom Assemblies

See Also

Add Code to a Report (SSRS)
Using Custom Assemblies with Reports
Add an Assembly Reference to a Report (SSRS)
Reporting Services Tutorials (SSRS)
Expression Examples (Report Builder and SSRS)
Report Samples (Report Builder and SSRS)
https://docs.microsoft.com/en-us/sql/reporting-services/report-design/custom-code-and-assembly-references-in-expressions-in-report-designer-ssrs?view=sql-server-2017
Gary Shank created DERBY-6341:
---------------------------------

Summary: LOB streaming not working with ClientDriver - IOException: object already closed
Key: DERBY-6341
URL:
Issue Type: Bug
Components: JDBC
Affects Versions: 10.10.1.1
Reporter: Gary Shank

I have a small test program using OpenJPA v2.2.2 with Derby database 10.10.1.1 and the Derby org.apache.derby.jdbc.ClientDriver. I also tried ClientDriver40.

My entity is defined like this:

    @Entity(name = "BLOB_TEST")
    public class BlobTest implements java.io.Serializable {
        public BlobTest() {}

        @Id
        @Column(name = "PRIM_KEY", columnDefinition="VARCHAR(10)")
        private String primKey = null;
        public void setKey(String key) { primKey = key; }
        public String getKey() { return primKey; }

        @Persistent
        @Column(name = "DATA")
        private InputStream data = null;
        public void setData(InputStream data) { this.data = data; }
        public InputStream getData() { return data; }
    }

Putting data into the database works fine:

    EntityManager em = open(); // performs configuration and emf.createEntityManager();
    em.getTransaction().begin();
    FileInputStream fis = new FileInputStream("someInputFile");
    BlobTest bt = new BlobTest();
    bt.setKey("1");
    bt.setData(fis);
    em.persist(bt);
    em.getTransaction().commit();
    em.close();

Getting the data fails with "IOException: The object is already closed." when any InputStream.read method is called:

    EntityManager em = open(); // performs configuration and emf.createEntityManager();
    BlobTest bt = em.find(BlobTest.class, "1"); // the record is found
    InputStream is = bt.getData();
    while ( (bytesRead = is.read(buffer, 0, len)) != -1 )

    java.io.IOException: The object is already closed.
        at org.apache.derby.client.am.CloseFilterInputStream.read(Unknown Source)

Getting the data works if I use JDBC directly like this:

    EntityManager em = open(); // performs configuration and emf.createEntityManager();
    Connection conx = (Connection)org.apache.openjpa.persistence.OpenJPAPersistence.cast(em).getConnection();
    PreparedStatement pstmt = conx.prepareStatement("select DATA from BLOB_TEST where PRIM_KEY='1'");
    ResultSet rs = pstmt.executeQuery();
    InputStream is = rs.getBinaryStream(1);
    while ( (bytesRead = is.read(buffer, 0, len)) != -1 )

Is this a bug or am I just doing something wrong? My code has to work with multiple databases so I can't really use JDBC directly - which is why I opted for using OpenJPA. I'm not sure if this is an OpenJPA issue or a Derby issue but, at the moment, I'm assuming it is a problem with the client driver. By the way, I did not test with the embedded driver since we need it to work with the client driver.

I've looked at the following other issues:

DERBY-3646 mentions "object already close" and the CloseFilterInputStream
OPENJPA-1248 - LOB streaming does not work as expected
OPENJPA-130 - use of InputStream for LOB streaming

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/db-derby-dev/201309.mbox/%3CJIRA.12667546.1378725839819.100488.1378725951585@arcas%3E
Building An Asynchronous FTP Client

October 17, 2002 | Fredrik Lundh

This article describes how to use Python's standard asynchat and asyncore modules to implement an asynchronous FTP client. In the first part, we'll look at the FTP protocol itself, and how to use the asynchat library to talk to an FTP server.

Contents:

- Part #1: Reading Directory Listings
- Part #2: Transferring Files

The scripts and modules used in this article are available from the effbot.org subversion repository:

$ svn co

Part #1: Reading Directory Listings #

The File Transfer Protocol #

The File Transfer Protocol (FTP) has been around for ages; it's even older than the Internet. Despite its age, FTP is still commonly used to download data from remote servers, and it's by far the most common protocol for uploading data to servers.

Unlike HTTP, the FTP is a "chat-style" protocol. The client sends a command, waits for a response, sends another command, reads the response, etc. A typical interchange might look something like (C=client, S=server):

    C: connects
    S: 220 FTP server ready.
    C: USER mulder
    S: 331 Password required for mulder
    C: PASS trustno1
    S: 230 User mulder logged in.
    C: PASV
    S: 227 Entering Passive Mode (195,100,36,198,219,28)
    C: RETR sculley.zip
    S: 150 Opening BINARY mode data connection for sculley.zip (271165 bytes).
    S: 226 Transfer complete.
    C: PASV
    S: 227 Entering Passive Mode (195,100,36,198,219,29)
    C: LIST
    S: 150 Opening ASCII mode data connection for directory listing.
    S: 226 Transfer complete.
    C: QUIT
    S: 221-You have transferred 271165 bytes in 1 files.
    S: 221-Total traffic for this session was 271859 bytes in 1 transfers.
    S: 221 Thank you for using the FTP service on server.example.com.

The client lines all consist of a command name (e.g. USER) followed by an optional argument. The server response lines consist of a 3-digit code, followed by either a space or a dash (-), followed by a text message. The lines using a dash belong to a multi-line response; the client should keep reading response lines until it gets a line without the dash. Lines are separated by CR and LF (chr(13)+chr(10)), but some clients and servers use only LF (chr(10)).

Common FTP Commands

The above example uses the following FTP commands:

USER. Provide user name. The server should respond with 230 if the user is accepted as is, 530 if the login attempt was rejected, or 331 or 332 if the client must provide a password (using the PASS command).

PASS. Provide password. The server should respond with 230 if the user is accepted, 530 if the login failed, or 332 if further login information is required (the details of which are outside the scope of this article).

PASV. Tell the server to prepare a data transfer channel. The server will return 227 and the response message will also contain six integers, separated by commas. The numbers specify an IP address and a port number to which the client should connect to transfer the data. The client should ignore the first four integers, and use the server address instead. To get the port number, multiply the fifth integer by 256 and add the sixth integer.

RETR. Initialize a data transfer from the server to the client, using the port number specified by the PASV command. The client should connect to the data port before issuing this command. When the transfer is initialized, the server will return a 150 response and start sending data over the transfer port. When the transfer is completed (whether all data was sent or not), the server follows up with a 226 response.

LIST.
This is similar to RETR, but it returns a directory listing for the current directory. As with RETR, you must use PASV to prepare the data channel before issuing this command.

QUIT. Shutdown the connection. The server usually returns a multiline summary message. If you're not interested in the message, you can just shut down the socket connection.

For more information on the FTP protocol, see Dan Bernstein's extensive FTP protocol reference, which is written with an emphasis on how FTP works in practice.

Introducing the asynchat Module #

The asyncore library comes with a support module for chat-style protocols, called asynchat. This module provides an asyncore.dispatcher subclass called async_chat, which adds an input parser and output buffering to the basic dispatcher.

The input parser feeds data to the collect_incoming_data method. When the parser sees a predefined terminator string, it calls the found_terminator method. The following example prints incoming lines to standard output, one line at a time:

    class channel(asynchat.async_chat):

        def __init__(self):
            asynchat.async_chat.__init__(self)
            self.buffer = ""
            self.set_terminator("\r\n")

        def collect_incoming_data(self, data):
            self.buffer = self.buffer + data

        def found_terminator(self):
            print "got", self.buffer
            self.buffer = ""

The async_chat class also provides output buffering, via the push method:

    class channel(asynchat.async_chat):

        def found_terminator(self):
            # echo string back to sender
            self.push("echo %s\n" % self.buffer)
            self.buffer = ""

There's also a push_with_producer method that takes a producer object, which can be used to generate data on the fly. Producer objects are outside the scope of this article.

The push and push_with_producer methods add data to an output queue, and the framework automatically sends data whenever the receiving end is ready.

Using asynchat for FTP

But let's get back to the topic for this article: doing asynchronous FTP. The FTP server expects the client to read a response, send a command, read the next response, etc. The found_terminator method is where you end up after each response, so it makes a certain sense to put the protocol logic in that method. Here's a first attempt:

    import asyncore, asynchat
    import re, socket

    class anon_ftp(asynchat.async_chat):

        def __init__(self, host):
            asynchat.async_chat.__init__(self)
            self.commands = [
                "USER anonymous",
                "PASS anonymous@",
                "PWD",
                "QUIT"
                ]
            self.set_terminator("\n")
            self.data = ""
            # connect to ftp server
            self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
            self.connect((host, 21))

        def handle_connect(self):
            # connection succeeded
            pass

        def handle_expt(self):
            # connection failed
            self.close()

        def collect_incoming_data(self, data):
            # received a chunk of incoming data
            self.data = self.data + data

        def found_terminator(self):
            # got a response line
            data = self.data
            if data.endswith("\r"):
                data = data[:-1]
            self.data = ""
            print "S:", data
            if re.match("\d\d\d ", data):
                # this was the last line in this response
                # send the next command to the server
                try:
                    command = self.commands.pop(0)
                except IndexError:
                    pass # no more commands
                else:
                    print "C:", command
                    self.push(command + "\r\n")

    anon_ftp("")
    asyncore.loop()

This class uses a predefined command list (in the commands attribute), which logs in to an FTP server as an anonymous user, fetches the name of the current directory using the PWD command, and finally logs off.
The re.match function uses a regular expression to look for a string that starts with three digits followed by a space; as we saw earlier, the server may send multi-line responses, but only the last line in such a response may use a space as the fourth character.

If you run this script, it should print something like this:

S: 220 ProFTPD 1.2.4 Server ()
C: USER anonymous
S: 331 Anonymous login ok, send your complete email address as your password.
C: PASS anonymous@
S: 230 Anonymous access granted, restrictions apply.
C: PWD
S: 257 "/" is current directory.
C: QUIT
S: 221 Goodbye.

A problem here is of course that the client doesn't really look at the server responses; we'll keep sending commands even if the server doesn't allow us to log in. And even if it's not very common, an FTP server does not have to require a password: if the USER command results in a 230 response code, the client shouldn't send a PASS command.

In other words, you need to look at each response before you decide what to do next. One way to do this is to add explicit tests to the found_terminator code; something like this could work:

last_command = None

def found_terminator(self):
    # got a response line
    data = self.data
    if data.endswith("\r"):
        data = data[:-1]
    self.data = ""
    if not re.match("\d\d\d ", data):
        return
    # this was the last line in this response;
    # check if the last command needs special treatment
    if self.last_command is None:
        # handle the connection greeting
        if data.startswith("220"):
            self.last_command = "USER"
            self.push("USER anonymous\r\n")
            return
        else:
            raise Exception("ftp login failed")
    elif self.last_command == "USER":
        # handle user response
        if data.startswith("230"):
            pass # user accepted
        elif data.startswith("331") or data.startswith("332"):
            self.last_command = "PASS"
            # assumes self.password was set in the constructor
            self.push("PASS " + self.password + "\r\n")
            return
        else:
            raise Exception("ftp login failed")
    elif self.last_command == "PASS":
        if data.startswith("230"):
            pass # user and password accepted
        else:
            raise Exception("ftp login failed")
    # send the next command to the server
    try:
        self.push(self.commands.pop(0) + "\r\n")
    except IndexError:
        pass # no more commands
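This works, but the if/elif chain grows quickly as commands are added. A halfway point between explicit tests and the handler approach described next is a dispatch table; here's a sketch of that idea (the helper names are illustrative, not part of the article's client) that maps the previously sent command to a small checker function:

def check_greeting(code):
    return code == "220"

def check_user(code):
    return code in ("230", "331", "332")

def check_pass(code):
    return code == "230"

# maps the previously sent command to a checker; None stands
# for the initial connection greeting
checkers = {
    None: check_greeting,
    "USER": check_user,
    "PASS": check_pass,
}

def response_ok(last_command, data):
    # data is the final response line; its first three
    # characters are the response code
    checker = checkers.get(last_command)
    if checker is None:
        return True # no special treatment needed
    return checker(data[:3])

print response_ok(None, "220 FTP server ready.")   # True
print response_ok("USER", "530 Login incorrect.")  # False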
A more flexible (and scalable) approach is to use pluggable response handlers. The following version adds a handler attribute which, if not None, points to a piece of code that's prepared to look at the response from the previous command. The ftp_handle_connect, ftp_handle_user_response, and ftp_handle_pass_response handlers take care of the login sequence.

import asyncore, asynchat
import re, socket

class anon_ftp(asynchat.async_chat):

    def __init__(self, host):
        asynchat.async_chat.__init__(self)
        self.host = host
        self.user = "anonymous"
        self.password = "anonymous@"
        self.set_terminator("\n")
        self.data = ""
        self.response = []
        self.commands = ["PWD", "QUIT"]
        self.handler = self.ftp_handle_connect
        # connect to ftp server
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, 21))

    def handle_connect(self):
        # connection succeeded
        pass

    def handle_expt(self):
        # connection failed
        self.close()

    def collect_incoming_data(self, data):
        self.data = self.data + data

    def found_terminator(self):
        # collect response
        data = self.data
        if data.endswith("\r"):
            data = data[:-1]
        self.data = ""
        self.response.append(data)
        if not re.match("\d\d\d ", data):
            return
        response = self.response
        self.response = []
        for line in response:
            print "S:", line
        # process response
        if self.handler:
            # call the response handler
            handler = self.handler
            self.handler = None
            handler(response)
        if self.handler:
            return # follow-up command in progress
        # send next command from queue
        try:
            print "C:", self.commands[0]
            self.push(self.commands.pop(0) + "\r\n")
        except IndexError:
            pass

    def ftp_handle_connect(self, response):
        code = response[-1][:3] # get response code
        if code == "220":
            self.push("USER " + self.user + "\r\n")
            self.handler = self.ftp_handle_user_response
        else:
            raise Exception("ftp login failed")

    def ftp_handle_user_response(self, response):
        code = response[-1][:3]
        if code == "230":
            return # user accepted
        elif code == "331" or code == "332":
            self.push("PASS " + self.password + "\r\n")
            self.handler = self.ftp_handle_pass_response
        else:
            raise Exception("ftp login failed: user name not accepted")

    def ftp_handle_pass_response(self, response):
        code = response[-1][:3]
        if code == "230":
            return # user and password accepted
        else:
            raise Exception("ftp login failed: user/password not accepted")

anon_ftp("")
asyncore.loop()

Running this, you'll get output similar to this (note that commands sent by the response handlers are not logged):

S: 220 ProFTPD 1.2.4 Server ()
S: 331 Anonymous login ok, send your complete email address as your password.
S: 230 Anonymous access granted, restrictions apply.
C: PWD
S: 257 "/" is current directory.
C: QUIT
S: 221 Goodbye.

Downloading Directory Listings

As mentioned earlier, the FTP server uses separate data channels to transfer data. The main channel is only used to issue commands, and to return responses from the server.

Let's use the LIST command as an example. Before you can send this command, you must use PASV to set up a data channel. The server will respond with the port number to connect to, and wait for the LIST command (or any other data transfer command). The command/response exchange might look something like this:

C: PASV
S: 227 Entering Passive Mode (194,109,137,227,8,11).
C: LIST
S: 150 Opening ASCII mode data connection for file list
...download listing from port 8*256+11=2059...
S: 226 Transfer complete.

To parse the PASV response, you can use a response handler looking something like this:

import re

# get port number from pasv response
pasv_pattern = re.compile("[-\d]+,[-\d]+,[-\d]+,[-\d]+,([-\d]+),([-\d]+)")

class anon_ftp(asynchat.async_chat):

    ...
    def ftp_handle_pasv_response(self, response):
        code = response[-1][:3]
        if code != "227":
            return # pasv failed
        match = pasv_pattern.search(response[-1])
        if not match:
            return # bad port
        p1, p2 = match.groups()
        try:
            port = (int(p1) & 255) * 256 + (int(p2) & 255)
        except ValueError:
            return # bad port
        # establish data connection
        async_ftp_download(self.host, port)

Note that to be on the safe side, the regular expression accepts negative integers, and the port number calculation only uses eight bits from each integer.

The async_ftp_download class is another asynchronous socket class. Here's a simple implementation that simply prints all incoming data to standard output:

import asyncore, socket, sys

class async_ftp_download(asyncore.dispatcher):

    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, port))

    def writable(self):
        return 0

    def handle_connect(self):
        pass

    def handle_expt(self):
        self.close()

    def handle_read(self):
        sys.stdout.write(self.recv(8192))

    def handle_close(self):
        self.close()

The last piece of the puzzle is to make sure that the ftp_handle_pasv_response method is called at the right time. The first step is to change the command list, to make sure we send PASV followed by a LIST command:

self.commands = ["PASV", "LIST", "QUIT"]

If you run this, the client will hang after the LIST command. Or rather, it's the server that hangs, waiting for the client to connect to the given port. To fix this, let's add an optional handler to the command list, and change the send code to look for an optional response handler:

class anon_ftp(asynchat.async_chat):

    def __init__(self, host):
        ...
        self.commands = [
            "PASV", self.ftp_handle_pasv_response,
            "LIST",
            "QUIT"
        ]
        ...

    def found_terminator(self):
        ...
        # send next command from queue
        try:
            command = self.commands.pop(0)
            if self.commands and callable(self.commands[0]):
                self.handler = self.commands.pop(0)
            print "C:", command
            self.push(command + "\r\n")
        except IndexError:
            pass

If you put all the pieces together and run the script, you'll get something like this:

S: 220 ProFTPD 1.2.4 Server ()
S: 331 Anonymous login ok, send your complete email address as your password.
S: 230 Anonymous access granted, restrictions apply.
C: PASV
S: 227 Entering Passive Mode (194,109,137,227,8,20).
C: LIST
S: 150 Opening ASCII mode data connection for file list
C: QUIT
drwxrwxr-x   4 webmaster webmaster     512 Oct 12  2001 pub
S: 226 Transfer complete.
S: 221 Goodbye.

In this case, the directory listing contains a single directory, called pub. Note that this directory listing looks like the output from the Unix ls command. Unfortunately, the FTP standard doesn't specify what format to use; servers can use any format they want, hoping that a human reader will be able to figure something out. In practice, though, most contemporary servers use the Unix format.

The following snippet can be used to "parse" the output line. It's far from bulletproof (e.g. what happens if a filename contains a space?), but it's better than nothing. (A slightly more defensive sketch appears at the end of this article.)

parts = line.split()
if len(parts) > 5:
    directory = parts[0].startswith("d")
    size = int(parts[4]) # size column in standard "ls -l" output
    filename = parts[-1]

To be continued…

In the next article, we'll look at how to move around between directories on the server, and how to download data from the server. Stay tuned.

Send questions and comments to [email protected].
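As promised above, here is that slightly more defensive listing parser. This is a hedged sketch, not code from the article itself: it still assumes Unix-style "ls -l" output, and the splitting trick simply keeps everything after the eighth field together so that filenames containing spaces survive:

def parse_list_line(line):
    # split into at most 9 fields; the 9th keeps embedded spaces
    parts = line.split(None, 8)
    if len(parts) < 9:
        return None # not a regular listing line
    directory = parts[0].startswith("d")
    try:
        size = int(parts[4]) # size column in "ls -l" output
    except ValueError:
        return None # unexpected format
    filename = parts[8]
    return directory, size, filename

print parse_list_line(
    "drwxrwxr-x   4 webmaster webmaster     512 Oct 12  2001 pub"
)
# prints (True, 512, 'pub')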
UBports Documentation
Release 1.0
Marius Gripsgard
Oct 09, 2017

Contents:
1. About UBports
2. Install Ubuntu Touch
3. Daily use
4. Advanced use
5. Contributing to UBports
6. App development

Note: This documentation is currently in a quite volatile state, so don't be alarmed if pages are shuffled around from the last time you were here! If you want to help improve the docs, this will get you started.

Chapter 1: About UBports

Introduction

This is the documentation for the UBports project. Our goal is to create an open-source (GPL if possible) mobile operating system that converges and respects your freedom.

About UBports

In April of 2017, UBports and its sister projects began work on the open-source code, maintaining and expanding its possibilities for the future.

About the Documentation

This documentation is always improving thanks to the members of the UBports community. It is written in ReStructuredText and converted into this readable form by Sphinx, ReCommonMark, and Read the Docs. You can start contributing by checking out the Documentation intro. All documents are licensed under the Creative Commons Attribution ShareAlike 4.0 (CC-BY-SA 4.0) license. Please give attribution to "The UBports Community".

Attribution

This page was heavily modeled after the Godot Engine's Documentation Introduction; attribution to Juan Linietsky, Ariel Manzur and the Godot community.

Chapter 2: Install Ubuntu Touch

There are many ways to install Ubuntu Touch on your supported device. To check if your device is supported, check this page.

Back up your data

Your data on your phone is important, and you don't need to lose it in the upgrade. If you're already using Ubuntu Touch on your phone and any distro that supports snaps on your PC, use the magic-device-tool to back up your device.

Non-Canonical devices

These instructions will help you install our OS on the "Core Devices" such as the Nexus 5 or Fairphone 2.

Switch from Android to Ubuntu Touch

• On any Linux distro with Snaps: Use the magic-device-tool. Please read the instructions carefully!
• On Windows or MacOS (beta!): Use the UBports GUI installer

Official "Ubuntu for Devices" devices

These instructions will help you install to a device that ran an official Canonical build of Ubuntu for Devices, such as the BQ M10 or Meizu MX4.

Switch from Canonical builds to UBports builds

• On any Linux distro with Snaps: Use the magic-device-tool. Please read the instructions carefully!
• On Windows or MacOS (beta!): Use the UBports GUI installer

Switch from Android to Ubuntu

BE VERY CAREFUL! This can permanently damage or brick your device. NEVER check the "Format All" option in SP Flash Tool, and carefully read everything that it tells you. Some users have destroyed the partition that holds their hardware IDs and can no longer connect to Wi-Fi or cellular networks.

• BQ devices: Download the official Ubuntu Edition firmware from here and use SP Flash Tool to flash it.
• Meizu devices: You are pretty much stuck on Flyme. For the MX4, there are some instructions floating around for downgrading your OS, gaining root with an exploit, unlocking your bootloader, and so on. We aren't going to link to them here for obvious reasons. The Pro5 is Exynos-based and has its own headaches. You're even more at your own risk on these.

We are being vague with these instructions on purpose.
While we appreciate that lots of people want to use our OS, flashing a device with OEM tools shouldn't be done without a bit of know-how and plenty of research. People have destroyed their phones.

Chapter 3: Daily use

This section of the documentation details common tasks that users may want to perform while using their Ubuntu Touch device.

Run desktop applications

Libertine allows you to use standard desktop applications in Ubuntu Touch. To display and launch applications you need the Desktop Apps Scope, which is available in the Canonical App Store. To install applications you need to use the command line, as described below.

Manage containers

Create a container

The first step is to create a container where applications can be installed:

libertine-container-manager create -i CONTAINER-IDENTIFIER

You can add extra options such as:

• -n name: a more user-friendly name for the container
• -t type: either chroot or lxc. The default is chroot, which is compatible with every device. If the kernel of your device supports it, you can use lxc instead.

List containers

To list all containers created, run:

libertine-container-manager list

Destroy a container

libertine-container-manager destroy -i CONTAINER-IDENTIFIER

Manage applications

Once a container is set up, you can list the installed applications:

libertine-container-manager list-apps

Install a package:

libertine-container-manager install-package -p PACKAGE-NAME

Remove a package:

libertine-container-manager remove-package -p PACKAGE-NAME

Note: If you have more than one container, you can use the option -i CONTAINER-IDENTIFIER to specify which container you want to perform an operation on.

Files

Libertine applications have access to these folders:

• Documents
• Music
• Pictures
• Downloads
• Videos

Tips

Locations

For every container you create there will be two directories created:

• a root directory, ~/.cache/libertine-container/CONTAINER-IDENTIFIER/rootfs/, and
• a user directory, ~/.local/share/libertine-container/user-data/CONTAINER-IDENTIFIER/

Shell access

To execute any arbitrary command as root inside the container, run:

libertine-container-manager exec -c COMMAND

For example, to get a shell into your container you can run:

libertine-container-manager exec -c /bin/bash

To get a shell as the user phablet, run:

DISPLAY= libertine-launch -i CONTAINER-IDENTIFIER /bin/bash

Chapter 4: Advanced use

Shell access via adb

You can put your UBports device into developer mode and access a Bash shell from your PC. This is useful for debugging or more advanced shell usage. First, install ADB.

Enable developer mode

Next, you'll need to turn on Developer Mode:

1. Reboot your device
2. Place your device into developer mode (Settings - About - Developer Mode - check the box to turn it on)
3. Plug the device into a computer with adb installed
4. Run adb shell in a terminal to open a shell on the device

Shell access via ssh

You can use ssh to access a shell from your PC. This is useful for debugging or more advanced shell usage. You need an ssh key pair for this; logging in via password is disabled by default.

Copy the public key to your device

First you need to transfer your public key to your device. There are multiple ways to do this. For example:

• Connect the UBports device and the PC with a USB cable. Then copy the file using your file manager.
• Or transfer the key via the internet by mailing it to yourself, or uploading it to your own cloud storage, webserver, etc.
• You can also connect via adb and use the following command to copy it:

adb push ~/.ssh/id_rsa.pub /home/phablet/

Configure your device

Start the ssh service:

service ssh start

To make sure the ssh server is automatically started in the future, execute:

sudo setprop persist.service.ssh true

Connect

Now everything is set up and you can use ssh:

ssh phablet@<ip-address>

Of course you can now also use scp or sshfs to transfer files.

References

• askubuntu.com: How can I access my Ubuntu phone over ssh?
• gurucubano: BQ Aquaris E 4.5 Ubuntu phone: How to get SSH access to the ubuntu-phone via Wifi

Switch release channels

Chapter 5: Contributing to UBports

Bug reporting

This page contains information to help you help us by reporting an actionable bug for Ubuntu Touch. It does NOT contain information on reporting bugs in apps; most of the time their entry in the OpenStore will specify where and how to do that.

Get the latest Ubuntu Touch

Open up the bug tracker for ubports. If the report is missing any of the information specified later in this document, please add it yourself to help the developers fix the bug.

Reproduce the issue you've found

Next, find out exactly how to recreate the bug that you've found. Document the exact steps that you took to find the problem in detail. Then, reboot your phone and perform those steps again. If the problem still occurs, continue on to the next step. If not...

Getting Logs

We appreciate as many good logs as we can get when you report a bug. In general, /var/log/dmesg and the output of /android/system/bin/logcat are helpful when resolving an issue. I'll show you how to get these logs. To get ready, follow the steps to set up ADB. Now, you can get the two most important logs.

dmesg

1. Using the steps you documented earlier, reproduce the issue you're reporting
2. cd to a folder where you're able to write the log
3. Delete the file UTdmesg.txt if it exists
4. Run the command: adb shell "dmesg" > "UTdmesg.txt"

This log should now be located at UTdmesg.txt under your working directory, ready for uploading later.

logcat

1. Using the steps you documented earlier, reproduce the issue you're reporting
2. cd to a folder where you're able to write the log
3. Delete the file UTlogcat.txt if it exists
4. Run the command: adb shell "/android/system/bin/logcat -d" > "UTlogcat.txt"

This log will be located at UTlogcat.txt in your current working directory, so you'll be able to upload it later.

Making the bug report

Now it's time for what you've been waiting for: the bug report itself. It should include:

• What happened: A synopsis of the erroneous behavior
• What I expected to happen: A synopsis of what should have happened, if there wasn't an error
• Steps to reproduce: You wrote these down earlier, right?
• Logs: Attach your logs by clicking and dragging them into your GitHub issue.
• Software Version: Go to (Settings - About) and list what appears on the "OS" line of this screen. Also include the release channel that you used when you installed Ubuntu on this phone.

Once you're finished with that, post the bug.
You can't add labels yourself, so please don't forget to state the device you're experiencing the issue on in the description, so a moderator can easily add the correct tags later. A developer or triager will confirm and triage your bug, then work can begin on it. If you are missing any information, you will be asked for it, so make sure to check in often!

Documentation

Tip: Documentation on this site is written in ReStructuredText, or RST for short. Please check the RST Primer if you are not familiar with RST.

This page will guide you through writing great documentation for the UBports project that can be featured on this site.

Documentation guidelines

These rules govern how you should write your documentation to avoid problems with style, format, or linking. If you don't follow these guidelines, we will not accept your document.

Title

All pages must have a document title, underlined with equals signs. This title is shown in the table of contents (to the left) and at the top of the page. Titles should be sentence cased rather than Title Cased. For example:

Incorrect casing: Writing A Good Bug Report
Correct casing: Writing a good bug report
Correct casing when proper nouns are involved: Installing Ubuntu Touch on your phone

There isn't a single definition of title casing that everyone follows, but sentence casing is easy. This helps keep capitalization in the table of contents consistent.

Reference

References create a permanent link. One should always appear as the first line of your document. For example, take a look at this document's first three lines:

.. _contribute-doc-index:

Documentation
=============

The reference name can be called in another document to easily link to a page:

For example, check out the :ref:`Documentation intro <contribute-doc-index>`

This will create a link to this page that won't change if this page changes directories in a reorganization later. Your reference should follow the naming scheme part-section-title. This document, for example, is the index of the Documentation (doc) section in the Contribute part of the documentation.

Table of contents

People can't navigate to your new page if they can't find it. Neither can Sphinx. That's why you need to add new pages to Sphinx's table of contents. You can do this by adding the page to the index.rst file in the same directory that you created it. For example, if you create a file called "newpage.rst", you would add the line marked with a chevron (>) in the nearest index:

.. toctree::
   :maxdepth: 1
   :name: example-toc

   oldpage
   anotheroldpage
>  newpage

The order matters. If you would like your page to appear in a certain place in the table of contents, place it there. In the previous example, newpage would be added to the end of this table of contents.

Contribution workflow

Note: You will need a GitHub account to complete these steps. If you do not have one, click here to begin the process of making an account.

Directly on GitHub

Read the Docs and GitHub make it fairly simple to contribute to this documentation. This section will show you the basic workflow to get started by editing an existing page on GitHub:

1. Find the page you would like to edit
2. Click the "Edit on GitHub" link to the right of the title
3. Make your changes to the document. Remember to write in ReStructuredText!
4. Propose your changes as a Pull Request.
If there are any errors with your proposed changes, the documentation team will ask you to make some changes and resubmit. This is as simple as editing the file on GitHub from your fork of the repository.

Manually forking the repository

You can make more advanced edits to our documentation by forking ubports/docs.ubports.com on GitHub. If you're not sure how to do this, check out the excellent GitHub guide on forking projects.

Building this documentation locally

If you'd like to build this documentation before sending a PR (which you should), follow these instructions on your local copy of your fork of the repository.

Note: You must have pip installed before following these instructions. On Ubuntu, install the pip package by running sudo apt install python-pip. This page has instructions for installing pip on other operating systems and distros.

1. Install the Read the Docs theme and ReCommonMark (for Markdown parsing):

pip install sphinx sphinx_rtd_theme recommonmark

2. Change into the docs.ubports.com directory:

cd path/to/docs.ubports.com

3. Build the documentation:

python -m sphinx . _build

This tells Sphinx to build the documentation found in the current directory, and put it all into _build. There will be a couple of warnings about README.md and a nonexistent static path. Watch out for warnings about anything else, though; they could mean something has gone wrong. If all went well, you can enter the _build directory and double-click on index.html to view the UBports documentation.

Translations

Although English is the official base language for all UBports projects, we believe you have the right to use it in any language you want. We are working hard to meet that goal, and you can help as well. There are two levels for this:

• A casual approach, as a translator volunteer.
• A fully committed approach as a UBports Member, filling in this application.

Tools for Translation

For everyone: a web-based translation tool called Weblate. This is the recommended way.

For advanced users: working directly on .po files with the editor of your choice, and a GitHub account. The .po files for each project are in their repository on our GitHub organization.

There is also a Translation Forum to discuss translating Ubuntu Touch and its core apps.

How-To

UBports Weblate:

• Register using a valid email address, a username, and your full name. You'll need to resolve an easy control question too.
• You decide how much time you can put into translation. From minutes to hours, everything counts.

.po file editor

As was said above, you need a file editor of your choice and a GitHub account to translate .po files directly. There are online gettext .po editors, and editors you can install on your computer. You can choose whatever editor you want, but we prefer to work with free software only. There are too many plain text editors and tools to help you translate .po files to put down a list here. If you want to work with .po files directly, you know what you're doing for sure.

Translation Team Communication

The straightforward and recommended way is to use the Translation Forum. Just for your information, some projects are using Telegram groups too, and some teams are still using the Ubuntu Launchpad framework. In your interactions with your team you'll find the best way to coordinate your translations.
License

All the translation projects, and all your contributions to this project, are under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license that you explicitly accept by contributing to the project. Go to that link to learn what exactly this means.

Chapter 6: App development

Make the next generation of apps

Welcome to an open source and free platform under constant scrutiny and improvement by a vibrant global community, whose energy, connectedness, talent and commitment are unmatched. Ubuntu is also the third most deployed desktop OS in the world.

Getting started

Here you can install everything needed to get developing apps for Ubuntu:

1. Start by installing the Ubuntu SDK.
2. Learn about the Ubuntu development model and the frameworks you can target.

Pick your language

For the UI, you can choose either QML or HTML5 to write Ubuntu apps. For the logic, JavaScript, Qt and other languages such as Python or Go can power refined QML UIs. Note: for starters, we recommend QML and JavaScript, which are the languages used in most tutorials.

Write your first app

Design

Together we can design and build beautiful and usable apps for Ubuntu.

Get started: Familiarise yourself with the essentials before designing your app. Design values ›

Style (coming soon): Make your app look beautiful by using the uniquely designed Ubuntu fonts and colours.

Patterns: Use common patterns to allow users to get where they want to naturally and with little effort. Gestures ›

Building blocks: See use cases and advice to get the best out of the Ubuntu toolkit. Use the header ›

System integration (coming soon): See how your app can integrate with the Ubuntu shell.

Resources (coming soon): Download handy templates and the Ubuntu color palette to help you on your way.

Start building your app!

The toolkit contains all the important components you need to make your own unique Ubuntu experience. Follow the link below for all the API and developer documentation. Ubuntu SDK ›

Release phases

The new App Guide will be released in phases over the coming days and weeks:

• Phase 1 – Get started and Building blocks
• Phase 2 – Patterns
• Phase 3 – System integration
• Phase 4 – Resources and Style

See the Insights blog for more updates. Or follow us on Google+ and see the Canonical Design blog for all the latest news and designs.

Get started overview

Understand the Ubuntu design values and how to achieve a seamless experience.

Convergence: See how convergence is achieved to provide a seamless experience across all devices. Mapping interactions ›

Design values: Understand the Ubuntu design values and how they can be applied to your designs. Focus on content ›

Why design for Ubuntu? Discover how your designs can be part of a thriving community.
Get involved ›

Make it Ubuntu: Apply Ubuntu's key components and patterns to achieve a great user experience inside your app. Use the bottom edge ›

Convergence

Use one operating system for all devices to provide familiar experiences from phone to tablet to desktop, and back again.

• What is convergence? ›
• Why are we doing it? ›
• How are we doing it? ›
• See for yourself ›

Why are we doing it?

Over the last twenty years computing has become exponentially faster, cheaper and more power efficient. As a result, phones and tablets today have the processing power to undertake tasks that only a few years ago required PC hardware. The boundaries between form factors are becoming blurred; there is very little difference in terms of hardware between an ultrabook with a touchscreen and a 12in tablet with a keyboard attached. By using convergence we break down the last barrier between form factors with a single operating system and app ecosystem for all different types of hardware. This enables new forms of interaction: for example, drafting an email on your phone during your journey to work, and then, when you arrive at your desk, plugging the phone into a monitor and continuing to compose the same email in a desktop environment.

How are we doing it?

In 2013, Ubuntu announced a crowdfunding effort to build a flagship device called the Ubuntu Edge. It was to be a next-generation smartphone that also worked as a full desktop PC. Although the device was never realized, the vision of a convergent operating system that shifts seamlessly from smartphone to desktop is still alive and well.

Responsiveness and consistency

When designing across different sized devices you have to bear in mind how an app will adapt to having more or less real estate when presented on a small, medium or large screen. Where possible, place panels together to take full advantage of additional screen real estate on different devices, in order to create a consistent and proportionate design that makes use of the available space.

Dekko app

The Dekko app responds to more real estate and keeps its look and feel from mobile to tablet to desktop.

Adaptive layouts

Applications live in windows (in a windowed environment) or surfaces (in a non-windowed environment). Application layouts change in a responsive manner depending on the size of their window or surface. One common method of creating a responsive layout is to use panels. In a small window or surface, only a single panel needs to be displayed. The user can navigate through the panels by tapping on items or going back. When the window or surface size gets larger, the application can switch to displaying two or more panels side by side, thus reducing the amount of navigational actions the user needs to undertake. Typical examples of this are applications like contacts, messages, and email. Of course, there can be any number of combinations of panels depending on the specific app's needs.

The AdaptivePageLayout API component eliminates guesswork for developers when adapting from one form factor to another.
It works by tracking an infinite number of virtual columns that may be displayed on a screen at once. For example, an app will automatically switch between a 1-panel and 2-panel layout when the user changes the size of the window or surface, by dragging the app from the main stage to the side stage.

Changing the size of the window or surface resizes one or more joined panels. Typically, the right-most panel resizes and the left-most panel maintains its original dimensions. The dimensions of the right-most panel will normally be 40 grid units or 50 grid units, though this panel may itself be resizable depending on the developer's requirements.

How it works

The developer will be able to specify where panels should go and the breakpoints at which they can expand. The adaptive layout will automatically place them.

Minimal changes to functionality

For a consistent and familiar user experience, the SDK maps touch, pointer, and keyboard (focus) interactions to every function.

Context menus

Using touch, a user can swipe or long-press on a list item to reveal a contextual menu. Using a pointer (mouse or trackpad), a user can right-click the item to reveal the contextual menu. Using a keyboard, a user can focus the desired item and press the MENU key to open the context menu. This is a great example of how each SDK component supports all input types equally and simultaneously.

All the components in the toolkit adapt to a convergent environment. See how the header converges to provide more room for actions within different surfaces.

See for yourself

Ubuntu devices are shipped with built-in apps that converge over multiple devices, such as Dekko, Calendar, Contacts and Music. They all work in the same way on your phone, tablet and desktop, giving you a seamless experience across all devices.

Design values

This guide is intended to help designers and developers create unique and valuable user experiences.

• All input types supported equally ›
• Fast and effortless interactions ›
• Action placement ›
• Meaning in colors ›
• Focus on content ›

All input types supported equally

In order to achieve convergence, the toolkit has adapted all components to work seamlessly across all devices with minimal changes to functionality and visual appearance. This means that touch, pointer and focus interactions are now mapped to perform similar functions across different devices for a consistent and familiar user experience. No matter what the input method, the UI will automatically respond to the user's interaction as they expect.

Use case

For more details on how a seamless experience can be achieved in your app, see Convergence.

Fast and effortless interactions

Allow users to move through your app with minimum effort, in ways that are both natural and logical to them.

Bottom edge

The bottom edge allows for a natural progressive swipe from the bottom of the screen. It can be opened by touch, by clicking on the bottom edge tab with a pointer, or by pressing Return when the bottom edge tab is focused using keyboard navigation.

Task switcher

The task switcher allows the user to easily switch between apps or scopes using a right edge swipe, by pushing the pointer against the right edge of the screen, or by pressing SUPER+W.
Action placement

Throughout the Ubuntu platform, positive actions, such as OK, Yes and Accept, are placed on the right, and negative actions, such as Delete and Cancel, are placed on the left. The position of positive and negative actions is important to consider when designing your app, because it can reinforce behavior when used in a consistent way.

Negative swipes left

The user swipes left to right to reveal a red deletion option when editing a contact.

Positive swipes right

The user swipes right to left to reveal contextual options, such as information and messaging.

Users can access the same actions with a pointer or keyboard by pressing the right mouse button or MENU key to open a context menu.

Meaning in colors

The Suru design language associates meanings with certain colours to help the user distinguish between actions. Most color blind people have difficulty distinguishing red from green, so don't use color in isolation; instead, combine colors with additional visual cues (e.g. text labels, button position and style). Think about how colors complement each other and how they can create a harmony that is pleasing on the eye.

Green

Positive actions, such as OK, new, add or call.

Red

Negative and destructive actions, such as delete or block contact.

Blue

Blue is an informative colour; it is neither positive nor negative. Use blue for selected activity states. It works with all other elements, on both dark and light backgrounds, and stands out clearly and precisely when used in combination with a focus state.

For more information on how color is used across the platform, see Color palette (coming soon).

Focus on content

Too much user interface can interfere with content, but too little can make your app difficult to use. By focusing clearly on content, many pitfalls can be avoided.

Make it easy to find content

Allow users to access content easily through navigational methods by using the most appropriate components.

Do: The header can provide quick access to important actions and navigational options at the top of the screen or window.

Don't: Drawers have low discoverability and can hide important views from the user. Consider using the header or header section instead.

Design philosophy

The Ubuntu interface has been designed according to a philosophy called Suru.

• Suru meaning ›
• Translated in design ›
• Suru mood board ›

See how the Suru visual language is integrated into the new Xerus 16.04 wallpaper here.

Suru meaning

Here at Ubuntu we believe that everyone should have access to free, reliable, and trusted software that can be shared and developed by anyone. This has paved the way to allow the community to grow and prosper with freedom, trust and collaboration. This integral belief has been translated into our design philosophy, called Suru.

• Suru stems from the Ubuntu brand values, alluding to Japanese culture.
• The design of Suru is inspired by origami, because it gives us a solid and tangible foundation.
• Paper can be used in all areas of the brand in two and three dimensional forms, because it is transferable and diverse.

Translated in design

Suru brings a precise yet organic structure to the Ubuntu interface. The sharp lines and varying levels of transparency evoke the edges and texture of paper. All elements are placed deliberately, with the express aim of being easy for the user to identify and use. When using a small layout, the information and functionality is folded into a compact object that can be refolded to expose different areas. As the layout size increases, the object can become progressively larger, allowing more of the information and functionality to be exposed at any one time.

Origami

Origami has long been associated with good fortune and represents the visual style for the Ubuntu Phone. Origami folds are used to define the design.

Simple details

What is most important is that screen layouts retain a natural, rhythmic quality, and a neatness and clarity that helps the user find things quickly and use them intuitively.

Using subtle grids for accuracy

The folds from origami produce simple graphical details that allow designers to create a subtle grid for positioning brand elements and components, such as logos, icons or copylines. This helps maintain focus on the main image or graphic element.

Suru mood board

Influences and inspiration.

Why design for Ubuntu?

Design an app that will be part of a growing new ecosystem powered by a thriving community:

• Your app will be part of the third most deployed desktop OS in the world, which is free and accessible to all
• Your app will be able to work seamlessly across all Ubuntu client platforms (desktop, phone, tablet)
• The list of Ubuntu App Platform APIs is ever expanding, integrating all Ubuntu apps seamlessly into the Unity shell and user experience, whatever the app's toolkit and coding language

Contribute to design ›

Developer

Write and package new software or fix bugs in existing software. Write apps in QML or HTML5. Write apps for Ubuntu ›

Documentation

Help produce official documentation, share the solution to a problem, or check, proof and test other documents for accuracy. Improve and assist with documentation ›

Make it Ubuntu

Consider the following features to create a truly unique and beautiful Ubuntu experience within your app:

1. Create a consistent look across your app. Simply place your designs over the layout by using the grid units to help you arrange your content in a more readable, manageable way. Layouts ›

2. Create a unique experience with the bottom edge. The bottom edge provides a more accessible way to obtain content or actions within your app. Use it to create something special. Bottom edge ›

3. Surface the most important features inside your app. Let the user know where they are, what they can do and where they can go by using the Ubuntu header.
Ubuntu header ›

4. Make your app beautiful with our Ubuntu fonts and icon designs. The uniquely stylish Ubuntu font influences UI elements and icons, making them distinctive and consistent.

Building blocks overview

Start creating your app with components from the UI toolkit for the best user experience.

Header: Use the header for placing actions and navigational options inside your app. Use the header ›

Bottom edge: Learn how you can create something special from the bottom of the screen. Inspirational patterns ›

List items: Find recommendations for list item layouts and what type of actions a list item can contain. List items ›

Selection controls: See the different components that can be used for selecting and controlling inside a form. Use checkboxes ›

Header

Use the header to let the user know where they are, what they can do, and where they can go inside your application.

• Usage ›
• Slots ›
• Toolbar ›
• Edit mode ›
• Responsive layout ›
• Header appearance ›
• Header section ›
• Best practices ›

The Header API includes the exposed, flickable and moving properties of the header.

Usage

The header area can contain the main navigation options and actions inside your app. It is used to enhance the user experience in specific device layouts.

When should I use a header?

• If your app has multiple sections, use a header
• If your app performs an action that requires the full screen, such as a camera, don't use a header

Multiple panels may appear when the surface or window increases in size. When this happens, each panel can contain its own header. For example, on a mobile surface, one panel is present at a time as the pages are stacked on top of each other in a hierarchical order. However, when translated onto a medium to large surface, the panels become adjacent to each other and will contain their own headers, while still remaining in a hierarchical order.

• Navigational options on the left: The navigation area can include a back button, a title, a subtitle, or a navigation drawer for when there is no room to fit all buttons for major views.
• Actions on the right: The action area can include actions such as settings, search, views, or an action drawer for when there's no room to place further actions.

Don't use a navigation drawer and an action drawer at the same time, because users are unlikely to distinguish between them.

Slots

The header contains a number of slots that can hold actions or navigational options. Depending on the surface or window size, additional slots can be added to show the actions otherwise hidden in drawers. Think about the most important actions and views you want the user to perform, and make them easy to find by using the header.

For smaller surfaces, such as on mobile, the SDK provides a maximum of four slots per header that can be arranged in two ways.
Slot arrangement

Slots can be arranged in a variety of ways to surface actions and navigational options to best suit the user experience of your application.

Slot A
• First position on the left hand side
• When slot A is not needed, slot B should move to this position
• A navigation drawer can display all main views in an application

Slot B
• Mandatory title of your app or view, only one line
• An optional subtitle can sit below the title, which can be two lines

Slot C
Slot C can have any action inside it, such as an 'Add new contact' or a 'Call' action.

Settings
If you are using Slot C for Settings, then it should always be positioned last.

Search
If you are using Slot C to place a Search icon, or any other action, then place it to the right of the title.

Action drawer
An action drawer can be used when no other slots are available to show further actions. However, when your app is on a larger surface, like on a desktop, the actions will appear in the slots.

Responsive layout

As the header gains width across screen sizes, additional slots become visible and actions in the drawer will appear automatically. Slot arrangements scale from 3-slot layouts up to 6-slot layouts.

Medium to large screens

The maximum number of visible action slots in a convergent environment is six. If this is exceeded, additional actions will migrate to the action drawer. If your header has no more slots for actions, then everything after Slot D goes into Slot E inside an action drawer.

Search inside the header

You can use search within the main header to filter the currently displayed content, or as a global search.

Multi-panel layout

Search can appear in both panels when two or more headers are present. For example, in a mail client you may want a filter for your inbox in the first panel, and a search in the second panel to find a recipient. Avoid placing search in both panels unless necessary, because it could confuse the user as to what content is being filtered. For example, they may type in the wrong field to search for a specific query if it isn't in a hierarchical order. For more information on search in the header, see Navigation (coming soon).

Toolbar

The toolbar is an additional component that can be used to hold actions.
The Toolbar API allows you to determine the actions or options you want to display in the toolbar.

Edit mode

Edit mode allows users to modify a particular item or multiple items at once. Users can enter edit mode by directly interacting with a list item, title or card, or through an action inside the header.

When should I use edit mode?

Use a separate edit mode if making the information editable all the time would substantially interfere with viewing, copying, or other tasks. For example, in the Notes app, if a note were editable all the time then the OSK would take up valuable reading space, and hyperlinks in notes would be hard to click or tap.

A toolbar can be used below the header to provide additional actions associated with editing. When editing content, the actions that appear inside the main header and toolbar are relevant to an edit state, allowing the user to perform tasks on the content, such as select, rearrange or delete.

Use cases

Actions in the header: picking and editing content

If a primary action of your app is to allow users to select and move content in a list, such as a list of contacts, then surface the editing action inside the main header. Once the user has initiated the editing action, the toolbar will appear below the header with the associated editing actions for the content. If you only use one text button, place it on the left hand side, because it will be easier for the user to reach with one gesture. The toolbar can contain additional actions other than editing ones, such as 'Share' or 'Forward'.

Edit mode in a multi-panel layout

Edit mode can be triggered through an action in the header, or by right-clicking or long-pressing to open the contextual menu. An activated edit mode must always apply to the panel view it is triggered in; it should not affect any other panels. If you need a delete icon, place it on the left of the toolbar. If the content you are editing needs to be saved, use two text buttons instead, such as 'Cancel' and 'Save'. Place negative actions on the left and positive actions on the right in the main header for consistency across the platform. See Design values for more information.

Toolbar placement

The toolbar appears below the main header when edit mode is initiated:

1. Main header
2. Toolbar

Header appearance

You can decide how you want the header to appear in four ways: Fixed, Fixed and Opaque, Fixed and Transparent, and Hidden. When a header is displayed in a larger surface or a window, such as on a desktop, it will be fixed, because there will be more room to display content.

Fixed (default)

A fixed header will appear at all times until the user starts to scroll down within your app's content. Having a fixed header can be useful if you have a few sections or actions that need to be accessible even when the user scrolls. For instance, in a photo editing app the user may want the editing tools to be fixed in the header for easier access. If your app displays a header section below the main header, it will follow the defined behavior of the main header.

The header can be brought back into view by:

• scrolling up on the content
• tapping or interacting with the content.
Fixed and transparent

The header will be available at all times and have a transparency of 80-90%. This type of header can be useful if you don't want it to be the focus of attention, but still available if the user wishes to have quicker access to a view or action.

Multi-panel layout

If your app is presented in a multi-panel layout, the headers that appear in each panel will remain fixed and always visible when scrolling.

Overwritten fixed header

If you choose to overwrite the default header, it should:

• react with its associated panel
• not affect other panels.

Hidden (overlay)

The header is not visible to the user. This type of header is useful for full-screen applications, such as the Camera app, and for displaying more content on a single screen.

Apps without a header

If you choose not to have a header, think about how users will navigate through your UI in a different way. For example, the Clock app has a customized header and uses icons at the top of the screen to take the user to different modes of the app.

Header section

The header section allows users to easily shift between category views within the same page. It has the same visibility as the main header. For example, if the header is set to default, it will slide away with the sections when the user scrolls down.

The Section API displays a list of sections that the user can select. It is strongly recommended to limit the number of sections to two or three, to avoid a cluttered looking header.

Dekko app

For example, if your app was presenting an inbox of emails, from 'All', the sub-sections could display 'Recent' and 'Archive' to further filter the content. More sections on the screen can be made visible by swiping right.

When a mouse is attached

More tabs are indicated by an arrow, revealed when the user interacts with the header section using a mouse.

1. The main header is a separate component that can hold actions and navigational options
2. The header section sits below the main header and allows for sub-navigation or filtering within the screen, which is indicated by the header above. One option is always selected.

Best practices

Header section

Do: Make your sections clear and concise.
Don't: The header section can look cluttered if you make the titles too big.

Actions

Allow users quick access to the most important actions by placing them inside the header. For example, in the Contacts app, 'Call' and 'Add Contact' are available in the header to give quick access to the Dialler and Address book.

Bottom edge

Create something special with a unique bottom edge that belongs to your app, from the bottom of the screen. It gives quick access to new content.

• Overview ›
• Use cases ›
• Hints ›

Hint: The BottomEdge API provides bottom edge content handling.
Actions
Allow users quick access to the most important actions by placing them inside the header. For example, in the Contacts app, 'Call' and 'Add Contact' are available in the header to give quick access to the Dialler and Address book.

Bottom edge
Create something special with a unique bottom edge that belongs to your app, reached from the bottom of the screen.
• Overview ›
• Use cases ›
• Hints ›

Hint: The BottomEdge API provides bottom edge content handling. See also the BottomEdgeHint API, which displays a label or an icon, or both, at the bottom of the component it is attached to.

Overview
The bottom edge allows for a very natural transition through a progressive gesture from the bottom of the screen. The gesture should take logical steps to reach a point of interest for the user. It can provide access to a view via the page stack, to important actions, or to app settings and features.

Tip: You can create your own customised bottom edge and add different content depending on the context of your app. See 'Loving the bottom edge' for more information.

Use cases
The bottom edge can be used to give access to the most important features inside your app.
Is your app often used to create new content? Use the bottom edge to quickly create or draft new content, such as composing a new email or text message.
Does your app need access to a commonly used feature that needs a separate view? Use the bottom edge to give the user quick access to an app setting or feature, such as setting a new alarm in the Clock app.
Does your app allow the user to add information in a form? Use the bottom edge to provide quick access to a form, such as adding a new contact or creating a new account.
Does your app allow users to access more views? You can use the bottom edge to reveal all views or tabs currently open, to allow the user to switch between them easily and quickly. For example, the bottom edge in the Browser app reveals all the tabs the user has open.

Hints
The toolkit provides a hint that consists of two elements: Hint 1 and Hint 2. The hint is used to let the user know that there is something worth trying at the bottom of the screen.
Hint 1: When your application is launched for the first time, the user will see a floating icon, known as Hint 1.
Hint 2: After the user has interacted with Hint 1, the hint will morph to become Hint 2, which contains a label, an icon, or a combination of the two. Using a label with an icon gives the user more detail about the content it will show.

Hint labels
It is important that your hint label is concise and clear, to avoid confusing the user.

Step 1. Unfolding
Hint 1 is visible when the user first interacts with your app. By short swiping from Hint 1, Hint 2 starts to replace Hint 1 and then becomes fully visible.
Step 2. Collapsing
Hint 2 is now fully visible; however, if the user doesn't interact with the content or screen for a period of time, Hint 1 will automatically fade in and replace Hint 2.

Hiding the hint
You can choose to have the bottom edge hint hidden from view when the user scrolls the content on the screen. This works well for apps that need the whole screen, such as the Camera app, because the primary goal is to take a picture.
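A minimal sketch of a bottom edge with a hint, assuming the BottomEdge and BottomEdgeHint behaviour from Ubuntu.Components 1.3; the label, icon name and content page are illustrative:

import QtQuick 2.4
import Ubuntu.Components 1.3

Page {
    id: page
    header: PageHeader { title: i18n.tr("Messages") }

    BottomEdge {
        id: bottomEdge
        height: page.height
        // The hint starts as a floating icon (Hint 1) and morphs into the
        // labelled form (Hint 2) once the user interacts with it.
        hint.text: i18n.tr("New message")
        hint.iconName: "compose"
        // The content revealed by the progressive swipe from the bottom.
        contentComponent: Page {
            width: bottomEdge.width
            height: bottomEdge.height
            header: PageHeader { title: i18n.tr("Compose") }
        }
    }
}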
List items
List items can be used to make up a list of ordered, scrollable items that are related to each other, such as a list of emails.
• Overview ›
• Contextual actions for list items ›
• Lists in edit mode ›
• Structure ›
• Actions ›
• Communicating feedback ›
• List item layouts ›

Note: See the ListItemLayout API, which provides customisable templates, and the ListItem API, which provides swiping actions.

Overview
Lists are displayed in a single column layout and are made up of items that can contain one or more controls. Items should be grouped together in a logical way that makes sense to the user, for example items in a form or a list of settings.

Use appropriately to the content
When images or icons are presented without text or actions, it would make more sense to show them inside a grid rather than a list, as in a photo gallery.

Use a search function
Consider adding a search function for lists that are likely to contain a large number of items, so that users can quickly find a particular item.

Contextual actions for list items
Items in a list can have actions that can be placed in a context menu. The context menu can be accessed in two ways: by swiping or by right-clicking the list item. Touch and pointer interactions perform the same functions across convergent devices for consistency and familiarity across the platform.
Swiping right may reveal a button for the leading action, such as 'Delete' or something similar. Swiping left may reveal buttons for up to three other important actions; these are the trailing actions.
When the user interacts with an item using a mouse, right-clicking will reveal the context menu, and click-and-drag will reveal the leading and trailing actions on either side of the item. This gives the same experience as swiping.
The actions are placed within two categories: leading for negative actions and trailing for positive actions. Grouping actions into positive and negative areas inside your list items will reinforce familiarity inside your app, allowing users to find and identify important actions easily.
Touch – Leading action: swipe left to right. Touch – Trailing action: swipe right to left.
Pointer: a user can right-click to reveal the contextual menu, or drag right to left to reveal the leading or trailing options in an item.
Focus: a user can reveal the contextual menu by focusing on an item using keyboard navigation and hitting a keyboard key to reveal it.
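A minimal sketch of leading and trailing actions on a list item, using the ListItem and ListItemActions components named above; the icon names are illustrative:

import QtQuick 2.4
import Ubuntu.Components 1.3

ListItem {
    width: units.gu(40)
    // Swiping left to right (or click-and-drag) reveals the destructive
    // leading action; swiping right to left reveals the trailing actions.
    leadingActions: ListItemActions {
        actions: [
            Action { iconName: "delete" }
        ]
    }
    trailingActions: ListItemActions {
        actions: [
            Action { iconName: "message" },
            Action { iconName: "share" }
        ]
    }
    Label {
        anchors.centerIn: parent
        text: i18n.tr("A list item with contextual actions")
    }
}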
Lists in edit mode
Edit mode allows users to modify a particular item or multiple items at once. You can use edit mode to allow users to multi-select, rearrange or delete items inside a list. When edit mode is entered, the whole screen becomes an edit state and the header shows associated editing actions for the content. Alternatively, if the user long-presses an item, a context menu will show the associated editing actions too.

Use case: edit contacts
In the Contacts app, for example, the list of contacts is made editable to allow users to delete or edit a contact's information.
1. A user selects an item in the list by using the edit icon in the header.
2. The list becomes selectable, with checkboxes that provide a multi-select mode.
3. The header changes to reveal editing actions, and the header section is replaced with a toolbar underneath the main header with further editing actions.

Note: For more information about how edit mode is used, see Header.

Structure
The toolkit provides list item layouts that consist of 1 to 4 slots, which can be arranged in a variety of ways. These slots can contain components that allow the list item to perform actions and display content.
Slot A (mandatory): can only contain text, such as a title with an optional subtitle.
Slot B (optional): for additional text, an icon or a component.
List items must always contain at least one slot.

Chevron (optional)
If your list item allows for navigation through to an associated view, then a ProgressionSlot (chevron) is used in a fixed position in the right-most slot. No other action is displayed in this slot, because that would conflict with the chevron navigation.

Note: The ProgressionSlot API is designed to provide an easy way for developers to add a progression symbol to a list item created using ListItemLayout or SlotsLayout.

Content
If you use the ListItemLayout API, then Slot A can contain a one-line title, a subtitle, and a two-line summary. If you use the SlotsLayout API, you can put whatever you choose into Slot A. A recommendation is to place the most distinguishing content in the first line of your list item.
Text is always aligned according to the currently displayed language. For example, English runs left to right, whereas Arabic runs right to left.
ListItemLayout labels:
1. 1 line – Title
2. 1 line – Subtitle
3. 2 lines – Summary

Note: Developers are free to override the maximum number of lines for each label. See the Label API for more information.
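A minimal sketch of the slot structure, assuming ListItemLayout, SlotsLayout and ProgressionSlot from Ubuntu.Components 1.3; the contact details are illustrative data:

import QtQuick 2.4
import Ubuntu.Components 1.3

ListItem {
    width: units.gu(40)
    height: layout.height

    ListItemLayout {
        id: layout
        // Slot A: title, subtitle and up to a two-line summary.
        title.text: "John Smith"
        subtitle.text: "+1 555 0100"
        summary.text: i18n.tr("Last contacted yesterday")

        // An optional side slot holding an icon.
        Icon {
            name: "message"
            width: units.gu(2)
            height: width
            SlotsLayout.position: SlotsLayout.Trailing
        }

        // Chevron for navigation, fixed in the right-most slot.
        ProgressionSlot {}
    }
}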
Actions
Primary: the primary action is the main action you want a user to perform.
Secondary: a secondary action is an action the user may wish to perform instead of the primary action.
One action. Primary action: a user wants to turn their dial pad sound on or off.
Two actions. Primary action: a user can call by tapping or clicking on a contact's name. Secondary action: a user can message a contact by tapping or clicking on the message action icon.
Two actions – with primary icon. Primary action: call by tapping or clicking on the dial action. Secondary action: message by tapping or clicking on the message action icon.

Note: Avoid creating visual noise by repeatedly using additional actions in list items.

Touch regions
Tapping anywhere in the list item should perform the primary action. The secondary action is only triggered by touching the particular touch region where the action resides. For example, a user will expect a tap on the contact name or call button (the primary action) to call a contact. The secondary action would be to message the contact using the message action icon.
Primary action – call. Secondary action – message.

Communicating feedback
You can use a slot to communicate if something has changed within a list item. For example, a timestamp on a message indicates when the message was received, and a tick shows the message has been read.

Use text labels
If a list item needs to provide feedback from an associated action, then the list item itself should not be used to communicate this. In System Settings, if a user has tried to connect to another device using Bluetooth and no device has been found, a text label within the view is used to indicate feedback.

List item layouts
The toolkit provides a number of layouts to use when creating a list item, to ensure users get the best experience from your app across different surfaces. Consider:
• Slot A is mandatory and should always contain text.
• The maximum number of slots is four.

Note: You can place what you wish inside the slots. However, these recommendations take into consideration cognitive familiarity, to provide a clean and minimalist look.

Layouts range from one slot up to four slots.

Note: Provide a caption under the title to give the user more information if necessary. For example, displaying a contact's email address saves the user clicking through to find the information.

Avoid cluttered list items
A list item that is too overcrowded makes it hard to see immediately what the primary action is.

Selection controls
The following components are used to change the state of a property or setting from a set of predefined values.
• Checkbox ›
• Radio buttons ›
• Switches ›
• Date and time pickers ›
• Slider ›

Checkbox
Note: The Checkbox API is a component with two states: checked or unchecked. It can be used to set boolean options.

Confirmation
Use for single selection where users confirm an action, such as accepting the Terms and Conditions of a setting.

Note: Use indeterminate checkboxes when the value is neither checked nor unchecked.

Make it obvious
Don't make it hard for the user to understand the effect of the unchecked value.
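A minimal sketch of a confirmation checkbox aligned with its label, using the CheckBox component described above; the label text is illustrative:

import QtQuick 2.4
import Ubuntu.Components 1.3

Row {
    spacing: units.gu(1)

    CheckBox {
        id: termsBox
        checked: false
    }
    Label {
        // Align the label with its control so it is obvious which checkbox
        // the explanation belongs to.
        anchors.verticalCenter: termsBox.verticalCenter
        text: i18n.tr("I accept the terms and conditions")
    }
}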
Alignment
When aligning checkboxes with labels, or other dependent controls, it is important that the user knows which checkbox belongs to the corresponding explanation.

Note: For more guidance on using familiar language and the right tone of voice for labels, see Writing (coming soon).

Radio buttons
Use radio buttons when there is a list of two or more options that are exclusive of each other and only one choice can be selected, such as choosing a message tone. Clicking a non-selected radio button will deselect whichever button was previously selected. For example, 'Soft delay' will be deselected if the user selects another option.

Note: Options presented with radio buttons require less mental effort, because users can easily compare options as they are all visible at once.

One selection – use radio buttons. Multiple selection – use checkboxes.

Use other controls if necessary
If you have a selection of options that would take long to list and the user could type the value faster, then use a text field instead. Don't use a radio menu entirely for command items. If the menu never contains any radio items, then use a toolbutton menu instead.

Note: A toolbutton is a borderless button, as found in the header or a bottom-edge panel. It usually consists of an icon, but may instead contain text buttons. See Buttons (coming soon) for more details.

Radio list
If you have a large set of radio buttons, then place them in a list, such as a list of organizations. That way users can easily navigate and scroll through the options.

Don't interrupt the user
When a user selects an option, avoid hindering them from choosing another option by opening up a dialog or closing the window.

Switches
The switch allows the user to perform an action by turning it on or off.

Note: The Switch API is a component with two states: checked or unchecked. It can be used to set boolean options. The behavior is the same as CheckBox; the only difference is the graphical style.

Use cases
If you are asking the user to turn a setting or instruction on or off, then use a switch.

Date and time pickers
Note: The PickerPanel API is a component that provides date and time values with picking functionality.

Note: An AM/PM selector will be added if the 12-hour clock is used.

Slider
Use interactive sliders to select a value from a continuous or discrete range of values.

Note: The Slider API is a component that allows the user to select a value from a continuous range of values.

Slider types
You can choose between different slider types, such as a default slider, a minimum value slider or an interval value slider, to allow the user to set different values.
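A minimal sketch of a default slider, using the Slider component described above; the range, width and logging are illustrative:

import QtQuick 2.4
import Ubuntu.Components 1.3

Slider {
    width: units.gu(38)
    minimumValue: 0
    maximumValue: 100
    value: 50
    live: true   // report value changes continuously while dragging
    onValueChanged: console.log("Value:", value.toFixed(0))
}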
Note: The interactive nature of the slider makes it a great choice for settings that reflect intensity levels, such as volume, brightness, or color saturation.

System volume control
Note: The system volume control component is currently under heavy development, because it might also include other audio features, so you won't have to worry about developing it yourself.

The advantages of using the system volume control:
• People won't be annoyed that your app is louder or quieter than others, because your app uses the system audio volume.
• Volume change notifications don't appear in front of your app when the slider is altered (especially important for a video player).
• You don't need to implement your own volume-adjusting code, because Ubuntu changes the volume of your app automatically.

Activity indicators
Use activity indicators to give the user an indication of how long a running task might take and how much work has already been done.

Hint: The Activity Indicator API visually indicates that a task of unknown or known duration is in progress.

Hint: The toolkit progress bars and spinners automatically handle presence for individual tasks by waiting for two seconds. If the task takes less than that, they won't appear at all.

Hint: See Communicating Progress (coming soon) for best practices on labelling activity indicators.
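A minimal sketch, assuming the ActivityIndicator component from Ubuntu.Components 1.3; binding running to your own task state is left as an assumption:

import QtQuick 2.4
import Ubuntu.Components 1.3

Item {
    width: units.gu(20)
    height: units.gu(20)

    ActivityIndicator {
        anchors.centerIn: parent
        // In a real app, bind this to the in-progress state of your task;
        // the toolkit briefly delays showing the spinner for short tasks.
        running: true
    }
}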
Context menus
Use a context menu to provide quick access to important actions within your application.
• Overview ›
• Revealing actions ›
• Layouts ›
• Behavior ›

Note: See how context menus behave in List items.

Cascading menus
Cascading menus act as sub-menus within your main contextual or application menu.

Note: Try to limit nesting to one level deep, because it can be difficult for the user to navigate through multiple nested submenus in staged environments.

Note: For more information about leading and trailing actions, see List item.

Layouts
It is important that each menu retains consistency in its layout and content when used across different devices.
1. Select item
2. Region
3. Window

Text labels
It is important that you accurately describe the associated action or option in a succinct manner when using text labels inside your menus.
Do: Be concise and clear, to avoid confusing or misinforming the user.
Don't: Use over-long text labels that result in truncation (. . . ).

Note: By default the SDK applies truncation to long text labels, therefore avoid truncating them manually.

Label examples
• Add
• Edit
• New (rather than 'Create')
• Move
• Save / Save As
• Delete / Remove
• Send
• Share

Grouping menu items
Items should be grouped in a logical manner, using dividers to separate related actions. Divide a predictable set of commands, such as clipboard commands (Cut, Copy, Paste), from app-specific or view-specific ones.

Note: Developers can choose to use a burger menu to store the actions inside the header rather than inside the list item, if they wish.

Note: For more information on checkboxes and radio buttons, see Selection controls.

Behavior
Pointer interaction: the menu is aligned down and to the right of the point at which the user right-clicked or long-pressed with the pointing device.

Scrolling
The toolkit provides a ScrollView component that allows users to scroll content inside panels, text fields and lists across all devices.

Note: The ScrollView API is a scrollable view that features scrollbars and scrolling when using keyboard keys.

ScrollView vs. Scrollbar APIs
The ScrollView API works by wrapping the Scrollbar API in a view, and provides additional features such as:
• keyboard navigation and focus handling for a complete convergent experience
• automatic positioning of vertical and horizontal scrollbars, which prevents them from overlapping one another when both are present on screen.
The Scrollbar API doesn't handle keyboard input and has the following requirements:
• the content position is driven through the attached Flickable item
• the alignment management has to adhere to the anchors for built-in alignment functionality
• every style implementation should drive the position through contentX/contentY properties, depending on whether the orientation is vertical or horizontal.

Handling overlay
A ScrollView handles scrollbar placement by automatically placing the scrollbars horizontally and vertically where appropriate in the device layout.

Use cases
Borderless content
If the content of your app is borderless, like the camera, it wouldn't be practical to have scrollbars, because they can hinder the user's view and the primary task of taking a picture.
Do: borderless. Don't: with scrollbars.

Avoid custom scrollers
Custom scrollers usually work poorly because they are hard to recognise, or they do not include all the functions people expect.
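A minimal sketch of wrapping a list in a ScrollView, using the ScrollView, ListView and ListItem components; the model and sizes are illustrative:

import QtQuick 2.4
import Ubuntu.Components 1.3

ScrollView {
    id: scroll
    width: units.gu(40)
    height: units.gu(60)

    // The ScrollView decorates the flickable with scrollbars, adds keyboard
    // navigation, and positions the bars so they never overlap.
    ListView {
        width: scroll.width
        height: scroll.height
        model: 50
        delegate: ListItem {
            Label {
                anchors.centerIn: parent
                text: i18n.tr("Item %1").arg(index)
            }
        }
    }
}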
Scrolling through a list
Place a ListView inside a ScrollView to present a scrollbar when items have scrolled off-screen.

Note: Use the ListView API, or see List items for more guidance on using lists inside your application.

Scrolling within a text field
If your app allows for multi-line input inside a text field, then the user will expect to scroll the content. In a text field, such as in the Messaging app, the field automatically displays a scrollbar that overlays the content, to allow users to scroll once they have entered more than five lines of text.

Scrolling inside panels
The toolkit provides panels that can be used to display anything from images and large amounts of text to videos. The user will expect to scroll vertically, horizontally, or both to view the content. By wrapping the panel inside a ScrollView, it will automatically adhere to the content in any device layout.

Design patterns
Solve recurring design problems with common patterns to provide a familiar and usable interface.
Gestures: apply natural and progressive gestures to your app to allow users to get where they want to be. Gestural activities ›
Navigation: allow users to navigate through your app in logical steps using components with innate behaviour. Understand page stack ›
Layouts: use predefined layouts to help you achieve a seamless experience across all devices. Use an adaptive layout ›

Gestures
Make the most of Ubuntu's gestures to establish consistency and familiarity within your application.
• Edge gesture ›
• Gestural activities ›
• Discoverability ›

1. Top edge swipe reveals the indicator menu that contains settings and notifications when swiping down.
2. Short left edge swipe reveals favorited and frequently used apps from the launcher menu.
3. Long left edge swipe takes you back to the app screen (which shows all the installed apps) when you are inside an application.
4. Short right edge swipe reveals the previous app used.
5. Long right edge swipe reveals the app switcher, showing the currently open apps.

Note: For more information on the behavior of the bottom edge hint, see Bottom edge.

Note: For more information on instructional overlays, see Coach Marks (coming soon).

Navigation
Consistent and effortless navigation is an essential element of the overall user experience.
• Usage ›
• Structure ›
• Components ›

1. Overview – the most accessible features you want the user to have instant access to, such as a list of emails.
2. Top level – filters of the overview, such as threads or recent emails.
3. Lower level – detailed views that show detailed information, such as contact information.
4. App settings – a place for the settings of your app, such as notification settings for receiving emails.
For example: Overview – Dialer; Top level – Contacts.

Slot arrangement
The header features a maximum of four slots that can be arranged and combined to fulfil the user's needs.
Slot A:
• Back – use to navigate to a previous page of the app (if other pages are available)
• Navigation drawer – use to store more pages if there is no room in the header
Slot B:
• Title (mandatory) – provide a one-line title of the app or view
• Subtitle (optional) – extra explanatory text up to two lines
Slots C/D:
• Search – use to search for specific content
• Settings – use to navigate to your app's settings page

Use drawers sparingly, because a drawer:
• hides pages and actions from the user
• conflicts with the Back button
• requires a tap to see the available pages or actions, and two taps every time a user switches pages.

Note: A Back button would be irrelevant if your app only has one page, because there would be no pages to go back from; so it is not required.

Note: For more slot layout examples, see Header.

Header appearance
You can decide how you want the header to appear in four ways: Fixed, Fixed and Opaque, Fixed and Transparent, Hidden.
Fixed (default): useful for making sections or actions always accessible when the user scrolls.
Transparent: useful if you don't want the header to be the focus of attention, but want it readily available if the user needs it.
Hidden: the header is not visible to the user.

1. The main header is a separate component that can hold actions and navigational options.
2. The header section sits below the main header and allows for sub-navigation or filtering within the screen indicated by the header above. One option is always selected.
New view
A new view stacks over the previous page once the user has committed to the swipe.

Layouts
Make your app consistent and adaptive across all screen sizes with just one API.
• Grid unit system ›
• Layouts ›
• Good practice ›

Note: The Adaptive Layout API allows you to add multiple columns to a page (under heavy development).

Recommended panel widths:
• 40/50GU for mobile and phablet screens
• 90GU for tablets, desktops and larger screens.

Example of a 50GU layout for mobile
A mobile device would typically suit a 50GU-wide virtual portrait screen, because it offers the right balance of content to screen real estate for palm-sized viewing.

Example of a 90GU layout on a tablet in portrait mode
90GU is ideal for tablet-sized screens, because it offers more real estate for panels.

Note: See the design blog for developer specifications of grid units and layouts.

If your app can use multiple columns, then use a single-screen layout on mobile touch devices that changes to a 2 or 3 panel layout on tablets.

Note: Developers can choose to create completely adaptive 2 or 3 panel layouts for desktop if they desire.

Adaptive layout
Use the AdaptiveLayout API to display panels in one or more columns from left to right.

Note: The AdaptiveLayout API provides a flexible way of viewing a stack of pages in one or more columns. Unlike in PageStack, there can be more than one Page active at a time, depending on the number of columns in the view.

To provide a consistent user experience across the whole platform, leave at least one of the panels fixed at a minimum size of either 50 or 40GU inside each screen size. This creates a familiar experience across mobile, tablet and desktop.
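A minimal sketch of this pattern, assuming the AdaptivePageLayout component that Ubuntu.Components 1.3 provides for adaptive columns; the pages and model are illustrative:

import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {
    width: units.gu(100)
    height: units.gu(75)

    AdaptivePageLayout {
        id: layout
        anchors.fill: parent
        primaryPage: listPage

        Page {
            id: listPage
            header: PageHeader { title: i18n.tr("Contacts") }
            ListView {
                anchors.fill: parent
                anchors.topMargin: listPage.header.height
                model: 10
                delegate: ListItem {
                    Label {
                        anchors.centerIn: parent
                        text: i18n.tr("Contact %1").arg(index)
                    }
                    // On a narrow screen the detail page replaces this
                    // column; on wider screens it opens in a new column.
                    onClicked: layout.addPageToNextColumn(listPage, detailPage)
                }
            }
        }
        Page {
            id: detailPage
            header: PageHeader { title: i18n.tr("Details") }
        }
    }
}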
QML apps
QML - the best tool to unlock your creativity. QML is an extremely powerful JavaScript-based declarative language for designing intuitive, natural and responsive user interfaces. The Ubuntu SDK provides fluid and natural user interface QML elements that blend into Ubuntu without getting in the way. And with a rich framework based on the cross-platform Qt framework, QML features an extensive set of APIs that cover the needs of the most demanding developers. Read the API documentation (coming soon...).

QML tutorials
The tutorials below will help you get started writing your first QML applications, as well as learning how to better use specific parts of the Ubuntu SDK. To follow them you will need:
• Ubuntu 14.04 or later – get Ubuntu

1. In Ubuntu SDK, press Ctrl+N to create a new project.
2. Select the Projects > Ubuntu > App with Simple UI template and click Choose. . .
3. Give the project CurrencyConverter as a Name. You can leave the Create in: field as the default, and then click Next.
4. You can optionally set up a revision control system such as Bazaar in the final step, but that's outside the scope of this tutorial. Click on Finish.
5. Replace the Column component and all of its children with the Page shown below, and then save with Ctrl+S:

import QtQuick 2.4
import Ubuntu.Components 1.3

/*!
    \brief MainView with a Label and Button elements.
*/
MainView {
    id: root

    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "currencyconverter.yourname"

    width: units.gu(100)
    height: units.gu(75)

    property real margins: units.gu(2)
    property real buttonWidth: units.gu(9)

    Page {
        title: i18n.tr("Currency Converter")
    }
}

Try to run it now to see the results:
1. Inside Ubuntu SDK, press the Ctrl+R key combination. It is a shortcut for the Build > Run menu entry.
Or alternatively, from the terminal:
1. Open a terminal with Ctrl+Alt+T.

Sizes in the code above are expressed in grid units (units.gu), which map to different numbers of pixels depending on the device:

Device           Conversion
Most laptops     1 gu = 8 px
Retina laptops   1 gu = 16 px
Smart phones     1 gu = 18 px

Dialogs
The following markup declares a dialog, along with buttons to show and hide it:

<div>
  <button data-role="button" id="show">Show</button>
</div>
<div data-role="dialog" id="dialog">
  <button data-role="button" id="hide">Hide</button>
</div>

The following JavaScript handles the button click events and shows/hides the dialog:

window.onload = function () {
    var UI = new UbuntuUI();
    UI.init();
    var dialog = UI.dialog('dialog');
    var show = UI.button('show').click( function () {
        dialog.show();
    });
    var hide = UI.button('hide').click( function () {
        dialog.hide();
    });
};

Lists
The Ubuntu HTML5 framework provides flexible lists. A list can optionally have header text. Each list item supports various options, including primary and secondary text labels, an icon, and more. Here's a sample list declaration:

<div data-role="list">
  <header>My header text</header>
  <ul>
    <li>
      <a href="#">Main text, to the left</a>
    </li>
    <li>
      <a href="#">Main text</a>
      <label>Right text</label>
    </li>
    <li>
      <aside>
        <img src="someicon.png">
      </aside>
      <a href="#">Main text</a>
      <label>Right</label>
    </li>
  </ul>
</div>
More widgets
That's a quick overview of some of the key Ubuntu widgets, but there are more, for example shapes and popups. For a presentation of Ubuntu HTML5 widgets, check out the HTML5 Gallery App (installed by the ubuntu-html5-ui-toolkit-examples package). You can launch the gallery by searching the Ubuntu Applications scope for "Ubuntu HTML5 UI Gallery". Be sure to check out the JavaScript API reference docs for everything.

Initializing the Ubuntu JavaScript framework
As noted above, your index.html file imports Ubuntu JavaScript framework files. These bring the app to life as a true Ubuntu app. Your app must initialize the framework from JavaScript.
Note: When you create an HTML5 app in the Ubuntu SDK, your app already has the code needed for this. Here we simply take a look at this code to understand why it exists.

The app's JavaScript file
Your brand new app has a js/app.js file by default. It does a few key things after the DOM is loaded:
• Creates an UbuntuUI object: var UI = new UbuntuUI();
• Runs its init() method: UI.init();
• (Optional) Creates an event handler for the Cordova ready event (below).
This code runs when the window.onload event is received, which means when the DOM is fully loaded. Here's an example:

window.onload = function () {
    var UI = new UbuntuUI();
    UI.init();
    document.addEventListener("deviceready", function() {
        if (console && console.log)
            console.log('Platform layer API ready');
    }, false);
};

As previous examples show, this onload event handler is where you initialize your own GUI, adding objects and event handlers, to be sure the GUI is ready to respond to user interactions right from the start.

HTML5 Tutorials - Meanings app
This is a great starting place to learn the basics of writing an HTML5 app. Here, you:
• Start with a new, default HTML5 app project in the Ubuntu SDK
• Implement a simple Ubuntu HTML5 GUI
• Add some JavaScript
• Run and test the app
• Take a quick run through packaging the app as a click package
These are the steps you follow for most apps.
You will put together a simple app called "Meanings". The app displays a simple Ubuntu HTML5 GUI with a header, a text input box, and a button. When the user enters a word in the box and clicks the button, a web API is called that returns the meanings of the word. They are displayed in an Ubuntu list. This simple app does not use any Ubuntu Platform APIs, nor does it use a Cordova API. It is a straightforward Ubuntu app that happens to be written in HTML5. Be sure to check out other tutorials that dive into these important areas too.

Before getting started
There are a couple of requirements:
• You need to install the Ubuntu SDK
• You need to know how to create an HTML5 app project in the SDK
• You should have some experience running apps from the SDK

Getting the app source
The completed app source tree is available as a Bazaar branch. You can get it as follows:
1. Open a terminal with Ctrl + Alt + T.
2. Ensure the bzr package is installed with:
$ sudo apt install bzr
3. Get the branch with:
$ bzr branch lp:ubuntu-sdk-tutorials
4. Move into the html5/html5-tutorial-meanings directory:
$ cd ubuntu-sdk-tutorials/html5/html5-tutorial-meanings
Now, let's get developing!
Create your HTML5 app project in the SDK
Go ahead and create an HTML5 app project in the SDK. Give the project any name you want. Here, we call it "meanings". Later we give the app the proper title displayed to users at runtime: "Meanings".

Practise running the app
After creating an HTML5 app project in the SDK, you can run it directly from the SDK on the Ubuntu Desktop (and on attached devices, including physical devices and Ubuntu emulators you have created with the SDK). Get it running on the Desktop with: Build > Run.
Tip: The SDK has an icon for this (on the left side vertical panel) and a keyboard shortcut: Ctrl + R.
Here's how a brand new app looks when run from the SDK (the actual GUI may vary as refinements are released). The brand new HTML5 app project has the basic set of files you need. But, naturally, the GUI and control logic are simply the defaults for any new app. We'll implement a GUI and control logic that suit the needs of our Meanings app below.
Note: If you have a physical device, you can try running the app there by following the tips in the Ubuntu SDK section. You can also try creating an emulator and running it there, again following those tips.

Run the app from the terminal
This is a great time to try running the unmodified app directly from the terminal. This can be convenient.
1. Open a terminal. There are many ways; a quick way is Ctrl + Alt + T.
2. Move to your app project directory.
3. Launch the app as follows:
$ ubuntu-html5-app-launcher --www=www
Let's take a closer look at that command:
• ubuntu-html5-app-launcher: This is the executable that launches the web container in which the HTML5 app runs. The container exposes built-in Ubuntu App Platform APIs that your app's JavaScript can call directly.
• --www=www: This argument simply tells ubuntu-html5-app-launcher where to find the directory that contains the app's HTML5 files. Currently, the HTML5 files are required to be in the www/ directory of the app project.

Debugging the app's JavaScript
Before taking a closer look at Ubuntu HTML5, let's take a moment to learn how to debug the app's JavaScript. Many web developers are familiar with debugging a web page they are developing right in the browser displaying the page, using the browser's own development tools. That's the approach used with Ubuntu HTML5 apps. The Ubuntu HTML5 app runtime container is based on WebKit; so is Chrome/Chromium. The approach used here is to send the debug data behind the scenes to a URL. You then open that URL in a WebKit browser, and you can use its debug capabilities, for example having direct access to the JavaScript console.

Add the --inspector argument to launch in debug mode
When you launch the app from the terminal with ubuntu-html5-app-launcher, you simply add the --inspector argument. Then watch the output in the launch terminal for "Inspector server..." and open the stated URL with the Chrome, Chromium (or other WebKit) browser. For example, you would use a command like this:
$ ubuntu-html5-app-launcher --www=www --inspector
Now, watch the output for something like this:
Inspector server started successfully. Try pointing a WebKit browser to http://192.168.1.105:9221
Then, you would open the URL in a WebKit browser (like Chromium) and use its native development tools.
In the case of Chromium, the displayed web page has a link you click, which takes you to the debug tools for the running app instance.
Tip: An app with a JavaScript error may fail to load the HTML GUI, so getting used to launching in inspector (debug) mode and opening the URL in a WebKit browser is an essential skill.
Let's move on and take a look at the key files in your new app project.

HTML5 app project files
index.html
Naturally, your new HTML5 app project has an index.html, the root file for the app.
Tip: Currently, all HTML5 files, including index.html, are expected to be in the www/ directory.
The index.html file imports all it needs, including the Ubuntu CSS and Ubuntu JavaScript, which provides a convenient set of methods to control your Ubuntu HTML5 widgets. By default, it also imports ./js/app.js, the app-specific JavaScript file. And it may also import a Cordova JavaScript file (not needed for this app, so you can delete it if you want). Let's zero in on ./js/app.js.

App-specific JavaScript: app.js
This is your app's essential JavaScript file. You add your control code here. But first, let's take a quick look at some critical code it contains by default:

window.onload = function () {
    var UI = new UbuntuUI();
    UI.init();
    [...]
}

This is the required code that creates an UbuntuUI object (locally named UI). This object is your entry point into the UbuntuUI API. This API is used to control the Ubuntu HTML5 GUI.
Tip: Later, take a look at the HTML5 UbuntuUI API reference docs.
This is an event handler for the window.onload event. It provides an anonymous function that executes when the event is received. This event is received after the DOM fully loads, which is the proper time to initialize the UbuntuUI.
Note: Another approach is to use the JQuery(document).ready() event handler method, as we do later in this app.
After the UI object is created, the code runs the essential UI.init() method. This method is needed to initialize the UI framework.

Other project files
Here's a quick summary of other key files:
• APP.desktop: As noted, this is the file used by the system to launch the app. Check it out and note the critical Exec line that shows the command line the system uses to start the app. Note also useful bits like the Icon line that you use to name the icon file the system uses to represent the app in Unity. This is usually an icon in the app's source tree.
There are two files that are hidden in the SDK GUI:
• APP.ubuntuhtmlproject: This is the Ubuntu SDK (really, the QtCreator) project file. Select this when browsing the file system from the SDK to open a project.
• APP.ubuntuhtmlproject.user: This contains per-project SDK settings. It is normally not edited directly – use the SDK GUI to set preferences instead. Note that this file is normally not added to version control.
Other key files are added when you package the app, as we see below. Let's get on with the HTML5 development!

Ubuntu HTML5 markup intro
Ubuntu HTML5 apps use specific markup to implement the GUI. Let's take a super fast look at Ubuntu HTML5 highlights.
Tip: Check out the HTML5 Guide for a more detailed look.

App layout
You can have "flat" organization with tab-style navigation or "deep" organization with pagestack-style navigation.
Our app will use the simple tab-style navigation with a single tabitem and a single corresponding tab (for content).

Ubuntu widgets
Ubuntu HTML5 provides a set of widgets you can declare in markup for things like buttons, lists, toolbars (also called footers), dialogs, and so on. Our app will use:
• A header with a single tabitem with text: "Meanings"
• A corresponding tab that contains the main content, including:
• An input box where the user enters a word
• A button that looks up the word in the web API
• A list that displays the returned meanings of the word

Replacing the default HTML5
We don't need most of the default HTML in index.html. So let's replace the whole <body>[...]</body> with HTML5 that declares our app's GUI. Copy the following into index.html, replacing the <body>[...]</body>:

<body>
  <div data-role="mainview">

    <header data-role="header">
      <ul data-role="tabs">
        <li data-role="tabitem" data-page="main-page">Meanings</li>
      </ul>
    </header>

    <div data-role="content">
      <div data-role="tab" id="main-page">
        <div><input type="text" id="word">Enter a word</input></div>
        <button data-role="button" id="lookup">Get</button>
        <div data-role="list" id="results"></div>
      </div> <!-- tab: main-page -->
    </div> <!-- content -->

  </div> <!-- mainview -->
</body>

Tip: It may be easier to copy and paste from the app source branch described above.
Let's check out how the app looks if you run it now with Ctrl + R. Note that the GUI does not function yet, because we have not yet added the JavaScript control logic.

App HTML5 highlights
Let's examine some highlights of this HTML.

Mainview
All the HTML5 inside the body is wrapped in a <div data-role="mainview">. This is standard for Ubuntu HTML5 apps.

Header
• There is a header: <header data-role="header">
• The header contains an unordered list (ul)
• The unordered list has a single listitem (li) whose data-role is "tabitem": Meanings
This implements the header part of our tab-style layout:
• We have a single tab.
• The text that displays is "Meanings".
• Note the tabitem's data-page attribute. This value (main-page) is what connects the tabitem to the tab declared lower down whose id is the same: <div data-role="tab" id="main-page">. When the user clicks the tabitem in the header, the corresponding tab displays. We have only a single tabitem/tab.

Content
Below the header, we have a content div, declared like this:

<div data-role="content">
  [...]
</div> <!-- content -->

This div contains the tabs that correspond with each tabitem declared in the header (in our case, only one tab). Let's take a look at our tab.

Tab
Here is our one tab:

<div data-role="tab" id="main-page">
  [...]
</div> <!-- tab: main-page -->

The data-role="tab" is what declares it as an Ubuntu tab. As noted above, the tab's id (main-page) matches the tabitem's data-page value.

Input box
<div><input type="text" id="word">Enter a word</input></div>
We put this in a div so it is rendered as block, not inline, per normal HTML5. Note the id (word): the JavaScript we add later uses it to read the entered word.

Button
<button data-role="button" id="lookup">Get</button>
This button is declared as an Ubuntu button, with a data-role of button. This means it is pulled into the framework, and therefore you get a convenient API for it. For example, you can add a click event handler using the id easily.
Tip: Ubuntu CSS provides styles for several button classes. Check out the actual Ubuntu CSS files to see what is available. For example, check out: /usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/css/buttons.css

Empty list, populated later
We declare a list that starts off empty:
<div data-role="list" id="results"></div>
That's an Ubuntu list. We will use the UbuntuUI framework to obtain the list in JavaScript and populate it with the meanings for the word that are returned from the web API lookup. That's about it for the HTML5. Pretty straightforward.
Now, let’s add the JavaScript we need to complete this app’s basic pieces. Implementing our Javascript Adding the JQuery lib This app uses JQuery to call the web API. We need to add the JQuery lib to our package, which takes a few steps: • Ensure libjs-jquery package is installed with: $ sudo apt-get install libjs-jquery • Copy the lib into your app directory with $ cp /usr/share/javascript/jquery/jquery.min.js . • Tip: You might need to close and open the project for the jquery.min.js file to display in the SDK project. • Include the jquery.min.js file into your index.html file by adding this line into the main HTML <header> .. </header>: 6.3. The Ubuntu App platform - develop with seamless device integration 159 UBports Documentation, Release 1.0 Using the JQuery ready event handler In js/app.js, find the default window.onload event handler: window.onload = function () { var UI = new UbuntuUI(); UI.init(); [...] } Change the first and last lines to use the JQuery ready method, like this: $( document ).ready(function() { var UI = new UbuntuUI(); UI.init(); [...] }); Note that the last line has changed!. Add the button event handler As noted above, the button event handler code gets the word the user entered and calls the web API to get meanings for it. Start by deleting all the code inside ready function except for the creation of the UI object and running of its init() method, so it looks like this: $( document ).ready(function() { var UI = new UbuntuUI(); UI.init(); DELETE ALL THIS CODE }); Now, after the UI.init(); line, add the following: UI.button('lookup').click(function () { var lookup = document.getElementById('word').value; console.log('Looking up: ' + lookup); var Then, you should build the application for the target device: $ cordova build --device Note: On first run, you may have to install some build dependencies in the click chroot. Check the section above for details And then just start the application on the phone: $ cordova run --device --debug At this point, you should see the familiar Cordova logo in the application running on your phone. Your Ubuntu system is ready for Cordova development. Now, let’s take a high-level look at using the Cordova APIs. So many APIs! Which to use? There is overlap in APIs from various sources for use in HTML5 apps. Consider geolocation. Many web engines now support a geolocation API. W3C has a proposed geolocation API as well. Cordova also provides a geolocation API. Here we provide some guidelines for developers to align with Ubuntu directions: 166 Chapter 6. App development UBports Documentation, Release 1.0 6.3. The Ubuntu App platform - develop with seamless device integration 167 UBports Documentation, Release 1.0 First Choice: Ubuntu App Platform APIs When an Ubuntu App Platform API is available and not deprecated, it is the best choice. This provides the best integration with the platform. However, it will affect your ability to port to other platforms, if that is your goal. For example, developers should use Content Hub, Online Accounts and Alarms APIs even if other APIs may exist that provide similar functionality. Second Choice: W3C Working W3C standard APIs should be used when there is no Ubuntu App Platform API for the functionality. W3C APIs are quickly and well supported in browsers and web containers and are likely to provide the most stability and standard APIs, so these are the best choice when platform APIs do not exist. 
Rocking with Cordova APIs
Cordova APIs provide key functionality not yet present in W3C standards or the Ubuntu Platform. Examples include Splash Screen and Accelerometer. As such, Cordova APIs are a great choice for these system- and device-level features that can really make your HTML5 app rock! Ubuntu HTML5, Cordova and Web APIs are in constant development, so the recommendations for the particular APIs mentioned above may be updated. Please stay tuned.

Programming with Cordova
Here we look at how your app knows that Cordova is loaded and ready. This is where you can place code that should only run once Cordova has fully detected your device, for example event handlers that use Cordova navigator objects.

Handling Cordova's deviceready event
Web developers are familiar with the window.onload event that signals when the DOM is fully loaded. This event is useful for running event handler code right after the DOM is loaded. In Ubuntu HTML5 apps, we use that event to run the code that initializes the Ubuntu UI framework. After that initialization code, your Cordova app can set up an event handler for Cordova's deviceready event. This event signals that the Cordova runtime is fully ready for operations. For example, this is where you should place your event handlers that invoke Cordova objects. Let's take a look at sample code that has these parts:

window.onload = function () {
    /* Optional: Initialize the Ubuntu UI framework */
    var UI = new UbuntuUI();
    UI.init();

    /* Handle the Cordova deviceready event */
    document.addEventListener("deviceready", function() {
        if (console && console.log)
            console.log('Platform layer API ready');

        /* Add event listeners that invoke Cordova here */

        // take picture with Cordova navigator.camera object
        UI.button("click").click( function() {
            navigator.camera.getPicture(onSuccess, onFail, {
                destinationType: Camera.DestinationType.DATA_URL
            });
            console.log("Take Picture button clicked");
        }); // "click" button event handler
    }, false);
};

function onSuccess(data) {
    // DO SOMETHING
};

function onFail(data) {
    // DO SOMETHING
};

Here, inside the deviceready event handler, we add an event handler for an Ubuntu button that calls navigator.camera.getPicture(...). That's a standard and straightforward pattern for a lot of what you can do with Cordova APIs.

Next steps
Check out the Cordova Camera Tutorial, which provides all the steps you need to make a working HTML5 camera app that lets you snap a picture and then displays it in the app. You may also want to check out the HTML5 Guide for an overview of Ubuntu HTML5.

HTML5 Tutorials - Cordova camera app
This tutorial takes you through the steps needed to create an HTML5 app that uses the Cordova runtime and its Camera API. The app we develop here is quite simple:
• It provides a Take Picture button.
• When Take Picture is clicked, the Cordova camera displays.
• The user takes a picture.
• The picture is returned through Cordova and is displayed in the app's HTML.

Before getting started
Cordova guide
You may want to read the Cordova Guide. It contains all the info you need to set up your development environment, the three prerequisites being:
• Installing cordova-cli from the Ubuntu Cordova PPA
• Creating a click chroot for the armhf architecture, to run and contain your application
• Installing build dependencies in the click chroot; refer to the corresponding section in the Cordova Guide

HTML5 UI Toolkit basics
This tutorial is not focused on the UI Toolkit.
For help, see the Ubuntu HTML5 UI Toolkit Guide.

Getting the resources for this app

You can obtain the source tree for this app as follows: open a terminal with Ctrl+Alt+T and get the branch with:

    $ bzr branch lp:ubuntu-sdk-tutorials

Creating your Cordova app project

We will be creating the application from scratch and copy-pasting parts from the reference code. You will need to instantiate a new project with the following Cordova command:

    $ cordova create cordovacam cordovacam.mydevid
    $ cd cordovacam

Tip: You may want to add the app project files to revision control such as Bazaar and commit them (except the .user file, which is typically not stored in VCS).

Define your application icon

To define the icon for your application, you should first copy the sample icon from the ubuntu-sdk-tutorials/html5/html5-tutorial-cordova-camera directory:

    $ cp ../ubuntu-sdk-tutorials/html5/html5-tutorial-cordova-camera/www/icon.png ./www/img/logo.png

Then you need to add this entry into the Cordova app configuration file. Edit the config.xml file and add the line below:

    <icon src="www/img/logo.png" />

Note: this is a mandatory step, to let the application pass the package validation tests.

Add the Ubuntu platform support code to your project

As explained in the Cordova Guide, you need to add platform support code to your project, which will be compiled and integrated in the Cordova runtime shipped with your application. Add the Cordova Ubuntu runtime files into your app project:

    $ cordova platform add ubuntu

Now, your project contains some additional files, notably: platforms/ubuntu/

Add support for the Camera API

Add the Cordova Camera plugin into your app project:

    $ cordova plugin add cordova-plugin-camera

Tip: Put all of the files added by the previous commands into your version control system and commit them as appropriate.

Build the app

Use the standard Cordova command line tool to prepare the app for running on your Ubuntu phone. Generally, you don't need to build the app and then run it: the run command ensures the app is built and the click package is sent to the phone before the application is started directly.

    $ cordova run --device --debug

Tip: you may see warning messages after the build, for example if you haven't specified an icon for your application yet.

As the application is started on the device, you should also notice that the output contains debug messages that let you connect to the running JavaScript code and inspect the HTML5 UI. At this point, the app GUI is still in its default, unmodified state. We implement our app GUI in the next section.

Define the HTML5 GUI

Here we replace the GUI declared in the default app with one appropriate for this Camera app.
• In index.html, add the following stylesheet declarations in the <head> section of the document:

    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0">
    <!-- Ubuntu UI Style imports - Ambiance theme -->
    <link href="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/css/appTemplate.css" rel="stylesheet" type="text/css" />
    <!-- Ubuntu UI javascript imports - Ambiance theme -->
    <script src="[...]/tabs.js"></script>

• Ensure you include the following two JavaScript files in the <head> section as well:

    <!-- Cordova platform API access - Uncomment this to have access to the Javascript APIs -->
    <script src="cordova.js"></script>
    <!-- Application script and css -->
    <script src="js/app.js"></script>

• Then, delete the entire div inside the <body> ... </body> element and add the following new HTML fragment (the data-role attributes follow the Ubuntu HTML5 UI Toolkit conventions):

    <div data-role="mainview">
      <header data-role="header">
        <ul data-role="tabs">
          <li data-role="tabitem" data-page="camera">Camera</li>
        </ul>
      </header>
      <div data-role="content">
        <div data-role="tab" id="camera">
          <div id="loading">
            <header>Loading...</header>
            <progress class="bigger">Loading...</progress>
          </div>
          <div id="loaded">
            <button data-role="button" id="takePicture">Take Picture</button>
            <img id="image" src="" />
          </div>
        </div> <!-- tab: camera -->
      </div> <!-- content -->
    </div> <!-- mainview -->

This is a simple implementation of an Ubuntu HTML5 app. It declares the following:
• A mainview div (required)
• A header with a single tabitem: "Camera"
• A content div with two internal divs: loading and loaded
• The loading div displays at launch time and includes a progress spinner. It is hidden, once Cordova is ready, by JavaScript code we look at later
• The loaded div displays when Cordova is ready and contains:
• A Take Picture button: we create an event listener for this below to pop up the Cordova Camera
• An empty img element: when the camera takes a picture, this element is used to display the returned image

If you run the app now, the GUI appears as follows: as noted above, it is the loading div that displays until the Cordova deviceready event is received.

Tip: To isolate your application UI from future UI toolkit changes, we now recommend bundling a copy of the toolkit inside your application package. There is a small tool, documented here, that will assist you in migrating your project.

Note: at the end of the index.html file you should also see a reference to a cordova.js script file, which is loaded at the beginning of the page. This file is not present in the source 'www' directory. However, it is automatically copied with the rest of the Cordova runtime startup code during the build phase. So don't worry, the file will be present in the resulting click package.

Let's take the next step and add the JavaScript that responds to the Cordova deviceready event by hiding the loading div, displaying the loaded div, and providing an event handler for the Take Picture button.

Adding JavaScript to display the Cordova Camera

Here we add an event handler for the Cordova deviceready event and, inside that code, set up our Take Picture button to call the Cordova Camera API and let the user take a picture. You should mostly replace the default www/js/index.js file with a new file called app.js from the tutorial branch. We will look at the key elements of this file below. The first step is to init the UbuntuUI object to set up the main user interface parts.
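Taken as a whole, the file has roughly this shape. This is a hedged sketch only: the element ids and the takePicture button id follow the GUI fragment above, and the callback details are illustrative rather than the tutorial's exact code:

    // Sketch: initialize the UI, then wire up Cordova once it is ready.
    window.onload = function () {
        var UI = new UbuntuUI();
        UI.init();

        document.addEventListener('deviceready', function () {
            // Cordova is ready: swap the loading view for the real UI.
            document.getElementById('loading').style.display = 'none';
            document.getElementById('loaded').style.display = 'block';

            UI.button('takePicture').click(function () {
                navigator.camera.getPicture(
                    function (data) {
                        // Show the captured picture in the img element.
                        document.getElementById('image').src =
                            'data:image/jpeg;base64,' + data;
                    },
                    function (error) {
                        console.log('Camera failed: ' + error);
                    },
                    { destinationType: Camera.DestinationType.DATA_URL });
            });
        }, false);
    };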
The following event listener is triggered on the initial window load event and prepares the rest of the UI:

    window.onload = function () {
        var UI = new UbuntuUI();
        UI.init();
        document.addEventListener("deviceready", function() {
            if (console && console.log)
                console.log('Platform layer API ready');
            // hide the loading div and display the loaded div
            document.getElementById("loading").style.display = "none";
            [...]
        }, false);
    };

Now, the Loading page and the home page look like this: (the tutorial shows screenshots of both pages here).

Next steps

Check out the Cordova Guide for a high level review of using Cordova in Ubuntu HTML5 apps and for adding Ubuntu as a built platform for native Cordova projects. The Cordova APIs give your HTML5 apps access to other system and device-level features, so check these out by visiting the Cordova API docs.

HTML5 Tutorials - online accounts

Here we provide and discuss two example HTML5 apps that use the Ubuntu App Platform JavaScript Online Accounts API:
• html5-example-online-accounts app: This app lets you browse all currently enabled Online Accounts and lets you drill down to see account details, including authorization status and token.
• html5-example-online-accounts-facebook-albums app: This app uses an enabled Facebook account to browse your Facebook albums and photos.

Two terms come up throughout:
• Provider: An object that represents a web service provider. For example, Facebook is a Provider. Google is another.
• Service: An object that represents a particular service offered by a Provider.

Getting the source trees

The app source trees for these two example apps are available as subdirectories in the ubuntu-sdk-tutorials Bazaar branch on launchpad.net. Get the branch as follows:
1. Open a terminal with Ctrl + Alt + T.
2. Ensure the bzr package is installed with:
    $ sudo apt-get install bzr
Tip: Tell bzr who you are with bzr whoami.
3. Get the branch with:
    $ bzr branch lp:ubuntu-sdk-tutorials
4. Move into the branch's html5/ directory:
    $ cd ubuntu-sdk-tutorials/html5

The two apps are subdirectories named for the app:
• html5-example-online-accounts/
• html5-example-online-accounts-facebook-albums/

Run the apps

Run both apps to familiarize yourself with them:
1. Ensure you have enabled some Online Accounts with System Settings > Online Accounts
2. Move into the appropriate app subdirectory, either:
    $ cd ubuntu-sdk-tutorials/html5/html5-example-online-accounts
or:
    $ cd ubuntu-sdk-tutorials/html5/html5-example-online-accounts-facebook-albums
3. Launch the app, for example on the Desktop, with:
    ubuntu-html5-app-launcher --www=www

App 1: Online Accounts browser

This app lets you browse and drill into currently available Online Accounts.
• The app's home page provides optional input fields to limit the displayed accounts by filtering by Provider and Service.
• There's a Show Accounts button to list accounts.
• You can click an account to show the Account Details page, which includes the authorization status and token.
• When on Account Details, you can click to show the Raw Details page for the account, which is simply the account details displayed as JSON.

The account list is retrieved with a single API call that takes a FILTERS object and a CALLBACK function:
• The FILTERS object has two keys: 'provider' and 'service'. When these keys have values, the returned accounts are limited to those that match.
• The CALLBACK runs when the accounts are returned.
The Account Details page is built from three lists:
• The first displays a single item, the account's displayName, obtained with ACCOUNT.displayname()
• The second iterates through the Provider object keys and adds a list item with each key and its value
• The third does the same, but for the Service object

App 2: Facebook albums browser
• The app home page has a Get Albums button that displays a list of your Facebook albums
• You can click an album list item to display an Album page that shows the photos in the album using the Ubuntu Shapes widget
• You can click a photo shape on the Album page to display the Photo page, which shows the photo in a larger format

To summarize:
• Online Accounts keeps track of user enabled web accounts, including authorization status and tokens
• The Online Accounts JavaScript API lets your HTML5 app obtain this information
• You can get a list of Accounts identified by Provider and Service
• You can get authorization data for each account for the current user
• You can use the authorization data to interact with the external web site through its API and build rich apps that include personal content from protected external sources

HTML5 Tutorials - unit testing

In this tutorial you will learn how to write a unit test to strengthen the quality of your Ubuntu HTML5 application. It builds upon the HTML5 development tutorials.

Requirements
• Ubuntu 14.10 or later – Get Ubuntu
• The HTML5 development tutorials – If you haven't already, complete the HTML5 development tutorials
• nodejs – Open a terminal with Ctrl+Alt+T and run this command to install all required packages:
– sudo apt-get install nodejs

In Ubuntu, unit tests for your HTML5 application:
• Are written in javascript
• Utilize jasmine, grunt and nodejs

Speaking Jasmine

A simple spec (testcase)

A basic spec is very simple:
• Declare a describe() function. This forms the test suite definition
• Using the it function, create test cases using javascript
• Utilize expect and matchers to make an assertion about results

    describe("Testsuite", function() {
        it("testname", function() {
            expect(true).toBe(true);
        });
    });

Example

For example, here's a simple test suite for a function which reverses a string:

    describe('String Tests', function(){
        beforeEach(function(){
            stringFunc = {
                reverse: function(string) {
                    var reversed = '';
                    for (var i = string.length - 1; i >= 0; i--) {
                        reversed += string[i];
                    }
                    return reversed;
                }
            };
        });

        it("string is reversed", function() {
            string = 'thisismystring';
            expect(stringFunc.reverse(string)).toEqual('gnirtsymsisiht');
        });
    });

Building blocks of a spec

describe function: This defines the testsuite. It takes two parameters: a string argument which is used as the name of the suite, and a function which contains the testsuite code.

it function: This defines the testcase. It also takes two parameters: a string argument which is used as the name of the testcase, and a function which contains the testcase code.

expect function: This is used together with matchers to make expectations or assertions. It takes a single parameter that is used as the first part of the assertion.

Matchers

Matchers are utilized to provide the logic for expect, as above.
There is a plethora of built-in matchers that jasmine makes available by default. These matchers all take a single parameter that, combined with the matcher, serves as the second part of the assertion. Below is a list of built-in matchers:
• toBe – compares with ===
• toEqual – compares with ==
• toMatch – for regular expressions
• toBeDefined – compares against undefined
• toBeNull – compares against null
• toBeTruthy – for boolean casting testing
• toBeFalsy – for boolean casting testing
• toContain – for finding an item in an array
• toBeLessThan – for mathematical comparisons
• toBeGreaterThan – for mathematical comparisons
• toBeCloseTo – for precision math comparison
• toThrow – for testing if a function throws an exception
• toThrowError – for testing a specific thrown exception

Advanced Usage

Setup and Teardown

Should you need to perform actions before or after each testcase runs, or before or after an entire testsuite runs, you can utilize the aptly named Each and All functions. These are beforeEach, afterEach, beforeAll, and afterAll. The All functions are performed before and after each testsuite, while the Each functions are performed before and after each testcase. Here's an example with two simple testcases:

    describe("testsuite1", function() {
        beforeAll(function() {
            waybefore = 1;
        });
        beforeEach(function() {
            before = 1;
        });
        afterEach(function() {
            before = 0;
        });
        afterAll(function() {
            waybefore = 0;
        });
        it("test1", function() {
            expect(true).toBe(true);
        });
        it("test2", function() {
            expect(false).toBe(false);
        });
    });

And finally, here's how they will be executed:

    beforeAll
    beforeEach
    test1
    afterEach
    beforeEach
    test2
    afterEach
    afterAll

Custom Matchers

Sometimes you might need to make an assertion that isn't readily covered by the built-in matchers. To alleviate this problem, you can define your own custom matcher for later use. A custom matcher must contain a compare function that returns a results object. This object must have a pass boolean that is set to true when successful, and false when unsuccessful. While optional, you should also define a message property that is used when a failure occurs.

Example

Here's an example custom matcher to check and ensure a value is even:

    var customMatchers = {
        toBeEven: function() {
            return {
                compare: function(actual, expected) {
                    var result = {};
                    result.pass = (actual % 2) === 0;
                    if (!result.pass) {
                        result.message = "Expected " + actual + " to be even";
                    }
                    return result;
                }
            };
        }
    };

To include a custom matcher in your testcases, utilize the addMatchers function. This can be done for each testcase or testsuite using the aforementioned Each and All functions. For example, for our toBeEven custom matcher:

    beforeEach(function() {
        jasmine.addMatchers(customMatchers);
    });

Spies

A spy allows you to spy on any function, tracking all calls and arguments to that function. This allows you to easily keep track of things and gain useful insight into what is happening inside of different functions. This also allows you to fake any piece of a function you wish. For example, you can fake a return value from a function, throw an error, or even call a different function.
• and.throwError – forces an error to be thrown
• and.callThrough – tracks the call, then delegates to the actual function
• and.callFake – allows you to call a different function completely
• and.stub – makes the spy do nothing, overriding any callFake or callThrough
• and.returnValue – forces the returned value from the function call

Here's an example of changing a returned value via the and.returnValue function:

    describe('Spy Fake Return', function(){
        beforeEach(function(){
            myFunc = {
                returnZero: function() {
                    return 0;
                }
            };
        });
        it("spy changes value", function() {
            spyOn(myFunc, "returnZero").and.returnValue(1);
            expect(myFunc.returnZero()).toEqual(1);
        });
        it("normal value is zero", function() {
            expect(myFunc.returnZero()).toEqual(0);
        });
    });

Conclusion

Let me try!

Try Jasmine is an excellent web based resource that lets you experiment with and learn jasmine from the comfort of your browser. Try it out!

You've just learned how to write unit tests for an Ubuntu HTML5 application. But there is more information to be learned about how to write HTML5 tests. Check out the links below for more documentation and help.

Resources
• Jasmine
• Grunt
• NodeJS
• HTML5 SDK documentation

HTML5 tutorials - writing functional tests

In this tutorial you will learn how to write functional tests to strengthen the quality of your Ubuntu HTML5 application. It builds upon the HTML5 development tutorials.

Requirements
• Ubuntu 14.10 or later – Get Ubuntu
• The HTML5 development tutorials – If you haven't already, complete the HTML5 development tutorials
• autopilot, selenium – Open a terminal with Ctrl+Alt+T and run these commands to install all required packages:
– sudo apt-add-repository ppa:canonical-platform-qa/selenium
– sudo apt-get update
– sudo apt-get install python3-autopilot python3-selenium oxideqt-chromedriver

What are acceptance tests?

Functional or acceptance tests help ensure your application behaves properly from a user perspective. The tests seek to mimic the user as closely as possible. Acceptance tests are the pinnacle of the testing pyramid. The testing pyramid describes the three levels of testing an application, going from low level tests at the bottom to high level tests at the top. As acceptance tests are the highest level, they represent the smallest number of tests, but are also likely to be the most complex.

In Ubuntu, functional tests for your HTML5 application:
• Are written in python
• Utilize selenium and autopilot

What are autopilot and selenium?

Autopilot is a tool for introspecting applications using dbus. What this means is that autopilot can read application objects and their properties, while also allowing you to mock user interactions like clicking, tapping and sending keystrokes. Selenium is also a testing tool, meant for testing web applications. Like autopilot, it allows you to find and interact with page elements, but it does this by driving a browser and providing programmatic access to it.

A simple testcase

The setup

Before you can run a testcase, you'll need to set up your environment:
• Create a test class that inherits AutopilotTestCase
• Define your Setup() and TearDown() functions
• Launch the application with introspection via launch_test_application

Fortunately, this setup is taken care of for you by the testing templates provided by the SDK; a sketch of its overall shape follows below.
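In outline, such a test class looks roughly like this. This is a hedged sketch only: the class name is illustrative, and launch_html5_app is the helper broken down in the next section, not verbatim template code:

    # Sketch of the shape of an HTML5 functional test class.
    import ubuntuuitoolkit as uitk
    from autopilot.testcase import AutopilotTestCase


    class HTML5ClickAppTestCase(AutopilotTestCase):

        def setUp(self):
            super(HTML5ClickAppTestCase, self).setUp()
            # Launch the app with introspection, then attach the
            # selenium webdriver (both helpers are shown below).
            self.launch_html5_app()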
Let's break down a few important pieces to understand. First is how we launch the application. Autopilot is used to introspect the html5-app-launcher executable, which runs the web app and contains the web view:

    def launch_html5_app_inline(self, args):
        return self.launch_test_application(
            'ubuntu-html5-app-launcher',
            *args,
            emulator_base=uitk.UbuntuUIToolkitCustomProxyObjectBase)

Next, we define a webdriver for selenium that we can use to interact with the webview. A webdriver is an interface to a browser that allows programmatic interaction with an application. Each browser has a separate browser driver. Since our HTML5 application runs on Blink, we launch a Chrome driver:

    def launch_webdriver(self):
        options = Options()
        options.binary_location = ''
        options.debugger_address = '{}:{}'.format(
            DEFAULT_WEBVIEW_INSPECTOR_IP, DEFAULT_WEBVIEW_INSPECTOR_PORT)
        self.driver = webdriver.Chrome(
            executable_path=CHROMEDRIVER_EXEC_PATH,
            chrome_options=options)

Finally, we are able to launch the application and start the webdriver once it's loaded:

    def launch_html5_app(self):
        self.app_proxy = self.launch_html5_app_inline()
        self.wait_for_app_to_launch()
        self.launch_webdriver()

Building blocks of a testcase

Testcase
• Create a Testcase class that inherits your test class
• Define your Setup() (and perhaps TearDown()) functions
• Launch the application with introspection via launch_test_application

Here's a simple example of testing an HTML5 app with 2 buttons:

    def test_for_buttons(self):
        html5_doc_buttons = self.page.find_elements_by_css_selector(
            "#hello-page a")
        self.assertThat(len(html5_doc_buttons), Equals(2))

Making use of selenium

Once you've launched the application successfully, you will have access to the object tree as usual. You will find the objects you need under the WebAppContainer object. A simple select will get you the object:

    select_single(WebAppContainer)

Even further, you can also utilize the selenium webdriver methods to interact with the application. For example, you will find it useful to search for objects using selenium, while interacting with the container will be easier using autopilot (tapping the back button, for example). As you see in the example above, we are able to easily find elements on the page using the find_elements_by_css_selector method, which is provided by the selenium webdriver. This is in contrast to introspecting for the object over the dbus tree via autopilot.

Finding and Selecting Objects

Fortunately, selenium also makes it easy to find and introspect objects. You can issue a find by id, name, path, link, tag, class, and css! You can also find multiple elements by most of the same attributes. You can read more about finding elements in the Selenium documentation. Once you have found an element, you can interact with it by reading its properties or performing an action. Let's talk about each one.

Reading attributes

You can read element attributes by utilizing the get_attribute method. For example, we can read attributes of the button from the previous example:

    button.get_attribute("class")

Note that getting a list of all attributes isn't possible via the API. Instead, you can visualize the element using web developer tools or javascript to list its attributes. You can also get values of css properties via the value_of_css_property method.
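For example, a hedged one-liner (the property name is illustrative):

    # Read a computed CSS property from the button found above.
    background = button.value_of_css_property("background-color")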
Action Chains

Now that we can find objects and get details about them, let's interact with them as well. A user interacting with our application will swipe and tap our UI elements. To do the same in selenium, we can utilize what is known as an action chain. This is simply a set of actions that we ask selenium to perform in the same way as a user. Let's provide an example by expanding the example testcase we gave above. After finding the buttons, let's add an action to click the first button. First, let's define a new action chain for the main page:

    actions = ActionChains(self.page)

Now we can add actions to perform. Selenium allows us to click on items, drag, move, etc. For our purposes, let's add a single action to click the button:

    actions.click(button)

Once all of our actions are added, we call the perform method to execute the actions. So, putting it all together, here's our full testcase:

    def test_click_button(self):
        button = self.page.find_elements_by_class_name("ubuntu")[0]
        actions = ActionChains(self.page)
        actions.click(button)
        actions.perform()

To find out about other useful methods, check out the ActionChains documentation.

Assertions and Expectations

In addition to the suite of assertions that autopilot has, selenium allows you to create expectations about elements. These are called expected conditions. For example, we could wait for an element to be clickable before clicking on it:

    wait.until(expected_conditions.element_to_be_clickable(
        (By.CLASS_NAME, "ubuntu")))

Page Object Model

When you are architecting your test suite, it's important to think about design. Functional tests are the most UI sensitive testcases in your project and are more likely to break than lower level tests. To address this issue, the page object model can guide you towards writing tests that can scale and deal with changes over time easily. Check out the Page Object Model section for more information.

Conclusion

You've just learned how to write acceptance tests for an Ubuntu HTML5 application. But there is more information to be learned about how to write HTML5 tests. Check out the links below for more documentation and help.

Resources
• Autopilot API
• Selenium Webdriver API
• HTML5 SDK documentation

HTML 5 API

Ubuntu HTML5 APIs enable a rich set of technologies for your applications to integrate with and blend in with the platform. The documentation will provide you with detailed technical information and examples on how to make the most of device and platform functionalities.

Note: The API documentation has not yet been imported. The old canonical documentation can be found here.

Autopilot

Note: Here be dragons! This part of the docs could be very outdated or incomplete and has not been completely triaged. Refer to the Ubuntu docs for further reference.

ubuntuuitoolkit

Ubuntu UI Toolkit Autopilot tests and helpers.

class ubuntuuitoolkit.AppHeader(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
AppHeader Autopilot custom proxy object.

click_action_button(action_object_name)
Click an action button of the header.
Parameters: action_object_name – The QML objectName property of the action
Raises ToolkitException: If there is no action button with that object name.
click_back_button()
click_custom_back_button()
ensure_visible()
get_selected_section_index()

switch_to_next_tab(instance, *args, **kwargs)
Open the next tab.
Raises ToolkitException: If the main view has no tabs.

switch_to_section_by_index(instance, *args, **kwargs)
Select a section in the header divider.
Parameters: index – The index of the section to select
Raises ToolkitEmulatorException: If the selection index is out of range or useDeprecatedToolbar is set.

switch_to_tab_by_index(instance, *args, **kwargs)
Open a tab. This only supports the new tabs in the header.
Parameters: index – The index of the tab to open.
Raises ToolkitException: If the tab index is out of range or useDeprecatedToolbar is set.

wait_for_animation()

ubuntuuitoolkit.check_autopilot_version()
Check that the installed Autopilot version matches the one required.
Raises ToolkitException: If the installed Autopilot version doesn't match the one required by the custom proxy objects.

class ubuntuuitoolkit.CheckBox(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
CheckBox Autopilot custom proxy object.

change_state(instance, *args, **kwargs)
Change the state of a CheckBox. If it is checked, it will be unchecked. If it is unchecked, it will be checked.
Parameters: time_out – number of seconds to wait for the CheckBox state to change. Default is 10.

check(instance, *args, **kwargs)
Check a CheckBox, if it's not already checked.
Parameters: timeout – number of seconds to wait for the CheckBox to be checked. Default is 10.

uncheck(instance, *args, **kwargs)
Uncheck a CheckBox, if it's not already unchecked.
Parameters: timeout – number of seconds to wait for the CheckBox to be unchecked. Default is 10.

ubuntuuitoolkit.get_keyboard()
Return the keyboard device.

ubuntuuitoolkit.get_pointing_device()
Return the pointing device depending on the platform. If the platform is Desktop, the pointing device will be a Mouse. If not, the pointing device will be Touch.

class ubuntuuitoolkit.Header(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._header.AppHeader
Autopilot helper for the deprecated Header.

class ubuntuuitoolkit.Dialog(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
Autopilot helper for the Dialog component.

class ubuntuuitoolkit.UCListItem(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
Base class to emulate swipe for leading and trailing actions.

toggle_selected(instance, *args, **kwargs)
Toggles the selected state of the ListItem.

trigger_leading_action(instance, *args, **kwargs)
Swipe the item from left to right to open the leading actions and click the button representing the requested action.
Parameters: action_objectName – object name of the action to be triggered. wait_function – a custom wait function to wait until the action is triggered

trigger_trailing_action(instance, *args, **kwargs)
Swipe the item from right to left to open the trailing actions and click the button representing the requested action.
Parameters: action_objectName – object name of the action to be triggered. wait_function – a custom wait function to wait until the action is triggered

class ubuntuuitoolkit.MainView(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
MainView Autopilot custom proxy object.
click_action_button(instance, *args, **kwargs)
Click the specified button.
Parameters: action_object_name – the objectName of the action to trigger.
Raises ToolkitException: If the requested button is not available.

close_toolbar(instance, *args, **kwargs)
Close the toolbar if it is opened.
Raises ToolkitException: If the main view has no toolbar.

get_action_selection_popover(object_name)
Return an ActionSelectionPopover custom proxy object.
Parameters: object_name – The QML objectName property of the popover.

get_header()
Return the AppHeader custom proxy object of the MainView.

get_tabs()
Return the Tabs custom proxy object of the MainView.
Raises ToolkitException: If the main view has no tabs.

get_text_input_context_menu(object_name)
Return a TextInputContextMenu emulator.
Parameters: object_name – The QML objectName property of the popover.

get_toolbar()
Return the Toolbar custom proxy object of the MainView.
Raises ToolkitException: If the main view has no toolbar.

go_back(instance, *args, **kwargs)
Go to the previous page.

open_toolbar(instance, *args, **kwargs)
Open the toolbar if it is not already opened.
Returns: The toolbar.
Raises ToolkitException: If the main view has no toolbar.

switch_to_next_tab(instance, *args, **kwargs)
Open the next tab.
Returns: The newly opened tab.

switch_to_previous_tab(instance, *args, **kwargs)
Open the previous tab.
Returns: The newly opened tab.

switch_to_tab(instance, *args, **kwargs)
Open a tab.
Parameters: object_name – The QML objectName property of the tab.
Returns: The newly opened tab.
Raises ToolkitException: If there is no tab with that object name.

switch_to_tab_by_index(instance, *args, **kwargs)
Open a tab.
Parameters: index – The index of the tab to open.
Returns: The newly opened tab.
Raises ToolkitException: If the tab index is out of range.

classmethod validate_dbus_object(path, state)

class ubuntuuitoolkit.OptionSelector(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
OptionSelector Autopilot custom proxy object.

get_current_label()
Gets the text of the currently selected item.

get_option_count()
Gets the number of items in the option selector.

get_selected_index()
Gets the current selected index of the QQuickListView.

get_selected_text()
Gets the text of the currently selected item.

select_option(*args, **kwargs)
Select a delegate in the option selector.
Example usage:
    select_option(objectName="myOptionSelectorDelegate")
    select_option('Label', text="some_text_here")
Parameters: kwargs – keywords used to find property(s) of the delegate in the option selector

class ubuntuuitoolkit.QQuickFlickable(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._flickable.Scrollable

pull_to_refresh(instance, *args, **kwargs)
Pulls the flickable down and triggers a refresh on it.
Raises ubuntuuitoolkit.ToolkitException: If the flickable has no pull to release functionality.

swipe_child_into_view(instance, *args, **kwargs)
Make the child visible. Currently it works only when the object needs to be swiped vertically. TODO implement horizontal swiping.
–elopio - 2014-03-21

swipe_to_bottom(instance, *args, **kwargs)
swipe_to_show_more_above(instance, *args, **kwargs)
swipe_to_show_more_below(instance, *args, **kwargs)
swipe_to_top(instance, *args, **kwargs)

class ubuntuuitoolkit.QQuickGridView(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._flickable.QQuickFlickable
Autopilot helper for the QQuickGridView component.

class ubuntuuitoolkit.QQuickListView(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._flickable.QQuickFlickable

click_element(instance, *args, **kwargs)
Click an element from the list. It swipes the element into view if its center is not visible.
Parameters: objectName – The objectName property of the element to click. direction – The direction where the element is; it can be either 'above' or 'below'. Default value is None, which means we don't know where the object is and we will need to search the full list.

drag_item(instance, *args, **kwargs)

enable_select_mode(instance, *args, **kwargs)
Default implementation to enable select mode. Performs a long tap over the first list item in the ListView. The delegates must be the new ListItem components.

class ubuntuuitoolkit.TabBar(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
TabBar Autopilot custom proxy object.

switch_to_next_tab(instance, *args, **kwargs)
Open the next tab.

class ubuntuuitoolkit.Tabs(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
Tabs Autopilot custom proxy object.

get_current_tab()
Return the currently selected tab.

get_number_of_tabs()
Return the number of tabs.

class ubuntuuitoolkit.TextArea(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._textfield.TextField
TextArea autopilot emulator.

clear()
Clear the text area.

class ubuntuuitoolkit.TextField(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
TextField Autopilot custom proxy object.

clear(instance, *args, **kwargs)
Clear the text field.

is_empty()
Return True if the text field is empty, False otherwise.

write(instance, *args, **kwargs)
Write into the text field.
Parameters: text – The text to write. clear – If True, the text field will be cleared before writing the text. If False, the text will be appended at the end of the text field. Default is True.

class ubuntuuitoolkit.Toolbar(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._common.UbuntuUIToolkitCustomProxyObjectBase
Toolbar Autopilot custom proxy object.

click_back_button(instance, *args, **kwargs)
Click the back button of the toolbar.

click_button(instance, *args, **kwargs)
Click a button of the toolbar. The toolbar should be opened before clicking the button, or an exception will be raised. If the toolbar is closed for some reason (e.g., a timer finishes) after moving the mouse cursor and before clicking the button, it is re-opened automatically by this function.
Parameters: object_name – The QML objectName property of the button.
Raises ToolkitException: If there is no button with that object name.

close(instance, *args, **kwargs)
Close the toolbar if it's opened.

open(instance, *args, **kwargs)
Open the toolbar if it's not already opened.
Returns: The toolbar.
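Taken together, these helpers are typically used from a test roughly like this. This is a hedged sketch: app_proxy stands for the proxy object returned when the application was launched, and 'save_button' is a hypothetical objectName:

    import ubuntuuitoolkit

    # Select the main view through its custom proxy class, then drive it.
    main_view = app_proxy.select_single(ubuntuuitoolkit.MainView)
    toolbar = main_view.get_toolbar()    # raises ToolkitException if no toolbar
    toolbar.open()
    toolbar.click_button('save_button')  # hypothetical QML objectName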
exception ubuntuuitoolkit.ToolkitException
Bases: exceptions.Exception
Exception raised when there is an error with the custom proxy object.

class ubuntuuitoolkit.UbuntuListView11(*args)
Bases: ubuntuuitoolkit._custom_proxy_objects._qquicklistview.QQuickListView
Autopilot helper for the UbuntuListView 1.1.

manual_refresh_nowait()
manual_refresh_wait()
pull_to_refresh_enabled()
wait_refresh_completed()

class ubuntuuitoolkit.UbuntuUIToolkitCustomProxyObjectBase(*args)
Bases: autopilot.introspection.dbus.CustomEmulatorBase
A base class for all the Ubuntu UI Toolkit custom proxy objects.

is_flickable()
Check if the object is flickable. If the object has a flicking attribute, we consider it a flickable.
Returns: True if the object is flickable, False otherwise.

swipe_into_view(instance, *args, **kwargs)
Make the object visible. Currently it works only when the object needs to be swiped vertically. TODO implement horizontal swiping. –elopio - 2014-03-21

tutorial-getting_started

This document contains everything you need to know to write your first autopilot test. It covers writing several simple tests for a sample Qt5/Qml application. However, it's important to note that nothing in this tutorial is specific to Qt5/Qml, and it will work equally well with any other kind of application.

Files and Directories

Your autopilot test suite will grow to several files, possibly spread across several directories. We recommend that you follow this simple directory layout: the autopilot folder can be anywhere within your project's source tree, and will likely contain a setup.py file. The autopilot/<projectname>/ folder is the base package for your autopilot tests. This folder, and all child folders, are python packages, and so must contain an __init__.py file. If you ever find yourself writing custom proxy classes (an advanced topic, covered in Writing Custom Proxy Classes), they should be imported from this top-level package. Each test file should be named test_<component>.py, where <component> is the logical component you are testing in that file. Test files must be written in the autopilot/<projectname>/tests/ folder.

A Minimal Test Case

Autopilot tests follow a similar pattern to other python test libraries: you must declare a class that derives from AutopilotTestCase. A minimal test case declares such a class with a single test method; a sketch combining this with the setup phase described below appears at the end of this section.

Autopilot Says: Make your tests expressive!
It's important to make sure that your tests express your intent as clearly as possible. We recommend choosing long, descriptive names for test functions and classes (even breaking PEP 8, if you need to), and giving your tests a detailed docstring explaining exactly what you are trying to test. For more detailed advice on this point, see Write Expressive Tests.

The Setup Phase

Before each test is run, the setUp method is called. Test authors may override this method to run any setup that needs to happen before the test is run. However, care must be taken when using the setUp method: it tends to hide code from the test case, which can make your tests less readable. It is our recommendation, therefore, that you use this feature sparingly. A more suitable alternative is often to put the setup code in a separate function or method and call it from the test function.

Note: Any action you take in the setup phase must be undone if it alters the system state. See Cleaning Up for more details.
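Putting these pieces together, a minimal test case with a setup phase looks roughly like this (a hedged sketch; the class and test names are illustrative):

    from autopilot.testcase import AutopilotTestCase


    class MyExampleTestCase(AutopilotTestCase):

        def setUp(self):
            super(MyExampleTestCase, self).setUp()
            # Per the note above, anything done here that alters
            # system state must be undone (see Cleaning Up).

        def test_the_simplest_possible_thing(self):
            # A placeholder assertion; real tests assert on application state.
            self.assertTrue(True)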
Starting the Application

At the start of your test, you need to tell autopilot to launch your application. To do this, call launch_test_application. The minimum required argument to this method is the application name or path: if you pass in the application name, autopilot will look in the current working directory and then search the PATH environment variable; otherwise, autopilot looks for the executable at the path specified. Positional arguments to this method are passed to the executable being launched. Autopilot will try to guess what type of application you are launching, and therefore what kind of introspection libraries it should load. Sometimes autopilot will need some assistance, however. For example, at the time of writing, autopilot cannot automatically detect the introspection type for python / Qt4 applications; in that case, a RuntimeError will be raised. To provide autopilot with a hint as to which introspection type to load, you can provide the app_type keyword argument (for example, app_type='qt'). See the documentation for launch_test_application for more details.

The return value from launch_test_application is a proxy object representing the root of the introspection tree of the application you just launched.

Autopilot Says: What is a Proxy Object?
Whenever you launch an application, autopilot gives you a "proxy object". These are instances of the ProxyBase class, with all the data from your application mirrored in the proxy object instances. For example, if you have a proxy object for a push button class (say, QPushButton), the proxy object will have an attribute to match every attribute of the class within your application. Autopilot automatically keeps the data in these instances up to date, so you can use them in your test assertions. User interfaces are made up of a tree of widgets, and autopilot represents these widgets as a tree of proxy objects. Proxy objects have a number of methods on them for selecting child objects in the introspection tree, so test authors can easily inspect the parts of the UI tree they care about.

A Simple Test

To demonstrate the material covered so far, this section outlines a simple application and a single test for it. Instead of testing a third-party application, we write the simplest possible application in Python and Qt4. The application, named 'testapp.py', is trivial, but it serves our purpose; for the upcoming tests to run, this file must be executable. We write a single autopilot test that asserts that the title of the main window is equal to the string "Hello World". Our test file is named "test_window.py". Note that we have made the test method as readable as possible by hiding the complexities of finding the full path to the application we want to test. Of course, if you can guarantee that the application is in PATH, then this step becomes a lot simpler. The __init__.py files in the directory structure are empty, and are needed to make these directories importable by python.

Running Autopilot

From the root of this directory structure, we can ask autopilot to list all the tests it can find. Note that on the first line, autopilot will tell you where it has loaded the test definitions from. Autopilot will look in the current directory for a python package that matches the package name specified on the command line.
If it does not find any suitable packages, it will look in the standard python module search path instead. To run our test, we use the autopilot 'run' command. You will notice that the test application launches, and then disappears shortly afterwards. Since this test doesn't manipulate the application in any way, this is a rather boring test to look at. If you ever want more output from the run command, you may specify the '-v' flag. You may also specify '-v' twice for even more output (this is rarely useful for test authors, however). Both the 'list' and 'run' commands take a test id as an argument. You may be as generic, or as specific, as you like. In the examples above, we list and run all tests in the 'example' package (i.e. all tests), but we could specify a more specific run criteria if we only wanted to run some of the tests, for example only the single test we've written.

A Test with Interaction

Now let's take a look at some simple tests with some user interaction. First, update the test application with some input and output controls. We've reorganized the application code into a class to make the event handling easier. Then we added two input controls, the hello and goodbye buttons, and an output control, the response label. The operation of the application is still very trivial, but now we can test that it actually does something in response to user input. Clicking either of the two buttons will cause the response text to change: clicking the Hello button should result in Response: Hello, while clicking the Goodbye button should result in Response: Goodbye.

Since we're adding a new category of tests, button response tests, we should organize them into a new class. In addition to the new class, ButtonResponseTests, you'll notice a few other changes in the tests module. First, two new import lines were added to support the new tests. Next, the existing MainWindowTitleTests class was refactored to subclass from a base class, HelloWorldTestBase. The base class contains the launch_application method, which is used for all test cases. Finally, the object type of the main window changed from QMainWindow to AutopilotHelloWorld. The change in object type is a result of our test application being refactored into a class called AutopilotHelloWorld.

Autopilot Says: Be careful when identifying user interface controls.
Notice that our simple refactoring of the test application forced a change to the test for the main window. When developing application code, put a little extra thought into how the user interface controls will be identified in the tests. Identify objects with attributes that are likely to remain constant as the application code is developed.

The ButtonResponseTests class adds two new tests, one for each input control. Each test identifies the user interface controls that need to be used, performs a single, specific action, and then verifies the outcome. In test_hello_response, we first identify the QLabel control which contains the output we need to check. We then identify the Hello button. As the application has two QPushButton controls, we must further refine the select_single call by specifying an additional property; in this case, we use the button text. Next, an input action is triggered by instructing the mouse to click the Hello button. Finally, the test asserts that the response label text matches the expected string.
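A hedged sketch of what test_hello_response amounts to (the objectName, the button text property, and the response attribute are illustrative, since the full listing lives in the tutorial's code):

    from autopilot.input import Mouse, Pointer
    from autopilot.matchers import Eventually
    from testtools.matchers import Equals

    def test_hello_response(self):
        """Clicking Hello must set the label to 'Response: Hello'."""
        # Hedged sketch: control names and properties are illustrative.
        response = self.app.select_single('QLabel', objectName='response')
        hello_button = self.app.select_single('QPushButton', text='Hello')
        pointer = Pointer(Mouse.create())
        pointer.click_object(hello_button)
        self.assertThat(response.text, Eventually(Equals('Response: Hello')))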
The second test repeats the same process with the Goodbye button.

The Eventually Matcher

Notice that in the ButtonResponseTests tests above, the autopilot matcher Eventually is used in the assertion. This allows the assertion to be retried continuously until it either becomes true, or times out (the default timeout is 10 seconds). This is necessary because the application and the autopilot tests run in different processes; autopilot could test the assertion before the application has completed its action. Using Eventually allows the application to complete its action without having to explicitly add delays to the tests.

Autopilot Says: Use Eventually when asserting any user interface condition.
You may find that when running tests, the application is often ready with the outcome by the time autopilot is able to test the assertion without using Eventually. However, this may not always be true when running your test suite on different hardware.

tutorial

Logging: create a logger object. You can either do this at the file level scope, or within a test case class. Then log some messages; you may choose which level the messages should be logged at.

Note: To view log messages when using the debug level of logging, pass -vv when running autopilot. For more information on the various logging levels, see the python logging documentation.

Gestures: to swipe, perform the swipe operation from the center of the screen to the left edge, using autopilot.input.Pointer.drag. A pinch might be used in a browser or gallery application to zoom in or out of currently displayed content: to zoom in, pinch vertically outwards from the center point by 100 pixels; to zoom back out, pinch vertically 100 pixels back towards the center point.

Keyboards: you can specify that a UInput keyboard should be created, or, finally, the Onscreen Keyboard. Note that backend names are case sensitive: if uinput is mis-spelled, the backend will not be found, while specifying the correct backend name works as expected.

Keyboard Backends

A quick introduction to the Keyboard backends: each backend has a different method of operating behind the scenes to provide the Keyboard interface. Here is a quick overview of how each backend works.
• X11: The X11 backend generates X11 events using a mock input device, which it then syncs with X to actually action the input.
• UInput: The UInput backend injects events directly into the kernel using the UInput device driver to produce input.
• OSK: The Onscreen Keyboard backend uses the GUI pop-up keyboard to enter input. Using a pointer object, it taps on the required keys to get the expected output.

If you wish to implement more specific selection criteria, your class can override the validate_dbus_object method, which takes as arguments the dbus path and state. This method should return True if the object matches this custom proxy class, and False otherwise. If more than one custom proxy class matches an object, a ValueError will be raised at runtime.
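For example, a hedged sketch of such an override (the class name and path suffix are illustrative, and the exact type of the path argument can vary between autopilot versions):

    from autopilot.introspection.dbus import CustomEmulatorBase


    class MyWidget(CustomEmulatorBase):

        @classmethod
        def validate_dbus_object(cls, path, state):
            # Match only objects whose dbus path ends with '/MyWidget'.
            # Assumption: path arrives as bytes on this autopilot version.
            return path.endswith(b'/MyWidget')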
One example use of a custom proxy class with the Ubuntu UI Toolkit is swiping up a PageWithBottomEdge object to reveal its bottom edge menu. Pass the custom proxy base class as an argument to the launch_test_application method on your test class. This base class should be the same base class that is used to write all of your custom proxy objects. For applications using objects from the Ubuntu UI Toolkit, the emulator_base parameter should be ubuntuuitoolkit.UbuntuUIToolkitCustomProxyObjectBase. You can also pass the custom proxy class to methods like select_single instead of a string; for example, passing the QLabel class itself is a valid way of selecting the QLabel instances in an application.

click package

You can launch an installed click application from within a test case. Outside of testcase classes, the NormalApplicationLauncher, UpstartApplicationLauncher, and ClickApplicationLauncher fixtures can be used, both for ordinary applications and for installed click packages. Within a fixture or a testcase, self.useFixture can be used for either case. Additional options can also be specified to set a custom addDetail method, a custom proxy base, or a custom dbus bus with which to patch the environment. However, using this method it will not be possible to return an application specific custom proxy object; see Writing Custom Proxy Classes.

guides-installation

Contents: Installing Autopilot on Ubuntu and other Linuxes.

Once the PPA has been added to your system, you should be able to install the autopilot packages below.

Which packages should I install? Are you working on Ubuntu Touch applications? The autopilot-touch metapackage is for you. If you are sticking with GTK desktop applications, install the autopilot-desktop metapackage instead.

guides

List Tests

You can list tests with the list command, where <modulename> is the base name of the module you want to look at. The module must either be in the current working directory, or be importable by python. For example, you can list the tests inside autopilot itself (some results have been omitted for clarity). The list command takes only one option:
-ro, --run-order — Display tests in the order in which they will be run, rather than alphabetical order (which is the default).

Run Tests

Running autopilot tests is very similar to listing tests. However, the run command has many more options to customize the run behavior:
-h, --help — show this help message and exit
-o OUTPUT, --output OUTPUT — Write the test result report to file. Defaults to stdout. If given a directory instead of a file, will write to a file in that directory named <hostname>_<dd.mm.yyyy_HHMMSS>.log
-f FORMAT, --format FORMAT — Specify the desired output format. Default is "text"; the other option is "xml", which produces JUnit XML format.
-r, --record — Record failing tests. Requires the 'recordmydesktop' app to be installed. Videos are stored in /tmp/autopilot.
-rd PATH, --record-directory PATH — Directory to put recorded tests in (only if -r is specified).
-v, --verbose — If set, autopilot will output test log data to stderr during a test run.

Common use cases include running autopilot while saving the test log to a file, and running autopilot while recording failing tests.

Launching an Application to Introspect

In order to be able to introspect an application, it must first be launched with introspection enabled.
Autopilot provides the launch command to enable this. The <application> parameter can be the full path to the application, or the name of an application located somewhere on PATH:

    $ autopilot3 launch gedit

A Qt example, which passes parameters on to the application being launched, looks similar.

guides

Write Docstrings

You should write docstrings for your tests. Often the test method is enough to describe what the test does, but an English description is still useful when reading the test code. A good test tests one thing only, its lines match the typical three stages of a test (see above), and it only tests for things that it's supposed to; remember that it's fine to assume that other parts of unity work as expected. Such a test can often be simplified further, for example by removing set_unity_option lines that don't affect the test results at all.

Docstrings should follow PEP 8 and PEP 257 guidelines, avoid words like "should" in favor of stronger words like "must", and contain a one-line summary of the test. Additionally, they should include the launchpad bug number (if applicable).

Good Example: within the context of the test case, the docstring is able to explain exactly what the test does, without any ambiguity.

Bad Example: the docstring explains what the desired outcome is, but not how we're testing it. This style of sentence assumes test success, which is not what we want! A better version states what the test does and how the outcome is checked.

Since assertions can use any testtools matcher, you can also pass any object that follows the testtools matcher protocol (so you can write your own matchers, if you like).

In Proxy Classes

Avoid assuming fixed delays: a test that sleeps for two seconds is assuming that two seconds is long enough for the dash to open. Use the wait_for feature instead. Note that wait_for assumes you want to use the Equals matcher if you don't specify one, and it can also be used with any testtools matcher.

Scenarios allow one test to run in several configurations (please ignore the fact that we're assuming that we always have two monitors!). In the test class's setUp method, we can set the appropriate unity option and make sure we're using the correct launcher, which allows us to write tests that work automatically in all the scenarios. This works fine, and so far we've not done anything to cause undue pain... until we decide that we want to extend the scenarios with an additional axis.

When selecting objects, code that relies on the order of returned objects may work initially, but there's absolutely no guarantee that the order won't change in the future. A better approach is to select the individual components you need; that code will continue to work in the future.

guides

In the stopwatch example, we have to add a new method to the stopwatch page object: get_time.
But it only returns the state of the GUI as the user sees it. We leave in the test method the assertion that checks it’s the expected value.: 218 Chapter 6. App development UBports Documentation, Release 1.0. Porting your autopilot tests This document contains hints as to what is required to port a test suite from any version of autopilot to any newer version. Contents Porting Autopilot Tests A note on Versions Porting to Autopilot v1.4.x Gtk Tests and Boolean Parameters select_single Changes DBus backends and DBusIntrospectionObject changes Python 3 Porting to Autopilot v1.3.x QtIntrospectionTestMixin and GtkIntrospectionTestMixin no longer exist autopilot.emulators namespace has been deprecated: and instead had to write something like this: 6.3. The Ubuntu App platform - develop with seamless device integration 219 UBports Documentation, Release 1.0: You will instead need to have something like this instead: 220 Chapter 6. App development UBports Documentation, Release 1.0: In Autopilot 1.3, the AutopilotTestCase class contains this functionality directly, so the QtIntrospectionTestMixin and GtkIntrospectionTestMixin classes no longer exist. The above example becomes simpler: Autopilot will try and determine the introspection type automatically. If this process fails, you can specify the application type manually:: Old module New Module autopilot.emulators.input autopilot.input autopilot.emulators.X11 Deprecated - use autopilot.input for input and autopilot.display for getting display information. autopilot.emulators.bamf Deprecated - use autopilot.process instead. faq-contribute Contents Contribute Autopilot: Contributing 17. How can I contribute to autopilot? 17. Where can I get help / support? 17. How do I download the code? 17. How do I submit the code for a merge proposal? 17. How do I list or run the tests for the autopilot source code? 17. Which version of Python can Autopilot use? Autopilot: Contributing 17. How can I contribute to autopilot? Documentation: We can always use more documentation. if you don’t know how to submit a merge proposal on launchpad, you can write a bug with new documentation and someone will submit a merge proposal for you. They will give you credit for your documentation in the merge proposal. 6.3. The Ubuntu App platform - develop with seamless device integration 221 UBports Documentation, Release 1.0 New Features: Check out our existing Blueprints or create some yourself... Then code! Test and Fix: No project is perfect, log some bugs or fix some bugs. 17. Where can I get help / support? The developers hang out in the #ubuntu-autopilot IRC channel on irc.freenode.net. 17. How do I download the code? Autopilot is using Launchpad and Bazaar for source code hosting. If you’re new to Bazaar, or distributed version control in general, take a look at the Bazaar mini-tutorial first. Install bzr open a terminal and type: Download the code: This will create an autopilot directory and place the latest code there. You can also view the autopilot code on the web. 17. How do I submit the code for a merge proposal? After making the desired changes to the code or documentation and making sure the tests still run type: Write a quick one line description of the bug that was fixed or the documentation that was written. Signup for a launchpad account, if you don’t have one. Then using your launchpad id type: Example: All new features should have unit and/or functional test to make sure someone doesn’t remove or break your new code with a future commit. 17. 
How do I list or run the tests for the autopilot source code? Running autopilot from the source code root directory (the directory containing the autopilot/ bin/ docs/ debian/ etc. directories) will use the local copy and not the system installed version. An example from branching to running: Note The 'Loading tests from:' or 'Running tests from:' line will inform you where autopilot is loading the tests from. To run a specific suite or a single test in a suite, be more specific with the tests path. For example, running all unit tests: For example, running just the 'InputStackKeyboardTypingTests' suite: Or running a single test in the 'test_version_utility_fns' suite:

Q. Which version of Python can Autopilot use? Autopilot supports Python 3.4.

faq-faq

Contents Frequently Asked Questions – Autopilot: The Project – Q. Where can I get help / support? Q. Which version of autopilot should I install? Q. Should I write my tests in python2 or python3? Q. Should I convert my existing tests to python3? Q. Where can I report a bug? Q. What type of applications can autopilot test? – Autopilot Tests – Q. Autopilot tests often include multiple assertions. Isn't this bad practise? Q. How do I write a test that uses either a Mouse or a Touch device interchangeably? Q. How do I use the Onscreen Keyboard (OSK) to input text in my test? – Autopilot Tests and Launching Applications – Q. How do I launch a Click application from within a test so I can introspect it? Q. How do I access an already running application so that I can test/introspect it? – Autopilot Qt & Gtk Support – Q. How do I launch my application so that I can explore it with the vis tool? Q. What is the impact on memory of adding objectNames to QML items?

Autopilot: The Project

Q. Where can I get help / support? The developers hang out in the #ubuntu-autopilot IRC channel on irc.freenode.net.

Q. Which version of autopilot should I install? Ideally you should adopt and utilize the latest version of autopilot. If your testcase requires you to utilize an older version of autopilot for reasons other than Porting Autopilot Tests, please file a bug and let the development team know about your issue.

Q. Where can I report a bug? Autopilot is hosted on launchpad - bugs can be reported on the launchpad bug page for autopilot (this requires a launchpad account).

Q. Autopilot tests often include multiple assertions. Isn't this bad practise? Some tests need to wait for the application to respond to user input before the test continues. The easiest way to do this is to use the Eventually matcher in the middle of your interaction with the application. For example, if testing the Firefox browser's ability to print a certain web comic, we might produce a test that looks similar to:

Q. How do I write a test that uses either a Mouse or a Touch device interchangeably? Combined with test scenarios, this can be used to write tests that are run twice - once with a mouse device and once with a touch device: If you only want to use the mouse on certain platforms, use the autopilot.platform module to determine the current platform at runtime.

Autopilot Tests and Launching Applications

Q. How do I launch a Click application from within a test so I can introspect it? Launching a Click application is similar to launching a traditional application and is as easy as using launch_click_package, as in the sketch below.
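A minimal sketch of that pattern; com.example.fakeapp is a placeholder package id, not a real application:

    from autopilot.testcase import AutopilotTestCase

    class ClickAppTestCase(AutopilotTestCase):

        def setUp(self):
            super().setUp()
            # Launch the installed click package and keep the root proxy
            # object around for the tests; autopilot handles cleanup.
            self.app = self.launch_click_package('com.example.fakeapp')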
Now that it has been launched with Autopilot support we can introspect and explore our application using the vis tool.

Q. What is the impact on memory of adding objectNames to QML items? Measurement of memory consumption of 10,000 Items:

    Without objectName:       65292 kB
    With unique objectName:   66628 kB
    With same objectName:     66480 kB

=> general QML guidelines for performance should be followed.

faq-troubleshooting

Contents Troubleshooting – General Techniques – Common Questions regarding Failing Tests – Q. Why is my test failing? It works some of the time. What causes "flakiness"? – StateNotFoundError Exception

General Techniques The single hardest thing to do while writing autopilot tests is to understand the state of the application's object tree. This is especially important for applications that change their object tree during the lifetime of the test. There are three techniques you can use to discover the state of the object tree:

Using Autopilot Vis The Autopilot vis tool is a useful tool for exploring the entire structure of an application, and allows you to search for a particular node in the object tree. If you want to find out what parts of the application to select to gain access to certain information, the vis tool is probably the best way to do that.

Using print_tree The print_tree method is available on every proxy class. This method will print every child of the proxy object recursively, either to stdout or a file on disk. This technique can be useful when: The application cannot easily be put into the state required before launching autopilot vis, so the vis tool is no longer an option. The application state that has to be captured only exists for a short amount of time. The application only runs on platforms where the vis tool isn't available. The print_tree method often produces a lot of output. There are two ways this information overload can be handled: Specify a file path to write to, so the console log doesn't get flooded. This log file can then be searched with tools such as grep. Specify a maxdepth limit. This controls how many levels deep the recursive search will go. Of course, these techniques can be used in combination.

Using get_properties The get_properties method can be used on any proxy object, and will return a python dictionary containing all the properties of that proxy object. This is useful when you want to explore what information is provided by a single proxy object. The information returned by this method is exactly the same as is shown in the right-hand pane of autopilot vis.

Common Questions regarding Failing Tests

Q. Why is my test failing? It works some of the time. What causes "flakiness"? Sometimes a test fails because the application under test has issues, but what happens when the failing test can't be reproduced manually? It means the test itself has an issue. Here is a troubleshooting guide you can use with some of the common problems that developers can overlook while writing tests.

StateNotFoundError Exception

Not waiting for an animation to finish before looking for an object. Did you add animations to your app recently? problem: solution:

Not waiting for an object to become visible before trying to select it. Is your app slower than it used to be for some reason? Do its properties have null values? Do you see errors in stdout/stderr while using your app, if you run it from the command line? Python code is executed in series, which takes milliseconds, whereas the actions (clicking a button, etc.) take longer, as does the dbus query time. This is why the wait_select_* methods are useful, i.e. click a button and wait for that click to happen (including the dbus query times taken), as in the sketch below.
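A hedged sketch of that pattern; the SaveDialog type name, the 'saveButton' objectName and the 'some-app' binary are placeholders for illustration:

    from autopilot.input import Mouse, Pointer
    from autopilot.testcase import AutopilotTestCase

    class SaveDialogTests(AutopilotTestCase):

        def test_save_dialog_appears(self):
            app = self.launch_test_application('some-app')  # placeholder
            pointer = Pointer(Mouse.create())

            # Click the (assumed) save button...
            button = app.select_single('Button', objectName='saveButton')
            pointer.click_object(button)

            # ...then wait instead of selecting immediately, so the test does
            # not race the app and raise StateNotFoundError. Polls up to 10 s.
            dialog = app.wait_select_single('SaveDialog', visible=True)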
problem: solution:

Waiting for an item that is destroyed to be not visible; sometimes the object is destroyed before it returns false: problem: solution:

Trying to use select_many like a list. The order in which the objects are returned is non-deterministic. problem: solution:

autopilot

autopilot.get_test_configuration() Get the test configuration dictionary. Tests can be configured from the command line when the autopilot tool is invoked. Typical use cases involve configuring the test suite to use a particular binary (perhaps a locally built binary or one installed to the system), or configuring which external services are faked. This dictionary is populated from the --config option to the autopilot run command. For example:

    autopilot run --config use_local some.test.id

will result in a dictionary where the key use_local is present, and evaluates to true. Values can also be specified. The following command:

    autopilot run --config fake_services=login some.test.id

...will result in the key 'fake_services' having the value 'login'. Autopilot itself does nothing with the contents of this dictionary. It is entirely up to test authors to populate it, and to use the values as they see fit.

autopilot.get_version_string() Return the autopilot source and package versions.

autopilot.have_vis() Return true if the vis package is installed.

autopilot.application.ClickApplicationLauncher class autopilot.application.ClickApplicationLauncher(case_addDetail=None, emulator_base=None, dbus_bus='session') Fixture to manage launching a Click application. Raises: RuntimeError – If the specified package_id cannot be found in the click package manifest. RuntimeError – If the specified app_name cannot be found within the specified click package. Returns: proxy object for the launched package application

autopilot.application.NormalApplicationLauncher class autopilot.application.NormalApplicationLauncher(case_addDetail=None, emulator_base=None, dbus_bus='session') Fixture to manage launching an application. launch(application, arguments=[], app_type=None, launch_dir=None, capture_output=True) Launch an application and return a proxy object. Use this method to launch an application and start testing it. The arguments passed in arguments are used as arguments to the application to launch. Additional keyword arguments are used to control the manner in which the application is launched. This fixture is designed to be flexible enough to launch all supported types of applications. Autopilot can automatically determine how to enable introspection support for dynamically linked binary applications. For example, to launch a binary Gtk application, a test might start with a sketch like the one below.
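A sketch, assuming gedit is installed; launch_test_application is the AutopilotTestCase method, and app_type is only needed if automatic detection fails:

    from autopilot.testcase import AutopilotTestCase

    class GeditTests(AutopilotTestCase):

        def setUp(self):
            super().setUp()
            # Autopilot detects the application type and enables
            # introspection; pass app_type='gtk' if detection fails.
            self.app = self.launch_test_application('gedit')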
For use within a testcase, use useFixture:

    from autopilot.application import NormalApplicationLauncher
    launcher = self.useFixture(NormalApplicationLauncher())
    app_proxy = launcher.launch('gedit')

Applications can be given command line arguments by supplying an arguments argument. Parameters: arguments – If set, the list of arguments is passed to the launched application. Returns: A proxy object that represents the application. Introspection data is retrievable via this object.

autopilot.application Base package for application launching and environment management.

Elements: ClickApplicationLauncher – Fixture to manage launching a Click application. NormalApplicationLauncher – Fixture to manage launching an application. UpstartApplicationLauncher – A launcher class that launches applications with UpstartAppLaunch.

autopilot.application.UpstartApplicationLauncher class autopilot.application.UpstartApplicationLauncher(case_addDetail=None, emulator_base=None, dbus_bus='session') A launcher class that launches applications with UpstartAppLaunch. launch(app_id, app_uris=[]) Launch an application with upstart. This method launches an application via the upstart-app-launch library, on platforms that support it. Usage is similar to NormalApplicationLauncher. Parameters: app_id – name of the application to launch. app_uris – list of separate application uris to launch. Raises RuntimeError: If the specified application cannot be launched. Returns: proxy object for the launched package application

autopilot.application.get_application_launcher_wrapper(app_path) Return an instance of ApplicationLauncher that knows how to launch the application at 'app_path'.

autopilot.display.Display class autopilot.display.Display The base class/interface for the display devices. static create(preferred_backend='') Get an instance of the Display class. For more information on picking specific backends, see Advanced Backend Picking. Parameters: preferred_backend – A string containing a hint as to which backend you would like. Possible backends are: X11 - Get display information from X11. UPA - Get display information from the ubuntu platform API. Returns: Instance of Display with appropriate backend. exception BlacklistedDriverError Cannot set primary monitor when running drivers listed in the driver blacklist. Display.get_num_screens() Get the number of screens attached to the PC. Display.get_primary_screen() Display.get_screen_width(screen_number=0) Display.get_screen_height(screen_number=0) Display.get_screen_geometry(monitor_number) Get the geometry for a particular monitor. Returns: Tuple containing (x, y, width, height).

autopilot.display.get_screenshot_data(display_type) Return a BytesIO object of the png data for the screenshot image. display_type is the display server type; supported values are "X11" and "MIR". Raises: RuntimeError – If attempting to capture an image on an unsupported display server. RuntimeError – If saving image data to file-object fails.

autopilot.display The display module contains support for getting screen information.
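For instance, a test might query screen geometry like this (a sketch; which backend is available depends on the platform):

    from autopilot.display import Display

    display = Display.create()  # let autopilot pick a suitable backend

    for screen in range(display.get_num_screens()):
        x, y, width, height = display.get_screen_geometry(screen)
        print('screen %d: %dx%d at (%d, %d)' % (screen, width, height, x, y))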
autopilot.display.is_rect_on_screen(screen_number, rect) Return True if rect is entirely on the specified screen, with no overlap. autopilot.display.is_point_on_screen(screen_number, point) Return True if point is on the specified screen. point must be an iterable type with two elements: (x, y) 6.3. The Ubuntu App platform - develop with seamless device integration 231 UBports Documentation, Release 1.0 autopilot.display.is_point_on_any_screen(point) Return true if point is on any currently configured screen. autopilot.display.move_mouse_to_screen(screen_number) Move the mouse to the center of the specified screen. Elements Display The base class/inteface for the display devices. autop. autop.ProcessSearchError Object introspection error occured.. exception autopilot.exceptions.InvalidXPathQuery Raised when an XPathselect query is invalid or unsupported. 232 Chapter 6. App development UBports Documentation, Release 1.0 autopilot.gestures Gestural support for autopilot. This module contains functions that can generate touch and multi-touch gestures for you. This is a convenience for the test author - there is nothing to prevent you from generating your own gestures! autopilot.gestures.pinch(center, vector_start, vector_end) Perform a two finger pinch (zoom) gesture. Parameters: center – The coordinates (x,y) of the center of the pinch gesture. vector_start – The (x,y) values to move away from the center for the start. vector_end – The (x,y) values to move away from the center for the end. The fingers will move in 100 steps between the start and the end points. If start is smaller than end, the gesture will zoom in, otherwise it will zoom out. autopilot.input.Keyboard class autopilot.input.Keyboard A simple keyboard device class. The keyboard class is used to generate key events while in an autopilot test. This class should not be instantiated directly. To get an instance of the keyboard class, call create instead. static create(preferred_backend=’‘) Get an instance of the Keyboard class. For more infomration on picking specific backends, see Advanced Backend Picking For details regarding backend limitations please see: Keyboard backend limitations Warning The OSK (On Screen Keyboard) backend option does not implement either release methods due to technical implementation details and will raise a NotImplementedError exception if used. Parameters: preferred_backend – A string containing a hint as to which backend you would like. Possible backends are: X11 - Generate keyboard events using the X11 client libraries. UInput - Use UInput kernel-level device driver. OSK - Use the graphical On Screen Keyboard as a backend. Raises: RuntimeError if autopilot cannot instantate any of the possible backends. Raises: RuntimeError if the preferred_backend is specified and is not one of the possible backends for this device class. Raises: 6.3. The Ubuntu App platform - develop with seamless device integration 233 UBports Documentation, Release 1.0 BackendException if the preferred_backend is set, but that backend could not be instantiated. focused_type(input_target, pointer=None) Type into an input widget. This context manager takes care of making sure a particular input_target UI control is selected before any text is entered. Some backends extend this method to perform cleanup actions at the end of the context manager block. For example, the OSK backend dismisses the keyboard. If the pointer argument is None (default) then either a Mouse or Touch pointer will be created based on the current platform. 
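An example of using the context manager (with an OSK backend) might look like the following sketch; the text_area proxy object is assumed to have been selected from the application beforehand:

    from autopilot.input import Keyboard

    keyboard = Keyboard.create('OSK')
    with keyboard.focused_type(text_area) as kb:
        kb.type('Hello world')
    # Leaving the block lets the OSK backend dismiss the on-screen keyboard.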
press(keys, delay=0.2) Send key press events only. Parameters: keys – Keys you want pressed. delay – The delay (in seconds) after pressing the keys before returning control to the caller. Raises: NotImplementedError if called when using the OSK backend. Warning The OSK backend does not implement the press method and will raise a NotImplementedError if called. Example: presses the 'Alt' and 'F2' keys, but does not release them.

release(keys, delay=0.2) Send key release events only. Parameters: keys – Keys you want released. delay – The delay (in seconds) after releasing the keys before returning control to the caller. Raises: NotImplementedError if called when using the OSK backend. Warning The OSK backend does not implement the release method and will raise a NotImplementedError if called. Example: releases the 'Alt' and 'F2' keys.

press_and_release(keys, delay=0.2) Press and release all items in 'keys'. This is the same as calling 'press(keys); release(keys)'. Parameters: keys – Keys you want pressed and released. delay – The delay (in seconds) after pressing and releasing each key. Example: presses both the 'Alt' and 'F2' keys, and then releases both keys.

type(string, delay=0.1) Simulate a user typing a string of text. Parameters: string – The string of text to type. delay – The delay (in seconds) after pressing and releasing each key. Note that the default value here is shorter than for the press, release and press_and_release methods. Note Only 'normal' keys can be typed with this method. Control characters (such as 'Alt') will be interpreted as an 'A', an 'l', and a 't'.

on_test_end(*args) on_test_start(*args)

autopilot.input.Mouse class autopilot.input.Mouse A simple mouse device class. The mouse class is used to generate mouse events while in an autopilot test. This class should not be instantiated directly; to get an instance of the mouse class, call create instead. For example, you would create a mouse object and then use it to click at (100, 50).

static create(preferred_backend='') Get an instance of the Mouse class. For more information on picking specific backends, see Advanced Backend Picking. Parameters: preferred_backend – A string containing a hint as to which backend you would like. Possible backends are: X11 - Generate mouse events using the X11 client libraries.

x Mouse position X coordinate. y Mouse position Y coordinate.

press(button=1) Press mouse button at current mouse location. release(button=1) Releases mouse button at current mouse location.

click(button=1, press_duration=0.1, time_between_events=0.1) Click mouse at current location. Parameters: time_between_events – takes floating point to represent the delay time between subsequent clicks. Default value 0.1 represents a tenth of a second.

click_object(object_proxy, button=1, press_duration=0.1, time_between_events=0.1) Click the center point of a given object. It does this by looking for several attributes, in order. The first attribute found will be used. The attributes used are (in order): globalRect (x,y,w,h); center_x, center_y; x, y, w, h. Parameters: time_between_events – takes floating point to represent the delay time between subsequent clicks. Default value 0.1 represents a tenth of a second.
Raises: ValueError if none of these attributes are found, or if an attribute is of an incorrect type.

move(x, y, animate=True, rate=10, time_between_events=0.01) Moves mouse to location (x, y). Callers should avoid specifying the rate or time_between_events parameters unless they need a specific rate of movement.

move_to_object(object_proxy) Attempts to move the mouse to the object's centre point.

on_test_end(*args) on_test_start(*args)

autopilot.input.Pointer class autopilot.input.Pointer(device) A wrapper class that represents a pointing device which can either be a mouse or a touch, and provides a unified API. This class is useful if you want to run tests with either a mouse or a touch device, and want to write your tests to use a single API. Create this wrapper by passing it either a mouse or a touch device to the constructor. Warning Some operations only make sense for certain devices. This class attempts to minimise the differences between the Mouse and Touch APIs, but there are still some operations that will cause exceptions to be raised. These are documented in the specific methods below.

x Pointer X coordinate. If the wrapped device is a Touch device, this will return the last known X coordinate, which may not be a sensible value. y Pointer Y coordinate. If the wrapped device is a Touch device, this will return the last known Y coordinate, which may not be a sensible value.

press(button=1) Press the pointer at its current location. If the wrapped device is a mouse, you may pass a button specification. If it is a touch device, passing anything other than 1 will raise a ValueError exception.

release(button=1) Releases the pointer at its current location. If the wrapped device is a mouse, you may pass a button specification. If it is a touch device, passing anything other than 1 will raise a ValueError exception.

click(button=1, press_duration=0.1, time_between_events=0.1) Press and release at the current pointer location.

move(x, y) Moves the pointer to the specified coordinates. If the wrapped device is a mouse, the mouse will animate to the specified coordinates. If the wrapped device is a touch device, this method will determine where the next release or click will occur.

click_object(object_proxy, button=1, press_duration=0.1, time_between_events=0.1) Attempts to move the pointer to 'object_proxy's centre point, and click a button. It does this by looking for several attributes, in order. The first attribute found will be used. The attributes used are (in order): globalRect (x,y,w,h); center_x, center_y; x, y, w, h.

move_to_object(object_proxy) Attempts to move the pointer to the object's centre point.

autopilot.input Mouse - traditional mouse devices (currently only available on the desktop).

Elements: Keyboard – A simple keyboard device class. Mouse – A simple mouse device class. Pointer – A wrapper class that represents a pointing device which can either be a mouse or a touch device. Touch – A simple touch driver class.

autopilot.input.Touch class autopilot.input.Touch A simple touch driver class. This class can be used for any touch events that require a single active touch at once. If you want to do complex gestures (including multi-touch gestures), look at the autopilot.gestures module. static create(preferred_backend='') Get an instance of the Touch class.
Parameters: preferred_backend – A string containing a hint as to which backend you would like. If left blank, autopilot will pick a suitable backend for you. Specifying a backend will guarantee that either that backend is returned, or an exception is raised. Possible backends are: UInput - Use UInput kernel-level device driver.

pressed Return True if this touch is currently in use (i.e. pressed on the 'screen').

tap(x, y, press_duration=0.1, time_between_events=0.1) Click (or 'tap') at given x,y coordinates. Parameters: time_between_events – takes floating point to represent the delay time between subsequent taps. Default value 0.1 represents a tenth of a second.

tap_object(object, press_duration=0.1, time_between_events=0.1) Tap the center point of a given object. It does this by looking for several attributes, in order. The first attribute found will be used. The attributes used are (in order): globalRect (x,y,w,h); center_x, center_y; x, y, w, h. Parameters: time_between_events – takes floating point to represent the delay time between subsequent taps. Default value 0.1 represents a tenth of a second. Raises: ValueError if none of these attributes are found, or if an attribute is of an incorrect type.

press(x, y) Press and hold at the given x,y coordinates.

move(x, y) Move the pointer coords to (x,y). Note The touch 'finger' must be pressed for a call to this method to be successful. (See press for further details on touch presses.) Raises: RuntimeError if called and the touch 'finger' isn't pressed.

release() Release a previously pressed finger.

drag(x1, y1, x2, y2, rate=10, time_between_events=0.01) Perform a drag gesture. Parameters: rate – The number of pixels the finger will be moved per iteration; default is 10 pixels. A higher rate will make the drag faster, and a lower rate will make it slower. time_between_events – The number of seconds that the drag will wait between iterations. Raises: RuntimeError – if the finger is already pressed. RuntimeError – if no more finger slots are available.

autopilot.introspection.ProxyBase class autopilot.introspection.ProxyBase(state_dict, path, backend) A class that supports transparent data retrieval from the application under test. This class is the base class for all objects retrieved from the application under test. It handles transparently refreshing attribute values when needed, and contains many methods to select child objects in the introspection tree. This class must be used as a base class for any custom proxy classes. See also Tutorial Section Writing Custom Proxy Classes for information on how to write custom proxy classes.

get_all_instances() Get all instances of this class that exist within the Application state tree. For example, to get all the LauncherIcon instances: Warning Using this method is slow - it requires a complete scan of the introspection tree. You should only use this when you're not sure where the objects you are looking for are located. Depending on the application you are testing, you may get duplicate results using this method. Returns: List (possibly empty) of class instances.

get_children() Returns a list of all child objects. This returns a list of all children. To return only children of a specific type, use get_children_by_type. To get objects further down the introspection tree (i.e. nodes that may not necessarily be immediate children), use select_single and select_many, as in the sketch below.
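A short sketch of the difference (the type names and objectName are illustrative assumptions, and app is a proxy object obtained from one of the launch methods):

    # Direct children of this node only:
    rows = app.get_children()

    # Anywhere below this node in the introspection tree:
    save_button = app.select_single('Button', objectName='saveButton')
    labels = app.select_many('Label')  # order is not guaranteed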
get_children_by_type(desired_type, **kwargs) Get a list of children of the specified type. Keyword arguments can be used to restrict returned instances. For example: will return only Launcher instances that have an attribute ‘monitor’ that is equal to 1. The type can also be specified as a string, which is useful if there is no emulator class specified: Note however that if you pass a string, and there is an emulator class defined, autopilot will not use it. Parameters: desired_type – Either a string naming the type you want, or a class of the type you want (the latter is used when defining custom emulators) See also Tutorial Section Writing Custom Proxy Classes get_parent(type_name=’‘, **kwargs) Returns the parent of this object. One may also use this method to get a specific parent node from the introspection tree, with type equal to type_name or matching the keyword filters present in kwargs. Note: The priority order is closest parent. 242 Chapter 6. App development UBports Documentation, Release 1.0 If no filters are provided and this object has no parent (i.e.- it is the root of the introspection tree). Then it returns itself. Parameters: type_name – Either a string naming the type you want, or a class of the appropriate type (the latter case is for overridden emulator classes). Raises StateNotFoundError: if the requested object was not found. get_path() Return the absolute path of the dbus node get_properties() Returns a dictionary of all the properties on this class. This can be useful when you want to log all the properties exported from your application for a particular object. Every property in the returned dictionary can be accessed as attributes of the object as well. get_root_instance() Get the object at the root of this tree. This will return an object that represents the root of the introspection tree. classmethod get_type_query_name() Return the Type node name to use within the search query. This allows for a Custom Proxy Object to be named differently to the underlying node type name. For instance if you have a QML type defined in the file RedRect.qml: You can then define a Custom Proxy Object for this type like so: class RedRect(DBusIntrospectionObject): @classmethod def get_type_query_name(cls): This is due to the qml engine storing ‘RedRect’ as a QQuickRectangle in the UI tree and the xpathquery query needs a node type to query for. By default the query will use the class name (in this case RedRect) but this will not match any node type in the tree. is_moving(gap_interval=0.1) Check if the element is moving. Parameters: gap_interval – Time in seconds to wait before re-inquiring the object co-ordinates to be able to evaluate if, the element is moving. Returns: True, if the element is moving, otherwise False. no_automatic_refreshing() Context manager function to disable automatic DBus refreshing when retrieving attributes. Example usage: with instance.no_automatic_refreshing(): # access lots of attributes. 6.3. The Ubuntu App platform - develop with seamless device integration 243 UBports Documentation, Release 1.0 This can be useful if you need to check lots of attributes in a tight loop, or if you want to atomicaly check several attributes at once. print_tree(output=None, maxdepth=None, _curdepth=0) Print properties of the object and its children to a stream. When writing new tests, this can be called when it is too difficult to find the widget or property that you are interested in in “vis”. 
Warning Do not use this in production tests, this is expensive and not at all appropriate for actual testing. Only call this temporarily and replace with proper select_single/select_many calls. Parameters: output – A file object or path name where the output will be written to. If not given, write to stdout. maxdepth – If given, limit the maximum recursion level to that number, i. e. only print children which have at most maxdepth-1 intermediate parents. refresh_state() Refreshes the object’s state. You should probably never have to call this directly. Autopilot automatically retrieves new state every time this object’s attributes are read. Raises StateNotFound: if the object in the application under test has been destroyed. select_many(type_name=’*’, ap_result_sort_keys=None, **kwargs) Get a list of nodes from the introspection tree, with type equal to type_name and (optionally) matching the keyword filters present in kwargs. You must specify either type_name, keyword filters or both. This method searches recursively from the instance this method is called on. Calling select_many on the application (root) proxy object will search the entire tree. Calling select_many on an object in the tree will only search it’s descendants. Example Usage: As mentioned above, this method searches the object tree recursively: Warning The order in which objects are returned is not guaranteed. It is bad practise to write tests that depend on the order in which this method returns objects. (see Do Not Depend on Object Ordering for more information). If you want to ensure a certain count of results retrieved from this method, use wait_select_many or if you only want to get one item, use select_single instead. Parameters: type_name – Either a string naming the type you want, or a class of the appropriate type (the latter case is for overridden emulator classes). ap_result_sort_keys – list of object properties to sort the query result with (sort key priority starts with element 0 as highest priority and then descends down the list). Raises ValueError: if neither type_name or keyword filters are provided. 244 Chapter 6. App development UBports Documentation, Release 1.0 See also Tutorial Section Writing Custom Proxy Classes select_single(type_name=’*’, **kwargs) Get a single node from the introspection tree, with type equal to type_name and (optionally) matching the keyword filters present in kwargs. You must specify either type_name, keyword filters or both. This method searches recursively from the instance. Parameters: type_name – Either a string naming the type you want, or a class of the appropriate type (the latter case is for overridden emulator classes). classmethod validate_dbus_object(path, _state) Return whether this class is the appropriate proxy object class for a given dbus path and state. The default version matches the name of the dbus object and the class. Subclasses of CustomProxyObject can override it to define a different validation method. Parameters: path – The dbus path of the object to check state – The dbus state dict of the object to check (ignored in default implementation) Returns: Whether this class is appropriate for the dbus object wait_select_many(type_name=’*’, ap_query_timeout=10, ap_result_count=1, ap_result_sort_keys=None, **kwargs) Get a list of nodes from the introspection tree, with type equal to type_name and (optionally) matching the keyword filters present in kwargs. 
This method is identical to the select_many method, except that this method will poll the application under test for ap_query_timeout seconds in the event that the search result count is not greater than or equal to ap_result_count. You must specify either type_name, keyword filters or both. This method searches recursively from the instance this method is called on. Calling wait_select_many on the application (root) proxy object will search the entire tree. Calling wait_select_many on an object in the tree will only search it’s descendants. Example Usage: 6.3. The Ubuntu App platform - develop with seamless device integration 245 UBports Documentation, Release 1.0 Warning The order in which objects are returned is not guaranteed. It is bad practise to write tests that depend on the order in which this method returns objects. (see Do Not Depend on Object Ordering for more information). Parameters: type_name – Either a string naming the type you want, or a class of the appropriate type (the latter case is for overridden emulator classes). ap_query_timeout – Time in seconds to wait for search criteria to match. ap_result_count – Minimum number of results to return. ap_result_sort_keys – list of object properties to sort the query result with (sort key priority starts with element 0 as highest priority and then descends down the list). Raises ValueError: if neither type_name or keyword filters are provided. Also raises, if search result count does not match the number specified by ap_result_count within ap_query_timeout seconds. See also Tutorial Section Writing Custom Proxy Classes wait_select_single(type_name=’*’, ap_query_timeout=10, **kwargs) Get a proxy object matching some search criteria, retrying if no object is found until a timeout is reached. This method is identical to the select_single method, except that this method will poll the application under test for 10 seconds in the event that the search criteria does not match anything. This method will return single proxy object from the introspection tree, with type equal to type_name and (optionally) matching the keyword filters present in kwargs. You must specify either type_name, keyword filters or both. This method searches recursively from the proxy object after ap_query_timeout seconds. Parameters: type_name – Either a string naming the type you want, or a class of the appropriate type (the latter case is for overridden emulator classes). ap_query_timeout – Time in seconds to wait for search criteria to match. wait_until_destroyed(timeout=10) Block until this object is destroyed in the application. 246 Chapter 6. App development UBports Documentation, Release 1.0 Block until the object this instance is a proxy for has been destroyed in the applicaiton under test. This is commonly used to wait until a UI component has been destroyed. Parameters: timeout – The number of seconds to wait for the object to be destroyed. If not specified, defaults to 10 seconds. Raises RuntimeError: if the method timed out. wait_until_not_moving(retry_attempts_count=20, retry_interval=0.5) Block until this object is not moving. Block until both x and y of the object stop changing. This is normally useful for cases, where there is a need to ensure an object is static before interacting with it. Parameters: retry_attempts_count – number of attempts to check if the object is moving. retry_interval – time in fractional seconds to be slept, between each attempt to check if the object moving. 
Raises RuntimeError: if DBus node is still moving after number of retries specified in retry_attempts_count. autopilot.introspection.CustomEmulatorBase alias of ProxyBase autopilot.introspection.is_element(ap_query_func, *args, **kwargs) Call the ap_query_func with the args and indicate if it raises StateNotFoundError. Param: ap_query_func: The dbus query call to be evaluated. Param: args: The *ap_query_func positional parameters. Param: **kwargs: The ap_query_func optional parameters. Returns: False if the ap_query_func raises StateNotFoundError, True otherwise. autopilot.introspection.get_classname_from_path(object_path) Given an object path, return the class name component. autopilot.introspection.get_path_root(object_path) Return the name of the root node of specified path. exception autopilot.introspection.ProcessSearchError Object introspection error occured. autopilot.introspection.get_proxy_object_for_existing_process(**kwargs) Return a single proxy object for an application that is already running (i.e. launched outside of Autopilot). 6.3. The Ubuntu App platform - develop with seamless device integration 247 UBports Documentation, Release 1.0 Searches the given bus (supplied by the kwarg dbus_bus) for an application matching the search criteria (also supplied in kwargs, see further down for explaination on what these can be.) Returns a proxy object created using the supplied custom emulator emulator_base (which defaults to None). This function take kwargs arguments containing search parameter values to use when searching for the target application. Possible search criteria: (unless specified otherwise these parameters default to None) Parameters: pid – The PID of the application to search for. process – The process of the application to search for. If provided only the pid of the process is used in the search, but if the process exits before the search is complete it is used to supply details provided by the process object. connection_name – A string containing the DBus connection name to use with the search criteria. application_name – A string containing the applications name to search for. object_path – A string containing the object path to use as the search criteria. lot.introspection.constants.AUTOPILOT_PATH. Defaults to: autopi- Non-search parameters: Parameters: dbus_bus – The DBus bus to search for the application. Must be a string containing either ‘session’, ‘system’ or the custom buses name (i.e. ‘unix:abstract=/tmp/dbus-IgothuMHNk’). Defaults to ‘session’ emulator_base – The custom emulator to use when creating the resulting proxy object. Defaults to None Exceptions possibly thrown by this function: Raises: ProcessSearchError – If no search criteria match. RuntimeError – If the search criteria results in many matches. RuntimeError – If both process and pid are supplied, but process.pid != pid. Examples: Retrieving an application on the system bus where the applications PID is known: Multiple criteria are allowed, for instance you could search on pid and connection_name: If the application from the previous example was on the system bus: It is possible to search for the application given just the applications name. An example for an application running on a custom bus searching using the applications name: autopilot.introspection.get_proxy_object_for_existing_process_by_name(process_name, emulator_base=None) Return the proxy object for a process by its name. Parameters: process_name – name of the process to get proxy object. This must be a string. 
emulator_base – emulator base to use with the custom proxy object. Raises ValueError: if the process is not running, or if more than one PID is associated with the process. Returns: proxy object for the requested process.

autopilot.introspection Elements: ProxyBase – A class that supports transparent data retrieval from the application under test.

autopilot.introspection.types.DateTime class autopilot.introspection.types.DateTime(*args, **kwargs) The DateTime class represents a date and time in the UTC timezone. DateTime is constructed by passing a unix timestamp in to the constructor. The incoming timestamp is assumed to be in UTC. Note This class expects the passed-in timestamp to be in UTC but will display the resulting date and time in local time (using the local timezone). This is done to mimic the behaviour of most applications which will display date and time in local time by default. Timestamps are expressed as the number of seconds since 1970-01-01T00:00:00 in the UTC timezone: This timestamp can always be accessed either using index access or via a named property: DateTime objects also expose the usual named properties you would expect on a date/time object: Two DateTime objects can be compared for equality: You can also compare a DateTime with any mutable sequence type containing the timestamp (although this probably isn't very useful for test authors): Finally, you can also compare a DateTime instance with a python datetime instance: Note Autopilot supports dates beyond 2038 on 32-bit platforms. To achieve this the underlying mechanisms require to work with timezone-aware datetime objects. This means that the following won't always be true (due to the naive timestamp not having the correct daylight-savings time details): But this will work: And this will always work too: Note DateTime.timestamp() will not always equal the passed-in timestamp. To paraphrase a message from [bugs.python.org/msg229393]: "datetime.timestamp is supposed to be the inverse of datetime.fromtimestamp(), but since the latter is not monotonic, no such inverse exists in the strict mathematical sense." DateTime instances can be converted to datetime instances:

autopilot.introspection.types.PlainType: However, a special case exists for boolean values: because you cannot subclass from the 'bool' type, the following check will fail (object.visible is a boolean property): However, boolean values will behave exactly as you expect them to.

autopilot.introspection.types.Point class autopilot.introspection.types.Point(*args, **kwargs) The Point class represents a 2D point in cartesian space. To construct a Point, pass in the x, y parameters to the class constructor: These attributes can be accessed either using named attributes, or via sequence indexes: Point instances can be compared using == and !=, either to another Point instance, or to any mutable sequence type with the correct number of items:

autopilot.introspection.types.Rectangle class autopilot.introspection.types.Rectangle(*args, **kwargs) The RectangleType class represents a rectangle in cartesian space.
To construct a rectangle, pass the x, y, width and height parameters in to the class constructor: These attributes can be accessed either using named attributes, or via sequence indexes: You may also access the width and height values using the width and height properties: Rectangles can be compared using == and !=, either to another Rectangle instance, or to any mutable sequence type: autop. 250 Chapter 6. App development UBports Documentation, Release 1.0. autopilot.introspection.types.Size class autopilot.introspection.types.Size(*args, **kwargs) The Size class represents a 2D size in cartesian space. To construct a Size, pass in the width, height parameters to the class constructor: These attributes can be accessed either using named attributes, or via sequence indexes: Size instances can be compared using == and !=, either to another Size instance, or to any mutable sequence type with the correct number of items: autopilot.introspection.types.Time class autopilot.introspection.types.Time(*args, **kwargs) The Time class represents a time, without a date component. You can construct a Time instnace by passing the hours, minutes, seconds, and milliseconds to the class constructor:: Time instances can be compared to other time instances, any mutable sequence containing four integers, or datetime.time instances: Note that the Time class stores milliseconds, while the datettime.time class stores microseconds. Finally, you can get a datetime.time instance from a Time instance: 6.3. The Ubuntu App platform - develop with seamless device integration 251 UBports Documentation, Release 1.0 autop: Callable Objects: In this example we’re using the autopilot.platform.model function as a callable. In this form, Eventually matches against the return value of the callable. This can also be used to use a regular python property inside an Eventually matcher:: Warning The Eventually matcher does not work with any other matcher that expects a callable argument (such as testtools’ ‘Raises’ matcher) autopilot.matchers Autopilot-specific testtools matchers. Elements Eventually Asserts that a value will eventually equal a given Matcher object. autopilot.platform autopilot.platform.model() Get the model name of the current platform. For desktop / laptop installations, this will return “Desktop”. Otherwise, the current hardware model will be returned. For example: 252 Chapter 6. App development UBports Documentation, Release 1.0 autopilot.platform.image_codename() Get the image codename. For desktop / laptop installations this will return “Desktop”. Otherwise, the codename of the image that was installed will be returned. For example: platform.image_codename() ... “maguro” autopilot.platform.is_tablet() Indicate whether system is a tablet. The ‘ro.build.characteristics’ property is checked for ‘tablet’. For example: platform.tablet() ... True Returns: boolean indicating whether this is a tablet autopilot.platform.get_display_server() Returns display server type. Returns: string indicating display server type. Either “X11”, “MIR” or “UNKNOWN” autop. icon Get the application icon. Returns: The name of the icon. is_active Is the application active (i.e. has keyboard focus)? is_urgent Is the application currently signalling urgency? user_visible 6.3. The Ubuntu App platform - develop with seamless device integration 253 UBports Documentation, Release 1.0 Is this application visible to the user? Note Some applications (such as the panel) are hidden to the user but may still be returned. 
get_windows() Get a list of the application windows. autopilot.process.ProcessManager class autopilot.process.ProcessManager A simple process manager class. The process manager is used to handle processes, windows and applications. This class should not be instantiated directly however. To get an instance of the keyboard class, call create instead. KNOWN_APPS = {‘System Settings’: {‘process-name’: ‘unity-control-center’, ‘desktop-file’: ‘unity-controlcenter.desktop’}, ‘Mahjongg’: {‘process-name’: ‘gnome-mahjongg’, ‘desktop-file’: ‘gnome-mahjongg.desktop’}, ‘Text Editor’: {‘process-name’: ‘gedit’, ‘desktop-file’: ‘gedit.desktop’}, ‘Terminal’: {‘process-name’: ‘gnometerminal’, ‘desktop-file’: ‘gnome-terminal.desktop’}, ‘Character Map’: {‘process-name’: ‘gucharmap’, ‘desktop-file’: ‘gucharmap.desktop’}, ‘Remmina’: {‘process-name’: ‘remmina’, ‘desktop-file’: ‘remmina.desktop’}, ‘Calculator’: {‘process-name’: ‘gnome-calculator’, ‘desktop-file’: ‘gcalctool.desktop’}} static create(preferred_backend=’‘) Get an instance of the ProcessManager class. For more infomration on picking specific backends, see Advanced Backend Picking Parameters: preferred_backend – A string containing a hint as to which backend you would like. Possible backends are: BAMF - Get process information using the BAMF Application Matching Framework... Parameters: name – The name to be used when launching the application. desktop_file – The filename (without path component) of the desktop file used to launch the application. 254 Chapter 6. App development UBports Documentation, Release 1.0 process_name – The name of the executable process that gets run. Raises: KeyError if application has been registered already classmethod unregister_known_application(name) Unregister an application with the known_apps dictionary. Parameters: name – The name to be used when launching the application. Raises: KeyError if the application has not been registered.. Parameters: app_name – The application name. This name must either already be registered as one of the built-in applications that are supported by autopilot, or must have been registered using register_known_application beforehand. files – (Optional)’. Returns: A Application instance. start_app_window(app_name, files=[], locale=None) Open a single window for one of the known applications, and close it at the end of the test. Parameters: app_name – The application name. This name must either already be registered as one of the built-in applications that are supported by autopilot, or must have been registered with register_known_application beforehand. files – (Optional) Should be’. Raises: AssertionError if no window was opened, or more than one window was opened. Returns: A Window instance. get_open_windows_by_application(app_name) Get a list of ~autopilot.process.Window‘ instances for the given application name. 6.3. The Ubuntu App platform - develop with seamless device integration 255 UBports Documentation, Release 1.0 Parameters: app_name – The name of one of the well-known applications. Returns: A list of Window instances. close_all_app(app_name) get_app_instances(app_name) app_is_running(app. wait_until_application_is_running(desktop_file, timeout) Wait until a given application is running. Parameters: desktop_file (string) – The name of the application desktop file. timeout (integer) – The maximum time to wait, in seconds. If set to something less than 0, this method will wait forever. 
Returns: true once the application is found, or false if the application was not found until the timeout was reached. launch_application(desktop_file, files=[], wait=True) Launch an application by specifying a desktop file. Parameters: files (List of strings) – List of files to pass to the application. Not all apps support this. Note If wait is True, this method will wait up to 10 seconds for the application to appear. Raises: TypeError on invalid files parameter. Returns: The Gobject process object. 256 Chapter 6. App development UBports Documentation, Release 1.0 autopilot.process Elements Application Get the application desktop file. ProcessManager A simple process manager class. Window Get the X11 Window Id. autopilot.process.Window class autopilot.process.Window x_id Get the X11 Window Id. x_win Get the X11 window object of the underlying window. get_wm_state Get the state of the underlying. geometry Get the geometry for this window. Returns: Tuple containing (x, y, width, height). is_maximized Is the window maximized? Maximized in this case means both maximized vertically and horizontally. If a window is only maximized in one direction it is not considered maximized. application 6.3. The Ubuntu App platform - develop with seamless device integration 257 UBports Documentation, Release 1.0 Get the application that owns this window. This method may return None if the window does not have an associated application. The ‘desktop’ window is one such example. user_visible Is this window visible to the user in the switcher? is_hidden Is this window hidden? Windows are hidden when the ‘Show Desktop’ mode is activated. is_focused Is this window focused? is_valid Is this window object valid? Invalid windows are caused by windows closing during the construction of this object instance. monitor Returns the monitor to which the windows belongs to closed Returns True if the window has been closed close() Close the window. set_focus() autop: Applications can be given command line arguments by supplying positional arguments: 258 Chapter 6. App development UBports Documentation, Release 1.0. emulator_base – If set, specifies the base class to be used for all emulators for this loaded application. Returns: A proxy object that represents the application. Introspection data is retrievable via this object.:. emulator_base – If set, specifies the base class to be used for all emulators for this loaded application. Raises: RuntimeError – If the specified package_id cannot be found in the click package manifest. RuntimeError – If the specified app_name cannot be found within the specified click package. Returns: proxy object for the launched package application launch_upstart_application(application_name, uris=[], lot.application._launcher.UpstartApplicationLauncher’>, **kwargs) launcher_class=<class ‘autopi- Launch an application with upstart. This method launched an application via the ubuntu-app-launch library, on platforms that support it. Usage is similar to the AutopilotTestCase.launch_test_application: Parameters: application_name – The name of the application to launch. 6.3. The Ubuntu App platform - develop with seamless device integration 259 UBports Documentation, Release 1.0 launcher_class – The application launcher class to use. Useful if you need to overwrite the default to do something custom (i.e. using AlreadyLaunchedUpstartLauncher) Parameters: emulator_base – If set, specifies the base class to be used for all emulators for this loaded application. 
autopilot.testcase.AutopilotTestCase

launch_test_application(application, *arguments, **kwargs)
Applications can be given command line arguments by supplying positional arguments.
Parameters: emulator_base – If set, specifies the base class to be used for all emulators for this loaded application.
Returns: A proxy object that represents the application. Introspection data is retrievable via this object.

launch_click_package(package_id, app_name=None, **kwargs)
Parameters: emulator_base – If set, specifies the base class to be used for all emulators for this loaded application.
Raises:
RuntimeError – If the specified package_id cannot be found in the click package manifest.
RuntimeError – If the specified app_name cannot be found within the specified click package.
Returns: A proxy object for the launched package application.

launch_upstart_application(application_name, uris=[], launcher_class=<class 'autopilot.application._launcher.UpstartApplicationLauncher'>, **kwargs)
Launch an application with upstart. This method launches an application via the ubuntu-app-launch library, on platforms that support it. Usage is similar to AutopilotTestCase.launch_test_application.
Parameters:
application_name – The name of the application to launch.
launcher_class – The application launcher class to use. Useful if you need to overwrite the default to do something custom (i.e. using AlreadyLaunchedUpstartLauncher).
emulator_base – If set, specifies the base class to be used for all emulators for this loaded application.
Raises: RuntimeError – If the specified application cannot be launched.

Parameters: stack_start – An iterable of Window instances.
Raises: AssertionError – if the top of the window stack does not match the contents of the stack_start parameter.

assertProperty
Raises:
ValueError – if a named attribute is a callable object.
AssertionError – if any of the attribute/value pairs in kwargs do not match the attributes on the object passed in.

assertProperties
Raises:
ValueError – if a named attribute is a callable object.
AssertionError – if any of the attribute/value pairs in kwargs do not match the attributes on the object passed in.

autopilot.testcase

Quick Start
The AutopilotTestCase is the main class test authors will be interacting with. Every autopilot test case should derive from this class. AutopilotTestCase derives from testtools.TestCase, so test authors can use all the methods defined in that class as well.

Writing tests
Tests must be named: test_<testname>, where <testname> is the name of the test. Test runners (including autopilot itself) look for methods with this naming convention. It is recommended that you make your test names descriptive of what each test is testing. For example, possible test names include:

Launching the Application Under Test
If you are writing a test for an application, you need to use the launch_test_application method. This will launch the application, enable introspection, and return a proxy object representing the root of the application introspection tree, as the sketch below shows.
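A minimal sketch of such a test case (the application binary 'gedit' and the assertion are illustrative assumptions):

from autopilot.testcase import AutopilotTestCase

class MainWindowTests(AutopilotTestCase):

    def test_application_launches(self):
        # Launches the app, enables introspection, and returns the root proxy object
        app = self.launch_test_application('gedit')
        self.assertIsNotNone(app)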
Elements
AutopilotTestCase – Wrapper around testtools.TestCase that adds significant functionality.

App development cookbook

The App Developer Cookbook is a collection of short examples, how-tos and answered questions from our developer community. In the sections below you will find information about how to perform common tasks, answers to frequently asked questions, and code snippets from real world examples.

General App Development
• Basic QML tutorial
• Ubuntu Touch app development book
• Is there way to compile Qt5 programs, written with c++, to Ubuntu Touch?
• Is QML the only way to create apps in Ubuntu for tablets?
• Are Ubuntu Phone apps compatible across different devices? And if yes, how?
• Is it possible to write a Mobile app with a engine written in C?
• How can I install commonly used developer tools?
• Can I develop a hybrid native/HTML5 app for the Ubuntu Phone?
• Will developers be able to use ruby or python for apps on ubuntu mobile?

Platform and System Services
• When to use gconf vs dconf?
• How to add support for new services to Friends?
• How can I use the voice recognition used by Android on Ubuntu?
• How Will App Permissions be Handled in Ubuntu Touch?
• How do I programmatically detect presence changes in the messaging menu?
• Is there a digital protection system in place to prevent piracy of commercial applications?
• What is the best practice for saving data in Ubuntu one db from mobile?
• How to retrieve a list of devices connected to Ubuntu One?
• Ubuntu one API file upload maximum size
• How can I run a command from a qml script?
• Run system commands from QML App

UI Components and Shell Integration
• Is there any tutorial for programming Unity indicators?
• How do I create a working indicator with Qt/C++?
• What is the equivalent to Android Intent and BroadcastReceiver?
• How to use gettext to make QML files translatable
• What is the @ sign added as a suffix to images used for apps based in the Ubuntu SDK?
• How to remove my application's appindicator programmatically?
• What Interface Toolkit is being recommended for Ubuntu on Nexus7/Mobile Devices?
• Unity Launcher API for C++
• How to use theming in QML for Ubuntu Phone
• How to create a dialog and set title and text dynamically
• What icon does Unity use for an application?
• How can I center an ActivityIndicator within the screen?
• How to create very very simple GUI application for Ubuntu?
• How to emit onDropped in QML drag n drop example?
• Re-use toolbar code for each tab
• Screen dependent image resolution
• Using AppIndicators with the Qt framework
• Where are the default Unity lenses stored?
• Large amount of scrollable text in Ubuntu touch
• How do I get an UbuntuShape to transition (fade) between different images?
• Bad color of backgroundColor in a MainView when fixed to "#F1E1A3"
• Set background for Page{} element in ubuntu touch
• Buttons in ubuntu touch toolbar
• How can I invoke the soft-keyboard widget on Ubuntu-touch?

Device Sensors
• Low-level 10-finger multi-touch data on the Nexus 7?
• Will location data be available to ubuntu mobile apps?

Games
• Is there a simple "Hello World" for making games?
• What 2D/3D engines and game SDKs are available?
• Which free 2D game engine for Ubuntu is the best choice for me?
• Where is the documentation for programming OpenGL ES for Ubuntu Touch?

Files and Storage
• Where do applications typically store data?
• What is the preferred way to store application settings?
• Can commercial applications use Gsettings?

Multimedia
• How to pass the image preview from the QML Camera component to a C++ plugin
• Is there a standard or recommended sound lib in Ubuntu?
• Playing Sound with Ubuntu QML Toolkit preview
• Problem with SVG image in QML

Networking
• How to programmatically get a list of wireless SSIDs in range from NetworkManager
• get text from a Website in javascript/qml
https://manualzz.com/doc/44578769/ubports-documentation
Building Flappy Bird #6 – Randomization & Ground

Right now, our game is a bit too easy. It goes on forever, but it's always exactly the same. What we want to do next is add some variation to the seaweed. To accomplish this, we'll have our game pick a randomized Y value for the position. Since we already move our seaweed when it gets to -15 on the X axis, we can do the randomization at that time. To do the randomization, we'll just call into the Random function Unity provides us.

float randomYPosition = UnityEngine.Random.Range(-3, 3);

UnityEngine.Random.Range will return a value between the first and second numbers passed in. For this example, we're passing negative 3 and positive 3, so we'll get a number somewhere between there.

Change your "MoveLeft" script so that the reset logic picks a random Y position (only the reset block is shown here; the leftward movement from the earlier parts stays as it is):

if (transform.position.x <= -15)
{
    float randomYPosition = UnityEngine.Random.Range(-3, 3);
    transform.position = new Vector3(15, randomYPosition, 0);
}

Give the game a play now. Notice how the seaweed Y position is changing just enough to add some difficulty to our game.

Cheats (Bugs we should fix)

If you've been playing all these times I said to Play, you've probably noticed a few issues. For example, if you fall down without hitting a seaweed, you just fall, there's no ground. The same goes for flying too high, you can go above the seaweed and just hang out there safely.

Open the "Fish" script and modify the Update() method so it also checks the Y bounds (the existing input handling from the earlier parts is omitted here):

void Update()
{
    // ... existing flap/click handling ...

    if (transform.position.y > 6f || transform.position.y < -6f)
    {
        Application.LoadLevel(0);
    }
}

If you play again, you'll see that when the fish drops below -6 on the Y axis, the fish dies and the level re-loads. The same happens if you click fast enough and bring your fish above positive 6 on the Y axis.

Real Ground

Let's add some real ground now. In Flappy Bird, we have a simple striped ground (mario ground). For our game, we have some dirt. To do this, we're actually going to add a quad to our scene. The quad is located under 3D assets, but it does exactly what we need for our 2D game. Remember you can mix and match 2D/3D in your games.

Rename the new quad to "Ground". Adjust the position to [0, -4.8, 0] (Y is -4.8). Set the X value of Scale to 20. Your Game View should now look like this.

Materials

A quad is not a sprite, so we don't have a Sprite Renderer. What we have instead is a Mesh Renderer. What we need to do is change the Material of our renderer. The art package you downloaded in part 1 has a materials folder with a material named "Ground". Drag that "Ground" material and drop it onto the "Ground" Game Object in the Hierarchy. You could also drag the material onto the area that says "Element 0" in the screenshot above.

Since the quad is a 3D object, when we added it, there was a 3D collider attached to it. That collider is a new type that we haven't used before called a MeshCollider. We're building a 2D game though, so we need to remove that 3D collider. Then add a BoxCollider2D to our "ground". Your BoxCollider2D should have a nice rectangular green outline.

When we hit the ground, we want it to do the same thing the seaweed does, so let's reuse one of our old scripts. Add the "SeaweedCollisionHandler" script to the "ground".

Get ready to play. Now think for a second. What do you expect to happen? Go ahead and hit play to see if you were right.

What's going on? Right now, it probably seems like things have gotten worse. Your fish is sliding along the ground. The seaweed is sliding along the ground. And your fish isn't dying until he slides in. If you remember from part 2, we forgot to check the IsTrigger checkbox.
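If you prefer flipping that flag from code rather than in the Inspector, a one-line sketch (assuming it runs in a script attached to the ground object) would be:

// Equivalent to ticking the IsTrigger checkbox in the Inspector
GetComponent<BoxCollider2D>().isTrigger = true;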
Go ahead and check it now, then give it another try. Your fish should now be dying the moment it touches the ground.

Animating the Ground

The last thing we need to do is get our ground animating. Previously, when we wanted things to move, we applied a Translate on their rigidbody. For the ground though, we're doing something different. We created the ground as a Quad for a specific reason. We want to animate the texture on it, without actually moving the transform. To do that, we'll need to create a new script. Create one now named "GroundScroller".

Open the "GroundScroller" script and edit it to match this:

using UnityEngine;

public class GroundScroller : MonoBehaviour
{
    [SerializeField]
    private float _scrollSpeed = 5f;

    // Update is called once per frame
    void Update()
    {
        // Get the current offset
        Vector2 currentTextureOffset = this.GetComponent<Renderer>().material.GetTextureOffset("_MainTex");

        // Determine the amount to scroll this frame
        float distanceToScrollLeft = Time.deltaTime * _scrollSpeed;

        // Calculate the new offset (Add current + distance)
        float newTextureOffset_X = currentTextureOffset.x + distanceToScrollLeft;

        // Create a new Vector2 with the updated offset
        currentTextureOffset = new Vector2(newTextureOffset_X, currentTextureOffset.y);

        // Set the offset to our new value
        this.GetComponent<Renderer>().material.SetTextureOffset("_MainTex", currentTextureOffset);
    }
}

Attach the "GroundScroller" script to the Ground GameObject. Try playing again and watch the ground scroll along with the seaweed!

Well, maybe it's not quite scrolling "along" with the seaweed. It's going a bit too fast and looks pretty strange. Luckily, if you were paying attention to the code, we've added a variable to adjust the scroll speed. If you try to adjust the speed while playing, it's going to keep resetting. This is because of how we're handling death. When we do a LoadLevel to reload the scene, all of those gameobjects are being re-loaded and re-created. We have a couple of options for finding the correct speed.

- Change our death system to not use LoadLevel
- Stop the game, adjust the value, hit play, rinse, repeat (until we get the right value).
- Disable the fish and adjust while the game plays.

Personally, I prefer the easy way, so let's go with the third option.

- Stop playing.
- Now disable the Fish.
- Start Playing again
- Adjust the speed until you find a reasonable value. (I ended up with 1.5)
- Memorize that number
- Stop playing
- Re-enter that number.

Now play one more time and enjoy what you've made. See how far you can get.

Next Up

Great work so far. Our game has come together and is functional and fun. In the next part, we'll make our game a little fancier with some props and add a unique scoring system.
https://unity3d.college/2015/11/17/unity3d-intro-building-flappy-bird-part-6/
JavaScript is the new Perl

Let's take a moment to talk about Perl. Like Perl before it, JavaScript may eventually be superseded, perhaps by Dart, or maybe by a non-backwards compatible mode of ECMAScript. In the meantime, to work around some of these issues, JavaScript is still being used much like an Assembly language. (GWT, CoffeeScript, TypeScript all compile to JavaScript.)

Now where is JavaScript? We are seeing a similar explosion of packages (libraries), like Perl did, which led to the development of CPAN (you could liken this to the jQuery plugin ecosystem, which is neither as formal, reliable, nor as convenient or automated.) There's a similar explosion of JavaScript implementations on the server side and in other languages, leading to issues with compatibility and runtime bugs. If you tried to use ActiveState Perl on Windows, or JavaScript via Rhino in Java, then you know what I am talking about. This is improving now, but so has it also improved in Perl, which compared to JavaScript was far more stable.

Still don't believe that JavaScript is the new Perl? jQuery and NodeJS modules, likened to a very distributed collection of Perl modules, are the glue that holds together the JavaScript ecosystem and provides browser compatibility, and it admittedly does a pretty good job; however, sooner or later, the lack of language constructs like truly enforceable namespace boundaries, and the general mess created when teams get a little bit bigger, is going to set in. This is seen over and over as the new wave of developers comes into corporate life: larger companies try out new technologies all the time, then decide it's costing measurably, and switch back to a stack that is resilient enough to withstand sloppy code.

We are even seeing a re-emergence of age-old discussions on "how to effectively architect large-scale applications", and how to keep from falling into the same pit of snakes that's been around for years – snakes that are now very long in the tooth. These thoughts and principles apply to any programming language, really, Perl, Python, Java included – so if you think the revival of this discussion will produce different results for JavaScript, then I think you are forgetting human nature. We ARE inherently lazy and most of us will ignore nearly any best practice or principle once "that deadline" gets too close. Nobody ever goes back to fix their mistakes once the project ends, or once they get rolled onto a new team. Java has been the only modern language to show moderate survivability when exposed to corporate laziness.

"So if JavaScript is doomed to become the next Perl like you say it is, then what do we do in the meantime?"

We do what we have been doing all along, because these are necessary steps for advancement. We continue to invest both in JavaScript, but also in technologies like Errai and GWT – two technologies whose growth echoes that of an early-2000s Java. We should be mindful of the fact that while JavaScript is HOT right now, it does actually have programmatic and strategic shortcomings that must not be forgotten, ignored, or "shooed" under the carpet. If you show me a long-term maintainable solution for JavaScript that enforces the team/feature barrier and holds up against "corporate meltdown" due to incompetence and laziness as application size increases, then I'd be willing to entertain the idea of using it for a bigger project. Keep trying the latest frameworks and cool UI plugins; keep trying to bridge the server-browser impedance mismatch; find what works and what does not – JavaScript is not going away.
We still have Perl apps out there, and whatever replaces JavaScript as a dynamic language (think ECMAScript 6, unless a non-backwards compatibility mode is introduced) will probably be viewed similarly to how Python or Ruby compare to Perl today. Backwards compatibility will be a problem for ECMAScript because it is necessary to enforce these constraints; it is not enough just to make them "available." Still, you don't see that many big Python and Ruby shops either (Google is an exception), so unless ECMAScript offers some of the same safety features as Java, it will probably end up much like Python and Ruby: "A little better."

In all reality, though, there is a big part of the JavaScript -> Perl/Python/Ruby comparison that we've omitted from the story to this point: Java. Java has eclipsed most dynamic languages in the corporation. We see new statically typed languages on the JVM practically every day, providing some more programmatic sugar and flexibility, but on bigger projects, Java is king. Now apply this to the JavaScript picture and you get a slightly different flavor of the same result. Not only are we seeing experiments replacing JavaScript with a similar dynamic language (Dart, CoffeeScript, or maybe just some necessary enhancements to JavaScript itself), but in order to support large projects, we are also likely to see a type-safe revolution in the browser as well. GWT is a good start, but progress has been slow – just like Java was back in 2002.

We've waited 10 years to see Java turn into the actually very useful and extremely powerful technology that it is today. Without a doubt, Java has the largest ecosystem of shared libraries in any programming ecosystem. Java has seen ubiquitous corporate adoption. Java is taught at most colleges and universities, and while you might try to make the point that "Python is being favored over Java" in some schools now, this is really not because of its technical capability, but more about teaching a more general set of programming knowledge that may or may not actually be useful in a business environment. Functional programming, variable interpolation, and the lack of a separate compile step make Python an appealing educational tool, certainly when combined with a shell language interpreter. This does not change what we use in the enterprise, in our daily jobs.

For example: when I graduated with my BS in Computer Science, before moving to Red Hat, I worked first at one of the top 5 American mutual fund companies, a big bank. I was tasked with something that nobody else had been able to do before, using Java, and I said, "Okay fine, I can do this easily in Perl." So I did it. Success? Or something else?

The result was a nice pat on the back for figuring it out – it was even deployed to production, but since I had left the team shortly before the release, nobody could figure out how to make changes to my scripts, didn't think to come ask me when environmental changes caused a failure, and it got abandoned and re-written in Java. Was that a good reason to abandon Perl as a solution? Definitely not, but there's a lot to be said for using a technology that is safe, using a technology that enforces 'some' good practices ("training wheels" of type-safety), and using a technology that is well known among the industry. This will be the reality for JavaScript and its corporate replacement, unless it can catch up soon.

In the end, JavaScript is good for us, just like Perl.
It pushes us to do better, pushes us to think outside the box, and pushes us to think twice about what we have been doing in the past. It certainly has its place, we can't ignore it, and we must acknowledge that it is very good at what it does; but, like Perl and Python, it's not the end of the line. Until we get our hands on the still-evolving ECMAScript 6, which may alleviate JavaScript's enterprise problems, we still haven't seen our "Java of the browser" yet. Except wait, yes we have. It's Errai and GWT.

See you in 10 years, JavaScript. Until then, I'm going to practice my Regular Expressions.
https://www.ocpsoft.org/opensource/javascript-is-the-new-perl/
Introduction

Nightwatch.js is one of the popular testing frameworks for end-to-end tests in web development. It's written in Node.js and works fine with most popular browsers and devices.

Problem

As most programmers claim, async/await is currently the easiest and most effective way to handle asynchronous code. I'm not going into much detail here, but will just remind you of the famous "callback hell" problem. If you want details, google it.

The Nightwatch.js library actually uses callbacks everywhere. Every command or assertion in it returns a promise. It might be a bit difficult to get your head around this at first, and with time you start looking for other possible ways to handle asynchronicity within this framework. I'm going to show you what I've come up with.

Simple example

Please note: all examples use the nightwatch-cucumber library.

When(/^Load the game$/g, () => {
    client.url("", function() {
        client.waitForElementToBeVisible(".main-div");
    });
});

What this code means is:
- Go to a certain URL
- Wait for the game to load

These two steps need to be executed one after another. The above implementation can be rewritten to:

When(/^Load the game$/g, async () => {
    await client.url("");
    await client.waitForElementToBeVisible(".main-div");
});

These two implementations are exactly the same. The second one is cleaner: instead of two levels of nesting we have only one level of plain, easily readable code. It is a common practice among programmers to reduce code complexity. For more details refer to the Sonar documentation, for instance.

Complex example

But what if we use a Nightwatch function to get some information from the website? Nightwatch commands don't return such a value directly; it is passed only to the callback.

When(/^Balance in the game should equal (.+)$/g, (expectedBalance) => {
    client.getBalance(function(result) {
        client.assert.ok(expectedBalance === result.value);
    });
});

What this code means is:
- Take the user balance from the game
- Assert the user balance against the expected value

And again: these two steps need to be executed one by one. We can refactor this code by adding another variable which will be assigned in the callback. It will look like this:

When(/^Balance in the game should equal (.+)$/g, async (expectedBalance) => {
    const actualBalance = {};
    await client.getBalance(function(result) {
        actualBalance.value = result.value;
    });
    await client.assert.ok(expectedBalance === actualBalance.value);
});

or, making the code cleaner:

// callbacks.js
const assignVariable = (variable) => {
    return (result) => {
        variable.value = result.value;
    };
};

module.exports = { assignVariable };

and

// main.js
const { assignVariable } = require("utils/callbacks");

When(/^Balance in the game should equal (.+)$/g, async (expectedBalance) => {
    const actualBalance = {};
    await client.getBalance(assignVariable(actualBalance));
    await client.assert.ok(expectedBalance === actualBalance.value);
});

Warning! Please note that passing parameters by variable in JavaScript is a bit complicated. In the function assignVariable you cannot just replace one variable with another. For details refer to this Stackoverflow question.
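To see why mutating a property works while reassigning the parameter does not, here is a small standalone sketch of the same pitfall (not Nightwatch-specific; the names are made up):

const assignDirectly = (variable) => (result) => {
    variable = result.value; // rebinds the local parameter only; the caller sees nothing
};

const assignProperty = (variable) => (result) => {
    variable.value = result.value; // mutates the shared object; the caller sees the change
};

const holder = {};
assignDirectly(holder)({ value: 42 });
console.log(holder.value); // undefined

assignProperty(holder)({ value: 42 });
console.log(holder.value); // 42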
Summary

This article was about introducing async/await into the Nightwatch.js framework. It makes the code cleaner and shorter. Instead of the jungle of tangled callbacks we get plain, easily readable code. This one practice made my work with Nightwatch.js much easier and even pleasant. :) Happy coding everyone.

Sources
- Nightwatch.js website:
- Nightwatch cucumber documentation:

https://blog.j-labs.pl/index.php?page=2018/10/How-to-make-Nigthwatch.js-library-work-with-asyncawait
Improved declarative SQLAlchemy models

Introducing SQLAlchemy Unchained

Enhanced declarative models for SQLAlchemy.

Useful Links

Installation

Requires Python 3.6+, SQLAlchemy and Alembic (for migrations)

$ pip install sqlalchemy-unchained

First let's create a directory structure to work with:

mkdir your-project && cd your-project && \
mkdir your_package && mkdir db && \
touch setup.py your_package/config.py your_package/db.py your_package/models.py

From now on it is assumed that you are working from the your-project directory. All file paths at the top of code samples will be relative to this directory, and all commands should be run from this directory (unless otherwise noted).

Configuration

Using SQLite

# your_package/config.py

import os


class Config:
    PROJECT_ROOT = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
    DATABASE_URI = 'sqlite:///' + os.path.join(PROJECT_ROOT, 'db', 'dev.sqlite')

Here we're creating an on-disk SQLite database at project-root/db/dev.sqlite.

Using PostgreSQL or MariaDB/MySQL

If instead you'd like to use PostgreSQL or MariaDB/MySQL, now would be the time to configure it. For example, to use PostgreSQL with the psycopg2 engine:

# your_package/config.py

import os


class Config:
    DATABASE_URI = '{engine}://{user}:{pw}@{host}:{port}/{db}'.format(
        engine=os.getenv('SQLALCHEMY_DATABASE_ENGINE', 'postgresql+psycopg2'),
        user=os.getenv('SQLALCHEMY_DATABASE_USER', 'your_db_user'),
        pw=os.getenv('SQLALCHEMY_DATABASE_PASSWORD', 'your_db_user_password'),
        host=os.getenv('SQLALCHEMY_DATABASE_HOST', '127.0.0.1'),
        port=os.getenv('SQLALCHEMY_DATABASE_PORT', 5432),
        db=os.getenv('SQLALCHEMY_DATABASE_NAME', 'your_db_name'))

For MariaDB/MySQL, replace the engine parameter with mysql+mysqldb and the port parameter with 3306. Note that you'll probably need to install the relevant driver package, e.g.:

# for postgresql+psycopg2
pip install psycopg2-binary

# for mysql+mysqldb
pip install mysqlclient

See the official documentation on SQLAlchemy Dialects to learn more about connecting to other database engines.

Connect

# your_package/db.py

from sqlalchemy_unchained import *

from .config import Config


engine, Session, Model, relationship = init_sqlalchemy_unchained(Config.DATABASE_URI)

If you need to customize the creation of any of these objects, this is what init_sqlalchemy_unchained is doing behind the scenes:

# your_package/db.py

from sqlalchemy.orm import relationship as _relationship
from sqlalchemy_unchained import *
from sqlalchemy_unchained import _wrap_with_default_query_class

from .config import Config


engine = create_engine(Config.DATABASE_URI)
Session = scoped_session_factory(bind=engine, query_cls=BaseQuery)
SessionManager.set_session_factory(Session)
Model = declarative_base(bind=engine)
relationship = _wrap_with_default_query_class(_relationship, BaseQuery)

Models

# your_package/models.py

from . import db


class Parent(db.Model):
    name = db.Column(db.String, nullable=False)
    children = db.relationship('Child', back_populates='parent')


class Child(db.Model):
    name = db.Column(db.String, nullable=False)
    parent_id = db.foreign_key('Parent', nullable=False)
    parent = db.relationship('Parent', back_populates='children')

This is the first bit that's really different from using stock SQLAlchemy. By default, models in SQLAlchemy Unchained automatically have their __tablename__ set to the snake_cased model class name, and include a primary key column id as well as the automatically-timestamped columns created_at and updated_at. This is customizable.
For example, if you wanted to rename the columns on Parent, and disable timestamping on Child and rename its table name to children:

# your_package/models.py

from . import db


class Parent(db.Model):
    class Meta:
        pk = 'pk'
        created_at = 'created'
        updated_at = 'updated'

    name = db.Column(db.String, nullable=False)
    children = db.relationship('Child', back_populates='parent')


class Child(db.Model):
    class Meta:
        table = 'children'
        created_at = None
        updated_at = None

    name = db.Column(db.String, nullable=False)
    parent_id = db.foreign_key('Parent', nullable=False)
    parent = db.relationship('Parent', back_populates='children')

There are other Meta options that SQLAlchemy Unchained supports, and we'll have a look at those in a bit. We'll also cover how to change the defaults for all models, as well as how to add support for your own custom Meta options. But for now, let's get migrations configured before we continue any further.

Migrations

Initialize Alembic:

alembic init db/migrations

Next, we need to configure Alembic to use the same database as we've already configured. This happens towards the top of the db/migrations/env.py file, which the alembic init command generated for us. Modify the following lines:

from your_package.config import Config
from your_package.db import Model
from your_package.models import *

For these import statements to work, we need to install our package. Let's create a minimal setup.py:

# setup.py

from setuptools import setup, find_packages


setup(
    name='your-project',
    version='0.1.0',
    packages=find_packages(exclude=['docs', 'tests']),
    include_package_data=True,
    zip_safe=False,
    install_requires=[
        'sqlalchemy-unchained==0.11.0',
    ],
)

And install the package into the virtual environment you're using for development:

pip install -e .

That should be all that's required to get migrations working. Let's generate a migration for our models and run it:

alembic revision --autogenerate -m 'create models'

# verify the generated migration is going to do what you want, and then run it:
alembic upgrade head

Session and Model Managers

SQLAlchemy Unchained encourages embracing the design patterns recommended by the Data Mapper Pattern that SQLAlchemy uses. This means we use managers (or services, if you prefer) to handle all of our interactions with the database. SQLAlchemy Unchained includes two classes to facilitate making this as easy as possible: SessionManager and ModelManager. SessionManager is a concrete class that you can and should use directly whenever you need to interact with the database session. ModelManager is an abstract subclass of SessionManager that you should extend for each of the models in your application:

from typing import Union

from your_package import db


class YourModel(db.Model):
    name = db.Column(db.String, nullable=False)


class YourModelManager(db.ModelManager):
    class Meta:
        model = YourModel

    def create(self, name, commit=False, **kwargs) -> YourModel:
        return super().create(name=name, commit=commit, **kwargs)

    def find_by_name(self, name) -> Union[YourModel, None]:
        return self.get_by(name=name)


instance = YourModelManager().create(name='foobar', commit=True)

Both SessionManager and ModelManager are singletons, so whenever you call SessionManager() or YourModelManager(), you will always get the same instance.
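As another sketch, a manager for the Parent model defined earlier might look like this (find_children_of is a made-up helper; get_by is the same lookup method used in find_by_name above):

from typing import List

from your_package import db
from your_package.models import Parent


class ParentManager(db.ModelManager):
    class Meta:
        model = Parent

    def find_children_of(self, name) -> List:
        # Look up a parent by name and return its related children, if any
        parent = self.get_by(name=name)
        return list(parent.children) if parent else []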
Meta Options

table (__tablename__)

class Foo(db.Model):
    class Meta:
        table: str = 'foo'

Set to customize the name of the table in the database for the model. By default, we use the model's class name converted to snake case.

NOTE: The snake case logic used is slightly different from that of Flask-SQLAlchemy, so if you're porting your models over and any of them have sequential upper-case letters, you will probably need to change the default.

pk (primary key column)

class Foo(db.Model):
    class Meta:
        pk: Union[str, None] = 'id'  # the default comes from _ModelRegistry.default_primary_key_column

Set to a string to customize the column name used for the primary key, or set to None to disable the column.

NOTE: Customizing the default primary key column name used for all models is different from customizing the defaults for other meta options. (You should subclass _ModelRegistry and set its default_primary_key_column attribute. This is necessary for the foreign_key helper function to work correctly.)

created_at (row insertion timestamp)

class Foo(db.Model):
    class Meta:
        created_at: Union[str, None] = 'created_at'  # 'created_at' is the default

Set to a string to customize the column name used for the creation timestamp, or set to None to disable the column.

updated_at (last updated timestamp)

class Foo(db.Model):
    class Meta:
        updated_at: Union[str, None] = 'updated_at'  # 'updated_at' is the default

Set to a string to customize the column name used for the updated timestamp, or set to None to disable the column.

repr (automatic pretty __repr__)

class Foo(db.Model):
    class Meta:
        repr: Tuple[str, ...] = ('id',)  # the default is (_ModelRegistry.default_primary_key_column,)

print(Foo())  # prints: Foo(id=1)

Set to a tuple of column (attribute) names to customize the representation of models.

validation

class Foo(db.Model):
    class Meta:
        validation: bool = True  # True is the default

Set to False to disable validation of model instances.

polymorphic (mapped model class hierarchies)

class Foo(db.Model):
    class Meta:
        polymorphic: Union[bool, str, None] = True  # None is the default


class Bar(Foo):
    pass

This meta option is disabled by default, and can be set to one of 'joined', 'single', or True (an alias for 'joined'). See the SQLAlchemy documentation on class inheritance hierarchies for more info. When polymorphic is enabled, there are two other meta options available to further customize its behavior:

class Foo(db.Model):
    class Meta:
        polymorphic = True
        polymorphic_on: str = 'discriminator'  # column name to store polymorphic_identity in
        polymorphic_identity: str = 'models.Foo'  # unique identifier to use for this model


class Bar(Foo):
    class Meta:
        polymorphic_identity = 'models.Bar'

polymorphic_identity is the identifier used by SQLAlchemy to distinguish which model class a row should use, and defaults to using the model's class name. The polymorphic_identity gets stored in the polymorphic_on column, which defaults to 'discriminator'.

IMPORTANT: The polymorphic and polymorphic_on Meta options should be specified on the base model of the hierarchy only. Conversely, if you want to customize polymorphic_identity, it should be specified on every model in the hierarchy.
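As a quick illustration of these options working together, here is a sketch (the model names are made up) of a joined-table hierarchy with a customized discriminator:

from your_package import db


class Shape(db.Model):
    class Meta:
        polymorphic = 'joined'
        polymorphic_on = 'kind'          # rows store their identity in the 'kind' column
        polymorphic_identity = 'shape'   # value stored for base-class rows


class Circle(Shape):
    class Meta:
        polymorphic_identity = 'circle'  # specified on every model in the hierarchy

    radius = db.Column(db.Integer)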
Model Validation

SQLAlchemy Unchained adds support for validating models before persisting them to the database. This is enabled by default, although you can disable it with the validation Meta option. There is one included validator: Required. It can be used like so:

from your_package import db


class YourModel(db.Model):
    first_name = db.Column(db.String, info=dict(required=True))
    middle_name = db.Column(db.String, info=dict(required='a custom message'))
    last_name = db.Column(db.String, info=dict(validators=[db.Required]))
    suffix = db.Column(db.String, info=dict(validators=[db.Required('a custom message')]))

There are two different ways you can write custom validation for your models. The first is by extending BaseValidator, implementing __call__, and raising ValidationError if the validation fails:

from your_package import db


class ValidateEmail(db.BaseValidator):
    def __call__(self, value):
        super().__call__(value)
        if '@' not in value:  # not how you should actually verify email addresses
            raise db.ValidationError(self.msg or 'Invalid email address')


class YourModel(db.Model):
    email = db.Column(db.String, info=dict(validators=[ValidateEmail]))

The second is by defining a validation classmethod or staticmethod directly on the model class:

from your_package import db


class YourModel(db.Model):
    email = db.Column(db.String)

    @staticmethod
    def validate_email(value):
        if '@' not in value:  # not how you should actually verify email addresses
            raise db.ValidationError('Invalid email address')

Validation methods defined on model classes must follow a specific naming convention: either validate_<column_name> or validates_<column_name> will work. Just like when implementing __call__ on BaseValidator, model validation methods should raise ValidationError if their validation fails. Validation happens automatically whenever you create or update a model instance. If any of the validators fail, ValidationErrors will be raised.
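Putting that together, a hypothetical sketch of what a failed validation might look like, assuming the email validator above and that validation fires on create as just described:

from your_package import db
from your_package.models import YourModel

try:
    YourModel(email='missing-the-at-sign')
except db.ValidationError as e:
    # the email validator above rejected the value
    print(e)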
Customizing Meta Options

The meta options available are configurable. Let's take a look at the implementation of the created_at meta option:

import sqlalchemy as sa
from py_meta_utils import McsArgs
from sqlalchemy import func as sa_func
from sqlalchemy_unchained import ColumnMetaOption


class CreatedAtColumnMetaOption(ColumnMetaOption):
    def __init__(self, name='created_at', default='created_at', inherit=True):
        super().__init__(name=name, default=default, inherit=inherit)

    def get_column(self, mcs_args: McsArgs):
        return sa.Column(sa.DateTime, server_default=sa_func.now())

For example's sake, let's say you wanted every model to have a required name column, but no automatic timestamping behavior. First we need to implement a ColumnMetaOption:

# your_package/base_model.py

import sqlalchemy as sa
from py_meta_utils import McsArgs
from sqlalchemy_unchained import (BaseModel as _BaseModel, ColumnMetaOption,
                                  ModelMetaOptionsFactory)


class NameColumnMetaOption(ColumnMetaOption):
    def __init__(self):
        super().__init__('name', default='name', inherit=True)

    def get_column(self, mcs_args: McsArgs):
        return sa.Column(sa.String, nullable=False)


class CustomModelMetaOptionsFactory(ModelMetaOptionsFactory):
    _options = ModelMetaOptionsFactory._options + [NameColumnMetaOption]


class BaseModel(_BaseModel):
    _meta_options_factory_class = CustomModelMetaOptionsFactory

    class Meta:
        created_at = None
        updated_at = None

The last step is to tell SQLAlchemy Unchained to use our customized BaseModel class:

# your_package/db.py

from sqlalchemy_unchained import *

from .base_model import BaseModel
from .config import Config


engine, Session, Model, relationship = init_sqlalchemy_unchained(Config.DATABASE_URI, model=BaseModel)

Customizing the Default Primary Key Column Name

The primary key column is special in that knowledge of its setting is required for determining foreign key column names during model class creation. The first step is to subclass the _ModelRegistry and set its default_primary_key_column class attribute:

# your_package/model_registry.py

from sqlalchemy_unchained import ModelRegistry as BaseModelRegistry


class CustomModelRegistry(BaseModelRegistry):
    default_primary_key_column = 'pk'

And then, in order to inform SQLAlchemy Unchained about your customized model registry, you need to call _ModelRegistry.set_singleton_class:

# your_package/db.py

from sqlalchemy_unchained import *
from sqlalchemy_unchained import ModelRegistry

from .config import Config
from .model_registry import CustomModelRegistry


ModelRegistry.set_singleton_class(CustomModelRegistry)

engine, Session, Model, relationship = init_sqlalchemy_unchained(Config.DATABASE_URI)

Lazy Mapping (experimental)

Lazy mapping is a feature that this package introduces on top of SQLAlchemy. It's experimental and disabled by default. In stock SQLAlchemy, when you define a model, the second that code gets imported, the base model's metaclass will register the model with SQLAlchemy's mapper. 99% of the time this is what you want to happen, but if for some reason you don't want that behavior, then you have to enable lazy mapping. There are two components to enabling lazy mapping. The first step is to customize the model registry:

# your_package/model_registry.py

from py_meta_utils import McsInitArgs
from sqlalchemy_unchained import ModelRegistry


class LazyModelRegistry(ModelRegistry):
    enable_lazy_mapping = True

    def should_initialize(self, mcs_init_args: McsInitArgs) -> bool:
        pass  # implement your custom logic for determining which models to
              # register with SQLAlchemy

And just like for customizing the primary key column, we need to inform _ModelRegistry of our subclass by calling _ModelRegistry.set_singleton_class:

# your_package/db.py

from sqlalchemy_unchained import *
from sqlalchemy_unchained import ModelRegistry

from .config import Config
from .model_registry import LazyModelRegistry


ModelRegistry.set_singleton_class(LazyModelRegistry)

engine, Session, Model, relationship = init_sqlalchemy_unchained(Config.DATABASE_URI)

The last step is to define your models like so:

class Foo(db.Model):
    class Meta:
        lazy_mapped = True
https://pypi.org/project/SQLAlchemy-Unchained/
Constructor in Java – Master all the Concepts in One Shot!

A constructor in Java is an object builder: a block of code which creates an object. It is very similar to a Java method; the main difference is that it does not have a return type, not even void. It is often referred to as a special kind of method. A constructor is invoked automatically when an object is created.

Note – When an object is created, at least one constructor is called. If there is no constructor, then the default constructor is called.

Rules for Writing Constructors

These are some rules for writing constructors in Java:
- The name of the constructor must be the same as the class name.
- A constructor must have no return type, not even void.
- In a constructor, access modifiers are used to control its access so that other classes can call the constructor.
- A constructor cannot be final, abstract, static or synchronized.

It's time to revise the concept of Classes and Objects in Java

Types of Constructor in Java

Following are the two types of constructors, let's discuss them with examples:

1. Default Constructor

A constructor that has no parameter is called a default constructor. If we do not implement any constructor in our program, then the Java compiler automatically creates one.

Example-

package com.dataflair.constructor;

class DataFlair {
    int number;
    String name;

    DataFlair() {
        System.out.println("Default Constructor called");
    }
}

public class DefaultConstructor {
    public static void main(String[] args) {
        DataFlair objectName = new DataFlair();
        System.out.println(objectName.name);
        System.out.println(objectName.number);
    }
}

2. Parameterized Constructor

A constructor that has parameters is called a parameterized constructor. It is mainly used to provide different values to distinct objects, although you can use the same values too.

Example-

package com.dataflair.constructor;

class DataFlair {
    String employeeName;
    int employeeId;

    DataFlair(String employeeName, int employeeId) {
        this.employeeName = employeeName;
        this.employeeId = employeeId;
    }
}

public class ParameterizedConstructor {
    public static void main(String[] args) {
        DataFlair employeeObject = new DataFlair("Renuka", 20);
        System.out.println("DataFlarian Name " + employeeObject.employeeName + " and DataFlarian Id :" + employeeObject.employeeId);
    }
}

Java Constructor Chaining

When we call a constructor from another constructor of the same class, the procedure is known as constructor chaining in Java. The main purpose of constructor chaining is that you can pass parameters through different constructors, but the initialization is done in the same place. If two different constructors require the same parameter, then without chaining we would unnecessarily need to initialize that parameter twice.
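Since the text above does not show it, here is a minimal sketch (class and field names are made up) of constructor chaining using this():

class Employee {
    String name;
    int id;

    Employee() {
        this("Unknown", -1);            // delegates to the two-argument constructor
    }

    Employee(String name) {
        this(name, -1);                 // different parameters, same initialization path
    }

    Employee(String name, int id) {     // all initialization happens in one place
        this.name = name;
        this.id = id;
    }
}

Note that a this() call must be the first statement in the constructor that uses it.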
Don’t forget to check Constructor Overloading in Java Example – package com.dataflair.constructor; class DataFlair { DataFlair(String name) { System.out.println("Constructor with one " + "argument - String : " + name); } DataFlair(String name, int age) { System.out.print("Constructor with two arguments - String and Integer : " + name + " "+ age); } DataFlair(long id) { System.out.println(); System.out.println("Constructor with one argument - Long : " + id); } } public class ConstructorOverloading { public static void main(String[] args) { DataFlair ObjectName = new DataFlair("Renuka"); DataFlair ObjectName1 = new DataFlair("Aakash", 26); DataFlair ObjectName2 = new DataFlair(325614567); } } Difference Between Constructor and Method in Java These are the following point of comparison between constructor vs method in Java. - The constructor is a block of code which instantiates a newly created object. On the other hand, a method is a collection of statements that always return value depends upon its execution. - Constructors are called implicitly, while methods are called explicitly. - A constructor does not have any return type but methods may have. - The name of the constructor should be the same as the class name. On the other hand, the Method name should not be the same as the class name. - If there is no constructor then the default constructor is created by the compiler itself. In the case of the method, there is no default method provided. Summary From the above tutorial, we can say constructors are more important than methods in Java. We can use it to initialize the objects to default values at the time of object creation. Constructors are not mandatory, but if you create them they should follow all the protocols. It’s right time to know about Copy Constructor in Java The English used in the tutorial is quite confusing and is not easy to understand. You guys should look to simplify the use of English, for example, this line “Default constructor in Java gives the default esteems to the protest like 0, invalid and so on relying upon the sort.” is not easy for a newbie(both for Java and English) to understand. Really Awesome explanations for begineers Hi Koki, Do rate us on Google and share your feedback there too.
https://data-flair.training/blogs/constructor-in-java/
Changelog History

v1.0.0

v1.0.0-alpha

v0.10.3 (May 15, 2020)

Info
🚀 This is a maintenance release for the 0.10.x release train. Please find the complete list of changes here.
📄 The API Docs can be found here

Committers
🎉 MANY THANKS TO ALL COMMITTERS! 🎉
- ⭐️ Mincong Huang (@mincong-h)
- ⭐️ Daniel Dietrich (@danieldietrich)

🔄 Changes

v0.10.2 (August 02, 2019)

🚀 This patch release fixes the bug of overlapping JPMS module names by removing the Automatic-Module-Name attributes from the MANIFEST.MF files.
🚀 The upcoming release v1.0.0 will not have Automatic-Module-Name attributes.
🚀 The next release v2.0.0 will have proper JPMS modules.

v0.10.1 (July 23, 2019)

Info
🚀 This is a maintenance release for the 0.10.x release train. Please find the complete list of changes here.

Committers
- ⭐️ Theodor A. Dumitrescu (@thadumi)
- ⭐️ Bram Schuur (@craffit)

🔄 Changes
- 🛠 Bugfix: #2430 Future.reduce considers executor
- 🛠 Bugfix: #2426 Fixes DistictIterator to not eat null values
- 🛠 Bugfix: #2405 Fixes patmat corner case that might produce a ClassCastException
- 🛠 Bugfix: #2403 ClassCastException during pattern matching
- 🛠 Bugfix: #2399 Fix: CharSeq implements Comparable
- 👌 Improvement: #2400 Improve performance of last() call on TreeMap

v0.10.0 (January 20, 2019)

Info
🚀 The minor release 0.10.0 focuses on several API improvements. Please find the complete list of changes here.
📄 The API Docs can be found here

Committers
🎉 MANY THANKS TO ALL COMMITTERS (AND THEIR PATIENCE)! 🎉
- ⭐️ Amy (@amygithub)
- ⭐️ Emmanuel Touzery (@emmanueltouzery)
- ⭐️ Erlend Hamnaberg (@hamnis)
- ⭐️ Florian Stefan (@florian-stefan)
- ⭐️ Grzegorz Gałęzowski (@grzesiek-galezowski)
- ⭐️ Igor Konoplyanko (@cauchypeano)
- ⭐️ J. B. Rainsberger (@jbrains)
- ⭐️ James Lorenzen (@jlorenzen)
- ⭐️ Jia Chen (@grievejia)
- ⭐️ Julien Debon (@Sir4ur0n)
- ⭐️ Nándor Előd Fekete (@nfekete)
- ⭐️ Nataliia Privezentseva (@nataliiaprivezentseva)
- ⭐️ Maciej Górski (@mg6maciej)
- ⭐️ Mathias Düsterhöft (@mduesterhoeft)
- ⭐️ Michał Patejko (@miszasty93)
- ⭐️ Michael Ummels (@ummels)
- ⭐️ Mikołaj Fejzer (@mfejzer)
- ⭐️ Nazarii Bardiuk (@nbardiuk)
- ⭐️ Pap Lőrinc (@paplorinc)
- ⭐️ Pascal Schumacher (@PascalSchumacher)
- ⭐️ Peter Buckley (@dx-pbuckley)
- ⭐️ Robert Erdin (@roberterdin)
- ⭐️ Ruslan Sennov (@ruslansennov)
- ⭐️ Sebastian Zarnekow (@szarnekow)
- ⭐️ Sergey Pereverzov (@serp92)
- ⭐️ Stephen Kestle (@skestle)
- ⭐️ Valery (@valery1707)
- ⭐️ Victor Buldakov (@v1ctor)

Note: A few contributions didn't make it into 0.10.0 because of backward incompatibilities.

🔄 Changes

Instead of describing all changes in detail, I will provide a list and show some examples. Beside new features there were also several (internal) improvements not shown here.

Core/API
- 🔄 Change (internal): Removed internal interface io.vavr.Lambda which was on top of the (Checked)Function type hierarchy. It was not public.
- 🔋 Feature: For-comprehension supports List, Option, Future, Try
- 🔋 Feature: Tuple - append(), concat() and hash()
- 🔋 Feature: CheckedConsumer, CheckedPredicate and CheckedRunnable enhancements
- 🔋 Feature: PartialFunction now implements Function1
- 🔋 Feature: Predicates.not()
- 🔋 Feature: Value: toJavaArray(IntFunction), toTree(Function, Function)
- 🗄 Deprecation (for removal): API.Map(Tuple2)
- 🗄 Deprecation (for removal): API.LinkedMap(Tuple2)
- 🗄 Deprecation (for removal): API.SortedMap(Tuple2)
- 🗄 Deprecation (for removal): API.SortedMap(Comparator, Tuple2)
- 🗄 Deprecation (for removal): API.SortedMap(java.util.Map)
- 🗄 Deprecation (for removal): Value.toLeft()
- 🗄 Deprecation (for removal): Value.toRight()
- 🗄 Deprecation (for removal): Value.toValid()
- 🗄 Deprecation (for removal): Value.toInvalid()

Collections
- 🔋 Feature: Traversable: forEachWithIndex, reject(Predicate)
- 🔋 Feature: Iterator/Stream: fill(int, Object)
- 🔋 Feature: Map/Multimap: reject(BiPredicate), rejectKeys(Predicate), rejectValues(Predicate), keysIterator(), valuesIterator()
- 🔋 Feature: Map/Seq: asPartialFunction()
- 🔋 Feature: Seq.rotateLeft, rotateRight, takeRight, takeRightUntil, takeRightWhile

Concurrent
- 🔄 Change: Future now uses Executor instead of ExecutorService. The executorService() works as before if Future was initialized with an ExecutorService, otherwise it throws. Use executor() instead.
- 🔄 Change: Future DEFAULT_EXECUTOR: ForkJoinPool.commonPool()
- 🔋 Feature: Future.await(long timeout, TimeUnit unit)
- 🔋 Feature: Future.isCancelled()
- 🔋 Feature (experimental): Future.run(Task), Future.run(Executor, Task)
- 🗄 Deprecation (for removal): Seq/Map/Set withDefault, withDefaultValue

Controls
- 🔋 Feature: Either.sequence, Either.sequenceRight
- 🔋 Feature: Either.traverse, Either.traverseRight
- 🔋 Feature: Either.filterOrElse
- 🔋 Feature: Either.toValidation
- 🔋 Feature: Option.traverse
- 🔋 Feature: Option.fold
- 🔋 Feature: Try.traverse
- 🔋 Feature: Try.onFailure
- 🔋 Feature: Try.fold
- 🔋 Feature: Try.toValidation
- 🔋 Feature: Validation.fromTry
- 🔋 Feature: Validation.traverse
- 🗄 Deprecation (for removal): Either.left(), Either.right()
- 🗄 Deprecation (for removal): Either.LeftProjection, Either.RightProjection
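To illustrate the new For-comprehension support listed under Core/API above, a small sketch (the values are made up):

// 0.10.0 lets For-comprehensions combine Option values (likewise List, Future, Try)
import static io.vavr.API.*;
import io.vavr.control.Option;

Option<Integer> sum = For(Option(1), Option(2))
        .yield((a, b) -> a + b);   // = Some(3); a None would short-circuit the comprehension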
v0.9.3 (January 07, 2019)

Info
🚀 The release increases stability and performance. Please find the complete list of changes here.
📄 The API Docs can be found here

Committers
🎉 MANY THANKS TO ALL COMMITTERS THAT MADE THIS RELEASE POSSIBLE 🎉
- ⭐️ Igor Konoplyanko (@cauchypeano)
- ⭐️ J. B. Rainsberger (@jbrains)
- ⭐️ Jia Chen (@grievejia)
- ⭐️ Nándor Előd Fekete (@nfekete)
- ⭐️ Nataliia Privezentseva (@nataliiaprivezentseva)
- ⭐️ Ruslan Sennov (@ruslansennov)
- ⭐️ Sebastian Zarnekow (@szarnekow)
- ⭐️ Stephen Kestle (@skestle)
- ⭐️ Valery (@valery1707)

🐛 Bug fixes

🚨 LinkedHashMap duplicate entries

In Vavr 0.9.2, all LinkedHashMap factory methods internally did not store keys and values correctly. Example:

var map = LinkedHashMap(1, "1", 1, "2", 2, "3", 2, "4");

// = 2 (CORRECT)
map.size();

// = LinkedHashSet(1, 1, 2, 2) (WRONG)
// = LinkedHashSet(1, 2)       (FIXED)
map.keySet();

// = List("1", "2", "3", "4") (WRONG)
// = List("2", "4")           (FIXED)
map.values();

Details can be found here.

🚨 TreeSet fell back to natural comparator after removing all elements

// = TreeSet(2, 1)
var set1 = TreeSet.ofAll(Comparator.reverseOrder(), List(1, 1, 2, 2));

// = TreeSet() with natural comparator (WRONG)
// = TreeSet() keeping reverse order   (FIXED)
var set2 = set1.removeAll();

// = TreeSet(1, 2) (WRONG)
// = TreeSet(2, 1) (FIXED)
set2.addAll(List(1, 1, 2, 2));

Details can be found here.

🚨 Stream flatMap memory consumption

Stream.flatMap used an inner class for iteration, with the effect of the result stream holding an unnecessary indirect reference to the head of the source stream, resulting in a "temporary" memory leak. However, when the reference to the original Stream was garbage-collected, the memory was completely freed. Details can be found here.

🐎 Performance improvements

🏁 Hash code calculation
Internally, we relied on Objects.hash(T... varargs) for hashCode calculation. A call Objects.hash(1, 2, 3) results in an array creation. In order to prevent that unnecessary instance creation, we added internal methods that preserve our hash semantics.

🏁 Micro-optimizations of collections
We did some micro-optimizations to
- CharSeq.ofAll(Iterable)
- CharSeq.prependAll(Iterable)
- Vector.ofAll(Iterable)
- Vector.appendAll(Iterable)
- Vector.prependAll(Iterable)
Low-level details can be found here.

🆕 New API

🎉 Map additions
🚀 We follow the Semantic Versioning scheme. Although this release is a patch release, there are two new methods: I hope your OSGi infrastructure does not complain about it.

Jar files

📦 Separate annotation processor jar
We separated the annotation processor vavr-match-processor-<version>.jar from vavr-match-<version>.jar. If you want to create your own pattern matching patterns, you need to include these two dependencies now instead of only vavr-match.

📚 Documentation

📚 Javadoc improvements
- We clarified that LinkedHashMap.put(K, V) and LinkedHashMap.add(T) have a worst-case linear complexity. This is because equal elements need to be replaced while preserving their position.
- 🛠 Several small improvements and fixes

Other improvements
- 👌 Improved interoperability with the GWT compiler
- Improved Eclipse integration for Vavr committers

v0.9.2 (November 24, 2017)

Contributors
- Daniel Dietrich
- Erlend Hamnaberg
- Michael Ummels
- Pap Lőrinc
- Robert Erdin
- Valeriy Vyrva

🔄 Changelog

Works fine on JDK 9
🚀 Vavr 0.9.2 now works fine on JDK 9. The previous release 0.9.1 had an internal dependency that broke the JDK 9 build - we fixed that.
Note: JDK 9 introduced a type inference bug that may affect Vavr's pattern matching in some cases. Please see JDK-8039214.

Collections
- We fixed the implementation of Multimap.last(): we had not overridden the default Traversable implementation.
- We fixed a problem with the intersection of ordered collections that are based on RedBlackTree (such as TreeSet).

Concurrent
- We fixed Future.traverse(ExecutorService, Iterable, Function). The ExecutorService was not taken into account.

🛠 More fixes...
- Beside the above, we fixed some javadoc typos.
Please find the complete list of changes here.

v0.9.1 (September 17, 2017)

Contributors
- Christian Bongiorno
- Daniel Dietrich
- Emmanuel Touzery
- Julien Debon
- Nazarii Bardiuk
- Pascal Schumacher
- Ruslan Sennov

🔄 Changelog

Concurrent operations
- 🛠 We fixed a bug that prevented onFailure actions from being performed when a Future has been cancelled.
- 👀 There are known problems with Promise that occur under certain circumstances (see details below). Please note that we did not fix this problem in 0.9.1. We currently work on it in #2093.
The main thread may be blocked forever if we use an operation that blocks on a Future returned by a Promise. We observed this behavior when we used a ForkJoinPool instead of the default CachedThreadPool. Example:

// we choose a work-stealing thread pool
ExecutorService executor = java.util.concurrent.ForkJoinPool.commonPool();

Future<Object> someFuture = Future.of(executor, () -> longRunningTask());

// the internal Future of a Promise might deadlock the system if we block on that Future
Promise<Object> promise = Promise.make(executor);
someFuture.onComplete(promise::complete);

// the bug only shows up when calling a blocking operation, like get() or isEmpty()
Object result = promise.future().get();

Numeric operations
- ✂ Removed the Traversable.min()/max() overloads TreeSet.min() and TreeSet.max()
- Made Traversable.average(), sum() and product() more accurate.

TreeSet min()/max()

TreeSet implements SortedSet, which represents distinct elements that are ordered using a specific Comparator. By default, Traversable.min() and max() calculate the minimum resp. maximum element in linear time O(n) using the natural element order. However, we used the TreeSet collection characteristic to calculate the min() / max() in constant time O(1). This was wrong for two reasons:

The Traversable API spec states that min() and max() are calculated using the natural element order. This has to be the case because of the Liskov substitution principle, see the examples below.

The minimum of any non-empty collection containing double values is Double.NaN if one or more elements are NaN. But the natural Comparator of Double is defined in such a way that NaN >= d for every double d. Example:

// = TreeSet(3, 2, 1)
Set<Integer> ints = TreeSet.of(Comparator.reverseOrder(), 1, 2, 3);

// = 3 (before), = 1 (after)
ints.min();

// = 1 (before), = 3 (after)
ints.max();

// = List(1.0, NaN, 3.0)
List<Double> doubles = List.of(1.0, Double.NaN, 3.0);

// = 1.0 (before), = NaN (after)
doubles.min();

// = NaN (both ok, before and after this change)
doubles.max();

Traversable average(), sum() and product()

sum() and product() operate on elements of type Number. Now we return a Number according to the input argument or fall back to double. sum() and average() now internally use an improved summation compensation algorithm that fixes problems that occur in standard Java. Example:

// = OptionalDouble(0.0) (wrong)
j.u.s.DoubleStream.of(1.0, 10e100, 2.0, -10e100).average()

// = Some(0.75) (correct)
List.of(1.0, 10e100, 2.0, -10e100).average()

Missing methods

We added
- Either.sequence(Iterable<? extends Either<? extends L, ? extends R>>)
- Either.sequenceRight(Iterable<? extends Either<? extends L, ? extends R>>)

Examples:

// = Right(List(all, right)) of type Either<Seq<Integer>, Seq<String>>
Either.sequence(List.of(Either.right("all"), Either.right("right")));

// = Left(List(1, 2)) of type Either<Seq<Integer>, Seq<String>>
Either.sequence(List.of(Either.left(1), Either.left(2), Either.right("ok")));

// = Right(List(all, right)) of type Either<Integer, Seq<String>>
Either.sequenceRight(List.of(Either.right("all"), Either.right("right")));

// = Left(1) of type Either<Integer, Seq<String>>
Either.sequenceRight(List.of(Either.left(1), Either.left(2), Either.right("ok")));

Type narrowing

We changed the generic bounds of these method arguments:
- Function0<R> Function0.narrow(Function0<? extends R>) (before: narrow(Supplier<? extends R>))
- Function1<T1, R> Function1.narrow(Function1<? super T1, ? extends R>) (before: narrow(Function<? super T1, ? extends R>))
- Function2<T1, T2, R> Function2.narrow(Function2<? super T1, ? super T2, ? extends R>) (before: narrow(BiFunction<? super T1, ? super T2, ? extends R>))
extends R>)) Function2<T1, T2, R> Function2.narrow(Function2<? super T1, ? super T2, ? extends R>)(before: narrow(BiFunction<? super T1, ? super T2, ? extends R>)) Background: Java is not able to do the following type assignment: M<? extends T> m = ...; M<T> narrowed = m; // does not work but it is correct for immutable objects. Therefore almost all Vavr types have narrow methods. M<? extends T> m = ...; M<T> narrowed = M.narrow(m); // works as expected ### 🛠 GWT compatibility fixes The following methods were annotated with @GwtIncompatible: Predicates#instanceOf(Class) asJava(), asJava(Consumer), asJavaMutable(), asJavaMutable(Consumer)of io.vavr.collection.Seqand all its subtypes, namely IndexedSeq, LinearSeq, Array, CharSeq, List, Queue, Streamand Vector 📚 Documentation We added more examples and improved the readability of the Javadoc: 💅 Thanks to Stuart Marks, he was so kind to initiate an issue in order to improve the default Javadoc style. You find the Vavr 0.9.1 API specification here. 🛠 More fixes... - 🚚 We removed internal memoization of sortBy() in order to fix an issue with lazy collections that have infinite size - ⚡️ We optimized collection conversion - 🏗 We fixed the generics of Multimap builders - We improved Traversable.reduceLeft - We improved Iterator.dropWhile and slideBy Please find the complete list of changes here. v0.9.0May 16, 2017 🔄 Changes to the Base Package io.vavr 🚚 We removed the interfaces Kind1and Kind2. They served as bridge for the removed module javaslang-pure, which contained experimental algebraic extensions. Values - 🚚 We removed getOption()in favor of toOption()(which has the same semantics) - We changed the functional interface argument of getOrElseTry(CheckedFunction0)(was: getOrElseTry(Try.CheckedSupplier)) - 🚚 We removed the conversion method toStack() - We replaced the conversion methods toJavaList(Supplier)by toJavaList(Function) toJavaSet(Supplier)by toJavaSet(Function)We added introspection methods isAsync() and isLazy() that provide information about a Value type at runtime We added getOrNull() which returns null if the Value is empty We added Java-like collect() methods We added several conversion methods: toCompletableFuture() toEither(Supplier) toEither(L) toInvalid(Supplier) toInvalid(T) toJavaArray(Class) toJavaCollection(Function) toJavaCollection(Supplier) toJavaList(Function) `toJavaMap(Supplier, Function, Function toJavaParallelStream() toJavaSet(Function) toLinkedMap(Function) toLinkedMap(Function, Function) toLinkedSet() toMap(Function, Function) toPriorityQueue() toPriorityQueue(Comparator) toSortedMap(Comparator, Function) toSortedMap(Comparator, Function, Function) toSortedMap(Function) toSortedMap(Function, Function) toSortedSet() toSortedSet(Comparator) toValid(Supplier) toValid(E) toValidation(Supplier) toValidation(L) ### Functions We removed the interface λ (the mother of all functions). It was neat but it had no practical purpose. The unicode character caused several problems with 3rd party tools, which did not handle unicode characters properly. - 🚚 We renamed the interface io.vavr.λto io.vavr.Lambdaand removed it from the public API. - 🚚 We removed the interface λ.Memoizedfrom the public API. We added PartialFunction, which is an enabler for a more performant pattern matching implementation #### Functional interfaces With Vavr 0.9 we bundled our functions in io.vavr. - 🚚 We moved the functional interfaces Try.CheckedConsumer, Try.CheckedPredicate, Try.CheckedRunnableto io.vavr. 
- We replaced the functional interface Try.CheckedSupplier by the existing CheckedFunction0.

👻 Exception Handling

We added some methods to:

uncheck an existing throwing function, e.g.

CheckedFunction1.of(x -> { throw new Error(); }).unchecked()

lift checked functions to an Option return type, e.g.

// = None
CheckedFunction1.lift(x -> { throw new Error(); }).apply(o);

lift checked functions to a Try return type, e.g.

// = Failure(Error)
CheckedFunction1.liftTry(x -> { throw new Error(); }).apply(o);

Other Factory Methods

create constant functions, e.g.

Function2.constant(1).apply(what, ever); // = 1

narrowing the generic types, e.g.

Function0<? extends CharSequence> f_ = () -> "hi";
Function0<CharSequence> f = Function0.narrow(f_);

Tuples
- We renamed transform() to apply(), e.g. y = f(x1, x2, x3) can be understood as y = Tuple(x1, x2, x3).apply(f).

Additions:
- ⚡️ Tuple fields can be updated using one of the update* methods, e.g. Tuple(1, 2, 3).update2(0).
- A Tuple2 can be swapped, e.g. Tuple(1, 2).swap().
- Tuples can be created from java.util.Map.Entry instances, e.g. Tuple.fromEntry(entry) // = Tuple2
- Tuples can be sequenced, e.g. Tuple.sequence1(Iterable<? extends Tuple1<? extends T1>>) // = Tuple1<Seq<T1>>
- Tuples can be narrowed, e.g. Tuple.narrow(Tuple1<? extends T1>) // = Tuple1<T1>

The API Gateway

We added io.vavr.API, which gives direct access to most of the Vavr API without additional imports. We are now able to start using Vavr by adding one gateway import. More imports can be added on demand by the IDE.

'Companion' Factory Methods

import static io.vavr.API.*;

The new static factory methods serve two things:

1. They add syntactic sugar. E.g. instead of Try.of(() -> new Error()) we now just write Try(() -> new Error()).
2. They reflect the expected return type.

Try<Integer> _try = Try(1);
Success<Integer> success = Success(1);
Failure<Integer> failure = Failure(new Error());
Option<Integer> option = Option(1);
Some<Integer> some = Some(1);
None<Integer> none = None();
Array<Integer> array = Array(1, 2, 3);
List<Integer> list = List(1, 2, 3);
Stream<Integer> stream = Stream(1, 2, 3);
Vector<Integer> vector = Vector(1, 2, 3);
Tuple1<T> tuple1 = Tuple(t);
Tuple3<T, U, V> tuple3 = Tuple(t, u, v);

E.g. Some(1) is expected to be Option.Some, not Option. However, type narrowing is possible.

// types work as expected
Option<CharSequence> option = Some("");

// str might be null
Option<CharSequence> option = Option(str);

// also possible, it is a Some(null)!
Option<CharSequence> option = Some(null);

Uncheck Functions

We are now able to uncheck checked functions:

Function1<String, User> getUserById = CheckedFunction1.of(id -> { throw new IOException(); }).unchecked();
// = CheckedFunction1.of(User::getById).unchecked();

It is recommended to use the API.unchecked() shortcut instead:

Function1<String, User> getUserById = unchecked(id -> { throw new IOException(); });
// = unchecked(User::getById);

More Syntactic Sugar

🖨 We are now able to println to the console without having to type the System.out boilerplate:

println("easy");

Rapid prototyping may require deferring implementations. We use TODO() for that purpose:

Object fancyNewAlgorithm(Arg arg) {
    return TODO("some fancy stuff will appear soon");
}

fancyNewAlgorithm(TODO("need to construct the arg"));

The TODO() calls throw a NotImplementedError at runtime.

Pattern Matching

🐎 Internally, pattern matching now uses the new PartialFunction interface, which gives a performance boost.
Pattern Names

We removed the possibility to create pattern matching cases outside of the pattern scope. Now we always use the existing $() methods to lift objects and functions into a pattern context.

// before
Case(obj, ...)       // e.g. Case(1, ...)
Case(predicate, ...) // e.g. Case(t -> true, ...)

// after
Case($(obj), ...)       // e.g. Case($(1), ...)
Case($(predicate), ...) // e.g. Case($(t -> true), ...)

Our pattern generator vavr-match follows the new naming scheme and adds a $ to all generated pattern names. Please prefix all patterns with $, e.g. $Some(...) instead of Some(...).

import static io.vavr.API.*;
import static io.vavr.Patterns.*;

// same as `intOption.map(i -> i * 2).getOrElse(-1)`
int result = Match(intOption).of(
    Case($Some($()), i -> i * 2),
    Case($None(), -1)
);

Pre-defined Patterns

🛠 Accordingly, all pattern names in io.vavr.Patterns are now prefixed with a $, and
- we replaced the List() patterns by $Cons(...) and $Nil().
- 🚚 we removed the Stream() patterns because we need to enhance our pattern generator to express the inner patterns $Stream.Cons(...) and $Stream.Empty() (API not finished).

Pre-defined Predicates

We added the predicates:
exists(Predicate)
forAll(Predicate)
instanceOf(Class)
isNotNull()
isNull()

More details here.

🔄 Changes to the Base Package io.vavr.control

👻 Try keeps the original Exception
- 🚚 We removed Try.FatalException and Try.NonFatalException
- Instead, we sneakily throw the original exception when calling get() (even if it is checked!)

👀 For additions, see the Try API.

🔄 Changes to the Collections io.vavr.collection
- 🚚 We removed AbstractIterator from the public API
- We changed the index type from long to int. That affects many methods, like take(int), drop(int), zipWithIndex(), ...
- We removed the unsafe Map.of(Object...) factory methods, which interpreted the given objects as pairs.
- We added the safe Map.of(K, V, ...) factory methods (up to 10 key/value pairs).

Java Collection Views

Our sequential collections, i.e. all collections that implement Seq, can be converted to a java.util collection view in O(1). 0️⃣ We provide conversion methods for mutable and immutable collections. By default, collections are immutable, like our persistent collections.

java.util.List<Integer> list = Vector(1, 2, 3).asJava();

More examples can be found here.

More Collections

We completely re-implemented Vector. We added more collections:
- BitSet
- PriorityQueue
- Multimap: HashMultimap and LinkedHashMultimap
- SortedMultimap: TreeMultimap

📄 The collections got many additions. Please check out the API docs for further details.
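To tie several of these 0.9 changes together, here is a small, self-contained sketch of the $-prefixed patterns and the new O(1) Java collection views. The class name and sample values are mine, not from the changelog:

import static io.vavr.API.*;        // Match, Case, $, List, Option, ...
import static io.vavr.Patterns.*;   // $Some, $None, $Cons, $Nil

import io.vavr.collection.List;
import io.vavr.control.Option;

public class Vavr09Tour {
    public static void main(String[] args) {
        // $-prefixed patterns (0.9 naming scheme)
        Option<Integer> intOption = Option(21);
        int result = Match(intOption).of(
            Case($Some($()), i -> i * 2),   // matched value is bound to i
            Case($None(), -1)
        );
        System.out.println(result);          // 42

        // $Cons/$Nil replace the old List() patterns
        List<Integer> xs = List(1, 2, 3);
        String desc = Match(xs).of(
            Case($Cons($(), $()), (head, tail) -> "head=" + head),
            Case($Nil(), "empty")
        );
        System.out.println(desc);            // head=1

        // O(1) immutable java.util view of a Seq (new in 0.9)
        java.util.List<Integer> view = xs.asJava();
        System.out.println(view.get(0));     // 1
    }
}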
https://java.libhunt.com/javaslang-changelog
FMOUTCHAR(3W)                                                FMOUTCHAR(3W)

NAME
     fmoutchar - render a single glyph

SYNOPSIS
     #include <fmclient.h>

     long fmoutchar(fh, ch)
     fmfonthandle fh;
     unsigned int ch;

DESCRIPTION
     fmoutchar renders a single glyph from the given font. It does not change the current font. If the glyph doesn't exist, it spaces forward the width of a space; if a space doesn't exist, it spaces forward the width of the font. The width used is returned.

     Note that 'ch' is declared as 'unsigned int' so that characters with code > 256 can be displayed.

SEE ALSO
     fminit(3W), fmfindfont(3W), fmscalefont(3W), fmsetfont(3W)

NOTE
     This routine is available only in immediate mode.
https://nixdoc.net/man-pages/IRIX/man3w/fmoutchar.3w.html
Hey guys! Newbie here (to the forum). So I just updated my MacBook OS to OS X 10.10.5 (Yosemite) and have some issues with Java when I try to run a sketch in Processing.

The sketch:

import processing.sound.*;

SoundFile file;

void setup() {
  size(640, 360);
  background(255);

  // Load a soundfile from the /data folder of the sketch and play it back
  file = new SoundFile(this, "buzz.wav");
  file.play();
}

void draw() {
}

The console after I press 'run':

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGILL (0x4) at pc=0x000000019a2e4547, pid=559, tid=61807
#
# JRE version: Java(TM) SE Runtime Environment (8.0_60-b27) (build 1.8.0_60-b27)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.60-b23 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C  [libsndfile.1.dylib+0x2547]  psf_open_file+0xc3
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/Andy/hs_err_pid559.log
#
# If you would like to submit a bug report, please visit:
#
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Could not run the sketch (Target VM failed to initialize). Make sure that you haven't set the maximum available memory too high. For more information, read revisions.txt and Help → Troubleshooting.

I never had any issues with Processing before updating my OS, so I have no idea what's going on... So far I've tried updating Java to version 8, uninstalling it and reverting back to the Apple Java 6, and restarting my machine, but no luck with any of these so far :(

Any help/guidance would be amazing! Thanks!!! :)
https://forum.processing.org/two/discussion/12313/java-error-on-osx-10-10-5
The CNCF Technical Steering Committee (TOC) announced that they have accepted Contour as an incubating project. Contour is a Kubernetes Ingress controller that uses the Envoy Layer 7 (L7) proxy as a data plane.

Contour is an Ingress controller for Kubernetes clusters to accept external traffic into the cluster. It works with Envoy by functioning as a 'management server' - in Envoy parlance - effectively working as the control plane that directs how traffic should be managed in the Envoy proxy. Contour is implemented as a Custom Resource Definition (CRD) in Kubernetes and offers advanced routing features compared to the vanilla Kubernetes Ingress specification. Other similar Ingress controllers depend on annotations to provide advanced routing features.

Contour was created at Heptio in 2017 to get around limitations in the standard Kubernetes Ingress specification, and became part of VMware when the latter acquired Heptio in 2018. Contour deploys Envoy as a reverse proxy and load balancer and provides dynamic configuration updates to Envoy. To get around limitations of the vanilla Kubernetes Ingress spec - which is still in beta - the IngressRoute was created as a Custom Resource Definition (CRD) in Contour. Since then, it has been superseded by another Contour CRD called HTTPProxy. HTTPProxy also happens to be in beta, but it adds additional features like TLS configuration for backend services, HTTP header filters and manipulation policies, multiple weighted upstreams for a single path, traffic mirroring, and configurable load balancing strategies.

An HTTPProxy object can include other HTTPProxies - both in the same as well as in different namespaces. The latter feature is meant to address multi-team ingress management, in which different teams are segregated by using namespaces in the same Kubernetes cluster.

Envoy was designed as a "universal data plane", and with this in mind, it provides a set of management APIs meant to be implemented by providers that can implement control plane services. These APIs enable dynamic configuration changes to Envoy. The control plane, among other things, configures the filter chain that Envoy requests pass through. Other efforts to write a control plane for Envoy are Ambassador and Gloo.

The recommended deployment topology is to deploy Contour as a Deployment and Envoy as a DaemonSet with secure gRPC communication between them. Contour runs as an init container in "bootstrap" mode and writes a config to a temporary volume. The volume is then used by the Envoy container as "bootstrap configuration", which directs Envoy to use Contour as its management server. The management server supplies dynamic configuration at runtime using gRPC to Envoy to make routing decisions. Contour watches for changes in the Ingress, HTTPProxy (the successor to the deprecated IngressRoute), Secret, Service and Endpoint objects using the SharedInformer framework.

In the press release, Chris Aniszczyk, CTO of the Cloud Native Computing Foundation, noted that "Contour is a logical complement to Envoy and makes it easier to consume in a cloud native, multi-team environment". Contour has been adopted by quite a few organizations and products, some of them being Kinvolk, Replicated, Adobe's Project Ethos, Kintone and PhishLabs.

At the time of writing, Contour has had 50 releases. The 1.6.1 release of Contour supports Kubernetes versions 1.16 to 1.18, and the source code is hosted on GitHub.
https://www.infoq.com/news/2020/07/cncf-contour-kubernetes-ingress/?itm_source=presentations_about_Containers&itm_medium=link&itm_campaign=Containers
🎁 Using Git with Repl.it: A Short Guide

I stumbled upon this post, which described a method to access Git commands from within your repl. Using a Version Control System (VCS) like Git is incredibly useful, and even more so when augmented with GitHub. In the post, the accepted answer recommended using the os Python module and accessing system commands from there.

import os

os.system('git clone')  # the repository URL was omitted in the original post
os.chdir('./Sigag')
os.system('git status')

I created a little repl that demonstrates this. Make sure you delete the Sigag directory before starting the program (although it's not a strict requirement). After the clone has finished, I'm able to leave the repl, reopen the repl, and have the Git repository still there.

However, there is a much easier way to use Git commands. In most repls, you're able to enter the shell. Press F1, and type shell. Note that with some keyboards, you may need to press Fn+F1. (You can also press Ctrl+Shift+P - thanks @ArchieMaclean!) Now, you can just clone it the usual way (again, the repository URL is missing from the original post):

git clone
cd ./Sigag
git status

Once the clone has finished, you should see the Sigag directory in your file tree!

However, the output of ls and your file tree may sometimes differ. For example, I would type ls into the shell, and it would show Sigag as a directory, but my file tree would only show main.py. To fix this, simply refresh the page.

It may seem a bit convoluted getting this to work, but easier methods of using git will be introduced at a later date, according to the post below. The screenshot below was taken on the publish date of this guide.

I hope this was helpful 😄. Let me know if this helped you!

The problem is that after cloning a repo, it will be cloned into a separate folder. This prevents me from running the index.js file inside of that repo folder, since repl.it looks for the index.js located in /home/runner and not in /home/runner/repo. Another problem is file updates. After cloning a repo and refreshing the page (to update the file GUI), the repo folder is deleted and the file tree is replaced with the original repl files. Repl.it, please fix this so it will work.

Do you guys maybe know the keys on a Chromebook? I tried Ctrl+Shift+P but it is not working. I have no Fn key. I tried to get this to work with my Python project.

You can also press Ctrl+Shift+P (and type shell) to open the shell, if you don't have the function keys (like me).

@ArchieMaclean Hey, thanks for the tip! I put that in there. I'll edit it in - hopefully it helps out some fine chaps! :)
https://repl.it/talk/learn/Using-Git-with-Replit-A-Short-Guide/13491?order=votes
Hi everyone, I am doing a demo about background subtraction and people extraction. I already picked the foreground pixels out of the whole image by comparing the pixels of the static scene against a later frame where a person enters the scene. I use createGraphics to store the foreground pixels, and I would like to get a PNG with a fully transparent background. However, I only get a series of PNG files that look like the normal camera feed frame, not the foreground extraction.

I think the key section is here:

if (diff > threshold) {
  pixels[loc] = fgColor;
} else {
  pixels[loc] = color(0, 0);
}

I always get a "NullPointerException" for the line "pg.pixels[loc] = color(0, 0);", so I couldn't set the rest of the pixels as fully transparent. Does anyone have any idea about that? I have been stuck for too long and have no idea. Any help will be appreciated. Here is the whole code if necessary:

import processing.video.*;

Capture video;
PGraphics pg;
PImage backgroundImage;
float threshold = 30;

void setup() {
  size(320, 240);
  video = new Capture(this, width, height);
  video.start();
  backgroundImage = createImage(video.width, video.height, RGB);
  pg = createGraphics(320, 240);
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  loadPixels();
  video.loadPixels();
  backgroundImage.loadPixels();
  //pg.noSmooth();
  pg.beginDraw();
  pg.background(0, 0);
  pg.image(video, 0, 0);
  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int loc = x + y * video.width;
      color fgColor = video.pixels[loc];
      color bgColor = backgroundImage.pixels[loc];
      float r1 = red(fgColor);
      float g1 = green(fgColor);
      float b1 = blue(fgColor);
      float r2 = red(bgColor);
      float g2 = green(bgColor);
      float b2 = blue(bgColor);
      float diff = dist(r1, g1, b1, r2, g2, b2);
      //pg.loadPixels();
      if (diff > threshold) {
        pg.pixels[loc] = fgColor;
      } else {
        pg.pixels[loc] = color(0, 0); // color(gray, alpha)
        // pg.clear(); clears everything in a PGraphics object to make all of the pixels 100% transparent
      }
    }
  }
  noFill();
  pg.updatePixels();
  pg.endDraw();
  image(pg, 0, 0);
  pg.save("image_" + millis() + ".png");
}

void mousePressed() {
  backgroundImage.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  backgroundImage.updatePixels();
}

Answers

Hi jeremydouglass, thanks for your response. I tried pg.clear() or pg.background(0,0,0,0), but I got cumulative drawing. I found that maybe the issue is to send a PGraphics to each frame, not set the specific pixels transparent. But I have no idea how to put each PGraphics into each frame. Do you have any further advice for that? Thanks a lot, and looking forward to your reply.

Please share your code. If you are actually clearing your PGraphics then it shouldn't still be there -- unless you are accumulating in a different PGraphics than you are clearing, or if you are copying from the main screen back into your PGraphics after clearing, etc.

Hi jeremydouglass, thanks for your response. Here is my code. I think I didn't copy from the main screen. I still have no idea about that.
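For readers who land on this thread with the same NullPointerException: a PGraphics does not have a usable pixels[] array until loadPixels() has been called on it, so the commented-out pg.loadPixels() line in the sketch above is the likely culprit. A minimal sketch of the working pattern (my own illustration, not from the thread):

// Processing (Java) sketch: write per-pixel transparency into a PGraphics.
PGraphics pg;

void setup() {
  size(320, 240);
  pg = createGraphics(320, 240);
}

void draw() {
  pg.beginDraw();
  pg.loadPixels();   // allocates/refreshes pg.pixels; without this, pg.pixels may be null
  for (int i = 0; i < pg.pixels.length; i++) {
    // even pixels opaque red, odd pixels fully transparent
    pg.pixels[i] = (i % 2 == 0) ? color(255, 0, 0) : color(0, 0);
  }
  pg.updatePixels(); // push the array back into the PGraphics
  pg.endDraw();
  image(pg, 0, 0);
}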
https://forum.processing.org/two/discussion/24341/set-specific-pixels-fully-transparent-in-pgraphics
Details
- Type: Sub-task
- Status: Closed
- Priority: Major
- Resolution: Invalid
- Affects Version/s: 2.4.5
- Fix Version/s: None
- Component/s: class generator
- Labels: None

Description

The class names generated for closures in inner classes break Class.getSimpleName().

For example, the closure passed to .each in this example has the name Example$_Inner_closure1.class:

public class Example {
    private class Inner {
        def _ = [1, 2, 3].each {}
    }
}

Calling getSimpleName() on this class (e.g. as done by Weld on startup) throws a java.lang.InternalError:

Caused by: java.lang.InternalError: Malformed class name
    at java.lang.Class.getSimpleName(Class.java:1133) [:1.6.0_29]
    at java.lang.Class.isAnonymousClass(Class.java:1188) [:1.6.0_29]

I believe the class name is expected to be in the format Example$Inner$closure1. I've attached a test case to demonstrate the problem - extract the archive, cd to groovy-closure-classname-test and run mvn test. The test uses Weld to inject a ClosureClassNameTest instance, but fails when Weld calls getSimpleName() on the class for the closure on line 10.

Attachments

Issue Links
- is a clone of GROOVY-5351 Malformed class names for closures in inner classes - Closed
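To see why the generated name trips up the JDK: getSimpleName() parses the binary name relative to the enclosing class, and a name segment like _Inner_closure1 breaks that parsing on the affected JDKs. A hedged Java sketch of the failing call (the class name is taken from the report above; loading it this way assumes the compiled Groovy classes are on the classpath):

public class SimpleNameRepro {
    public static void main(String[] args) throws Exception {
        // Load the closure class that the Groovy compiler generated for Example.Inner
        Class<?> closureClass = Class.forName("Example$_Inner_closure1");

        // On JDKs affected by this report, this throws
        // java.lang.InternalError: Malformed class name
        System.out.println(closureClass.getSimpleName());
    }
}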
https://issues.apache.org/jira/browse/GROOVY-7757
QBluetoothAddress

Since: 1.2

#include <QtConnectivity/QBluetoothAddress>

More information will be added here shortly. For now, you'll find more extensive information about this class in the Qt reference for QBluetoothAddress.

The QBluetoothAddress class provides a Bluetooth address. Module: QtConnectivity. This class holds a Bluetooth address in a platform- and protocol-independent manner.

Overview

Public Functions Index
- Constructs a null Bluetooth address.
- Constructs a new Bluetooth address and assigns address to it.
- Constructs a new Bluetooth address and assigns address to it. The format of address can be either XX:XX:XX:XX:XX:XX or XXXXXXXXXXXX, where X is a hexadecimal digit. Case is not important.
- Constructs a new Bluetooth address which is a copy of other.
- Destructor.
- void: Sets the Bluetooth address to 00:00:00:00:00:00.
- bool: Returns true if the Bluetooth address is valid, otherwise returns false.
- bool: Compares this Bluetooth address with other. Returns true if the Bluetooth addresses are not equal, otherwise returns false.
- bool: Returns true if the Bluetooth address is less than other; otherwise returns false.
- QBluetoothAddress &: Assigns other to this Bluetooth address.
- bool: Compares this Bluetooth address to other. Returns true if the Bluetooth addresses are equal, otherwise returns false.
- quint64: Returns this Bluetooth address as a quint64.
https://developer.blackberry.com/native/reference/cascades/qbluetoothaddress.html
September 2012

If you have comments or suggestions about this documentation, contact Information Development by email at doc_feedback.

IBM, DB2, and AIX are registered trademarks of International Business Machines Corporation in the United States, other countries, or both. Linux is the registered trademark of Linus Torvalds. UNIX is the registered trademark of The Open Group in the US and other countries. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

BMC Software considers information included in this documentation to be proprietary and confidential. Your use of this information is subject to the terms and conditions of the applicable End User License Agreement for the product and the proprietary and restricted rights notices included in this documentation.

Customer Support

You can obtain technical support by using the Support page on the BMC Software website or by contacting Customer Support by telephone or email. To expedite your inquiry, please see Before Contacting BMC Software.

Support Website

You can obtain technical support from the BMC Software support website 24 hours a day, 7 days a week. From this website, you can:
- Read overviews about support services and programs that BMC Software offers.
- Find the most current information about BMC Software products.
- Search a database for problems similar to yours and possible solutions.
- Order or download product documentation.
- Report a problem or ask a question.
- Subscribe to receive email notices when new product versions are released.
- Find worldwide BMC Software support center locations and contact information, including email addresses, fax numbers, and telephone numbers.

Product information
- Product name
- Product version (release number)
- License number and password (trial or permanent)
- Machine type
- Operating system type, version, and service pack
- System hardware configuration
- Serial numbers
- Related software (database, application, and communication) including type, version, and service pack or maintenance level
- Messages received (and the time and date that you received them)

White Paper

This white paper explains how to use BIRT™ reporting tools with BMC Remedy Action Request System Web reports. BIRT is an open source, Eclipse-based reporting system. Using Web reports and the BIRT Report Designer, you can:
- Create reports based on BMC Remedy AR System data in the BIRT Report Designer, and then deploy those reports to the AR System server using the Report form.
- Modify out-of-the-box Web reports in the BIRT Report Designer, and deploy those reports to the AR System server using the Report form.

IMPORTANT: This white paper is intended for administrators with expertise using BMC Remedy AR System Web reports and the BIRT Report Designer. It is only intended to document what is specific or different about modifying reports for BMC Remedy AR System. For information and tutorials about using the BIRT Report Designer, choose Help > Help Contents in the BIRT Report Designer. For additional information about the BIRT Report Designer, see the BIRT project website. For information about Web reports, see the BMC Remedy Mid Tier Guide.
This white paper guides you through the following process:

Step 1 Install the BIRT Report Designer to work with Web reports.
Step 2 Enable BIRT to access your BMC Remedy AR System data by setting resource and template preferences.

The sections that follow cover:
- Installing the BIRT Report Designer to work with AR System Web reports
- Enabling BIRT to access your BMC Remedy AR System data by setting BIRT preferences
- Creating a new report with the BIRT Report Designer
- Deploying BIRT reports to the AR System server using the Report form
- Examples for modifying reports with the BIRT Report Designer

Installing the BIRT Report Designer to work with AR System Web reports

NOTE: For information about system requirements for the BIRT Report Designer, see the Eclipse documentation.

NOTE: The BIRT Report Designer is a 32-bit application, and requires a 32-bit Java installation.

1 Download the BIRT Report Designer zip file.
2 Extract the BIRT Report Designer zip file to a destination directory (BIRTInstallDir).
3 To open the BIRT Report Designer application, click the BIRT.exe executable file. The BIRT Report Designer application opens.

Now, proceed to Enabling BIRT to access your BMC Remedy AR System data by setting BIRT preferences.

To copy BMC Remedy AR System plug-ins for use with BIRT:
1 Open the Report form by typing its URL into your browser.
2 In the Type field, select BIRT Library, and then click Search.

Enabling BIRT to access your BMC Remedy AR System data by setting BIRT preferences

Copy the library files into a resources directory, for example: BIRTInstallDir/Library/Resources. In the Preferences box, copy the path for the Resources directory into the Resources folder field, and then click OK.

Figure 1-3: Resource panel for Preferences in the BIRT Report Designer

18 In the Preferences box, choose Report Design > Templates, and copy the path for the Templates directory into the Template folder field, and then click OK.

Figure 1-4: Template panel for Preferences in the BIRT Report Designer

Creating a new report with the BIRT Report Designer

To create a new report file, complete the New Report wizard and click Finish. The new report appears in the Layout editor pane of the BIRT Report Designer.

Figure 1-5: New report in the BIRT Report Designer

To enable data access for a BIRT report by creating a data source:

NOTE: If the Data Explorer tab is not open, choose Window > Show View > Data Explorer.

1 In the Data Explorer tab, right-click Data Sources, and create a new data source.
2 In the New Data Source box, make sure Create from a data source type in the following list (default) and BMC Remedy AR System ODA Data Source (default) are selected, and then click Next.

Figure 1-7: New Data Source box

NOTE: If BMC Remedy AR System ODA Data Source does not appear as a selection in the New Data Source box, make sure the correct BMC Remedy AR System plug-in files were copied to the BIRT install directory. For more information, see To copy BMC Remedy AR System plug-ins for use with BIRT.

3 In the New BMC Remedy AR System Designtime Data Source Profile box, enter the User Name and Server for the AR System server data source.

Figure 1-8: New BMC Remedy AR System Designtime Data Source Profile

4 Test the connection. If the test fails, correct the data source information, and then test the connection again until you are successful.

NOTE: You must create a BIRT data source before you build a data set. For details, see To enable data access for a BIRT report by creating a data source.

To build a data set:
1 In the BIRT Report Designer, go to the Data Explorer tab, right-click Data Sets, and create a new data set.
2 Configure the New Data Set box by doing the following tasks:
a Enter the Data Set Name. For example, Incident Data Set.
b Under the Data Source node, select the data source, and then click Next.
3 Configure the query:
b Click Add. The Available Fields box displays all fields in the selected form.
c Select the fields you want to include in the report, and then click OK.
d In the Qualification field, enter the criteria to be used for the query, click Verify, and then click Finish. Enter the qualification using the format 'field' operator "value". For example, 'Status' != "Closed" for the incidents that are not closed.
e Click Finish.

NOTE: BMC recommends using a query for a data set instead of filters. Data set filters are applied to the entire result set, and can impair performance. Use the qualification in the query configuration to filter data.

4 If you want to change a column label in a report (for example, change the Short Description label), edit the column properties, and click OK.

NOTE: For a description of the parameters that can be configured in the Data Set editor, see the BIRT online help.

The data that meets the criteria of the data set appears in the right pane of the Edit Data Set box.

Figure 1-13: Preview Results in Edit Data Set

2 Save the new BIRT report by choosing File > Save As, and then enter a meaningful file name.
3 Click Finish.

NOTE: To deploy the newly created BIRT report to the AR System server using the Report form, see To import the .rptdesign file from the BIRT Report Designer to the Report form and Deploying BIRT reports to the AR System server using the Report form.

Modifying an out-of-the-box Web report with the BIRT Report Designer

IMPORTANT: Before modifying an out-of-the-box Web report with BIRT, BMC recommends saving a copy of the report definition file (.rptdesign file) and its corresponding record in the Report form. To make sure that future upgrades do not overwrite your modified Web report, set Status to Pending or Inactive. You can also import your original exported Web report (.rptdesign file) to a different AR System server. For details, see Importing a Web report to a different AR System server.

To export a .rptdesign file from the Report form to the BIRT Report Designer:
2 Search for an out-of-the-box Web report that you want to edit, and select the report. For example, select the Print Incident report.
3 In the Instance ID field, copy the Instance ID value of the report you want to edit.
4 Open the Report Definition form, paste the Instance ID you copied in the previous step from the Report form into the Report Definition GUID field, and then click Search.
5 In the search result, right-click the attached .rptdesign file, and then click Save.
6 Save the .rptdesign file to a folder where you store reports.

Figure 1-15: Saving the report file in the Report Definition form

2 Edit the report using the BIRT Report Designer, and then save the edited file.

NOTE: For details about modifying reports with the BIRT Report Designer, such as adding a row or column, see the BIRT Report Designer help. Modifying reports with the BIRT Report Designer is discussed further in Examples for modifying reports with the BIRT Report Designer.

To import the .rptdesign file from the BIRT Report Designer to the Report form:
2 In the Report Definition File area, right-click the .rptdesign file, and click Add.

Figure 1-16: Adding a file in the Report form

4 In the Add Attachment window, click Choose File, then browse and select the .rptdesign file you edited in the BIRT Report Designer, and click OK.

Importing a Web report to a different AR System server

NOTE: BMC recommends the process in this section instead of the one detailed in Modifying an out-of-the-box Web report with the BIRT Report Designer.

Export the report to a .arx file, and then import the .arx file to the target AR System server using the BMC Remedy Data Import Tool. To make sure that duplicate report entries are not created in the target AR System, import the report file using the following fields as key fields: Report Set Name, Locale, and Report Type. This is a concern when a report has been modified and a fixed report is being moved.

Configuring how the report is filtered by the Category menu in the Report Console

Before you start, determine where in the Category menu hierarchy you want the report to appear. As shown in Figure 1-19, this affects how you complete the Category 1 (for example, Incident), Category 2 (for example, Open Incidents), and Category 3 (for example, Count by Assignee Group) fields in step 3 below. You can complete up to three Category fields, or you can create your own categories, using the Category fields.

Figure 1-19: Category menu in the Report Console

Deploying BIRT reports to the AR System server using the Report form

2 In the Report form, click New Request to create a record for the report.
3 Complete the required fields of the form and the Category fields. Enter unique, meaningful names for the Report Name and Report Set Name.
4 Attach the report definition file for the report to the request by doing the following:
a Click Add.
b Attach the .rptdesign file for the report.
c Click OK.

Figure 1-21: Attaching the .rptdesign file to a new request in the Report form

7 To open the Report Definition form, type its URL into your browser.
8 In the Report Definition form, paste the Instance ID you copied for the report into the Report Definition GUID field.

Figure 1-23: Report Definition GUID field in the Report Definition form

9 Click Search. The search results show the report design file for the report you are deploying.

Examples for modifying reports with the BIRT Report Designer

NOTE: The procedures in this section are also discussed in the BIRT Report Designer online help. This section discusses topics ranging from applying styles through using a stacked bar chart to compare different series of results.

Applying a style:
1 Open the report by choosing File > Open File and then selecting the file. The report opens in the Layout tab of the layout editor.
2 In the Data Explorer tab, right-click a data set under the Data Sets node, and drag it into the layout editor.

Figure 1-27: Data set element dragged into the layout editor in the BIRT Report Designer

After you drop the data set into the layout editor, the table is automatically created with the fields you selected for the data set.

Figure 1-28: Table for an unformatted report

3 In the bottom left corner of the table, click the Table icon. The icons for the other parts of the table appear.
4 Right-click the icon of the report element to which you want to apply a style, then choose Style > Apply Style, and select a style appropriate for the report element. For example, apply the bmcReportTheme:TableHeader theme to the Table-Header.

Figure 1-30: Selecting a style

Sorting report results:
1 Edit the data set: go to the Data Explorer tab and double-click a data set under the Data Sets node.
4 In the Available Fields box, select the fields you want to sort by in the report, and then click OK. For example, select the Status and Assignee Groups fields.
5 Preview the report to make sure the report is sorted by the parameter you added. If the report is not sorted as expected, review step 2 through step 4. In the following figure, the report is sorted by the Status parameter.

Figure 1-36: Example: report sorted by a Status parameter

Grouping report results:
1 To insert a group, right-click the table header row, and select Insert Group in the submenu.

Figure 1-37: Layout editor

2 In the New Group box, select the parameter by which you want to group the results, and then click OK. In Figure 1-38, the Status parameter is selected in the Group On list so that the results are grouped by status.
3 Preview the report to make sure the results are grouped by the parameter you configured in the New Group box. To preview the report (Save, Run, and Close), repeat the procedure in step 5 above. If the report is not grouped as expected, review step 2. In Figure 1-39, the report is grouped and sorted by the Status parameter. The first Assigned row has no data, then the next Assigned rows contain records. Then the first In Progress row has no data, then the following In Progress rows contain records.

Figure 1-39: Report sorted by a parameter

4 To apply a style to the group header, go back to the report file in the Layout tab.

Adding an aggregate row:
2 In the Aggregation Builder box, configure a report row.
3 Preview the report (Save, Run, Close). This report groups results by the parameter you configured in the New Group box, and displays the number of incidents in the Table Group Header row for incidents with Assigned and In Progress status.

Figure 1-44: Report displaying number of incidents and sorted by group

To add a column:
1 In the BIRT Report Designer, go to the Layout pane of a report.
2 Click a column above its table header to select the entire column in a table.

Figure 1-45: Highlighting a column in the Layout pane

3 Right-click in the selected column and choose Insert > Column to the Right (or Left).
4 Go to the Data Explorer tab and drag a data set field into the new column (for example, Data Explorer tab > Data Sets > HPD_Help_Desk > Incident Number).

To add a parameter:
1 In the BIRT Report Designer, go to the Outline tab and right-click Report Parameters. Then, in the Data Explorer tab, right-click the appropriate data set, click Edit, click Parameters, and click New.
10 In the New Parameter box, configure the new parameter by doing the following tasks:
a In the Name field, type the name of the new parameter in the data set.
NOTE: Select one of the following two options, because the Default Value field (see step c) can only be edited if you do not edit the Linked to Report Parameter field.
b In the Linked To Report Parameter field, select the name of the new parameter.
c In the Default Value field, edit the qualification query of the data set based on the report parameter that you linked to in step b. For example, enter the following in the Expression Builder: 'License Type' = [param:dsLicenseType], where dsLicenseType is the data set parameter which refers to the report parameter.
11 Preview the report (Save, Run, Close). The Parameter box of the report appears and specifies a fixed License Type parameter. The report shows People records having a fixed License Type.

Working with date range parameters:
1 Search for the report you want to edit, and save its .rptdesign file to a folder where you store reports. For details, see To export a .rptdesign file from the Report form to the BIRT Report Designer.
a In the Report form, search for a report with the words Date and Range in its name.
b Select a report. For example, the Expiring Contracts by Date Range report.
2 In the BIRT Report Designer, choose File > Open File, and open the .rptdesign file.
4 To edit a report parameter, right-click the report parameter you want to edit.
NOTE: For details on editing parameters, see the BIRT Report Designer online help. For example, if you want to edit the Start Date field, right-click Start Date.
5 If you want to see how a parameter is filtered for a data set, go to the Data Explorer tab, right-click the data set you want to examine, click Edit, then click Filters.
NOTE: For details on editing a filter condition, see the BIRT Report Designer online help.

Using subreports:

If a customer listing provides a customer ID, that customer ID can be input into a subreport that displays details about each customer. An ID value is often the common data between two data sets. If a parent report displays only the amount spent (or other customer data), a subreport can provide customer details (such as address). A subreport can show a combined parent report and subreport. Testing each subreport before building the next subreport can help minimize difficulties that can arise with subreports.

NOTE: The example in this section provides high-level instructions. Subreports are discussed in detail in the BIRT Report Designer online help.

To create a subreport:
1 Design the structure of a report and its subreports. This includes details of the required data sets and how they are related, and can be a simple relationship diagram of forms.
2 Create a data source and the required number of data sets. For an example, consider a parent "People" report that can have an added subreport, which shows roles attached to a particular people record. The relationship structure is: CTM:People (Person ID) > CTM:SupportGroupFuncRoleLookUp (Person ID). This structure has two data sets that are required to develop the subreport. The two required data sets are CTM:People and CTM:SupportGroupFuncRoleLookUp.
3 Create the required data sets as follows:
a Create a data set for the parent record (CTM:People).
b Create a data set for the child records (CTM:SupportGroupFuncRoleLookUp).
c In the Parameter tab of data set creation for the child record, create a parameter for the parent key based on which the data set will pull child records. For example, the key will be Person ID. Do not link the parameter to any report parameter.
4 Click Query and modify the query to pull child records having a key that matches the parent record.
5 Insert a second table element inside the new detail row of the table. The child table is driven by the child data set and its key parameter.
10 Preview the report (Save, Run, Close).

Creating drill down reports:

NOTE: Adding interactive features, such as hyperlinks, to charts is discussed in detail in the BIRT Report Designer online help.

Drill down reports are summary reports which can be drilled through to get related detail data in other reports at a granular level. When a user clicks a slice of the pie chart, a detailed report opens for that particular slice of data. The following scenario looks at the report development of the All Incidents by Status and Assigned Group drill down report in ITSM. For example, the example report has a high number of incidents based on their status. The incidents have a particular status and are assigned to groups. There are also records for the incidents. A particular record in an incident form can provide details of that incident. Therefore, the design of this drill down report needs the following levels:

Figure 1-57: Drill down report design example

In this example:
- A pie chart shows Incident Count categories based on Status
- A bar chart shows the Incident Count by Assigned Group for the Status slice selected in the pie chart
- A detailed incident report based on the Assigned Group bar selected in the bar chart

2 Create a data source and the data set required for the report (for example, HPD:Help Desk).
6 Insert groups in the table by right-clicking in the detail row of the table and selecting Insert Group. Insert the first group based on Status, and the second group based on Assigned Group.
7 Insert a chart in the table by right-clicking in a table header and choosing Insert > Chart. Insert a pie chart showing the number of Incidents categorized based on Incident Status.

Figure 1-58: Drill down report pie chart example

8 In the Status group header (the header of the first group) of the table, insert a bar chart, which will show the number of Incidents categorized based on Assigned Group.
9 In the second group header, choose Insert > Grid to display details of Incident records.
10 In the first group header, right-click and select Properties, select Bookmark, and set the bookmark expression to: row["Status"]

This starts the process of adding drill down functionality by hyperlinking to the table. When clicking on a particular slice of the Status pie chart, the bar chart should appear and show details based on the Assigned Group of that Status.

11 To set the bookmark with a hyperlink to the pie chart, right-click the pie chart created in step 7, and choose Format Chart > Series > Value Series > Interactivity, then click Add in the Series Interactivity box.

Figure 1-62: Editing the pie chart Value Series Interactivity

12 In the Hyperlink Editor box, type the name of the hyperlink in the Name field.
13 In the Hyperlink Options box, select Internal Bookmark, and then select the bookmark. This adds the functionality for clicking a particular slice of a pie chart, then navigating to a bar chart that shows the incident count of a selected Status based on the Assigned Group.
14 Create a hyperlink for a bar chart by repeating the process from step 10. Create a hidden report parameter and click OK; this new parameter will be used in a script to fetch the mid tier URL.

Adding a dynamic mid tier URL:
2 Select a data set. In this example, select the HPD:Help Desk data set.
3 Go to the Script tab of the report, and select the BeforeOpen event in the Script list. The script fetches the mid tier URL at runtime and sets it to the hidden parameter created earlier.

Figure 1-65: Script tab of the report

4 Go to the Layout tab of the report, and select Incident Number, which was inserted earlier.
6 In the Hyperlink Options box, select URL as the hyperlink type, and enter the URL expression for the detailed incident report.

Working with currency fields:

A currency field is exposed to the report as three columns:
<Field Name>.OBJECT
<Field Name>.VALUE
<Field Name>.TYPE
OBJECT is of type String and has the currency value as well as the currency type.

1 Edit the data set: go to the Data Explorer tab and double-click a data set under the Data Sets node.
3 Click New.
4 In the Column Name field, type a name for the computed column.
5 In the Data Type field, select Decimal.
6 In the Expression field, type the expression. For example:
if (row["Associated Cost.OBJECT"].toFunctionalValue(params["Currency Type"].value) ...
The function on the currency field object named toFunctionalValue requires as input the expected currency type, which is obtained from the user parameter. The function returns the Decimal currency value converted to the given currency type.
7 Click OK, then click Preview Results. Preview Results in Figure 1-70 shows the currency field converted into the .OBJECT, .VALUE, and .TYPE columns.

Merging data from multiple forms:
- Use subreports in the BIRT Report Designer. See Using subreports above.
- Create a join form to create a Web report. See the BMC Remedy Mid Tier Guide.

A grouped report example:

This report starts with the Assigned Groups field in the left column. You can merge table group header cells and add a label to the left of the data set field as shown.

Figure 1-71: Viewing the report in the Layout tab

3 In the Layout pane, add a column for the Incident Number to the right of the Assignee column. For details on adding a column, see To add a column above.

Figure 1-73: Adding a second column

4 Add another column for the Submit Date to the right of the Incident Number column. The report shows columns for the Assigned Groups, Incident Number, and Submit Date.

Figure 1-74: Report preview after adding two columns

6 In the Layout pane, right-click in the Table Group Header row, and choose Insert Group.
7 In the New Group box, select Status in the Group On list, then click OK.
8 Right-click the Status group, and select a Group Header style for it.

Figure 1-77: Selecting a style for the Status group

9 Apply a style to the other group headers and other rows in the table.
10 For formatting, merge the Status table group header cells and add a label to the left of the data set field.

Figure 1-78: Merging and labelling the Status table group header

11 Right-click in the Status group header row, and choose Insert > Grid.
12 In the Insert Grid box, set the grid size as 2 columns and 1 row.

Save and run the report to preview it. The report shows the results grouped by their Assigned Group (for example, A Test Support, ABC Group) and then grouped by their Status (for example, Assigned and In Progress).

Figure 1-80: Report preview showing grouping by Assigned Group and then Status

To take the grouping to another level, this example now adds the Assignee group within the Assigned Group group.

14 Right-click in the Status table group header, and choose Insert Group > Above.
15 In the New Group box, set the grid size as 2 columns and 1 row.
16 In the New Group box, select Assignee in the Group On list, then click OK.
17 For formatting, merge the Assigned table group header cells and add a label to the left of the data set field.

Figure 1-82: Merging and labelling the Assigned table group header

18 Right-click in the Assigned group header row, and choose Insert > Grid.
19 In the Insert Grid box, set the grid size as 2 columns and 1 row.
20 Save and run the report to preview it. The report shows the results grouped by their Assigned Group (for example, A SupGrp), then grouped by Assignee (for example, A1 User), and then grouped by their Status (for example, Assigned and In Progress).

Figure 1-83: Report preview showing grouping by Assigned Group, then grouped by Assignee, and then Status

Using a stacked bar chart to compare different series of results:
2 Configure the data set for the report by adding the Status, Assignee Groups, and any other fields you want to chart.
3 In the Layout pane, right-click in the report layout and choose Insert > Chart.
4 On the Select Chart Type tab of the New Chart box, select Bar as the Chart type.
b Under the Value (Y) Series, configure at least two series of data for Status values.
c Under the Value (X) Series, configure a series of data for Assignee Groups.
6 Preview the report (Save, Run, Close). If the bar chart needs adjustment, click the Binding tab in the Property Editor for the chart and review the data binding settings.
https://www.scribd.com/document/286302369/ARS-UsingBIRTEditor-7604
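An aside on the white paper above: once a .rptdesign file exists, it can also be executed programmatically with BIRT's Java runtime API, independent of the Designer and the AR System mid tier. This sketch is not from the white paper; the file names are my own, and it assumes the BIRT runtime jars are on the classpath:

import org.eclipse.birt.core.framework.Platform;
import org.eclipse.birt.report.engine.api.*;

public class RunReport {
    public static void main(String[] args) throws Exception {
        EngineConfig config = new EngineConfig();
        Platform.startup(config);
        try {
            IReportEngineFactory factory = (IReportEngineFactory)
                Platform.createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
            IReportEngine engine = factory.createReportEngine(config);

            // Open the same kind of .rptdesign file that would be attached to the Report form
            IReportRunnable design = engine.openReportDesign("IncidentReport.rptdesign");

            IRunAndRenderTask task = engine.createRunAndRenderTask(design);
            HTMLRenderOption options = new HTMLRenderOption();
            options.setOutputFileName("IncidentReport.html");
            options.setOutputFormat("html");
            task.setRenderOption(options);

            task.run();   // executes the data sets and renders the output
            task.close();
            engine.destroy();
        } finally {
            Platform.shutdown();
        }
    }
}

Rendering to other formats works the same way through the corresponding render options.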
Hi,

We are trying to use the stdcxx library in an environment where a POSIX environment is not available (and it is not a Win32 platform), as a continuation of the Rogue Wave library. We would like to know if stdcxx supports this.

We have found some occurrences of POSIX headers and symbols in file.cpp and iostream.cpp. Although the run-time behavior for iostreams can be changed by passing the unofficial extension _RWSTD_IOS_STDIO to openmode, the POSIX symbols are still present, so the compilation will fail.

There is _RWSTD_NO_NATIVE_IO, as the README file says:

o _RWSTD_NO_NATIVE_IO [lib, over]
  #defined to force file streams to use the facilities of libc stdio as opposed to the POSIX I/O interface.

and here:

#ifndef _RWSTD_NO_NATIVE_IO
#  define _RWSTD_INVALID_FILE     -1
#  define _RWSTD_INVALID_OPENMODE -1
#else
#  define _RWSTD_INVALID_FILE     (FILE*)0
#  define _RWSTD_INVALID_OPENMODE 0
#endif   // _RWSTD_NO_NATIVE_IO

but that does not fix the problem at all.

Thanks.
http://mail-archives.apache.org/mod_mbox/stdcxx-dev/201006.mbox/%[email protected]%3E
Introduction

We can start collecting data from various sources and at large scale without knowing why we're collecting it or what value it might have towards helping us to achieve our overall product development and business goals.

For some readers, alarm bells will be ringing. Why collect data if you don't know what its value is? After all, for the information manager, spending months adding structure by gathering business requirements, modelling the data and simply surfacing it to business users has worked for decades. Other readers may have read the title and thought, "Why add structure at all?". After all, for a data engineer the NoSQL revolution means that adding structure to data by modelling it for a relational data store is no longer a necessary requirement for most use cases.

While there is a place for both of these concerns, the reality is that neither the information manager nor the leading edge data engineer would be entirely right. Data will exist at varying levels of structure throughout our data ecosystems, and in order to make the right decisions to define each level, we have to understand the costs and benefits of adding structure.

The people costs of completely unstructured data ecosystems

While the infrastructure cost of storing data is relatively immaterial, the people costs can increase exponentially with both data volume and variety. With unstructured data, you will need more data engineers or new highly skilled analysts, compared to traditional relational data stores, to make use of the data at all. For every new business need, analysts and business/product managers will have to spend time understanding the supply of data and how they might change their decisions based on it. After all, structured data is sort of a menu of data, and unstructured data is more like a bulk-size grocery store. Even if the analysts know their way around the data, for each (new) business question data engineering skills will have to be deployed in a potentially complex data preparation process, which every analyst knows requires a specific skill set and is immensely time consuming, leaving less time for analysis and supporting decisions.

In short, the people cost implications for unstructured data are that:
- A complete absence of structure leaves little room for company-wide efficiency gains in the data preparation process; more time will be spent to answer each business question.
- There is less room for regular support to the business from analysts without more advanced data engineering skills, meaning more expensive analysts will need to be attracted and retained.

So a data ecosystem that relies completely on unstructured data would be a heavy burden on human resources indeed! This is true both in terms of hours spent in response to each business need, and in terms of skills you will need on staff in your business analyst teams. But what would happen if the data ecosystem relied on structured data only?

The time costs of completely structured data ecosystems

In the traditional information management funnel, we would start by collecting requirements, defining a target schema (i.e. what the structure of the data should look like) and developing pipelines from source to schema. Whether or not to introduce new data into decisions is hardly an isolated process. Time is spent preparing test data, evaluating it and forming consensus about whether it should find its way into routine decision making. All of this is usually done away from the immediate data needs of the business (i.e. not a live business case, but a historical or hypothetical one). In a fully structured data ecosystem, all data sets must be understood and modelled before they are introduced into business decision making. If that work is required before the data can be used, then no data department will be able to keep up with the growing supply of and demand for data by its business users. Data that is absolutely essential to informing decisions about the present and future course of our business could be stuck in an information management pipeline before it gets in the hands of a data analyst providing decision support, never mind before it finds its way into the decision-making and actions of a business manager!

So while a traditional information manager may prefer to bring data through the IM funnel which has worked so well for 20+ years, they will never succeed in making data a source of competitive advantage. The development time for structured data stores will be too long and will require too much distraction from data and business resources. So we want to design our data ecosystems to balance the agility and low cost-per-kb storage associated with NoSQL solutions and the efficiency gains and lower cost analysts associated with adding structure via the IM funnel.

The business benefits of structure

People have a natural tendency to organise around information, and as such, a business needs a vast amount of structure to its information in order to achieve that structure in its operations. For example, providing a common view of business performance helps everyone to organise around common objectives (i.e. less debating about what good performance is or how to measure it, and more debate about how to improve performance in specific areas). Structure such as key performance indicators, standardised user/business/product metrics and even generic consumer behaviour data stores has an essential role to play in helping a business communicate and stay on target.

Not only that, but structured data can also support communities of practice amongst your analysts. Since analysts have to operationalise each business concept to be scientific, establishing a common language and common operational definitions in your data is often essential for organisations with distributed data analysis teams. Analysts can then share insights and techniques, rather than debating how to translate common business concepts into data objects.

More generally, structure is a systematic way of collecting information about a specific real-life phenomenon or area. It's a precise set of need requirements for that phenomenon, and to enforce a structure is to say to any system trying to insert data about this phenomenon: you need to have the following information (i.e. data objects), which in turn has to have the following properties (i.e. data types).

Striking balance

So while modern day data engineers and scientists may prefer a world without structure, overall if an organisation wants to embrace data into its decision making it must also embrace structure onto its data. And so we must find balance: balance between the agility and storage (cost per kb) benefits associated with NoSQL solutions and the efficiency gains in data preparation, the lower cost of business analysts and, most importantly, the unavoidable need for structure that makes data a common source of value in an organisation.
To do this we must define a data management funnel, whose purpose is neither to proliferate unstructured data nor to lock data up in a lengthy evaluation and structuring exercise. So what would that look like?

Data management funnel

The root problem underlying the tension between structured and unstructured data is a problem data management shares with software management: the need to build tangible products quickly, to explore their value and learn about their strengths before making a significant investment. Data products could be any of the following:

- Unstructured data sets for data scientists
- Structured data sets for data analysts
- A data set of model outputs (e.g. price elasticity coefficients)
- A self-service dashboard for business users or non-specialist analysts
- A data-enabled product or platform capability (e.g. a trained neural network underlying a recommendation engine)

In the development of all of these products, more and more we see software management principles make their way into data science with great success. Using unstructured data and a minimum-viable-product style project, data teams can evaluate both the value of the data and the extent to which structure must be added in order for the value to be realised by non-specialist analysts. This is often done using live data to solve a real business challenge, unlike the IM funnel where historical or hypothetical needs are used to define the idealised schema. So what are the key questions we need to consider to adapt a product management style funnel into a data management style funnel?

Key questions to answer to define your DM funnel stages

- When does a data set get applied to a live business problem?
- When and how does the data itself get evaluated for its structural needs and value?
- When does a structured data set get added to the production supply of data in your organisation?

While you may want to define more stages, effectively the following three stages map onto the question areas that must be addressed.

Three stages of data maturity

1. Raw data: Massive volumes, entirely unstructured. Source: usually comes directly from a machine (e.g. server logs, API outputs). Access: there is limited access to this data, because of the skills needed to manipulate and analyse it or for privacy reasons. Typical audience: data engineers or data scientists using NoSQL queries or advanced processing in Python.

2. Analysis friendly data: Massive to medium volumes, partially structured or structured data. Source: data objects are calculated in an extract-load-transform style process from the raw data into tables or NoSQL collections, ideally organised by analysable units (e.g. a user-based collection with everything we know about users; see the sketch after this list). Access: highly skilled analysts and data scientists only, as the size and variety of formats in this data make it difficult to query.

3. Management reporting data: Entirely structured. Source: calculated from either the raw data or the analysis friendly data sets, and includes both the numbers that business/product managers use to run the business (e.g. KPIs) as well as any business drivers at an aggregate level (e.g. when companies report their financial growth they often report the drivers of price, volume, foreign exchange, acquisitions, etc). Access: as open as data security policy will allow. Typical audience: business intelligence analysts, or even self-service analysis by business/product managers.
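To make the second (analysis friendly) stage concrete, here is a minimal Ruby sketch of an extract-load-transform style step that rolls raw server-log events up into a user-based collection. The event format, field names and output shape are assumptions for illustration, not something prescribed by the article.

require "json"

# Assumed raw input: one JSON event per line of a server log
raw_log_lines = [
  '{"user_id": 42, "event": "page_view", "ts": "2019-01-01T10:00:00Z"}',
  '{"user_id": 42, "event": "purchase", "ts": "2019-01-01T10:05:00Z"}',
  '{"user_id": 7, "event": "page_view", "ts": "2019-01-01T11:00:00Z"}'
]

# Roll the raw events up into an analysis-friendly, user-based collection:
# everything we know about each user gathered in one place.
users = Hash.new { |h, k| h[k] = { event_count: 0, events: [] } }

raw_log_lines.each do |line|
  event = JSON.parse(line)
  user = users[event["user_id"]]
  user[:event_count] += 1
  user[:events] << { name: event["event"], ts: event["ts"] }
end

puts users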
At each stage of the funnel, more business logic and terminology are added, and of course more structure is given to the data. Data objects become more predictable in both their format and their range of values, and the types of analysis tools used to consume this data are increasingly hungry for relational database form. There are also governance and privacy concerns, as earlier stages are more likely than later stages to contain personally identifiable and sensitive information. At each stage of the funnel you should ask yourself who has access to the data and what operations they can perform on it (although that subject is not addressed here).

The first stage of the funnel is a land without law, mostly managed for cost per kb, usually only accessible by the central core of data capabilities. Systems and data engineers throw the data in there for exploration by others. Populating data at this stage is low effort, and as such this area can get messy quickly! The final stage is the polar opposite: a land of highly governed, highly structured data, widely available (with the exception of market-sensitive data in large publicly listed companies). Business/product managers are using data to measure the performance of their decisions and to gain a high-level understanding of the drivers, and as such, by the time the data has reached this stage, it is almost certainly narrowly defined by definition and variable type.

The middle layer, where raw unstructured data is converted into something of partial structure and potentially high value, is the most important and also the most confusing. It's this layer where balance is struck, communities of practice are served, and deep insight into the behaviour of your consumers and the reasons behind your performance can be discovered.

To go from the first (raw data) layer to the second (analysis friendly) layer is daunting indeed! A single piece of unstructured raw data can populate an entire analysis friendly schema. For example, an image posted on Twitter can be broken into data objects for three separate SQL tables:

- Tweet image pixels, with columns position_x, position_y, red, green, blue
- Tweet, with columns post_text, datetime, etc
- Tweet publisher, with columns account_name, followers, etc

Potentially all of that information is relevant to make use of the tweet data (a sketch of these tables as code follows at the end of this section). Rather than fleshing out the rest of the nuances of defining a funnel, the rest of this article will speak to those needing to make that first step away from the unstructured land without law and answer the question: where to begin with raw unstructured data? Solving the challenges in this layer means unlocking the potential to observe, to understand, to predict and to control various aspects of your business, especially consumer behaviour, and therefore it's worth spending time and effort on. Clearly the potential value of this process outweighs the resource requirements, or else data scientists would be out of a job! So how can we go from unstructured chaos to something of value to our organisations, and an informed opinion about what structure is needed for that value to be repeated on a regular basis?
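As a concrete sketch of the three-table tweet example above, here is what those tables might look like in Ruby, using ActiveRecord's schema DSL against an in-memory SQLite database. This is purely illustrative: the article names the tables and columns, but the types and the use of ActiveRecord are my assumptions.

require "active_record"

# In-memory database so the sketch is runnable without any setup
ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")

ActiveRecord::Schema.define do
  create_table :tweet_publishers do |t|
    t.string  :account_name
    t.integer :followers
  end

  create_table :tweets do |t|
    t.references :tweet_publisher # each tweet belongs to a publisher
    t.text       :post_text
    t.datetime   :posted_at
  end

  create_table :tweet_image_pixels do |t|
    t.references :tweet # pixels belong to a tweet's image
    t.integer    :position_x
    t.integer    :position_y
    t.integer    :red
    t.integer    :green
    t.integer    :blue
  end
end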
Working with unstructured data

The reality today is that it's extremely difficult to know what the value of a data set is until it's been analysed in the context of a real business challenge (past or present). Using data in a raw unstructured format and applying it to real business challenges can also be a more effective and efficient way to establish the ideal structure of the data, versus scoping out the data needs of business stakeholders and defining a data schema from there. So before doing any data modelling or developing data pipelines, try developing an MVDP (minimum viable data product) and providing insight into a real business problem or two. What are the common types of unstructured data suitable for an MVDP?

Common types of unstructured data

- Web application data formats (e.g. JSON, HSTORE)
- Document file formats (e.g. HTML, XML, PDFs)
- Free form text (e.g. text from within a PDF, the text from a tweet or blog post)
- Images (e.g. jpg, png)
- GIS vector and raster (although there are databases with these variable types, so I won't count them as unstructured here)

Most readers will be familiar with web application data and the process of parsing the data from that format into a structured SQL database (a small sketch appears at the end of this section). The ubiquitous need to transform JSON and other such inputs from web interfaces into fragmented entries in SQL databases underlies much of the motivation to develop NoSQL solutions. Document file formats such as HTML and XML can be parsed with tools like Nokogiri. Most of the time, nearly all of a PDF is free form text (except document scans, of course). So of the common types listed above, images and free form text pose an industry-scale problem for extracting value.

When it comes to text and image analysis, the realm of possible tools for analysis is seemingly endless. Rather than advocating for a specific tool, we're going to work through a specific process. The process involves:

- Structuring the business problem conceptually
- Implementing the full space of concepts as computable signals
- Using data science techniques to establish a mapping from the space of signals to a single structured concept that is relevant to the business problem

Why bother with a process at all; why not just reference existing tools for text and images? Of course, when analysing images, a perfect image recognition solution would be fantastic. In such an ideal solution, you could apply a method contains_couch and it would return a yes, a no, or even better a .png of just the couch, along with a hash of its properties, e.g. {colour: brown}. In that same ideal world, when analysing text, a perfect natural language processing solution would have a method is_sarcastic or contains_political_opinion that would return a boolean answer or, better yet, the text containing the sarcasm or a reduced form of the political opinion itself. And we'd be done! Unstructured data to highly structured and specific business-relevant variables. In my experience, we're not quite there yet. We may be close, but we're not quite there yet. Not everyone can install a Python package or Ruby gem, query the text or images with the methods described above and get a result robust enough to make decisions on. In the world between these ideal solutions and the raw data, anything that enhances your understanding, or moves you towards better products for your users or better decisions for your business/product managers, is helpful!
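As a small illustration of the first case above (parsing web application data into structured rows), here is a minimal Ruby sketch. The event shape and column names are invented for the example.

require "json"

# A hypothetical raw event, as a web application might emit it
raw_event = '{"user_id": 42, "action": "page_view", "meta": {"path": "/pricing"}}'

event = JSON.parse(raw_event)

# Flatten the nested document into a row-like hash, ready for insertion
# into a structured SQL table with columns user_id, action and path
row = {
  user_id: event["user_id"],
  action: event["action"],
  path: event.dig("meta", "path")
}

puts row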
Three places to look for structure conceptually in unstructured data

Part of what underlies the explosion of investment in data is not necessarily that there's so much of it, but instead that behind most data is a human behaviour mediated by some piece of technology. What we're really investing in is systems of observation into, and understanding of, human behaviour, particularly as it pertains to our business. We're going to use this basic idea to define our approach to adding structure to unstructured data. Three places to look for structure conceptually:

1. Meta data: Look for structure in the system of observation (i.e. the technology) that created the data. All technology operates in a highly structured way; did the system leave a record of structure (e.g. CSS tags in an HTML page, or meta data in server logs)?

2. Data created by the user: Look for structure in the bespoke user interactions inherent in the software that created the data. Most software has unique and distinct interactions that translate to relevant structures when taken in the context of that software. For example, a # doesn't mean anything in general, but on Twitter a hashtag is a functional mechanism that categorises the user-created content by topic. It's also used to tell jokes, e.g. #SorryNotSorry. Similarly, an @mention denotes an interaction between two people via their Twitter accounts. Are there unique features in the content created by users on your site or product? Do they correspond to distinct behaviours or concepts? A record of which of those features were used can provide a huge amount of structure to the behavioural state of the person at that time (e.g. location on the site, intended action on the site, interaction with others, etc).

3. Data about the users themselves: This is sometimes called "properties of the root", and generally speaking refers to the object external to your system that causes a change in the state of the system and at the same time creates the data. In the case of software users, the root is the person who interacted with the software.

Once we have established a structured concept (in one of the three areas above or elsewhere), how can we implement this structure of the concept in the data?

Steps 2 and 3: adding structure computationally

First, we define signals. Signals are quantitative properties of the concept that help us to identify the presence and nature of the concept. They are not the concept itself. For example, if all couches were brown, then observing any shade of brown in an image would be one property that signals the presence of a couch.

Second, we use the signals to distinguish between the different structured values we've defined for our concept, by establishing the relationship between the signals we've computed and the structure of our concept. The relationship could be deterministic (e.g. contains_couch: TRUE or FALSE) or probabilistic (e.g. contains_couch: 0.9). When the relationship between signals and concept is probabilistic, it's usually because there isn't enough information in the signals to define the structure of the concept uniquely. An image that contains the colour brown is a signal of an image with a brown couch, but there will be a chance that it contains a couch and a chance that it does not. For that reason, practitioners will define as many independent signals as possible, given the unstructured data that they're working with.
In doing so they must strike a balance between three competing factors: the number of signals, the independence of those signals, and the overall computational cost of computing the signals and the relationship (independence is particularly important, since two signals that contribute the same information, e.g. % of image dark_brown and % of image light_brown, won't improve your ability to identify your concept). To begin with, any quantitative property that corresponds with the concept could be helpful. If a signal is always present when the concept is present and never present when the concept is absent, then it would be a perfect signal of concept presence. If, furthermore, variations in the signal always corresponded to variations in the concept (e.g. different shades of brown for different colours of couch), then that signal can uniquely define your concept of the couch! In practice, an individual signal rarely reveals the concept as we wish to define it. Instead, it is very common that a large number of signals are used to define a single concept. Hundreds of signals are used in search engines to produce the results for a search, even though you only type a handful of words. Defining signals is as much an art and a leap of creative imagination as it is a science.

On the other hand, once signals are defined, establishing how these signals distinguish between different states of your concept is more of a (data) science. To make it through this final step, the first consideration is: what data structure will be best to enforce on the data representing your concept? For example, a binary concept such as contains_a_couch or is_political would be a method that takes the signals as inputs and outputs a boolean response. On the other hand, a concept such as text_topic_list would be a method that takes the signals and outputs an array of topics (these could be topic ids, if you have a normalised master table for the topics themselves). With a set of signals and a target data structure for your concept, machine learning is high on the list of go-to tools to establish your mapping, but it is not always necessary. In the example below we use a neural-network style classification method based on numerical signals to demonstrate how this could be done.

Hello, World!

In my grad school courses on machine learning and AI, chairs and couches were always the textbook examples. This is because a mundane concept like a chair turns out to be simple to structure conceptually, and therefore nice to explore mathematically, but hugely complex to implement in practice. In our example, the situation will be the opposite: the concepts would be incredibly complex to structure in practice, but they are idealised in such a way that they become simple to structure using simple methods.

For our example, we will be analysing the social media behaviour of a character called World. World is not a real person, and the examples are heavily stylised to demonstrate the concepts we've discussed. The unstructured data will be a social media profile webpage in HTML format. The concept we're going to use to add structure to this profile page is its Selfie-ish-ness, defined as a floating-point decimal representing the percentage of the images on World's profile page that are selfies. I'll be using Ruby to demonstrate the methods with code, but they could be written in Python or any other language that can process images.

Step 1 - Creating a structured data set based on web page meta data

WARNING!
Web scraping is frowned upon and even violates the terms and conditions of the site in many cases. Before scraping any web pages, you should check whether there's an API that has the information; if not, read the terms and conditions to see if you can scrape the data and what the data can be used for if you do.

The first place to look for structure is in the technology that created the data. Web pages are highly structured, especially JavaScript and HTML, and a great deal of structure can come directly from there. In our example, we'll be using the CSS tags in the .html page to parse out the "posts", "post text" and "post image", all of which we'll need to define how Selfie-ish the page is. To do this we'll be using Nokogiri to parse the HTML, with the following method:

require 'nokogiri'

def generate_structured_post_array(web_page_html)
  post_array = []
  # CSS selector for each post, and the attribute (on this fictional page)
  # that holds the image URL
  css_selectors = { posts: ".post", post_img_url: "img_src" }
  @profile_html = Nokogiri::HTML(web_page_html)
  @profile_html.css(css_selectors[:posts]).each do |post|
    post_image_url = post[css_selectors[:post_img_url]] # you could open and save the file to a server
    post_text = post.text.gsub(/\s+/, ' ') # get rid of extra lines and spaces
    post_data = {
      post_image_url: post_image_url,
      post_text: post_text
    }
    post_array << post_data
  end
  return post_array
end

Already we've gone from a web page to an array of post hashes (JSON-like records) containing the relevant information from the account. This is enough structure to begin extracting relevant structured information; for example, the post_array length corresponds to the number of posts. HTML and XML files alike are ideal "unstructured" formats, because the creators of these formats (machines or individuals) are using the syntax to add structure to information so that the machine reading it can use this structure to visualise the information or perform other operations. In other words, the syntax is a contract between the creator and the consumer, and every contract requires a certain amount of predictable structure. It's this structure that often contains the valuable meta data we need for our analysis.

Step 2 - Using the data generated by the user

So far, we've only established how many posts the account has, and for each post what the post_image_url and post_text are. Images and free form text are traditionally unstructured formats as well, so for each post we need to add even more structure before we can establish whether the post contains a selfie. Luckily for us, World is a methodical and unique-looking individual who has created posts where the signals for a selfie are pretty clear. For example, all the selfies have World in them, and furthermore they are hashtagged with some sort of #selfie variant (e.g. #DangerousSelfie, #Selfie). However, it's not as easy as just looking at the hashtags because, as you can see in post 2, World also uses tags like #SelfieGoals on posts that aren't selfies. It's also true that World posts photos of himself that aren't selfies. So we'll need to use both signals to establish whether or not each post is a selfie.

When processing free text, it's worth trying the simplest possible analysis you can think of, as this can go a long way towards solving your problem and can provide valuable insights into what might be your next (slightly) more complicated step. Never go straight to the most advanced solution (e.g. natural language processing) unless you are highly comfortable with the data and the subject matter area. If not, you can build comfort with both by analysing a few simple properties:
- Binary single-pass variables -> establishing whether or not the text possesses a certain property. For example, does it include the word selfie? Is it more than 100 characters? And so on.
- Integer variables -> any property of the text that has a number, e.g. the number of "#" characters, the character count, the word count (i.e. the number of spaces).
- Text array -> breaking the full free form text into an array of relevant words (or, more commonly, n-grams). There are popular open source packages that separate out the various Twitter-style tokens (e.g. hashtags).

In our case, we'll need to separate out all the hashtags and then calculate the binary single-pass variable contains_selfie, as follows:

def classification_topics_array(post_text)
  # Drop the text before the first '#', then take the first word after each '#'
  return post_text.split("#").drop(1).map { |h| h.split(" ")[0] }
end

def classification_possible_selfie(topics_array)
  keyword = "selfie"
  lower_threshold = 1
  keyword_count = 0
  topics_array.each do |t|
    keyword_count += 1 if t.downcase.include?(keyword)
  end
  # At least one selfie-variant hashtag makes the post a possible selfie
  return (keyword_count >= lower_threshold)
end

Next, it's time to establish which images actually contain World. When processing images, once again, it's worth trying the simplest possible analysis, although in this case that may not get you anywhere. Unlike the previous step, building your own image processing capability that will suffice for your purpose may be difficult or impossible given your time and other resource constraints. Still, it's worth going through the thinking here!

Step 3 - Using what we know about real life

Luckily, World is a unique individual indeed. Taking a step back to think about the selfie concept in real life, we ask ourselves: what quantitative properties about World uniquely define them visually? Well, World is made up of oceans, with a very unique blue, and land, with a very unique green. The scale of World in the photo is never the same, but the relative composition of oceans and land will be. Eureka! We're in luck. The quantifiable attribute that uniquely defines World is the ratio of World Blue to World Green. Even here, there will be some variation. When World is laughing, the eyes on World's face become smaller and more of the blue and green is revealed. When World is scared, eyelids come down and more blue and green are revealed. Neither case will preserve the ratio exactly, so instead of defining World by a specific ratio value, World will instead be defined by a range (i.e. when the ratio of blue to green is within the tolerances, then we can say the photo contains World).

To implement this, I'm going to use the open source RMagick package (the Magick module in the code below). There are helpful tutorials to get you started; the method that produces the hex histogram is left as an exercise (a possible sketch is given below):

def compute_colour_ratio_signal(hex_distribution, top_colour, bottom_colour)
  bottom_colour_volume = 1 # avoids division by zero when the colour is absent
  top_colour_volume = 0
  hex_distribution.each do |pixel_volume, hex_code|
    case hex_code
    when top_colour
      top_colour_volume = pixel_volume
    when bottom_colour
      bottom_colour_volume = pixel_volume
    end
  end
  # Floating-point division, so the ratio isn't truncated to an integer
  return top_colour_volume.to_f / bottom_colour_volume
end

Once the signal is defined (a ratio), it's quite common to use machine learning to establish the thresholds upon which the classification can be made. In this case, I'm going to eyeball it.
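First, the hex histogram helper that was left as an exercise above. Here is one possible sketch using RMagick's quantize method, as the hint suggests; the palette size, the hex formatting arguments and the output shape are my assumptions, not the author's implementation.

require 'rmagick'

# One possible implementation of the exercise: reduce the image to a small
# palette, then count how many pixels map to each hex colour code.
def generate_hex_distribution(rmagick_image)
  # Quantize to at most 16 colours so near-identical shades collapse together
  quantized = rmagick_image.quantize(16)
  distribution = Hash.new(0)
  quantized.each_pixel do |pixel, _col, _row|
    hex_code = pixel.to_color(Magick::AllCompliance, false, 8, true) # e.g. "#1683FB"
    distribution[hex_code] += 1
  end
  # Return [pixel_volume, hex_code] pairs, as consumed by compute_colour_ratio_signal
  distribution.map { |hex, volume| [volume, hex] }
end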
Putting these together gives us the following classification method:

def classification_contains_world(image_url)
  world_blue = "#1683FB"
  world_green = "#2AFD85"
  upper_threshold = 2.3
  lower_threshold = 1.8
  @rmagick_image_object = Magick::Image.read(image_url).first
  @hex_distribution = generate_hex_distribution(@rmagick_image_object) # hint: use the quantize method and loop through all the pixels
  colour_ratio = compute_colour_ratio_signal(@hex_distribution, world_blue, world_green)
  return (colour_ratio < upper_threshold && colour_ratio > lower_threshold)
end

Running the methods against all the images gives the following output:

- "image 1 had ratio 1.9686841390724195 contains_world and is possible_selfie"
- "image 2 had ratio 0 DID NOT contains_world and is possible_selfie"
- "image 3 had ratio 2.1200486683790727 contains_world and is possible_selfie"
- "image 4 had ratio 1.8354139761802266 but IS NOT possible_selfie"
- "image 5 had ratio 2.215403012087131 but IS NOT possible_selfie"
- "image 6 had ratio 2.256290438533429 but IS NOT possible_selfie"

In our network, only when both nodes fire as true do we classify the image as a selfie. This only happens for images 1 and 3, and so the data representing our unstructured concept of "selfie-ish-ness" for World is 33% (2/6)!

About the Author

Rishi Nalin Kumar is a co-founder and chief data scientist at eBench.com, a data-led digital marketing advisory and SaaS start-up. He's also a chapter lead at DataKind UK, a data philanthropy charity that partners with third-sector organisations to use data science in the service of humanity. In the past, he's worked with companies such as Unilever, The Guardian and Spotify, both to develop data capabilities, covering a variety of domains from product development to human rights, and to develop their data strategy, helping those organisations to introduce data science and data scientists into their everyday decision making.
https://www.infoq.com/articles/raw-data-to-data-science/
CC-MAIN-2019-35
en
refinedweb
The analyzer detected a potential error: a condition is always true or false. Such conditions do not always signal an error, but you should still review such code fragments.

Consider a code sample:

LRESULT CALLBACK GridProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
  ...
  if (wParam < 0) {
    BGHS[SelfIndex].rows = 0;
  } else {
    BGHS[SelfIndex].rows = MAX_ROWS;
  }
  ...
}

The "BGHS[SelfIndex].rows = 0;" branch here will never be executed, because the wParam variable has the unsigned type WPARAM, which is defined as "typedef UINT_PTR WPARAM". Either this code contains a logical error, or we may reduce it to one line: "BGHS[SelfIndex].rows = MAX_ROWS;".

Now let's examine a code sample which is correct yet potentially dangerous and contains a meaningless comparison:

unsigned int a = _ttoi(LPCTSTR(str1));
if ((0 > a) || (a > 255)) {
  return(FALSE);
}

The programmer wanted to implement the following algorithm:
1) Convert a string into a number.
2) If the number lies outside the range [0..255], return FALSE.

The error here is in using the 'unsigned' type. If the _ttoi function returns a negative value, it will turn into a large positive value. For instance, the value "-3" will become 4294967293. The comparison '0 > a' will always evaluate to false. The program works correctly only because the range of values [0..255] is checked by the 'a > 255' condition. The analyzer will generate the following warning for this code fragment: "V547 Expression '0 > a' is always false. Unsigned type value is never < 0." We should correct this fragment as follows:

int a = _ttoi(LPCTSTR(str1));
if ((0 > a) || (a > 255)) {
  return(FALSE);
}

Let's consider one special case. The analyzer generates the warning "V547 Expression 's == "Abcd"' is always false. To compare strings you should use strcmp() function." for this code:

const char *s = "Abcd";

void Test() {
  if (s == "Abcd")
    cout << "TRUE" << endl;
  else
    cout << "FALSE" << endl;
}

But it is not quite true. This code can still print "TRUE" when the 's' variable and the Test() function are defined in one module. The compiler does not create many copies of identical constant strings but reuses one string. As a result, the code sometimes seems quite operable. However, you must understand that this code is very bad, and you should use special functions for comparison.

Another example:

if (lpszHelpFile != 0) {
  pwzHelpFile = ((_lpa_ex = lpszHelpFile) == 0) ? 0 : Foo(lpszHelpFile);
  ...
}

This code works quite correctly, but it is too tangled. The "((_lpa_ex = lpszHelpFile) == 0)" condition is always false, as the lpszHelpFile pointer is never equal to zero at that point. This code is difficult to read and should be rewritten. This is the simplified code:

if (lpszHelpFile != 0) {
  _lpa_ex = lpszHelpFile;
  pwzHelpFile = Foo(lpszHelpFile);
  ...
}

Another example:

SOCKET csd;
csd = accept(nsd, (struct sockaddr *) &sa_client, &clen);
if (csd < 0)
  ....

The accept function in Visual Studio header files returns a value of the unsigned SOCKET type. That's why the check 'csd < 0' is invalid, since its result is always false. The returned values must be explicitly compared to specific constants, for instance, SOCKET_ERROR:

if (csd == SOCKET_ERROR)

The analyzer does not warn you about every condition that is always false or true; it diagnoses only those cases when an error is highly probable. Let's consider some samples that the analyzer considers absolutely correct:

// 1) Eternal loop
while (true) {
...
}

// 2) Macro expanded in the Release version
// MY_DEBUG_LOG("X=", x);
0 && ("X=", x);

// 3) assert(false)
if (error) {
  assert(false);
  return -1;
}

Note. Every now and then, we get emails where users tell us they don't understand the V547 diagnostic. Let's make things clear. This is the typical scenario described in those emails:

for (int i = 0; i <= 1; i++) {
  if (i == 0)
    A();
  else if (i == 1) // V547
    B();
}

The analyzer issues the warning "Expression 'i == 1' is always true", but it's not actually true. The value of the variable can be not only one but also zero. Perhaps you should fix the diagnostic.

Explanation. The warning doesn't say that the value of the 'i' variable is always 1. It says that 'i' equals 1 on a particular line, and points that line out. When the check 'else if (i == 1)' is executed, it is known for sure that the 'i' variable equals 1: the loop guarantees i <= 1, and the preceding 'if (i == 0)' check has already failed. There are no other options. This code is of course not necessarily faulty, but it is definitely worth reviewing. As you can see, the warning for this code is absolutely legal. If you encounter a warning like that, there are two ways to deal with it. The first is to simplify the code:

for (int i = 0; i <= 1; i++) {
  if (i == 0)
    A();
  else
    B();
}

The second: if it's an unnecessary check, but you still don't want to change the code, use one of the false positive suppression options (a sketch appears at the end of this page).

Let's take a look at another example, this time related to enumeration types.

enum state_t { STATE_A = 0, STATE_B = 1 };

state_t GetState() {
  if (someFailure)
    return (state_t)-1;
  return STATE_A;
}

state_t state = GetState();
if (state == STATE_A) // <= V547

The author intended to return -1 if something went wrong while running the 'GetState' function. The analyzer issues the "V547 CWE-571 Expression 'state == SOME_STATE' is always true" warning here. This may seem to be a false positive, since we cannot predict the function's return value. However, the analyzer actually behaves this way because of undefined behavior in the code. No named constant with the value -1 is defined inside 'state_t', and the 'return (state_t)-1' statement can actually return any value due to undefined behavior. By the way, in this example the analyzer warns about the undefined behavior as well, by issuing the "V1016 The value '-1' is out of range of enum values. This causes unspecified or undefined behavior" warning on the 'return (state_t)-1' line. Therefore, since 'return (state_t)-1;' is in fact undefined behavior, the analyzer does not consider -1 a possible return value of the function. From the analyzer's perspective, the 'GetState' function can return only 'STATE_A'. This is the cause of the V547 warning. In order to correct the issue, we should add a constant indicating an erroneous result to the enumeration:

enum state_t { STATE_ERROR = -1, STATE_A = 0, STATE_B = 1 };

state_t GetState() {
  if (someFailure)
    return STATE_ERROR;
  return STATE_A;
}

Both the V547 and V1016 warnings will now be resolved.
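A note on the suppression option mentioned above: PVS-Studio can mark an individual line as a false positive with a special comment. The marker syntax below is from recollection of the tool's documentation, so verify it against your version:

for (int i = 0; i <= 1; i++) {
  if (i == 0)
    A();
  else if (i == 1) //-V547  suppresses the V547 warning on this line
    B();
}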
https://www.viva64.com/en/w/V547/
CC-MAIN-2019-35
en
refinedweb
I have exactly the same problem that you have. FOP is essential to the project of my company. But I don't think going on with C1 is a good solution. As soon as you want to generate "large" pdf output, C1 gets performance problems, especially if you combine it (as I do) with XSLT. That is why we decided to test C2 in terms of stability, usability and "migration". As I am in the middle of getting into C2, I cannot yet tell you if it's still too early or not. But I don't think that the problems "performance" and "fop support" can be easily fixed in C1.

Ulrich Mayring wrote:
>
> Alexander Weinmann wrote:
> >
> > I think that the real problem is inside the Xalan version
> > that comes with Cocoon. Xalan uses DOM Level 1 (no namespaces)
> > and fop0.14 relies on the dom nodes created with DOM Level2.
> > This means that I do not see how to get XSLT+FOP working
> > with Cocoon1.8 and FOP 0.14.
> > We need C2 for that.
>
> C2 is not even in Beta yet, so "cocoon" currently is defined as C1,
> right? Does this mean that cocoon officially will not support newer
> versions of fop anymore?
>
> If that is the case it would be nice to have an official announcement of
> some sort, so I can take that to my employer and tell him we need to
> migrate to C2 right now, if we want to make use of the new fop features.
> Migration is a big issue for us, we have much C1 stuff to port and I'd
> rather migrate, when C2 is ready. On the other hand fop support is
> mission-critical here :)
>
> cheers,
>
> Ulrich
>
> --
> Ulrich Mayring
> DENIC eG, Systementwicklung
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]

--
Alexander Weinmann | Web Developer
BCT Technology AG | D-77731 Willstätt/Germany |
http://mail-archives.us.apache.org/mod_mbox/cocoon-users/200011.mbox/%[email protected]%3E
CC-MAIN-2019-35
en
refinedweb
Determine the age of the tree by the number of circles

Hello! Can I determine the age of the tree by the number of circles?

Answers

Merry Christmas. Here's your present... The ring count that I get is 70. Make sure to mark my answer as correct, if you find that it works for you. :)

Here is a cropped version of your input image, which contains only the necessary data:

Here is the C++ code to count the rings:

#include <opencv2/opencv.hpp>
using namespace cv;

#pragma comment(lib, "opencv_world340.lib")

#include <iostream>
using namespace std;

int main(void)
{
    Mat frame = imread("rings_slice.png");

    if (frame.empty())
    {
        cout << "Error loading image file" << endl;
        return -1;
    }

    cvtColor(frame, frame, CV_BGR2GRAY);
    adaptiveThreshold(frame, frame, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 11, 2);

    // Use centre column
    int column_index = frame.cols / 2;

    int ring_count = 0;

    // Start with the second row
    for (int i = 1; i < frame.rows; i++)
    {
        // If this pixel is white and the previous pixel is black
        if (255 == frame.at<unsigned char>(i, column_index) && 0 == frame.at<unsigned char>(i - 1, column_index))
            ring_count++;
    }

    cout << ring_count << endl;

    return 1;
}

Alternatively, here is the Python code:

import numpy as np
import cv2
import sys

frame = cv2.imread("rings_slice.png")

if frame is None:
    print('Error loading image')
    exit()

frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
frame = cv2.adaptiveThreshold(frame, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)

rows = frame.shape[0]
cols = frame.shape[1]

# Use centre column
column_index = cols / 2

ring_count = 0

# Start with the second row
for i in range(1, rows):
    # If this pixel is white and the previous pixel is black
    if 255 == frame[i, column_index] and 0 == frame[i - 1, column_index]:
        ring_count += 1

print ring_count

I wrote C++ code that scans for rings in both horizontal and vertical mode... the code selects the highest ring count (a sketch of the idea appears below): ...

Making the same changes to the Python code should be fairly easy. Anyway, the code is picking up too many rings. You'll have to experiment with the threshold()/adaptiveThreshold() parameters for each wood type, I'd think. Have fun!
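Here is a minimal Python sketch of that horizontal-plus-vertical idea, adapted from the Python code above. Taking the maximum of a centre-column scan and a centre-row scan is one reading of the description, not the original code:

import cv2

frame = cv2.imread("rings_slice.png")  # same cropped image as above
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
frame = cv2.adaptiveThreshold(frame, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 11, 2)

rows, cols = frame.shape

def count_transitions(values):
    # Count black-to-white transitions along a line of pixels
    return sum(1 for i in range(1, len(values))
               if values[i] == 255 and values[i - 1] == 0)

# Vertical mode: scan the centre column; horizontal mode: scan the centre row
vertical_count = count_transitions([frame[i, cols // 2] for i in range(rows)])
horizontal_count = count_transitions([frame[rows // 2, j] for j in range(cols)])

print(max(vertical_count, horizontal_count))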
https://answers.opencv.org/question/181038/determine-the-age-of-the-tree-by-the-number-of-circles/?answer=181057
CC-MAIN-2019-35
en
refinedweb
Ownership and User-Schema Separation in SQL Server

A core concept of SQL Server security is that owners of objects have irrevocable permissions to administer them. You cannot remove privileges from an object owner, and you cannot drop users from a database if they own objects in it.

User-Schema Separation

User-schema separation allows for more flexibility in managing database object permissions. A schema is a named container for database objects, which allows you to group objects into separate namespaces. For example, the AdventureWorks sample database contains schemas for Production, Sales, and HumanResources.

The four-part naming syntax for referring to objects specifies the schema name:

Server.Database.DatabaseSchema.DatabaseObject

Schema Owners and Permissions

Schemas can be owned by any database principal, and a single principal can own multiple schemas. You can apply security rules to a schema, which are inherited by all objects in the schema. Once you set up access permissions for a schema, those permissions are automatically applied as new objects are added to the schema. Users can be assigned a default schema, and multiple database users can share the same schema.

By default, when developers create objects in a schema, the objects are owned by the security principal that owns the schema, not the developer. Object ownership can be transferred with the ALTER AUTHORIZATION Transact-SQL statement (a short example appears at the end of this topic). A schema can also contain objects that are owned by different users and have more granular permissions than those assigned to the schema, although this is not recommended because it adds complexity to managing permissions. Objects can be moved between schemas, and schema ownership can be transferred between principals. Database users can be dropped without affecting schemas.

Built-In Schemas

SQL Server ships with ten pre-defined schemas that have the same names as the built-in database users and roles. These exist mainly for backward compatibility. You can drop the schemas that have the same names as the fixed database roles if you do not need them. You cannot drop the following schemas:

- dbo
- guest
- sys
- INFORMATION_SCHEMA

If you drop them from the model database, they will not appear in new databases.

Note: The sys and INFORMATION_SCHEMA schemas are reserved for system objects. You cannot create objects in these schemas and you cannot drop them.

The dbo Schema

The dbo schema is the default schema for a newly created database. The dbo schema is owned by the dbo user account. By default, users created with the CREATE USER Transact-SQL command have dbo as their default schema. Users who are assigned the dbo schema do not inherit the permissions of the dbo user account. No permissions are inherited from a schema by users; schema permissions are inherited by the database objects contained in the schema.

Note: When database objects are referenced by using a one-part name, SQL Server first looks in the user's default schema. If the object is not found there, SQL Server looks next in the dbo schema. If the object is not in the dbo schema, an error is returned.
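To make these mechanics concrete, here is a minimal Transact-SQL sketch covering default schemas, schema ownership, ownership transfer, and moving an object between schemas. The principal, schema, and table names are invented for illustration:

-- Create a user whose default schema is Sales (names are illustrative)
CREATE USER SalesAnalyst FOR LOGIN SalesAnalyst
    WITH DEFAULT_SCHEMA = Sales;

-- Create a schema owned by that principal
CREATE SCHEMA Reporting AUTHORIZATION SalesAnalyst;

-- Transfer ownership of the schema to another principal
ALTER AUTHORIZATION ON SCHEMA::Reporting TO dbo;

-- Move an object from one schema to another
ALTER SCHEMA Reporting TRANSFER Sales.MonthlyTotals;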
https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/ownership-and-user-schema-separation-in-sql-server
CC-MAIN-2019-35
en
refinedweb
Introducing Source Control

Visual Studio supports source control using the Visual Studio Integration Protocol (VSIP) layer in its Integrated Development Environment (IDE). VSIP can host a variety of source control packages, usually implemented as plug-ins written to the appropriate protocols. An example of a source control plug-in is the SourceSafe LAN plug-in supported by Visual SourceSafe. For details of this plug-in, see the Visual SourceSafe Help.

Note: Visual Studio refers to source control packages as plug-ins, although they can be implemented as other types of software modules.

Visual Studio source control is simply an environment for third-party source control plug-ins, so its functionality is only activated by the installation of a plug-in. To use a third-party source control plug-in, you must usually install the third-party application and/or the source control plug-in(s) on the client and server machines for your site. Once they have been installed as indicated by the third-party instructions, their functionality will be available through Visual Studio. The operations that are enabled vary, depending on the source control plug-in. You must see your third-party documentation for package-specific operational details.

See "Overview (Source Control)" in the Visual Studio Help for design details of source control in Visual Studio. This section of the Help also provides all the information you will need for developing a third-party source control package that is compatible with Visual Studio.

Basics of Source Control Support in Visual Studio

Basic source control support in Visual Studio includes setting of source control plug-in and environment options, plug-in switching, database access, and versioning for and manipulation of Visual Studio projects, solutions, and files and associated metadata. Source control in Visual Studio also enforces protocols for the control of database accesses, for example the Lock-Modify-Unlock work style, in which a user who wants to modify a file must check it out exclusively. It is important to remember that you should use the source control mechanisms in Visual Studio to interact with a source control plug-in. Do not use other client applications presented by the third party furnishing the plug-in, for example Visual SourceSafe Explorer. Proper use of the source control mechanisms in Visual Studio ensures that only correct files are added to source control and that your Visual Studio project and solution files are updated with the correct plug-in-specific details.

Source Control Plug-in Configuration and Switching

Visual Studio source control supports configuration and plug-in switching through the Source Control entry in the Options dialog box. This entry is accessible by selecting Options on the Tools menu of Visual Studio. You will use the Options dialog box to select the plug-in that you want to use for source control, and to set up environment options for that plug-in. Before you and your team can take advantage of source control features in the Visual Studio IDE, you must:

- Determine if any source control plug-ins are available.
- If the source control plug-in that you want to use is not installed on your computer, install the third-party product that supports the plug-in and restart Visual Studio to register it.
- Create a source control database according to the functionality of the particular plug-in.
- Send a link to the database location to all team members.
Database Access

Basic database access commands, for example Check Out and Add to Source Control, are available on the File menu of Visual Studio. However, these commands are only activated after you have chosen the source control plug-in that you want to use. When you use one of the basic database access commands, the plug-in that you have chosen invokes the corresponding third-party functionality and/or environment to complete the associated operation. Some access operations are active with only the plug-in selected, while other operations are only available when you have also selected a Visual Studio project, solution, or file in Visual Studio's Solution Explorer. For example, you can use an Add to Source Control command once you have chosen a plug-in. However, to use a Check In command, you must have an item selected in Solution Explorer.

Namespace Change Propagation

Visual Studio source control supports namespace change propagation in source control plug-ins. Change propagation applies to delete, rename, and move operations. When you request an operation for which change propagation is enabled, the source control plug-in changes your working copy of the source-controlled item, the master copy in the database, and the copies of other users when you check in the item and the other users retrieve it.

How Source Control Handles Solutions and Projects

When you add a solution or project to source control, the first thing a source control plug-in must do is identify a unified root for the item being added. This root is a path to the parent directory for all working folders and files making up the solution or project. A unified root generally maps to a physical path on disk. However, if a solution contains files or projects that reside on more than one disk drive, there is no physical folder to which a unified root can map. A solution can span drives, but a source control unified root cannot. To support this situation, Visual Studio source control supports the concept of a super-unified root. This type of root is a virtual container beneath which all projects and files in a source-controlled solution are located.

When you add a solution using a source control plug-in with advanced capabilities, the plug-in creates an empty solution root folder in the database. This folder will contain all items in a source-controlled solution. By default, this folder is <solutionname>.root.

Note: When you add a single project to source control, a .root folder is not created.

Use of the solution root provides the following benefits:

- Fewer prompts. The solution root minimizes the potential number of source control bindings for a solution and thus minimizes user prompts when you add a solution to source control and perform other tasks.
- Project encapsulation. The solution root ensures that all projects in a solution can be readily identified as belonging together, even when one or more of the projects reside on different partitions or computers.

You can disable the creation of the <solutionname>.root folder, but doing so is not recommended. For more information, see How to: Disable Creation of the <solutionname>.root Folder.

Solutions in Visual Studio are either well-formed or not. A well-formed solution is one for which the hierarchical structure on disk matches its structure in Solution Explorer. All projects in a well-formed solution are stored in subfolders of the solution folder on disk.
If the solution is well-formed when you add it to source control, the source control plug-in creates a folder beneath the *.root folder to contain the master copies of the solution file (*.sln) and solution user option files (*.suo) for the solution. Finally, the source control plug-in creates a folder beneath the .sln folder for each additional project in the source control database. If a solution is not well-formed, the source control plug-in creates a folder for the solution and its initial project. Then folders for each additional project are created in parallel to the solution folder.

Views of a Solution or Project

Visual Studio provides three distinct views of a source-controlled solution or project: design, source control, and physical. Many source control tasks are easier to perform when there is a one-to-one mapping between the individual elements of these views. However, if you create your solutions and projects and add them to source control using the default settings in Visual Studio, your solutions and projects will not necessarily be organized in the same way on disk as they are in Solution Explorer and in the database.

The design view of a solution or project, which you see in Solution Explorer, is a logical depiction of the contents of a solution or project. Generally, the design view is tidy and meaningful. Unnecessary files are hidden, and files from many different physical locations are pressed into a single project container.

The source control view of a solution or project, which you see in a standalone application such as Visual SourceSafe Explorer, is also a logical view of a solution or project. However, the source control view is not necessarily a reflection of the design view.

The physical view of a solution or project, which you see in Windows File Explorer, is unlikely to reflect the hierarchical structure of either the design or the source control view.

The following guidelines can help you achieve organizational fidelity between the design, physical, and source control views of your source-controlled solutions and projects:

- Create a blank solution first, and then add projects to it. This helps you maintain the logical parent-child relationship between a solution and its projects in storage. When you then add the solution to source control, the source control view and design view will both mirror the solution hierarchy on disk.
- Give each solution a unique and descriptive name that differs from the name of each of the contained projects.
- Avoid adding link files to a source-controlled solution or project.
- If possible, store all files in a solution or project on one disk drive.

Source Control Connections and Bindings

Visual Studio defines a connection as a live data link between Visual Studio and a database server. When you add a solution or project to source control, your source control plug-in copies the item and all of its contents from disk into the database. One source control folder is created for each folder containing a solution or project file. After adding the item, the source control plug-in binds your local working copy of a solution or project to its version in the database. Every source-controlled solution has at least one source control binding. However, an item can have multiple bindings and require multiple connections to the database. The number of bindings and connections depends on how you create the solution initially and whether or not its projects and files are all saved on the same partition.
As an example of bindings and connections, think of a well-formed source-controlled solution, with multiple projects, as a house with several rooms. When you build the house, you can install a single high-speed data line from one room to the street. You install a router behind a firewall to distribute the data feed to other rooms, and you pay an Internet service provider to connect your house to the Internet. You might think of a source control binding as representing the single data line created for the house. When you open a source-controlled solution, a connection is created across that binding. The connection establishes a handshake between your working copy of the solution on disk and the master copy of the solution in the database.

If a source-controlled solution is not well-formed, you can look at it like a house in which every room is connected to the Internet directly. Internet charges are more expensive than in the single-connection house, maintenance costs are higher, and switching to a different Internet service provider is much more difficult and time-consuming.

Ideally, a solution and its projects share a single source control binding. Single-binding solutions are more manageable than multiple-binding solutions. They are easier to:

- Disconnect from source control in order to work offline.
- Connect to the database after reconnecting to the network.
- Branch in one step.

You can create a multi-project solution with a single binding by creating a blank solution before adding its projects. You can also do this by selecting the Create Directory for Solution option in the New Project dialog box when creating a solution-project pair. If you create a solution-project pair in one step and do not select the Create Directory for Solution option in the New Project dialog box (off by default), a second binding will be created when you add a second project to the solution. One binding is created for the initial project and the solution. Additional bindings are created for each additional project.

Source Control Terminology

The Visual Studio documentation uses a number of terms to describe source control features and concepts. The following list defines some of the common terms.

Basis version: The server version of a file from which a local version is derived.

Binding: Information that correlates a working folder for a solution or project on disk to its folder in the database.

Branching: Process of creating a new version, or branch, of a shared file or project under source control. Once a branch has been created, the two versions under source control will have a shared history up to a certain point and divergent histories after that point.

Conflict: Two or more different changes to the same line of code, in situations where two or more developers have checked out and edited the same file.

Connection: A live data link between a source control client (for example, Visual Studio) and a source control database server.

Database: Location where all master copies, history, project structures, and user information are stored. A project is always contained within one database. Multiple projects can be stored in one database, and multiple databases can be used. Other terms commonly used for a database are repository and store.

History: Record of changes to a file since it was initially added to source control. With version control, you can return to any point in the file history and recover the file as it existed at that point.

Label: User-defined name attached to a specific version of a source-controlled item.
Local copy: File in a user's working folder to which changes are saved until a check-in occurs. A local copy is sometimes referred to as a working copy.

Master copy: The most recently checked-in version of a source-controlled file, as opposed to the local copy of a file in your working folder. Other terms for master copy are server version and database version.

Merging: Process of combining differences in two or more modified versions of a file into a new file version. Merging can affect different versions of the same file or changes made to the same file version.

Shared file: A file having versions that reside in more than one source control location. Other terms for a shared file are copy and shortcut.

Solution root: An empty folder in a database that contains all items in a source-controlled solution. By default, this folder is <solutionname>.root.

Super-unified root: A virtual container beneath which all projects and files in a source-controlled solution are located. For example, [SUR]:\ is the super-unified root of a source-controlled solution containing projects that are located in [SUR]:\C:\Solution\ProjOne and [SUR]:\D:\ProjTwo.

Unified root: A path to the parent directory for all working folders and files in a source-controlled solution or project. For example, C:\Solution is the unified root of a source-controlled solution containing files that are located in C:\Solution, C:\Solution\ProjOne and C:\Solution\ProjTwo.

Working folder: Location where your local copies of source-controlled items are stored, usually on your own computer. Another term for a working folder is workspace.

See Also

Tasks

How to: Disable Creation of the <solutionname>.root Folder
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/ms171339%28v%3Dvs.90%29
CC-MAIN-2019-35
en
refinedweb
Because each song is so well written, and the lyrics are so clever and poignant, this central 69 Love Songs node has been constructed to index each song (**not** namespaced). This particular noder is not really predisposed to noding the lyrics to every song he's ever heard, but because Merritt has been able to so wonderfully depict every stage of love in these 69 songs, I hope is that they will not only beautify Everything but also provide a handy source of nodes that you can softlink to in order to very nicely express a particular feeling. Editor's note: In September 2003, we removed cpoyrighted texts. The lyrics to Stephen Merrit's 69 Love Songs fell victim to that change in policy. 69 Love Songs Volume 1: Stephin Merritt, the mad genius responsible for The Magnetic Fields, was struck by inspiration in a midtown Manhattan gay piano bar. Originally conceived as 100 Love Songs, a musical revue or cabaret, Merritt compromised in number when he realized that 100 2-minute songs, with no downtime or intermissions, would still be a bit lengthy. Eventually, the logistical absurdity of putting on a cabaret with rotating singers and dozens of songs led Merritt to put his ideas on tape (DAT? I have no idea), and 69 Love Songs was born. Like most of Merritt's work, 69 Love Songs is thoroughly toungue-in-cheek, although many of the songs are incredibly poignant. Merritt places some of his narrators fairly bizarre settings -- a Scotsman leaving for war ("Abigail, Belle of Kilronan"), an encounter with Ferdinand de Saussure ("The Death of Ferdinand de Saussure") -- and writes heartfelt songs of passion, confusion, and loss. Other songs are unapologetically horny and silly, or saccharine. There is a whole set of songs -- "Love Is Like Jazz", "Experimental Music Love", "Punk Love", "World Love", "Acoustic Guitar", and "Xylophone Track" (about death by the blues) -- in which Merritt toys with specific musical styles, pairing them with totally incongruous subjects. Merritt also keeps the genders of his narrators and their lovers (partners, stalkees, whatever) ambiguous -- Merritt is a really, really flaming homosexual -- or turns the expected roles on their head, by writing queer country songs ("Papa Was A Rodeo") and the like. 69 Love Songs is Merritt's masterpiece. It still astounds me that a man cut 2:50:00 of music I like, music that manages to be hilarious, embittered, heartfelt, and playful in two consecutive songs. Some noders have made comparisons between Stephin Merritt and Morrissey, kind of apt given their tendencies for drama, but Merritt's ability to balance self-parody with earnestness makes 69 Love Songs a real pleasure (whereas The Smiths, one of my favorite bands, can be pretty grating). As for comparisons to Merritt's other work: I really like The 6ths's Wasps' Nests (another album of rotating singers] and The Charm of the Highway Strip (another theme album), although both of those lack the variety and depth of 69 Love Songs. "Take Ecstacy With Me" is the one song that ought to be appended. Really, there is nothing, and will never be anything, quite like this album. I am not going to append a whole song list; rather, I am going to node songs (not whole lyrics, don't shoot) about which I have non-stupid things to say. Right now, that's none of them. Log in or register to write something here or to contact authors. Need help? [email protected]
https://everything2.com/title/69+Love+Songs
CC-MAIN-2019-35
en
refinedweb
bindings to picosat (a SAT solver)

Project description
PicoSAT is a popular SAT solver written by Armin Biere in pure C. This package provides efficient Python bindings to picosat on the C level, i.e. when importing pycosat, the picosat solver becomes part of the Python process itself. For ease of deployment, the picosat source (namely picosat.c and picosat.h) is included in this project. These files have been extracted from the picosat source (picosat-965.tar.gz).

Usage
The pycosat module has two functions, solve and itersolve, both of which take an iterable of clauses as an argument. Each clause is itself represented as an iterable of (non-zero) integers.

The function solve returns one of the following:
- one solution (a list of integers)
- the string "UNSAT" (when the clauses are unsatisfiable)
- the string "UNKNOWN" (when a solution could not be determined within the propagation limit)

The function itersolve returns an iterator over solutions. When the propagation limit is specified, exhausting the iterator may not yield all possible solutions.

Both functions take the following keyword arguments:
- prop_limit: the propagation limit (integer)
- vars: number of variables (integer)
- verbose: the verbosity level (integer)

>>> import pycosat
>>> cnf = [[1, -5, 4], [-1, 5, 3, 4], [-3, -4]]
>>> pycosat.solve(cnf)
[1, -2, -3, -4, 5]

This solution translates to: x1 = x5 = True, x2 = x3 = x4 = False

To find all solutions, use itersolve:

>>> for sol in pycosat.itersolve(cnf):
...     print sol
...
[1, -2, -3, -4, 5]
[1, -2, -3, 4, -5]
[1, -2, -3, 4, 5]
...
>>> len(list(pycosat.itersolve(cnf)))
18

In this example, there are a total of 18 possible solutions, which had to be an even number because x2 was left unspecified in the clauses.

The fact that itersolve returns an iterator makes it very elegant and efficient for many types of operations. For example, using the itertools module from the standard library, here is how one would construct a list of (up to) 3 solutions:

>>> import itertools
>>> list(itertools.islice(pycosat.itersolve(cnf), 3))
[[1, -2, -3, -4, 5], [1, -2, -3, 4, -5], [1, -2, -3, 4, 5]]

Implementation of itersolve
How does one go from having found one solution to another solution? The answer is surprisingly simple. One adds the inverse of the already found solution as a new clause. This new clause ensures that another solution is searched for, as it excludes the already found solution. Here is basically a pure Python implementation of itersolve in terms of solve:

def py_itersolve(clauses):  # don't use this function!
    while True:             # (it is only here to explain things)
        sol = pycosat.solve(clauses)
        if isinstance(sol, list):
            yield sol
            clauses.append([-x for x in sol])
        else:  # no more solutions -- stop iteration
            return

This implementation has several problems. Firstly, it is quite slow as pycosat.solve has to convert the list of clauses over and over and over again. Secondly, after calling py_itersolve the list of clauses will be modified. In pycosat, itersolve is implemented on the C level, making use of the picosat C interface (which makes it much, much faster than the naive Python implementation above).
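As a small illustration of the keyword arguments listed above (this is my own sketch based only on the arguments documented here, not part of the original project description):

import pycosat

cnf = [[1, -5, 4], [-1, 5, 3, 4], [-3, -4]]

# With a deliberately tiny propagation limit, the solver may give up,
# in which case solve returns the string "UNKNOWN" instead of a list.
result = pycosat.solve(cnf, vars=5, prop_limit=10)
print(result)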
https://pypi.org/project/pycosat/
CC-MAIN-2019-35
en
refinedweb
Is there a way to list all the available drive letters in python? More or less what it says on the tin: is there an (easy) way in Python to list all the currently in-use drive letters in a windows system? (My google-fu seems to have let me down on this one.) Related: - Enumerating all available drive letters in Windows (C++ / Win32)

Answers

import win32api

drives = win32api.GetLogicalDriveStrings()
drives = drives.split('\000')[:-1]
print drives

Adapted from:

Without using any external libraries, if that matters to you:

import string
from ctypes import windll

def get_drives():
    drives = []
    bitmask = windll.kernel32.GetLogicalDrives()
    for letter in string.uppercase:
        if bitmask & 1:
            drives.append(letter)
        bitmask >>= 1
    return drives

if __name__ == '__main__':
    print get_drives()  # On my PC, this prints ['A', 'C', 'D', 'F', 'H']

Those look like better answers. Here's my hackish cruft:

import os, re
re.findall(r"[A-Z]+:.*$", os.popen("mountvol /").read(), re.MULTILINE)

Riffing a bit on RichieHindle's answer; it's not really better, but you can get windows to do the work of coming up with actual letters of the alphabet:

>>> import ctypes
>>> buff_size = ctypes.windll.kernel32.GetLogicalDriveStringsW(0, None)
>>> buff = ctypes.create_string_buffer(buff_size*2)
>>> ctypes.windll.kernel32.GetLogicalDriveStringsW(buff_size, buff)
8
>>> filter(None, buff.raw.decode('utf-16-le').split(u'\0'))
[u'C:\\', u'D:\\']

The Microsoft Script Repository includes this recipe which might help. I don't have a windows machine to test it, though, so I'm not sure if you want "Name", "System Name", "Volume Name", or maybe something else.

import win32com.client
strComputer = "."
objWMIService = win32com.client.Dispatch("WbemScripting.SWbemLocator")
objSWbemServices = objWMIService.ConnectServer(strComputer, "root\cimv2")
colItems = objSWbemServices.ExecQuery("Select * from Win32_LogicalDisk")
for objItem in colItems:
    print "Access: ", objItem.Access
    print "Availability: ", objItem.Availability
    print "Block Size: ", objItem.BlockSize
    print "Caption: ", objItem.Caption
    print "Compressed: ", objItem.Compressed
    print "Config Manager Error Code: ", objItem.ConfigManagerErrorCode
    print "Config Manager User Config: ", objItem.ConfigManagerUserConfig
    print "Creation Class Name: ", objItem.CreationClassName
    print "Description: ", objItem.Description
    print "Device ID: ", objItem.DeviceID
    print "Drive Type: ", objItem.DriveType
    print "Error Cleared: ", objItem.ErrorCleared
    print "Error Description: ", objItem.ErrorDescription
    print "Error Methodology: ", objItem.ErrorMethodology
    print "File System: ", objItem.FileSystem
    print "Free Space: ", objItem.FreeSpace
    print "Install Date: ", objItem.InstallDate
    print "Last Error Code: ", objItem.LastErrorCode
    print "Maximum Component Length: ", objItem.MaximumComponentLength
    print "Media Type: ", objItem.MediaType
    print "Name: ", objItem.Name
    print "Number Of Blocks: ", objItem.NumberOfBlocks
    print "PNP Device ID: ", objItem.PNPDeviceID
    z = objItem.PowerManagementCapabilities
    if z is None:
        a = 1
    else:
        for x in z:
            print "Power Management Capabilities: ", x
    print "Power Management Supported: ", objItem.PowerManagementSupported
    print "Provider Name: ", objItem.ProviderName
    print "Purpose: ", objItem.Purpose
    print "Quotas Disabled: ", objItem.QuotasDisabled
    print "Quotas Incomplete: ", objItem.QuotasIncomplete
    print "Quotas Rebuilding: ", objItem.QuotasRebuilding
    print "Size: ", objItem.Size
    print "Status: ", objItem.Status
    print "Status Info: ", objItem.StatusInfo
    print "Supports Disk Quotas: ", objItem.SupportsDiskQuotas
    print "Supports File-Based Compression: ", objItem.SupportsFileBasedCompression
    print "System Creation Class Name: ", objItem.SystemCreationClassName
    print "System Name: ", objItem.SystemName
    print "Volume Dirty: ", objItem.VolumeDirty
    print "Volume Name: ", objItem.VolumeName
    print "Volume Serial Number: ", objItem.VolumeSerialNumber

Found this solution on Google, slightly modified from the original. Seems pretty pythonic and does not need any "exotic" imports:

import os, string
available_drives = ['%s:' % d for d in string.ascii_uppercase if os.path.exists('%s:' % d)]

I wrote this piece of code:

import os
drives = [chr(x) + ":" for x in range(65, 91) if os.path.exists(chr(x) + ":")]

It's based on @Barmaley's answer, but has the advantage of not using the string module, in case you don't want to use it. It also works on my system, unlike @SingleNegationElimination's answer.

More optimal solution based on @RichieHindle:

from ctypes import windll

def get_drives():
    drives = []
    bitmask = windll.kernel32.GetLogicalDrives()
    letter = ord('A')
    while bitmask > 0:
        if bitmask & 1:
            drives.append(chr(letter) + ':\\')
        bitmask >>= 1
        letter += 1
    return drives

On Windows you can do an os.popen:

import os
print os.popen("fsutil fsinfo drives").readlines()
These 2 lines capture the letters of all of the drives:

bitmask = (bin(windll.kernel32.GetLogicalDrives())[2:])[::-1]  # strip off leading 0b and reverse
drive_letters = [ascii_uppercase[i] + ':/' for i, v in enumerate(bitmask) if v == '1']

Here is the full routine:

from ctypes import windll, create_unicode_buffer, c_wchar_p, sizeof
from string import ascii_uppercase

def get_win_drive_names():
    volumeNameBuffer = create_unicode_buffer(1024)
    fileSystemNameBuffer = create_unicode_buffer(1024)
    serial_number = None
    max_component_length = None
    file_system_flags = None
    drive_names = []
    # Get the drive letters, then use the letters to get the drive names
    bitmask = (bin(windll.kernel32.GetLogicalDrives())[2:])[::-1]  # strip off leading 0b and reverse
    drive_letters = [ascii_uppercase[i] + ':/' for i, v in enumerate(bitmask) if v == '1']
    for d in drive_letters:
        rc = windll.kernel32.GetVolumeInformationW(c_wchar_p(d), volumeNameBuffer, sizeof(volumeNameBuffer),
                                                   serial_number, max_component_length, file_system_flags,
                                                   fileSystemNameBuffer, sizeof(fileSystemNameBuffer))
        if rc:
            drive_names.append(f'{volumeNameBuffer.value}({d[:2]})')  # disk_name(C:)
    return drive_names

As I don't have win32api installed on my field of notebooks, I used this solution using wmic:

import subprocess
import string

# define alphabet
alphabet = []
for i in string.ascii_uppercase:
    alphabet.append(i + ':')

# get letters that are mounted somewhere
mounted_letters = subprocess.Popen("wmic logicaldisk get name", shell=True,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

# erase mounted letters from alphabet in nested loop
for line in mounted_letters.stdout.readlines():
    if "Name" in line:
        continue
    for letter in alphabet:
        if letter in line:
            print 'Deleting letter %s from free alphabet %s' % (letter, alphabet)
            alphabet.pop(alphabet.index(letter))

print alphabet

Alternatively you can get the difference of both lists with this simpler solution (after launching the wmic subprocess as mounted_letters):

# get output to list
mounted_letters_list = []
for line in mounted_letters.stdout.readlines():
    if "Name" in line:
        continue
    mounted_letters_list.append(line.strip())

rest = list(set(alphabet) - set(mounted_letters_list))
rest.sort()
print rest

Both solutions are similarly fast, yet I guess the set approach is better for some reason, right?

As part of a similar task I also needed to grab a free drive letter. I decided I wanted the highest available letter. I first wrote it out more idiomatically, then crunched it to a 1-liner to see if it still made sense. As awesome as list comprehensions are, I love sets for this: unused = set(alphabet) - set(used) instead of having to do unused = [a for a in alphabet if a not in used]. Cool stuff!

def get_used_drive_letters():
    drives = win32api.GetLogicalDriveStrings()
    drives = drives.split('\000')[:-1]
    letters = [d[0] for d in drives]
    return letters

def get_unused_drive_letters():
    alphabet = map(chr, range(ord('A'), ord('Z')+1))
    used = get_used_drive_letters()
    unused = list(set(alphabet) - set(used))
    return unused

def get_highest_unused_drive_letter():
    unused = get_unused_drive_letters()
    highest = list(reversed(sorted(unused)))[0]
    return highest

The one liner:

def get_drive():
    highest = sorted(list(set(map(chr, range(ord('A'), ord('Z')+1))) - set(win32api.GetLogicalDriveStrings().split(':\\\000')[:-1])))[-1]

I also chose the alphabet using map/range/ord/chr over using string since parts of string are deprecated.
If you don't want to worry about cross platform issues, including those across python platforms such as Pypy, and want something decently performative to be used when drives are updated during runtime:

>>> from os.path import exists
>>> from sys import platform
>>> drives = ''.join(l for l in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' if exists('%s:/' % l)) if platform == 'win32' else ''
>>> drives
'CZ'

Here's my performance test of this code (4000 iterations; threshold of min + 250ns):

code | min | max | avg | efficiency
drives = ''.join( l for l in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' if exists('%s:/'%l) ) if platform=='win32' else '' | 290.049ns | 1975.975ns | 349.911ns | 82.892%
http://www.brokencontrollers.com/faq/23431616.shtml
CC-MAIN-2019-43
en
refinedweb
The dynamic linker is one of the most important yet widely overlooked components of a modern Operating System. Its job is to load and link-in executable code from shared libraries into executables at run time. There are many intricate details on how the dynamic linker does what it does, but one of the more interesting is the use of the LD_LIBRARY_PATH and LD_PRELOAD environment variables. When defined, these variables will let you override the default behaviour of the dynamic linker to load specific code into your executables. For example, using LD_PRELOAD, you can override the C standard library functions in libc with your own versions (think printf, puts, getc, etc.)

Let's see this in action! We'll start by making a simple program to test the (now deprecated) gets() function. Here, we will create a file called test.c and put the following contents inside it:

#include <stdio.h>

int main (void)
{
    char str[128];
    printf ("Testing gets()...\n");
    gets(str);
    return 0;
}

Note that this code is not safe and should not be used for production, but it makes a simple test scenario. Next, we can compile the source with gcc. (Since gets() is deprecated, we're going to throw in the -w flag to suppress warning messages. We don't really care for this example.)

$ gcc -w -o test test.c

Finally, we can run the program and examine its output:

$ ./test
Testing gets()...
womp
$

Success! When the executable is run, it links the gets() code from libc into memory and executes that code when we call gets(). Now let's see how we can override libc's implementation with our own. First, we'll write a new version of gets() that we want to run. Make a file called mygets.c and enter the following:

#include <stdio.h>

char *gets( char *str )
{
    printf("Error: Stop using deprecated functions!\n");
    return "";
}

Once finished, we can compile this into our own shared object library:

$ gcc -w -fPIC -shared -o mygets.so mygets.c

Finally, let's run the test executable again, but this time we will call it with LD_PRELOAD to load our custom shared library before dynamically linking libc:

$ LD_PRELOAD=./mygets.so ./test
Testing gets()...
Error: Stop using deprecated functions!
$

As you can see, our custom code is now displayed where we were once being prompted for input. Of course, we could write any code we want to go in here. The only limit is whatever we can think up. This technique could be extremely useful when attempting advanced debugging or when trying to replace specific parts of a shared library in your program.

You can even take this a step further to create hooks for the original overridden functions. To illustrate this, let's modify our shared library one more time:

#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>

char *gets( char *str )
{
    printf("Error: Stop using deprecated functions!\n");
    char *(*original_gets)( char *str );
    original_gets = dlsym(RTLD_NEXT, "gets");
    return (*original_gets)(str);
}

We've done a few things here. We now reference the dlfcn header and use dlsym to find the original gets() function. Notice that we use the RTLD_NEXT pseudo-handle with dlsym. In order to use this handle, we must define the _GNU_SOURCE test macro (otherwise RTLD_NEXT will not be found). This finds the next occurrence of gets() after the current library and allows us to map it to original_gets(). We can then use it in this function with the mapped name.
We can compile our library again to test out the new code (this time linking the dl lib):

$ gcc -w -fPIC -shared -o mygets.so mygets.c -ldl

Using this method, we can run our test executable again:

$ LD_PRELOAD=./mygets.so ./test
Testing gets()...
Error: Stop using deprecated functions!
womp
$

At this point, you should notice the custom code we provided for gets(), followed by the input prompt handled by the original libc function. Hopefully this dispels a little bit of the voodoo and gives you another valuable tool to stash in your belt.
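All of the examples above use LD_PRELOAD, but LD_LIBRARY_PATH (mentioned at the start) deserves a quick illustration too. As a rough sketch, with made-up library and directory names: if a program was linked against a shared library by name, you can point the dynamic linker at an alternate directory containing a drop-in replacement, and that copy will be found first:

$ gcc -o app app.c -L. -lmystuff            # app depends on libmystuff.so
$ LD_LIBRARY_PATH=/tmp/patched ./app        # libmystuff.so is resolved from /tmp/patched first

The difference is that LD_LIBRARY_PATH only changes where the linker searches for libraries the executable already depends on, while LD_PRELOAD forcibly injects a library ahead of everything else.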
https://www.unix-ninja.com/p/Dynamic_Linker_Voodoo
CC-MAIN-2019-43
en
refinedweb
import "go.chromium.org/luci/appengine/gaesettings" Package gaesettings implements settings.Storage interface on top of GAE datastore. By default, gaesettings must have its handlers installed into the "default" AppEngine module, and must be running on an instance with read/write datastore access. See go.chromium.org/luci/server/settings for more details. Storage knows how to store JSON blobs with settings in the datastore. It implements server/settings.EventualConsistentStorage interface. FetchAllSettings fetches all latest settings at once. GetConsistencyTime returns "last modification time" + "expiration period". It indicates moment in time when last setting change is fully propagated to all instances. Returns zero time if there are no settings stored. func (s Storage) UpdateSetting(ctx context.Context, key string, value json.RawMessage, who, why string) error UpdateSetting updates a setting at the given key. Package gaesettings imports 11 packages (graph) and is imported by 4 packages. Updated 2019-10-14. Refresh now. Tools for package owners.
https://godoc.org/go.chromium.org/luci/appengine/gaesettings
CC-MAIN-2019-43
en
refinedweb
Github user xndai commented on a diff in the pull request:

--- Diff: c++/src/OrcHdfsFile.cc ---
@@ -66,22 +64,22 @@ namespace orc {
       options = config->GetOptions();
     }
     hdfs::IoService * io_service = hdfs::IoService::New();
-    //Wrapping fs into a shared pointer to guarantee deletion
-    std::shared_ptr<hdfs::FileSystem> fs(hdfs::FileSystem::New(io_service, "", options));
-    if (!fs) {
+    //Wrapping file_system into a unique pointer to guarantee deletion
+    file_system = std::unique_ptr<hdfs::FileSystem>(hdfs::FileSystem::New(io_service, "", options));
+    if (!file_system) {
--- End diff --

Unfortunately, unique_ptr can be redefined as auto_ptr if the platform doesn't support unique_ptr (see orc_config.hh). We talked about removing this redefine, but it hasn't been done yet. So I'd suggest we stick to file_system.get() != nullptr.

---
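For readers wondering why the explicit .get() comparison matters, here is a small illustrative snippet (my own, not from the ORC codebase): std::auto_ptr, unlike std::unique_ptr, has no conversion to bool, so code written as if (!ptr) stops compiling once a compatibility header redefines unique_ptr to auto_ptr:

#include <memory>

void useFileSystem(std::auto_ptr<int>& fs) {
    // if (!fs) { ... }        // compiles with unique_ptr, but NOT with auto_ptr
    if (fs.get() != NULL) {    // works with both smart pointer types
        // ... use *fs ...
    }
}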
http://mail-archives.apache.org/mod_mbox/orc-dev/201707.mbox/%[email protected]%3E
CC-MAIN-2019-43
en
refinedweb
Interfaces First (and Foremost) With Java

Paolo A. G. Sivilotti, Computer Science and Engineering, The Ohio State University, Columbus, OH, [email protected]
Matthew Lang, Mathematics and Computer Science, Moravian College, Bethlehem, PA, [email protected]

Abstraction is a critical concept that underlies many topics in computing science. For example, in software engineering, the distinction between a component's behavior and its implementation is fundamental. Java provides two constructs that correspond to precisely this distinction: A Java interface is a client's abstract view of a component's behavior, while a class is a concrete implementation of that same component. We have developed a course that introduces Java while following a discipline of diligently decomposing every component into these two separate linguistic elements. In this course, interfaces are given the same prominence as classes since both are needed for a complete component. This approach is helpful to students by providing: (i) a clear manifestation of the role of abstraction in software systems, and (ii) a framework that naturally motivates many good coding practices adopted by professional programmers.

Categories and Subject Descriptors: K.3.2 [Computers and Education]: Computer and Information Science Education—computer science education
General Terms: Design, Languages
Keywords: abstraction, concrete state, behavioral specification
SIGCSE'10, March 10-13, 2010, Milwaukee, Wisconsin, USA. Copyright 2010 ACM 978-1-60558-885-8/10/03.

1. INTRODUCTION
Abstraction is a key concept in computing science and software engineering [3, 5-7]. Students encounter it, in some form, in practically every major topic including architecture, operating systems, complexity, data structures, and programming languages. Indeed, the ACM 2001 Computing Curricula recognizes the importance of abstraction at the pedagogical core of our discipline [22]. As one of its guiding principles, the Computer Science volume states (p. 12):

All computing science students must learn to integrate theory and practice, to recognize the importance of abstraction, and to appreciate the value of good engineering design.

Ironically, the ubiquity of abstraction may actually be a barrier to its recognition and appreciation by students. It is such a common theme that educators may use it casually, without explicitly drawing attention to it.
We have developed an introduction to Java course based on the principle of deliberate and explicit application of behavioral abstraction. Although it is a first course in Java, it is not designed as a first course in programming. Students have completed an introductory sequence (using C++) before taking this course. Beyond teaching students about abstraction, this approach has the additional benefit of motivating and explaining several important (but subtle) concepts and professional best practices in Java.
From the beginning, and throughout the course, students think about components as having two distinct parts: an abstract client-side view and one or more concrete implementations. The two views are segregated in distinct program artifacts: the client-side view is given by a Java interface while each concrete realization is given by a Java class that implements the interface.
Of course, these are exactly the intended roles of these two language constructs. What is unique about our approach is our insistence that all components be decomposed into these two parts.
Interfaces are used early, they are used throughout, and they are used prominently. This approach does not require special frameworks or IDEs. It simply leverages existing Java constructs.
This discipline of component decomposition enforces a clear separation of concerns. It creates an explicit scaffolding, supported by Java language constructs, for distinguishing behavioral abstraction from concrete implementation. This scaffolding can then buttress and augment students' understanding of abstraction and modularity.
The rest of the paper is organized as follows. Section 2 compares our work with similar curricular structures. Section 3 describes the separation discipline and its implications on documentation and testing. Section 4 sketches some exemplar homework assignments from this course. Section 5 lists many good coding practices that follow or are related to this discipline, while Section 6 outlines some difficulties in adopting the discipline in full. Finally, Sections 7 and 8 evaluate this curricular structure and conclude.

2. RELATED WORK
The order of topics in a first Java course has been a matter of considerable debate within the SIGCSE community. One popular structure is to introduce objects early, even as soon as day 1. This technique, known as "objects first", is exemplified by the BlueJ IDE which allows students to easily interact with objects and observe object state with minimal syntactic scaffolding [1, 12]. Although such structures can help build up student intuitions about object-oriented systems, they can mingle abstract and concrete state, blurring for students the distinction between the two.
An alternative structure, termed "components-first", emphasizes the separation of client-side view and implementer's view of components. Students begin by learning how to be clients of components, or APIs, or libraries. They subsequently learn how to implement these components. Examples of this approach include [11, 16, 18]. All of these approaches are similar to ours in philosophy in that they recognize the importance of distinguishing between the two views of a component. The differences are that our design leverages standard Java constructs and that the decomposition is applied consistently throughout the course.
The separation of abstract and concrete states for specification purposes is a well-known technique from abstract data types. Serious Java-based specification notations all support this separation. For example, Liskov & Guttag's book [14], JML [13], and the "Laboratory in Software Engineering" course at MIT, number 6.170 [17], all use specification fields in the documentation of a class to define object state and method behavior. These strategies rely on a documentation discipline of declaring specification variables, then writing pre and postconditions in terms of these variables, rather than fields in the implementation. Such disciplines, however, are difficult to strictly enforce and students may see the declaration of specification variables that closely match fields in the concrete implementation as an unnecessary extra step.
Perhaps the closest work to our own is [20], where the case is made for teaching interfaces before inheritance. This ordering is reflected in one popular introductory textbook [9] where interfaces and polymorphism come before inheritance. We agree with this ordering, but this is just one example of placing interfaces earlier in the curriculum and giving them greater prominence. Our approach advocates consistently decomposing components into interfaces and classes.
For example, we present the Collections Framework entirely in terms of interfaces (List, Queue, Deque, Set, SortedSet). Iterators (and ListIterators) over these collections are interfaces as well, so the presentation can be quite sophisticated before the implementing classes are even discussed.

3. SEPARATION OF CONCERNS
One way to motivate the interface construct is simply as a mechanism for overcoming Java's single-inheritance restriction. Another option is to present the interface construct as simply a variant of the class construct, much like an abstract class, but with further restrictions (e.g., it can contain only public members, no constructors, and no static methods). Such characterizations, as exemplified in [19], are common in introductory Java courses and serve to relegate the interface construct to secondary standing. In [19], interfaces are not even mentioned until p. 694, where they are covered in half a chapter. This treatment is similar to other popular introductory Java textbooks. While this treatment may reflect the use of Java interfaces in practice, it misses an opportunity to accomplish a larger pedagogical goal: teaching students about abstraction.
Our curricular approach, on the other hand, sets interfaces and classes on equal footing. Every software component is decomposed into two distinct artifacts: (i) a Java interface which is the (abstract) client-side view of the component, and (ii) a Java class which is the (concrete) implementation. Thus, the interface construct is present early and often. Both an interface and a class, together, are required for a complete component.
The notion of separating abstract and concrete state is a classic idea of abstract data types. Specification notations such as JML permit exactly this separation (through model or specification fields). What characterizes our "interfaces first and foremost" approach is the requirement of creating a Java interface; separation of abstraction from realization follows as a consequence. Since the interface is a distinct lexical scope from any implementing classes, students see the need for a clear and thorough description of the behavioral cover story in an implementation-neutral manner.

3.1 Documentation with Javadoc
In addition to all the standard Javadoc tags, interfaces and classes are documented in accordance with the distinct roles they serve in defining a component. Custom tags are used to further structure this information.
For interfaces, the documentation defines both the abstract state of the component and the behavior of each (public) method in terms of its effect on that abstract state. That is, the interface documentation must present a cover story that is understandable to a client with absolutely no knowledge of the component implementation (i.e., class).
The cover story can be given with various degrees of formal rigor. We use a collection of custom tags as hooks for telling the client-side cover story:
@mathmodel: abstract fields whose types are mathematical concepts such as integers, sets, and strings
@mathdef: derived abstract fields that serve as convenient shorthands
@constraint: invariants on state
@initially: guarantees on the initial state
@requires, @alters, @ensures: classic behavioral descriptions of each method
The tag arguments can be written as formal mathematical expressions or informal prose. Either way, the cover story is isolated in a construct with no lexical connection to any underlying implementation. Thus, invariants, preconditions, and postconditions must perforce be written in terms of abstract state. This explicit separation reinforces for students the mental model of distinguishing the abstraction from the realization.
Classes are also carefully documented with Javadoc. In this case, however, the documentation is written in terms of concrete state (i.e., private fields). Custom tags for class documentation include:
@convention: invariants on (concrete) state
@correspondence: the abstraction relation [21] for mapping (concrete) state to abstract fields
Behavior descriptions (requires and ensures clauses) of public methods are not written because this (client-side)
This explicit separation reinforces for studentsthe mental model of distinguishing the abstraction from therealization.Classes are also carefully documented with Javadoc. Inthis case, however, the documentation is written in termsof concrete state (i.e., private fields). Custom tags for classdocumentation include:@convention: invariants on (concrete) state@correspondence: the abstraction relation [21] for mapping(concrete) state to abstract fieldsBehavior descriptions (requires and ensures clauses) ofpublic methods are not written because this (client-side)516 public abstract class BigNaturalTest {private BigNatural b;protected abstractBigNatural create (int value );}@Testpublic void smallInitialization () {b = create (34);assertEquals(“Two−digit initialization”,“34”, b. toString ());}//more test casesListing 1: Abstract test class with test casespublic class SlowBigNaturalTestextends BigNaturalTest {}@Overrideprotected BigNatural create (int value ) {return new SlowBigNatural( value );}Listing 2: Derived test class provides factorydocumentation is already in the interface. On the otherhand, private (helper) methods, which exist only in the class,are documented with @requires, @alters, @ensures tags asabove. An important difference is that these descriptionsare written in terms of concrete state. Again, this practiceis easy for students to adopt because of the lexical scopingprovided by the Java interface and class constructs.3.2 Testing with JUnitThe use of JUnit to test software components can furtherleverage and reinforce the explicit separation of client-sideview and implementation.Black-box test cases are written in a standard JUnit testclass using only the interface of the component under test.The test class includes one or more abstract factory methodsthat serve to generate instances for test cases to exercise.Each test case begins by calling these factory methods.Thus, the test class is completely independent of any particularimplementation. Listing 1 illustrates part of a testclass written in this manner, where BigNatural is an interfacetype.The test class is an abstract class since it includes an abstractmethod. In order to execute tests, this class is extendedand an implementation for the factory provided (seeListing 2). The derived test class does not provide any newtest cases. 1 Of course, this style of test case coding is notnew. The point is for students to observe and carefully respectthe division achieved by separating abstraction fromrealization.In summary, each component consists of an interface/classpair related by implementation. To test this component, apair of test classes related by inheritance is used.1 Implementation-specific test cases could be provide in thederived test class, but this is the exception rather than therule.4. CANONICAL SAMPLE ASSIGNMENTSA good early assignment is to develop an unbounded naturalnumber component. The requirements are simple: A naturalnumber can be initialized, incremented without bound,decremented when it is positive, and its value displayed asa string. The abstraction is clean. The implementation,however, is more complicated since it must account for anunbounded growth. Students quickly see that many designchoices for implementing the concrete state exist, includingan array of bytes and a string of characters. 
Furthermore,these design choices involve trade-offs in performance andcomplexity for implementing the small set of required behaviors.A simple component such as this one is then refined overthe subsequent assignments to illustrate concepts as coveredin lectures, including documentation, testing, exceptions,comparability, and immutability. Comparing the (abstract)view of an unbounded natural number with Java’sBigInteger is also a nice hook for introducing the subtletiesof behavioral subtyping.For an assignment related to subtyping, we have used aset of three components: Person, Student, and Faculty. Allthree can contribute to a university’s scholarship fund andall three can enter a lottery for football tickets. Only theJava interface is given for each of these components. Thecomponents differ in how much prior contribution is requiredfor eligibility in the ticket lottery and in the quality of seatsthe nondeterministic lottery might yield. Students are askedto identify subtyping relationships and to modify interfacedescriptions so new subtyping relationships exist. Again, thediscipline of decomposing components into both an interfaceand a class clarifies for students the distinction between subtypingand inheritance [4].5. GOOD CODING PRACTICES FOR JAVABeyond language syntax and structures, students shouldalso learn effective idioms and strategies that support writinggood code. There are many such strategies [2], some ofwhich are quite subtle. Using an “interfaces first and foremost”approach clarifies the motivation and key conceptsbehind many of these strategies, making them easier for studentsto understand, remember, and appreciate.Code to the interface.Recommended practice is to prefer the use of interfacetypes (over class types) for all declared types (i.e., localvariables,fields, parameters, and return types). The advantageof this practice is the resulting generality and loose-couplingof the code.This good practice follows directly from our decompositiondiscipline. Clients, as far as possible, work only withinterfaces.Document the contract.Recommended practice is to document method behaviorsrather than implementations with Javadoc. This practiceis sensible given the role Javadoc plays as client-side documentationfor a class. Unfortunately, an uncomfortable tensionexists between this ideal and the pragmatic observationthat Javadoc, as a universally understood documentationnotation, can also effectively be used to describe matters ofinterest to the implementer and future maintainers of an im-517 plementation. Indeed, a single command-line flag instructsJavadoc to produce documentation for all private membersof a class too.By decomposing every component into an interface anda class, students do not encounter this tension. They useJavadoc to produce all possible documentation for the interfaceand all possible documentation for the class, includingprivate members. The former is for the client’s consumptionand the latter is for the implementer and maintainer.Design getters/setters properly.Using public methods to read and write private fields iscertainly better than making the fields themselves public.ToolssuchasEclipsecanevengeneratethesemethodsautomaticallyfor a class with private fields. The problem,however, with this style is that the concrete state drives theabstract behavior instead of the other way around [8].When students work with an interface and class-based decomposition,however, they recognize the role of getters/settersas readers and writers of abstract state. 
Facilitating theimplementation of these methods is just one factor in the designof a class’s concrete state.Make defensive copies.Given the ubiquity of references in Java, it is easy for dangerousaliases to a class’s concrete state to exist. For example,if a constructor assigns a private field x to an argumenty, both the caller of the constructor and the object itselfhave references (through y and x respectively) to the samething. This is dangerous since the caller of the constructorcan make changes to the concrete state of the constructedobject directly without going through its public interface.The separation of interface and implementation does not,itself, mitigate the dangers of aliasing in Java. It does, however,simplify the presentation of these dangers. If studentsare comfortable with simultaneously considering both theabstract and concrete state, and with maintaining the correspondencebetween the two, the dangers of aliasing areeasily illustrated and quickly appreciated.Use exceptions properly.The proper use of exceptions is a matter of much debate,even amongst seasoned Java developers. The choiceof whether exceptions should be checked or unchecked, andwhat kind of exception should be used is often a subtle designchoice involving many trade-offs.Some aspects of exception design, however, follow directlyfrom the disciplined decomposition of our approach. For example,the need to catch an exception and then re-throwit as a new exception, possibly of a different type, is clear.The method signatures, including checked exceptions, thatappear in the interface must make sense to the client interms of the abstract cover story. Students recognize whenan exception reveals aspects specific to a particular implementation.As for when to use exceptions, advice is often generic andeven circular, for example: Use exceptions for exceptionalsituations. A better guide is to clearly characterize situationswhere exceptions are useful. For example, if the clientcan not unilaterally guarantee the precondition of a method,exceptions are appropriate. A classic example is the existenceof a file. Because the code runs concurrently with areal file system in which files may be created and deleted, aclient can not know whether the file exists when the methodit calls actually starts to execute. Appreciating this lesson iseasier if the students are comfortable with behavioral specifications.Respect behavioral subtyping.Behavioral subtypes can be dynamically substituted fortheir supertypes without affecting the correctness of clientcode [15]. This substitution is sound only when the subtype’sinvariant and ensures clauses are covariant, while itsrequirements are contravariant. Since class inheritance involvescoupling of concrete implementations, subtyping isbest modeled in Java as a relationship between interfaces.A discipline of always declaring Java interfaces is thereforehelpful in creating a context for presenting subtyping andits implications.6. CHALLENGESIn Java, the interface construct corresponds most closelyto a purely abstract, client-side view of a component. A Javainterface, however, can not include a constructor. That is,an interface is actually the client-side view of an instance of acomponent, and the creation of (other) instances is not generallypart of that behavior. On the other hand, some clientsdo need to create instances. Ideally, such clients would onlyneed the name of the implementing class, nothing else. 
Unfortunately, without a constructor in the interface, the implementing class must be consulted to confirm the existence of a constructor with the proper signature.
There are several ways to circumvent this difficulty. One is to use a creation pattern in which a separate component serves as a factory. The interface of that factory component defines the valid signatures for instantiating the original component. Apart from leaving a bootstrapping problem (how does the client know how to create a factory?), this solution is somewhat cumbersome since it requires components to consist of 4 artifacts: an interface/class pair for the core component and an interface/class pair for the factory.
Another approach is to require all classes to have a zero-argument constructor. This strategy, however, is limited when it comes to immutable types (which typically do not have zero-argument constructors). Our compromise is to document in the interface the signatures of constructors that implementing classes are expected to have.

7. EVALUATION
The "interfaces first and foremost" approach described here has been the basis for a course that has been offered 5 times, with a total enrollment of 156 students. Survey results indicate the course was well-received by students. The course has averaged 4.7 for "overall rating" (on a Likert-type response scale with 5 being the highest) and 4.2 for "relative ranking compared to other courses in computer science".
Beyond student reaction, however, a better measure for the success of such an approach is the degree to which it transforms students' thinking and instills sound principles of the discipline. To this end, we have followed the cohort of students from our early pilot offerings that subsequently enrolled in the "programming in the large" software engineering course (an existing course that entails significant software design, development, testing, and documentation all done as part of a team). In that second course, students are free to use whatever language they prefer and most choose C++ (the language used in the introductory sequence).
The followed cohort consisted of 85 CS majors. Their work was qualitatively different than that of their peers, according to the instructors for that subsequent course. The recognition and clear application of separation between abstract behavior and concrete representation was present in all of their work (and not their peers'). This separation was manifested in their design, documentation, and testing. On surveys, they reported feeling better prepared for a significant software development project than their peers.

8. CONCLUSIONS
In the "real world", Java programmers do not define an interface for every class. While the benefits of encapsulation and information hiding offered by OO are widely recognized, the effort of defining two separate structures for each type is usually too onerous in a deadline-driven environment. Thus, professional programmers often work with just a class and intermingle the realization with its abstraction. As a mental model, the two are hopefully kept somewhat distinct, but this distinction is usually not directly reflected in the code (beyond visibility modifiers such as private and public).
For students of computing science, however, this intermingling should be avoided.
Not only does a clear separation help to motivate a wide variety of good coding practices, it also provides an exemplar for the general notion of abstraction, which plays such a fundamental and cross-cutting role in our discipline.
We have used this strategy in the development of a new class that follows an introductory course sequence. Students come to the class knowing imperative programming but not Java. We are optimistic that this strategy could also be adopted for the intro level, especially given the success reported from other efforts in that direction [10, 11, 18, 20]. For example, the materials for this course have been adopted at Clemson and initial reaction there has been positive.

9. ACKNOWLEDGMENTS
This material is based upon work supported in part by the National Science Foundation under Grant No. 0931669.

10. REFERENCES
[1] D. J. Barnes and M. Kölling. Objects First with Java: A Practical Introduction Using BlueJ. Prentice Hall, 2002.
[2] J. Bloch. Effective Java. Prentice Hall, 2nd edition, 2008.
[3] T. Colburn and G. Shute. Abstraction in computer science. Minds Mach., 17(2):169-184, 2007.
[4] W. R. Cook, W. Hill, and P. S. Canning. Inheritance is not subtyping. In POPL '90: Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, pages 125-135, New York, NY, USA, 1990. ACM.
[5] D. Gries. A principled approach to teaching OO first. In SIGCSE '08: Proceedings of the 39th SIGCSE technical symposium on Computer science education, pages 31-35, New York, NY, USA, 2008. ACM.
[6] O. Hazzan and J. Kramer. The role of abstraction in software engineering. In ICSE Companion '08: Companion of the 30th international conference on Software engineering, pages 1045-1046, New York, NY, USA, 2008. ACM.
[7] P. B. Henderson, D. Baldwin, V. Dasigi, M. Dupras, J. Fritz, D. Ginat, D. Goelman, J. Hamer, L. Hitchner, W. Lloyd, J. Bill Marion, C. Riedesel, and H. Walker. Striving for mathematical thinking. SIGCSE Bull., 33(4):114-124, 2001.
[8] A. Holub. Why getter and setter methods are evil. September 2003.
[9] C. Horstmann. Big Java. John Wiley & Sons, 3rd edition, 2008.
[10] E. Howe, M. Thornton, and B. W. Weide. Components-first approaches to CS1/CS2: Principles and practice. In SIGCSE '04: Proceedings of the 35th SIGCSE technical symposium on Computer science education, pages 291-295, New York, NY, 2004. ACM.
[11] A. Koenig and B. E. Moo. Accelerated C++: Practical Programming by Example. C++ In-Depth Series. Addison-Wesley, 2000.
[12] M. Kölling, B. Quig, A. Patterson, and J. Rosenberg. The BlueJ system and its pedagogy. Journal of Computer Science Education, 13(4):249-268, December 2003.
[13] G. T. Leavens, K. R. M. Leino, E. Poll, C. Ruby, and B. Jacobs. JML: notations and tools supporting detailed design in Java. Technical Report TR #00-15, Iowa State University, August 2000.
[14] B. Liskov and J. Guttag. Program Development in Java: Abstraction, Specification, and Object-Oriented Design. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2000.
[15] B. H. Liskov and J. M. Wing. A behavioral notion of subtyping. ACM Trans. Program. Lang. Syst., 16(6):1811-1841, 1994.
[16] T. J. Long, B. W. Weide, P. Bucci, D. S. Gibson, J. Hollingsworth, M. Sitaraman, and S. Edwards. Providing intellectual focus to CS1/CS2. SIGCSE Bull., 30(1):252-256, 1998.
[17] Massachusetts Institute of Technology. 6.170: Lab in software engineering. Course notes on web, Fall 2007.
[18] H. Roumani. Practice what you preach: Full separation of concerns in CS1/CS2. SIGCSE Bull., 38(1):491-494, 2006.
[19] W. Savitch. Absolute Java.
Pearson Education, 3rd edition, 2008.
[20] A. Schmolitzky. "Objects first, interfaces next" or interfaces before inheritance. In OOPSLA '04: Companion to the 19th annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications, pages 64-67, New York, NY, USA, 2004. ACM.
[21] M. Sitaraman, B. W. Weide, and W. F. Ogden. On the practical need for abstraction relations to verify abstract data type representations. IEEE Trans. Softw. Eng., 23(3):157-170, 1997.
[22] The Joint Task Force on Computing Curricula. Computing curricula 2001. Journal on Educational Resources in Computing (JERIC), 1(3es):1-240, 2001.
https://www.yumpu.com/en/document/view/52390463/interfaces-first-and-foremost-with-java-moravian-college-
CC-MAIN-2019-43
en
refinedweb
Asked by: AccessViolationException & Assembly Load & Assembly file name

Question - Hi, I have an AccessViolationException, which always happens when I run my application in the installation directory. As I cannot reproduce the problem under the Visual Studio debug directory, I cannot debug it. When I receive the exception the application terminates immediately, so there is no chance to connect a debugger. The strangest thing is that I only see this problem in the release directory with the original executable name. If I change the name of the executable or if I install it to another directory, the problem suddenly disappears. I couldn't find a relation between the executable's file name (or path) and the AccessViolationException. Maybe you guys have at least some idea about what is happening. Here are the details.
- The application consists of two parts, the Launcher and the main executable, encrypted in a binary file.
- The Launcher decodes the main file, loads it as a raw assembly by using Assembly.Load(byte[]), and then invokes Main().
- The assembly names of the Launcher and the main executable are the same; they are generated from the same source tree and use the same strong key.
- The problem happens when the application performs a P/Invoke operation, where it gets the icons associated with the file types.

The following is how I get the Icons using P/Invoke:

; }
public class Shell {
    [DllImport("shell32.dll", EntryPoint = "SHGetFileInfo", CharSet = CharSet.Auto)]
    public static extern IntPtr SHGetFileInfo(string pszPath, uint dwFileAttributes, ref SHFileInfo psfi, uint cbSizeFileInfo, SHGFIFlags uFlags);

    public static Icon GetIcon(string path, IconSize size, bool copy) {
        SHFileInfo shinfo = new SHFileInfo();
        IntPtr hSuccess = GetIcon(path, size, ref shinfo);
        if (hSuccess != IntPtr.Zero) {
            Icon icon;
            if (copy) {
                icon = (Icon)Icon.FromHandle(shinfo.hIcon).Clone();
                Win32.DestroyIcon(shinfo.hIcon);
            } else {
                icon = Icon.FromHandle(shinfo.hIcon);
            }
            return icon;
        } else {
            return null;
        }
    }

    public static IntPtr GetIcon(string path, IconSize size, ref SHFileInfo shinfo) {
        SHGFIFlags flags;
        if (size == IconSize.Small) {
            flags = SHGFIFlags.SmallIcon | SHGFIFlags.Icon;
        } else {
            flags = SHGFIFlags.LargeIcon | SHGFIFlags.Icon;
        }
        return SHGetFileInfo(path, 0, ref shinfo, (uint)Marshal.SizeOf(shinfo), flags);
    }
}

Icon GetFileIcon(string path)
{
    //SHFileInfo fi = new SHFileInfo();
    //IntPtr iconPtr =
    return Shell.GetIcon(path, IconSize.Large, true);
    //return System.Drawing.Icon.FromHandle(fi.hIcon);
}

Thanks in advance, Onur

All replies

- The 2nd member of SHFileInfo is an int, not IntPtr. SHGetFileInfo() doesn't return an IntPtr. You'll leak whenever you pass copy=false. There is no loading context for the main assembly since you loaded it with Load(byte[]); there'll be trouble when that main assembly has dependent assemblies. That has a causal link to your problem with the installation directory. Do weigh the cost of creating a maintenance headache against the unlikely odds of anybody ever actually disassembling your code. .NET obfuscators are a dime a dozen. Hans Passant.

- Hi Nobugz, Thanks for the info. I am going to fix the errors and try again. Neither the main assembly nor the launcher has dependent assemblies. They are compiled using the command line compiler as single assemblies. I use the same assembly name for both the launcher and the main app. And as they are from the same sourcebase and namespace, they both have objects with the same names. Do you think it may cause any problems? I am already using an Obfuscator.
Encryption is the secondary protection :) Of course I know that there is no uncrackable code. I am just trying to make things more complex for hackers. Regards, Onur

Hi, You're right about the hacking issue :|. I checked the fusion log. There are no binding errors. I also ran the exe in the CLR Debugger, but this time the exception I got was ExecutionEngineException. I couldn't load the pdb file for the main module symbols, the module that I load by using Assembly.Load(). The debugger complains that the PDB file is incompatible, even though it matches. The CLR debugger cannot step into the loaded module from the launcher at the point of invocation. By the way, the problem disappears when I compile the main module with debugging enabled, but I don't know whether that is just a coincidence, like the path problem above.

If we forget all the stuff above for a moment:
- What should I watch out for when loading another exe by using Assembly.Load(byte[])?
- Are there any precautions that we should take when doing P/Invoke inside the loaded assembly?

So as you see I am stuck. Any suggestions are appreciated. Thanks, Onur
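For context, the launcher pattern described in this thread boils down to something like the sketch below. This is a hypothetical illustration only; Decrypt and the payload file name stand in for the poster's own routine and are not from the thread:

using System;
using System.IO;
using System.Reflection;

class Launcher
{
    static void Main(string[] args)
    {
        // decode the encrypted main executable into raw bytes (Decrypt is a placeholder)
        byte[] raw = Decrypt(File.ReadAllBytes("payload.bin"));

        // load it with no file location / loading context, as described above
        Assembly asm = Assembly.Load(raw);

        // invoke the payload's entry point
        MethodInfo entry = asm.EntryPoint;
        object[] invokeArgs = entry.GetParameters().Length == 0 ? null : new object[] { args };
        entry.Invoke(null, invokeArgs);
    }

    static byte[] Decrypt(byte[] data) { /* poster's own decryption goes here */ return data; }
}

Because Assembly.Load(byte[]) gives the assembly no location, anything the runtime resolves relative to the assembly (dependencies, satellite assemblies, probing paths) behaves differently than for a normally installed exe, which is the direction Hans Passant's reply points in.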
https://social.msdn.microsoft.com/Forums/en-US/5e0cc793-3c96-418b-9417-d535daa9e1e2/accessvioalationexception-assembly-load-assembly-file-name?forum=clr
CC-MAIN-2020-45
en
refinedweb
Delete files of a particular type in C++

In this tutorial, we will learn how to delete files of a particular type in C++. Many times, we need to delete multiple files of the same extension, but deleting so many files by selecting them one by one is quite tedious. So through a simple C++ program, we can do it easily. We will delete all the files of a given extension present in a particular directory. So let's start learning the commands to delete all files of the same extension.

System command to delete files of a particular type in C++
To delete files of a particular extension, we need to call the system() function. It executes a system command to delete the files of a given extension. The command depends on the operating system which you are using. So, we will learn how to delete files of a particular extension on two systems.
- Linux operating system
- Windows operating system

In the Linux operating system, the command to delete all files of a particular extension is –

rm *.file_extension

For example –

rm *.txt

This command will delete all the text files from the current directory.

In the Windows operating system, the command to delete all files of a particular extension is –

del *.file_extension

For example –

del *.jpeg

This command will delete all the jpeg image files from the current directory.

Command creation using C++ string
So, now we know the command. We will create a command for the Linux operating system. We have to take the extension from the user, concatenate the extension into our command, and then pass this command to the system() function as an argument. The code snippet to create the command is –

char extension[10], cmd[50];
cout<<"\nENTER EXTENSION OF FILES : ";
cin>>extension;
strcpy(cmd, "rm *.");
strcat(cmd, extension);

Program to delete files of a particular extension using C++
So, the C++ program which deletes all the files of a particular extension is given below. This program executes a system command of the Linux operating system.

#include<iostream>
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
using namespace std;

int main()
{
    char extension[10], cmd[50];
    cout<<"\nENTER EXTENSION OF FILES : ";
    cin>>extension;
    strcpy(cmd, "rm *.");
    strcat(cmd, extension);
    system(cmd);
    return 0;
}

So, this is the program that deletes all the files of the extension entered by the user. The system() function executes a system command which deletes the desired files.

C++ program output
After executing this C++ program, the files of the given extension get deleted from the directory where the program is stored. For your better understanding, I will show you the following –
- the directory files before deletion
- the output of the terminal
- the directory files after executing this C++ program

Directory before executing program
Before executing this program the directory files are –
I have highlighted the text files which are to be deleted after executing the program. Now let's execute the program.

The output of the Linux terminal
After executing this C++ program in the Linux operating system you will get the following output –

siddharth@siddharth-Lenovo-Y520-15IKBN:~/intern$ g++ filedel.cpp
siddharth@siddharth-Lenovo-Y520-15IKBN:~/intern$ ./a.out
ENTER EXTENSION OF FILES : txt
siddharth@siddharth-Lenovo-Y520-15IKBN:~/intern$

Directory after executing program
After executing this program, all the text files are deleted from the directory.
Now, the directory contents are –
So, you can see that there is no text file present in this directory after executing the C++ program.
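As a side note, on C++17 and later the same task can be done portably without shelling out at all. This variant is my own sketch, not part of the original tutorial; it also avoids passing raw user input to a shell:

#include <filesystem>
#include <iostream>
#include <string>

namespace fs = std::filesystem;

int main()
{
    std::string ext;
    std::cout << "ENTER EXTENSION OF FILES : ";
    std::cin >> ext;
    for (const auto& entry : fs::directory_iterator(fs::current_path())) {
        // compare the file's extension (which includes the dot) with the user input
        if (entry.is_regular_file() && entry.path().extension() == "." + ext) {
            fs::remove(entry.path());  // delete the matching file
        }
    }
    return 0;
}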
https://www.codespeedy.com/delete-files-of-a-particular-type-in-cpp/
CC-MAIN-2020-45
en
refinedweb
In this article, we will learn all about Dapper in ASP.NET Core and make a small implementation to understand how it works. Let's not limit it just to Dapper: we will build an application that follows a very simple and clean architecture. In this implementation we will try to understand the Repository Pattern and Unit of Work as well. Everything put together, this article helps you to understand how Dapper can be used in an ASP.NET Core application following the Repository Pattern and Unit of Work. Here is the source code of the entire implementation. Let's get started.

What is Dapper?

Dapper is a simple Object Mapping Framework or a Micro-ORM that helps us to map the data from the result of an SQL query to a .NET class efficiently. It would be as simple as executing a SQL SELECT statement using the SQL client object and returning the result as a mapped domain C# class. It's more like an AutoMapper for the SQL world. This powerful ORM was built by the folks at Stack Overflow and is definitely faster at querying data when compared to the performance of Entity Framework. This is possible because Dapper works directly with the raw SQL, so the overhead is quite low. This boosts the performance of Dapper.

Implementing Dapper in ASP.NET Core

We'll build a simple ASP.NET Core 3.1 WebAPI following a clean architecture, the Repository Pattern, and Unit of Work. At the data access layer, we will be using Dapper. I will be using Visual Studio 2019 Community Edition as my IDE, and MS-SQL / SQL Server Management Studio as my RDBMS.

Creating the MS-SQL Database and Table

Let's create our database and related table first. Open up SQL Server Management Studio and connect to your local SQL Server. I will add a new database and name it 'ProductManagementDB'. For this demonstration, I will create a simple Product table with columns like Id, Name, Description and so on. Set Id as the primary key. With Id as the selection, scroll down and enable the 'Is Identity' property. This makes your Id column auto-increment at every insert operation. Here is the final schema for the Product table. Alternatively, you can execute this script to create the required table as well.

CREATE TABLE [dbo].[Products](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Name] [nvarchar](50) NOT NULL,
    [Barcode] [nvarchar](50) NOT NULL,
    [Description] [nvarchar](max) NOT NULL,
    [Rate] [decimal](18, 2) NOT NULL,
    [AddedOn] [datetime] NOT NULL,
    [ModifiedOn] [datetime] NULL,
    CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Getting Started with the ASP.NET Core WebApi Project

With the database and table done, let's proceed with creating a new ASP.NET Core 3.1 WebAPI project. I am naming the solution and project Dapper.WebApi. Here is what we will build: a real simple WebAPI that performs CRUD operations using Dapper and the Repository Pattern / Unit of Work. We will also follow some clean architecture, so that we learn some good practices along with the implementation. I will explain the architecture that we will follow. So basically, we will have 3 main layers:

- Core and Application – all the interfaces and domain models live here.
- Infrastructure – in this scenario, Dapper will be present here, along with implementations of the repositories and other interfaces.
- WebApi – API controllers to access the repositories.
If you need more in-depth knowledge about clean architecture in ASP.NET Core, I have written a highly detailed article on Onion Architecture in ASP.NET Core 3.1 using the CQRS Pattern, along with the complete source code.

Coming back to our implementation, let's now add a new .NET Core library project and name it Dapper.Core. Here, add a new folder Entities and create a new class in this folder. Since we are developing a simple CRUD application for products, let's name this class Product.

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public string Barcode { get; set; }
    public decimal Rate { get; set; }
    public DateTime AddedOn { get; set; }
    public DateTime ModifiedOn { get; set; }
}

This is everything you need in this Dapper.Core project. Please note that since our application is simple, the content of the Core library is also minimal. But in this way, we are also learning a simple implementation of the Onion Architecture, yeah? Remember, the Core layer is not going to depend on any other project / layer. This is very important while following Onion Architecture.

Next, add another .NET Core library project and name it Dapper.Application. This is the Application layer, which has the interfaces defined. So what will happen is, we define the interfaces for the repositories here, and implement these interfaces at another layer that is associated with data access (in our case, Dapper). Create a new folder Interfaces, and add a new interface, IGenericRepository.

public interface IGenericRepository<T> where T : class
{
    Task<T> GetByIdAsync(int id);
    Task<IReadOnlyList<T>> GetAllAsync();
    Task<int> AddAsync(T entity);
    Task<int> UpdateAsync(T entity);
    Task<int> DeleteAsync(int id);
}

As mentioned earlier, we will be using the Repository Pattern along with Unit of Work in our implementation. In IGenericRepository, we are building a generic definition for the repository pattern. These include the most commonly used CRUD operations like GetById, GetAll, Add, Update and Delete.

Add a reference to the Core project from the Application project. The Application project always depends on the Core project only. Nothing else.

Now that we have a generic interface, let's build the product-specific repository interface. Add a new interface and name it IProductRepository. We will inherit the IGenericRepository interface with T as Product.

public interface IProductRepository : IGenericRepository<Product>
{
}

Finally, add the last interface, IUnitOfWork.

public interface IUnitOfWork
{
    IProductRepository Products { get; }
}

That's everything you need to add in the Dapper.Application project. Now, we need to define the connection string of our database, so that the application can connect to our database for performing CRUD operations. Open up the appsettings.json file in the Dapper.WebApi project and add the following:

"ConnectionStrings": {
  "DefaultConnection": "Data Source=DESKTOP-QCM5AL0;Initial Catalog=ProductManagementDB;Integrated Security=True;MultipleActiveResultSets=True"
}

Make sure that you add in your actual connection string. With that out of the way, create another .NET Core class library project, Dapper.Infrastructure. Here, we will add the implementation of the interfaces. Once the project is created, let's install the required packages to the Dapper.Infrastructure project.
Install-Package Dapper
Install-Package Microsoft.Extensions.Configuration
Install-Package Microsoft.Extensions.DependencyInjection.Abstractions
Install-Package System.Data.SqlClient

Next, add a new folder, Repositories, in the Dapper.Infrastructure project. Add a reference to the Application project from the Infrastructure project. You can slowly get the idea of this entire architecture, right? Let's first implement the IProductRepository interface. Create a new class, ProductRepository.cs:

public class ProductRepository : IProductRepository
{
    private readonly IConfiguration configuration;
    public ProductRepository(IConfiguration configuration)
    {
        this.configuration = configuration;
    }
    public async Task<int> AddAsync(Product entity)
    {
        entity.AddedOn = DateTime.Now;
        var sql = "Insert into Products (Name,Description,Barcode,Rate,AddedOn) VALUES (@Name,@Description,@Barcode,@Rate,@AddedOn)";
        using (var connection = new SqlConnection(configuration.GetConnectionString("DefaultConnection")))
        {
            connection.Open();
            var result = await connection.ExecuteAsync(sql, entity);
            return result;
        }
    }
    public async Task<int> DeleteAsync(int id)
    {
        var sql = "DELETE FROM Products WHERE Id = @Id";
        using (var connection = new SqlConnection(configuration.GetConnectionString("DefaultConnection")))
        {
            connection.Open();
            var result = await connection.ExecuteAsync(sql, new { Id = id });
            return result;
        }
    }
    public async Task<IReadOnlyList<Product>> GetAllAsync()
    {
        var sql = "SELECT * FROM Products";
        using (var connection = new SqlConnection(configuration.GetConnectionString("DefaultConnection")))
        {
            connection.Open();
            var result = await connection.QueryAsync<Product>(sql);
            return result.ToList();
        }
    }
    public async Task<Product> GetByIdAsync(int id)
    {
        var sql = "SELECT * FROM Products WHERE Id = @Id";
        using (var connection = new SqlConnection(configuration.GetConnectionString("DefaultConnection")))
        {
            connection.Open();
            var result = await connection.QuerySingleOrDefaultAsync<Product>(sql, new { Id = id });
            return result;
        }
    }
    public async Task<int> UpdateAsync(Product entity)
    {
        entity.ModifiedOn = DateTime.Now;
        var sql = "UPDATE Products SET Name = @Name, Description = @Description, Barcode = @Barcode, Rate = @Rate, ModifiedOn = @ModifiedOn WHERE Id = @Id";
        using (var connection = new SqlConnection(configuration.GetConnectionString("DefaultConnection")))
        {
            connection.Open();
            var result = await connection.ExecuteAsync(sql, entity);
            return result;
        }
    }
}

Line 3 – We added the connection string to the appsettings.json, remember? We need to access that string in another project, Dapper.Infrastructure. Hence we use the IConfiguration interface to make the connection string available throughout the application.
Line 6 – Injecting the IConfiguration into the constructor of the ProductRepository.
Line 11 – A straightforward SQL command to insert data into the Products table.
Line 12 – Using the connection string from the appsettings.json, we open a new SQL connection.
Line 15 – We pass the product object and the SQL command to the ExecuteAsync function.
Similarly, we write the other CRUD operations.
Line 29 – Function to get all products.
Line 35 – Here we use Dapper to map all the products from the database to a list of the Product class. Here we use the QueryAsync method. We use this for the GetById function as well.

Next, let's implement the IUnitOfWork. Create a new class, UnitOfWork, and inherit from the interface IUnitOfWork.
public class UnitOfWork : IUnitOfWork
{
    public UnitOfWork(IProductRepository productRepository)
    {
        Products = productRepository;
    }
    public IProductRepository Products { get; }
}

Remember, we have a few interfaces and their implementations. Our next step is to register these interfaces with their implementations in the service container of ASP.NET Core. Add a new class in the Infrastructure project and name it ServiceRegistration.

public static class ServiceRegistration
{
    public static void AddInfrastructure(this IServiceCollection services)
    {
        services.AddTransient<IProductRepository, ProductRepository>();
        services.AddTransient<IUnitOfWork, UnitOfWork>();
    }
}

This is more or less an extension method for the IServiceCollection. Here we register the interfaces with the concrete classes. Finally, go to the Startup.cs ConfigureServices method in the WebApi project and call the extension method we just made:

services.AddInfrastructure();

Finally, let's wire up the repository to the controller. Ideally, you may need a service layer in between the controller and the repository classes, but it would be overkill for this implementation. Let's keep things simple and proceed. In the WebApi project, add a new controller under the Controllers folder. Let's name it ProductController.

[Route("api/[controller]")]
[ApiController]
public class ProductController : ControllerBase
{
    private readonly IUnitOfWork unitOfWork;
    public ProductController(IUnitOfWork unitOfWork)
    {
        this.unitOfWork = unitOfWork;
    }
    [HttpGet]
    public async Task<IActionResult> GetAll()
    {
        var data = await unitOfWork.Products.GetAllAsync();
        return Ok(data);
    }
    [HttpGet("{id}")]
    public async Task<IActionResult> GetById(int id)
    {
        var data = await unitOfWork.Products.GetByIdAsync(id);
        if (data == null) return Ok();
        return Ok(data);
    }
    [HttpPost]
    public async Task<IActionResult> Add(Product product)
    {
        var data = await unitOfWork.Products.AddAsync(product);
        return Ok(data);
    }
    [HttpDelete]
    public async Task<IActionResult> Delete(int id)
    {
        var data = await unitOfWork.Products.DeleteAsync(id);
        return Ok(data);
    }
    [HttpPut]
    public async Task<IActionResult> Update(Product product)
    {
        var data = await unitOfWork.Products.UpdateAsync(product);
        return Ok(data);
    }
}

Here we just define the IUnitOfWork and inject it into the controller's constructor. After that, we create separate action methods for each CRUD operation and use the unit of work object. That's it for the implementation. Let's test it.

Testing with Swagger

Swagger is the favourite API testing tool for nearly every developer. It makes your life so easy. Let's add Swagger to our WebAPI and test our implementation so far. First, install the required packages to the WebApi project.

Install-Package Swashbuckle.AspNetCore
Install-Package Swashbuckle.AspNetCore.Swagger

Open the Startup.cs ConfigureServices method and add the following:

services.AddSwaggerGen(c =>
{
    c.IncludeXmlComments(string.Format(@"{0}\Dapper.WebApi.xml", System.AppDomain.CurrentDomain.BaseDirectory));
    c.SwaggerDoc("v1", new OpenApiInfo { Version = "v1", Title = "Dapper - WebApi", });
});

Next, in the Configure method, let's add the Swagger middleware:

app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "Dapper.WebApi");
});

Finally, open up your WebApi project properties, enable the XML documentation file, and give the same file path. Now, build the application and run it. Navigate to localhost:xxxx/swagger. This is your Swagger UI. Here you get to see all the available endpoints of your API.
Pretty neat, yeah? Let's add a new product. Click on the POST tab, enter your Product object, and click Execute. It's that easy to add a new record through Swagger, which makes the testing process blazing fast. I added a couple of products. Now let's see all the added products: go to the GET tab and click Execute. Here are all the added products. If you want to get a product by id, click the GET ({id}) tab and enter the required id. I will leave the remaining endpoints for you to test. With that, let's wind up this article.

Summary

In this detailed article, we have learnt much more than just Dapper in ASP.NET Core. We were able to follow a clean architecture to organize the code, implement the Repository Pattern along with Unit of Work, implement Swagger for efficient testing, and much more.

Dapper looks like a great alternative to EF. However, I feel one has to have a great grasp of writing plain SQL statements when using it. Also, how will it work with lazy loading and eager loading? Great article still, Mukesh.

I really don't think Dapper can be an alternative to EF Core, nor can EF Core replace Dapper. An ideal use case would be to use both these ORMs in our applications. Dapper is much faster than EF Core when querying complex and huge joins. EF Core has its own set of features where it is the KING. Using both these ORMs side by side would take ASP.NET Core applications to the next level! Thanks for the regular support! 🙂 Regards

We tend to use Dapper for parts where reading needs to be tuned, since it's faster than EF for pulling out data. EF is better for writes since it will handle all rollbacks and tracking for you; there's much less hassle and boilerplate code needed. However, I tend to always go with EF for personal projects since I love the migrations and code-first approach.

Thanks for this amazing work. You added so many posts within 2 weeks. I need to go through everything this weekend. 😀 Could you please post articles related to logging, caching practices, etc.?

Hey Arjunan, thanks a lot for the feedback! I have already written an article on Logging with Serilog. Caching is in the list already; will post soon. Thanks and Regards

Please make a downloadable version of the article for people who don't like reading online in real-time. Thanks.

Thanks for the suggestion. I will look into it. Regards

Just download it or print it. It's not hard.

If you like working with SQL, SQL+ is the best way to do what you are doing. It's nearly twice as fast as Dapper and a lot less work.

Have never heard of SQL+. Thanks for the tip! Will give it a look. Regards

Looking at your implementation of the UnitOfWork, what happens when the number of entities increases? Are you going to pass all of them through the constructor in the class?

Hi, it actually depends on the scope of the project. If the project is somewhat a smaller one, injecting the repositories into the constructor is an easier way to get around it. But when the repository count keeps increasing, it is better to separate the repositories away from the unit of work and inject both the UoW and the repository in the constructor of the calling class. Thanks and Regards.

Isn't the point of Unit of Work to keep track of all changes and then do a complete Save call with rollback management? Like if you have the repositories for products, orders and customers in the same UoW and need to do modifications for all of them inside that scope?
EF handles this for you, but with Dapper I assume you would manually have to manage transactions and do rollbacks? Yes, that is correct. I thought that the article would give some interesting ideas on that matter. The UoW in the article is not really a UoW; it just seems to be a collection of repositories.

Hi, for a clearer implementation of the UoW / Repository pattern, please follow this article. Regards

Hi. This is great. If I want to use EF Core, use connect = context.Database.GetConnection(), then use Dapper connect.Query and use UnitOfWork for the DbContext, is this okay?

Hi, thanks for the feedback. I wouldn't use it that way, but it depends on the developer and the exact scenario you are in. The basic idea is to decouple everything. With this approach, you are forcing Dapper to depend on EF Core, which isn't actually needed. Try to make the connection centralized. But, yeah, you could still use this if that's what your application wants and it doesn't cause an issue later on down the road. Thanks and regards

I can understand that after the use of 'using' the connections will close. Isn't it better to ensure that you close every connection in your repository methods, or at least use the Dispose pattern?

How do we handle common classes and objects in the above layers? (I mean, which layer is used for common functionality?)

Which layer can I use to add Identity and JWT to my project?

It gives an error when I try to add multiple repositories, like UserRepository and CompanyRepository.

Thanks Mukesh for sharing this post.
https://www.codewithmukesh.com/blog/dapper-in-aspnet-core/
CC-MAIN-2020-45
en
refinedweb
Warehouse Apps

684 apps found in category: Warehouse

This module allows your employees/users to create purchase requisitions. Product/Material Purchase Requisitions by Employees/Users.

Import data app for importing stock: import inventory adjustment, import product stock, import inventory with lot, import stock with lot, import serial, import inventory data, import stock balance, import stock with serial.
https://apps.odoo.com/apps/modules/category/Warehouse/browse?amp%3Border=Relevance&amp%3Bamp%3Bold_category=&amp%3Bamp%3Bsearch=ecosoft&amp%3Bamp%3Bversion=&amp%3Bseries=12.0&amp%3Bseries=
CC-MAIN-2020-45
en
refinedweb
Introduction to SYCL

Hello World

This first exercise will guide you through the steps involved in writing your first SYCL application. We'll work through the equivalent of "hello world" for parallel programming, a vector add. This will add two vectors together, but crucially SYCL will enable this addition to be done in parallel.

Including the SYCL Header File

The first line in every SYCL application is to include the header file CL/sycl.hpp.

#include <CL/sycl.hpp>

Setup Host Storage

In main, we begin by setting up host storage for the data that we want to operate on. Our goal is to compute c = a + b, where the variables are vectors. To help us achieve this, SYCL provides vector types; we use float4, which is just vec<float, 4>.

sycl::float4 a = { 1.0, 2.0, 3.0, 4.0 };
sycl::float4 b = { 4.0, 3.0, 2.0, 1.0 };
sycl::float4 c = { 0.0, 0.0, 0.0, 0.0 };

Selecting Your Device

In SYCL there are different ways to configure and select the devices we want to use. SYCL provides a default selector that tries to select the most appropriate device in your system. It's possible to use a custom selector, but since we only have one device we use the default selector.

cl::sycl::default_selector selector;

Setting up a SYCL Queue

In order to send our tasks to be scheduled and executed on the target device we need to use a SYCL queue. We set this up and pass it our selector so that it knows what device to select when running the tasks.

cl::sycl::queue myQueue(selector);

Setup Device Storage

To make the host data available on the device we wrap each variable in a SYCL buffer, which manages the storage and the synchronization between host and device:

cl::sycl::buffer<cl::sycl::float4, 1> a_sycl(&a, cl::sycl::range<1>(1));
cl::sycl::buffer<cl::sycl::float4, 1> b_sycl(&b, cl::sycl::range<1>(1));
cl::sycl::buffer<cl::sycl::float4, 1> c_sycl(&c, cl::sycl::range<1>(1));

Executing the Kernel

Creating a Command Group

Work is submitted to the queue as a command group: a function object that receives a cl::sycl::handler, through which we request accessors and invoke kernels.

myQueue.submit([&](cl::sycl::handler &cgh) {
  ...
});

Data Accessors

In order to read and write the buffers inside the kernel we request accessors. We passed ownership of our data to the buffers, so we can no longer use the float4 objects directly, and accessors are the only way to access data in buffer objects.

auto a_acc = a_sycl.get_access<sycl::access::mode::read>(cgh);
auto b_acc = b_sycl.get_access<sycl::access::mode::read>(cgh);
auto c_acc = c_sycl.get_access<sycl::access::mode::discard_write>(cgh);

The buffer::get_access(handler&) method takes an access mode argument. Here a and b are only read, while discard_write for c tells the runtime that the previous contents of c can be discarded.

Defining a Kernel Function

In SYCL there are various ways to define a kernel function that will execute on a device, depending on the kind of parallelism you want and the different features you require. The simplest of these is the cl::sycl::handler::single_task function, which takes a single parameter, being a C++ function object, and executes that function object exactly once on the device. The C++ function object does not take any parameters; however, it is important to note that if the function object is a lambda it must capture by value, and if it is a struct or class it must define all members as value members.

cgh.single_task<class vector_addition>([=] () {
  c_acc[0] = a_acc[0] + b_acc[0];
});
});

Cleaning Up

One of the features of SYCL is that it makes use of C++ RAII (resource acquisition is initialisation), meaning that there is no explicit cleanup: everything is done via the SYCL object destructors. Enclosing the buffers in a scope block is enough to guarantee that, at the end of the scope, their destructors run and the results are copied back to the host storage:

{
  ...
}
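Putting it all together, here is a minimal complete program assembled from the snippets above. The namespace alias and the final print-out are additions for this sketch, which assumes a SYCL 1.2.1 implementation where cl::sycl is the root namespace:

#include <CL/sycl.hpp>
#include <iostream>

namespace sycl = cl::sycl;  // alias so the shorter sycl:: names used above resolve

int main() {
  sycl::float4 a = { 1.0, 2.0, 3.0, 4.0 };
  sycl::float4 b = { 4.0, 3.0, 2.0, 1.0 };
  sycl::float4 c = { 0.0, 0.0, 0.0, 0.0 };

  sycl::default_selector selector;
  sycl::queue myQueue(selector);

  {
    // Buffers take ownership of the host data for the duration of this scope.
    sycl::buffer<sycl::float4, 1> a_sycl(&a, sycl::range<1>(1));
    sycl::buffer<sycl::float4, 1> b_sycl(&b, sycl::range<1>(1));
    sycl::buffer<sycl::float4, 1> c_sycl(&c, sycl::range<1>(1));

    myQueue.submit([&](sycl::handler &cgh) {
      auto a_acc = a_sycl.get_access<sycl::access::mode::read>(cgh);
      auto b_acc = b_sycl.get_access<sycl::access::mode::read>(cgh);
      auto c_acc = c_sycl.get_access<sycl::access::mode::discard_write>(cgh);

      cgh.single_task<class vector_addition>([=] () {
        c_acc[0] = a_acc[0] + b_acc[0];
      });
    });
  } // buffer destructors run here, copying the result back into c

  std::cout << "c = " << c.x() << ", " << c.y() << ", "
            << c.z() << ", " << c.w() << std::endl;
  return 0;
}

If everything works, the program prints c = 5, 5, 5, 5, since each lane of the two vectors sums to five.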
https://tech.io/playgrounds/48226/introduction-to-sycl/hello-world
CC-MAIN-2020-45
en
refinedweb
/etc/portage/patches

User patches provide a way for users to apply patches to package source code if the ebuild provides this feature. Ebuilds cannot be patched by this. This is useful for applying upstream patches to unresolved bugs and for the rare cases of site-specific patches.

Precondition

EAPI 5 and older:
- The ebuild must call the epatch_user function explicitly.
- The ebuild must inherit an eclass and rely on its default implementation of the src_prepare function.

EAPI 6 and greater:
User patching is supported as a requirement of EAPI 6 and greater. Failure to support user patching results in an error.

Adding user patches

First choose the location for the patches. Granularity can be determined by package name and the version(s) for which the patch is intended. Use the following locations and optionally append :${SLOT} to any of them:

- /etc/portage/patches/${CATEGORY}/${P}
- /etc/portage/patches/${CATEGORY}/${PN}
- /etc/portage/patches/${CATEGORY}/${P}-${PR}

Examples:
- /etc/portage/patches/dev-lang/python
- /etc/portage/patches/dev-lang/python:3.4
- /etc/portage/patches/dev-lang/python-3.4.2
- /etc/portage/patches/dev-lang/python-3.3.5-r1

Example

An example shows how to easily apply an upstream patch for CVE-2017-8934 of x11-misc/pcmanfm. The affected version of that package is 1.2.5, and upstream provides the patch for it but has not yet released a new version. For applying the patch from upstream, the appropriate directory needs to be created:

root # mkdir -p /etc/portage/patches/x11-misc/pcmanfm-1.2.5

Next, an arbitrarily named file with suffix .patch or .diff has to be dropped here with the content provided from upstream:

index 8c2049a..876f7f3 100644
--- a/NEWS
+++ b/NEWS
@@ -1,3 +1,7 @@
+* Fixed potential access violation, use runtime user dir instead of tmp dir
+  for single instance socket.
+
+
 Changes on 1.2.5 since 1.2.4:

 * Removed options to Cut, Remove and Rename from context menu on mounted
diff --git a/src/single-inst.c b/src/single-inst.c
index 62c37b3..aaf84ab 100644
--- a/src/single-inst.c
+++ b/src/single-inst.c
@@ -2,7 +2,7 @@
  * single-inst.c: simple IPC mechanism for single instance app
  *
  * Copyright 2010 Hong Jen Yee (PCMan) <[email protected]>
- * Copyright 2012 Andriy Grytsenko (LStranger) <[email protected]>
+ * Copyright 2012-2017 Andriy Grytsenko (LStranger) <[email protected]>
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -404,11 +404,16 @@ static void get_socket_name(SingleInstData* data, char* buf, int len)
     }
     else
         dpynum = 0;
+#if GLIB_CHECK_VERSION(2, 28, 0)
+    g_snprintf(buf, len, "%s/%s-socket-%s-%d", g_get_user_runtime_dir(),
+               data->prog_name, host ? host : "", dpynum);
+#else
     g_snprintf(buf, len, "%s/.%s-socket-%s-%d-%s", g_get_tmp_dir(),
                data->prog_name, host ? host : "", dpynum,
                g_get_user_name());
+#endif
 }

For testing, step into the package's ebuild directory and run ebuild pcmanfm-1.2.5.ebuild clean prepare:

user $ cd $(portageq get_repo_path / gentoo)/x11-misc/pcmanfm
user $ ebuild pcmanfm-1.2.5.ebuild clean prepare
* pcmanfm-1.2.5.tar.xz SHA256 SHA512 WHIRLPOOL size ;-) ... [ ok ]
* checking ebuild checksums ;-) ... [ ok ]
* checking auxfile checksums ;-) ... [ ok ]
* checking miscfile checksums ;-) ... [ ok ]
>>> Unpacking source...
>>> Unpacking pcmanfm-1.2.5.tar.xz to /var/tmp/portage/x11-misc/pcmanfm-1.2.5/work
>>> Source unpacked in /var/tmp/portage/x11-misc/pcmanfm-1.2.5/work
>>> Preparing source in /var/tmp/portage/x11-misc/pcmanfm-1.2.5/work/pcmanfm-1.2.5 ...
* Applying patches from /etc/portage/patches/x11-misc/pcmanfm-1.2.5 ...
* CVE-2017-8934.patch ... [ ok ]
* User patches applied.
>>> Source prepared.

With the message "User patches applied." all is good, and the package needs to be re-emerged as normal. Once the patch gets merged into the ebuild repository, do not forget to remove it from the /etc/portage/patches directory. Otherwise, compiling the ebuild might fail next time.

Using a git directory as a source of patches

Instead of creating the directory, a symlink can be created to a git directory on the system.

root # mkdir -p /etc/portage/patches/sys-libs && ln -s /home/user/projects/glibc /etc/portage/patches/sys-libs/glibc

When using userpriv as a FEATURES value in Portage (e.g. in /etc/portage/make.conf), Portage drops root privileges to portage:portage, which means that the folder the symlink points to must be accessible by the user or group portage; otherwise the patches will be silently ignored and not applied (the file epatch_user.log contains the string none). In this case, all the folders of /home/user/projects/glibc are already accessible due to o+rx permissions, but in the case of root using the path /root/projects/glibc, /root, unlike /home, is inaccessible due to its u+rx permissions.

Now, in the git directory, perform the usual work. After finishing, remove all patches from the previous run and use git format-patch to create a patchset from the branch based on another known branch.

user $ rm -f *.patch && git format-patch origin/master

This solution relies on the fact that only files ending with .patch are processed in the patch directory.

Enabling /etc/portage/patches for all ebuilds

If an ebuild has EAPI=5 or older and does not call epatch_user, but user patches still need to be applied, it is possible to use /etc/portage/bashrc and hooks provided by Portage. For details see the /etc/portage/bashrc article.

Enabling epatch_user for a single package

An example is shown in /etc/portage/package.env#Example_3:_Amending_an_ebuild_function

See also
- Using epatch_user (AMD64 Handbook)
- patches - describes how to create a source code patch.
- GLEP 25

External resources
- eutils.eclass: Disable epatch_user in EAPI 6.
- EAPI 6 has eapply_user which should be used instead.
- The Ultimate Guide to EAPI 6
- Patching with epatch
- Patching within ebuilds, from devmanual.gentoo.org
- How to write clean patches when not using git-format-patch.
https://wiki.gentoo.org/wiki/Epatch
CC-MAIN-2020-45
en
refinedweb
# Disable API/Database Did you know you could deploy your Redwood app without an API layer or database? Maybe you have a simple static site that doesn't need any external data, or you only need to digest a simple JSON data structure that changes infrequently. So infrequently that changing the data can mean just editing a plain text file and deploying your site again. Let's take a look at these scenarios and how you can get them working with Redwood. # Assumptions We assume you're deploying to Netlify in this recipe. Your mileage may vary for other providers or a custom build process. # Remove the /api directory Just delete the /api directory altogether and your app will still work in dev mode: rm -rf api You can also run yarn install to cleanup those packages that aren't used any more. # Turn off the API build process When it comes time to deploy, we need to let Netlify know that it shouldn't bother trying to look for any code to turn into AWS Lambda functions. Open up netlify.toml. We're going to comment out one line: [build] command = "yarn rw build" publish = "web/dist" # functions = "api/dist/functions" [dev] command = "yarn rw dev" [[redirects]] from = "/*" to = "/index.html" status = 200 If you just have a static site that doesn't need any data access at all (even our simple JSON file discussed above) then you're done! Keep reading to see how you can access a local data store that we'll deploy along with the web side of our app. # Local JSON Fetch Let's display a graph of the weather forecast for the week of Jan 30, 2017 in Moscow, Russia. If this seems like a strangely specific scenario it's because that's the example data we can quickly get from the OpenWeather API. Get the JSON data here or copy the following and save it to a file at web/public/forecast.json: { "cod": "200", "message": 0, "city": { "geoname_id": 524901, "name": "Moscow", "lat": 55.7522, "lon": 37.6156, "country": "RU", "iso2": "RU", "type": "city", "population": 0 }, "cnt": 7, "list": [ { "dt": 1485766800, "temp": { "day": 262.65, "min": 261.41, "max": 262.65, "night": 261.41, "eve": 262.65, "morn": 262.65 }, "pressure": 1024.53, "humidity": 76, "weather": [ { "id": 800, "main": "Clear", "description": "sky is clear", "icon": "01d" } ], "speed": 4.57, "deg": 225, "clouds": 0, "snow": 0.01 }, { "dt": 1485853200, "temp": { "day": 262.31, "min": 260.98, "max": 265.44, "night": 265.44, "eve": 264.18, "morn": 261.46 }, "pressure": 1018.1, "humidity": 91, "weather": [ { "id": 600, "main": "Snow", "description": "light snow", "icon": "13d" } ], "speed": 4.1, "deg": 249, "clouds": 88, "snow": 1.44 }, { "dt": 1485939600, "temp": { "day": 270.27, "min": 266.9, "max": 270.59, "night": 268.06, "eve": 269.66, "morn": 266.9 }, "pressure": 1010.85, "humidity": 92, "weather": [ { "id": 600, "main": "Snow", "description": "light snow", "icon": "13d" } ], "speed": 4.53, "deg": 298, "clouds": 64, "snow": 0.92 }, { "dt": 1486026000, "temp": { "day": 263.46, "min": 255.19, "max": 264.02, "night": 255.59, "eve": 259.68, "morn": 263.38 }, "pressure": 1019.32, "humidity": 84, "weather": [ { "id": 800, "main": "Clear", "description": "sky is clear", "icon": "01d" } ], "speed": 3.06, "deg": 344, "clouds": 0 }, { "dt": 1486112400, "temp": { "day": 265.69, "min": 256.55, "max": 266, "night": 256.55, "eve": 260.09, "morn": 266 }, "pressure": 1012.2, "humidity": 0, "weather": [ { "id": 600, "main": "Snow", "description": "light snow", "icon": "13d" } ], "speed": 7.35, "deg": 24, "clouds": 45, "snow": 0.21 }, { "dt": 1486198800, "temp": { 
"day": 259.95, "min": 254.73, "max": 259.95, "night": 257.13, "eve": 254.73, "morn": 257.02 }, "pressure": 1029.5, "humidity": 0, "weather": [ { "id": 800, "main": "Clear", "description": "sky is clear", "icon": "01d" } ], "speed": 2.6, "deg": 331, "clouds": 29 }, { "dt": 1486285200, "temp": { "day": 263.13, "min": 259.11, "max": 263.13, "night": 262.01, "eve": 261.32, "morn": 259.11 }, "pressure": 1023.21, "humidity": 0, "weather": [ { "id": 600, "main": "Snow", "description": "light snow", "icon": "13d" } ], "speed": 5.33, "deg": 234, "clouds": 46, "snow": 0.04 } ] } Any files that you put in web/public will be served by Netlify, skipping any build process. Next let's have a React component get that data remotely and then display it on a page. For this example we'll generate a homepage: yarn rw generate page home / Next we'll use the browser's builtin fetch() function to get the data and then we'll just dump it to the screen to make sure it works: import { useState, useEffect } from 'react' const HomePage = () => { const [forecast, setForecast] = useState({}) useEffect(() => { fetch('/forecast.json') .then((response) => response.json()) .then((json) => setForecast(json)) }, []) return <div>{JSON.stringify(forecast)}</div> } export default HomePage We use useState to keep track of the forecast data and useEffect to actually trigger the loading of the data when the component mounts. Now we just need a graph! Let's add chart.js for some simple graphing: yarn workspace web add chart.js Let's generate a sample graph: import { useState, useEffect, useRef } from 'react'import Chart from 'chart.js' const HomePage = () => { const chartRef = useRef() const [forecast, setForecast] = useState({}) useEffect(() => { fetch('/forecast.json') .then((response) => response.json()) .then((json) => setForecast(json)) }, []) useEffect(() => { new Chart(chartRef.current.getContext('2d'), { type: 'line', data: { labels: ['Jan', 'Feb', 'March'], datasets: [ { label: 'High', data: [86, 67, 91], }, { label: 'Low', data: [45, 43, 55], }, ], }, }) }, [forecast]) return <canvas ref={chartRef} />} export default HomePage If that looks good then all that's left is to transform the weather data JSON into the format that Chart.js wants. 
Here's the final HomePage, including a couple of functions to transform our data and display the dates properly:

import { useState, useEffect, useRef } from 'react'
import Chart from 'chart.js'

const MONTHS = [
  'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
  'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec',
]

const getDates = (forecast) => {
  return forecast.list.map((entry) => {
    const date = new Date(0)
    date.setUTCSeconds(entry.dt)
    return `${MONTHS[date.getMonth()]} ${date.getDate()}`
  })
}

const getTemps = (forecast) => {
  return [
    {
      label: 'High',
      data: forecast.list.map((entry) => kelvinToFahrenheit(entry.temp.max)),
      borderColor: 'red',
      backgroundColor: 'transparent',
    },
    {
      label: 'Low',
      data: forecast.list.map((entry) => kelvinToFahrenheit(entry.temp.min)),
      borderColor: 'blue',
      backgroundColor: 'transparent',
    },
  ]
}

const kelvinToFahrenheit = (temp) => {
  return ((temp - 273.15) * 9) / 5 + 32
}

const HomePage = () => {
  const chartRef = useRef()
  const [forecast, setForecast] = useState(null)

  useEffect(() => {
    fetch('/forecast.json')
      .then((response) => response.json())
      .then((json) => setForecast(json))
  }, [])

  useEffect(() => {
    if (forecast) {
      new Chart(chartRef.current.getContext('2d'), {
        type: 'line',
        data: {
          labels: getDates(forecast),
          datasets: getTemps(forecast),
        },
      })
    }
  }, [forecast])

  return <canvas ref={chartRef} />
}

export default HomePage

If you got all of that right, you should see the line chart of daily highs and lows rendered on the page. All that's left is to deploy it to the world!

# Wrapping Up

Although we think Redwood will make app developers' lives easier when they need to talk to a database or third party API, it can be used with static sites and even hybrid sites like this when you want to digest and display data, but from a static file at your own URL.
https://redwoodjs.com/cookbook/disable-api-database
CC-MAIN-2020-45
en
refinedweb
SUSI AI 5 Star Skill Rating System

For making a system more reliable and robust, continuous evaluation is quite important, and so is the server-side implementation behind it. A new Java class has been created for the API, FiveStarRateSkillService.java.

public class FiveStarRateSkillService extends AbstractAPIHandler implements APIHandler {

    private static final long serialVersionUID = 7947060716231250102L;

    @Override
    public BaseUserRole getMinimalBaseUserRole() {
        return BaseUserRole.USER;
    }

    @Override
    public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
        return null;
    }

    @Override
    public String getAPIPath() {
        return "/cms/rateSkill.json";
    }
    ...
}

The getMinimalBaseUserRole method tells the minimum user role required to access this servlet; it can be ADMIN, USER or ANONYMOUS. In our case it is USER: a user needs to be logged in to rate a skill on a scale of 1-5 stars. The API runs at the "/cms/rateSkill.json" endpoint, as returned by getAPIPath.

Next, create the serviceImpl method in the above class to handle the request from the client and respond to it.

1. Fetch the required query parameters and store them in variables. They include the skill model, group, language, skill name, and the stars that the user has given in the rating.

String skill_name = call.get("skill", null);
String skill_stars = call.get("stars", null);

2. Then check if the skill exists. If not, throw an exception. Otherwise, increment the count of the corresponding rating. The rating object has the keys one_star, two_star, three_star, four_star and five_star, each holding the count for that star rating.

if (skill_stars.equals("1")) {
    skillName.put("one_star", skillName.getInt("one_star") + 1 + "");
} else if (skill_stars.equals("2")) {
    skillName.put("two_star", skillName.getInt("two_star") + 1 + "");
} else if (skill_stars.equals("3")) {
    skillName.put("three_star", skillName.getInt("three_star") + 1 + "");
} else if (skill_stars.equals("4")) {
    skillName.put("four_star", skillName.getInt("four_star") + 1 + "");
} else if (skill_stars.equals("5")) {
    skillName.put("five_star", skillName.getInt("five_star") + 1 + "");
}

3. Re-calculate the total number of ratings for that skill and its average rating, and update the object. If the skill has not been rated before, create a new rating object and initialize it with zero star counts.

public JSONObject createRatingObject(String skill_stars) {
    JSONObject skillName = new JSONObject();
    JSONObject skillStars = new JSONObject();
    skillStars.put("one_star", 0);
    skillStars.put("two_star", 0);
    skillStars.put("three_star", 0);
    skillStars.put("four_star", 0);
    skillStars.put("five_star", 0);
    skillStars.put("avg_star", 0);
    skillStars.put("total_star", 0);
    skillName.put("stars", skillStars);
    return skillName;   // return the freshly initialized rating object
}

The complete FiveStarRateSkillService.java is available here.

Rating a skill

Sample endpoint

This gives a 3-star rating to the "aboutsusi" skill.

Parameters
- Model
- Group
- Language
- Skill
- Stars

Response

{
  "ratings": {
    "one_star": 0,
    "four_star": 0,
    "five_star": 1,
    "total_star": 1,
    "three_star": 0,
    "avg_star": 5,
    "two_star": 0
  },
  "session": {"identity": {
    "type": "email",
    "name": "[email protected]",
    "anonymous": false
  }},
  "accepted": true,
  "message": "Skill ratings updated"
}

Getting the stats of skill ratings

Sample endpoint

This fetches the current ratings of the "aboutsusi" skill.
Parameters
- Model
- Group
- Language
- Skill

Response

{
  "session": {
    "identity": {
      "type": "host",
      "name": "172.68.144.159_81c88a10",
      "anonymous": true
    }
  },
  "skill_name": "aboutsusi",
  "accepted": true,
  "message": "Skill ratings fetched",
  "skill_rating": {
    "negative": "0",
    "positive": "0",
    "stars": {
      "one_star": 0,
      "four_star": 2,
      "five_star": 1,
      "total_star": 4,
      "three_star": 1,
      "avg_star": 4,
      "two_star": 0
    },
    "feedback_count": 3
  }
}

Conclusion

So this 5 star rating system will help in improving the SUSI skills. Also, it will help in making better decisions when we have multiple similar skills and we have to choose one to respond to the user query.

References
- Using SUSI AI Server to Store User Feedback for a Skill
https://blog.fossasia.org/tag/5-star-rating/
CC-MAIN-2020-45
en
refinedweb
User talk:SwifT/Complete Handbook. SwifT

For your information, some history... The complete handbook was an effort that I started back in 2005 or so, trying to create a more resourceful handbook (with more than just installation instructions). It has always lingered in the draft/ location in the Gentoo documentation repository, and due to time constraints I had to leave Gentoo for a while. Since the chances of the document getting finished on the official repository are fairly slim, I decided to upload this to the wiki and continue here. This is a massive effort and I greatly appreciate any editors and contributors/authors to add to it. I will work on the content more in the coming days, so it's definitely not a "let me drop this here and wait" activity ;-) --SwifT 08:14, 30 June 2012 (UTC)

- SwifT, thank you for all the effort you have put into this Handbook. I will make it my goal to pick up where you left off. There is some work I need to do before I can get around to it, but I should get to it eventually. It will make a nice late summer project! --Maffblaster (talk) 09:16, 7 May 2015 (UTC)
- Apparently this project was completed about five years ago. --Davidbryant (talk) 20:38, 11 August 2020 (UTC)

Opening paragraph

"... tries to extend on various subjects ..." sounds like a transliteration of a colloquialism in another language. Perhaps, "This handbook provides background on various subjects regarding Linux and the Gentoo Linux operating system." Or perhaps instead of "background" the word should be "context". And did you mean "questions" when you used "quests"? NeoPhyte Rep (talk) 02:56, 25 November 2013 (UTC)

- Fixed it for you, NeoPhyte. --Davidbryant (talk) 20:53, 11 August 2020 (UTC)

BASEPAGENAME

FYI, in the main namespace there is no apparent difference between a link like [[{{BASEPAGENAME}}/Subpage]] and simply [[/Subpage]], except that the second link is quite a bit simpler looking. In other namespaces, the first way wouldn't work (it always points to the main namespace) but the second would still work as expected. So why are we using {{BASEPAGENAME}} in our links? - dcljr (talk) 02:09, 23 September 2015 (UTC)

- Why? Because Gentoo Linux users like doing things the hard way, I guess. --Davidbryant (talk) 20:56, 11 August 2020 (UTC)
https://wiki.gentoo.org/wiki/User_talk:SwifT/Complete_Handbook
CC-MAIN-2020-45
en
refinedweb
Forward declarations.

Our calculator can deal with symbolic variables. The user creates a variable by inventing a name for it and then using it in arithmetic operations. Every variable has to be initialized (assigned a value in an assignment expression) before it can be used in evaluating other expressions. To store the values of user-defined variables our calculator will need some kind of "memory." We will create a class Store that contains a fixed number, size, of memory cells. Each cell can store a value of the type double. The cells are numbered from zero to size-1. Each cell can be in either of two states: uninitialized or initialized.

enum { stNotInit, stInit };

The association between a symbolic name (a string) and the cell number is handled by the symbol table. For instance, when the user first introduces a given variable, say x, the string "x" is added to the symbol table and assigned an integer, say 3. From that point on, the value of the variable x will be stored in cell number 3 in the Store object.

We would also like to pre-initialize the symbol table and the store with some useful constants like e (the base of natural logarithms) and pi (the ratio of the circumference of a circle to its diameter). We would like to do it in the constructor of Store, therefore we need to pass it a reference to the symbol table. Now here's a little snag: We want to put the definition of the class Store in a separate header file, store.h. The definition of the class SymbolTable is in a different file, symtab.h. When the compiler is looking at the declaration of the constructor of Store

Store (int size, SymbolTable & symTab);

it has no idea what SymbolTable is. The simple-minded solution is to include the file symtab.h in store.h. There is nothing wrong with doing that, except for burdening the compiler with the processing of one more file whenever it is processing store.h or any file that includes it. In a really big project, with a lot of header files including one another, it might become a real headache. If you are using any type of dependency checker, it will assume that a change in symtab.h requires the recompilation of all the files that include it directly or indirectly. In particular, any file that includes store.h will have to be recompiled too. And all this unnecessary processing just because we wanted to let the compiler know that SymbolTable is a name of a class? Why don't we just say that? Indeed, the syntax of such a forward declaration is:

class SymbolTable;

As long as we are only using pointers or references to SymbolTable, this will do. We don't need to include symtab.h. On the other hand, a forward declaration would not be sufficient if we wanted to call any of the methods of SymbolTable (including the constructor or the destructor) or if we tried to embed or inherit from SymbolTable.

class SymbolTable; // forward declaration

class Store
{
public:
    Store (int size, SymbolTable & symTab);
    ~Store ()
    {
        delete []_cell;
        delete []_status;
    }
    bool IsInit (int id) const
    {
        return (id < _size && _status [id] != stNotInit);
    }
    double Value (int id) const
    {
        assert (IsInit (id));
        return _cell [id];
    }
    void SetValue (int id, double val)
    {
        if (id < _size)
        {
            _cell [id] = val;
            _status [id] = stInit;
        }
    }
private:
    int _size;
    double * _cell;
    unsigned char * _status;
};

Store contains two arrays: the array of cells and the array of statuses (initialized/uninitialized). They are initialized in the constructor and deleted in the destructor.
We also store the size of these arrays (it's used for error checking). The client of Store can check whether a given cell has been initialized, get the value stored there, as well as set (and initialize) this value. The constructor of Store is defined in the source file store.cpp. Since the constructor calls actual methods of the SymbolTable, the forward declaration of this class is no longer sufficient and we need to explicitly include the header symtab.h in store.cpp.

Store::Store (int size, SymbolTable & symTab)
    : _size (size)
{
    _cell = new double [size];
    _status = new unsigned char [size];
    for (int i = 0; i < size; ++i)
        _status [i] = stNotInit;
    // add predefined constants
    // Note: if more needed, do a more general job
    cout << "e = " << exp(1) << endl;
    int id = symTab.ForceAdd ("e", 1);
    SetValue (id, exp (1));
    cout << "pi = " << 2 * acos (0.0) << endl;
    id = symTab.ForceAdd ("pi", 2);
    SetValue (id, 2.0 * acos (0.0));
}

We add the mapping of the string "e" of size 1 to the symbol table and then use the returned integer as a cell number in the call to SetValue. The same procedure is used to initialize the value of "pi."
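To see concretely where a forward declaration is and is not enough, consider the following illustration. The Logger class here is hypothetical, invented purely for demonstration; SymbolTable and its ForceAdd method are the ones used above:

// logger.h -- only needs the name SymbolTable
class SymbolTable;             // forward declaration: an incomplete type

class Logger
{
public:
    explicit Logger (SymbolTable & symTab)
        : _symTab (symTab) {}  // ok: we only bind a reference
    void Snapshot ();          // defined in logger.cpp
private:
    SymbolTable & _symTab;     // ok: the size of a reference is known
    // SymbolTable _table;     // error: incomplete type; embedding needs the full definition
};

// logger.cpp -- calls a method, so the full definition is required
#include "logger.h"
#include "symtab.h"

void Logger::Snapshot ()
{
    _symTab.ForceAdd ("snapshot", 8); // calling a method requires the complete type
}

The header gets away with the incomplete type because the compiler only needs to know that SymbolTable names a class; the source file, which actually dereferences the object, must include symtab.h.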
http://www.relisoft.com/book/lang/project/7store.html
crawl-002
en
refinedweb
A priority-enabled thread pool wrapper: work items are queued into either a priority queue or a standard queue, every queued item is dispatched through the runtime's thread pool, and priority items are always dequeued first:

public class ThreadPoolPriority
{
    private ThreadPoolPriority(){}
    private static Queue _priorityQueue = new Queue();
    private static Queue _standardQueue = new Queue();
    public static void QueueUserWorkItem(
        WaitCallback callback, object state, bool isPriority)
    {
        if (callback == null) throw new ArgumentNullException("callback");
        new PermissionSet(PermissionState.Unrestricted).Demand();
        QueuedCallback qc = new QueuedCallback();
        qc.Callback = callback;
        qc.State = state;
        lock(_priorityQueue)
        {
            (isPriority ? _priorityQueue : _standardQueue).Enqueue(qc);
        }
        ThreadPool.UnsafeQueueUserWorkItem(
            new WaitCallback(HandleWorkItem), null);
    }
    private static void HandleWorkItem(object ignored)
    {
        QueuedCallback qc;
        lock(_priorityQueue)
        {
            qc = (QueuedCallback)(_priorityQueue.Count > 0 ?
                _priorityQueue : _standardQueue).Dequeue();
        }
        qc.Callback(qc.State);
    }
    private class QueuedCallback
    {
        public WaitCallback Callback;
        public object State;
    }
}

Because ThreadPool.UnsafeQueueUserWorkItem does not flow the caller's context, its use is guarded by a demand for full trust:

new PermissionSet(PermissionState.Unrestricted).Demand();

The same class updated for the .NET Framework 2.0, using a generic Queue<T> and explicitly capturing and running the ExecutionContext for each work item:

public static class ThreadPoolPriority
{
    private static Queue<QueuedCallback> _priorityQueue = new Queue<QueuedCallback>(),
        _standardQueue = new Queue<QueuedCallback>();
    public static void QueueUserWorkItem(
        WaitCallback callback, object state, bool isPriority)
    {
        if (callback == null) throw new ArgumentNullException("callback");
        QueuedCallback qc = new QueuedCallback();
        qc.Callback = new ContextCallback(callback);
        qc.State = state;
        qc.ExecutionContext = ExecutionContext.Capture();
        lock(_priorityQueue)
        {
            (isPriority ? _priorityQueue : _standardQueue).Enqueue(qc);
        }
        ThreadPool.UnsafeQueueUserWorkItem(HandleWorkItem, null);
    }
    private static void HandleWorkItem(object ignored)
    {
        QueuedCallback qc;
        lock(_priorityQueue)
        {
            qc = (_priorityQueue.Count > 0 ?
                _priorityQueue : _standardQueue).Dequeue();
        }
        ExecutionContext.Run(qc.ExecutionContext, qc.Callback, qc.State);
    }
    private struct QueuedCallback
    {
        public ContextCallback Callback;
        public object State;
        public ExecutionContext ExecutionContext;
    }
}

In C#, one delegate type can be constructed from another with a compatible signature, as in:

qc.Callback = new ContextCallback(callback);

The Visual Basic equivalent uses AddressOf on the source delegate's Invoke method:

qc.Callback = New ContextCallback(AddressOf callback.Invoke)

As the IL header shows, a C# 2.0 static class compiles down to a class that is both abstract and sealed:

.class public abstract auto ansi sealed beforefieldinit ThreadPoolPriority extends [mscorlib]System.Object

C# 2.0's method group conversions also shorten the queuing call, from:

ThreadPool.UnsafeQueueUserWorkItem( new WaitCallback(HandleWorkItem), null);

to:

ThreadPool.UnsafeQueueUserWorkItem(HandleWorkItem, null);

Finally, internal calls are methods whose implementation lives inside the runtime itself. In managed code they are declared as extern methods marked as internal calls:

[MethodImpl(MethodImplOptions.InternalCall)]
private extern bool CompleteGuid();

and the runtime maps them to native implementations through its FCall tables:

static ECFunc gGuidFuncs[] =
{
    {FCFuncElement("CompleteGuid", NULL, (LPVOID)GuidNative::CompleteGuid)},
    {NULL, NULL, NULL}
};

Send your questions and comments to [email protected].
http://msdn.microsoft.com/en-us/magazine/cc163896.aspx
crawl-002
en
refinedweb
I just came across this blog post from John W Powell detailing his experience creating automated builds with VSeWSS 1.3.

Just came across this tutorial for doing web page development and deployment using the VSeWSS 1.3 tool.

For the demos I used the WSS Developer VPC which is available here. I uninstalled the VSeWSS 1.2 that comes on that image and installed the VSeWSS 1.3 from here. I also installed SPDisposeCheck from here. So that's my machine image, on which I also created a sample Employees list and a Projects list in the default SharePoint site. Here's the code that I used in the first demo. Pretty simple stuff.

namespace WebPart1
{
    [Guid("64a11214-36e3-4b1a-b8a7-fbb0ca9370c6")]
    public class WebPart1 : System.Web.UI.WebControls.WebParts.WebPart
    {
        public WebPart1()
        {
        }

        protected override void CreateChildControls()
        {
            SPGridView customerGridView = new SPGridView();
            SPWeb web = SPContext.Current.Web;
            SPList list = web.Lists["Employees"];
            SPQuery query = new SPQuery(list.DefaultView);
            query.Query = "<Where><Eq><FieldRef Name='JobTitle' /><Value Type='Text'>SDE</Value></Eq></Where>";
            SPListItemCollection items = list.GetItems(query);
            AutoAddColumns(customerGridView, list);
            customerGridView.DataSource = items.GetDataTable();
            customerGridView.DataBind();
            Controls.Add(customerGridView);
            base.CreateChildControls();
        }

        private void AutoAddColumns(SPGridView gridView, SPList list)
        {
            gridView.AutoGenerateColumns = false;
            foreach (string fieldname in list.DefaultView.ViewFields)
            {
                SPField field = list.Fields.GetFieldByInternalName(fieldname);
                BoundField column = new BoundField();
                column.DataField = field.StaticName;
                column.HeaderText = field.Title;
                gridView.Columns.Add(column);
            }
        }
    }
}

I'm presenting at TechEd in Los Angeles next week. If you're at the event I hope you'll come to my talk. It is "OFC204 Easy SharePoint Development with VSeWSS 1.3". The talk is suitable for developers who currently don't work on SharePoint. I will demo the coding of a simple SharePoint web part that accesses SharePoint data lists. I will also talk about the types of applications people build on SharePoint, how you can integrate SharePoint development with application lifecycle management tools, and about the first SharePoint development best practices you need to know and the tools to help you with them. Find my talk here: Mon 5/11 | 2:45 PM-4:00 PM | Room 153

Kirk Evans has started recording a series of screencasts for Channel 9 on SharePoint development. They are great for getting started with the tools and the development environment. Kirk's first screencast shows building a simple web part with the Visual Studio 2008 extensions for SharePoint (VSeWSS). Kirk's second screencast shows building a low-level feature with VSeWSS. Cool how he fixes a bug in his code during the screencast. For more information: VSeWSS v1.3 March 2008 CTP

We do this for Windows Server 2008 here.

I had a problem with a virtual machine I'm building for a demo today, and after some time getting frustrated, I did a search and found Alex Riley's blog. Visual Studio 2008 reported the error "The Application Cannot Start" on startup when run as Administrator; it was working as a regular user, but that doesn't allow me to debug. I copied the two files that Alex referred to and it works nicely now. Thanks Alex, that saved me some debugging time.

This week we released an updated CTP of VSeWSS 1.3 here. Let us know your feedback on the Connect site or on the MSDN SharePoint Developer Forum.
Also today Soma blogged on the Visual Studio 2010 tools for SharePoint, and ComputerWorld wrote an article on it here.

Recently I blogged about Microsoft's publishing of the SharePoint Dispose Checker tool and the update to the SharePoint Dispose Best Practices guidance. Also recently we published the Application Lifecycle Management Resource Center for SharePoint. Now there's a new update to the Best Practices: Common Coding Issues When Using the SharePoint Object Model. Check it out.

We just released the Application Lifecycle Management Resource Center for SharePoint Server. Here you can find answers to common questions about application lifecycle management with SharePoint, such as: The resource center points to a number of new guidance articles and tools that we have released over the past 6 months or so. After reviewing this, if you still have a question about application lifecycle management for SharePoint development, then I want to hear about it. Please comment on this blog or send me feedback.

note: The MSDN article hasn't been updated at the time of this blog post. The MSDN article should be updated in the next 24 hours.
http://blogs.msdn.com/pandrew/default.aspx
crawl-002
en
refinedweb
[PATCH 4/7]: Allow mknod of ptmx and tty in devpts
[PATCH 5/7]: Implement get_pts_ns() and put_pts_ns()
[PATCH 6/7]: Determine pts_ns from a pty's inode
[PATCH 7/7]: Enable cloning PTY namespaces

Todo:
- This patchset depends on availability of additional clone flags and relies on Cedric's clone64 patchset. See
- Needs some cleanup and more testing
- Ensure patchset is bisect-safe

---

Changelogs from earlier posts to Containers@.

Changelog[v2]: (Patches 4 and 6 differ significantly from [v1]. Others are mostly the same)
- [Alexey Dobriyan, Pavel Emelyanov] Removed the hack to check for user-space mount.
- [Serge Hallyn] Added rcu locking around access to sb->s_fs_info.
- [Serge Hallyn] Allow creation of /dev/pts/ptmx and /dev/pts/tty devices to simplify the process of finding the 'owning' pts-ns of the device (specially when accessed from the parent pts-ns). See patches 4 and 6 for details.

Changelog[v1]:
- Fixed circular reference by not caching the pts_ns in sb->s_fs_info (without incrementing reference count) and clearing the sb->s_fs_info when destroying the pts_ns.
- To allow access to a child container's ptys from the parent container, determine the 'pts_ns' of a 'pty' from its inode.
- Added a check (hack) to ensure user-space mount of /dev/pts is done before creating PTYs in a new pts-ns.
http://article.gmane.org/gmane.linux.kernel/663354
Oh, what a tangled web.

I love the IQueryable interface, but it's got a dark, checkered past that most of you might not know about. IQueryable is a great way to expose your API or domain model for querying, or to provide a specialized query processor that can be used directly by LINQ. It defines the pattern for you to gather up a user's query and present it to your processing engine as a single expression tree that you can either transform or interpret. It's the way LINQ becomes 'integrated' for many LINQ to XXX products.

Yet it was not supposed to be that way, with all that ease of use, plugging automatically into LINQ with an abundance of pre-written query operators at your disposal. You were not supposed to use it for your own ends. It was not meant for you at all. It was meant for LINQ to SQL. Period. The interface, the 'Queryable' query operators and whatnot were all part of the LINQ to SQL product and namespace. The original plan of record was to require all LINQ implementations to define their own query operators that abided by the standard query operator pattern. You'd have to cook up your own clever way to connect calls to your 'Where' method back to some encoding that your engine could understand. It would be daunting work to be sure, but not impossible. After all, you were likely building a query processor anyway, so what's another thousand lines of code?

Of course, that was until that fateful day in December 2005 when the gauntlet was thrown down and the challenge was made; a challenge that had nothing whatsoever to do with IQueryable. It started out as a simple little request by the infamous Don Box. "Why don't you guys have an eval for these expression trees?" he said to me that day in an email. He was working on another project unrelated to LINQ and saw potential use of the LINQ expression trees for his own purpose, as long as there was some way to actually execute or interpret them. Of course, he wanted us to take it on. Yet our product was already so full of features that we were having a hard time as it was to convince management that even an ultra slimmed-down LINQ to SQL would fit into the Orcas schedule. So I mailed him back with, "Yes, it should be straightforward to convert these trees into IL using the reflection emit API. Why don't you build it?"

You see, I challenged him to write the code, not the other way around. I figured that would shut him up. Yet, to my surprise, he actually agreed. He was going to do it over the holiday break, hand it over to me when he was done, and I'd find some way to get it into the product. As it turns out, I was actually relieved. It wasn't like we had not already thought about it. Most of the design team wanted there to be an eval-like mechanism, but it was not high priority since the primary consumers (LINQ to SQL and other ORMs) were not going to need it.

So over the holiday break I actually built up anticipation for it. I was pre-geeking-out. What was he going to build? Did this guy even know how to write code? Would he figure out how to solve the closure mess? My god, what had I started? As it turns out, Don did not find the time to build anything, and I was somewhat let down. However, I had gotten myself so juiced up about the idea of it working that I didn't care. It just gave me the excuse to do it myself, and I love to write brand-new geek'n-out code.
So the next weekend in January I spent all the brownie points I had built up over the break by engaging in 'family time' and plugged myself into my machine for an all-night coding session. I was running high on adrenaline, and the solutions just seemed to come as fast as I could type. By Sunday it was all working beautifully. On Monday I was eager to show it off, and so I did during the design meeting. I showed everyone a mechanism that could turn any LINQ expression tree into a delegate that could be called directly at runtime. The IL generated was the same as what the compiler would give you, so it performed just as well.

Of course, that's when it happened. That's when this seemingly unrelated geek-fest over the expression tree blossomed into something much more. You see, Anders had been thinking about something else over the break. He was looking for a way to solve the polymorphism problem of making queries first-class things within the language, since what we had so far was really just an illusion. Query objects were really just IEnumerables. LINQ to SQL queries were IQueryables, which were IEnumerables by inheritance. The only way someone could write a general piece of code to operate over 'any' query was to specify its type as IEnumerable. Yet the compiler would treat the query differently depending on its static type. LINQ to SQL's IQueryable mechanism wouldn't work if the compiler thought it was IEnumerable; no expression tree would be built, and the query would just run locally and not inside the database server where it belonged.

After seeing the demonstration, everything just clicked. If IEnumerables could be turned into IQueryables such that the operations applied to them were captured as expression trees (as LINQ to SQL was already doing), and if those expression trees could be turned back into executable IL as delegates (which so happened to be just what we needed to feed into the locally executing standard query operators), then we could easily turn IQueryables back into locally executing IEnumerables. The IQueryable interface could become the polymorphic query interface instead of IEnumerable. Queries meant to run against local objects could be manipulated just like their expression-tree-toting brethren. Dynamic mini-languages could be written to generate expression trees and apply query operators to any type of query generically. Life was good. The whole was suddenly greater than the sum of its parts.

It became obvious that we needed the expression compiler as part of the product, and that IQueryable should be promoted out of the private domain of LINQ to SQL and into the limelight to become the general definition of a query. It was a done deal. And all because Don wanted us to do Evil Eval.
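For readers who want to see the two halves of that design-meeting demo in concrete terms, here is a minimal, self-contained C# sketch written for this retelling (not the original Orcas code): compiling an expression tree into a directly callable delegate (the "eval"), and using AsQueryable to treat local objects as IQueryable.

using System;
using System.Linq;
using System.Linq.Expressions;

class Demo
{
    static void Main()
    {
        // Half one: an expression tree compiled into IL at runtime.
        Expression<Func<int, bool>> tree = n => n % 2 == 0;
        Func<int, bool> compiled = tree.Compile(); // the "eval" Don asked for
        Console.WriteLine(compiled(4)); // True

        // Half two: IQueryable as the polymorphic query type.
        // AsQueryable wraps local objects so operators build expression trees,
        // which are compiled back to delegates when the query runs locally.
        IQueryable<int> query = new[] { 1, 2, 3, 4 }.AsQueryable()
                                                    .Where(tree);
        foreach (int n in query)
            Console.WriteLine(n); // 2, 4
    }
}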
http://blogs.msdn.com/mattwar/archive/2007/06/01/iqueryable-s-deep-dark-secret.aspx
void MethodA()
{
    try
    {
        MethodB(1);
    }
    catch (InvalidOperationException e)
    {
        // place handling code here
    }
}

void MethodB(Int32 num)
{
    if (num == 1)
    {
        throw new InvalidOperationException();
    }
    else
    {
        throw new ArgumentException();
    }
}

using System;
using System.Threading;
using System.Drawing;
using System.Windows.Forms;

class ExceptionForm : Form
{
    public ExceptionForm()
    {
        String[] buttonStrings = {
            "Throw From This Button's Event Handler",
            "Close Window and Throw After the Fact",
            "Create a Thread and Throw From It",
            "Throw From a Thread-pool Thread",
            "Throw From a Finalizer Method" };
        for (Int32 index = 0; index < buttonStrings.Length; index++)
        {
            Button button = new Button();
            button.Text = buttonStrings[index];
            button.Size = new Size(224, 24);
            button.Location = new Point(8, 11 + index * 32);
            button.Click += new EventHandler(OnButtonClick);
            Controls.Add(button);
        }
        ClientSize = new Size(240, 174);
        Text = "Exception Form";
    }

    private void OnButtonClick(Object sender, EventArgs args)
    {
        switch (((Button)sender).Text)
        {
            case "Throw From This Button's Event Handler":
                throw new InvalidOperationException();
            case "Close Window and Throw After the Fact":
                App.shouldThrowOnExit = true;
                break;
            case "Create a Thread and Throw From It":
                Thread t = new Thread(new ThreadStart(ThreadMethod));
                t.Start();
                break;
            case "Throw From a Thread-pool Thread":
                ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadPoolMethod));
                break;
            case "Throw From a Finalizer Method":
                CreateObjectThatThrowsOnFinalize();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                break;
        }
    }

    void ThreadMethod() { throw new InvalidOperationException(); }
    void ThreadPoolMethod(Object param) { throw new InvalidOperationException(); }
    void CreateObjectThatThrowsOnFinalize() { new ThrowsOnFinalize(); }

    class ThrowsOnFinalize
    {
        ~ThrowsOnFinalize() { throw new InvalidOperationException(); }
    }
}

class App
{
    internal static Boolean shouldThrowOnExit = false;

    public static void Main()
    {
        Application.Run(new ExceptionForm());
        if (shouldThrowOnExit) throw new InvalidOperationException();
    }
}

try
{
    •••
}
catch (Exception e)
{
    // handle any CLS-compliant exception
}

using System;
using System.Diagnostics;
using System.Threading;
using System.Windows.Forms;

class App
{
    public static void Main()
    {
        try
        {
            SubMain();
        }
        catch (Exception e)
        {
            HandleUnhandledException(e);
        }
    }

    public static void SubMain()
    {
        // Setup unhandled exception handlers
        AppDomain.CurrentDomain.UnhandledException +=    // CLR
            new UnhandledExceptionEventHandler(OnUnhandledException);
        Application.ThreadException +=                   // Windows Forms
            new System.Threading.ThreadExceptionEventHandler(OnGuiUnhandedException);

        // Start application logic
        // Perhaps call to Application.Run(...);
    }

    // CLR unhandled exception
    private static void OnUnhandledException(Object sender, UnhandledExceptionEventArgs e)
    {
        HandleUnhandledException(e.ExceptionObject);
    }

    // Windows Forms unhandled exception
    private static void OnGuiUnhandedException(Object sender, ThreadExceptionEventArgs e)
    {
        HandleUnhandledException(e.Exception);
    }

    static void HandleUnhandledException(Object o)
    {
        Exception e = o as Exception;
        if (e != null)
        {
            // Report System.Exception info
            Debug.WriteLine("Exception = " + e.GetType());
            Debug.WriteLine("Message   = " + e.Message);
            Debug.WriteLine("FullText  = " + e.ToString());
        }
        else
        {
            // Report exception Object info
            Debug.WriteLine("Exception = " + o.GetType());
            Debug.WriteLine("FullText  = " + o.ToString());
        }
        MessageBox.Show("An unhandled exception occurred " +
            "and the application is shutting down.");
        Environment.Exit(1); // Shutting down
    }
}

class App
{
    public static void Main()
    {
        String[] entries = Directory.GetFileSystemEntries(@"C:\");
        foreach (String s in entries)
            Console.WriteLine(s);
    }
}

Unhandled Exception: System.Security.SecurityException: Request for the permission of type System.Security.Permissions.FileIOPermission, mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 failed.
   at System.IO.Directory.GetFileSystemEntries(String path, String searchPattern)
   at System.IO.Directory.GetFileSystemEntries(String path)
   at App.Main()
The state of the failed permission was:
<IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" PathDiscovery="C:\."/>

using System;
using System.IO;
using System.Security;
using System.Security.Permissions;

class App
{
    public static void Main()
    {
        try
        {
            // Demand full trust permissions
            PermissionSet fullTrust = new PermissionSet(PermissionState.Unrestricted);
            fullTrust.Demand();

            // Perform normal application logic
            String[] entries = Directory.GetFileSystemEntries(@"C:\");
            foreach (String s in entries)
                Console.WriteLine(s);
        }
        catch (SecurityException)
        {
            // Report that permissions were not full trust
            Console.WriteLine("This application requires full-trust " +
                "security permissions to execute.");
        }
    }
}

Send your questions and comments for Jason to [email protected].
http://msdn.microsoft.com/en-us/magazine/cc188720.aspx
A synchronization primitive that can also be used for interprocess synchronization.

' Visual Basic
<ComVisibleAttribute(True)> _
<HostProtectionAttribute(SecurityAction.LinkDemand, Synchronization := True, _
    ExternalThreading := True)> _
Public NotInheritable Class Mutex _
    Inherits WaitHandle

Dim instance As Mutex

// C#
[ComVisibleAttribute(true)]
[HostProtectionAttribute(SecurityAction.LinkDemand, Synchronization = true, ExternalThreading = true)]
public sealed class Mutex : WaitHandle

// C++
[ComVisibleAttribute(true)]
[HostProtectionAttribute(SecurityAction::LinkDemand, Synchronization = true, ExternalThreading = true)]
public ref class Mutex sealed : public WaitHandle

// J#
public final class Mutex extends WaitHandle

The HostProtectionAttribute attribute applied to this type or member has the Synchronization and ExternalThreading resource values shown above.

using namespace System;
using namespace System::Threading;

const int numIterations = 1;
const int numThreads = 3;

ref class Test
{
public:
    // Create a new Mutex. The creating thread does not own the Mutex.
    static Mutex^ mut = gcnew Mutex;

    static void MyThreadProc()
    {
        for (int i = 0; i < numIterations; i++)
        {
            UseResource();
        }
    }

private:
    // This method represents a resource that must be synchronized
    // so that only one thread at a time can enter.
    static void UseResource()
    {
        // Wait until it is OK to enter.
        mut->WaitOne();
        Console::WriteLine("{0} has entered the protected area", Thread::CurrentThread->Name);

        // Place code to access non-reentrant resources here.
        // Simulate some work.
        Thread::Sleep(500);

        Console::WriteLine("{0} is leaving the protected area\r\n", Thread::CurrentThread->Name);

        // Release the Mutex.
        mut->ReleaseMutex();
    }
};

int main()
{
    // Create the threads that will use the protected resource.
    for (int i = 0; i < numThreads; i++)
    {
        Thread^ myThread = gcnew Thread(gcnew ThreadStart(Test::MyThreadProc));
        myThread->Name = String::Format("Thread {0}", i + 1);
        myThread->Start();
    }

    // The main thread exits, but the application continues to
    // run until all foreground threads have exited.
}
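The summary above notes that a Mutex can also be used for interprocess synchronization, which the sample does not show. A minimal C# sketch of that scenario might look like the following; the mutex name is an arbitrary illustrative choice.

using System;
using System.Threading;

class CrossProcessDemo
{
    static void Main()
    {
        // A named mutex is visible to other processes on the same machine;
        // "SampleAppMutex" here is just an illustrative name.
        using (Mutex mutex = new Mutex(false, "SampleAppMutex"))
        {
            mutex.WaitOne();           // blocks until no other process holds it
            try
            {
                Console.WriteLine("Holding the cross-process mutex...");
                Thread.Sleep(2000);    // simulate work on the shared resource
            }
            finally
            {
                mutex.ReleaseMutex();  // always release, even on failure
            }
        }
    }
}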
http://msdn.microsoft.com/en-us/library/system.threading.mutex.aspx
Updated: July 2008

This topic contains information about some of the new features and enhancements in Visual Studio 2008 and associated service releases.

Topic Contents:
New in Visual Studio 2008 SP1
- Smart Device Projects
- Occasionally Connected Applications
- Power Packs Controls and Components
- DataRepeater Control
- Line and Shape Controls
- PrintForm Component
- Printer Compatibility Library
- Distributing Power Packs
- .NET Framework Client Profile Support
New in Visual Studio 2008 Original Release Version
- Target a Specific .NET Framework
- Multiple Processor Capabilities
- Enhanced Logging
- Item Definitions
- Assembly Location and Name Changes
- More About What's New in Languages and Technologies

Visual Studio provides tools to create occasionally connected smart device applications by using SQL Server Compact and Microsoft Synchronization Services for ADO.NET (Devices) version 1.0. For more information, see Occasionally Connected Applications (Devices).

The Microsoft Visual Basic Power Packs 3.0 controls and components that were previously available for download are now included in Visual Studio 2008 SP1. Power Packs can be used in both Visual Basic and Visual C# Windows Forms Application projects. Included in the Power Packs are a new DataRepeater control, and also updated versions of the previously released Line and Shape controls, the PrintForm component, and the Printer Compatibility Library.

The new DataRepeater control lets you use standard Windows Forms controls to display rows of data in a scrollable container. This control provides more flexibility than standard grid controls. For more information, see Introduction to the DataRepeater Control (Visual Studio).

The Line and Shape controls are a set of three graphical controls that let you draw lines, ovals, and rectangles on forms and containers at design time. These controls make it easier to enhance the look of your user interface. Line and Shape controls encapsulate many of the graphics methods that are contained in the System.Drawing namespace, so that you can draw lines and shapes without writing your own graphics code. The Printer Compatibility Library lets you print text without using a forced carriage return, like the semicolon syntax used by the Print method in Visual Basic 6.0. For more information, see Printer Compatibility Library.

Also included is a bootstrapping package that lets you easily redistribute the Visual Basic Power Packs with an application. This lets you include the Power Packs in a ClickOnce Project or Setup Project by just clicking a check box. For more information, see Deploying Applications That Reference Power Packs Controls.

Visual Studio 2008 SP1 supports the new .NET Framework Client Profile, a subset of the .NET Framework redistributable library that is optimized for client scenarios. These are some of the benefits of the Client Profile:
- A bootstrapper, which is just 200K, enables a fast response to the setup URL of an application.
- An integrated custom UI lets you package your application together with the Client Profile for a seamless installation experience.
- A small file size of 26.5MB helps to make installation fast.

ClickOnce deployment includes the following features:
- Support for publishing unsigned manifests. For more information, see "Generating an Unsigned Manifest" in How to: Sign Application and Deployment Manifests.
- Enhancements to the Publish Options dialog box. These enhancements include support for configuring the following options:
  - File Associations.
  - Error URL, which specifies a Web site that is displayed in dialog boxes that are encountered during ClickOnce installations.
  - Suite name, which specifies the name of the folder on the Start menu in which the application will be installed.
  - Exclude Deployment Provider URL, which specifies whether to exclude the deployment provider URL from the deployment manifest.
For more information, see Publish Options Dialog Box.

If you have Visual Studio 2005 and Visual Studio 2008 installed on the same computer, then when you first start Visual Studio 2008, you can migrate most of your settings from Visual Studio 2005. Third-party code snippets and add-ins cannot be automatically migrated and must be manually installed again for use in Visual Studio 2008. If you do not have Visual Studio 2005 and Visual Studio 2008 installed on the same computer, you can still manually migrate your Visual Studio 2005 settings for use in Visual Studio 2008. For more information, see How to: Share Settings Between Computers or Visual Studio Versions and Visual Studio Settings.

When you author community components by using Visual Studio 2008, you can specify whether you intend the component to be installed for use with Visual Studio 2005 and Visual Studio 2008 or just with Visual Studio 2008 by using a new value for the ContentVersion element. If you install a community component designed in Visual Studio 2005, the component will automatically be installed for use with both Visual Studio 2005 and Visual Studio 2008. A community component created by using Visual Studio 2005 might not work in Visual Studio 2008 and vice versa, depending on the design. For more information, see How to: Package Community Components to Use the Visual Studio Content Installer and Community Component Essentials.

The Community menu has been removed for Visual Studio 2008. The commands formerly known as Ask a Question and Check Question Status have been combined into a new command named MSDN Forums, which is on the Help menu. The Send Feedback command is now the Report a Bug command, also on the Help menu. All other commands that were on the Community menu were removed from Visual Studio 2008.

Several user interface (UI) elements have been updated. These include the following:
- IDE Navigator: An improved interface makes switching between items easier.
- Tool window docking targets have been improved to make tool windows easier to dock.
- Common dialog boxes: Visual Studio 2008 uses Windows standard dialog boxes instead of custom dialog boxes. This makes the navigation experience more consistent with that of Windows.

You can now specify a custom font for IDE elements not identified individually in the Show settings for list in the Fonts and Colors, Environment, Options Dialog Box by using the new option Environment Font.

In earlier versions of Visual Studio, the Class Designer supported only the managed languages (Visual C# and Visual Basic). In Visual Studio 2008, Class Designer adds limited support for native C++ code that can be used only for visualization and documentation. For more information about Visual C++ support in Class Designer, see Working with Visual C++ Code in Class Designer.

The Web application project model supports the features of ASP.NET version 2.0 (such as master pages, data controls, membership and logon, role management, Web parts, personalization, site navigation, and themes). The Web application project model in Visual Studio 2005 removes two elements that are required for Web projects in Visual Studio .NET 2003:
- Using FrontPage Server Extensions. These are no longer required, but they are supported if your site already uses them.
- Using a local copy of Internet Information Services (IIS). The new project model supports both IIS and the built-in ASP.NET Development Server.

Use Web application projects when you have to do one of the following:
- Migrate large applications from Visual Studio .NET 2003 to Visual Studio 2005.
- Control the names of output assemblies.
- Use stand-alone classes to reference page and user-control classes.
- Build a Web application that includes multiple Web projects.
- Add pre-build and post-build steps during compilation.
For more information about Web application projects, see Web Application Projects Overview.

You can now create Web applications that feature next-generation user interfaces and reusable client components that use the new features of Visual Studio 2005. You can develop Web pages by using a server-based approach, a client-based approach, or a combination of both, according to your requirements. The AJAX server-based and client-based programming models are supported by the following:
- Server controls that support server-based AJAX development. This includes the ScriptManager, UpdatePanel, UpdateProgress, and Timer controls. These controls enable you to create rich client behavior, such as partial-page rendering and displaying update progress during asynchronous postbacks, with little or no client script.
- Support for script globalization and localization. Globalization enables you to display dates and numbers based on a culture value (locale). Localization enables you to specify localized content (text, images, and so on) for client components for UI elements or exception messages.
- Access to Web services and to ASP.NET authentication, roles management, and profile application services.

With partial-page rendering, an asynchronous postback refreshes only the content inside the panel, which creates a more fluid user experience. You can display the progress of the partial-page update by using UpdateProgress controls.

Windows Presentation Foundation (WPF) applications have been added to Visual Studio 2008. There are four WPF project types:
- WPF Application (.xaml, .exe)
- WPF Browser Application (.exe, .xbap)
- WPF Custom Control Library (.dll)
- WPF User Control Library (.dll)
When a WPF project is loaded in the IDE, the user interface of the Project Designer pages lets you specify properties specific to WPF applications.

Web Application projects were added to Visual Studio in Visual Studio 2005 Service Pack 1 and are also included in Visual Studio 2008. The new Web Application project model provides the same Web Application project semantics as the Visual Studio .NET 2003 Web project model, except updated with features of Visual Studio 2005 and ASP.NET version 2.0. The Visual Studio Project Designer supports Web application projects, with the following limitations:
- On the Settings page, Web application projects can only be application-scoped. For more information, see Settings Page, Project Designer.
- On the Signing page, the manifest signing option is disabled because Web application projects do not use ClickOnce deployment. For more information, see Signing Page, Project Designer.

Multitargeting lets you target code to a specific .NET Framework version:
- .NET Framework 2.0, which was included with Visual Studio 2005.
- .NET Framework 3.0, which is included with Windows Vista.
- .NET Framework 3.5, which is included with Visual Studio 2008.
To support multitargeting, the Advanced Compiler Settings (Visual Basic) and Advanced Build Settings (C#) dialog boxes have a new Target framework drop-down list that lets you specify these framework versions. Based on the selected target, the dialog boxes provide the appropriate user interface and default values.

ClickOnce gives ISVs the option to re-sign the application manifest with their customer's company name, application name, and deployment/support URL. When end users install the application, the ISV's original company branding still appears on the "Do you want to trust this application?" dialog box.

You can build and deploy Visual Studio Tools for Office applications by using the Project Designer's Publish page or the Publish Wizard. ClickOnce supports manifest generation under User Account Control (UAC) on Windows Vista. ClickOnce supports the deployment of Office add-ins and documents when you use Visual Studio Tools for Office. For more information, see the Visual Studio Tools for Office Developer Center Web site.

ClickOnce has better support for third-party browsers. Earlier versions supported installation in third-party browsers by using plug-ins, which sometimes caused problems. In this version, a user can install a ClickOnce file directly by using the Run command. You can associate file name extensions with a ClickOnce application, so that the application can be started directly from the associated file type. For more information, see How to: Create File Associations For a ClickOnce Application.

ClickOnce has better support for changing the deployment location of an application and handling certificate expiration. For more information about the ClickOnce security model, see ClickOnce Deployment and Authenticode. For security, ClickOnce applications are always installed and run on a per-user basis. An application that requests Administrator privileges from Windows Vista UAC fails gracefully during installation.

Windows Installer deployment has been updated for Windows Vista and the latest .NET Framework versions:
- Windows Installer has been updated so that installation on Windows Vista is smooth, even when it is running under UAC.
- The .NET Framework Launch Condition supports targeting applications for the new .NET Framework 3.0 and 3.5 versions.
- When you open an existing Visual Studio project in Visual Studio 2008, the Version property of .NET Framework Launch Conditions in the existing project is changed to the current version. You must change the Version property back to the appropriate value.
For more information, see What's New in Deployment.

Visual Studio 2008 now has a rich CSS editing experience with several new tools to make working with cascading style sheets (CSS) easier than ever. Much of the work designing the layout and styling content can be done in Design view using the CSS Properties grid, the Apply Styles and Manage Styles panes, and the Direct Style Application tool. You can also change positioning, padding, and margins in Design view using WYSIWYG visual layout tools.

IntelliSense has been significantly improved and now supports JScript authoring and ASP.NET AJAX scripting. Client script that is included in a Web page by using <script> tags now has the benefit of IntelliSense, as do .js script files. Additionally, IntelliSense displays XML code comments. XML code comments are used to describe the summary, parameter, and return details of the client script.
ASP.NET AJAX also uses XML code comments to provide IntelliSense for ASP.NET AJAX types and members. IntelliSense is also supported for external script file references that use XML code comments.

You can now specify that the Object Browser display only information for a single version of the .NET Framework or the .NET Compact Framework. In addition, Find Symbol and Find and Replace window searches can be restricted to a single version of the .NET Framework or the .NET Compact Framework.

The Windows Presentation Foundation (WPF) Designer lets you create WPF applications and custom controls in the IDE. The WPF Designer combines real-time editing of XAML with an enhanced graphical design-time experience. The following features are new for the WPF Designer:
- SplitView lets you adjust objects in the graphical designer and immediately view the changes to the underlying XAML code. Likewise, changes to the XAML code are immediately reflected in the graphical designer.
- The Document Outline window lets you view and move through your XAML with full selection synchronization between the designer, the document outline, the XAML editor, and the Properties window.
- IntelliSense in the XAML editor enables rapid code entry. IntelliSense now supports types that you have defined.
- Grid lines can be added to grids in the designer to enable easy grid-based control placement.
- Snap lines let you easily align controls and text.
- The designer now supports the loading of types you have defined. These include custom controls and user controls.
- You can cancel loading of large XAML files.
- Design-time extensibility supports design mode and property editors.
For more information, see WPF Designer.

N-Tier support for typed datasets provides enhancements to the Dataset Designer that assist in separating TableAdapter code and typed dataset code into discrete projects. For more information, see N-Tier Data Application Overview. For more information about data in Visual Studio 2008, see What's New in Data.

Language-Integrated Query (LINQ) is a new set of features in Visual Studio 2008 that extend powerful query capabilities into the language syntax of C# and Visual Basic. LINQ introduces standard, easily learned patterns for querying and transforming data, and can be extended to support potentially any kind of data source. Visual Studio 2008 includes LINQ provider assemblies that enable language-integrated querying of .NET Framework collections (LINQ to Objects), SQL databases (LINQ to SQL), ADO.NET Datasets (LINQ to ADO.NET), and XML documents (LINQ to XML). For more information, see What's New in Visual C#, What's New in Visual Basic, LINQ to ADO.NET (Portal Page), and What's New in System.Xml.

The standard query operators are the methods that comprise the query capabilities in the LINQ pattern. For more information about the standard query operators, see Standard Query Operators Overview, Enumerable, and Queryable.

Client application services are new in the .NET Framework 3.5 and enable Windows-based applications (including Windows Forms and Windows Presentation Foundation applications) to easily access the ASP.NET login, roles, and profile services. These services let you access the Web services through existing .NET Framework login, roles, and settings APIs. Client application services also support occasional connectivity by storing and retrieving user information from a local data cache when the application is offline. For more information, see Client Application Services.
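As a quick illustration of the LINQ features and standard query operators described above, here is a minimal LINQ to Objects sketch; the data and names are made up for the example.

using System;
using System.Linq;

class LinqSketch
{
    static void Main()
    {
        string[] names = { "Anders", "Don", "Matt", "Kirk" }; // sample data

        // Query syntax compiles down to the standard query operators...
        var shortNames = from n in names
                         where n.Length <= 4
                         orderby n
                         select n.ToUpper();

        // ...and is equivalent to the method-based form:
        var same = names.Where(n => n.Length <= 4)
                        .OrderBy(n => n)
                        .Select(n => n.ToUpper());

        foreach (var n in shortNames)
            Console.WriteLine(n); // DON, KIRK, MATT
    }
}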
Visual Studio 2008 provides several new reporting features and improvements.

Visual Studio 2008 includes two new project templates for creating reporting applications. You will find the Reports Application template available in the New Project dialog box and the ASP.NET Reports Web Site template available in the New Web Site dialog box. When you create a new Reports Application project, Visual Studio provides a report (.rdlc) and a form (.vb/.cs) with a ReportViewer control bound to the report. For an ASP.NET Reports Web Site project, Visual Studio will create a Web site that contains a report (.rdlc), a default ASP.NET page (.aspx) with a ReportViewer control bound to the report, and a Web configuration file (.config). When you create a report project, a new Report Wizard is started. You can then use the wizard to build the report, or alternatively, close the wizard and build the report manually.

Visual Studio 2008 introduces a Report Wizard, which guides you through the steps to create a basic report. You will select a report data source, define a data set, select a report type (tabular or matrix), and apply a style to the report. After you complete the wizard, you can enhance the report by using Report Designer. The Report Wizard is started automatically when you create a new Reports Application project or ASP.NET Reports Web Site.

The Expression Editor now provides sample expressions that you can use in report expressions. You can copy the sample expressions to your report to use as is or modify to suit your needs.

The RSClientPrint control is now available when the ASP.NET ReportViewer control is configured for local processing. This enables you to print reports that have been processed by the control and are independent of a report server. The ReportViewer controls will now compress reports that are rendered or exported to the PDF format when they are configured for local processing.

MSBuild now lets you build projects for specific versions of the .NET Framework. Several new API functions support this new functionality. For more information, see Building for Specific .NET Frameworks.

MSBuild now recognizes when a system is using multiple processors, either multicore processors or multiple separate processors. MSBuild uses all the available processors to reduce the overall build time for projects. For more information, see Using Multiple Processors to Build Projects.

Build event logging has been upgraded to handle multi-processor builds. MSBuild now supports the distributed logging model in addition to the central logging model, and introduces a new technology known as "forwarding loggers." For more information, see Logging in MSBuild.

The new ItemDefinitionGroup project file element lets you define a set of item definitions, which are global default metadata values that are applied to all items in the project. For more information, see Item Definitions.

The file names and locations of the MSBuild assemblies have been updated for Visual Studio 2008. The following assemblies now have "v3.5" appended to their file names:
- Microsoft.Build.Conversion.v3.5.dll
- Microsoft.Build.Utilities.v3.5.dll
- Microsoft.Build.Tasks.v3.5.dll
In addition, the following build assemblies are now located in \Program Files\Reference Assemblies\Microsoft\Framework\v3.5\:
- Microsoft.Build.Engine.dll
- Microsoft.Build.Framework.dll
The Microsoft.Build.Tasks.v3.5.dll file is located in \Windows\Microsoft.NET\Framework\v3.5\.

Change history: July 2008 - Added a section about new features in Visual Studio 2008 SP1 (SP1 feature change).
http://msdn.microsoft.com/hi-in/library/bb386063(en-us).aspx
The Resource File Generator (Resgen.exe).

To create the localized form:
1. Create a new Windows Application named "WindowsApplication1". For details, see How to: Create a Windows Application Project.
2. In the Properties window, set the form's Localizable property to true. The Language property is already set to (Default).
3. Drag a Button control from the Windows Forms tab of the Toolbox to the form, and set its Text property to Hello World.
4. Set the form's Language property to German (Germany).
5. Set the button's Text property to Hallo Welt.
6. Set the form's Language property to French (France).
7. Set the button's Text property to Bonjour le Monde. You can resize the button to accommodate the longer string, if necessary.
8. Save and build the solution.
9. Click the Show All Files button in Solution Explorer. The resource files appear underneath Form1.vb, Form1.cs, or Form1.jsl. Form1.resx is the resource file for the default culture.
10. Press the F5 key or choose Start from the Debug menu.

You will now see the English, French, or German greeting depending on the UI language of your operating system. The UI language used in Windows is a function of the CurrentUICulture setting. If your copy of Windows has a Multilingual User Interface Pack (MUI) installed, you can change the UI language in Control Panel. For more information, see the Windows Server 2003, Windows XP & Windows 2000 MUI.

In the Code Editor, add the following code at the beginning of the module, before the Form1 declaration:

' Visual Basic
Imports System.Globalization
Imports System.Threading

// C#
using System.Globalization;
using System.Threading;

// Visual J#
import System.Globalization.*;
import System.Threading.*;

Add the following code. In Visual Basic, it should go in the New function, before the call to InitializeComponent. In Visual C# and Visual J#, it should go in the form's constructor.

' Visual Basic
' Sets the UI culture to French (France).
Thread.CurrentThread.CurrentUICulture = New CultureInfo("fr-FR")

// C#
// Sets the UI culture to French (France).
Thread.CurrentThread.CurrentUICulture = new CultureInfo("fr-FR");

// Visual J#
// Sets the UI culture to French (France).
System.Threading.Thread.get_CurrentThread().set_CurrentUICulture(
    new CultureInfo("fr-FR"));

Now the form will always be displayed in French. If you changed the size of the button earlier to accommodate the longer French string, notice that the button size has also been persisted in the French resource file.

To create the string resource files:
1. On the Project menu, click Add New Item.
2. In the Templates box, select the Assembly Resource File template, and type the file name "WinFormStrings.resx" in the Name box. The file WinFormStrings.resx will contain fallback resources in English. These resources will be accessed whenever the application cannot find resources more appropriate to the UI culture. The file is added to your project in Solution Explorer and automatically opens in the XML Designer in Data view.
3. In the Data Tables pane, select data.
4. In the Data pane, click an empty row and enter strMessage in the name column and Hello World in the value column.
5. On the File menu, click Save WinFormStrings.resx.
6. Do steps 1-5 twice more to create two more resource files named WinFormStrings.de-DE.resx and WinFormStrings.fr-FR.resx, with the string resources specified in the following table. The file WinFormStrings.de-DE.resx will contain resources that are specific to German as spoken in Germany. The file WinFormStrings.fr-FR.resx will contain resources that are specific to French as spoken in France.

    File                       Name        Value
    WinFormStrings.de-DE.resx  strMessage  Hallo Welt
    WinFormStrings.fr-FR.resx  strMessage  Bonjour le Monde

In the Code Editor, import the System.Resources namespace at the beginning of the code module.
' Visual Basic
Imports System.Resources

// C#
using System.Resources;

// Visual J#
import System.Resources.*;

In Design view, double-click the button to display the code for its Click event handler and add the following code. The ResourceManager constructor takes two arguments. The first is the root name of the resources — that is, the name of the resource file without the culture and .resx suffixes. The second argument is the main assembly.

' Visual Basic
' Declare a Resource Manager instance.
Dim LocRM As New ResourceManager("WindowsApplication1.WinFormStrings", _
    GetType(Form1).Assembly)
' Assign the string for the "strMessage" key to a message box.
MessageBox.Show(LocRM.GetString("strMessage"))

// C#
// Declare a Resource Manager instance.
ResourceManager LocRM = new ResourceManager("WindowsApplication1.WinFormStrings",
    typeof(Form1).Assembly);
// Assign the string for the "strMessage" key to a message box.
MessageBox.Show(LocRM.GetString("strMessage"));

// Visual J#
// Declare a Resource Manager instance.
ResourceManager LocRM = new ResourceManager("WindowsApplication1.WinFormStrings",
    System.Type.GetType("WindowsApplication1.Form1").get_Assembly());
// Assign the string for the "strMessage" key to a message box.
MessageBox.Show(LocRM.GetString("strMessage"));

By default, the ResourceManager object is case-sensitive. If you want lookups that ignore the case of the resource key, set the ResourceManager.IgnoreCase property to true.

Build and run the form. Click the button. The message box will display a string appropriate for the UI culture setting; or, if it cannot find a resource for the UI culture, it will display a string from the fallback resources.

If you receive this error, then ensure you've inserted the full namespace into the ResourceManager constructor, as in the example below:

System.Resources.ResourceManager rm = new System.Resources.ResourceManager("My.Name.Space.StringResources", typ...

If this doesn't rectify the problem, take a look through your Project folder in Windows Explorer and find the .resources file. I found that mine had "Properties" appended before the filename, because it was in a subfolder called Properties (the VS IDE put it there). So my code went something like this:

SysRes.ResourceManager rm = new SysRes.ResourceManager("My.Name.Space.Properties.StringResources", typ...

Hope this helps! Luke
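Putting the pieces of the walkthrough together, the following console-style C# sketch shows the fallback behavior programmatically. It assumes the WinFormStrings resource files from the walkthrough have been compiled into the assembly.

using System;
using System.Globalization;
using System.Resources;
using System.Threading;

class FallbackSketch
{
    static void Main()
    {
        // Force the UI culture, as the walkthrough does with "fr-FR".
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-DE");

        ResourceManager rm = new ResourceManager(
            "WindowsApplication1.WinFormStrings",
            typeof(FallbackSketch).Assembly);

        // Resolves to "Hallo Welt"; for a culture with no satellite
        // resources (say "es-ES") it falls back to the neutral "Hello World".
        Console.WriteLine(rm.GetString("strMessage"));
    }
}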
http://msdn.microsoft.com/en-us/library/y99d1cd3%28VS.80%29.aspx
pthread_create - thread creation

#include <pthread.h>

int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void*), void *arg);

The pthread_create() function is used to create a new thread, with attributes specified by attr, within a process. If pthread_create() fails, no new thread is created and the contents of the location referenced by thread are undefined.

If successful, the pthread_create() function returns zero. Otherwise, an error number is returned to indicate the error. The pthread_create() function will not return an error code of [EINTR].

See also: pthread_exit(), pthread_join(), fork(), <pthread.h>.

Derived from the POSIX Threads Extension (1003.1c-1995)
http://www.opengroup.org/onlinepubs/007908799/xsh/pthread_create.html
realpath - resolve a pathname

#include <stdlib.h>

char *realpath(const char *file_name, char *resolved_name);

The realpath() function derives, from the pathname pointed to by file_name, an absolute pathname that names the same file, whose resolution does not involve ".", "..", or symbolic links. The generated pathname is stored, up to a maximum of {PATH_MAX} bytes, in the buffer pointed to by resolved_name.

On successful completion, realpath() returns a pointer to the resolved name. Otherwise, realpath() returns a null pointer and sets errno to indicate the error, and the contents of the buffer pointed to by resolved_name are undefined.

The realpath() function will fail if:
[ENAMETOOLONG] The file_name argument is longer than {PATH_MAX}.

The realpath() function may fail if:
[ENAMETOOLONG] Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}.
[ENOMEM] Insufficient storage space is available.

See also: getcwd(), sysconf(), <stdlib.h>.
http://www.opengroup.org/onlinepubs/007908799/xsh/realpath.html
Microsoft Corporation
September 1999

Summary: This article provides guidelines for ensuring that applications run well under Microsoft® Windows® 2000 Terminal Services. It also provides information on enhancing the user experience by tuning the application for the Terminal Services environment, and taking advantage of the capabilities Terminal Services provides. (26 printed pages)

Terminal Services is a configurable service incorporated into the Microsoft Windows 2000 Server operating system that delivers the Windows 2000 Professional desktop and 32-bit Windows-based applications to diverse desktop platforms. Terminal Services can run any well-behaved Windows-based application, but its multiuser nature tends to expose flaws and shortcuts in applications.

Contents:
Introduction
Conforming to Core Practices
Optimizing 32-Bit Applications
Legacy Applications
Conclusion
For More Information
Appendix A
Appendix B
Appendix C
Appendix D

Terminal Services is the configurable service included in the Windows 2000 Server operating system that gives it the capability to run 32-bit Windows-based applications centrally from a server. This technology was introduced in Windows NT® Server, Terminal Server Edition, version 4.0, where it continues to deliver the base terminal emulation functionality that customers ask for today. In the Windows 2000 operating system, Terminal Services are fully integrated with the Windows 2000 Server kernel. Terminal Services emulator clients are available for many different desktop platforms (MS-DOS®, Windows, Macintosh, UNIX, and others). Non-Windows-based desktops require third-party add-on software.

Unlike the traditional client/server environment, when Terminal Services are enabled in Windows 2000 Server, all the application processing occurs on the server. The Terminal Services client performs no local processing of applications; it simply displays the application output. The Terminal Services technology transmits only the application presentation—the graphical user interface (GUI)—to the client. Each user logs on and perceives only his or her session, which is transparently managed by the server operating system and is independent from any other client session.

From an application development perspective, one of the biggest benefits of Terminal Services is that well-behaved 16- or 32-bit Windows-based applications run as is—no programming changes are required to run them under Terminal Services. However, this does not mean that all existing applications run equally well under Terminal Services. Understanding how to design applications that take advantage of the new capabilities of Windows 2000 Terminal Services is important. It's also important to understand how bad programming habits are magnified in the Terminal Services environment, and to recognize where specific programming practices for the multiuser environment can be applied. Following these guidelines does not limit or compromise the ability of an application to function in the traditional Windows-based client/server environment—Terminal Services-optimized applications work well in both environments.

Note: This article examines optimization techniques any application developer can use to ensure that his or her Windows-based application runs well under Terminal Services. There is also support in the Windows 2000 Server operating system for a small set of Windows APIs available for Terminal Services. This article does not include information on those APIs.
A separate article, "Using and Understanding APIs for Terminal Server," is available for download from.

In addition to providing a means to serve Windows-based applications to terminals and other thin client devices, Terminal Services extends the multiuser capacity of the Windows 2000 Server and Windows NT Server 4.0 operating system family. Of course, Windows 2000 Server and Windows NT Server 4.0 are inherently multiuser-capable in several ways. The Terminal Services technology goes beyond these built-in client/server multiuser services, which are an integral part of the Windows 2000 Server and Windows NT Server 4.0 operating system family. The Terminal Services architecture allows users and applications to share hardware and software resources commonly found on a Windows 2000 Professional or Windows NT-based client in the traditional two- or three-tiered client/server architecture. These resources, which are shared instead on the server, include use of a central CPU, memory, and storage, as well as operating system resources such as the registry and other data structures. Developers of applications written for Windows-based desktops can use the information in this article to optimize their applications to run under Terminal Services.

In some ways, Terminal Services are analogous to older, centralized host or mainframe environments. In the centralized host architecture, dumb terminals provide a simple, character-oriented conduit between the user and the host. Users can log on, run programs, read and write shared files, direct output to shared printers, and access shared databases. Furthermore, each terminal session functions independently from other terminal sessions because the arbitration between shared resources is performed deep inside the host operating system.

Terminal Services differs somewhat from the centralized host architecture. The primary difference is the graphical nature of the Windows 2000 Server operating system environment. Host environments have traditionally been character-oriented, requiring only a small amount of traffic (ASCII characters) to travel the communication lines between the host and the terminal or terminal emulator. With Terminal Services, all of the graphical screen output and related input/output (I/O) (for example, from a mouse or keyboard) must flow between the desktop client (that is, the Windows-based terminal or terminal emulator running on a computer) and the Windows 2000 Server running Terminal Services. This means that, for highly graphical and animated applications, a lot of information must travel over the network to the client device. Fortunately, the display protocol that operates between the Terminal Services client and the server optimizes this transmission and is completely transparent to the application developer. Additional information on the Microsoft Remote Desktop Protocol (RDP), the display protocol used by Terminal Services, can be found in the sections on Remote Desktop Protocol in the white papers "Using Terminal Services for Remote Administration of the Windows 2000 Server Family" and "Windows 2000 Terminal Services: An Integrated, Server-based Computing Solution," both available from.

Another important difference between the host-based and Terminal Services environments is how applications that run in these environments must be designed. In a centralized host environment, applications must be developed specifically to run in that environment.
With Terminal Services, applications designed for any Windows-based environment should work without having to be explicitly developed for the Terminal Services environment. Applications that run on Windows 2000 Server and Windows NT Server 4.0 should run without modification when Terminal Services is enabled. This is important when you consider the implications of multiple users sharing a Windows 2000-based system simultaneously. Instead of different users running applications that use their own computer hardware resources (such as CPU, memory, and disk) and local software resources (such as registry settings, preference files, and dynamic link libraries), Terminal Services users share hardware and software resources. For example, if two users run the same application in a Terminal Services environment, two copies of that application are started in the same system, each operating under a different user context. All of this is managed transparently by Terminal Services within the operating system. Multiple users accessing the same set of applications in a common system can create contention.

You can mitigate many of these points of contention by sizing the Terminal Services system with sufficient CPU, memory, and disk resources to handle the client demand. For example, a multiple processor configuration can maximize CPU availability. Installing extra physical memory can maximize memory accessibility. Finally, disk access performance can be made optimal by configuring multiple SCSI channels and distributing your operating system and application loads across different physical drives. Properly configuring a Terminal Services system is a critical element that improves application performance for the client. Deployment guidelines and capacity planning information for Terminal Services system administrators are available on the Microsoft Windows 2000 Web site.

To ensure that an application installs and works properly in a Terminal Services environment, it is critical to test it in that environment. It is important to construct a typical usage scenario with the appropriate number of sessions running the application for a time period that simulates actual usage. In a typical personal computer desktop environment, the application and system may be shut down frequently enough to mask application problems such as memory leaks. Capacity planning guidelines for Terminal Services, which can be found on the Windows NT Server Web site, demonstrate three application usage scenarios: light (task-oriented) user, medium (administrative) user, and heavy (knowledge) user. Developers should determine which scenario best fits the likely usage of their application, and follow the guidelines to configure a representative Terminal Services testing environment. Terminal Services client emulation tools with application scripting support are available in the Windows 2000 Server Resource Kit to assist in this testing.

Although hardware sizing and testing is an important part of creating a scalable Terminal Services environment, software considerations are equally important. In fact, fine-tuning an application can often considerably reduce resource competition and improve application performance for the user. The next section of this article presents suggestions that you can easily implement to create programs that are optimized for the Terminal Services environment.
Managing and deploying applications from a central server provides benefits that you can enhance by fine-tuning applications that run when the Terminal Services feature is enabled in Windows 2000 Server. When you employ the programming practices suggested in this section, you ensure that an application will operate properly under Terminal Services. Keep in mind that following these practices does not inhibit the ability of your application to run in a non-Terminal Services environment. In fact, following these practices can often improve performance and increase compatibility with other services.

The following guidelines describe programmatic behavior that is necessary for an application to operate efficiently and effectively in the Terminal Services environment. There are three main areas where applications tend to run into trouble in a Terminal Services environment. In addition, Windows NT 4.0 Terminal Server Edition has some restrictions on DCOM usage.

Application installation is different when Terminal Services is enabled in Windows 2000 Server. The registry and .ini file mapping support that is built into Terminal Services allows applications that were not originally designed to run in a multiuser environment to run correctly under Terminal Services. This means that users should be able to execute these applications simultaneously and save whatever preferences the application allows for each of them. Of course, each user must have a unique home directory. If no home directory is specified for a user by the administrator, the user's home directory defaults to his or her user profile directory, \Wtsrv\Profiles\Username.

To enable each user to retain individual application settings, he or she must have a unique copy of the appropriate .ini files or registry entries. To accomplish this, Terminal Services replicates the .ini files and registry entries from a common system location to each user as necessary. For .ini files, this means that the .ini files in the system directory (%systemroot%) will be copied to each user's Windows directory. For registry entries, the registry entries will be copied from HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\Install\Software to HKEY_CURRENT_USER\Software.

In order for Terminal Services to replicate the necessary registry entries or .ini files for each user, the user must install the application in Install mode. This is accomplished by using Add/Remove Programs in Control Panel. Install mode may also be enabled from the command line after executing the change user /install command, though using Add/Remove Programs is preferable. If the administrator uses this function, application installation should properly allow for user-specific application settings. The built-in Windows Installer service included in Windows 2000 Server will be the best way for application developers to ensure correct installation. For more information on the Windows Installer service, see the current guidelines for the Windows 2000 Logo program located at.

Because many application versions share DLLs, only one version of an application can be run at a time. If multiple versions are installed on the system, it is very possible that different users will attempt to run various versions of the same application simultaneously. For example, both Microsoft Internet Explorer 3.x and Microsoft Internet Explorer 4.x share various DLLs that will fail to work properly when both versions are installed on the same server.
If an older application does not install properly, an Application Compatibility Script may be needed to correct installation problems with the registry or other issues. Scripts for many popular applications, such as Microsoft Office, are included with Windows 2000 Terminal Services. Additional information on these scripts can be found on the Microsoft Web site in the article "Developing Applications Compatibility Scripts with Windows NT Server 4.0."

In the Terminal Services environment, each user receives an HKEY_CURRENT_USER registry hive that stores user-specific information at logon time; however, all users share the HKEY_LOCAL_MACHINE hive. This means that any information placed in the HKEY_LOCAL_MACHINE hive affects all users, while information placed in the HKEY_CURRENT_USER hive affects only one user session. Some applications make the assumption that one machine equates to one user, and they store user information in the HKEY_LOCAL_MACHINE hive. This practice can create serious problems in a multiuser environment. With this in mind, applications should properly separate global registry information from local (user) registry information, and store information in the correct hive.

In addition to separating global and local information in the registry, global and local file-based data constructs should also be maintained separately. For example, user preference files should not be stored in the system directory (for example, Winnt) or program directory (for example, Program Files) structures. Instead, preference files or other user-specific local data constructs should be stored in the user's home directory or a user-specified directory. This consideration also applies to temporary files used to store interim information (such as cached data) or to pass data on to another application. User-specific temporary files must also be stored on a per-user basis.

There are some types of applications that should only run with one instance on the server. Typically, these are applications that monitor or manage system resources, such as a disk administration program. These applications should check if they are already running and not initiate a second application process. In particular, if an application of this type polls a system resource continually, multiple instances of the application are not needed, and instead could seriously degrade system performance.

The danger of memory leaks is intensified in the Terminal Services environment. A memory leak in a program running in the traditional Windows client environment will eventually cause trouble, but may, in fact, be masked by the fact that the desktop device is turned on and off frequently and memory is thus cleared. In the Terminal Services environment, that same application can be run multiple times by multiple users, thus rapidly magnifying the effect of a memory leak.

In the traditional distributed Windows-based client/server architecture, one user is logged on to one computer at a time; therefore, the computer name or Internet Protocol (IP) address assigned to either a desktop or server computer equates to one user. In the Terminal Services environment, the application can only see the IP or NetBIOS address of the Terminal Server. Applications that use the computer name or IP address for licensing or as a means of identifying an iteration of the application on the network will not work properly in the Terminal Services environment because the server's computer name or IP address can really equate to many different desktops or users.
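As a concrete sketch of the hive separation described above (shown here in C# for brevity; the guidance itself is language-neutral, and the key path and value names are hypothetical):

using System;
using Microsoft.Win32;

class SettingsSketch
{
    // Hypothetical vendor/application key path.
    const string KeyPath = @"Software\ExampleVendor\ExampleApp";

    static void Main()
    {
        // Per-user preference: HKEY_CURRENT_USER is private to each
        // logged-on session, so every Terminal Services user keeps a copy.
        using (RegistryKey user = Registry.CurrentUser.CreateSubKey(KeyPath))
        {
            user.SetValue("SpellCheckEnabled", 0);
        }

        // Machine-wide state: HKEY_LOCAL_MACHINE is shared by all sessions,
        // so only truly global settings (such as an install path) belong here.
        using (RegistryKey machine = Registry.LocalMachine.OpenSubKey(KeyPath))
        {
            if (machine != null)
                Console.WriteLine(machine.GetValue("InstallDir", "not set"));
        }
    }
}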
Some applications assume that the Windows shell (including the browser) will be running and use these as resources in the application. If the administrator chooses, Terminal Services allows applications to run entirely without the shell or desktop. This feature is provided so that administrators can lock down a client session and deny the user access to anything except a single application. It can be used in a task-based worker environment, where limiting end-user access to the desktop and file system reduces potential security or configuration problems and reduces help-desk costs.

Don't assume persistence of any files in the Temp folder beyond the current user session on any machine, because administrators can set a policy to delete everything in the Temp folder each time the user logs on. For example, if a recovery file that is regularly updated during an editing session is being stored in the Temp folder, it may not be present to restore changes if an application crashes. Also, these types of files are clearly per-user, so they should be saved in the Application Data folder in the User Profile, where the appropriate file security is also in place so that others cannot view the file. An administrator in an enterprise environment may also configure Terminal Services to save per-user Application Data in a directory on a completely separate file server for recoverability reasons, particularly in a multiserver farm configuration.

Modifications to the Graphical Identification and Authentication (GINA) component are supported in Windows 2000 Terminal Services with the availability of Terminal Services APIs that allow for session management and client credential access. Windows NT Server 4.0, Terminal Server Edition does not support modifications to GINA. For more information on the Terminal Services APIs, see the article "Using and Understanding APIs for Terminal Server," at.

Because Terminal Server Edition is a modified version of Windows NT Server 4.0, it uses operating system files with the same file names as Windows NT Server 4.0 without Terminal Server enabled, but the files may in fact be very different. Replacing system files, such as the TCP/IP network stack, could result in serious system problems. The same situation can occur if an application replaces Windows 2000 Server operating system files while Terminal Services is enabled.

If your application is composed of a server component (such as a service) and client components (such as foreground applications) that communicate with the server component, make sure that the server component can differentiate between multiple clients residing on the same system. To accomplish this, clients should establish communication with the server component through a well-defined global interface (for example, Remote Procedure Call or named pipes), and the server and client should negotiate a different communication channel for each user session. This same consideration applies to any client/server communication in the Terminal Services environment that needs to be tied to a single user session; one way to derive a per-session channel is sketched below.
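Here is one possible way to build such a per-session channel name - a sketch for illustration only, with the pipe base name ContosoApp invented for the example:

#include <windows.h>
#include <stdio.h>

BOOL BuildSessionPipeName(char* szPipeName, size_t cchName)
{
    DWORD dwSessionId = 0;

    // Identify the Terminal Services session this process belongs to.
    // (Requires Windows 2000, or Terminal Server Edition with SP4.)
    if (!ProcessIdToSessionId(GetCurrentProcessId(), &dwSessionId))
        return FALSE;

    // Produces names like \\.\pipe\ContosoApp_Session2, giving the shared
    // server component one distinct channel per user session.
    _snprintf(szPipeName, cchName, "\\\\.\\pipe\\ContosoApp_Session%lu",
              dwSessionId);
    return TRUE;
}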
Many applications can be customized at installation time to include or exclude specific features or components. This approach does not work in the Terminal Services environment, because users typically access a common set of program files and libraries. Therefore, if the administrator excludes a component during the initial setup of the application under Terminal Services, all users will be prevented from accessing that component.

A better way of addressing customization—in both Terminal Services and in the traditional Windows client/server environment—is to enable feature selection through user profiles. To adopt this approach, determine feature selection or deselection at run time based on settings in the current user's registry hives. In Windows 2000 Server, you can use the Group Policy MMC snap-in to configure which features are available for which users. Windows NT Server makes use of the System Policy Editor tool to specify settings for the user desktop. In addition, the administrator in a Terminal Services environment must be able to override end-user selections by setting policies using the Group Policy MMC snap-in or by setting System Policies in the Windows NT Server operating system. For example, CPU-intensive processes, such as a background spelling checker (which might be fine to run in the background on a desktop PC), should be disabled automatically by the application in a Terminal Services environment so they do not degrade performance for other users. Application options should be designed to check Group Policies (or System Policies in Windows NT Server) first and default to those policies; a sketch of such a check appears at the end of this section. This process should be transparent to the user, who should not have to care whether the application is running locally on the desktop or centrally from Terminal Services.

Windows 2000 Server with Terminal Services enabled offers multilanguage support. Using this capability, Terminal Services can simultaneously serve users in as many languages as are installed on the server. In Windows NT Server 4.0, Terminal Server Edition, a single Terminal Server does not have the ability to simultaneously host multiple system languages. (For example, in Windows NT, on a North American English version of Terminal Server, users can read and create documents using non-Western character sets, provided the required font files are installed, but the system uses English menus, dialog boxes, and other operating system functions.)

The Terminal Services environment supports traditional serial, parallel, and sound ports attached to the server, but does not natively support serial, parallel, and sound ports that are integrated into the client desktop system (except for keyboard and mouse). This means that the hardware environment can appear different to the application when it runs in different user contexts. Automatic configuration of local client printing or drive resources from the server is not available in Windows NT Server 4.0, Terminal Server Edition. Configuration of local printers is available for clients running the RDP protocol in Windows 2000 Server. In Windows NT Server 4.0, Terminal Server Edition, print or file access to or from the client must use network redirection.

Distributed Component Object Model (DCOM) is fully supported in Windows 2000 Terminal Services, so there are no special considerations there. In Windows NT Server 4.0, Terminal Server Edition, DCOM functionality is a subset of DCOM in the Standard Edition of Microsoft Windows NT Server 4.0. For this reason, some applications that are written for and function properly in a Windows NT Server 4.0 environment may not function properly when running on Windows NT Server 4.0, Terminal Server Edition. For more information regarding DCOM functionality in Terminal Server Edition, please see the white paper "Using DCOM with Windows NT Server 4.0, Terminal Server Edition and Terminal Services in Windows 2000," located at.
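Returning to the recommendation that applications check policies first and default to them: the sketch below is added for illustration only; the policy key Software\Policies\Contoso\SampleApp and the value name AllowBackgroundSpelling are invented names, since real policy paths depend on your Group Policy template:

#include <windows.h>

BOOL IsBackgroundSpellingAllowed(void)
{
    HKEY  hKey = NULL;
    DWORD dwValue = 0, dwType = 0, dwSize = sizeof(dwValue);
    BOOL  fAllowed = TRUE;   // default behavior when no policy is set

    if (RegOpenKeyExA(HKEY_CURRENT_USER,
                      "Software\\Policies\\Contoso\\SampleApp",  // hypothetical
                      0, KEY_QUERY_VALUE, &hKey) == ERROR_SUCCESS)
    {
        if (RegQueryValueExA(hKey, "AllowBackgroundSpelling", NULL, &dwType,
                             (LPBYTE)&dwValue, &dwSize) == ERROR_SUCCESS
            && dwType == REG_DWORD)
        {
            fAllowed = (dwValue != 0);   // the policy wins over user selection
        }
        RegCloseKey(hKey);
    }
    return fAllowed;
}

The application would call this before starting its background spelling checker and simply skip the feature when the policy (or its default) says no.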
An application can detect whether it is being set up in, or is running in, the Terminal Services environment. Once that determination is made, the application can optimize its behavior based on the suggestions presented in this section. In some cases, the application may choose to alter its multiuser behavior when it detects that it is running in a Terminal Services environment. In others, the application may base changes on whether it is running on the console or in a remote session.

Since Terminal Services is a configurable service under Windows 2000, proper detection of the service is even more critical during both application installation and execution. In the Terminal Server 4.0 product, you could simply check a product suite key in the registry and look for a "Terminal Server" string. In Windows 2000 this is not the case, because the "Terminal Server" string will always be included in the product suite key even when Terminal Services isn't enabled. Detection of Terminal Services in Windows 2000 must be done through a new set of product suite APIs (defined in WINBASE.H). See the IsTerminalServicesEnabled function in Appendix A for the proper code to use to determine if Terminal Services is enabled on the Windows 2000 Server.

Checking to see if your application is running in a Terminal Server 4.0 environment is possible using the product suite concept that was added to Windows NT 4.0 Service Pack 3. Appendix A contains an example function called ValidateProductSuite that can be used on Terminal Server 4.0 machines to determine if a particular product suite has been installed. Using the example function, a Terminal Server 4.0 system can be detected with the following code:

BOOL fIsTerminalServer40;
fIsTerminalServer40 = ValidateProductSuite("Terminal Server");

For certain management tools or utilities, only one instance of the application should run on the server at a time. An example is a disk defragmenter, or a tool that monitors system resources such as hard drive space, memory, or I/O. Code can be used to detect whether another instance of the application is already running under Terminal Services, so that a second application process is not started. See Appendix B for this code.

A developer may want an application to behave differently when it is run on the console versus in a remote session. Animation, sounds, and peripheral access might be enabled for the console session. The code found in Appendix C can be used to detect what type of session the application execution request is being initiated from. Windows 2000 or Service Pack 4 for Windows NT Server 4.0, Terminal Server Edition, is required for this code to function.

Many services attempt to display simple status information to the active user by displaying a message box. This is typically done by calling one of the Win32® MessageBox* APIs in conjunction with the MB_SERVICE_NOTIFICATION flag. However, on a Terminal Services-enabled system, these pop-up message boxes will usually be displayed on the console, because that is the session in which the service is running. This can be a problem for applications running in remote sessions that make Remote Procedure Call (RPC) calls into the service and then wait for the user to respond to the service's pop-up window. Because this pop-up window appears on the console, the application running in the remote session appears to hang. In the Windows 2000 operating system, RPC calls into a service keep track of the session that initiated the RPC call.
The Win32 MessageBox* APIs have been enhanced to look for this session information when the MB_SERVICE_NOTIFICATION flag is specified. Before making the MessageBox* API call, the service must impersonate the calling client. The code found in Appendix D can be used in RPC-based services that need to display pop-up windows to their calling clients.

Many applications use background tasks to provide a mechanism for handling low-priority tasks in a single-user environment. In the Terminal Services environment, the scheduler is generally optimized for interactive responsiveness of foreground tasks (more like Windows NT Workstation or Windows 2000 Professional than Windows NT or Windows 2000 Server), though under Windows 2000 Terminal Services the administrator can adjust this, as noted below. Also, the demands of running many interactive sessions on a single server required some architectural changes to the kernel that resulted in a smaller system cache. This means that, when Terminal Services is enabled, one user's background task will compete for CPU cycles with another user's foreground tasks. When multiple users are running both foreground and background tasks, the CPU demands are much higher than when all users are running only foreground tasks. This situation typically arises when a Terminal Server is also being used as an applications server to host a client-server application such as Microsoft SQL Server™ or Microsoft Exchange, perhaps in a branch office environment. It is generally not a performance issue when the Terminal Server has only a few users attached, but it could become an issue when the server is trying to serve many users. Microsoft recommends that Terminal Servers be dedicated to serving client applications, but customers sometimes wish to run the Terminal Server as a multipurpose applications server. To maximize CPU availability for all users, application developers should create efficient background tasks that are not resource-intensive, or turn off background tasks when Terminal Services is running.

In Windows 2000 Server, the administrator is able to choose whether to give more priority to background or foreground processes even when Terminal Services is enabled. This allows more granular performance tuning by administrators for their particular network and applications configuration.

Threads provide a convenient way of allowing an application to maximize its usage of CPU resources in a system, especially in a multiple-processor configuration. When this same technique is used in a Terminal Services environment, the thread demands are intensified. In the Terminal Services environment, multiple users are running multithreaded applications, and all of the threads for all of the users compete for the central CPU resources of that system. With this in mind, you should tune and balance application thread usage for the multiuser, multiprocessor Terminal Services environment. In Windows 2000 and future versions, operating system capabilities such as I/O completion ports and thread pooling can help the system use multiple threads more efficiently. I/O completion ports allow multiple process threads that are reading a system port to be pooled or reduced to a single thread per processor instead of requiring a single thread per individual process.
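As a minimal sketch of the completion-port idea (illustrative only, not from the original paper), the following creates a port whose concurrency value caps the number of simultaneously running worker threads at one per processor:

#include <windows.h>

HANDLE CreateBalancedCompletionPort(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    // The final parameter is the concurrency value: the kernel wakes at
    // most this many threads at a time to service completed I/O.
    return CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0,
                                  si.dwNumberOfProcessors);
}

Worker threads then block in GetQueuedCompletionStatus on this port, so a small pool of threads can serve many concurrent requests instead of dedicating one thread to each request.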
Splash screens—graphical product, company, or user information that appears while an application is starting—work well in local video environments because of the speed of delivery. When that same splash screen is transmitted to a Terminal Services desktop client over the network, the transmission consumes extra network bandwidth and forces the user to wait before accessing the application. Limit the use of splash screens in order to speed up the application start-up process and to enhance the user experience. Particularly for remote access, reducing the size of the bitmap and testing at lower network speeds, for example, over a 28.8-kilobit-per-second (Kbps) modem, ensures the best user experience.

When a program uses on-screen animation, that animation consumes both CPU time and video bandwidth. This has several ramifications in the Terminal Services environment. First, the resource consumption of animation affects other users; while staring at an animated icon on the screen, the individual user is depriving others of CPU access and network bandwidth. Second, the actual effect of the animation is compromised because the video output is being rerouted over the network. Therefore, to decrease network activity and to improve the user experience, keep the use of animation to a minimum. For optimal performance and a feature-rich user experience, do not run animated or bitmap-intensive applications in a Terminal Services remote session. Allow these applications to execute only when they detect a local Windows-based desktop operating system, such as Windows 2000 Professional or Windows NT Workstation 4.0, or when they are running on the console.

Many programmers are accustomed to the speed of a local video subsystem—a local subsystem is typically fast enough to prevent the user from seeing multiple images or windows being overlaid on the screen. This is not the case when the video stream flows over a network connection. In this case, the user will see (and be frustrated by) the amount of time it takes to render the final screen. Applications should avoid direct input or output to the video display. If an application needs to read bits from the screen, it should maintain a separate, off-screen copy of the video buffer. Similarly, if an application needs to do elaborate screen output, such as overlaying several images to arrive at a final composite screen, the application should do that work in an off-screen buffer and then send the results to the actual video buffer (see the sketch after this section).

User input and prompts should be handled by foreground applications and not by services called by those applications. In the Terminal Services environment, if a user runs an application that calls a service that requests console input, the application will appear to stop or hang until the input is satisfied from the server console. Such problems are particularly evident when Terminal Services is being used for Remote Administration tasks.
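The off-screen composition advice above translates into the classic double-buffering pattern. The sketch below (illustrative, not from the original paper) composes into a memory DC and pushes the finished frame to the screen with a single BitBlt, so the remote session transmits one update instead of every intermediate overlay:

#include <windows.h>

void PaintComposite(HWND hWnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hWnd, &ps);
    RECT rc;
    GetClientRect(hWnd, &rc);

    // Build the composite entirely off screen...
    HDC hdcMem = CreateCompatibleDC(hdc);
    HBITMAP hbm = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
    HBITMAP hbmOld = (HBITMAP)SelectObject(hdcMem, hbm);

    FillRect(hdcMem, &rc, (HBRUSH)GetStockObject(WHITE_BRUSH));
    // ... overlay images, draw text, and so on, into hdcMem here ...

    // ...then send the finished frame to the display in one operation.
    BitBlt(hdc, 0, 0, rc.right, rc.bottom, hdcMem, 0, 0, SRCCOPY);

    SelectObject(hdcMem, hbmOld);
    DeleteObject(hbm);
    DeleteDC(hdcMem);
    EndPaint(hWnd, &ps);
}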
The automated setup procedure for many existing applications assumes that the application is being installed for a single user, and therefore updates the registry hive and desktop environment pertaining to just one user. If additional users need to access that application, either the entire package must be installed again or an administrator must manually copy information from the registry and desktop of one user to the other users. In the Terminal Services environment, however, more than one user will access an application on a system. Also, the user accessing the application (the end user) is typically not the same person who installed the application (the system administrator). Therefore, when installing under Terminal Services, it is appropriate to install applications in the default user environment common to all users.

For applications that enumerate global system resources, such as a function that returns the number of running processes on the system, be aware that these functions will return information for all sessions, not just the individual session. Also, this type of function should be limited so that multiple instances of it are not initiated (see the discussion above), because such functions are typically resource-intensive and multiple instances will quickly degrade other users' performance.

The Microsoft Foundation Class Library (MFC) has a long list of tried-and-true classes that perform a wide variety of tasks. Most of these classes work well in the Terminal Services environment, usually much better than re-engineered solutions do. A good example is the class available to provide context-sensitive Help text—Help text that appears on-screen when the mouse pointer moves over a button or menu item. If an application uses the MFC implementation to provide this feature, it works well on the Terminal Services client. But if the application implements this feature using dialog boxes or an alternate approach, the final result may not function well in the Terminal Services environment.

The Windows 2000 Server operating environment has specific requirements for hosting MS-DOS-based and 16-bit Windows-based applications. These requirements also apply to the Terminal Services environment—any MS-DOS or 16-bit Windows-based application that won't run on Windows 2000 Server or Windows NT Server 4.0 also won't run on Windows 2000 Server with Terminal Services enabled, or on Windows NT Server 4.0, Terminal Server Edition. When Terminal Services is enabled, the system is more sensitive to ill-behaved legacy applications than the traditional Windows client/server environment is, because a single ill-behaved application can affect all users on the Terminal Services network. Predicting which legacy applications will work well in the Terminal Services environment and which will not is difficult. In most cases, the only way to determine the feasibility of a specific application is to test it in the Terminal Services environment. However, some application behaviors and origins are known to be incompatible with or detrimental to the Terminal Services environment.

The amount of work it will take to optimize an application for a Terminal Services environment varies greatly from one application to another. If an application was designed to operate in a multiuser environment, only minor changes will be needed to turn it into an application that is optimized for Terminal Services in Windows 2000 or Windows NT Server, Terminal Server Edition, version 4.0. Single-user desktop applications may require more modification. This document has presented a variety of guidelines and suggestions for fine-tuning applications for the Terminal Services environment. All of these guidelines and suggestions should be taken into consideration when you develop 32-bit applications; however, you should remember in particular that multiple users will run your application simultaneously, that all sessions share the server's processor, memory, and disk resources, and that all video output travels over a network connection. By taking these points into consideration, developers can make the most of their applications in both the traditional Windows-based client/server environment and in the Windows 2000 Terminal Services environment.
For additional information on porting 16-bit applications to the 32-bit Windows-based environment, see the MSDN Web site. To download any of the white papers referenced in this article, or to find the latest information on Windows 2000 Terminal Services, see. For information on Windows NT Server 4.0, Terminal Server Edition, check out the Microsoft World Wide Web site.

The following is the code for IsTerminalServicesEnabled, which can be used to detect whether Terminal Services is enabled. It is compatible with all Win32 platforms. (Note: if your application is designed to run only on the Windows 2000 platform, a simplified version of this code is provided below.)

// This function compares the passed in "suite name" string
// to the product suite information stored in the registry.
// This only works on the Terminal Server 4.0 platform.
BOOL ValidateProductSuite (LPSTR SuiteName)
{
    BOOL rVal = FALSE;
    LONG Rslt;
    HKEY hKey = NULL;
    DWORD Type = 0;
    DWORD Size = 0;
    LPSTR ProductSuite = NULL;
    LPSTR p;

    Rslt = RegOpenKeyA(
        HKEY_LOCAL_MACHINE,
        "System\\CurrentControlSet\\Control\\ProductOptions",
        &hKey
        );
    if (Rslt != ERROR_SUCCESS)
        goto exit;

    Rslt = RegQueryValueExA( hKey, "ProductSuite", NULL, &Type, NULL, &Size );
    if (Rslt != ERROR_SUCCESS || !Size)
        goto exit;

    ProductSuite = (LPSTR) LocalAlloc( LPTR, Size );
    if (!ProductSuite)
        goto exit;

    Rslt = RegQueryValueExA( hKey, "ProductSuite", NULL, &Type,
                             (LPBYTE) ProductSuite, &Size );
    if (Rslt != ERROR_SUCCESS || Type != REG_MULTI_SZ)
        goto exit;

    p = ProductSuite;
    while (*p)
    {
        if (lstrcmpA( p, SuiteName ) == 0)
        {
            rVal = TRUE;
            break;
        }
        p += (lstrlenA( p ) + 1);
    }

exit:
    if (ProductSuite)
        LocalFree( ProductSuite );

    if (hKey)
        RegCloseKey( hKey );

    return rVal;
}

// This function performs the basic check to see if
// the platform on which it is running is Terminal
// services enabled. Note, this code is compatible on
// all Win32 platforms. For the Windows 2000 platform
// we perform a "lazy" bind to the new product suite
// APIs that were first introduced on that platform.
BOOL IsTerminalServicesEnabled( VOID )
{
    BOOL    bResult = FALSE;    // assume Terminal Services is not enabled
    DWORD   dwVersion;
    OSVERSIONINFOEXA osVersionInfo;
    DWORDLONG dwlConditionMask = 0;
    HMODULE hmodK32 = NULL;
    HMODULE hmodNtDll = NULL;
    typedef ULONGLONG (*PFnVerSetConditionMask)(ULONGLONG,ULONG,UCHAR);
    typedef BOOL (*PFnVerifyVersionInfoA)(POSVERSIONINFOEXA, DWORD, DWORDLONG);
    PFnVerSetConditionMask pfnVerSetConditionMask;
    PFnVerifyVersionInfoA pfnVerifyVersionInfoA;

    dwVersion = GetVersion();

    // are we running NT ?
    if (!(dwVersion & 0x80000000))
    {
        // Is it Windows 2000 (NT 5.0) or greater ?
        if (LOBYTE(LOWORD(dwVersion)) > 4)
        {
            // In Windows 2000 we need to use the Product Suite APIs
            // Don't static link because it won't load on non-Win2000 systems
            hmodNtDll = GetModuleHandleA( "ntdll.dll" );
            if (hmodNtDll != NULL)
            {
                pfnVerSetConditionMask =
                    (PFnVerSetConditionMask)GetProcAddress( hmodNtDll, "VerSetConditionMask" );
                if (pfnVerSetConditionMask != NULL)
                {
                    dwlConditionMask =
                        (*pfnVerSetConditionMask)(dwlConditionMask, VER_SUITENAME, VER_AND);

                    hmodK32 = GetModuleHandleA( "KERNEL32.DLL" );
                    if (hmodK32 != NULL)
                    {
                        pfnVerifyVersionInfoA =
                            (PFnVerifyVersionInfoA)GetProcAddress( hmodK32, "VerifyVersionInfoA" );
                        if (pfnVerifyVersionInfoA != NULL)
                        {
                            ZeroMemory(&osVersionInfo, sizeof(osVersionInfo));
                            osVersionInfo.dwOSVersionInfoSize = sizeof(osVersionInfo);
                            osVersionInfo.wSuiteMask = VER_SUITE_TERMINAL;
                            bResult = (*pfnVerifyVersionInfoA)(
                                          &osVersionInfo,
                                          VER_SUITENAME,
                                          dwlConditionMask);
                        }
                    }
                }
            }
        }
        else
        {
            // This is NT 4.0 or older
            bResult = ValidateProductSuite( "Terminal Server" );
        }
    }

    return bResult;
}

Here is a sample program that calls the IsTerminalServicesEnabled function in order to display a pop-up window indicating whether Terminal Services is enabled.

int WINAPI WinMain(
    HINSTANCE hInstance,      // handle to current instance
    HINSTANCE hPrevInstance,  // handle to previous instance
    LPSTR lpCmdLine,          // pointer to command line
    int nCmdShow              // show state of window
    )
{
    BOOL fIsTerminalServer;

    UNREFERENCED_PARAMETER (hInstance);
    UNREFERENCED_PARAMETER (hPrevInstance);
    UNREFERENCED_PARAMETER (lpCmdLine);
    UNREFERENCED_PARAMETER (nCmdShow);

    fIsTerminalServer = IsTerminalServicesEnabled();

    if (fIsTerminalServer)
        MessageBoxA( NULL, "Terminal Services is running.", "Status", MB_OK );
    else
        MessageBoxA( NULL, "Not a Terminal Services box.", "Status", MB_OK );

    return 0;
}

If your application or service runs exclusively on the Windows 2000 operating system, you can simplify the IsTerminalServicesEnabled function by directly linking with the Windows 2000 product suite API VerifyVersionInfo (defined in WINBASE.H) and specifying a wSuiteMask of VER_SUITE_TERMINAL (defined in WINNT.H). This simplified code is as follows:

#include <windows.h>
#include <stdio.h>

// This code will only work on the Windows 2000 platform
BOOL IsTerminalServicesEnabled( VOID )
{
    OSVERSIONINFOEX osVersionInfo;
    DWORDLONG dwlConditionMask = 0;

    ZeroMemory( &osVersionInfo, sizeof(osVersionInfo) );
    osVersionInfo.dwOSVersionInfoSize = sizeof(osVersionInfo);
    osVersionInfo.wSuiteMask = VER_SUITE_TERMINAL;
    VER_SET_CONDITION( dwlConditionMask, VER_SUITENAME, VER_AND );

    return VerifyVersionInfo( &osVersionInfo, VER_SUITENAME, dwlConditionMask );
}

In the Windows 2000 operating system, Terminal Services can be enabled in two modes: Application Server and Remote Administration. Remote Administration mode is a two-client, limited form of Terminal Services that is specifically intended for remote administrative access to a server. Terminal Services enhanced application compatibility code is disabled in this mode in order to simplify installation of server-class applications. Note that all of the previous code examples will return TRUE in this mode as well, because they only check to see that Terminal Services is enabled. The following code distinguishes the Remote Administration mode of Terminal Services. Note that this is the same Windows 2000 example, except that the wSuiteMask value is set to VER_SUITE_SINGLEUSERTS (defined in WINNT.H).

BOOL IsTerminalServicesRemoteAdminMode( VOID )
{
    OSVERSIONINFOEX osVersionInfo;
    DWORDLONG dwlConditionMask = 0;

    ZeroMemory( &osVersionInfo, sizeof(osVersionInfo) );
    osVersionInfo.dwOSVersionInfoSize = sizeof(osVersionInfo);
    osVersionInfo.wSuiteMask = VER_SUITE_SINGLEUSERTS;
    VER_SET_CONDITION( dwlConditionMask, VER_SUITENAME, VER_AND );

    return VerifyVersionInfo( &osVersionInfo, VER_SUITENAME, dwlConditionMask );
}

The following code example shows how to limit your application to running only a single instance under Terminal Services.
The code is Win32-compatible, so it can run on all platforms. The key is to use a mutex (mutual exclusion) object to determine whether the application is already running. The old practice of using a window handle does not work under Terminal Services, because window handles are specific to a session; there are no system-global window handles. The code uses the Terminal Services multisession object manager Global\ prefix to force the mutex into the system-wide global name space.

//
// Global Variables
//
BOOL   g_fIsTerminalServer = FALSE;
TCHAR  g_szAppName[]       = TEXT("Generic");
HANDLE g_hAppRunningMutex  = NULL;

//
// IsAppAlreadyRunning
//
// This routine checks to see if the application is already running.
// The fOnePerSystem flag is used for Terminal Server, thus allowing
// you to limit the running instances to one per system or one per
// user session.
//
// NOTE: The g_hAppRunningMutex handle must remain open while your
// application is running.
//
BOOL IsAppAlreadyRunning( PCTSTR pszAppName, BOOL fOnePerSystem )
{
    TCHAR szMutexName[MAX_PATH];

    ASSERT(pszAppName != NULL);
    ASSERT(g_hAppRunningMutex == NULL);

    // Create a mutex in the global name space to see if an instance
    // of this application is already running. If so, exit the app.
    *szMutexName = TEXT('\0');

    if (fOnePerSystem && g_fIsTerminalServer)
    {
        //
        // We're running on Terminal Server, so prefix the mutex name
        // with Global\ to force it into the system global name space
        //
        lstrcpy(szMutexName, TEXT("Global\\"));
    }

    lstrcat(szMutexName, pszAppName);
    lstrcat(szMutexName, TEXT(" is running"));

    g_hAppRunningMutex = CreateMutex(NULL, FALSE, szMutexName);
    if (g_hAppRunningMutex != NULL)
    {
        //
        // Make sure we are the only process with a handle to our named mutex.
        //
        if (GetLastError() == ERROR_ALREADY_EXISTS)
        {
            // The app is already running
            CloseHandle(g_hAppRunningMutex);
            g_hAppRunningMutex = NULL;
        }
    }

    return (g_hAppRunningMutex == NULL);
}

//
// Cleanup routine. This should be called right before the application
// exits. Once this routine closes the g_hAppRunningMutex handle,
// another instance of your application will be allowed to run.
//
VOID LetAnotherInstanceRun( VOID )
{
    if (g_hAppRunningMutex != NULL)
    {
        CloseHandle(g_hAppRunningMutex);
        g_hAppRunningMutex = NULL;
    }
}

//
// Example of how these routines would be called from WinMain
//
WinMain( ... )
{
    //
    // First, determine if we're running on a TS enabled system.
    //
    g_fIsTerminalServer = IsTerminalServicesEnabled();

    //
    // Check to see if another instance is running
    //
    if (IsAppAlreadyRunning(g_szAppName, TRUE))
    {
        // Display message box to user to let them know
        // that only one instance is allowed.
        return;
    }

    ....

    //
    // Close the App's mutex, thus allowing another instance
    // to run.
    //
    LetAnotherInstanceRun();
}

This code can be used to detect what type of Terminal Services session the application execution request is being initiated from. The Windows 2000 operating system or Service Pack 4 for Windows NT Server 4.0, Terminal Server Edition, is required to run the code. On all other Win32 platforms, this code will always indicate that the process is running on the console. The following code is compatible with all Win32 platforms; however, the SM_REMOTESESSION value (defined in WINUSER.H) is only defined when you compile with a WINVER value >= 5.0.

if (GetSystemMetrics(SM_REMOTESESSION))
{
    // App is running on a remote session.
}
else
{
    // App is running on the console.
}
In the Windows 2000 operating system, the MessageBox* APIs check for the presence of an impersonation token when the MB_SERVICE_NOTIFICATION flag is specified. The impersonation token contains the session ID of the client that is being impersonated, and that is the session in which the pop-up message box is displayed. RPC-based services can use this MessageBox* feature in conjunction with client impersonation to make sure the message box is displayed in the client's session. The following code demonstrates how an RPC service can take advantage of this feature. (Note: this works with any form of impersonation, not just with RPC services.)

RPC_STATUS RpcStatus;
int iResult;

//
// Impersonate the client. We do this before we call MessageBox*
// because the impersonation token contains the session id of
// our calling client.
//
RpcStatus = RpcImpersonateClient( NULL );
if (RpcStatus != RPC_S_OK)
{
    return( STATUS_CANNOT_IMPERSONATE );
}

//
// Now that we're impersonating, call the MessageBox* API with
// the MB_SERVICE_NOTIFICATION flag. This will redirect the popup
// to the client's session.
//
iResult = MessageBox( NULL,
                      TEXT("Message to display to user in remote session"),
                      TEXT("Caption of Message Box"),
                      MB_OK | MB_SERVICE_NOTIFICATION );

//
// Stop impersonating.
//
RpcRevertToSelf();
http://msdn.microsoft.com/en-us/library/ms811523.aspx
Use static imports rarely

Static imports allow the static items of another class to be referenced without qualification. Used indiscriminately, this will likely make code more difficult to understand, not easier.

Example

import java.util.*;
import static java.util.Collections.*;

public final class StaticImporter {

  public static void main(String... aArgs){
    List<String> things = new ArrayList<String>();
    things.add("blah");

    //This looks like a simple call of a method belonging to this class :
    List<String> syncThings = synchronizedList(things);
    //However, it actually resolves to :
    //List<String> syncThings = Collections.synchronizedList(things);
  }
}
http://www.javapractices.com/topic/TopicAction.do%3FId=195
› Forums › IIS 7.0 › IIS7 - Configuration & Scripting › Microsoft.Web.Administration namespace?

I too would like to know this. I am looking at IIS7 and Vista/Longhorn from the perspective of a web hoster. I am impressed with the ability to programmatically control webs and ftp sites and sql databases, but going forward, the ability to control dns, pop, smtp would make this a complete solution. I know this isn't the forum for these topics, but perhaps some of you top IIS guns have some insight you would like to share. Thanks, Robert Allan

I see no one addresses the question that you had about a Microsoft.DNS.Administration. Does anyone know when this may become available? Erik

There is a System.Net.Dns class that exposes some features from DNS, but not necessarily a lot of functionality. Could you share what operations you are looking to use from a DNS class? Thanks

I have a site that requires checking the existence of a node within a zone. If it does not exist, then I create that node. This is for a site that allows logged-in users to create a sub name like: MyCompany.EriksNewSite.Com. This way it looks a little more personalized.

The service I wrote is fairly straightforward, but getting the correct data back out of the stream from the response of a dns command is a B**CH to say the least.

Service:

public class AFCCDnsManager : MarshalByRefObject, AFCCDns.IAFCCDnsManager
{
    public AFCCDnsManager() { }

    /// <summary>
    /// Checks to see if a zone contains a particular node
    /// </summary>
    /// <param name="server">dns server ( use ip when possible )</param>
    /// <param name="zone">domain name that contains the node</param>
    /// <param name="node">the prefix node in question</param>
    /// <returns>true if has records</returns>
    public bool CheckIfDomainZoneNodeHasRecords(string server, string zone, string node)
    {
        Process myProcess = null;
        ProcessStartInfo myProcessStartInfo = null;
        StreamReader myStreamReader = null;
        string cmdFailed = string.Empty;
        string dnsCmd = string.Empty;
        StringBuilder output = null;

        try
        {
            cmdFailed = "DNS Server failed";
            //string cmdCompleted = "command completed successfully";
            //Command failed: DNS_ERROR_NAME_DOES_NOT_EXIST 9714
            //cmd     server        cmd           zone         node
            //dnscmd  afcc-inc-ns1  /enumrecords  AFCCINC.COM  handlers
            dnsCmd = string.Format("dnscmd {0} /enumrecords {1} {2}", server, zone, node);
            output = new StringBuilder();

            myProcess = new Process();
            myProcessStartInfo = new ProcessStartInfo("cmd.exe");
            myProcessStartInfo.UseShellExecute = false;
            myProcessStartInfo.CreateNoWindow = true;
            myProcessStartInfo.RedirectStandardOutput = true;
            myProcessStartInfo.RedirectStandardInput = true;
            myProcessStartInfo.Arguments = dnsCmd;
            myProcess.StartInfo = myProcessStartInfo;
            myProcess.Start();

            myStreamReader = myProcess.StandardOutput;
            do
            {
                output.Append(myStreamReader.ReadLine() + "\n");
            } while (myStreamReader.Peek() >= 0);

            myProcess.StandardInput.WriteLine(dnsCmd);
            do
            {
                output.Append(myStreamReader.ReadLine());
            } while (myStreamReader.Peek() >= 0);

            myStreamReader.Close();
            myProcess.Close();

            Console.WriteLine(output.ToString());

            if (output.ToString().ToLower().Contains(cmdFailed.ToLower()))
                return false;
            return true;
        }
        catch (Exception ex)
        {
            System.Net.Mail.MailMessage mm = null;
            SmtpClient smtp = null;
            mm = new System.Net.Mail.MailMessage(
                new System.Net.Mail.MailAddress("[email protected]"),
                new System.Net.Mail.MailAddress("[email protected]"));
            mm.Subject = "Problems with the AFCC Dns Manager Service";
            mm.Body = "Message: " + Environment.NewLine + ex.Message + Environment.NewLine + Environment.NewLine;
            mm.Body += "Source : " + Environment.NewLine + ex.Source;
            smtp = new SmtpClient("smtp.afccinc.com");
            smtp.Send(mm);
            mm = null;
            smtp = null;
            return true;
        }
        finally
        {
            myProcess = null;
            myProcessStartInfo = null;
            myStreamReader = null;
            cmdFailed = string.Empty;
            dnsCmd = string.Empty;
            output = null;
        }
    }
}

From the web app:

using System;

namespace AFCCDns
{
    public interface IAFCCDnsManager
    {
        bool CheckIfDomainZoneNodeHasRecords(string server, string zone, string node);
    }
}

bool hasRecords = true;

//select channel to communicate with server
ChannelServices.RegisterChannel(new TcpClientChannel(), false);
AFCCDns.IAFCCDnsManager remObject = (AFCCDns.IAFCCDnsManager)RemotingServices.Connect(
    typeof(AFCCDns.IAFCCDnsManager), "tcp://111.11.29.111:8875/AFCCDnsQuerry");

if (remObject == null)
{
    Console.WriteLine("cannot locate server");
}
else
{
    hasRecords = remObject.CheckIfDomainZoneNodeHasRecords(
        "111.11.29.111", "HomeBuildersBlog.Com", "MyCompanyName");
    Console.WriteLine(hasRecords);
}
return hasRecords;

I am needing to query the DNS server for records. Basically, if there is a way MS can give us an object that will have the same commands as the DNS command .exe, then we are good. This is what we are in need of. BIG TIME!
http://forums.iis.net/p/993229/1879966.aspx
Yesno: the other side of the Gödelian coin

Any universally powerful programming language must either offer consistent semantics, or allow the possibility of programs not halting. … Almost all programming languages to date choose consistency. … Yesno is an inconsistent and complete programming language, and every program returns a value.

I think that in its current state it doesn't quite live up to what it promises. It relies on some value being returned in every loop branch (because, afaict, it doesn't go down the 'wrong' side of a loop branch). So a simple program like:

foo := [x | if cond: (x == 0) then: [:1] else [:foo doWith: x]].
foo doWith: 1.

would hang. And then you'd have to apply the recommended pattern at the bottom (where you 'return' all possible values). Now, a naive solution (and this is already taking for granted that we have typing) would then have to be:

pseudo: for i in Int: return i. return (foo doWith: 1).

I think the idea in itself is interesting (and conceptually it seems to be Smalltalk with non-deterministic execution), though afaict it doesn't quite live up to its idea yet.

Shouldn't that be: Any universally powerful programming language must offer consistent semantics and allow the possibility of programs not halting?

(1) The author does not seem to distinguish between two cases: "yes AND no" and "yes OR no". It seems that, in Yesno, the latter is the case. Programs returning "yes OR no" are obviously not inconsistent, so it is not obvious in what sense the Yesno language should yield "inconsistent" results.

(2) The premise of Yesno, that all programs return values in the form "return 1", seems to be quite primitive. Supposing we give Yesno the ability to construct programs on the fly, we could write something like:

construct program X.
return (evaluate X).

I think that all of us would be delighted to see the output of Yesno for the case when the construction of X is a complicated piece of code or when X is a non-halting program.
http://lambda-the-ultimate.org/node/2180
Encapsulate collections

In general, Collections are not immutable objects. As such, one must often exercise care that collection fields are not unintentionally exposed to the caller. One technique is to define a set of related methods which prevent the caller from directly using the underlying collection, such as:

- addThing(Thing)
- removeThing(Thing)
- getThings() - return an unmodifiable Collection

Example 1

import java.util.*;

public final class SoccerTeam {

  public SoccerTeam(String aTeamName, String aHeadCoachName){
    //..elided
  }

  public void addPlayer(Player aPlayer){
    fPlayers.add(aPlayer);
  }

  public void removePlayer(Player aPlayer){
    fPlayers.remove(aPlayer);
  }

  public Set<Player> getPlayers(){
    return Collections.unmodifiableSet(fPlayers);
  }

  //..elided

  // PRIVATE //
  private Set<Player> fPlayers = new LinkedHashSet<Player>();
  private String fTeamName;
  private String fHeadCoachName;
}

Example 2

BaseballTeam is an example of exposing the collection directly to the caller. This is not necessarily an incorrect design, but it is riskier, since the contents of the collection can be directly changed by both BaseballTeam and its caller:

import java.util.*;

public final class BaseballTeam {

  public BaseballTeam(String aTeamName, String aHeadCoachName){
    //..elided
  }

  public void setPlayers(Set<Player> aPlayers){
    fPlayers = aPlayers;
  }

  public Set<Player> getPlayers(){
    return fPlayers;
  }

  //..elided

  // PRIVATE //
  private Set<Player> fPlayers;
  private String fTeamName;
  private String fHeadCoachName;
}
http://www.javapractices.com/topic/TopicAction.do%3FId=173
Java.lang.Double.compareTo() Method

Description

The java.lang.Double.compareTo() method compares two Double objects numerically. There are two ways in which comparisons performed by this method differ from those performed by the Java language numerical comparison operators (<, <=, ==, >=, >) when applied to primitive double values:

- Double.NaN is considered by this method to be equal to itself and greater than all other double values (including Double.POSITIVE_INFINITY).
- 0.0d is considered by this method to be greater than -0.0d.

Declaration

Following is the declaration for the java.lang.Double.compareTo() method:

public int compareTo(Double anotherDouble)

Parameters

anotherDouble -- This is the Double to be compared.

Return Value

This method returns the value 0 if anotherDouble is numerically equal to this Double; a value less than 0 if this Double is numerically less than anotherDouble; and a value greater than 0 if this Double is numerically greater than anotherDouble.

Exception

NA

Example

The following example shows the usage of the java.lang.Double.compareTo() method.

package com.tutorialspoint;

public class DoubleDemo {

   public static void main(String[] args) {

      // compares two Double objects numerically
      Double obj1 = new Double("8.5");
      Double obj2 = new Double("11.50");
      int retval = obj1.compareTo(obj2);

      if (retval > 0) {
         System.out.println("obj1 is greater than obj2");
      } else if (retval < 0) {
         System.out.println("obj1 is less than obj2");
      } else {
         System.out.println("obj1 is equal to obj2");
      }
   }
}

Running the program produces the following result:

obj1 is less than obj2
http://www.tutorialspoint.com/java/lang/double_compareto.htm
CC-MAIN-2016-50
en
refinedweb
There is none. I dare not give an "introduction" to Interop and risk exposing the extent of my ignorance. Please Google for Interop and you will find a better explanation of it than what I can provide.

As I was working on some of my recent projects, I had a need to work with C# and COM Interop extensively. I had to pass various varieties of information between the managed and unmanaged worlds and, naturally, I looked to Google so that I could reuse a.k.a. copy/paste the code from various sources. To my discomfort, I found the code I wanted, but not in a single location. It was spread all over the world, and I want to congregate all the information into a single webpage usable as a reference for anyone. The below is the first in a series of articles I intend to write regarding C#/ATL COM Interop. As I learn more and more, I will revise these articles. (At least, I hope to. But as God is my witness, I am a lazy slob!!!)

When I started with my project work, my manager knew that I was faking my resume about "extensive .Net experience". So, he was kind enough to give me a little task which involved debugging existing Interop code rather than writing new code. The first problem I faced with it was how to debug the code??? The COM code was written in one project and the C# code was written in another. They had different .sln files, and when I start one of them in the debugger, the breakpoints set in the other solution simply refused to hit!!! The DLLs loaded properly and code flow was happening the way it is written. But how am I supposed to debug the problem if the breakpoints are not hit!!!

One solution I tried was to run the C# solution file in the debugger when I needed to debug the C# side of the code, and the VC++ solution file when debugging the unmanaged code. This helped me only for a little while. There was very soon a need to debug them together, and the problem resurfaced.

The solution is to set the Debugger Type in Visual Studio. In the project properties > Debugging, there exists a small option called "Enable Unmanaged Debugging". Check this box and rerun the solution. Voila!!! You can now debug the COM code from the C# solution. See the image below for illustration.

If you are debugging from the unmanaged VC++ solution, then you will have a different option to set. See the image below to find it out.

These options carry drawbacks. Enabling "Unmanaged Debugging" in C# robs you of the facility to Edit-and-Continue in C#. Luckily, you get a message whenever you attempt such a thing. Check this out.

In VC++ the case is slightly different. Here, the default option is "Auto", which tells the debugger to debug the environment in which the EXE file is built. If the EXE file is built in the unmanaged environment, then you can walk through the COM code. If it is built in the managed environment, then you can walk through the C# code but not the COM code (even though you have launched the debugger from that very project). Making it "Mixed" helps you to walk across these worlds, but you can edit and continue only COM code.

This done, I was able to walk through the managed and unmanaged worlds and fixed the bug. My manager saw my work and was happy. "You are ready for the next task" he said. "You are going to learn more exciting things" he said. He left unsaid that he is rewarding my work with more work. Phew…..Life.

I geared up. This time, I am supposed to write a function in the COM component and call it from C#. The fun began….

The process of sending data between the two worlds is called Marshalling & Unmarshalling.
I want to marshal a single-dimensional array of real numbers. My IDL file declaration looks as shown below:

[id(1), helpstring("method NotGoodInterop")]
HRESULT NotGoodInterop([in] long nAraySize, [in] float *RealNumbersList);

And the prototype in the header file is:

STDMETHOD(NotGoodInterop)(long nAraySize, float *RealNumbersList);

Now calling this function in C# is as simple as:

float[] Numbers = new float[_NumbersCount];
_SomeClassObject = new ATLSimpleObjectDemoLib.SomeClassClass();

// I want to marshal the array starting from position 3.
_SomeClassObject.NotGoodInterop(2, ref Numbers[3]);

// I want to marshal the array starting from position 25.
_SomeClassObject.NotGoodInterop(3, ref Numbers[25]);

It is as simple as that. We call the function and we are done. The data crosses the worlds like a penguin crossing the ocean.

The reason for calling this NotGoodInterop is purely personal. I don't like this way of specifying the array size and the array's starting position. This can be very helpful, but it is confusing for me. So, when writing this example I have named it so, but it might not be bad. I leave it for you to decide. I personally prefer the way described below.

The IDL declaration is as below:

[id(2), helpstring("method PutRealNumbers")]
HRESULT PutRealNumbers([in] long nAraySize, [in, size_is(nAraySize)] float RealNumbersList[]);
[id(3), helpstring("method GetRealNumbers")]
HRESULT GetRealNumbers([in] long nAraySize, [ref, size_is(nAraySize)] float RealNumbersList[]);

Note the use of the size_is attribute in the IDL file. Please read more about it here and here. The second one is from Adam Nathan, and I am indebted to him for his excellent article. And the prototype in the header file is as below:

STDMETHOD(PutRealNumbers)(long nArraySize, float RealNumbersList[]);
STDMETHOD(GetRealNumbers)(long nArraySize, float RealNumbersList[]);

Calling this from C# is the same as in the case of NotGoodInterop. Repeating it here:

_SomeClassObject.PutRealNumbers(_NumbersCount - 3, ref Numbers[2]);
_SomeClassObject.GetRealNumbers(2, ref Numbers[4]);
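Before moving on, here is roughly what the C++ side of the size_is versions above might look like - a hedged sketch rather than the article's actual sample code, with m_Numbers assumed to be a std::vector<float> member used for storage:

// Assumed member: std::vector<float> m_Numbers;
STDMETHODIMP CSomeClass::PutRealNumbers(long nArraySize, float RealNumbersList[])
{
    if (RealNumbersList == NULL)
        return E_POINTER;

    // [in, size_is(nArraySize)] delivers a plain float buffer of the
    // stated length, so we can copy it directly.
    m_Numbers.assign(RealNumbersList, RealNumbersList + nArraySize);
    return S_OK;
}

STDMETHODIMP CSomeClass::GetRealNumbers(long nArraySize, float RealNumbersList[])
{
    if (RealNumbersList == NULL)
        return E_POINTER;

    // Fill the caller-supplied buffer, never past its declared size.
    long nCount = (nArraySize < (long)m_Numbers.size())
                      ? nArraySize : (long)m_Numbers.size();
    for (long inx = 0; inx < nCount; inx++)
        RealNumbersList[inx] = m_Numbers[inx];
    return S_OK;
}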
My manager saw my work and was happy. He rewarded me with more work.

Now I have to marshal a two-dimensional array of real numbers. This gets tricky. Marshalling multidimensional arrays is not the same as marshalling single-dimensional arrays. This is because, when we send the pointer to the array, the unmanaged world has no clue whatsoever about the dimensions of the array. In C++, a multi-dimensional array is actually stored in a single memory sequence. The compiler does a little magic when we use the arr[][] notation and passes on the appropriate memory location. In order to have this done appropriately, the compiler forces the developer to specify the array's column size unambiguously. That is to say, in C++ you cannot declare something like:

float fltArr[][] = new float[10][20]; // This is not possible

Don't even think about sending the data as float ** and specifying the array sizes in two explicit variables. If we have more than 3 arrays to marshal in the same function, this becomes very very cumbersome. If the arrays are all of different dimensions, then it is simply a pain.

Kim Kartavyam??? (That is Sanskrit for 'What is the solution'?)

Use SAFEARRAYs.

SafeArray is a very elegant way to marshal data across functions written in various programming languages. It binds very well with the CLR and, with a little pain at the language boundaries, we can achieve a cool way of marshalling data. Please read more about SafeArray here. Though for the rest of the discussion I will not assume you are familiar with SafeArray, I will not discuss it exhaustively either.

Using SafeArrays, my IDL declaration becomes simple:

[id(4), helpstring("method PutMultiDimensionalArray")]
HRESULT PutMultiDimensionalArray([in] SAFEARRAY(float) saNumbers);
[id(5), helpstring("method GetMultiDimensionalArray")]
HRESULT GetMultiDimensionalArray([out] SAFEARRAY(float) *saNumbers);

Please pay due attention to two facts here, both visible when you compare these declarations with the prototypes necessary in the header file:

STDMETHOD(PutMultiDimensionalArray)(SAFEARRAY* saNumbers);
STDMETHOD(GetMultiDimensionalArray)(SAFEARRAY **saNumbers);

The data type is not specified here for SAFEARRAY. Also, Put has a '*' and Get has '**' - one more than the IDL file.

Calling this from C# is deceivingly simple:

float[,] TwoDimNumbers = new float[2, 3]; // This is a two dimensional array
_SomeClassObject.PutMultiDimensionalArray(TwoDimNumbers);

Array TwoDimNumbers1; // Note the difference when I Get() data
_SomeClassObject.GetMultiDimensionalArray(out TwoDimNumbers1);

The fun is in the C++ method. We have to reach into this SAFEARRAY variable to get the data sent to us. Like I said, this can be a bit of a pain. Accessing data in a SAFEARRAY can be done in three ways. In the first, we achieve speed. In the second, we trade away some performance but can be sure that we are not accessing illegal data. The third is the combination of the bests of both.

The first two methods have some common groundwork to be done. I will cover this common area first and then move on to the individual methods. I will first verify the dimensions of the array. From C#, I have sent a two-dimensional array. The below code is in the C++ method:

UINT nDimensions = SafeArrayGetDim(saNumbers);
// nDimensions must be equal to 2.

VARTYPE vt;
SafeArrayGetVartype(saNumbers, &vt);
// From C# I've sent a float array, so vt must be equal to VT_R4.

LONG *LowerBounds = new LONG[nDimensions];
LONG *UpperBounds = new LONG[nDimensions];
for(UINT inx=1; inx<=nDimensions; inx++) // <------- Note: the loop begins with 1 here.
{
    _com_util::CheckError(SafeArrayGetLBound(saNumbers, inx, &LowerBounds[inx-1]));
    _com_util::CheckError(SafeArrayGetUBound(saNumbers, inx, &UpperBounds[inx-1]));
}

// Now get the array boundaries.
m_Dimension1Length = UpperBounds[0]-LowerBounds[0]+1;
m_Dimension2Length = UpperBounds[1]-LowerBounds[1]+1;

Method 1: I create a C++ array and copy the data from the SafeArray into this array. This saves me performance, especially if I have to repeatedly access the array contents. Please read this:

float *pfNumbers = NULL;
_com_util::CheckError(SafeArrayAccessData(saNumbers, (void HUGEP* FAR*)&pfNumbers));

float **CppArr = NULL;
CppArr = (float **)malloc(sizeof(float*)*m_Dimension1Length);
for(int inx=0; inx<m_Dimension1Length; inx++)
{
    CppArr[inx] = new float[m_Dimension2Length];
    for(int jnx=0; jnx<m_Dimension2Length; jnx++)
    {
        long SafeArrayIndex = jnx*m_Dimension1Length + inx;
        long CppArrayIndex = inx*m_Dimension2Length + jnx;
        // In SafeArray, the rank is reversed when storing. So, when we
        // construct our Cpp array, we have to calculate the appropriate
        // array index. That is what the above two lines do. This can be
        // avoided in method 2. But it carries a performance overhead.
        float f;
        f = pfNumbers[SafeArrayIndex];
        CppArr[inx][jnx] = f;
        m_vecFloatingNumbers.push_back(f);
    }
}
_com_util::CheckError(SafeArrayUnaccessData(saNumbers));

Note: It is not necessary to create a copy of the SafeArray into a C++ array. I do this only for demonstration purposes.
If you want to access pfNumbers directly for downstream computing, it is absolutely OK. Only remember to calculate the array index appropriately; else, you will end up accessing the wrong array location.

Method 2: Here we will access the array elements via the SafeArray. We will not get our hands dirty with the raw memory. This method is safe and gives a proper error-handling mechanism BUT consumes time for locking and unlocking the SafeArray when SafeArrayGetElement() is invoked. This can be a performance hit.

for(int inx=0; inx<m_Dimension1Length; inx++)
{
    for(int jnx=0; jnx<m_Dimension2Length; jnx++)
    {
        // LowerBound "can" be non Zero. Especially if the caller is written
        // in a language where arrays don't begin with Zero.
        // So, we add the LowerBounds[] to the index. And we are done.
        long ArrayIndex[2] = {LowerBounds[0]+inx, LowerBounds[1]+jnx};

        // Here we are unconcerned with the internal storage of SafeArray.
        // Simply call it with the index number and we get our data.
        // In method 1, we have to do the array index calculation ourselves.
        float f;
        _com_util::CheckError(SafeArrayGetElement(saNumbers, ArrayIndex, (void*)&f));
    }
}

This is for Put()ting the data. Getting the data is a corollary to this. Please see the sample project I have attached to this article. It gives you the complete documented code.

Very Important Note: Why do we need to bother with array index calculation when accessing data? Because SAFEARRAYs are designed to marshal data to and from all languages. Some languages have arrays as Row-Major and others have them as Column-Major. SafeArray has a standard way of storing them, which is Column-Major. Unfortunately, the method in which SafeArray stores arrays is not the same as the C++ Row-Major order. So, we have to worry about array index calculation.

I will cover the third method in more detail when marshalling strings. The only reason for dealing with it separately is that I learnt using this class when marshalling strings, and it became a habit for me to use it whenever dealing with strings. So, my sample code was written so, and hence I am explaining it there.

I was told to send an array of User IDs collected from a C# UI form to a database access component written in COM. This is almost exactly the same as marshalling arrays of floating-point numbers. The only difference is that the IDL file prototype contains BSTR as SAFEARRAY's data type.

The IDL declaration will be:

[id(6), helpstring("method PutStrings")]
HRESULT PutStrings([in] SAFEARRAY(BSTR) Strings);
[id(7), helpstring("method GetStrings")]
HRESULT GetStrings([out] SAFEARRAY(BSTR) *Strings);

And the prototype in the header file is:

STDMETHOD(PutStrings)(SAFEARRAY * Strings);
STDMETHOD(GetStrings)(SAFEARRAY **Strings);

Please compare this with the marshalling multi-dimensional arrays part of the code and you will see the similarities and differences. Calling from C# is also done the same way:

string[] Strings = new string[5];
_SomeClassObject.PutStrings(Strings);

And then we are done. On the C++ side, the processing is the same as in marshalling multi-dimensional real arrays, except that we deal with strings. I will not discuss them here again. You can try them on your own. The sample project I have attached contains these methods and you can try them to your heart's content.

Now for the third method, using the CComSafeArray wrapper:

std::vector<BSTR> vecStrings2;
CComSafeArray<BSTR> saBSTRs;
saBSTRs.CopyFrom(Strings);

LONG cElements = (LONG)saBSTRs.GetCount();
vecStrings2.clear();
for (LONG inx=0; inx<cElements; inx++)
{
    vecStrings2.push_back(saBSTRs[inx]);
}

As simple as that!!!
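For the Get() direction, here is a hedged sketch of how the C++ side might hand a string array back with CComSafeArray - my illustration, not the article's attached sample, with m_vecStrings assumed to be a std::vector<CComBSTR> member:

#include <atlsafe.h>   // CComSafeArray

// Assumed member: std::vector<CComBSTR> m_vecStrings;
STDMETHODIMP CSomeClass::GetStrings(SAFEARRAY** Strings)
{
    if (Strings == NULL)
        return E_POINTER;

    CComSafeArray<BSTR> saBSTRs((ULONG)m_vecStrings.size());
    for (LONG inx = 0; inx < (LONG)m_vecStrings.size(); inx++)
    {
        // SetAt copies the BSTR by default, so the vector keeps its own copy.
        saBSTRs.SetAt(inx, m_vecStrings[inx]);
    }

    *Strings = saBSTRs.Detach();   // transfer ownership to the caller
    return S_OK;
}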
This gives the simplicity and elegance of array-style data access and avoids all the hassles of lower bounds and upper bounds. I am not sure of the performance impact but, personally, I don't care in this case. The simplicity of the code means a lot more to me, and I am sure MS has incorporated all the necessary performance tweaks. A lot of SAFEARRAY-related coding can be avoided by using the CComSafeArray wrappers. Methods 1 & 2 are necessary only if we are using SAFEARRAY in C-style code.

Even though I've put the heading as marshalling structures & enums, I am going to discuss very little about it. I will explain the general case first and then list the points which I have left uncovered and the reasons for doing so.

I need to marshal a data structure and enum values. These I will declare in my IDL file as below:

typedef enum MyEnum
{
    Good,
    Bad,
    Ugly
} MyEnum;

typedef struct SData
{
    int Id;
    BSTR Name;
    MyEnum eEnumVal;
} Data;

I also declare a function which accepts these as input:

// Enum and structure
[id(8), helpstring("method SampleEnumAndStruct")] HRESULT SampleEnumAndStruct([in] MyEnum enumVal, [in] Data data);

The prototype in the header file becomes:

STDMETHOD(SampleEnumAndStruct)(MyEnum enumVal, Data data);

This is the standard declaration method in any IDL file. Now calling this from C# is a breeze. In fact, it does not feel any different at all:

ATLSimpleObjectDemoLib.SData data = new ATLSimpleObjectDemoLib.SData();
data.eEnumVal = ATLSimpleObjectDemoLib.MyEnum.Bad;
data.Id = 0;
data.Name = "Lee Van Cleef";
_SomeClassObject.SampleEnumAndStruct(ATLSimpleObjectDemoLib.MyEnum.Bad, data);

The implementation part in C++ is:

STDMETHODIMP CSomeClass::SampleEnumAndStruct(MyEnum enumVal, Data data)
{
    if(enumVal == Bad)
    {
        MessageBox(NULL, L"The Baddies was Lee Van Cleef", L"Did you know that?", MB_OK|MB_ICONQUESTION);
    }
    return S_OK;
}

Quite simple and straightforward. Isn't it???

To Dos: You might ask, what else is left. If Methods 1 and 2 are for C-style access of SAFEARRAY, what are we left with? Very little, actually. I will quickly run through it here, as this is the appropriate place to do so.

Consider a sample function which puts a uni-dimensional array of integers:

extern "C" void SamplePutFunction(int nArraySize, int * Arrays);

For calling this in C#, we first of all need to tell the compiler where it can find the DLL at runtime and also how the function is declared in C. We do that using the DllImport attribute in the C# file:

[DllImport("CStyleDLL.dll")]
public static unsafe extern void SamplePutFunction(int nArraySize, int* InputArray);

And now we can call this function from wherever required. Please note that the C# compiler will not check the validity of the function prototype you have written in the C# file against the DLL. If there is a mismatch, then you will get an exception, or worse. If you are importing more than one function from the same DLL, please remember that you have to write a DllImport for each function. Missing it for one function will not give a compiler error; you will only get an exception at runtime.

[DllImport("CStyleDLL.dll")]
public static unsafe extern void SamplePutFunction(int nArraySize, int* InputArray);

private void btnPutCStyleArray_Click(object sender, EventArgs e)
{
    int[] IntArray = new int[100];
    unsafe // <---- In 2008, this requires an explicit compiler option
    {
        fixed (int* pArray = IntArray) // <--- fixes the pointer during GC.
        {
            SamplePutFunction(IntArray.Length, pArray);
        }
    }
}

That is how the cookie crumbles.
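For completeness, here is a minimal sketch of what the native side of SamplePutFunction might look like; the article does not show it, so the body below is purely illustrative:

// CStyleDLL.cpp -- hypothetical implementation of the exported function.
extern "C" __declspec(dllexport) void SamplePutFunction(int nArraySize, int *Arrays)
{
    // Arrays points at the pinned managed buffer; it is only valid
    // for the duration of the call, so do not store the pointer.
    for(int inx = 0; inx < nArraySize; inx++)
    {
        Arrays[inx] = inx * 2; // fill with sample data for the caller to read back
    }
}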
Please pay close attention to the unsafe and fixed keywords in the C# snippet above. They can wreak havoc in coding, and I have spent sleepless nights chasing after the elusive bug.

Even though it seems unnecessary from the code above, unsafe and fixed code is especially helpful when we need to marshal a pointer to a structure to the C function. Using fixed is always advisable because it saves you from pointers being relocated during Garbage Collection. If you don't fix your pointer variable, you will run a potential risk of page faults and access violations, the perfect entry point for hackers.

In VS 2008, using the unsafe keyword in the code requires a compiler option to be set explicitly in the UI. Please see the below image.

What I have not mentioned here is that for all the COM components to be called from C#, we need the AxInterop and Interop DLLs. I believe these DLLs take the responsibility of doing the necessary operations for converting the C# arrays into SAFEARRAYs and then passing them to the COM functions. These DLLs are auto-generated when we add the COM component to the project references. Please see the below screenshots for how to add a COM component as a reference. From the C# project's References option choose "Add Reference". Choose the COM tab. Select the required COM component and click on OK. These steps create the AxInterop and Interop DLLs and put them in the appropriate folders.

But it is often required (in large software projects) that these DLLs be placed in locations other than the default ones and signed appropriately. For achieving this, please use the TlbImp.exe and aximp.exe utilities. Please see a sample use of these utilities here. Read more about them here and here.

TlbImp /silent /nologo /sysarray /publickey:"PublicKey.snk" /delaysign /out:"Interop.ComComponent.dll" /namespace:ComInteropDemo "InteropDemo.dll" "/asmversion:1.0.0.0"

aximp /silent "InteropDemo.dll" /out:"AxInterop.InteropDemo.dll" /rcw:"InteropDemo.dll" /publickey:"PublicKey.snk" /delaysign /silent /nologo

For the examples we have been discussing here, we don't need aximp. If we are creating an ActiveX component, then aximp is necessary. The snippets we have seen above are not ActiveX components, so we don't need it. I will try to deal with it in Part 2 of this series.

My manager was very happy. And what did he do? Please read the next article to find out what he did.
https://www.codeproject.com/Articles/31927/C-ATLCOM-Interop-code-snipperts-Part?msg=3756020
CC-MAIN-2016-50
en
refinedweb
import android
droid = android.Android()
code = droid.scanBarcode()
isbn = int(code['result']['SCAN_RESULT'])
url = "" % isbn
droid.startActivity('android.intent.action.VIEW', url)

🙂

52 Responses to Android barcode scanner in 6 lines of Python code

Hmmm, perhaps I should add an iPhone to my shopping list instead of that barcode scanner you recommended, eh Matt? Can I tell my wife you’re in favour of this purchase?! 😀

That is very cool, I can’t wait to try this!!

“make phone calls” are you mad 😉

text-to-speech is cool, but I much prefer speech-to-text on cellphones. Any S60 apps you know of, Matt? I mean, Symbian is the Linux of the mobile world. You must be interested in that OS as well.

Android phones are becoming more useful compared to the iPhone; the applications are more industrialized. I salute the Android developers for this.

hate to admit it – but seriously tempted by an Android, after the #o2fail pricing of the new iPhone. it’s great but I am a poor broke SEO bloke with a family … Excellent that I can get my hands ‘mucky’ enough without breaking it …

6 lines of Python backed by a STRONG built-in API! Without the heavy lifting of having droid.scanBarcode() built in, it definitely would be a much different proof of concept.

I’m with Gerry on the iPhone, absolutely love it, but 1) AT&T (on this side of the pond) really shot themselves in the foot with their $499/$699 pricing and 2) Trying to code for the iPhone is like trying to learn Chinese from a cow.

Gotta say that I’m looking forward to this: It’s going to be really useful to write quick scripts, rather than having to take on the Android SDK, since testing even very basic applications is ridiculously laborious when you’re only running a Linux netbook. That said, I only bought a G1 a week or so ago, and aside from the usual battery issues (for which I’ve already ordered a larger battery), I’m noticing that I’m getting multiple issues with freezing and rebooting, sometimes just after loading the OS; I’m fairly sure it’s an app or widget I’m using causing it, since booting into Safe Mode seems to alleviate any problems, but since there seems to be no obvious form of logging or system monitoring apps available through the OS, it’s impossible to tell which might be the cause without individually removing each app, or making a clean start, then reinstalling each app one by one and testing each time. I don’t suppose you, or any of your readers, have any tips for tracking down the cause of my woes? It’d be nice to have a stable phone before I start working on scripts 🙂

When I saw your post about the barcode scanner I thought to myself, there has got to be a way to get my G1 to do this. It has all sorts of other barcode scanning applications. To those above excited about the iphone pricing, I got my G1 from T-mobile for $99 with a 2 year commitment. I think you may have to be a new customer, but we easily talked the guy at the mall into coming down $150. You just have to be willing to haggle. And if the T-mobile store you go to tells you no, try an independent reseller at a kiosk. I can’t say enough good things about my G1, though. I’ve played with the iphone, and you couldn’t get me to switch. 🙂

Matt, So when are you going to get to the post where Android writes the books for me and I can retire? That’s what I call displacement technology:-) Morris

I like the bar code scanner app for the iphone, I wind up using it at best buy.

Damn. Cool. Not as cool as the garage door opening automagically, but… Damn. Any chance that tech will come to the iPhone soon?
seems it’s time to buy a G1 now :)) and start using bar code applications

This only emphasizes my point from a blog article I wrote: In the future everybody develops…

Does this work with those ISBN barcodes that end in an `X’ rather than a digit? Only I note you’re using `%d’ as a format specifier for the Google URL.

Some guys have used the Barcode reader built into Android (I believe it’s the ZXing library) to scan barcodes into Beep My Stuff (disclaimer: I coded and run BMS). I don’t think it’s in the app store yet but the code is open source

“Symbian is the Linux of the mobile world” Symbian is not Linux. Android is closest to Linux. iPhone has a *nix heart (BSD). Symbian is its own thing. And the hardest of the 3 to program for. But it’s been around a very long time.

I wouldn’t try to coerce it to an integer — you’re using it as an unmodified string value.

isbn = int(code['result']['SCAN_RESULT'])
url = "" % isbn

If you had an exception handler, I suppose it might make some sense. Anyways, knock that “6 lines” down to 5. 🙂

Yea, go buy a G1 so you can buy more things “more easy”.

I would like to see Intel’s OpenCV (Computer Vision) API implemented on the Android. So you could put your face on a Muscle Man or a Seal or something… Sony Camcorder style.

that code doesn’t tell me much; those are all encapsulated functions.

Since I renewed my site with Google apps, it has been giving me hell. not showing up in SERPs then reappearing like nothing. then saying that the site has expired, then going back to normal. now it’s sending me to a godaddy parked free page. the site is a solid music blog musicandartsblog.com

My friend has one and he loves it. Runs all kinds of crazy stuff on it. I’d love to get one as well.

Alex, that’s pretty weird. Having the same issue for my main term, there one day gone the next. Perhaps Matt could enlighten us?

I’ve been in technology and business for about 20 years now, and I love how new technologies continually appear. There is always something getting faster and better yet cheaper, and there is always someone finding a new way to apply it. Gotta love that!

Totally off subject, but I read an article that Google is looking at Twitter and may even display twitter returns in the serps. If that’s true, all I’ve got to say is “are you kidding me”? Many people, myself included, would not like to search through useless 140-character posts by narcissistic people when I’m looking for something. If I did, I would go to Twitter and search. If Google does this, you will be playing right into the hands of MS. I’ll stop using Google as I’m sure many others who find Twitter useless will.

Looking forward to trying this out! Thanks Matt

Is it possible to access the Speech Recognition engine in a similar manner using ASE? Specifically “RecognizerIntent”?

It’s pretty oversized. Let’s start with removing the useless formatting code

import android
droid = android.Android()
code = droid.scanBarcode()
url = "" + code['result']['SCAN_RESULT']
droid.startActivity('android.intent.action.VIEW', url)

We still have some useless assignments

import android
droid.startActivity('android.intent.action.VIEW', "" + android.Android().scanBarcode()['result']['SCAN_RESULT'])

Well, we still have an import. Let’s make it into a nice oneliner.

droid.startActivity('android.intent.action.VIEW', "" + __import__('android').Android().scanBarcode()['result']['SCAN_RESULT'])

(Nathan)

Wow, Nathan, you took an easily comprehensible script and turned it into a real mess. Missing the entire point of writing a script. Congrats.
Please stay away from all code in the future.

HI, Can someone post the complete working sample code for scanning a barcode using the script above?

I get a syntax error on line 2: Syntax error: "(" unexpected. I also get "import: permission denied" just before that. Any ideas?

I cannot make these six lines of code work.. It always comes up with a syntax error for the isbn that says something about a tuple. I’m using ZXing’s barcode scanner.. is that the problem?

Not able to start an activity. I have followed the same code. Can anyone tell how to start a simple activity?

This is what worked for me: I have installed the ZXing barcode scanner from the Market. Then I have this python code:
----------
import android
droid = android.Android()
(id, result, error) = droid.scanBarcode()
isbn = int(result['SCAN_RESULT'])
url = "" % isbn
droid.view(url)
--------
And it works! THANK YOU for this AWESOME POST!

hmm.. a new bug… Problem is that my browser does not work correctly at .. When I add the book to a shelf it’s not really done though I’ve pressed save… I don’t believe that this is something that ASE can bypass..

Okay.. this is not a six line code. But I am totally new at coding.. From the six lines and the ASE API about Android I have made this 77 line code that will allow you to scan books without starting the application all the time. You can find the code here : Know it can be shorter and prettier, but I think it has a good layout that explains everything. If you can help me debug how to get the sendMail() working it would be awesome.

Here’s what I’ve got now. Nothing above was working for me, and I figured it was the result of the scanBarcode function call, and I was right (see fix below). I’m on an HTC Incredible, and on Android 2.1. I hope this helps someone down the line:

import android
droid = android.Android()
(id, result, error) = droid.scanBarcode()
isbn = int(result['extras']['SCAN_RESULT'])
url = "" % isbn
droid.view(url)

hmm.. don’t know what happened at pastebin… you can download the script to your phone from this link:

Some things that helped me get it working. #1 I had to have the interpreter set to Python. Change it by going to Menu > View > Interpreters, then hit the menu button again to add Python. Somehow in the mad copying and pasting that I was doing, some things got erased. Go through and double check to make sure everything’s there. I ended up using Carl M’s code and changing the url to a different url, but it works perfectly for me on a Droid running 2.2 after about an hour of tinkering. Thanks for the article!

The original version failed to work for me, a complaint about using a string as an index for a tuple. The versions suggested by Carl M and Lasse Nørfeldt work for me… this may have something to do with the way particular barcode applications return their information?

Does anyone know if there is a URL parameter for adding a book straight to your Google library?

I’ve expanded on your script to allow adding the books automatically to your library through the gdata.books API. It isn’t quite 6 lines any more but it is still pretty simple. Cheers, Craig

Okay, so here’s my question…..I’ve been wondering if anyone made a scanner or application for phones that would scan a bar code of a book and tell you that you had already read or bought this book. I cannot tell you how many times I have bought the same book, because they’ve brought it out again, but in a new cover!!!! My mother, who’s 80, and her friends are also curious about this. It has to be simple, though.
I’ve been reading about the other phones that will scan bar codes, but I don’t understand half of what they’re saying, so…..is there an item (that is portable…you don’t hook it up to your computer to complete the scanning process) that will do this for us older people who are technically unable to mess around with apps and codes and whatever….and if not, Christmas is coming and boy, wouldn’t that be a great thing to sell to us older people?

Moira: There is a reference to beepmystuff.com in this thread, but they are closing the site. They recommend these services as more complete and crucially better supported products. Give them a try: Delicious Library, Library Thing and Shelfworthy

@ Peter (IMC) ewww Symbian is the Linux of phones? how about Symbian is the swamp of eternal despair of phones? can’t install: not signed; can’t install: certificate expired; pre-installed pdf reader takes up 30% of the screen for the UI with no full screen option; ever tried to make a playlist in the audio player? ouch… stoneage; 70mb PC sync software that is absolute crap; oh and the occasional OS crashes that suddenly reset everything including the date to 1980 or something? Was it really that iPhone and Android are so amazing, or was there just such an incredible vacuum that anything would have been good enough to fill the void? Well, I guess Android is pretty amazingly well thought out… thanks for the light

This is actually going to be really useful for me to scan books at thrift stores to check their prices, to see if I should buy them for resale on Ebay/Amazon. Very neat tool!

Anyone had luck with decoding ITF (Interleaved 2 of 5) barcodes using python scripting? I need to scan some lengths that the scanner does not read. Is there a way to send DecodeHints like “Allowed_Lengths” with Python scripting?

I have started learning Android Programming with Python! You can’t survive in the future without Android programming!

I get the following error: TypeError: list indices must be integers, not str. I used the following code:

import android
droid = android.Android()
(id, result, error) = droid.scanBarcode()
isbn = int(result['extras']['SCAN_RESULT'])
url = "" % isbn
droid.view(url)

The error comes up at the line isbn = int(result['extras']['SCAN_RESULT']). I’m using a Nexus with Android 2.3.4 (build GRJ22) and I have QR Droid as the barcode scanner app. The QR Droid intent opens when I run the script, but when the intent returns, this error is thrown and the script stops. Thanks, Vidhuran

In PHP:

<?php
require_once("Android.php");
$droid = new Android();
$code = $droid->scanBarcode();
$isbn = $code["result"]->extras->SCAN_RESULT;
$url = "".$isbn;
$droid->startActivity("android.intent.action.VIEW", $url);
?>

I had the same problem, Vidhuran. Switch lines 3 and 4 from your code to:

code = droid.scanBarcode()
isbn = int(code.result['extras']['SCAN_RESULT'])

I solved the problem:

import android
droid = android.Android()
(id, result, error) = droid.scanBarcode()
barcode = (result['extras'])
droid.view('' + barcode['SCAN_RESULT'])
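Pulling together the fixes from this thread, a consolidated version might look like the sketch below. It is an assumption, not a tested script: the key under which the scanner returns its result differs across Android/SL4A versions (some return result['SCAN_RESULT'], others result['extras']['SCAN_RESULT']), and the lookup URL was stripped from this copy of the page, so BOOK_URL is a placeholder you must fill in yourself.

import android

BOOK_URL = "http://example.com/books?isbn=%d"  # placeholder; the original URL was stripped

droid = android.Android()
(id, result, error) = droid.scanBarcode()
extras = result.get('extras', result)  # handle both result layouts seen in the thread
isbn = int(extras['SCAN_RESULT'])
droid.view(BOOK_URL % isbn)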
https://www.mattcutts.com/blog/android-barcode-scanner/
CC-MAIN-2016-50
en
refinedweb
Well, perhaps this isn't the sole reason, but nonetheless it's a more than feasible idea these days to have headings, titles and short lines of text using fonts a user doesn't have installed on their system. And, until the spec for embedding fonts is finalised in around 2065, there are two main options:

- Render the text using *cough* Flash. While Flash is perhaps not as bad as some would make out, it's still horribly proprietary, as well as having a noticeable loading time and a few other invented disadvantages that strengthen my case for using...
- Images. They're lighter-weight, and have been used for showing custom graphics since the dawn of [UNIX] time.

So, we need header images. One horribly labour-intensive way of doing this is making them manually in Generic Graphics Editor 8.6. However, since we're sensible people, we'll generate them on the fly. And, since we're sensible people, we'll be using Django*, so we need to write some nice Python code.

I'll be using Cairo to generate graphics, in part because it's a nice library, and is pretty common these days. You'll need the Python Cairo bindings; on debian-like systems, this is the package python-cairo; in other places, your mileage may vary.

The key to making Cairo work with Django is wrapping a Cairo canvas in a django view. For this reason, I have this function lying around:

def render_image(drawer, width, height):
    import os, tempfile, cairo
    from django.http import HttpResponse  # needed for the response at the end
    # We render to a temporary file, since Cairo can't stream nicely
    filename = tempfile.mkstemp()[1]
    # We render to a generic Image, being careful not to use colour hinting
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, int(width), int(height))
    font_options = surface.get_font_options()
    font_options.set_antialias(cairo.ANTIALIAS_GRAY)
    context = cairo.Context(surface)
    # Call our drawing function on that context, now.
    drawer(context)
    # Write the PNG data to our tempfile
    surface.write_to_png(filename)
    surface.finish()
    # Now stream that file's content back to the client
    fo = open(filename)
    data = fo.read()
    fo.close()
    os.unlink(filename)
    return HttpResponse(data, mimetype="image/png")

The idea is, you pass it a function which will draw the image onto a context, and the image's width and height, and it takes care of all the boring tedium of wrapping cairo and django together.

Now, that's not very useful by itself, is it? Time to draw some text! Firstly, as an aside, we need a way of seeing how big a certain text string will be for a given font and size, so we can render an image just big enough for it. This function achieves that (note that cairo has to be imported at module level here, since the default arguments use its constants and defaults are evaluated at definition time):

import cairo

def text_bounds(text, size, font="Sans", weight=cairo.FONT_WEIGHT_NORMAL, style=cairo.FONT_SLANT_NORMAL):
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 1, 1)
    context = cairo.Context(surface)
    context.select_font_face(font, style, weight)
    context.set_font_size(size)
    width, height = context.text_extents(text)[2:4]
    return width, height

Yes, yes, it's somewhat cryptic, but it does the job. Now, we can write a text-rendering view!

def render_title(request, text, size=60):
    # Get some variables pinned down
    size = int(size) * 3
    font = "Meta"
    width, height = text_bounds(text, size, font)
    def draw(cr):
        import cairo
        # Paint the background white. Replace with 1,1,1,0 for transparent PNGs.
        cr.set_source_rgba(1,1,1,1)
        cr.paint()
        # Some black text
        cr.set_source_rgba(0,0,0,1)
        cr.select_font_face(font, cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_NORMAL)
        cr.set_font_size(size)
        # We need to adjust by the text's offsets to center it.
        x_bearing, y_bearing, width, height = cr.text_extents(text)[:4]
        cr.move_to(-x_bearing,-y_bearing)
        # We stroke and fill to make sure thinner parts are visible.
        cr.text_path(text)
        cr.set_line_width(0.5)
        cr.stroke_preserve()
        cr.fill()
    return render_image(draw, width, height)

Here, we construct the draw function with a simple text drawing command, and run the wrapper. There's some interesting text positioning going on up there; for more on this, and cairo in general, read through the excellent Cairo Tutorial for Python Programmers.

The last thing is to add an appropriate URL into your URLconf, such as

(r'^title/([^\/]+)/(\d+)/$', "render_title")

And then, when you browse to /title/HelloWorld/20/, you'll hopefully get a nice PNG of your new title! Then, you can just use img tags instead of titles, in this sort of style:

<img src="/title/{{ item.title }}/20" alt="{{ item.title }}" />

This process is quite quick, but not without a small cost of processing power; if you're using it a lot, think about some sort of caching.
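For instance, Django's per-view cache decorator is one easy option. A minimal sketch, assuming a cache backend is configured in settings (CACHE_BACKEND, in Django of this era) and applied to the view defined above:

from django.views.decorators.cache import cache_page

# Serve repeat requests for the same title/size from the cache
# instead of re-rendering the PNG each time (timeout: one day).
render_title = cache_page(render_title, 60 * 60 * 24)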
Apart from that, be happy with your newfound title freedom…

* Or possibly Pylons. As long as you don't go and cavort with those Gems On Guiderails people, or heaven forbid the [PH/AS]P guys...
http://www.aeracode.org/2007/12/15/django-and-cairo-rendering-pretty-titles/
CC-MAIN-2016-50
en
refinedweb
Building a Drupal 8 Module: Blocks and Forms
By Daniel Sipos

(Part 1: How to Build a Drupal 8 Module.)

In the first installment of this article series on Drupal 8 module development we started with the basics. We’ve seen what files were needed to let Drupal know about our module, how the routing process works and how to create menu links programatically as configuration.

In this tutorial we are going to go a bit further with our sandbox module found in this repository and look at two new important pieces of functionality: blocks and forms. To this end, we will create a custom block that returns some configurable text. After that, we will create a simple form used to print out user submitted values to the screen.

Drupal 8 blocks

A cool new change to the block API in D8 has been a switch to making blocks more prominent, by making them plugins (a brand new concept). What this means is that they are reusable pieces of functionality (under the hood) as you can now create a block in the UI and reuse it across the site – you are no longer limited to using a block only one time.

Let’s go ahead and create a simple block type that prints to the screen Hello World! by default. All we need to work with is one class file located in the src/Plugin/Block folder of our module’s root directory. Let’s call our new block type DemoBlock, and naturally it needs to reside in a file called DemoBlock.php. Inside this file, we can start with the following:

<?php

namespace Drupal\demo\Plugin\Block;

use Drupal\block\BlockBase;
use Drupal\Core\Session\AccountInterface;

/**
 * Provides a 'Demo' block.
 *
 * @Block(
 *   id = "demo_block",
 *   admin_label = @Translation("Demo block"),
 * )
 */
class DemoBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    return array(
      '#markup' => $this->t('Hello World!'),
    );
  }

  /**
   * {@inheritdoc}
   */
  public function access(AccountInterface $account) {
    return $account->hasPermission('access content');
  }

}

Like with all other class files we start by namespacing our class. Then we use the BlockBase class so that we can extend it, as well as the AccountInterface class so that we can get access to the currently logged in user. Then follows something you definitely have not seen in Drupal 7: annotations. Annotations are a PHP discovery tool located in the comment block of the same file as the class definition. Using these annotations we let Drupal know that we want to register a new block type (@Block) with the id of demo_block and the admin_label of Demo block (passed through the translation system).

Next, we extend the BlockBase class into our own DemoBlock, inside of which we implement two methods (the most common ones you’ll implement). The build() method is the most important as it returns a renderable array the block will print out. The access() method controls access rights for viewing this block. The parameter passed to it is an instance of the AccountInterface class which will be in this case the current user.

Another interesting thing to note is that we are no longer using the t() function globally for translation but we reference the t() method implemented in the class parent.

And that’s it, you can clear the caches and go to the Block layout configuration page. The cool thing is that you have the block types on the right (that you can filter through) and you can place one or more blocks of those types to various regions on the site.
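As an aside (a sketch of mine, not from the article): because blocks are plugins, you can also instantiate and render one in code. The plugin.manager.block service and its createInstance() method are part of Drupal 8; the empty configuration array here is just an assumption for our demo block:

// E.g. inside a controller: instantiate the plugin directly.
$manager = \Drupal::service('plugin.manager.block');
$plugin = $manager->createInstance('demo_block', array());
// build() returns a renderable array, exactly as implemented above.
$build = $plugin->build();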
Drupal 8 block configuration

Now that we’ve seen how to create a new block type to use from the UI, let’s tap further into the API and add a configuration form for it. We will make it so that you can edit the block, specify a name in a textfield and then the block will say hello to that name rather than the world.

First, we’ll need to define the form that contains our textfield. So inside our DemoBlock class we can add a new method called blockForm():

/**
 * {@inheritdoc}
 */
public function blockForm($form, &$form_state) {
  $form = parent::blockForm($form, $form_state);
  $config = $this->getConfiguration();
  $form['demo_block_settings'] = array(
    '#type' => 'textfield',
    '#title' => $this->t('Who'),
    '#description' => $this->t('Who do you want to say hello to?'),
    '#default_value' => isset($config['demo_block_settings']) ? $config['demo_block_settings'] : '',
  );
  return $form;
}

This form API implementation should look very familiar from Drupal 7. There are, however, some new things going on here. First, we retrieve the $form array from the parent class (so we are building on the existing form by adding our own field). Standard OOP stuff. Then, we retrieve and store the configuration for this block. The BlockBase class defines the getConfiguration() method that does this for us. And we place the demo_block_settings value as the #default_value in case it has been set already.

Next, it’s time for the submit handler of this form that will process the value of our field and store it in the block’s configuration:

/**
 * {@inheritdoc}
 */
public function blockSubmit($form, &$form_state) {
  $this->setConfigurationValue('demo_block_settings', $form_state['values']['demo_block_settings']);
}

This method also goes inside the DemoBlock class and all it does is save the value of the demo_block_settings field as a new item in the block’s configuration (keyed by the same name for consistency).

Lastly, we need to adapt our build() method to include the name to say hello to:

/**
 * {@inheritdoc}
 */
public function build() {
  $config = $this->getConfiguration();
  if (isset($config['demo_block_settings']) && !empty($config['demo_block_settings'])) {
    $name = $config['demo_block_settings'];
  }
  else {
    $name = $this->t('to no one');
  }
  return array(
    '#markup' => $this->t('Hello @name!', array('@name' => $name)),
  );
}

By now, this should look fairly easy. We are retrieving the block’s configuration and if the value of our field is set, we use it for the printed statement. If not, we use a generic one. You can clear the cache and test it out by editing the block you assigned to a region and adding a name to say hello to. One thing to keep in mind is that you are still responsible for sanitizing user input upon printing to the screen. I have not included these steps above for brevity, but see the sketch below.
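A hedged sketch of that sanitization (my addition, not the article’s code): note that t() placeholders prefixed with @ are already escaped by Drupal, so this matters mostly if you ever print the stored value outside t(). Depending on your Drupal 8 build, the helper lives in \Drupal\Component\Utility\Html::escape() or the older String::checkPlain():

use Drupal\Component\Utility\Html;

// Inside build(), before using the stored value:
$name = Html::escape($config['demo_block_settings']);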
Drupal 8 forms

The last thing we are going to explore in this tutorial is how to create a simple form. Due to space limitations, I will not cover the configuration management aspect of it (storing configuration values submitted through forms). Rather, I will illustrate a simple form definition, the values submitted being simply printed on the screen to show that it works.

In Drupal 8, form definition functions are all grouped together inside a class. So let’s define our simple DemoForm class inside src/Form/DemoForm.php:

<?php

/**
 * @file
 * Contains \Drupal\demo\Form\DemoForm.
 */

namespace Drupal\demo\Form;

use Drupal\Core\Form\FormBase;

class DemoForm extends FormBase {

  /**
   * {@inheritdoc}.
   */
  public function getFormId() {
    return 'demo_form';
  }

  /**
   * {@inheritdoc}.
   */
  public function buildForm(array $form, array &$form_state) {
    $form['email'] = array(
      '#type' => 'email',
      '#title' => $this->t('Your .com email address.')
    );
    $form['show'] = array(
      '#type' => 'submit',
      '#value' => $this->t('Submit'),
    );
    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function validateForm(array &$form, array &$form_state) {
    if (strpos($form_state['values']['email'], '.com') === FALSE ) {
      $this->setFormError('email', $form_state, $this->t('This is not a .com email address.'));
    }
  }

  /**
   * {@inheritdoc}
   */
  public function submitForm(array &$form, array &$form_state) {
    drupal_set_message($this->t('Your email address is @email', array('@email' => $form_state['values']['email'])));
  }

}

Apart from the OOP side of it, everything should look very familiar from Drupal 7. The Form API has remained pretty much unchanged (except for the addition of some new form elements and this class encapsulation). So what happens above?

First, we namespace the class and use the core FormBase class so we can extend it with our own DemoForm class. Then we implement 4 methods, 3 of which should look very familiar. The getFormId() method is new and mandatory, used simply to return the machine name of the form. The buildForm() method is again mandatory and it builds up the form. How? Just like you are used to from Drupal 7. The validateForm() method is optional and its purpose should also be quite clear from D7. And finally, the submitForm() method does the submission handling. Very logical and organised.

So what are we trying to achieve with this form? We have an email field (a new form element in Drupal 8) we want users to fill out. By default, Drupal checks whether the value input is in fact an email address. But in our validation function we make sure it is a .com email address and if not, we set a form error on the field. Lastly, the submit handler just prints a message on the page.

One last thing we need to do in order to use this form is provide a route for it. So edit the demo.routing.yml file and add the following:

demo.form:
  path: '/demo/form'
  defaults:
    _form: '\Drupal\demo\Form\DemoForm'
    _title: 'Demo Form'
  requirements:
    _permission: 'access content'

This should look familiar from the previous article in which we routed a simple page. The only big difference is that instead of _content under defaults, we use _form to specify that the target is a form class. And the value is therefore the class name we just created.

Clear the caches and navigate to demo/form to see the form and test it out.

If you are familiar with drupal_get_form() and are wondering how to load a form like we used to in Drupal 7, the answer is in the global Drupal class. Thus to retrieve a form, you can use its formBuilder() method and do something like this:

$form = \Drupal::formBuilder()->getForm('Drupal\demo\Form\DemoForm');

Then you can return $form which will be the renderable array of the form.

Conclusion

In this article we’ve continued our exploration of Drupal 8 module development with two new topics: blocks and forms. We’ve seen how to create our own block type we can use to create blocks in the UI. We’ve also learned how to add a custom configuration to it and store the values for later use. On the topic of forms, we’ve seen a simple implementation of the FormBase class that we used to print out to the screen the value submitted by the user.

In the next tutorial we will take a quick look at configuration forms.
We will save the values submitted by the user using the Drupal 8 configuration system. Additionally, we will look at the service container and dependency injection and how those work in Drupal 8. See you then.
https://www.sitepoint.com/building-drupal-8-module-blocks-forms/
CC-MAIN-2016-50
en
refinedweb
Agenda See also: IRC log, previous 2008-01-08 <scribe> agenda: PROPOSED to accept minutes of the Jan 8 telecon: RESOLUTION: accepted minutes next telecon: 22 January 2008 <scribe> ACTION: Quentin to review Editor's draft of SKOS Reference [recorded in] [DONE] <scribe> ACTION: Vit to review Editor's draft of SKOS Reference [recorded in] [DONE] -> Quentin's review of SKOS reference -> Vit's review of SKOS reference aliman: looked at those reviews ... neither quentin's nor vit's comments are about technical issues ... mostly about wording Guus: propose to write a new version and ask the reviewers ... send the new draft and a mail explaining how the comments by the reviewer's were addressed aliman: the schedule is quite aggressive guus: you can put TODO's in the document tom: there was some discussion on the naming of some properties and a class which contain the word "relation" ... it might be clearer if we re-order the words ... "labelRelated" -->"relatedLabel" <Antoine> +1 Guus: please take this into consideration for the draft aliman: I chose the previous name to make a distinction with all of the "*Label" relationships Guus: we should add a note to explain it <vit> <Quentin> aliman: quentin pointed an inconsistency between the text and the resolution at the f2f ... we can do a quick fix Guus: let's add pointers to the issues in the document <scribe> ACTION: Alistair send an email to the list by the end of next week that the reviewers can agree with and then propose publishing as WD by Jan 22 [recorded in] [CONTINUES] guus: move on into SKOS primer -> Current draft of SKOS Primer Quentin: I sent the review today Quentin: mainly two comments: 1) we do not make any reference in the primer to the semantics ... 2) a use case instead of separate examples would be quite useful ... other comments are related to the issues to be discussed later marghe: I plan to send my review by next week guus: the actions on marghe and Quentin to review the SKOS primer were not captured last week <edsu> Quentin++ <scribe> ACTION: marghe to review the SKOS primer [recorded in] <scribe> ACTION: Alistair and Guus write draft section in primer on relationship between SKOS concepts and OWL classes for OWL DL users [recorded in] [CONTINUES] guus: the deadline for the previous action is 22 Jan guus: move to ISSUE 36 -> Antoine on problems with closing ISSUE-36 Antoine: last week we made a resolution about ISSUE 36 ... actually when looking at the initial wording of the issue ... it is about linking relationships with the schema ... so the resolution is not complete, part of the problem still exists GuusS: our resolution last week was an amendment of a previous one ... we need to track these resolutions ... please look at the initial wording <scribe> ACTION: Antoine to track the resolutions to ISSUE 36 [recorded in] <Zakim> aliman, you wanted to comment on label naming and to mention I have a placeholder for ... in reference aliman: in SKOS reference we have a small note: we haven't made any commitment on this issue ... it is implied that there will be a section showing a pattern for querying <Zakim> Tom, you wanted to ask about Tom's comments on Primer (in the agenda) <Antoine> aliman: the reference has a section on SKOS and named graphs GuusS: to Antoine: write what you think the resolution to issue 36 should be Tom: there was some discussion on the syntax of the examples ... graphs represented as pictures might be more readable ... 
it depends on the intended audience

<Ralph> I heard Tom express concern that N3 could be _less_ readable ?

Tom: which components of SKOS are basic and which are advanced?

Tom: which document should be cited for N3?

<Ralph> Notation3 (N3): A readable RDF syntax

<Ralph> Turtle - Terse RDF Triple Language

<edsu> Ralph: nice!

GuusS: it makes sense to use the same notation in both documents ... unless there is a very good reason ... many people will read first the primer and then the reference ... i suggest to the editors to look at the pointers by Ralph

Antoine: we need to sync with aliman and seanb

<scribe> ACTION: Guus to schedule the discussion on the notation (syntax) used in SKOS examples in Reference & Primer in two weeks time [recorded in]

GuusS: move to issue 44, only a few minutes for this

-> This and other threads in the mailing list

Antoine: there is some discussion on whether broader/narrower should be transitive ... some people are not convinced by our decision during the f2f

GuusS: I suggest that in the reference we state that broader/narrower are not transitive, discuss the rationale, and point to a specialization in which we define a transitive broader/narrower

aliman: there is a need for both of them (transitive and non-transitive) in different use cases ... this is a common pattern ... a design pattern to solve this is to have a non-transitive property and a transitive subproperty

GuusS: I agree, I know this pattern ... but technically it cannot be a subproperty, it leads to inconsistent semantics

aliman: one of the rules of thumb in the OWL reference is "do not mess with the vocabulary" ... I wonder if we should have rules of thumb for SKOS

GuusS: this is a different matter, SKOS is not a language like OWL

seanb: alistair's point is that if we allow users to make assumptions about the vocabulary, we can put interoperability at risk

GuusS: I suggest to leave this for now

<scribe> ACTION: Alistair to propose an approach to clarify which aspects of the extension module should be in scope for the candidate recommendation package. [recorded in] [CONTINUES]

<scribe> ACTION: Alistair and Guus to prepare material for next week on Concept Schemes vs OWL Ontologies [recorded in] [CONTINUES]

<scribe> ACTION: Guus to write up the issue [of Label Resource] and add to the issue list. [recorded in] [CONTINUES]

<scribe> ACTION: Ralph to add pointer to Alistair's mail on grouping constructs as a note to resolution of ISSUE-39. [recorded in] [DONE]

<Ralph> resolution of ISSUE-39

<Ralph> "RESOLUTION: Accept Antoine's proposal as a resolution to ISSUE-39."

Ralph: do you agree with closing the issue?

<scribe> ACTION: Ralph to check whether the common interpretation of [recorded in] [CONTINUES]
the decision might be scheduled for Feb 5th <Zakim> Tom, you wanted to ask if there is an action on the RDFa editors to request a decision <scribe> ACTION: Ben to prepare the email to request the decision for publishing on Feb 5th [recorded in] <scribe> ACTION: Ralph to review recipes document [recorded in] [DONE] -> Ralph's review of the Recipes <scribe> ACTION: Ralph see if W3C Systems Team can help with question on Apache conditional redirects [recorded in] [DONE] <scribe> ACTION: Ralph propose resolution to ISSUE-16 "Default behavior" [recorded in] [CONTINUES] <scribe> ACTION: Ralph/Diego to work on Wordnet implementation [of Recipes implementations] [recorded in] [CONTINUES] jon: the reviews are great, we are working on integrating their comments ... we are still shooting for a pre-publication next week ... w.r.t. the comments from W3C Systems Team, not sure what to do GuusS: if you can integrate ralph's and ed's comments, we are in a position to publish a new draft ... decision in Jan 29 <scribe> ACTION: Jon and Diego to propose a decision on publishing the next Recipes draft by next week [recorded in] GuusS: (to the editors) make sure that the WG has the proposal Elisa: planning to hold a call later this week and to work on our action <scribe> ACTION: Vit and Elisa to include in the document all the target sections plus an allocation of sections to people and potentially a standard structure for sections [recorded in] [CONTINUES] [adjourned] <Ralph> scribenick: ralph Antoine: the idea is that transitive broader be a superproperty of broader ... if we do that then statements using broader cannot be retrieved using the super property ... the standard modelling pattern is good but we have a standardization problem Daniel: so there really aren't two types of 'broader' ... to me, there's only one kind of 'broader' Sean: seems to me from reading the discussion that people want to be able to query against 'broader' and get transitive closure on query ... so there's really only one 'broader' but there's a way to query over a more general notion ... the general notion would not be used in assertions Daniel: I don't see a difference between query and assertion Sean: there may be inferences I can draw from assertions Daniel: in the OWL community you make limited assertions and do a lot of inferencing Alistair: if transitive form is superproperty we could have a convention that we only ever assert the subproperty ... but the superproperty is available for query <Quentin> Guus: [worries about community usage] Alistair: choice of which is subproperty and which is superproperty ... direct one could be 'broader' -- the one that people use -- which would affect existing SKOS data Sean: is there really an analog for the transitive closure of 'broader' in current thesauri ? ... if I have a paper thesaurus, I don't really have a transitive closure without having to do a lot of work ... the transitive closure is not actually represented anywhere Alistair: agree, but the point is the practicalities ... in certain applications it is convenient to compute the transitive closure and then query it Sean: that's fine, which suggests the pattern of using direct 'broader' in assertions and a transitive 'broader' that I use in queries Guus: whatever we do, the transitive property should be a superproperty of the direct one else the semantics are wrong ... the direct property: "a is a direct broader term of b", without saying anything about transitivity ... 
remember that transitivity does not inherit Daniel: I'm worried about this confusing the community Guus: Sean points out that the community does not currently have this logical notion; they do it at query time Sean: yes, I'm suggesting non-transitive 'broader' used in assertions and a different, transitive, relation that is only used in query ... I hope this satisfies those who want 'broader' to be transitive in some way Alistair: I agree Guus: but be clear that this is not [currently] being used in a pure logical way; it's a procedural thing Ed: there's an example at the end of the primer, but it doesn't follow this pattern and would need to be changed Quentin: the transitive version of broader and narrower should be present; we're speaking of creating knowledge organization systems so these should be taken logically ... in some application the developer might want to use SKOS as a simple representation and might want some very simple logical inference ... without requiring the full capabilities of OWL ... their concept of hierarchy is a simplification of subsumption ... this might just mean that we need to look at a SKOS extension ... but I know there is opposition to extensions as they require additional namespaces Guus: if you're going to write an assertion, e.g. in a namespace document, you use 'broader' and if you want to write a query you can use 'broaderTransitive' Alistair: if we do have 'broaderTransitive' or 'broaderClosed' in SKOS then two of the semantic conditions in the data model become very easy to state ... e.g. 'skos:related disjointfrom skos:broader' ... and to assert some irreflexive relations ... I would like to see a broaderTransitive/broaderClosed superproperty described normatively ... rather than omitting it or leaving it to a community extension Antoine: agree Sean: agree, and it would reduce the repetition of this discussion Guus: any chance of getting this written up for discussion next week? Alistair: are you suggesting we introduce two new terms in the SKOS vocabulary and include them in the editor's draft? Guus: yes, in particular the editor's draft we're going to review next week Ralph: I'd recommend sending this to the WG in a separate email Sean: related to ISSUE 44 <aliman> Al's notes ... <inserted> scribenick: aliman quentin: transitive version of broader should be present -- speaking of creating KOS, forcing applications to take them logically. In some systems & applications, use SKOS as simple representation, and simple inference with it (not full OWL). SKOS vocab to do thesauri, taxonomies, hierarchies, concept of hierarchy very simplification of subsumption, as broader is loose meaning. SKOS... transitive as super ... antoine: problem, all statements asserted using transitive broader cannot be retrieved by daniel: aren't two types of broader? sean: if use pattern (transitive super) don't use that for assertions, use for querying? people want to query against broader, and get transitive when query; assertions about direct; daniel: assertions vs. query? sean: assertions -- directly asserted; may be inferences I can draw. daniel: proposing two different types of broader, confusing. agree with you, make minimal assertions, do the rest by inference, legitimate. guus: BT standard term in thesaurus community; what people state as BT is always direct broader; so by definition, our semantics of broader, if it is equal to BT, then it needs to be not transitive, otherwise people get confused. 
quentin: as a sub-property, examples described as in skos core guide? guus: sub-property has to be direct; if do that, what we call broader, will not be same semantics as thesauri, because only assert direct one. that's only way semantics. aliman: other way around from guus, would affect existing SKOS data; if do as guus says, sean: analog for transitive closure of broader in thesauri? If have a paper thesaurus, don't really have transitive closure, not represented... aliman: required sean: to have skos:broader as direct, and introduce some new super-property as transitive closure quentin: I would agree as well. guus: I you want to have a transitive and a direct, then transitive is always super-property. transitivity doesn't inherit daniel: existing relations so, worried about confusing the community. How do you know which to use? guus: sean is saying, in community, don't have logical notion. Do it at query/computation time. sean: broader used in assertions, not transitive, then property used in query which is transitive. relatively clear statement, answers concerns of people requiring broader to be transitive. guus: clear not being used in a logical sense; if want to get closure, have to do procedural thing; haven't seen logical use of thesauri yet. sean: needs some careful explanation in reference and primer. ed: in tail end of primer, example of doing it not the right way, will have to be changed. antoine: will not be huge effort ... extension described in docs earlier, need to look at again. guus: broader & broaderTransitive should be in spec, if write docs, use broader, if want to query, use broaderTransitive. If SKOS spec specifies broaderTransitive. aliman: makes some conditions easier to state guus: comes down to wording aliman: would like to see super-property in the spec antoine: i agree sean: i agree guus: i agree too ... can we have this in some short form in editor's. Two new URIs in SKOS vocabulary. Suggest broaderTransitive rather than broaderClosed ralph: recommend separate email on this -- here's what we've done to editor's draft and why guus: who is issue owner? temporary resolution of issue 44. I'll write it tonight.
http://www.w3.org/2008/01/15-swd-minutes.html
CC-MAIN-2016-50
en
refinedweb
So I find myself porting a game, that was originally written for the Win32 API, to Linux (well, porting the OS X port of the Win32 port to Linux). Is gettimeofday(), which the code uses for its timing, guaranteed to be of microsecond resolution?

Maybe. But you have bigger problems. gettimeofday() can result in incorrect timings if there are processes on your system that change the timer (ie, ntpd). On a "normal" linux, though, I believe the resolution of gettimeofday() is 10us. It can jump forward and backward in time, consequently, based on the processes running on your system. This effectively makes the answer to your question no. You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from fewer issues due to things like multi-core systems and external clock settings. Also, look into the clock_getres() function.

High Resolution, Low Overhead Timing for Intel Processors

If you're on Intel hardware, here's how to read the CPU real-time instruction counter. It will tell you the number of CPU cycles executed since the processor was booted. This is probably the finest-grained counter you can get for performance measurement. Note that this is the number of CPU cycles. On linux you can get the CPU speed from /proc/cpuinfo and divide to get the number of seconds. Converting this to a double is quite handy.

When I run this on my box, I get

11867927879484732
11867927879692217
it took this long to call printf: 207485

Here's the Intel developer's guide that gives tons of detail.

#include <stdio.h>
#include <stdint.h>

inline uint64_t rdtsc() {
    uint32_t lo, hi;
    __asm__ __volatile__ (
      "xorl %%eax, %%eax\n"
      "cpuid\n"
      "rdtsc\n"
      : "=a" (lo), "=d" (hi)
      :
      : "%ebx", "%ecx");
    return (uint64_t)hi << 32 | lo;
}

int main()
{
    unsigned long long x;
    unsigned long long y;
    x = rdtsc();
    printf("%lld\n", x);
    y = rdtsc();
    printf("%lld\n", y);
    printf("it took this long to call printf: %lld\n", y - x);
    return 0;
}

@Bernard: I have to admit, most of your example went straight over my head. It does compile, and seems to work, though. Is this safe for SMP systems or SpeedStep?

That's a good question... I think the code's ok. From a practical standpoint, we use it in my company every day, and we run on a pretty wide array of boxes, everything from 2-8 cores. Of course, YMMV, etc, but it seems to be a reliable and low-overhead (because it doesn't make a context switch into system-space) method of timing.

Generally how it works is:

Specific notes: out-of-order execution can cause incorrect results, so we execute the "cpuid" instruction which, in addition to giving you some information about the cpu, also synchronizes any out-of-order instruction execution. Most OS's synchronize the counters on the CPUs when they start, so the answer is good to within a couple of nano-seconds. The hibernating comment is probably true, but in practice you probably don't care about timings across hibernation boundaries.

regarding speedstep: Newer Intel CPUs compensate for the speed changes and return an adjusted count. I did a quick scan over some of the boxes on our network and found only one box that didn't have it: a Pentium 3 running some old database server. (these are linux boxes, so I checked with: grep constant_tsc /proc/cpuinfo) I'm not sure about the AMD CPUs, we're primarily an Intel shop, although I know some of our low-level systems gurus did an AMD evaluation.

Hope this satisfies your curiosity, it's an interesting and (IMHO) under-studied area of programming. You know when Jeff and Joel were talking about whether or not a programmer should know C?
I was shouting at them, "hey forget that high-level C stuff... assembler is what you should learn if you want to know what the computer is doing!"

You may be interested in the Linux FAQ for clock_gettime(CLOCK_REALTIME)

Wine is actually using gettimeofday() to implement QueryPerformanceCounter() and it is known to make many Windows games work on Linux and Mac.

I obtained this answer from High Resolution Time Measurement and Timers, Part I

So it says microseconds explicitly, but says the resolution of the system clock is unspecified. I suppose resolution in this context means how the smallest amount it will ever be incremented? The data structure is defined as having microseconds as a unit of measurement, but that doesn't mean that the clock or operating system is actually capable of measuring that finely. Like other people have suggested, gettimeofday() is bad because setting the time can cause clock skew and throw off your calculation. clock_gettime(CLOCK_MONOTONIC) is what you want, and clock_getres() will tell you the precision of your clock.

This answer mentions problems with the clock being adjusted. Both your problem of guaranteeing tick units and the problem of the time being adjusted are solved in C++11 with the <chrono> library. The clock std::chrono::steady_clock is guaranteed not to be adjusted, and furthermore it will advance at a constant rate relative to real time, so technologies like SpeedStep must not affect it. You can get typesafe units by converting to one of the std::chrono::duration specializations, such as std::chrono::microseconds. With this type there's no ambiguity about the units used by the tick value. However, keep in mind that the clock doesn't necessarily have this resolution. You can convert a duration to attoseconds without actually having a clock that accurate.

From my experience, and from what I've read across the internet, the answer is "No," it is not guaranteed. It depends on CPU speed, operating system, flavor of Linux, etc.

Reading the RDTSC is not reliable in SMP systems, since each CPU maintains its own counter and each counter is not guaranteed to be synchronized with respect to another CPU. I might suggest trying clock_gettime(CLOCK_REALTIME). The posix manual indicates that this should be implemented on all compliant systems. It can provide a nanosecond count, but you probably will want to check clock_getres(CLOCK_REALTIME) on your system to see what the actual resolution is.
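Since several answers point to clock_gettime(CLOCK_MONOTONIC) and clock_getres() without showing them, here is a minimal sketch (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, t0, t1;

    /* Ask the system how fine-grained the monotonic clock actually is. */
    clock_getres(CLOCK_MONOTONIC, &res);
    printf("clock resolution: %ld ns\n", res.tv_nsec);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... the interval to be timed goes here ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long long elapsed_ns = (long long)(t1.tv_sec - t0.tv_sec) * 1000000000LL
                         + (t1.tv_nsec - t0.tv_nsec);
    printf("elapsed: %lld ns\n", elapsed_ns);
    return 0;
}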
http://www.91r.net/ask/96.html
CC-MAIN-2016-50
en
refinedweb
Installing .NET component into Visual Studio

The .NET component is contained in CBProcNet.dll, located in subfolders of the <CallbackProcess>\dotNET\ folder. CBProcNet.dll requires the MSVC Runtime DLLs. Please refer to the Deployment instructions for details on installing those Runtime DLLs on your system for development and on target systems during deployment.

To install components to the Visual Studio Toolbox

- Use Main Menu -> Tools -> Choose Toolbox Items to open the Toolbox Customization dialog
- In the dialog window that appears, switch to the .NET Framework Components tab
- Find the CallbackProcess component in the list and check it.
- If you don't find the component in the list, use the Browse button to find the assembly that contains the component and add it to the list.

Assemblies (32-bit and 64-bit) for .NET 4.6 are located in the <CallbackProcess>\dotNET\NET_456 folder.
Assemblies (32-bit and 64-bit) for .NET 4.5.1 are located in the <CallbackProcess>\dotNET\NET_451 folder.
Assemblies (32-bit and 64-bit) for .NET 4.5 are located in the <CallbackProcess>\dotNET\NET_45 folder.
Assemblies (32-bit and 64-bit) for .NET 4.0 are located in the <CallbackProcess>\dotNET\NET_40 folder.
Assemblies (32-bit and 64-bit) for .NET 2.0 are located in the <CallbackProcess>\dotNET\NET_20 folder.

Using components

To use CallbackProcess in your project, you need to add CBProcNet.dll to the list of project references. Then add the following line to your source file:

- C#: using CBProc;
- VB.NET: Imports CBProc
- C++: #using <CBProcNet.dll>

Referencing platform-specific assemblies from an AnyCPU project

To use CallbackProcess in your AnyCPU project, you need to tell the loader how to find and load the assembly that matches your architecture. Sample code in C#-based pseudocode is provided below. The code assumes that you have copies of the assembly stored in the "x86" and "x64" subdirectories of your project's directory. Be sure to turn off the "Copy Local" option for the assembly reference, so that the compiler doesn't make a (wrong) copy of the assembly to the directory with your project assemblies. Thanks to Tyrone Erasmus for the custom resolver implementation.

AppDomain.CurrentDomain.AssemblyResolve += PlatformDepResolve;
...

private static Assembly PlatformDepResolve(object sender, ResolveEventArgs args)
{
    if (args.Name.StartsWith("CBProcNet"))
    {
        // Pick the subdirectory that matches the OS architecture.
        string architecture;
        if (Environment.Is64BitOperatingSystem)
            architecture = "x64";
        else
            architecture = "x86";
        string fileName = Path.Combine(Environment.CurrentDirectory, architecture, "CBProcNet.dll");
        Assembly assembly = Assembly.LoadFile(fileName);
        if (args.Name == assembly.FullName)
            return assembly;
    }
    // Let the default resolution logic handle everything else.
    return null;
}
https://www.eldos.com/documentation/cbproc/ref_gen_install_net_vs.html
CC-MAIN-2016-50
en
refinedweb
This procedure uses the clsetup utility to register the associated VxVM disk group as a Sun Cluster device group. After a device group has been registered with the cluster, never import or export a VxVM disk group by using VxVM commands. If you make a change to the VxVM disk group or volume, follow the procedure SPARC: How to Register Disk Group Configuration Changes (Veritas Volume Manager) to register the device group configuration changes. This procedure ensures that the global namespace is in the correct state.

Ensure that the following prerequisites have been completed prior to registering a VxVM device group:

- Superuser privilege on a node in the cluster.
- The name of the VxVM disk group to be registered as a device group.
- A preferred order of nodes to master the device group.
- A desired number of secondary nodes for the device group.

When you define the preference order, you also specify whether the device group should be switched back to the most preferred node if that node fails and later returns to the cluster. See cldevicegroup(1CL) for more information about node preference and failback options. Nonprimary cluster nodes (spares) transition to secondary according to the node preference order. The default number of secondaries for a device group is normally set to one. This default setting minimizes performance degradation that is caused by primary checkpointing of multiple secondary nodes during normal operation. For example, in a four-node cluster, the default behavior configures one primary, one secondary, and two spare nodes. See also How to Set the Desired Number of Secondaries for a Device Group.

Become superuser or assume a role that provides the required authorization, then start the clsetup utility. To register a VxVM device group, type the number that corresponds to the option for registering a VxVM disk group as a device group. Follow the instructions and type the name of the VxVM disk group to be registered as a Sun Cluster device group. If this device group is replicated by using storage-based replication, this name must match the replication group name.

If you use VxVM to set up shared disk groups for Oracle Parallel Server/Oracle RAC, you do not register the shared disk groups with the cluster framework. Use the cluster functionality of VxVM as described in the Veritas Volume Manager Administrator's Reference Guide.

If you encounter the following error while attempting to register the device group, reminor the device group. To reminor the device group, use the procedure SPARC: How to Assign a New Minor Number to a Device Group (Veritas Volume Manager). This procedure enables you to assign a new minor number that does not conflict with a minor number that an existing device group uses.

If you are configuring a replicated device group, set the replication property for the device group.

Verify that the device group is registered and online. If the device group is properly registered, information for the new device group is displayed when you use the following command.

If you change any configuration information for a VxVM disk group or volume that is registered with the cluster, you must synchronize the device group by using clsetup. Such configuration changes include adding or removing volumes, as well as changing the group, owner, or permissions of existing volumes. Reregistration after configuration changes ensures that the global namespace is in the correct state. See How to Update the Global Device Namespace.
The following example shows the cldevicegroup command generated by clsetup when it registers a VxVM device group (dg1), and the verification step. This example assumes that the VxVM disk group and volume were created previously. To create a cluster file system on the VxVM device group, see How to Add a Cluster File System. If problems occur with the minor number, see SPARC: How to Assign a New Minor Number to a Device Group (Veritas Volume Manager).
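The example command itself did not survive extraction; a plausible verification sequence, assuming the device group is named dg1 as in the example (exact syntax may vary by Sun Cluster release), would be:

# cldevicegroup status dg1
# cldevicegroup show dg1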
http://docs.oracle.com/cd/E19316-01/820-4679/cihiiihh/index.html
CC-MAIN-2016-50
en
refinedweb
package org.alfresco.filesys.server.filesys;

/**
 * <p>
 * Thrown when an attempt is made to write to a file that is read-only or the user only has read
 * access to, or open a file that is actually a directory.
 */
public class AccessDeniedException extends java.io.IOException
{
    private static final long serialVersionUID = 3688785881968293433L;

    /**
     * AccessDeniedException constructor
     */
    public AccessDeniedException()
    {
        super();
    }

    /**
     * AccessDeniedException constructor.
     *
     * @param s java.lang.String
     */
    public AccessDeniedException(String s)
    {
        super(s);
    }
}
http://kickjava.com/src/org/alfresco/filesys/server/filesys/AccessDeniedException.java.htm
CC-MAIN-2016-50
en
refinedweb
Let's start with an example before looking at the maths! Imagine that we are at a supermarket and we are looking at the number of people that queue up at the till. The number of people in the queue is the state of our system. There can be 0 people in the queue, or 1, or 2, ... or 10 ... or 20 ... The system can change from one state to the other depending on whether a person arrived in the queue, someone finished checkout and left the queue, or nothing changed. We can model this system like so:

If we generalise this example we can say that a Markov chain is composed of a set of states \(S={s_1,s_2,…,s_m}\) and at every time step the system can move from one state to another according to a transition model \(T\). Therefore a Markov chain is defined by:

- A set of states \(S={s_1,s_2,…,s_m}\)
- An initial state \(s_0\)
- A transition model \(T(s,s')\)

A Markov chain observes a very important property: the next state depends only on the current state. That means that the next state doesn't depend on the past. It doesn't depend on how many people were in the queue 2 or 3 or 10 steps before. The only thing that matters is the current number of people in the queue. This is expressed by the transition model \(T(s,s')\), which means that the next state \(s'\) depends only on the current state \(s\). This is a very important property, which means that if you want to model your process as a Markov chain you need to define your state very carefully, as it should include everything you need to predict the next state.

Let's take another example and see how we can model it as a Markov chain. I go to the gym a few times a week and I usually do 2 kinds of sessions:

- Cardio workout
- Strength workout

In this case our model has 3 states:

- Cardio workout
- Strength workout
- Rest day

How do we move from one state to another? Well, if I did a strength session I'll be pretty exhausted, so I am most likely to have a rest day the day after. If I had a rest day I can do a cardio or a strength workout, or be lazy and take another rest day ... nothing is certain, it's all a matter of probability, which gives us this kind of diagram.

Now that we know how to transition from one state to the other we can answer interesting questions like:

- Given that I had a rest day, what is the chance that I'll train tomorrow?
- Given that I had a cardio workout today, what will I do in 2 days time (or 3 or 10 or 100)?
- Given that I had a strength workout today, what is the probability that I'll do a strength workout in 2 days (or 10 or 100)?

Knowing the probabilities for the next day is straightforward, but computing the probabilities over several steps is more interesting, as several paths are possible:

We can then sum up the probabilities of all the possible paths to answer the question: "Given that today is a rest day, what is the probability that I'll rest again in 2 days time?". As you can see, it gets tedious pretty quickly. In fact, to compute this kind of probability we define the transition model as an \(m \times m\) matrix where \(T_{ij}\) is the probability to move from state \(s_i\) to state \(s_j\). We can then compute the probabilities in \(n\) steps by computing \(T^n\).
If we do it with numpy it looks like this:

import numpy as np

T = np.array([
    [0.4, 0.3, 0.3],
    [0.5, 0.2, 0.3],
    [0.7, 0.2, 0.1]
])

T_2 = np.linalg.matrix_power(T, 2)
T_3 = np.linalg.matrix_power(T, 3)
T_10 = np.linalg.matrix_power(T, 10)
T_50 = np.linalg.matrix_power(T, 50)
T_100 = np.linalg.matrix_power(T, 100)

# start in state "Rest"
v = np.array([[1.0, 0.0, 0.0]])

print("  v_1: " + str(np.dot(v, T)))
print("  v_2: " + str(np.dot(v, T_2)))
print("  v_3: " + str(np.dot(v, T_3)))
print(" v_10: " + str(np.dot(v, T_10)))
print(" v_50: " + str(np.dot(v, T_50)))
print("v_100: " + str(np.dot(v, T_100)))

and it produces the following output:

  v_1: [[ 0.4  0.3  0.3]]
  v_2: [[ 0.52  0.24  0.24]]
  v_3: [[ 0.496  0.252  0.252]]
 v_10: [[ 0.50000005  0.24999997  0.24999997]]
 v_50: [[ 0.5  0.25  0.25]]
v_100: [[ 0.5  0.25  0.25]]

In this case the probabilities converge to a steady state (only the probabilities converge, not the states, which keep changing randomly), but it's not always the case, depending on the structure of the chain. In fact the structure of the chain exhibits interesting properties. By looking only at the structure of the graph we can tell if the probabilities will converge or not, and if the initial state matters or not.

To understand if the initial state matters or not we need to define 2 kinds of states:

- Recurrent state
- Transient state

A recurrent state is a state for which, whatever transitions you make, there is always a path to go back to that state. A transient state is a state that is not recurrent: there exists a path from which it's not possible to go back to the initial state.

Then, if we group all the connected recurrent states into groups, we can say that the initial state doesn't matter as long as there is only 1 recurrent group in the chain (no matter where you start from, you'll always end up in a state within this group). If there is more than 1 recurrent group, the initial state matters.

The convergence property is a bit trickier to observe because the system may oscillate between 2 (or more) sets of states. In this case the system is said to be periodic. A system is periodic if there exist some sets of states (not necessarily connected) for which you always move from one set to another (there is no way to stay within the same set). It means that as long as some state is connected to itself the chain is not periodic (because there exists a transition that stays in the same set of states).

Hopefully this post gave you a good feeling of what a Markov chain is. I didn't really get into the mathematics involved beyond this, but now that you have a good intuition of how it works it should be less painful. MIT provides very detailed courses on Markov chains.

Markov chains are used in a wide variety of domains (basically everywhere you need planning), from computer science (buffers / queues) to HR to management ...
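To see the same steady state without raising T to a large power, we can also compute the stationary distribution directly as the left eigenvector of T for eigenvalue 1. This eigen-decomposition approach is my addition, not from the original post; it reuses the same numpy transition matrix as above:

import numpy as np

T = np.array([
    [0.4, 0.3, 0.3],
    [0.5, 0.2, 0.3],
    [0.7, 0.2, 0.1]
])

# Left eigenvectors of T are the (right) eigenvectors of T transposed.
values, vectors = np.linalg.eig(T.T)

# Pick the eigenvector for eigenvalue 1 and normalise it to sum to 1.
pi = np.real(vectors[:, np.isclose(values, 1.0)][:, 0])
pi = pi / pi.sum()

print(pi)  # ~ [0.5, 0.25, 0.25], matching v_50 and v_100 above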
http://www.beyondthelines.net/machine-learning/markov-chain/
CC-MAIN-2017-26
en
refinedweb
My goal is to have a simple piece of code that will raise an exception if non-int input is entered. My problem so far is that I haven't been able to reach the catch block. I have read a little about bad_typeid, but do not know how to implement it.

#include <iostream>
#include <typeinfo>

using namespace std;

// reads an int from the user; raises an error if not of type int
int isnum()
{
    int temp;
    cout << "Please enter an int\n";
    cin >> temp;
    if (typeid(temp) != typeid(2)) // if input is not of same type as int
    {
        throw 4;
    }
    return temp;
}

int main()
{
    try
    {
        int temp = isnum();
    }
    catch (int e)
    {
        cout << "An exception occurred. Exception Nr. " << e << endl;
    }
    return 0;
}

Thanks.
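For context on why the catch block is unreachable: temp is declared as int, so typeid(temp) is always typeid(int) regardless of what the user typed. A minimal sketch of detecting non-numeric input via the stream's failure state instead (my illustration of the intended behaviour, not the original poster's code):

#include <iostream>

int isnum()
{
    int temp;
    std::cout << "Please enter an int\n";
    if (!(std::cin >> temp))  // extraction fails on non-int input
    {
        throw 4;
    }
    return temp;
}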
https://www.daniweb.com/programming/software-development/threads/294222/simple-try-catch-block-problem
CC-MAIN-2017-26
en
refinedweb
Why doesn't this work:

#include <iostream>

class myclass
{
public:
    union d
    {
        int i;
    };
};

int main()
{
    myclass i;
    i.d.i = 3;
    return 0;
}

but this does:

#include <iostream>

class myclass
{
public:
    union
    {
        int a;
    };
};

int main()
{
    myclass i;
    i.a = 3;
    return 0;
}

A union with a name seems a lot like a structure or a class, but a union without a name seems pointless to me. So what's the point of it? The first example just looks like a declaration of a data type inside a declaration of another data type, and that is supposed to be impossible, but the compiler only gives me an error when I try to reference these variables... So can someone explain this to me? :D
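For context, the first snippet declares a nested union type named d but never creates a member of that type, so i.d.i names a type, not data. A hedged sketch of what makes the first version compile (my illustration, not from the original post):

class myclass
{
public:
    union d
    {
        int i;
    };
    d u;   // an actual member of the nested union type
};

int main()
{
    myclass m;
    m.u.i = 3;  // access goes through the member, not the type name
    return 0;
}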
https://www.daniweb.com/programming/software-development/threads/377180/unions
CC-MAIN-2017-26
en
refinedweb
Hello, so here is the program that I am trying to create: write a program to read a collection of exam scores, ranging in value from 0 to 100, until a sentinel of -1 is entered. It should display the number of outstanding scores (90-100), the number of satisfactory scores (60-89), and the number of unsatisfactory scores (1-59). It should also prompt the user if they want to see the average and the percentage of students with satisfactory scores or above for all scores entered. I have gotten the first part done just fine, but I am stuck with the prompt asking the user if they want to see the averages and percentages. Here is what I have so far:

import java.util.Scanner;    //Needed for user input
import java.util.ArrayList;  //Needed for array list

//This program calculates a collection of exam scores
//and gives outstanding, satisfactory, unsatisfactory, averages, and passing rates
public class examScores
{
    public static void main(String[] args)
    {
        //variables to hold exam scores.
        int outstandingScore = 0;
        int satisfactoryScore = 0;
        int unsatisfactoryScore = 0;
        int count = 0;

        //create keyboard for input
        Scanner in = new Scanner(System.in);

        //Ask user to input exam score.
        System.out.println("Enter a list of exam scores from 0-100 ");
        System.out.println("Enter -1 if you are done with your list : ");
        int score = in.nextInt();

        //While loop for determination of scores
        while(score>0)
        {
            count++;
            if(score>=90 && score<=100)
                outstandingScore++;
            else if(score>=60 && score<=89)
                satisfactoryScore++;
            else if(score>=1 && score<=59);
                unsatisfactoryScore++;
            score = in.nextInt();
        }//end while

        System.out.println("Total number of grades: " + count);
        System.out.println("Total number of outstanding scores: " + outstandingScore);
        System.out.println("Total number of satisfactory scores: " + satisfactoryScore);
        System.out.println("Total number of unsatisfactory scores: " + unsatisfactoryScore);

        String input;         //To hold keyboard input
        double averages = 0;  //Average of exam scores
        char prompt;          //Holds 'y' or 'n'
        double total = 0;     //Total number of grades
        int numInputs = 0;    //Total number of scores entered by user
        double scores = 0;

        //Create a scanner object for keyboard input
        Scanner keyboard = new Scanner(System.in);

        System.out.println("Do you want to see the averages and passing rates of all scores? ");
        System.out.print("Enter Y for yes or N for no: ");
        prompt = keyboard.next().charAt(0);

        //While loop if prompt is "yes"
        if(prompt == 'Y' || prompt == 'y'){
            while(prompt == 'Y' || prompt == 'y');
            {
                numInputs++;
                averages = total/numInputs;
                if(scores>=100 && scores<=60)
                    outstandingScore++;
                    satisfactoryScore++;
                if(score>=1 && score<=59);
                    unsatisfactoryScore++;
                score = in.nextInt();
            }//end while
        }

        System.out.println("The average is" + averages);
        System.out.println("The percentage of students with satisfactory scores " +
                           "or above is: " + count);
    }
}

Everything runs great until I hit "Y" to see the averages, and then it just stops there... any help is greatly appreciated!
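One hedged reading of the hang: the stray semicolon in while(prompt == 'Y' || prompt == 'y'); makes the loop body empty, and since prompt never changes inside the loop it spins forever. A minimal sketch of the corrected shape, assuming total is meant to accumulate the scores as they are read in the first loop:

if (prompt == 'Y' || prompt == 'y') {   // note: no semicolon after the condition
    double average = (count > 0) ? total / count : 0.0;
    double passRate = (count > 0)
        ? 100.0 * (outstandingScore + satisfactoryScore) / count : 0.0;
    System.out.println("The average is " + average);
    System.out.println("Percentage satisfactory or above: " + passRate);
}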
https://www.daniweb.com/programming/threads/504156/help-i-m-stuck-in-a-loop
CC-MAIN-2017-26
en
refinedweb
Have you ever wanted to design your own game controller? It's easier than you think! In this short project we will build a simple custom game controller to use with the Unity game engine. This controller will be powered by an Arduino Uno, though you could use one of the many alternatives out there for this project too. We will also create a basic game where you will use your controller to avoid falling objects and slow down time.

For This Project You Will Need

- Arduino or similar microcontroller
- 1 x 10k Ohm resistor
- 1 x Momentary switch
- 1 x Potentiometer
- Hook-up wires
- A breadboard
- Unity game engine
- The Uniduino plugin from the Unity Asset Store ($30)
- Complete project code, in case you don't want to write it out (doesn't include the Uniduino plugin)

Most of these things are available in an Arduino starter kit. If you haven't got a starter kit, check out our guide for choosing the best one for you. You can make your controller as complicated as you wish, though for this example we will set up a potentiometer and a button, perfect for controlling a simple arcade game.

Assembling Your Controller

Set up your breadboard and Arduino as shown in the image below. This is what we will be using as our game controller, although you could use almost the exact same setup for a DIY project too!

Preparing Your Arduino

Once you have everything wired up, connect your Arduino via USB. In the Arduino IDE head to Tools > Board and Tools > Port to select which microcontroller and port you are using. The Arduino IDE comes bundled with the sketch we need; you can find it under File > Examples > Firmata > StandardFirmata. Click Upload and you will be ready to go. If you are new to Arduino and your head is melting slightly, check out our beginner's guide to help you get it talking with your computer nicely.

Setting Up Your Unity Project

In Unity, open Window > Asset Store to access Unity's Asset Store from within the Unity Editor. Search the Asset Store for the Uniduino plugin. This plugin will allow you to receive and send data to and from your Arduino pins inside Unity. The plugin at the time of writing costs $30. It is possible to do this project without buying the plugin, though it is rather more complicated and you may find the plugin more convenient all round.

This video from the creators of the plugin takes you through the process of testing everything is working, along with first-time setup. Note that you may also have to reset the Unity editor on Windows.

We can use this same test panel to test out our controller. Set Pin D2 to INPUT and Digital. Further down, set Pin A5 to ANALOG. Your potentiometer and button should display values on screen next to their pin numbers now. Progress!

Now to Make Something We Can Control

So we have a controller, but what shall we control? Well, the possibilities are endless, but for today we shall create a very simple dodging game to test out our new control system. We will move over the game setup quite quickly, so if you are totally new to the Unity engine you may find our Unity Game Programming Beginner's Guide useful to get your bearings.

We will build a very basic game in which your aim is to dodge your sphere to the left and right to avoid falling cubes, which will utilize your newly made custom controller.

Create a new scene and drag the Uniduino prefab from Assets > Uniduino > Prefabs into your hierarchy. We need it there to do the talking between our game and controller.
In the Unity hierarchy click Create > Sphere and use the Transform tab in the Inspector to move it to the bottom of the game screen.

It's Time to Get Coding

Now to add some code to this party. With the sphere selected in the Hierarchy, click Add Component > New Script at the bottom of its Inspector window. Name it sphereMover and select C Sharp from the drop-down menu. Click Create and Add and the script will be added to the GameObject. Double-click on it to open the script and enter this code:

using UnityEngine;
using System.Collections;
using Uniduino;

public class sphereMover : MonoBehaviour
{
    //Headers aren't strictly necessary, but they make life easier back in the Inspector.
    [Header("Arduino Variables")]

    //we need to declare the Arduino as a variable
    public Arduino arduino;

    //we need to declare an integer for the pin number of our potentiometer,
    //making these variables public means we can change them in the editor later
    //if we change the layout of our arduino
    public int potPinNumber;

    //a float variable to hold the potentiometer value (0 - 1023)
    public float potValue;

    //we will later remap that potValue to the x position of our sphere and hold it in this variable
    public float mappedPot;

    //public int for our button pin
    public int buttonPinNumber;

    [Header("Sphere Variables")]

    //variables to hold the values we noted earlier for the sides of our screen
    public float leftEdge;
    public float rightEdge;

    // Use this for initialization
    void Start ()
    {
        //and initialize we shall, starting with the Arduino variable.
        //we are only using one arduino, so we can use Arduino.global to grab it.
        arduino = Arduino.global;
        arduino.Setup(ConfigurePins);
    }

    void ConfigurePins ()
    {
        //configure the Arduino pin to be analog for our potentiometer
        arduino.pinMode(potPinNumber, PinMode.ANALOG);

        //Tell the Arduino to report any changes in the value of our potentiometer
        arduino.reportAnalog(5, 1);

        //configure our Button pin
        arduino.pinMode(buttonPinNumber, PinMode.INPUT);
        arduino.reportDigital((byte)(buttonPinNumber / 8), 1);
    }
}

Take a moment to read through the code comments. So far, we have declared some variables for our Arduino, its pins, and our Sphere. We have also used the Start and ConfigurePins methods to initialize our Arduino at run time. Let's save our script, and go back into the Unity editor and see what's changed.

We can now see our public variables in the Inspector window. Let's see what we can enter at this stage to help us later. We know what pins we are using on the Arduino from our build earlier, so we can enter them. We also know from our experiment earlier how far we want our sphere to be able to travel left and right so it does not fall off the screen. Let's enter these values now.

First Signs of Life

It's time to actually see values from our Arduino inside the Unity Editor. For now, we can add one line of code to our sphereMover script's Update function, and save the script again.

void Update ()
{
    //We assign the value the arduino is reading from our potentiometer to our potValue variable
    potValue = arduino.analogRead(potPinNumber);
}

Now that we have our potValue variable being updated every frame, we can see its value in real time in the Unity Inspector. Before we give it a test, now would be a good time to check that the Uniduino plug-in is listening on the right port. Click on Uniduino in the Hierarchy, and check its Port Name in the Inspector. If it is blank, fill in the correct port number for your Arduino. In this case it was COM4, though it may be different for you.
Check using the Arduino IDE if you're not sure.

Select your sphere in the hierarchy and click the Play button at the top of the screen. The system needs a few seconds to initialise, after which you should start seeing the Pot Value variable change in the Inspector when you move the potentiometer. Now we are talking! Well, strictly speaking Unity and the Arduino are talking, but who's counting? If you have got this far and are not seeing the value change in the Inspector, check over the setup steps, and make sure you have the correct Port selected for your Arduino.

Let's Move This Sphere

Now that we have our potValue variable being updated, we want to use this value to move our sphere. When the potentiometer is all the way to the left we want the sphere to be at the left side of the screen, and vice versa. Objects in Unity are positioned at a point in vector space, determined by the values of their Transform.position. In the below image, where the sphere is at the furthest point to the left we would want it, you can see that its position vector is 9.5, -4, 0.

We want to affect the sphere's X position. Unfortunately, using the values from our potentiometer directly will not work, as when the potentiometer is all the way to the left it gives a value of 0, which would put our sphere right in the middle of the screen. At the other extreme, the potentiometer's top value, 1023, would place the cube way off to the right of our screen. Not useful. What we need here is some math.

Why Do Math When Unity Will Do It For You?

For those of you out there dreading staring at a piece of paper covered in nonsensical numbers (although there are some great websites that can help you learn Maths), fear not. We need a way of making our potentiometer values correspond with our sphere's X position. Luckily, we can use an Extension Method.

An Extension Method is a script that does a specific job for us. In this case, we give it the values we have, and it returns them mapped to one another, ready to be used in our sphereMover script. At the top of the Project panel, click Create > C# Script and name it ExtensionMethods. Enter the code below into the script:

using UnityEngine;
using System.Collections;

public static class ExtensionMethods
{
    //our handy dandy Remapper function
    public static float Remap (this float value, float from1, float to1, float from2, float to2)
    {
        return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
    }
}

Save the script, and head back to your sphereMover script. We can now use this Remap function from our ExtensionMethods script in our Update function to convert our potentiometer values into useable values in our game. Under where we just assigned the potValue variable, type the following:

mappedPot = potValue.Remap(0, 1023, leftEdge, rightEdge);

The prompt shows us that our Remap takes two sets of From and To values, and maps them together.

Save your script, head back to the Unity editor, and hit the play button. You should now see that the Mapped Pot variable changes when you move the potentiometer, to correspond with the values we determined for our Left and Right Edges. Take a moment to sit back and thank your ExtensionMethods script. Not a calculator in sight.

Note: if you are noticing that your values are reversed, so when your potentiometer is all the way to the right you are getting a negative value for your Mapped Pot variable, you may have your potentiometer set up the wrong way round. Luckily, you can fix this without doing any rewiring.
You can simply switch the values when you remap them; assuming the edges are stored as before, swapping the last two arguments reverses the mapping:

mappedPot = potValue.Remap(0, 1023, rightEdge, leftEdge);

Now we finally have usable values. Now all that is left to do is to assign those values to our sphere's X position:

//Assign the mapped pot value to the sphere's x position
transform.position = new Vector3(mappedPot, transform.position.y, transform.position.z);

Save your script, head back to the Unity editor and press play. You should now be able to move your Sphere to the left and right using your potentiometer!

Putting the Button to Work

Now that we have our sphere moving, wouldn't it be nice to have a way to slow things up a bit when we get in a tight spot? We are going to use our button to slow down time in our game. Open your sphereMover script, and add this code to your Update function:

//if Unity detects the button is being pressed, the time scale slows down
if (arduino.digitalRead(buttonPinNumber) == 1){
    Time.timeScale = 0.4f;
}
else Time.timeScale = 1.0f;

Now we have the mechanics of our game, let's add some obstacles! We are going to use the natural enemy of the sphere, the cube. In the hierarchy, click Create > 3D Object > Cube. In the cube's Inspector, Add Component > Physics > Rigidbody. Set the Drag value of the Rigidbody to 5. Also, under the Box Collider component in the Inspector, select Is Trigger. This will allow us to detect collisions with our Sphere.

Create a script on the cube and call it collideWithSphere, open the script and delete the Start and Update functions as we won't be needing them this time. Enter this code:

using UnityEngine;
using System.Collections;

public class collideWithSphere : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        Destroy(other.gameObject);
    }
}

OnTriggerEnter sends a message whenever the trigger collider hits another collider. In this instance, we are telling it to destroy whatever it touches. Save the script and head back to the Unity editor. Drag the cube from the hierarchy to the Project panel. You'll notice the text of the cube in the hierarchy has turned blue. This is because we have created a prefab and saved it in our project. Delete your cube from the hierarchy now.

All we need now is a script to spawn the cubes. In the hierarchy click Create > Create Empty, rename it to Game Manager in the Inspector, and add a script to it called gameManager. Open the script and add this code:

using UnityEngine;
using System.Collections;

public class gameManager : MonoBehaviour
{
    //a variable to hold the prefab we want to spawn
    public GameObject cube;

    //we want some variables to decide how many cubes to spawn
    //and how high above us we want them to spawn
    public int numberToSpawn;
    public float lowestSpawnheight;
    public float highestSpawnheight;

    // Use this for initialization
    void Start ()
    {
        for (int i = 0; i < numberToSpawn; i++)
        {
            Instantiate(cube, new Vector3(Random.Range(-9, 9), Random.Range(lowestSpawnheight, highestSpawnheight), 0), Quaternion.identity);
        }
    }

    // Update is called once per frame
    void Update ()
    {
    }
}

Save the script. Back in the editor, select the Game Manager in the hierarchy, and drag your cube prefab from the Project panel to the Cube variable in the Inspector. Fill out the values for your spawning here too. You can fiddle with it to make it as hard or easy as you like. Note that it is worth having your lowest cubes spawn high enough to allow Uniduino to initialise; losing the game before you are able to move may be frustrating!

The Finished Project

Now when you press play, the cubes will spawn above you and fall.
You can use your potentiometer to avoid them, and your button to slow down time. In this project we have created a custom controller with an Arduino, configured Unity and Uniduino to communicate with it, and created a simple game to test it out. The concepts here can be applied to almost any project, and there are even game jams which specialize in custom controllers. With Arduino and Unity you could create a custom controller from almost anything. Have you created a hi-fi that controls a spacecraft? A toaster that controls a platform game? If you’ve made a project like this I’d love to see it! Post it in the comments below! Awesome! Thank you! It was a fun project!
http://www.makeuseof.com/tag/make-custom-game-controller-arduino-unity/
CC-MAIN-2017-26
en
refinedweb
After taking a look at AutoMapper attributes I tried to answer this question, so I made a quick console application to reproduce the behavior. I added (copy-pasted) the classes in the first example from the GitHub documentation:

[MapsTo(typeof(Customer))]
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Notes { get; set; }
}

public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string MyCustomerNotes { get; set; }
}

typeof(Program).Assembly.MapTypes(); //this throws exception

var person = new Person
{
    FirstName = "John",
    LastName = "Lackey"
};
var customer = AutoMapper.Mapper.Map<Customer>(person);

The call to MapTypes throws:

An unhandled exception of type 'System.TypeInitializationException' occurred in ConsoleApplication.exe
{"Sequence contains no matching element"}

Looks like you are using AutoMapper 5, but AutoMapper.Attributes seems only to work with version 4. I tried it with version 4 and it worked as expected. The problem seems to be that a method signature changed, which is looked up via reflection in Attributes.Extensions.
https://codedump.io/share/7iWAs0ddERN5/1/an-unhandled-exception-of-type-39systemtypeinitializationexception39-occurred-in-consoleapplicationexe---automapper
CC-MAIN-2017-26
en
refinedweb
- NAME
- SYNOPSIS
- DESCRIPTION
- EXAMPLES
- NOTES
- BUGS
- VERSION
- AUTHOR
- SEE ALSO

NAME

alias - declare symbolic aliases for perl data
attr - auto-declare hash attributes for convenient access
const - define compile-time scalar constants

SYNOPSIS

use Alias qw(alias const attr);

alias TEN => $ten, Ten => \$ten, Ten => \&ten,
      Ten => \@ten, Ten => \%ten, TeN => \*ten;

{
    local @Ten;
    my $ten = [1..10];
    alias Ten => $ten; # local @Ten
}

const pi => 3.14, ten => 10;

package Foo;
use Alias;

sub new { bless {foo => 1, _bar => [2, 3]}, $_[0] }

sub a_method {
    my $s = attr shift;
    # $foo, @_bar are now local aliases for
    # $_[0]{foo}, @{$_[0]{_bar}} etc.
}

sub b_method {
    local $Alias::KeyFilter = "_";
    local $Alias::AttrPrefix = "main::";
    my $s = attr shift;
    # local @::_bar is now available, ($foo, $::foo are not)
}

sub c_method {
    local $Alias::KeyFilter = sub { $_ = shift; return (/^_/ ? 1 : 0) };
    local $Alias::AttrPrefix = sub { $_ = shift; s/^_(.+)$/main::$1/; return $_ };
    my $s = attr shift;
    # local @::bar is now available, ($foo, $::foo are not)
}

DESCRIPTION

Provides general mechanisms for aliasing perl data for convenient access.

This module works by putting some values on the symbol table with user-supplied names. Values that are references will get dereferenced into their base types. This means that a value of [1,2,3] with a name of "foo" will be made available as @foo, not $foo. The exception to this rule is the default behavior of the attr function, which will not dereference values which are blessed references (aka objects). See $Alias::Deref for how to change this default behavior.

Functions

- alias

Given a list of name => value pairs, declares aliases in the caller's namespace. If the value supplied is a reference, the alias is created for the underlying value instead of the reference itself (there is no need to use this module to alias references--they are automatically "aliased" on assignment). This allows the user to alias most of the basic types. If the value supplied is a scalar compile-time constant, the aliases become read-only. Any attempt to write to them will fail with a run time error.

Aliases can be dynamically scoped by pre-declaring the target variable as local. Using attr for this purpose is more convenient, and recommended.

- attr

Given a hash reference, aliases the values of the hash to the names that correspond to the keys. It always returns the supplied value. The aliases are local to the enclosing block. If any of the values are unblessed references, they are available as their dereferenced types. Thus the action is similar to saying:

alias %{$_[0]}

but, in addition, also localizes the aliases, and does not dereference objects. Dereferencing of objects can be forced by setting the Deref option. See $Alias::Deref.

This can be used for convenient access to hash values and hash-based object attributes. Note that this makes available the semantics of local subroutines and methods. That makes for some nifty possibilities. We could make truly private methods by putting anonymous subs within an object. These subs would be available within methods where we use attr, and will not be visible to the outside world as normal methods. We could forbid recursion in methods by always putting an empty sub in the object hash with the same key as the method name. This would be useful where a method has to run code from other modules, but cannot be certain whether that module will call it back again.
The default behavior is to create aliases for all the entries in the hash, in the caller's namespace. This can be controlled by setting a few options. See "Configuration Variables" for details.

- const

This is simply a function alias for alias, described above. Provided on demand at use time, since it reads better for constant declarations. Note that hashes and arrays cannot be so constrained.

Configuration Variables

The following configuration variables can be used to control the behavior of the attr function. They are typically set after the use Alias; statement. Another typical usage is to localize them in a block so that their values are only effective within that block.

- $Alias::KeyFilter

Specifies the key prefix used for determining which hash entries will be interned by attr. Can be a CODE reference, in which case it will be called with the key, and the boolean return value will determine if that hash entry is a candidate attribute.

- $Alias::AttrPrefix

Specifies a prefix to prepend to the names of localized attributes created by attr. Can be a CODE reference, in which case it will be called with the key, and the result will determine the full name of the attribute. The value can have embedded package delimiters ("::" or "'"), which cause the attributes to be interned in that namespace instead of the caller's own namespace. For example, setting it to "main::" makes use strict 'vars'; somewhat more palatable (since we can refer to the attributes as $::foo, etc., without actually declaring the attributes).

- $Alias::Deref

Controls the implicit dereferencing behavior of attr. If it is set to "" or 0, attr will not dereference blessed references. If it is a true value (anything but "", 0, or a CODE reference), all references will be made available as their dereferenced types, including values that may be objects. The default is "".

This option can be used as a filter if it is set to a CODE reference, in which case it will be called with the key and the value (whenever the value happens to be a reference), and the boolean return value will determine if that particular reference must be dereferenced.

Exports

- alias
- attr

EXAMPLES

Run these code snippets and observe the results to become more familiar with the features of this module.
use Alias qw(alias const attr);

$ten = 10;
alias TEN => $ten, Ten => \$ten, Ten => \&ten,
      Ten => \@ten, Ten => \%ten;
alias TeN => \*ten; # same as *TeN = *ten

# aliasing basic types
$ten = 20;
print "$TEN|$Ten|$ten\n"; # should print "20|20|20"

sub ten { print "10\n"; }
@ten = (1..10);
%ten = (a..j);
&Ten; # should print "10"
print @Ten, "|", %Ten, "\n";

# this will fail at run time
const _TEN_ => 10;
eval { $_TEN_ = 20 };
print $@ if $@;

# dynamically scoped aliases
@DYNAMIC = qw(m n o);
{
    my $tmp = [ qw(a b c d) ];
    local @DYNAMIC;
    alias DYNAMIC => $tmp, PERM => $tmp;

    $DYNAMIC[2] = 'zzz';
    # prints "abzzzd|abzzzd|abzzzd"
    print @$tmp, "|", @DYNAMIC, "|", @PERM, "\n";

    @DYNAMIC = qw(p q r);
    # prints "pqr|pqr|pqr"
    print @$tmp, "|", @DYNAMIC, "|", @PERM, "\n";
}
# prints "mno|pqr"
print @DYNAMIC, "|", @PERM, "\n";

# named closures
my($lex) = 'abcd';
$closure = sub { print $lex, "\n" };
alias NAMEDCLOSURE => \&$closure;
NAMEDCLOSURE(); # prints "abcd"
$lex = 'pqrs';
NAMEDCLOSURE(); # prints "pqrs"

# hash/object attributes
package Foo;
use Alias;

sub new {
    bless {
        foo => 1,
        bar => [2,3],
        buz => { a => 4 },
        privmeth => sub { "private" },
        easymeth => sub { die "to recurse or to die, is the question" },
    }, $_[0];
}

sub easymeth {
    my $s = attr shift; # localizes $foo, @bar, %buz etc with values
    eval { $s->easymeth }; # should fail
    print $@ if $@;

    # prints "1|2|3|a|4|private|"
    print join '|', $foo, @bar, %buz, $s->privmeth, "\n";
}

$foo = 6;
@bar = (7,8);
%buz = (b => 9);

Foo->new->easymeth; # this will not recurse endlessly

# prints "6|7|8|b|9|"
print join '|', $foo, @bar, %buz, "\n";

# this should fail at run-time
eval { Foo->new->privmeth };
print $@ if $@;

NOTES

It is worth repeating that the aliases created by alias and const will be created in the caller's namespace (we can use the AttrPrefix option to specify a different namespace for attr). If that namespace happens to be localized, the aliases created will be local to that block. attr localizes the aliases for us.

Remember that references will be available as their dereferenced types. Aliases cannot be lexical, since, by necessity, they live on the symbol table. Lexicals can be aliased. Note that this provides a means of reversing the action of the anonymous type generators \, [] and {}. This allows us to anonymously construct data or code and give it a symbol-table presence when we choose.

Any occurrence of :: or ' in names will be treated as package qualifiers, and the value will be interned in that namespace. Remember that aliases are very much like references, only we don't have to dereference them as often. Which means we won't have to pound on the dollars so much.

We can dynamically make subroutines and named closures with this scheme. It is possible to alias packages, but that might be construed as abuse. Using this module will dramatically reduce noise characters in object-oriented Perl code.

BUGS

use strict 'vars'; is not very usable, since we depend so much on the symbol table. You can declare the attributes with use vars to avoid warnings. Setting $Alias::AttrPrefix to "main::" is one way to avoid use vars and frustration. Tied variables cannot be aliased properly, yet.

VERSION

Version 2.32, 30 Apr 1999

AUTHOR

Gurusamy Sarathy [email protected]

Copyright (c) 1995-99 Gurusamy Sarathy. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

SEE ALSO

perl(1)
https://metacpan.org/pod/Alias
CC-MAIN-2017-26
en
refinedweb
Hey :) Uhm, I just finished a program today, and I was wondering how I should retrieve the errors that the user may encounter :S I've seen the logo { sa } somewhere, what does it stand for? Or what should I do about this :) Cheers -Jazerix

First you must catch the error (that seems obvious). Then I normally write a plain text file with all the relevant info for the error, like:

*Point in the program.class.function where the error occurred
*Description of the error, i.e. exception class, exception message and exception stack
*Variables affected and their current content

Then I name the file with the application name, username, date and time (using microseconds) and the extension of log. This file can be written in a predefined place on the user's PC (preferred) or in a predefined place on a server (you can have communication and/or permissions problems here). If you write it on the user's PC, you can ask the user to send the text file by mail to you. Hope this helps

Logging is the most effective method as lola pointed out. To get access to the Microsoft bug reports you need to pay them a yearly fee.

Yes, but I suggest not doing so and therefore will not say. "Catching" your entire program is ridiculously bad design and implementation. Fix your bugs, don't hide them.

I.e.: you have a program that opens a file, reads it into a buffer, copies the buffer contents into a richtextbox, highlights the word 'World' if any, and waits for the user to exit. If you do a global try catch, you will probably get a generic Exception like: Null reference found. The stack trace may not be enough to determine the right place where this occurs. Instead, if you do a try catch for every relevant action, you can determine the kind of exception and the action to take in this specific case. I.e.: determine if the open file function fails due to a bad name or a lack of permissions; in the first case, you can interactively ask the user for a well-formed file name. As I mentioned before, you will need to have information about the global and local variables' content to 'clarify' the origin of the error. In a global try catch you will lose the info for local variables in other modules, or inside functions or methods. And yes, it is a lot of work. :( Hope this helps

I wholeheartedly disagree. Per Microsoft, if you want your software to be logo certified (Certified for Windows 7, etc.) you are not supposed to catch all unhandled exceptions, but having your own internal bug reporting mechanism is far better than dealing with Microsoft's WinQual. However you shouldn't just ignore WER (Windows Error Reporting), as some exceptions you'll never have an opportunity to collect but will wind up on WinQual. If you have a code signing certificate and sign your assemblies then check out WinQual. The user interface is somewhat cumbersome to deal with but they expose web APIs to gather WER data. Check out StackHash. It's free and syncs with WER to grab CAB files and other crash data.

As far as handling application crashes internally, here is what I do.
There is a lot of code below, but the two main "catch all" event subscriptions you care about would be Application.ThreadException and AppDomain.CurrentDomain.UnhandledException.

Here is a typical program startup:

sealed class Program : IApplicationExceptionProvider
{
    Program() { }

    /// <summary>
    /// The main entry point for the application.
    /// </summary>
    [STAThread]
    static void Main(string[] args)
    {
        DebugHelper.Configure();
        Program prog = new Program();
        if (!Vea.Extensions.Forms.Log4NetConfigure.ConfigureLog4Net(args))
            return;
        var reporter = ExceptionReporter.WireUpForUnhandledExceptions(prog);
        //continues
    }

    #region IApplicationExceptionProvider Members
    string IApplicationExceptionProvider.FormSkinName { get { return CFG.FormSkinName; } }
    bool IApplicationExceptionProvider.FormSkinsEnabled { get { return CFG.FormSkinsEnabled; } }
    Vea.Extensions.LK.SensourceLicense IApplicationExceptionProvider.LicenseKey { get { return CFG.Key; } }
    string IApplicationExceptionProvider.LastConnectionString { get { return Vea.Data.QueryHistory.LastConnStr; } }
    string IApplicationExceptionProvider.LastQuery { get { return Vea.Data.QueryHistory.LastQuery; } }
    #endregion
}

Then for reporting (there was a bit of code removed explaining why so many empty catches -- it has a few different paths):

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;
using System.Windows.Forms;
using Vea.Extensions;
using Vea.Extensions.LK;

namespace Vea.Extensions.Forms
{
    public sealed class ExceptionReporter : IDisposable
    {
        readonly IApplicationExceptionProvider provider;
        readonly log4net.ILog log;
        bool m_enabled;

        public bool Enabled
        {
            get { return m_enabled; }
            private set { m_enabled = value; }
        }

        /* -------------------------------------------------------------------- */
        ExceptionReporter() : base()
        {
            this.Enabled = false;
        }

        /* -------------------------------------------------------------------- */
        [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Security", "CA2122:DoNotIndirectlyExposeMethodsWithLinkDemands")]
        ExceptionReporter(IApplicationExceptionProvider provider) : this()
        {
            if (provider == null)
                throw new ArgumentNullException("provider");
            this.provider = provider;
            log = log4net.LogManager.GetLogger(provider.GetType());
            WireUp();
        }

        void WireUp()
        {
#if !DEBUG
            m_enabled = true;
            if (m_enabled)
            {
                Application.ThreadException += new ThreadExceptionEventHandler(ThreadException);
                AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
            }
#endif
        }

        /* -------------------------------------------------------------------- */
        void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            Exception ex = e.ExceptionObject as Exception;
            if (e.IsTerminating)
            {
                if (log.IsFatalEnabled)
                    log.Fatal("Unhandled application exception (terminating)", ex);
            }
            else
            {
                if (log.IsErrorEnabled)
                    log.Error("Unhandled application exception (non-terminating)", ex);
            }
            if (ex != null)
            {
                WriteExceptionRunTime(ex, false);
            }
        }

        /* -------------------------------------------------------------------- */
        #region IDisposable Members
        [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Security", "CA2122:DoNotIndirectlyExposeMethodsWithLinkDemands")]
        public void Dispose()
        {
            if (m_enabled)
            {
                Application.ThreadException -= new ThreadExceptionEventHandler(ThreadException);
                AppDomain.CurrentDomain.UnhandledException -= new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
            }
        }
        #endregion

        /* -------------------------------------------------------------------- */
        void ThreadException(object sender, System.Threading.ThreadExceptionEventArgs e)
        {
            WriteExceptionRunTime(e.Exception, true);
        }

        [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes")]
        void WriteExceptionRunTime(Exception error, bool showMessage)
        {
            if (log.IsFatalEnabled)
                log.Fatal("Unhandled application exception", error);
            try
            {
                string msg = "date/time         : " + DateTime.Now.ToString() + Environment.NewLine
                    + "computer name     : " + Environment.MachineName + Environment.NewLine
                    + "user name         : " + Environment.UserName + Environment.NewLine
                    + "operating system  : " + Environment.OSVersion.VersionString + Environment.NewLine
                    + "domain user name  : " + Environment.UserDomainName + Environment.NewLine
                    + "system up time    : " + VeaUtils.GetSystemUptime() + Environment.NewLine
                    + "program up time   : " + VeaUtils.GetProgramUptime() + Environment.NewLine
                    + "allocated memory  : " + (System.Diagnostics.Process.GetCurrentProcess().WorkingSet64 / 1048576).ToString() + " MB" + Environment.NewLine
                    + "physical memory   : " + VeaUtils.GetTotalMemory() + Environment.NewLine
                    + "processor         : " + VeaUtils.GetCpuInformation() + Environment.NewLine
                    + "display mode      : " + SystemInformation.PrimaryMonitorSize + Environment.NewLine
                    + "executable        : " + System.IO.Path.GetFileName(Application.ExecutablePath) + Environment.NewLine
                    + "exec. date/time   : " + System.Diagnostics.Process.GetCurrentProcess().StartTime.ToString() + Environment.NewLine
                    + "path              : " + Application.ExecutablePath + Environment.NewLine
                    + "version           : " + System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.ToString() + Environment.NewLine
                    + "exception class   : " + error.GetType() + Environment.NewLine
                    + "exception message : " + error.Message + Environment.NewLine
                    + "framework ver     : " + Environment.Version + Environment.NewLine
                    + "skin name         : " + provider.FormSkinName + Environment.NewLine
                    + "form skins enabled: " + (provider.FormSkinsEnabled ? "Yes" : "No") + Environment.NewLine
                    + "Company           : " + (provider.LicenseKey != null ? provider.LicenseKey.Company : "Unknown?") + Environment.NewLine
                    + Environment.NewLine
                    + "Stack Trace: " + Environment.NewLine + error.GetErrorText() + Environment.NewLine + Environment.NewLine
                    + "Environment Stack Trace: " + Environment.NewLine + Environment.StackTrace + Environment.NewLine + Environment.NewLine
                    + "Processes: " + Environment.NewLine + VeaUtils.GetProcessList() + Environment.NewLine + Environment.NewLine
                    + "Referenced Assemblies: " + Environment.NewLine + VeaUtils.GetReferencedAssemblies() + Environment.NewLine + Environment.NewLine
                    + "Connection String: " + provider.LastConnectionString + Environment.NewLine
                    + "Last Query: " + Environment.NewLine + provider.LastQuery;

                try
                {
                    Vea.Extensions.VeaUtils.PostRpt(msg);
                }
                catch { }

                //May consider rethrowing the exception so it is submitted to WinQual
                try
                {
                    string bugRptfile = Vea.Extensions.Forms.Functions.GetBugRptFileName();
                    using (StreamWriter writer = new StreamWriter(bugRptfile))
                    {
                        writer.WriteLine(msg);
                        writer.Close();
                    }
                }
                catch { }
            }
            catch { }

            if (showMessage)
            {
                DevExpress.XtraEditors.XtraMessageBox.Show("An unhandled exception has occurred and the application must be restarted."
                    + Environment.NewLine + error.Message,
                    "Error", MessageBoxButtons.OK, MessageBoxIcon.Stop);
                Application.Exit();
            }
        }

        /* -------------------------------------------------------------------- */
        public static ExceptionReporter WireUpForUnhandledExceptions(IApplicationExceptionProvider exceptionProvider)
        {
            if (exceptionProvider == null)
                throw new ArgumentNullException("ExceptionProvider");
            return new ExceptionReporter(exceptionProvider);
        }
        /* -------------------------------------------------------------------- */
    }
}

In another unit I handle submitting the message back to a bug server, and I have a separate server-side component for accepting inbound bug reports then logging them to an SQL database, and one last client application to view bug report feedback. It's optional and the user can elect whether to submit reports back or not. This is a controversial topic and I agree it's not the "best thing" to do, but it works 98% of the time, and for the 2% of failures (e.g. OutOfMemoryException) it will probably wind up on WinQual anyway. ...
https://www.daniweb.com/programming/software-development/threads/379859/bug-reports
CC-MAIN-2018-26
en
refinedweb
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int main()
{
    char buf[15];
    char *buf1 = "CE_and_IST";
    char buf2 = 'z';

    int fd = open("myfile", O_RDWR);
    lseek(fd, 10, SEEK_SET);
    int n = read(fd, buf, 10);
    printf("%d\n", n);

    n = write(fd, (const void *)buf1, 10);
    printf("%s\n", buf);

    lseek(fd, 0, SEEK_SET);
    close(1);
    dup(fd);
    close(fd);
    n = write(1, (const void *)&buf2, 1);
    close(1);
    return 0;
}

The contents of "myfile" before running are

Welcome_to_CIS

and the output for this code, which really confuses me, is

5
_CIS

Does the 5 come from 15 minus 10? I thought the length of the file is 15. What exactly does this code do? The contents of "myfile" after execution are

zelcome_to_CIS
CE_and_IST

How did they get to this?
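A hedged walkthrough of the confusing part (my annotation, not from the original post): if the file is 15 bytes including a trailing newline, reading 10 bytes from offset 10 returns the remaining 5; and because dup(fd) shares the original descriptor's file offset, the final write of 'z' lands at offset 0 even after close(fd). A minimal standalone demonstration of that sharing:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("myfile", O_RDWR);
    lseek(fd, 0, SEEK_SET);

    int fd2 = dup(fd);   /* fd2 shares the SAME file offset as fd */
    close(fd);           /* the open file stays alive through fd2 */

    write(fd2, "z", 1);  /* lands at offset 0: 'W' becomes 'z' */
    close(fd2);
    return 0;
}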
https://www.daniweb.com/programming/software-development/threads/492527/can-someone-explain-this-c-code-to-me-i-m-confused-by-the-output
CC-MAIN-2018-26
en
refinedweb
QTSerialPort read/write

- desperatenewbie

Hi, I'm using Qt 4.8 with QtSerialPort and I am having some trouble communicating with the device. Here is my simple main function:

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    foreach (const QSerialPortInfo &info, QSerialPortInfo::availablePorts()) {
        qDebug() << "Name        : " << info.portName();
        qDebug() << "Description : " << info.description();
        qDebug() << "Manufacturer: " << info.manufacturer();

        // Example use QSerialPort
        QSerialPort serial;
        serial.setPort(info);
        serial.setBaudRate(QSerialPort::Baud2400);
        serial.setDataBits(QSerialPort::Data8);
        serial.setParity(QSerialPort::NoParity);
        serial.setFlowControl(QSerialPort::SoftwareControl);
        if (serial.open(QIODevice::ReadWrite)) {
            qDebug() << "Opened successfully";
        }

        serial.write("PSN");
        if (!serial.waitForReadyRead(1000)) {
            qDebug() << "failed";
        }
        qDebug() << serial.readAll();
        serial.close();
    }

    return a.exec();
}

PSN should return the product serial number of the device. I keep getting "failed" as output and readAll() gives me "" in the console. Can anybody give me a piece of code I can use to make it work, without redirecting me to other posts or the examples that came with QtSerialPort? I wasted hours there. Reading/writing to the device in MATLAB and PuTTY works as expected. It's just in Qt that I get no response. Thanks

Hi and welcome to devnet

this code

qDebug() << QByteArray("\0 1 2 3 4 5 6 7");

produces "". You must check if the received buffer contains or begins with 0x00. Check with

qDebug() << serial.bytesAvailable();
qDebug() << serial.readAll().size();

if the buffer is not empty.

- desperatenewbie

@mcosta Thanks for the help.

qDebug() << serial.bytesAvailable();
qDebug() << serial.readAll().size();

both output 0, which means the buffer is empty. From past experience, the buffer remains empty for about 60-90 ms after a command is sent; I believe waiting for 1000 ms using waitForReadyRead(1000) would have given enough time for the buffer to fill. Is the problem thus with the write() function?

Sorry, I don't have so much experience with QtSerialPort, but if waitForReadyRead returns true it means something arrived on the port.

You didn't mention what device you are trying to talk to... I'm wondering if PSN is the command to get the product serial number, but AFTER that command you need to send a CR or LF? So something like:

serial.write("PSN\r");

or

serial.write("PSN");
serial.putChar( 0x0D );  // CR

The reason I bring this up is that in PuTTY you may have a CR going out without knowing it. The other thing you might try is connecting the serial port's readyRead() signal to a slot and seeing if it ever gets called. This is an indication data is coming in.

I have never tried using QSerialPort outside of the message loop (i.e. 'a.exec()' from main() in your program). I do get the impression that it relies on the message loop to function properly, so this test may not work. If you modify your test program a bit I suspect it will work:

- Create a class derived from QWidget
- Add a 'Test' button with an associated slot
- Put all your test code in the test slot.

In Qt4 you had to use the external QSerialDevice class (this, or some variation of it, was migrated into Qt5 but didn't exist internally as part of Qt4). There was something about the open command mode options that I ran into; I don't remember the details, unfortunately. There is a namespace called 'AbstractSerial' with various options, at least in the version I have.
You might want to use this instead: if (serial.open(AbstractSerial::ReadWrite)){ qDebug()<<"Opened successfully"; }
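Below is a minimal sketch of the readyRead() suggestion from the thread, assuming Qt 5's QtSerialPort module and C++11; it uses a lambda rather than the suggested QWidget subclass to keep it short. The port name "ttyUSB0" and the trailing CR on the command are guesses for illustration, not details from the thread.

#include <QCoreApplication>
#include <QSerialPort>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    QSerialPort serial;
    serial.setPortName("ttyUSB0");             // hypothetical port name
    serial.setBaudRate(QSerialPort::Baud2400);
    serial.setDataBits(QSerialPort::Data8);
    serial.setParity(QSerialPort::NoParity);
    serial.setFlowControl(QSerialPort::SoftwareControl);

    if (!serial.open(QIODevice::ReadWrite)) {
        qDebug() << "open failed:" << serial.errorString();
        return 1;
    }

    // readyRead() is emitted from the event loop whenever new bytes
    // arrive, so no blocking waitForReadyRead() call is needed.
    QObject::connect(&serial, &QSerialPort::readyRead, [&serial]() {
        qDebug() << "received:" << serial.readAll();
    });

    serial.write("PSN\r");                     // command terminated with CR

    // The event loop must run or readyRead() will never fire; this is
    // why reading before a.exec() can appear to return nothing.
    return a.exec();
}

With qmake, this example also needs QT += serialport in the .pro file.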
https://forum.qt.io/topic/52780/qtserialport-read-write
CC-MAIN-2018-26
en
refinedweb
This is only the 2nd program I've ever worked on and I am stuck right off the bat. Our teacher wrote some of the code for us. We just need to call a function for each section of classes, where we remove the spaces between the classes (for example: CMSC 201 CMSC 202 CMSC 301 -- just remove the spaces between the classes, not the space inside 'CMSC 201'). LINK to the classes; we just need to do A-F:

import string

def main():
    # set up course lists
    requiredList = 'CMSC 201 CMSC 202 CMSC 203 CMSC 304 CMSC 313 CMSC 331 ' \
                   'CMSC 341 CMSC 345 CMSC 411 CMSC 421 CMSC 441'
    requiredMath = 'MATH 151 MATH 152 MATH 221'
    mainElectives = 'CMSC 426 CMSC 431 CMSC 435 CMSC 445 CMSC 451 CMSC 455 ' \
                    'CMSC 456 CMSC 461 CMSC 471 CMSC 481 CMSC 483'
    optionalMath = 'MATH 430 MATH 441 MATH 452 MATH 475 MATH 481 MATH 483'
    sci4Cred = 'BIOL 100 CHEM 101 CHEM 102 PHYS 121 PHYS 122'
    sci3Cred = 'BIOL 301 BIOL 252 BIOL 275 BIOL 302 BIOL 303 BIOL 304 ' \
               'BIOL 305 GES 110 GES 111'
    sci2Cred = 'BIOL 100L BIOL 251L BIOL 252L BIOL 275L BIOL 302L BIOL 303L ' \
               'BIOL 304L BIOL 305L CHEM 102L PHYS 122L PHYS 340L'
    userClasses = ''
    numCSReqElectives = 0
    numCSElectives = 0
    numReqMath = 0
    numSciCred = 0
    numOptMath = 0

    # get the classes the user has taken, concatenating them into a list
    # and counting how many of each type there is
    numClasses = input("How many courses in the CS program have you taken? ")
    for i in range(numClasses):
        uclass = raw_input("Enter the class in the form <MAJOR-CODE> <COURSE-NUMBER>: ")
        userClasses = userClasses + ' ' + uclass
        if mainElectives.find(uclass) != -1:
            numCSReqElectives += 1
        if requiredList.find(uclass) == -1 and mainElectives.find(uclass) == -1 \
                and uclass.find('CMSC 4') != -1:
            if uclass.find('CMSC 404') == -1 and uclass.find('CMSC 495') == -1 \
                    and uclass.find('CMSC 496') == -1 and uclass.find('CMSC 497') == -1 \
                    and uclass.find('CMSC 498') == -1 and uclass.find('CMSC 499') == -1:
                numCSElectives += 1
        if requiredMath.find(uclass) != -1:
            numReqMath += 1
        if optionalMath.find(uclass) != -1:
            numOptMath += 1
        if sci4Cred.find(uclass) != -1:
            numSciCred += 4
        elif sci3Cred.find(uclass) != -1:
            numSciCred += 3
        elif sci2Cred.find(uclass) != -1:
            numSciCred += 2

    # restrict user to two optional math classes
    if numOptMath > 2:
        numOptMath = 2

    # adjust counts for part E & F
    if numCSReqElectives > 2:
        numCSElectives += (numCSReqElectives - 2)
        numCSReqElectives = 2

    # call functions to produce output for each part

main()

Here is what I have started on; I started coding it right after the main() function:

def sectionA(requiredList, userClasses):   # picked a name just so it would run
    print
    print 'You still need to take these classes from Section A, Required Computer Science Courses:'
    print
    # every Section A class name is exactly 8 characters ('CMSC 201') and
    # the names are separated by single spaces, so slice the string 8
    # characters at a time, skipping over the separating space each step
    for pos in range(0, len(requiredList), 9):
        uclass = requiredList[pos:pos + 8]
        # .find() returns -1 when the class is not in the user's list
        if userClasses.find(uclass) == -1:
            print uclass

I need a lot of help. I understand about 75% of the logic of what needs to be done, but I only know how to code about 5% of it. Thanks in advance.
https://www.daniweb.com/programming/software-development/threads/226894/string-slicing-string-finding-homework
CC-MAIN-2018-26
en
refinedweb
//// point.h
using namespace System::Drawing;

class point
{
protected:
    int x;
    int y;
    Color col;
public:
    point();
};

//// point.cpp
#include "stdafx.h"
#include "point.h"

point::point()
{
    x = 0;
    y = 0;
    col = Color::Blue;
}

/// Error
c:\...\point.h(10) : error C3265: cannot declare a managed 'col' in an unmanaged 'point'
1>  the value type may not contain members of managed types

Everything compiled without errors until I created the .cpp file. I'm almost sure it's a missing namespace or include, but I don't know which one. Help would be appreciated!
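The error is not about a missing include: point is a native (unmanaged) C++ class, and per C3265 a native class may not contain a data member of a managed type such as System::Drawing::Color. A minimal sketch of one common fix follows, assuming the project is compiled with /clr: declare point as a C++/CLI ref class, which is allowed to hold managed members. (If point must stay native, an alternative is to store the colour as a plain int via Color::ToArgb() and rebuild it with Color::FromArgb() when needed.)

//// point.h
// Sketch only: making point a ref class turns it into a managed type,
// so it is created with gcnew and used through a point^ handle.
using namespace System::Drawing;

ref class point
{
protected:
    int x;
    int y;
    Color col;   // legal here: a ref class may hold managed members
public:
    point();
};

//// point.cpp
#include "stdafx.h"
#include "point.h"

point::point()
{
    x = 0;
    y = 0;
    col = Color::Blue;
}

Call sites change accordingly, e.g. from "point p;" to "point^ p = gcnew point();".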
https://www.daniweb.com/programming/software-development/threads/283009/error-using-color-object-in-class
CC-MAIN-2018-26
en
refinedweb