The controlID of the current hot control. The hot control is one that is temporarily active. When the user presses the mouse button down on a control such as a button, it becomes hot; no other controls are allowed to respond to mouse events while some other control is hot. Once the user releases the mouse button, the control sets hotControl to 0 in order to indicate that other controls can now respond to user input.

using UnityEngine;

public class Example : MonoBehaviour
{
    // Click on the button to see the id
    void OnGUI()
    {
        GUILayout.Button("Press Me!");
        Debug.Log("id: " + GUIUtility.hotControl);
    }
}
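As additional context (a sketch, not part of the original reference page), this is the usual pattern by which a custom IMGUI control claims and releases the hot control; the draggable-box behaviour is illustrative:

using UnityEngine;

public class DraggableBox : MonoBehaviour
{
    void OnGUI()
    {
        Rect box = new Rect(10, 10, 100, 40);
        int controlID = GUIUtility.GetControlID(FocusType.Passive);
        Event e = Event.current;

        switch (e.GetTypeForControl(controlID))
        {
            case EventType.MouseDown:
                // Claim the hot control so no other control handles the drag
                if (box.Contains(e.mousePosition))
                {
                    GUIUtility.hotControl = controlID;
                    e.Use();
                }
                break;
            case EventType.MouseUp:
                // Release the hot control so other controls can respond again
                if (GUIUtility.hotControl == controlID)
                {
                    GUIUtility.hotControl = 0;
                    e.Use();
                }
                break;
        }

        GUI.Box(box, "Drag me");
    }
}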
https://docs.unity3d.com/es/2019.2/ScriptReference/GUIUtility-hotControl.html
CC-MAIN-2021-31
en
refinedweb
Swift version: 5.4

You can calculate the distance between two CGPoints by using Pythagoras's theorem, but be warned: calculating square roots is not fast, so if possible you want to avoid it. More on that in a moment, but first here's the code you need:

func CGPointDistanceSquared(from: CGPoint, to: CGPoint) -> CGFloat {
    return (from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y)
}

func CGPointDistance(from: CGPoint, to: CGPoint) -> CGFloat {
    return sqrt(CGPointDistanceSquared(from: from, to: to))
}

Note that there are two functions: one for returning the distance between two points, and one for returning the distance squared between two points. The latter doesn't use a square root, which makes it substantially faster. This means that if you want to check "did the user tap within a 10-point radius of this position?", it's faster to square that 10 (to make 100) and then use CGPointDistanceSquared().
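As a quick usage illustration of the radius check described above (a sketch added here; the points and radius are arbitrary):

import CoreGraphics

let tap = CGPoint(x: 103, y: 98)
let target = CGPoint(x: 100, y: 100)

// Compare squared distance against the squared radius to avoid sqrt()
let radius: CGFloat = 10
if CGPointDistanceSquared(from: tap, to: target) <= radius * radius {
    print("Tap is within \(radius) points of the target")
}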
https://www.hackingwithswift.com/example-code/core-graphics/how-to-calculate-the-distance-between-two-cgpoints
CC-MAIN-2021-31
en
refinedweb
#include "UT_ArrayMap.h" #include "UT_StringHolder.h" Go to the source code of this file. Definition at line 14 of file UT_ArrayStringMap.h. We want methods like find() to take a const UT_StringRef& instead of a const UT_StringHolder& for the following reasons: Definition at line 25 of file UT_ArrayStringMap.h. Specialization of the above macro for methods that return an iterator range, since something like std::pair<iterator, iterator> is interpreted as two arguments when being passed to a macro (due to the comma). Definition at line 33 of file UT_ArrayStringMap.h.
https://www.sidefx.com/docs/hdk/_u_t___array_string_map_8h.html
CC-MAIN-2021-31
en
refinedweb
A layout manager which arranges widgets horizontally or vertically. More...

#include <Wt/WBoxLayout>

A layout manager which arranges widgets horizontally or vertically.

This layout manager arranges widgets horizontally or vertically inside the parent container. The space is divided so that each widget is given its preferred size, and remaining space is divided according to stretch factors among widgets. If not all widgets can be given their preferred size (there is not enough room), then widgets are given a smaller size (down to their minimum size). If necessary, the container (or parent layout) of this layout is resized to meet minimum size requirements.

The preferred width or height of a widget is based on its natural size, where it presents its contents without overflowing. WWidget::resize() (or the CSS width and height properties) can be used to adjust the preferred size of a widget.

The minimum width or height of a widget is based on the minimum dimensions of the widget or the nested layout. The default minimum height or width for a widget is 0. It can be specified using WWidget::setMinimumSize() or using the CSS min-width or min-height properties.

You should use WContainerWidget::setOverflow(OverflowAuto) or use a WScrollArea to automatically show scrollbars for widgets inserted in the layout, to cope with a size set by the layout manager that is smaller than the preferred size.

When the container of a layout manager does not have a defined size (by having an explicit size, or by being inside a layout manager), or has only a maximum size set using WWidget::setMaximumSize(), then the size of the container will be based on the preferred size of the contents, up to this maximum size, instead of the default behaviour of constraining the size of the children based on the size of the container. Note that because of the CSS defaults, a WContainerWidget has by default no height, but inherits the width of its parent widget. The width is thus by default defined.

A layout manager may provide resize handles between items which allow the user to change the automatic layout provided by the layout manager (see setResizable()).

Each item is separated using a constant spacing, which defaults to 6 pixels, and can be changed using setSpacing(). In addition, when this layout is a top-level layout (i.e. is not nested inside another layout), a margin is set around the contents. This margin defaults to 9 pixels, and can be changed using setContentsMargins(). You can add more space between two widgets using addSpacing().

For each item a stretch factor may be defined, which controls how remaining space is used. Each item is stretched using the stretch factor to fill the remaining space.

Usage example: see the sketch at the end of this entry.

Enumeration of the direction in which widgets are laid out.

Creates a new box layout. This constructor is rarely used. Instead, use the convenient constructors of the specialized WHBoxLayout or WVBoxLayout classes. Use parent = 0 to create a layout manager that can be nested inside other layout managers.

Adds a layout item. The item may be a widget or nested layout. How the item is laid out with respect to siblings is implementation specific to the layout manager. In some cases, a layout manager will overload this method with extra arguments that specify layout options. Implements Wt::WLayout.

Adds a nested layout to the layout. Adds a nested layout, with given stretch factor.

Adds extra spacing. Adds extra spacing to the layout.

Adds a stretch element. Adds a stretch element to the layout. This adds an empty space that stretches as needed.

Adds a widget to the layout. Adds a widget to the layout, with given stretch factor. When the stretch factor is 0, the widget will not be resized by the layout manager (stretched to take excess space). The alignment parameter is a combination of a horizontal and/or a vertical AlignmentFlag OR'ed together.

Removes and deletes all child widgets and nested layouts. This is similar to WContainerWidget::clear(), with the exception that the layout itself is not deleted. Implements Wt::WLayout.

Returns the number of items in this layout. This may be a theoretical number, which is greater than the actual number of items. It can be used to iterate over the items in the layout, in conjunction with itemAt(). Implements Wt::WLayout.

Returns the layout direction.

Inserts a nested layout in the layout. Inserts a nested layout in the layout at position index, with given stretch factor.

Inserts extra spacing in the layout. Inserts extra spacing in the layout at position index.

Inserts a stretch element in the layout. Inserts a stretch element in the layout at position index. This adds an empty space that stretches as needed.

Inserts a widget in the layout. Inserts a widget in the layout at position index, with given stretch factor. When the stretch factor is 0, the widget will not be resized by the layout manager (stretched to take excess space).

Returns whether the user may drag a particular border. This method returns whether the border that separates item index from the next item may be resized by the user.

Returns the layout item at a specific index. If there is no item at the index, 0 is returned. Implements Wt::WLayout.

Removes a layout item (widget or nested layout). Implements Wt::WLayout.

Sets the layout direction.

Sets whether the user may drag a particular border. This method sets whether the border that separates item index from the next item may be resized by the user, depending on the value of enabled. The default value is false. If an initialSize is given (that is not WLength::Auto), then this size is used for the size of the item, overriding the size it would be given by the layout manager.

Sets spacing between each item. The default spacing is 6 pixels.

Sets the stretch factor for a nested layout. The layout must have previously been added to this layout using insertLayout() or addLayout(). Returns whether the stretch could be set.

Sets the stretch factor for a widget. The widget must have previously been added to this layout using insertWidget() or addWidget(). Returns whether the stretch could be set.

Returns the spacing between each item.
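A hedged sketch of typical usage, consistent with the behaviour described above (the widgets shown are illustrative, not the original example from this page):

#include <Wt/WApplication>
#include <Wt/WContainerWidget>
#include <Wt/WHBoxLayout>
#include <Wt/WText>

void buildLayout(Wt::WContainerWidget *parent)
{
    // A horizontal box layout: the first item keeps its preferred size,
    // the second gets a stretch factor of 1 and takes the remaining space.
    Wt::WHBoxLayout *layout = new Wt::WHBoxLayout();
    layout->addWidget(new Wt::WText("Item 1"));
    layout->addWidget(new Wt::WText("Item 2"), 1);
    layout->setSpacing(6); // the default; shown for illustration
    parent->setLayout(layout);
}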
https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1WBoxLayout.html
CC-MAIN-2021-31
en
refinedweb
Environment

Development: Amazon EC2 instance. Operational: Unknown at this time.

Scenario

This is a *nix question but things start out in php, so please bear with me. I'm trying to get a php script (php_script_1) to run another php script (php_script_2) in the background. This will allow php_script_1 to complete and return (with a suitable message to the user), having kicked off the lengthy php_script_2. The output of php_script_2 will be cached (in a database) for future use.

Progress to date

I have identified php's shell_exec() and a neat little "run_in_background" function wrapper to issue commands.

function run_in_background($Command, $Priority = 0) {
    if ($Priority) {
        $PID = shell_exec("nohup nice -n $Priority $Command 2> /dev/null & echo $!");
    } else {
        $PID = shell_exec("nohup $Command 2> /dev/null & echo $!");
    }
    return($PID);
}

With this, I can successfully call my php_script_2 (from php_script_1) with:

...
$command = "php /var/www/make_objects.php";
$priority = 0;
$pid = run_in_background($command, $priority); //kick off background process
...

make_objects.php is the main target of my current development effort so it isn't yet fully working, but at least I can call it.

What I need

I need to pass arguments to php_script_2. The equivalent href (HTTP request) would be .../make_objects.php?dt=data99&z=4, so these arguments become available in php as $_REQUEST['dt'] and $_REQUEST['z'].

What I have tried

I tried the following:

...
$command = "php /var/www/make_objects.php?dt=data99&z=4";
$priority = 0;
$pid = run_in_background($command, $priority); //kick off background process
...

but get the error: Could not open input file: /var/www/make_objects.php?dt=data99&z=4. The *nix command parser clearly doesn't allow that. Research on the web has failed to identify the correct approach.

My Question

How do I construct $command to call /var/www/make_objects.php and pass named arguments dt=data99 and z=4?

Many thanks in advance to the *nix guru(s) who understand this stuff.

Airshow
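One common approach, sketched here as a follow-up (this is not an answer from the thread itself): pass the values as plain command-line arguments and read them from $argv in the called script. Variable names follow the code above.

// php_script_1: pass positional arguments instead of ?dt=...&z=...
$command = "php /var/www/make_objects.php "
         . escapeshellarg("data99") . " " . escapeshellarg("4");
$pid = run_in_background($command, 0); //kick off background process

// make_objects.php: read the arguments back when run from the command line
$dt = isset($argv[1]) ? $argv[1] : null;
$z  = isset($argv[2]) ? $argv[2] : null;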
https://www.daniweb.com/hardware-and-software/linux-and-unix/threads/386448/command-line-equivalent-of-http-request-with-query-string
CC-MAIN-2015-40
en
refinedweb
#include "petscsys.h" PetscErrorCode PetscOptionsSetValue(const char iname[],const char value[])Not collective, but setting values on certain processors could cause problems for parallel objects looking for options. Developers Note: Uses malloc() directly because PETSc may not yet have been fully initialized Level:intermediate Location:src/sys/objects/options.c Index of all Sys routines Table of Contents for all manual pages Index of all manual pages
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscOptionsSetValue.html
CC-MAIN-2015-40
en
refinedweb
IRC log of tagmem on 2003-11-10 Timestamps are in UTC. 19:25:59 [RRSAgent] RRSAgent has joined #tagmem 19:26:01 [Zakim] Zakim has joined #tagmem 19:26:04 [Ian] zakim, this will be TAG 19:26:04 [Zakim] ok, Ian; I see TAG_Weekly()2:30PM scheduled to start in 4 minutes 19:26:28 [Ian] Ian has changed the topic to: 19:51:58 [Stuart] Stuart has joined #tagmem 19:59:10 [Zakim] TAG_Weekly()2:30PM has now started 19:59:17 [Zakim] +Norm 20:00:31 [TBray] TBray has joined #tagmem 20:00:55 [Zakim] +??P1 20:01:13 [Stuart] zakim, ??P1 is me 20:01:13 [Zakim] +Stuart; got it 20:01:26 [Zakim] +DOrchard 20:01:54 [Zakim] +Tim_Bray 20:01:59 [Ian] zakim, call Ian-BOS 20:01:59 [Zakim] ok, Ian; the call is being made 20:02:00 [Zakim] +Ian 20:02:34 [DaveO] DaveO has joined #tagmem 20:02:44 [Ian] Regrets: CL 20:02:52 [Ian] At risk: PC 20:03:14 [Ian] NW: I may have to drop off suddenly; excuse me in advance. 20:03:28 [Zakim] +Roy 20:03:43 [Ian] [RF back in 10 mins] 20:03:48 [Zakim] -Roy 20:04:27 [Stuart] zakim, who is here? 20:04:27 [Zakim] On the phone I see Norm, Stuart, DOrchard, Tim_Bray, Ian 20:04:28 [Zakim] On IRC I see DaveO, TBray, Stuart, Zakim, RRSAgent, Ian, Norm 20:05:09 [DanC] DanC has joined #tagmem 20:05:16 [Zakim] +DanC 20:05:39 [Zakim] +??P4 20:06:18 [Stuart] zakim, ??p4 is PaulC 20:06:18 [Zakim] +PaulC; got it 20:06:46 [Stuart] zakim, who is hre 20:06:46 [Zakim] I don't understand 'who is hre', Stuart 20:06:52 [Stuart] zakim, who is here? 20:06:52 [Zakim] On the phone I see Norm, Stuart, DOrchard, Tim_Bray, Ian, DanC, PaulC 20:06:53 [Zakim] On IRC I see DanC, DaveO, TBray, Stuart, Zakim, RRSAgent, Ian, Norm 20:07:34 [Ian] Roll call: PC, SW, DC, IJ, NW, DO, TB. 20:07:40 [timbl] timbl has joined #tagmem 20:07:43 [Ian] Regrets: CL. 20:07:45 [Ian] TBL arriving 20:07:48 [Ian] RF back in a few. 20:07:58 [Ian] Resolved to accept minutes of 27 Oct teleconf 20:08:05 [Ian] 20:08:14 [Ian] Accept the minutes of the 3 Nov teleconference? 20:08:20 [Ian] NW: I skimmed, looks ok. 20:08:32 [DanC] interesting... XMLVersioning-41 20:08:50 [Ian] DO: Please do not accept the 3 Nov minutes as accurate record. 20:08:55 [Ian] DO: I will review them over the next few days. 20:09:10 [Zakim] +TimBL 20:09:18 [Ian] Accept this agenda? 20:09:28 [Ian] 20:10:14 [Ian] zakim, ??P4 is Paul 20:10:14 [Zakim] sorry, Ian, I do not recognize a party named '??P4' 20:10:19 [Ian] zakim, mute PaulC 20:10:19 [Zakim] PaulC should now be muted 20:10:27 [Ian] zakim, unmute PaulC 20:10:27 [Zakim] PaulC should no longer be muted 20:11:49 [Ian] --- 20:11:54 [Ian] Tech Plenary expectations 20:12:07 [Ian] SW: Meeting M + T is lesser of all evils; maximizes head count. 20:12:36 [Ian] NW, TBL: We have conflicts. 20:12:46 [TBray] q+ 20:12:51 [Ian] DC: I'd suggest Th and Fri instead 20:13:18 [Ian] TBray: I suggest that instead we use the whole time to liaise. 20:13:28 [Ian] ...with other groups who will be meeting there. 20:13:56 [TBray] q- 20:14:48 [Norm] q+ 20:14:50 [Ian] SW: I have the feeling no good fit for NW, or perhaps CL except for Friday. 20:14:58 [Ian] IJ has conflicts Th/Fri 20:15:03 [Ian] ack DanC 20:15:03 [Zakim] DanC, you wanted to express a preference for Th/Fr TAG and to 2nd bray's proposal, ammended to include a social time, such as dinner 20:15:41 [Ian] DC: If the TAG meets, I'll be there. If we don't spend a whole day meeting, I'd like to distribute task of organizing liaison and also a TAG social meeting. 20:15:46 [Ian] NW: I'm happy to do as TB suggests. 
20:15:48 [timbl] q+ dave paul 20:15:51 [Ian] ack Norm 20:16:11 [Stuart] ack dave 20:17:18 [Ian] DO: I'd like to use some ftf time to go over findings (e.g., extensibility finding) 20:17:43 [Ian] DO: Maybe we could set aside one day to look at TAG material. 20:17:43 [TBray] q+ 20:17:45 [Stuart] ack paul 20:17:59 [Ian] PC: Heads-up for scheduling on the margin. 20:18:20 [Ian] PC: Recall that there may be new TAG participants. 20:18:37 [Ian] PC: It would be appropriate to have a ftf meeting early in the year to get them engaged. 20:19:01 [Ian] PC: My original proposal was that we not meet in March, but rather ftf in February to bring on new folks. 20:19:10 [Stuart] ack TBray 20:19:50 [Ian] TBray: For me the tech plenary is a valuable opportunity to liaise. It's not easy for me to go to the south of France for just one day. 20:20:25 [Ian] TBray: So I am willing to go if we either have a first-rate ftf meeting or to do real liaison work. 20:21:35 [Zakim] +Roy 20:21:43 [Ian] TBL: I would be happy to attend the RDF core meeting one day and a tag ftf the other day. 20:22:20 [DanC] s/RDF Core/RDF Interest/ 20:22:25 [DaveO] q+ 20:24:36 [Ian] ack DaveO 20:24:42 [Ian] DO: I'd like the TAG to meet the week of the TP. 20:25:16 [Ian] DO: TAG mtg in jan/feb looking tough for me. 20:25:33 [Ian] DO: I have a strong pref for week of TP. 20:25:41 [TBray] q+ 20:26:05 [Ian] SW: Propose to use the TP week for liaisons, and TAG ftf meeting on Tuesday. 20:26:32 [Ian] SW: I will arrange liaisons (with help) with other groups during the week. 20:26:36 [Ian] Brainstorm list of groups: 20:26:43 [Ian] HTML WG (xlink) 20:26:47 [Ian] I18N (charmod) 20:27:13 [Ian] Web Services (wsdl + REST, issue 37) 20:27:38 [Ian] PC: Meet with Schema about extensibility. 20:28:23 [DaveO] I agree with Stuart's proposal and vote yes. 20:28:46 [Ian] 20:30:19 [Ian] Proposal: 20:30:28 [Ian] - TAG meeting Tuesday 20:30:35 [Ian] - Arrange to liaise with other groups around that. 20:30:39 [Ian] PC: XML Core on ID? 20:30:57 [Ian] IJ: When is the binary xml workshop? 20:31:30 [timbl] q+ to point out that this is all reather dependent on the other groupos being able to make time on their schedules 20:31:47 [Ian] TB, TBL, NW: Like the proposal. 20:31:52 [DanC] DC too 20:32:15 [Ian] Resolved: Adopt proposal for meeting during tech plenary week. 20:33:05 [Ian] PC: Do we plan to invite old and new participants at the first meeting of new folks? 20:33:12 [Ian] SW, DO: Yes 20:34:22 [Ian] TBL: What about a video conf earlier than tech plenary? E.g., January. 20:34:47 [Ian] SW: One reason to wait until Feb is for new participants. 20:35:37 [Ian] Action SW: Explore possibility of TAG videolink mtg in February, with help from PC. 20:35:51 [Ian] q+ 20:35:54 [Ian] ack TBray 20:35:56 [Ian] ack timbl 20:35:56 [Zakim] timbl, you wanted to point out that this is all reather dependent on the other groupos being able to make time on their schedules 20:36:07 [Ian] q- 20:36:09 [Ian] ===== 20:36:19 [Ian] 1.2 TAG Nov face-to-face meeting agenda 20:36:29 [TBray] We'll be flexible and available, I'd hope that they would too. 20:36:30 [Ian] Meeting and agenda page 20:36:37 [Ian] 20:36:52 [Ian] SW: My proposal was to have people send written reviews of the arch doc. 20:37:03 [TBray] BTW, we'd be available for liaisons on Tuesday too, right? 20:37:21 [Ian] DC: I can't read anything new between now and meeting. 20:37:24 [Ian] q+ 20:38:09 [Ian] IJ: I was planning to have next editor's draft tomorrow. 20:38:24 [Ian] DC: That will be counter-productive in my opinion. 
20:38:45 [Ian] DC: Please get endorsement before you ask for objections. 20:39:08 [Ian] [27 Oct draft: ] 20:39:44 [DanC] s/in my opinion/for my purposes/ 20:40:21 [Ian] RF: I will do a review of 11 Nov draft. 20:40:27 [Norm] So will I 20:40:30 [Ian] TBL: I expect to download before getting on plane. 20:40:47 [timbl] Trouble is, I was going to edit slides on the plane. 20:41:14 [Ian] DC: Re ftf agenda and last call decision: I think it would be great to say at the meeting "Yes, this doc is ready for last call." I think that we are likely to make more edits. 20:41:53 [Norm] q+ 20:41:56 [Ian] TBray: I'd like to have a TAG decision on the substance of my request. 20:41:57 [Ian] q- 20:42:34 [Ian] DO: I can agree to no more major structural changes, but not to point on new material (since NW and I have been working on extensibility and versioning material). 20:42:38 [Ian] ack Norm 20:42:54 [Ian] NW: I am unhappy with the current extensibility section and would like it fixed. 20:43:31 [Ian] q+ 20:44:03 [Ian] TBray: I think that abstractcomponentrefs is not cooked enough to be in the arch doc. 20:44:39 [DaveO] grumble. I did my action item to create material in abstractcomponentrefs for inclusion in the web arch.... 20:45:08 [TBray] yeah, but it's a way harder issue. 20:45:56 [Norm] q+ 20:46:19 [TBray] ack Ian 20:46:20 [Ian] IJ: I don't have need to make big structural changes; I suspect TAG may want to at FTF meeting. 20:46:20 [Ian] q- 20:46:21 [TBray] ack Norm 20:46:24 [timbl] q+ paul 20:46:38 [Ian] NW: My comment was that nobody on the TAG should make substantial changes except for versinoing sectino. 20:46:40 [Ian] ack paul 20:46:46 [Ian] PC: I think the TAG needs to be date-driven. 20:46:54 [Ian] q+ 20:47:05 [TBray] +1 to Norm's formulation 20:47:28 [Ian] ack Ian 20:47:47 [Ian] IJ: I would like to walk through my announced intentions before I make a complete commitment. 20:47:54 [Ian] PC: I think we need to be date-driven at this point. 20:48:06 [timbl] Ian, did the rep'n diagram have text in the "representation" box? 20:49:48 [Ian] DC: I am not yet satisfied that the TAG ftf meeting is clear enough about which document we'll be discussing. 20:49:49 [timbl] I think the diagram is misleading now. 20:51:01 [DanC] W3C process calls for ftf agendas 2 weeks in advance. I expect documents to stabilize in at that time. I gather I'm not gonna get what I want this time. 20:53:05 [Ian] REsolved: If IJ finishes draft by tomorrow, we will review that at the ftf meeting. 20:53:16 [TBray] not now 20:53:37 [DanC] I can't seem to find my last end-to-end review... I'm pretty sure it was a bit before 1Aug. 20:54:10 [Ian] [TAG will review AC meeting slides at ftf meeting] 20:54:29 [Ian] =================================== 20:54:49 [Ian] 2.2 XML Versioning (XMLVersioning-41) 20:54:58 [Ian] Proposal from DO: 20:55:07 [Ian] 20:55:14 [Ian] Proposal from IJ: 20:55:23 [Ian] 20:55:54 [timbl] 20:56:13 [Ian] 20:56:54 [timbl] The latter is Ian's shortened version for arch doc 20:56:55 [Ian] [IJ summarizes] 20:57:08 [Ian] DO: We talked about use of namespaces names on the thread. 20:57:49 [Ian] IJ: See status section for my expectations regarding namespaces. 20:58:04 [DanC] status section of what? 
20:58:52 [TBray] 4.6.2 of 20:59:40 [DaveO] q+ 20:59:50 [Stuart] q+ paul 20:59:53 [Norm] q+ to note that Ian said "only make backwards compatible" but left that out of his proposed text 21:00:03 [Stuart] q- paul 21:00:13 [TBray] q+ to agree with Stuart's comment that the level of detail in webarch and the walsh/orchard draft is violently different 21:00:18 [Ian_] Ian_ has joined #tagmem 21:00:27 [timbl] q+ to note that the ownership and change issues with nmaepsaces are similar to te problems with document in general, and expectation shoudl be set. 21:00:54 [Ian_] DC: Not all namespaces have owners. Delegated ownership is a special case. 21:00:55 [Ian_] DC: I'd prefer to generalize rather than limit scope. 21:01:00 [Ian_] DC: The general point is that the Web community agrees on what URIs mean. This is just one case of that. 21:01:09 [Stuart] ack danC 21:01:09 [Zakim] DanC, you wanted to note problems with "Only namespace owner can change namespace" 21:01:28 [Ian_] IJ: I wanted to address issue of "changing namespaces" by saying "Document your change expectations" 21:01:44 [Ian_] TBL: I think we can include the specific case of http; you lose a lot of power in generalizing. 21:01:51 [Ian_] DO: What about URN? 21:01:58 [Ian_] TBL: What if they use a UUID? Depends on the URN scheme. 21:02:34 [Ian_] NW: The URI Scheme shouldn't have any bearing on this discussion. 21:03:01 [Ian_] TBL: HTTP allows you to own a URI, through DNS delegation, you have a right to declare what it means. In those circumstances, it makes sense to state your change expectations. 21:03:19 [Stuart] ack DaveO 21:03:29 [Ian_] [TBL seems to assert IJ's proposal to include a good practice note to document change policy] 21:03:37 [Ian_] s/assert/support 21:04:10 [Ian_] DO: One of the problems I had with IJ's proposal is that it didn't include all of the good practice notes that were in our text. In particular, requiring a processing model for extensions. 21:04:20 [DanC] [good practice notes are fine in specific cases of general principles; but if we can't say what the general principle is, we haven't done our job] 21:04:28 [Ian_] q+ to respond to DO 21:05:29 [timbl] s/User agent/agent/ 21:06:03 [DanC] is there some reason to rush this discussion? 21:06:38 [Norm] I want some text in the 11 Nov webarch draft. 21:06:55 [DanC] ah; I see, thx Norm. 21:06:59 [Ian_] q- Ian_ 21:07:08 [Ian_] DO: I think these strategies need to be called out even more. 21:07:14 [Stuart] ack PaulC 21:07:18 [Zakim] +Roy_Fielding 21:07:21 [Zakim] -Roy 21:07:24 [Ian_] PC: I have not yet read IJ's proposal since he sent Friday. 21:07:48 [Ian_] PC: Stability of namespaces should appear in finding. 21:08:19 [Roy] Roy has joined #tagmem 21:08:25 [Ian_] PC: I would support more advice on namespace change policies. 21:08:25 [timbl] q+ to say yes there is something. 21:08:58 [Ian_] PC: There seems to be a tremendous amount of content on single-namespace languages; less on multiple namespace strategies. 21:09:07 [Ian_] PC: Is the finding focused on a single namespace problem? 21:09:22 [timbl] q+ to say yes there is something. : Note that a condition of documents reaching CR status will be that the clauses 2 and 3 will no longer be usable, to give the specification the necessary stability. 21:09:29 [Ian_] DO: That is one of the splits in the finding. The finding doesn't go into enough detail on pros and cons of extension strategies. 21:10:48 [Ian_] PC: I was just pointing to IJ's point on stability. 21:11:19 [DaveO] ian, that's somewhat incorrect. 
"details on pros and cons of extension strategies on the use of multiple namespaces". 21:11:20 [Ian_] PC: I think we have to seriously consider talking about mixed namespace docs since that's one of our issues. 21:12:27 [Ian_] TBL: namespace policy for W3C specs is linked from W3C Guide. 21:12:44 [Ian_] TBL: The requirement is to indicate change policy; also when namespace becomes fixed (at CR). 21:13:20 [DanC] it is policy. 21:13:35 [Ian_] PC: We could include W3C policy as an example in arch doc. 21:13:43 [Norm] ack norm 21:13:43 [Zakim] Norm, you wanted to note that Ian said "only make backwards compatible" but left that out of his proposed text 21:14:07 [Ian_] NW: Warning about putting namespace material in section on namespaces. 21:14:20 [Ian_] [IJ expects to include xrefs] 21:14:34 [Ian_] NW: For draft tomorrow, I'd like for us to err on the side of including more text rather than less. 21:15:09 [Ian_] NW: The one critical piece not in IJ's proposal is forwards/backwards, closed/open systems, development times. 21:15:35 [Stuart] ack TBray 21:15:35 [Zakim] TBray, you wanted to agree with Stuart's comment that the level of detail in webarch and the walsh/orchard draft is violently different 21:15:48 [Ian_] TBray: I don't think the community is close on semantics or even desirability of mixed namespace docs. I don't think we can go there yet. 21:16:14 [Ian_] TBray: I have just read IJ's text. I agree with IJ's point that the level of detail of DO/NW text is greater than rest of arch doc. 21:16:32 [Ian_] TBray: I would by and large be ok with IJ's text. 21:16:52 [Ian_] TBray: I think IJ has come close to an 80/20 point. 21:17:19 [Ian_] TBray: On for/back compatibility, I don't know that it is required to be included. I agree that the finding should have the details since these are complex issues. 21:17:49 [Ian_] TBray: I am concerned that if you talk about f/b compatibility, you fall over the slippery slope that might require 8 pages of details. 21:18:03 [Ian_] TBray: Perhaps mention f/b compatibility as an example of what's important, with a link to the finding. 21:18:34 [Ian_] DO: Do you think additional material is required to be sufficient? 21:18:52 [Ian_] TBray: IJ's draft is close to being sufficient. It's fine for the arch doc to point off to findings for more detail. 21:19:45 [Stuart] ack timbl 21:19:45 [Zakim] timbl, you wanted to note that the ownership and change issues with nmaepsaces are similar to te problems with document in general, and expectation shoudl be set. and to say yes 21:19:48 [Zakim] ... there is something. and to say yes there is something. : Note that a condition of documents reaching CR status will be that the clauses 2 and 3 21:19:52 [Zakim] ... will no longer be usable, to give the specification the necessary stability. 21:20:04 [Ian_] TBray: I don't think IJ's draft is seriously lacking anything. Mention of f/b compatibility a good idea. 21:20:42 [Ian_] TBL: On the issue of mixed namespaces, it may be worth saying that if you are designing a mixed name doc in XML right now, no general solution. But that if you do so for RDF, there is a well-defined solution. 21:21:30 [timbl] There is a well-defined solution for mixing of RDF ontologies. 21:21:58 [timbl] RDF does not provdie a solution for how to mix arbitrary XML namespaces for non-RDF applications. 21:24:11 [Ian_] DO: I propose to work with IJ to find a middle ground. 21:24:11 [Ian_] DC 21:24:21 [Ian_] DC: It's ok for me if last call draft says nothing about versioning. 
21:24:22 [Stuart] ack DanC 21:24:44 [Norm] q+ 21:24:47 [Ian_] TBray: I"d be happier with IJ's most recent draft rather than nothing. 21:25:06 [Ian_] DC: The tactic of putting more text in and cutting back is not working for me. 21:25:33 [Ian_] NW: I would like the arch doc to include some text in the arch doc. 21:25:49 [TBray] q+ 21:26:25 [DaveO] q+ 21:26:25 [Ian_] NW: I am happier with IJ's text than nothing; but I'd like to work with IJ to include a few more things in tomorrow's draft, and discuss at ftf meeting. 21:26:42 [timbl] I would be OK with skipping versioning for the arch doc last call in the interests of expediancy of consesnsus of tag. Would be happier with ian's current text, if consesnus of tag. 21:26:48 [Ian_] NW: My slightly preferred solution is to add all of DO/NW good practice notes for discussion at ftf meeting. 21:27:01 [Zakim] -PaulC 21:27:02 [Ian_] PC: I have to go; I'm flexible on solution. 21:27:06 [Norm] q? 21:27:08 [Norm] ack norm 21:27:27 [Ian_] TBray: I am sympathetic for a subgroup to work on some text for inclusion in tomorrow's draft. 21:27:36 [Ian_] TBray: I am not excited about adding a lot more stuff. 21:28:06 [Norm] q? 21:28:17 [Stuart] ack TBray 21:28:18 [Ian_] TBray: note that I'm a big fan of the finding. But I think we need to stick closer to IJ's level of detail and length. 21:28:43 [Ian_] DO: I would be disappointed if IJ's draft was the extent of material that was included in the arch doc. 21:29:24 [DanC] I got lost somewhere; In Vancouver, we had a list of the issues that were critical path for last call for a "backward looking" last call. Now versioning seems to be in there. I guess I'll have to pay more attention. 21:29:26 [Ian_] DO: I believe more material needs to be in the arch doc (in particular good practice notes); the arch doc will go through Rec track. I think that things that don't go through the Rec track will not be taken as seriously, not get as much review, etc. 21:29:34 [Stuart] q+ 21:29:37 [Ian_] ack DaveO 21:30:37 [Stuart] ack Stuart 21:30:45 [Ian_] SW: If the TAG agrees that we consider versioning that important, we can put a separate doc through the Rec track. 21:31:58 [timbl] q+ to ask about timing 21:32:40 [Ian_] DO: I think the middle ground for this text is closer to the middle. 21:32:46 [Ian_] q+ 21:32:52 [TBray] For the record: IMHO Ian's text is better leaving this uncovered, but Ian is coming close to the 80/20 point and I don't want to see it get much longer than that 21:33:12 [Ian_] DC: So there's no principles in here about versioning. 21:33:56 [Stuart] ack DanC 21:33:58 [Ian_] TBL: Perhaps we need to get into sync on the timing of this. 21:34:05 [TBray] I think procedurally the right thing to do is let Stuart and Norm/Dave saw off what they can by tomorrow. 21:34:17 [TBray] er s/Stuart/Ian/ 21:34:17 [Ian_] TBL: My assumption is that we will dot I's and cross T's if we are to be on last call track soon. 21:34:23 [Stuart] s/Stuart/Ian/? 21:35:06 [Ian_] TBL: We are going to find small things we want to clean up in the existing text. 21:35:16 [DanC] is that a question from the chair? NO! we are *not* anywhere near "last call sign off". I think 2/3rds of the current draft isn't endorsed by various tag memebers. 21:35:21 [Ian_] TBL: The versioning text is interesting, but i need to look more closely at the text. 21:36:16 [Ian_] TBL: In any case, we need a disclaimer that we are not done by virtue of going to last call. 21:36:27 [Ian_] TBL: We will need a place to put ongoing ideas for the next draft. 
21:36:37 [Norm] As I said before, I will be sorely disappointed if we don't say something about this topic in V1.0 of the webarch document. 21:36:55 [DanC] I hear you norm, but I'm not clear why. 21:37:33 [Ian_] q+ 21:37:46 [Ian_] ack timbl 21:37:46 [Zakim] timbl, you wanted to ask about timing 21:38:14 [Ian_] TBray: I hear some consensus to hand this off to DO/NW/IJ to come up with something short enough and includes enough material. 21:38:47 [Ian_] ADJOURNED 21:38:54 [DaveO] and I'll want to have an ad-hoc group on abstract component refs 21:38:55 [Ian_] RRSAgent, stop
http://www.w3.org/2003/11/10-tagmem-irc.html
CC-MAIN-2015-40
en
refinedweb
Changes for version 1.06

- Change namespace from CGI::Application::Demo to CGI::Application::Demo::Basic, to force files to be installed separately from other modules called CGI::Application::Demo::(Ajax, Dispatch).
- Re-write and/or re-design the docs.
- Document much more clearly what needs to be patched by the end user.
- Move CSS and HTML templates into htdocs/assets/..., and simplify their names.
- Move the CGI scripts into cgi-bin/ca.demo.basic, and simplify their names.
- Use DBIx::Admin::CreateTable rather than copying some of its code.
- Remove POD heads 'Required Modules' and 'Changes'.
- Replace personal doc root with /var/www.
- Use namespace::autoclean with Moose.
- Change default database driver to SQLite.
- Change path to Perl to /usr/bin/perl. Thanx to Larry for bug report #45751.
- Change path to config files and templates to /var/www/.
- Remove scripts/bu-demo from the distro.
- Eliminate references to $ENV.
- Write a new module to handle reading config files.
- Move the config files into lib/CGI/Application/Demo/Basic/Util, so they can be installed with the *.pm files, and so Config::Tiny can find them easily. Change config file handling to match.

Modules

- CGI::Application::Demo::Basic - A vehicle to showcase CGI::Application

Provides

- CGI::Application::Demo::Basic::Base in lib/CGI/Application/Demo/Basic/Base.pm
- CGI::Application::Demo::Basic::Faculty in lib/CGI/Application/Demo/Basic/Faculty.pm
- CGI::Application::Demo::Basic::Five in lib/CGI/Application/Demo/Basic/Five.pm
- CGI::Application::Demo::Basic::Four in lib/CGI/Application/Demo/Basic/Four.pm
- CGI::Application::Demo::Basic::One in lib/CGI/Application/Demo/Basic/One.pm
- CGI::Application::Demo::Basic::Three in lib/CGI/Application/Demo/Basic/Three.pm
- CGI::Application::Demo::Basic::Two in lib/CGI/Application/Demo/Basic/Two.pm
- CGI::Application::Demo::Basic::Util::Config in lib/CGI/Application/Demo/Basic/Util/Config.pm
- CGI::Application::Demo::Basic::Util::Create in lib/CGI/Application/Demo/Basic/Util/Create.pm
- CGI::Application::Demo::Basic::Util::LogDispatchDBI in lib/CGI/Application/Demo/Basic/Util/LogDispatchDBI.pm
https://metacpan.org/release/RSAVAGE/CGI-Application-Demo-Basic-1.06
CC-MAIN-2015-40
en
refinedweb
Beta 1 - ToolBar with status at bottom of Window invisible until resized? Why?

I have created a Window whose Widget is a NorthSouthContainer. The NorthWidget is a ToolBar with two buttons added to it. The SouthWidget is a ToolBar with a Status added to it. However, when I show() the Window, the north ToolBar is visible, but the south ToolBar with the Status is not, unless I manually resize the window by dragging an edge or corner. Any idea why? Here's the code to reproduce the issue:

Code:
public class WindowStatusBarIssue implements EntryPoint {
    public void onModuleLoad() {
        Window w = new Window();
        NorthSouthContainer container = new NorthSouthContainer();

        ToolBar toolBar = new ToolBar();
        toolBar.add(new TextButton("Button-1"));
        toolBar.add(new TextButton("Button-2"));

        ToolBar statusBar = new ToolBar();
        Status status = new Status();
        status.setText("Ready...");
        statusBar.add(status);

        VerticalLayoutContainer layoutContainer = new VerticalLayoutContainer();
        layoutContainer.add(new Label("some info"));
        layoutContainer.add(new Label("more stuff"));

        container.setWidget(layoutContainer);
        container.setNorthWidget(toolBar);
        container.setSouthWidget(statusBar);

        w.add(container);
        w.setWidth(400);
        w.setHeight(250);

        layoutContainer.forceLayout(); // can you tell
        container.forceLayout();       // that I was starting
        w.forceLayout();               // to get desperate here?

        w.show();
    }
}

Have you tried setting the layoutData of the toolbars to divide the height between the two toolbars?

I haven't tried that. Basically my Window will show (once I've done a manual resize) the ToolBar with Buttons on top, and the ToolBar with Status on bottom, with a relatively large open area in the middle. I was hoping not to have to explicitly size them - I would guess that it shouldn't be necessary because, once a resize is done, it seems to know what to do. However, I'm actually a little unclear on what you mean: have you tried setting the layoutData of the toolbars to divide the height between the two toolbars?

For your purpose, have you tried using a VerticalLayoutContainer? It seems your spacing issue is because your NorthSouthContainer is resizing to the parent size. If I want to guarantee 50/50 split sizing for 2 widgets, I use:

Code:
mainVert.add(graphPanel, new VerticalLayoutData(1, 0.5));
mainVert.add(actionPanel, new VerticalLayoutData(1, 0.5));

Ah, but I don't want a 50/50 split, I'm looking specifically for basically what amounts to a BorderLayout with only North, South, and Center components. The baffling thing is that, once I manually resize (and it can be literally resizing it only a single pixel larger by dragging the corner with the mouse), it looks EXACTLY how I want it, but before I do that, the South part is invisible.

Hi,

Have you tried calling forceLayout()?

Cheers
Rob

Yep - 3 of the last 4 lines in my code above are calls to forceLayout on various components. No luck with that.

Hi,

I have also found that you sometimes need to perform an initial size/re-size on widgets added to a layout container. For example:

Code:
public abstract class AbstractPagingView<C extends UiHandlers> extends ViewWithUiHandlers<C> {

    public static final String CONTEXT_AREA_WIDTH = "100%";
    public static final String CONTEXT_AREA_HEIGHT = "100%";
    public static final String TOOLBAR_HEIGHT = "26px";

    protected VerticalLayoutContainer panel;
    protected final ToolBar toolBar;
    protected final Grid<?> grid;

    @Inject
    public AbstractPagingView(ToolBar toolBar, final Grid<?> grid) {
        super();
        this.toolBar = toolBar;
        this.grid = grid;
        this.panel = new VerticalLayoutContainer();
        this.panel.setSize(CONTEXT_AREA_WIDTH, CONTEXT_AREA_HEIGHT);
        this.toolBar.setSize(CONTEXT_AREA_WIDTH, TOOLBAR_HEIGHT);
        this.grid.setSize(CONTEXT_AREA_WIDTH, TOOLBAR_HEIGHT);
        this.panel.add(this.toolBar, new VerticalLayoutData(1, -1));
        this.panel.add(this.grid, new VerticalLayoutData(1, 1));
        Scheduler.get().scheduleDeferred(new Scheduler.ScheduledCommand() {
            public void execute() {
                resize();
            }
        });
        bindCustomUiHandlers();
    }

    public void resize() {
        int width = Window.getClientWidth();
        int height = Window.getClientHeight();
        Log.debug("resize() - width: " + width + " height: " + height);
        panel.setSize(width + "px", height + "px");
        panel.onResize();
    }
}

Cheers
Rob

Hi,

The sample Ext GWT demo:

You can browse the code:

Cheers
Rob

This is now fixed in SVN. The change will go out in the next release. As a workaround, you can use VerticalLayoutContainer rather than NorthSouthContainer. Use VerticalLayoutData values of (1, -1) for the top and bottom components, and a value of (1, 1) for the center widget.
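Pulling the workaround in the last post together with the original code, a hedged sketch (it reuses the variable names from the first post and was not itself posted in the thread):

Code:
// Replace the NorthSouthContainer with a VerticalLayoutContainer
VerticalLayoutContainer container = new VerticalLayoutContainer();
container.add(toolBar, new VerticalLayoutData(1, -1));         // top: natural height
container.add(layoutContainer, new VerticalLayoutData(1, 1));  // center: take the rest
container.add(statusBar, new VerticalLayoutData(1, -1));       // bottom: natural height
w.add(container);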
https://www.sencha.com/forum/showthread.php?173394-Beta-1-ToolBar-with-status-at-bottom-of-Window-invisible-until-resized-Why
CC-MAIN-2015-40
en
refinedweb
Haskell For Kids: Week 1!

And We’re Off! Thanks again to everyone that’s supported this project and stepped up to be a part of it. Today, I taught my first in-person class on Haskell, and it was a blast! This is my first weekly summary post, containing what we’re doing this week.

First: Introductions

Since there are a number of kids following along with this, let’s all get started with some introductions!

- Me: My name is Chris Smith, and I’m teaching the in-person programming class at Little School on Vermijo that got all of this started. I’m a huge fan of Haskell, and am really excited to be able to share that with new people!
- Sue: Sue Spengler is the “founder, lead teacher, principal, superintendent, custodian, secretary, and lunch lady” for the Little School on Vermijo. The school is her project, and she’s doing some pretty amazing things. I had to poke around a bit for a photo, so I hope she likes this one!
- My local students: The kids in my class today were Grant, Sophia, Marcello, and Evie (I hope I spelled that right!) I’ll ask them to introduce themselves in comments on this post, so look for them there!
- Everyone else: Any other kids who are taking the class, please use the comments to introduce yourselves as well! You can say hello, and if you like, you can even link to a video or picture.

I hope everyone takes the time to leave comments and say hello to each other. Learning things is a lot more fun when you talk to other people.

The Plan

We talked about where we’re going, including:

- Write computer programs to draw pictures.
- Change our computer programs so the pictures move!
- Build a game of your own choosing.

This will take the school year! That’s because this class isn’t just about memorizing something about a particular computer program: it’s about being creative, trying things, and doing something you’re proud of. So there will be a lot of free time to play around and try out different ideas in your programs. We are learning the Haskell programming language, but in the end, the class is more about being in control of your computer and designing and building something really cool from scratch, not just remembering some stuff about Haskell.

Organization of Computers and Programming

The first thing we talked about was what a computer program is, and how some of the ideas fit together. Here’s the whiteboard when we were done! Some of the ideas we talked about:

- How a computer works. The main part of a computer is built from a device for following instructions (the “CPU”), and a device for remembering information (“memory”).
- Machine language. The computer doesn’t speak English, of course! It follows instructions in a language called “machine language”. This language is easy for the computer to understand, but it’s very, very difficult to write programs in.
- Compilers. Instead of writing our programs in machine language, we write them in other languages, and then get the computer to translate them for us! The program that does that is called a compiler.
- Some programming languages. We have a choice of what programming language to use when writing computer programs! We brainstormed some languages kids in the class had heard about: Java, C, C++, Objective C, JavaScript, and Smalltalk. (Yes, Marcello had heard of Smalltalk! I’m very impressed.) The language we’re learning in this class is called Haskell.
- Libraries. Libraries are pieces of programs that other people have written for us, so we don’t have to start from scratch.
We spent some time imagining all of the steps involved in what we might consider very easy things to do with a computer. For example, think of all the little steps in drawing a window… how many circles, rectangles, lines, letters, and so on can you find in a window on your computer? Libraries let someone describe things once instead of making you repeat all that each time. We talked about how we’ll be using:

- A programming language called Haskell.
- A library called Gloss.

Playing Around

At this point, we all used a web site to write some simple computer programs using Haskell and Gloss. The web site is:

We started out with some really simple programs, like these:

Draw a circle!

import Graphics.Gloss
picture = circle 80

Draw a rectangle!

import Graphics.Gloss
picture = rectangleSolid 200 300

Draw some words!

import Graphics.Gloss
picture = text "Hello"

All of these programs have some things in common:

- The first line of each one is “import Graphics.Gloss”. This tells the compiler that you want to use the Gloss library to make pictures. You only need to say it once, and it has to be at the very beginning of your program.
- They all then go on to say “picture = …”. That’s because the way our programs work is to make a picture, and call it “picture”. The web site we’re using then takes that picture, whatever we define it to be, and draws it for us. We talked about how in the future, we might define other things with other names, but for now, we’re okay with just telling the compiler what “picture” is.
- After the “=”, they describe the picture that we want to draw. There are several types of pictures, and we’ve just seen three of them! All of the kinds of pictures you can create are part of the Gloss library.
- Except for the last one, they all use some distances. For example, the 80 in the first example is the radius of the circle (the distance from the middle to the outside). You can make that number larger to draw a bigger circle, and smaller to draw a smaller circle. You can do the same with the width and height of the rectangle.

We did have problems with some people using the web site. If you’re having trouble, you might need to make sure you have a new version of your web browser. Also, the web site doesn’t work with Internet Explorer… so try with Chrome, Firefox, Opera, or Safari. Don’t worry too much about the web browser problems: soon enough, you’ll install the Haskell compiler on your own computer, and you won’t need the web site to run your programs any more! We’re just using the web site to get started quickly.

Drawing more than one thing!

By this time, several kids were asking if they can draw more than one shape at a time. Yes, you can! To draw more than one thing, you can use “pictures” (notice the s at the end). For example,

import Graphics.Gloss
picture = pictures [
    circle 80,
    rectangleWire 200 100 ]

Notice we do this:

- The word “pictures”
- An opening square bracket.
- A list of the pictures we want to draw, separated by commas.
- A closing square bracket.

We talked about how it helps to make new lines in your program sometimes. The only thing you need to be careful of is that when you make a new line, you have to put a few spaces at the beginning to indent it. See how the second and third lines of the part where we define “picture” are indented a little?

Changing your pictures

The Gloss library gives you some ways you can change your pictures, too! You can change the colors with “color”.
import Graphics.Gloss
picture = color red (circleSolid 80)

Notice how you say “color”, then the name of the color you want, and then the picture to draw, in parentheses. The parentheses are important! They mean the same thing they do in math: treat that whole part as a single “thing” (in this case, a picture). We talked about what colors Gloss knows about. Here’s the list: black, white, red, green, blue, yellow, magenta, cyan, rose, orange, chartreuse, aquamarine, azure, and violet. We all laughed because Sue picked a weird color name off the top of her head, and asked “Does it know chartreuse?” Yes, it does. Lucky guess!

You can also move things around on the screen.

import Graphics.Gloss
picture = translate 100 50 (rectangleSolid 50 50)

When you want to move things around, Gloss calls that “translate”. Yes, it’s a weird name, but “translate” just means move something left, right, up, or down. The first number after translate is how far to move it to the side. Positive numbers mean right, negative numbers mean left, just like a number line! The second number is how far to move it up or down. Positive numbers mean up, and negative numbers mean down.

Keep in mind that in Haskell, you have to write negative numbers in parentheses! If you say “translate -100 …”, then Haskell thinks you want to subtract one hundred from “translate”. It doesn’t know how to subtract a number from a verb (I don’t either) so it gives up! You have to write “translate (-100) …” instead, with the parentheses.

You can also turn things. The verb for that is “rotate”. Let’s draw a diamond.

import Graphics.Gloss
picture = rotate 45 (rectangleWire 100 100)

You rotate things in degrees. Remember that 360 degrees means turn it all the way around to where it started, so it won’t do anything! 45 degrees is half of a right angle. Do you see why that gives you a diamond?

The last thing you can do is stretch the picture. The verb for that is “scale”.

import Graphics.Gloss
picture = scale 2 1 (circle 100)

That will draw an ellipse, which is like a circle but it’s wider than it is tall! Don’t worry if this all doesn’t make sense yet! We’ll be spending a lot of time playing around with how to put these things together! Here’s the whiteboard after we finished all of this…

Time for Experimentation

We spent a lot of time with everyone making their own shapes and pictures of whatever they want. The best way to get more comfortable with all of this is to play around. Make lots of little changes and see what happens! Try to guess what’s going to happen, then try it and see if you’re right or wrong. Here are some pictures of the kids with their projects:

Sophia and Evie showing off two circles side by side. These eventually became the eyes in a face!

That’s Grant with his diamond. It looked even better after he stretched it a little bit up and down.

This was Marcello’s graphics… centering the word in the circle was a long task! If you try it, you’ll notice text doesn’t normally get drawn right in the middle like other pictures do, so Marcello put in a lot of trial and error time with “translate” to put the word in the circle.

That’s Sophia being very excited at getting her two eyes in the right places!

Your Assignment

Your mission, if you choose to accept it… is to plan and create a drawing of something you’re interested in! Maybe it’s a fish, a garden, a space station, or a dragon… just make sure you can draw it by using rectangles and circles of different colors, and moving, turning, or stretching them.
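For a starting point, here is one more sketch that combines pictures, color, translate, rotate, and scale in a single drawing (a combined example in the same style, not one of the programs from class):

import Graphics.Gloss
picture = pictures [
    color yellow (circleSolid 100),                    -- the face
    translate (-35) 30 (color blue (circleSolid 12)),  -- left eye
    translate 35 30 (color blue (circleSolid 12)),     -- right eye
    translate 0 (-30) (scale 2 1 (circleSolid 10)),    -- a wide mouth
    rotate 45 (rectangleWire 250 250) ]                -- a diamond frame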
Here at the Little School, we’ll be spending our remaining class this week and our classes next week working on this. Spend some time and come up with something you’re proud of!
https://cdsmith.wordpress.com/2011/08/16/haskell-for-kids-week-1/?like=1&source=post_flair&_wpnonce=e9ecfe8d49
CC-MAIN-2015-40
en
refinedweb
As you learned at the end of the last chapter, one of the great things about ASP.NET is that we can pick and choose which of the various .NET languages we like. In this chapter, we’ll look at some key programming principles using our two chosen languages, VB.NET and C#. We’ll start off with a run-down of some basic programming concepts as they relate to ASP.NET using both languages. We’ll introduce programming fundamentals such as control and page events, variables, arrays, functions, operators, conditionals, and loops. Next, we’ll dive into namespaces and address the topic of classes – how they’re exposed through namespaces, and which you’ll use most often. The final sections of the chapter cover some of the ideas underlying modern, effective ASP.NET design, starting with that of code-behind and the value it provides by helping us separate code from presentation. We finish with an examination of how object-oriented programming techniques impact the ASP.NET developer. Note that you can download these chapters in PDF format if you’d rather print them out and read them offline.

Programming Basics

One of the building blocks of an ASP.NET page is the application logic: the actual programming code that allows the page to function. To get anywhere with this, you need to grasp the concept of events. All ASP.NET pages will contain controls, such as text boxes, check boxes, lists, and more, each of these controls allowing the user to interact with it in some way. Check boxes can be checked, lists can be scrolled, items on them selected, and so on. Now, whenever one of these actions is performed, the control will raise an event. It is by handling these events with code that we get our ASP.NET pages to do what we want.

For instance, say a user clicks a button on an ASP.NET page. That button (or, strictly, the ASP.NET Button control) raises an event (in this case it will be the Click event). When the ASP.NET runtime registers this event, it calls any code we have written to handle it. We would use this code to perform whatever action that button was supposed to perform, for instance, to save form data to a file, or retrieve requested information from a database. Events really are key to ASP.NET programming, which is why we’ll start by taking a closer look at them. Then, there’s the messy business of writing the actual handler code, which means we need to check out some common programming techniques in the next sections. Specifically, we’re going to cover the following areas:

- Control events and handlers
- Page events
- Variables and variable declaration
- Arrays
- Functions
- Operators
- Conditionals
- Loops

It wouldn’t be practical, or even necessary, to cover all aspects of VB.NET and C# in this book, so we’re going to cover enough to get you started, completing the projects and samples using both languages. Moreover, I’d say that the programming concepts you’ll learn here will be more than adequate to complete the great majority of day-to-day Web development tasks using ASP.NET.

Control Events and Subroutines

As I just mentioned, an event (sometimes more than one) is raised, and handler code is called, in response to a specific action on a particular control. For instance, the code below creates a server-side button and label. Note the use of the OnClick attribute on the Button control:

Example 3.1.
ClickEvent.aspx (excerpt)

<form runat="server">
  <asp:Button id="btn1" runat="server" OnClick="btn1_Click" Text="Click Me" />
  <asp:Label id="lblMessage" runat="server" />
</form>

When the button is clicked, it raises the Click event, and ASP.NET checks the OnClick attribute to find the name of the handler subroutine for that event. Here, we tell ASP.NET to call the btn1_Click() routine. So now we have to write this subroutine, which we would normally place within a code declaration block inside the <head> tag, like this:

Example 3.2. ClickEvent.aspx (excerpt)

<head>
  <script runat="server" language="VB">
    Public Sub btn1_Click(s As Object, e As EventArgs)
      lblMessage.Text = "Hello World"
    End Sub
  </script>
</head>

Example 3.3. ClickEvent.aspx (excerpt)

<head>
  <script runat="server" language="C#">
    public void btn1_Click(Object s, EventArgs e)
    {
      lblMessage.Text = "Hello World";
    }
  </script>
</head>

This code simply sets a message to display on the label that we also declared with the button. So, when this page is run and users click the button, they’ll see the message “Hello World” appear next to it.

I hope you can now start to come to grips with the idea of control events and how they’re used to call particular subroutines. In fact, there are many events that your controls can use, some of which are only found on certain controls – not others. Here’s the complete set of attributes the Button control supports for handling events:

OnClick
As we’ve seen, the subroutine indicated by this attribute is called for the Click event, which occurs when the user clicks the button.

OnCommand
As with OnClick, the subroutine indicated by this attribute is called when the button is clicked.

OnLoad
The subroutine indicated by this attribute is called when the button is loaded for the first time – generally when the page first loads.

OnInit
When the button is initialized, any subroutine given in this attribute will be called.

OnPreRender
We can run code just before the button is rendered, using this attribute.

OnUnload
This subroutine will run when the control is unloaded from memory – basically, when the user goes to a different page or closes the browser entirely.

OnDisposed
This occurs when the button is released from memory.

OnDataBinding
This fires when the button is bound to a data source.

Don’t worry too much about the intricacies of all these events and when they happen; I just want you to understand that a single control can produce a number of different events. In the case of the Button control, you’ll almost always be interested in the Click event, as the others are only useful in rather obscure circumstances.

When a control raises an event, the specified subroutine (if there is one) is executed. Let’s now take a look at the structure of a typical subroutine that interacts with a Web control:

Public Sub mySubName(s As Object, e As EventArgs)
  ' Write your code here
End Sub

public void mySubName(Object s, EventArgs e)
{
  // Write your code here
}

Let’s break down all the components that make up a typical subroutine:

Public, public
Defines the scope of the subroutine. There are a few different options to choose from, the most frequently used being Public (for a global subroutine that can be used anywhere within the entire page) and Private (for subroutines that are available for the specific class only). If you don’t yet understand the difference, your best bet is to stick with Public for now.

Sub, void
Defines the chunk of code as a subroutine. A subroutine is a named block of code that doesn’t return a result; thus, in C#, we use the void keyword, which means exactly that.
We don't need this in VB.NET, because the Sub keyword already implies that no value is returned.

mySubName(...)
This part gives the name we've chosen for the subroutine.

s As Object, Object s
When we write a subroutine that will function as an event handler, it must accept two parameters. The first is the control that generated the event, which is an Object. Here, we are putting that Object in a variable named s (more on variables later in this chapter). We can then access features and settings of the specific control from our subroutine using the variable.

e As EventArgs, EventArgs e
The second parameter contains certain information specific to the event that was raised. Note that, in many cases, you won't need to use either of these two parameters, so you don't need to worry about them too much at this stage.

As this chapter progresses, you'll see how subroutines associated with particular events by the appropriate attributes on controls can revolutionize the way your user interacts with your application.

Page Events

Until now, we've considered only events that are raised by controls. However, there is another type of event - the page event. The idea is the same as for control events, except that here it is the page itself, rather than an individual control, that raises the event as it moves through its life cycle (for example, when the page is initialized and when it is loaded).

Objects

Properties

Properties hold specific information relevant to that class of object. You can think of properties as characteristics of the objects that they represent. Our Dog class might have properties such as Color, Height, and Length.

Methods

Classes

Scope

Understanding Inheritance

Separating Code From Content

In this model, sample.aspx contains the layout, presentation, and static content, while the application logic lives in a code-behind file such as sample.vb or sample.cs.

Summary

Look out for more chapters from Build Your Own ASP.NET Website Using C# And VB.NET in coming weeks. If you can't wait, download all the sample chapters, or order your very own copy now!
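For instance, the page's Load event can be handled to run code as soon as the page starts up. A minimal sketch in both languages: the Page_Load name follows ASP.NET's automatic event wire-up convention, and lblMessage reuses the label from the earlier example.

Sub Page_Load(s As Object, e As EventArgs)
  lblMessage.Text = "The page has just loaded!"
End Sub

void Page_Load(Object s, EventArgs e)
{
  lblMessage.Text = "The page has just loaded!";
}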
http://www.sitepoint.com/vb-dot-net-c-sharp-programming/
CC-MAIN-2015-40
en
refinedweb
java.lang.Object
  org.apache.openjpa.enhance.ManagedClassSubclasser

public class ManagedClassSubclasser

Redefines the method bodies of existing unenhanced classes to make them notify state managers of mutations.

public ManagedClassSubclasser()

public static List<Class<?>> prepareUnenhancedClasses(OpenJPAConfiguration conf, Collection<? extends Class<?>> classes, ClassLoader envLoader)

For each of the given classes, creates and registers a new subclass that implements PersistenceCapable, and prepares OpenJPA to handle new instances of the unenhanced type. If this is invoked in a Java 6 environment, this method will redefine the methods for each class in the argument list such that field accesses are intercepted in-line. If invoked in a Java 5 environment, this redefinition is not possible; in these contexts, when using field access, OpenJPA will need to do state comparisons to detect any change to any instance at any time, and when using property access, OpenJPA will need to do state comparisons to detect changes to newly inserted instances after a flush has been called.

Returns: null if classes is null.

Throws: UserException if conf requires build-time enhancement and classes includes unenhanced types.

public static void debugBytecodes(serp.bytecode.BCClass bc) throws IOException

Throws: IOException
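A rough usage sketch based only on the signature documented above; MyEntity stands in for a hypothetical unenhanced persistent class, and the configuration object is assumed to come from a bootstrapped OpenJPA runtime:

import java.util.Arrays;
import java.util.List;
import org.apache.openjpa.conf.OpenJPAConfiguration;
import org.apache.openjpa.conf.OpenJPAConfigurationImpl;
import org.apache.openjpa.enhance.ManagedClassSubclasser;

public class SubclassDemo {
    public static void main(String[] args) {
        OpenJPAConfiguration conf = new OpenJPAConfigurationImpl();
        // Create and register PersistenceCapable subclasses for the
        // unenhanced types so OpenJPA can manage their instances
        List<Class<?>> registered = ManagedClassSubclasser.prepareUnenhancedClasses(
            conf,
            Arrays.asList(MyEntity.class),
            Thread.currentThread().getContextClassLoader());
        System.out.println("Registered subclasses: " + registered);
    }
}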
http://openjpa.apache.org/builds/2.1.1/apidocs/org/apache/openjpa/enhance/ManagedClassSubclasser.html
CC-MAIN-2015-40
en
refinedweb
I have a GNU/Linux box (a Linode VPS running Debian) with PHP, Apache, MySQL and Varnish (and an app/site that uses those). Is there a tool that will save or monitor the load times of web server responses?

Check out Apache's LogFormat directive. It allows you to log the time taken to serve the request (%D and %T). This can be used for monitoring your server's response time. It will, for example, tell you if your server responds slower after you have made a change. However, I am not aware of any tool which uses that information to create a report.

You'll want to be a little clearer about what you mean by "the time of a response". If you're interested in Apache's timings, you can use the LogFormat directive to get "the time taken to serve the request" in either seconds (%T) or microseconds (%D). Docs are here.

Nagios or Icinga can do this, as can a number of other tools (Munin comes to mind).

We use Zabbix in our shop. You can set it up to monitor a specific page, and it will give you ping time, download speed, and response time. It is open source and, although complex, allows you to do fairly complex stuff, including SMS alerts, built-in graph creation, and tripwire-style security checks (i.e. notify you if the checksum of /etc/passwd changes).

You can use Cacti too. There are a lot of templates and, if I remember correctly, there are several to test/monitor/graph the load speed of a URL. Regards!

You can use New Relic to monitor processing time; with Varnish you will need this:

/etc/varnish/newrelic.h:

#include <sys/time.h>

struct timeval detail_time;
gettimeofday(&detail_time, NULL);
char start[20];
sprintf(start, "t=%lu%06lu", detail_time.tv_sec, detail_time.tv_usec);
VRT_SetHdr(sp, HDR_REQ, "\020X-Request-Start:", start, vrt_magic_string_end);

vcl_recv:

C{
  #include </etc/varnish/newrelic.h>
}C

It really depends on what you want to achieve. Internal monitoring can give you a rough idea of the overall performance of your machine and software. If you are asking about remote server monitoring options, then you have a lot of options. External monitoring really has its advantages, and you can get response time per city (depending on the service you are using). There are a lot to choose from, both paid and free. All of them would give you a pretty good idea about the response times. For extra resolution, it is almost certain you'll need to go for a paid account, but you can always start with the free options. I personally use WebsitePulse, but have also tried other services such as Pingdom and Site24x7. What I like about WSP is the number of remote locations I can test from. Another cool thing is their somewhat limited, but free, server monitoring for life service. If you like, I can run some tests for you and let you know how your site performs, from a couple of locations I'm currently paying for.

These tools will log and monitor your web server: Nagios or Icinga.

There's also Mod Firstbyte, which will measure the time that your server took to generate the page (not how long it took to generate and download to the browser, which %D and %T do).

A couple of other services worth checking out are GTmetrix and Stella. They both monitor pages, graph performance, and track historical metrics.
The nice thing about these services is that they don't just track page load times, they also track the load time of all the other assets on the page (images, CSS, JS, etc.). I actually was coming to Server Fault to ask if anyone knew of an open source equivalent for tracking load times of pages and associated assets; then I ran across this thread. Still, if anyone knows of something similar that's open source, please post a comment on this answer. Thanks!

Smokeping might do what you're looking for. It's obviously measuring the latency between your Smokeping box and the webserver too, and maybe not so good if you're looking at the response times of your complex CGIs (Apache logs are better for that), but it's simple, and it makes fun charts.
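To make the LogFormat suggestion from the first answer concrete, here is a minimal sketch for an Apache configuration; the "timed" nickname and the log path are hypothetical, and the format string is just the common combined format with %D appended:

LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog /var/log/apache2/access_timed.log timed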
http://serverfault.com/questions/393501/is-there-tool-that-will-monitor-or-log-speed-of-web-server-responses/399469
CC-MAIN-2015-40
en
refinedweb
Robust software systems should be built up from a web of interrelated objects whose responsibilities are tight and neatly cohesive, boiled down to just performing a few narrow and well-defined tasks. However, it's admittedly pretty difficult to design such systems, at least on the first take. Most of the time we tend to group the tasks in question by following an ad-hoc semantic sense, in consonance with the way our own minds work.

A good example of this is the Singleton. We all know that Singletons have been condemned for years because they suffer all sorts of nasty implementation issues, with the classic mutable global state by far the most infamous one. They can also be blamed for doing two semantically-unrelated things at the same time: aside from playing their main role, whatever that might be, they're responsible for controlling how the originating classes should be instantiated as well. Singletons are entities unavoidably cursed with the obligation of performing at least two different tasks which aren't even remotely related to each other.

It's easy to stay away from Singletons without feeling a pinch of guilt. But how can one be pragmatic in more generic, day-to-day situations, and design classes whose concerns are decently cohesive? Even when there's no straight answer to the question, it's possible to adhere in general to the rules of the Single Responsibility Principle, whose formal definition states the following:

There should never be more than one reason for a class to change.

What the principle attempts to promote is that classes must always be designed to expose only one area of concern, and the set of operations they define and implement must be aimed at fulfilling that concern in particular and nothing else. In other words, a class should only change in response to the execution of those semantically-related operations. If it ever needs to change in response to another, totally unrelated operation, then it's clear the class has more than one responsibility. Let's move along and code a few digestible examples so that we can see how to take advantage of the principle's benefits.

A Typical Violation of the Single Responsibility Principle

For obvious reasons, there are plenty of situations where a seemingly-cohesive set of operations assigned to a class actually scopes different unrelated responsibilities, hence violating the principle. One that I find particularly instructive is the Active Record pattern, because of the momentum it has gained in the last few years. Let's pretend we're blind worshipers of the pattern and of the so-called database model, and want to appeal to its niceties for creating a basic entity class which should model the data and behavior of generic users.
Such a class, along with the contract that it implements, could be defined as follows:

<?php

namespace Model;

interface UserInterface
{
    public function setId($id);
    public function getId();
    public function setName($name);
    public function getName();
    public function setEmail($email);
    public function getEmail();
    public function getGravatar();
    public function findById($id);
    public function insert();
    public function update();
    public function delete();
}

<?php

namespace Model;

use Library\Database\DatabaseAdapterInterface;

class User implements UserInterface
{
    private $id;
    private $name;
    private $email;
    private $db;
    private $table = "users";

    public function __construct(DatabaseAdapterInterface $db)
    {
        $this->db = $db;
    }

    public function setId($id)
    {
        $this->id = $id;
        return $this;
    }

    public function getId()
    {
        return $this->id;
    }

    public function setName($name)
    {
        $this->name = $name;
        return $this;
    }

    public function getName()
    {
        if ($this->name === null) {
            throw new UnexpectedValueException(
                "The user name has not been set.");
        }
        return $this->name;
    }

    public function setEmail($email)
    {
        if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
            throw new InvalidArgumentException(
                "The user email is invalid.");
        }
        $this->email = $email;
        return $this;
    }

    public function getEmail()
    {
        if ($this->email === null) {
            throw new UnexpectedValueException(
                "The user email has not been set.");
        }
        return $this->email;
    }

    public function getGravatar($size = 70, $default = "monsterid")
    {
        return "http://www.gravatar.com/avatar/"
            . md5(strtolower($this->getEmail()))
            . "?s=" . (integer) $size
            . "&d=" . urlencode($default)
            . "&r=G";
    }

    public function findById($id)
    {
        $this->db->select($this->table, ["id" => $id]);
        if (!$row = $this->db->fetch()) {
            return null;
        }
        $user = new User($this->db);
        $user->setId($row["id"])
             ->setName($row["name"])
             ->setEmail($row["email"]);
        return $user;
    }

    public function insert()
    {
        $this->db->insert($this->table, [
            "name"  => $this->getName(),
            "email" => $this->getEmail()
        ]);
    }

    public function update()
    {
        $this->db->update($this->table, [
            "name"  => $this->getName(),
            "email" => $this->getEmail()],
            "id = {$this->id}");
    }

    public function delete()
    {
        $this->db->delete($this->table, "id = {$this->id}");
    }
}

As one might expect from a typical implementation of Active Record, the User class is pretty much a messy structure which mingles chunks of business logic, such as setting/getting usernames and email addresses, and even generating some nifty Gravatars on the fly, with data access. Is this a violation of the Single Responsibility Principle? Well, unquestionably it is, as the class exposes to the outside world two different responsibilities which by no means have a true semantic relationship with each other. Even without going through the class's implementation, and just scanning its interface, it's clear to see that the CRUD methods should be placed in the data access layer, completely insulated from where the mutators/accessors live and breathe. In this case, the result of executing the findById() method, for instance, will change the state of the class, while any call to the setters impacts the CRUD operations as well. This implies there are two overlapped responsibilities coexisting here, which makes the class change in response to different requirements. Of course, if you're anything like me, you'll be wondering how to turn User into a Single Responsibility Principle-compliant structure without too much hassle during the refactoring process. The first modification that should be introduced is to keep all the domain logic within the class's boundaries while moving the logic that deals with data access away... yes, to the data access layer.
There are a few nifty ways to accomplish this, but considering that the responsibilities should be sprinkled across multiple layers, the use of a data mapper is an efficient approach that permits us to do this in a fairly painless fashion.

Putting Data Access Logic in a Data Mapper

The best way to keep the class's responsibilities (the domain-related ones, of course) isolated from the ones that deal with data access is via a basic data mapper. The one below does a decent job when it comes to dividing up the responsibilities in question:

<?php

namespace Mapper;

use Model\UserInterface;

interface UserMapperInterface
{
    public function findById($id);
    public function insert(UserInterface $user);
    public function update(UserInterface $user);
    public function delete($id);
}

<?php

namespace Mapper;

use Library\Database\DatabaseAdapterInterface,
    Model\UserInterface,
    Model\User;

class UserMapper implements UserMapperInterface
{
    private $db;
    private $table = "users";

    public function __construct(DatabaseAdapterInterface $db)
    {
        $this->db = $db;
    }

    public function findById($id)
    {
        $this->db->select($this->table, ["id" => $id]);
        if (!$row = $this->db->fetch()) {
            return null;
        }
        return $this->loadUser($row);
    }

    public function insert(UserInterface $user)
    {
        return $this->db->insert($this->table, [
            "name"  => $user->getName(),
            "email" => $user->getEmail()
        ]);
    }

    public function update(UserInterface $user)
    {
        return $this->db->update($this->table, [
            "name"  => $user->getName(),
            "email" => $user->getEmail()
        ], "id = {$user->getId()}");
    }

    public function delete($id)
    {
        if ($id instanceof UserInterface) {
            $id = $id->getId();
        }
        return $this->db->delete($this->table, "id = $id");
    }

    private function loadUser(array $row)
    {
        $user = new User($row["name"], $row["email"]);
        $user->setId($row["id"]);
        return $user;
    }
}

Looking at the mapper's contract, it's easy to see how nicely the CRUD operations that polluted the User class's ecosystem before have been placed inside a cohesive set, which is now part of the raw infrastructure instead of the domain layer. This single modification should let us refactor the domain class and turn it into a cleaner, more distilled structure that conforms to the principle:

<?php

namespace Model;

interface UserInterface
{
    public function setId($id);
    public function getId();
    public function setName($name);
    public function getName();
    public function setEmail($email);
    public function getEmail();
    public function getGravatar();
}

<?php

namespace Model;

class User implements UserInterface
{
    private $id;
    private $name;
    private $email;

    public function __construct($name, $email)
    {
        $this->setName($name);
        $this->setEmail($email);
    }

    public function setId($id)
    {
        $this->id = $id;
        return $this;
    }

    public function getId()
    {
        return $this->id;
    }

    public function setName($name)
    {
        $this->name = $name;
        return $this;
    }

    public function getName()
    {
        return $this->name;
    }

    public function setEmail($email)
    {
        if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
            throw new InvalidArgumentException(
                "The user email is invalid.");
        }
        $this->email = $email;
        return $this;
    }

    public function getEmail()
    {
        return $this->email;
    }

    public function getGravatar($size = 70, $default = "monsterid")
    {
        return "http://www.gravatar.com/avatar/"
            . md5(strtolower($this->email))
            . "?s=" . (integer) $size
            . "&d=" . urlencode($default)
            . "&r=G";
    }
}

It could be said that User now has a better-designed implementation, as the batch of operations it performs not only are pure domain logic, but are also semantically bound to each other. In Single Responsibility Principle parlance, the class has only one well-defined responsibility, which is exclusively and intimately related to handling user data. No more, no less. Of course, the example would look half-baked if I didn't show you how to get the mapper doing all the data access stuff while keeping the User class persistence agnostic:

<?php

$db = new PdoAdapter("mysql:dbname=test", "myusername", "mypassword");
$userMapper = new UserMapper($db);

// Display user data
$user = $userMapper->findById(1);
echo $user->getName() .
    ' ' . $user->getEmail() . '<img src="' . $user->getGravatar() . '">';

// Insert a new user
$user = new User("John Doe", "john@example.com");
$userMapper->insert($user);

// Update a user
$user = $userMapper->findById(2);
$user->setName("Jack");
$userMapper->update($user);

// Delete a user
$userMapper->delete(3);

While the example is unquestionably trivial, it does show pretty clearly how the fact of having delegated the responsibility for executing the CRUD operations to the data mapper permits us to deal with user objects whose sole area of concern is to handle exclusively domain logic. At this point, the objects' tasks are principle-compliant, nicely distilled, and narrowed to setting/retrieving user data and rendering the associated gravatars, instead of being focused additionally on persisting that data in the storage.

Closing Remarks

Perhaps it's just a biased opinion based on my own experience as a developer (so take it at face value), but I'd dare to say the Single Responsibility Principle's worst curse, and certainly the reason why it's so blatantly ignored in practice, is the pragmatism of reality. Obviously it's a lot easier "to get the job done" and struggle with tight deadlines by blindly assigning a bunch of roles to a class, without thinking about whether they're semantically related to each other. Even in enterprise environments, where the use of contracts for outlining explicitly the behavior of application components isn't just a luxury but a must, it's pretty difficult to figure out how to group together cohesively a set of operations. Even so, it doesn't hurt to take some time and design cleaner classes that don't unnecessarily mix up heaps of unrelated responsibilities. In that sense, the principle is just a guideline that will assist you in the process, but certainly a very valuable one.
http://www.sitepoint.com/the-single-responsibility-principle/
CC-MAIN-2015-40
en
refinedweb
Uncaught TypeError: Cannot call method 'on' of undefined

Hello, I'm using Ext Designer 1.2.2 with Ext JS 3. I successfully export the project, but when I try to run it I receive:

Uncaught TypeError: Cannot call method 'on' of undefined

The problem is in the following code:

store = Ext.StoreMgr.lookup(store);
store.on({
    scope: this,
    beforeload: this.beforeLoad,
    load: this.onLoad,
    exception: this.onLoadError
});

(A store registers itself with the manager when it is constructed with a storeId, via Ext.StoreMgr.register(this).)

In my case I have a PagingToolbar with a store set, and when that component needs it, the store is still not created. So, my question is: can you give me any advice on how to fix the problem, or just point me in the right direction on how stores work, i.e. how they are created, initialized, etc.?

However, I changed this line of code in Ext.StoreMgr.lookup:

return Ext.isObject(id) ? (id.events ? id : Ext.create(id, 'store')) : this.get(id);

to:

if (Ext.isObject(id)) {
    return (id.events ? id : Ext.create(id, 'store'));
} else {
    var store = null;
    if (store = this.get(id)) {
        return store;
    } else {
        return Ext.create(id, 'store');
    }
}

Hope that this helps someone.
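For anyone hitting the same error, a minimal sketch of the "create the store up front" approach in Ext JS 3 (the store and toolbar config values here are hypothetical):

// Instantiating a store with a storeId registers it with Ext.StoreMgr,
// so components created afterwards can look it up by that id.
var store = new Ext.data.JsonStore({
    storeId: 'myStore',
    url: 'data.php',
    root: 'rows',
    fields: ['id', 'name']
});

// The PagingToolbar is created only after the store exists, so
// Ext.StoreMgr.lookup('myStore') no longer returns undefined.
var toolbar = new Ext.PagingToolbar({
    store: 'myStore',
    pageSize: 25
});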
https://www.sencha.com/forum/showthread.php?229954-Uncaught-TypeError-Cannot-call-method-on-of-undefined&p=853243&mode=linear
CC-MAIN-2015-40
en
refinedweb
Python requires

Warning: This is an experimental feature subject to breaking changes in future releases.

Note: This syntax supersedes the legacy python_requires() syntax. The most important changes are:

- These new python_requires affect the consumers package_id. So different binaries can be managed, and CI systems can re-build affected packages according to package ID modes and versioning policies.
- The syntax defines a class attribute instead of a module function call, so recipes are cleaner and more aligned with other types of requirements.
- The new python_requires will play better with lockfiles and deterministic dependency graphs.
- They are able to extend base classes more naturally without conflicts of ConanFile classes.

Introduction

The python_requires feature is a very convenient way to share files and code between different recipes. A python requires is similar to any other recipe; it is the way it is required from the consumer that makes the difference.

A very simple recipe that we want to reuse could be:

from conans import ConanFile

myvar = 123

def myfunct():
    return 234

class Pkg(ConanFile):
    pass

And then we will make it available to other packages with conan export. Note that we are not calling conan create, because this recipe doesn't have binaries. It is just the python code that we want to reuse.

$ conan export . pyreq/0.1@user/channel

We can reuse the above recipe functionality by declaring the dependency in the python_requires attribute, and we can access its members using self.python_requires["<name>"].module:

from conans import ConanFile

class Pkg(ConanFile):
    python_requires = "pyreq/0.1@user/channel"

    def build(self):
        v = self.python_requires["pyreq"].module.myvar  # v will be 123
        f = self.python_requires["pyreq"].module.myfunct()  # f will be 234
        self.output.info("%s, %s" % (v, f))

$ conan create . pkg/0.1@user/channel
...
pkg/0.1@user/channel: 123, 234

It is also possible to require more than one python-require, and use the package name to address the functionality:

from conans import ConanFile

class Pkg(ConanFile):
    python_requires = "pyreq/0.1@user/channel", "other/1.2@user/channel"

    def build(self):
        v = self.python_requires["pyreq"].module.myvar  # v will be 123
        f = self.python_requires["other"].module.otherfunc("some-args")

Extending base classes

A common use case would be to declare a base class with methods we want to reuse in several recipes via inheritance. We'd write this base class in a python-requires package:

from conans import ConanFile

class MyBase(object):
    def source(self):
        self.output.info("My cool source!")

    def build(self):
        self.output.info("My cool build!")

    def package(self):
        self.output.info("My cool package!")

    def package_info(self):
        self.output.info("My cool package_info!")

class PyReq(ConanFile):
    name = "pyreq"
    version = "0.1"

And make it available for reuse with:

$ conan export . pyreq/0.1@user/channel

Note that there are two classes in the recipe file:

- MyBase is the one intended for inheritance and doesn't extend ConanFile.
- PyReq is the one that defines the current package being exported; it is the recipe for the reference pyreq/0.1@user/channel.

Once the package with the base class we want to reuse is available, we can use it in other recipes to inherit the functionality from that base class. We'd need to declare the python_requires as we did before, and we'd need to tell Conan the base classes to use in the attribute python_requires_extend.
Here our recipe will inherit from the class MyBase:

from conans import ConanFile

class Pkg(ConanFile):
    python_requires = "pyreq/0.1@user/channel"
    python_requires_extend = "pyreq.MyBase"

The resulting inheritance is equivalent to declaring our Pkg class as class Pkg(pyreq.MyBase, ConanFile). So, creating the package, we can see how the methods from the base class are reused:

$ conan create . pkg/0.1@user/channel
...
pkg/0.1@user/channel: My cool source!
pkg/0.1@user/channel: My cool build!
pkg/0.1@user/channel: My cool package!
pkg/0.1@user/channel: My cool package_info!
...

If there is extra logic needed to extend from a base class, like composing the base class settings with the current recipe, the init() method can be used. For more information about the init() method, visit init().

Limitations

There are a few limitations that should be taken into account:

- name and version fields shouldn't be inherited. set_name() and set_version() might be used.
- short_paths cannot be inherited from a python_requires. Make sure to specify it directly in the recipes that need the paths shortened in Windows.
- exports, exports_sources shouldn't be inherited from a base class, but explicitly defined directly in the recipes. A reusable alternative might be using the SCM component.
- build_policy shouldn't be inherited from a base class, but explicitly defined directly in the recipes.

Reusing files

It is possible to access the files exported by a recipe that is used with python_requires. We could have this recipe, together with a myfile.txt file containing the "Hello" text:

from conans import ConanFile

class PyReq(ConanFile):
    exports = "*"

$ echo "Hello" > myfile.txt
$ conan export . pyreq/0.1@user/channel

Now that the recipe has been exported, we can access its path (the place where myfile.txt is) with the path attribute:

import os
from conans import ConanFile, load

class Pkg(ConanFile):
    python_requires = "pyreq/0.1@user/channel"

    def build(self):
        pyreq_path = self.python_requires["pyreq"].path
        myfile_path = os.path.join(pyreq_path, "myfile.txt")
        content = load(myfile_path)  # content = "Hello"
        self.output.info(content)
        # we could also copy the file, instead of reading it

Note that only exports work for this case, but not exports_sources.

PackageID

The python-requires will affect the package_id of the packages using those dependencies. By default, the policy is minor_mode, which means:

- Changes to the patch version of a python-require will not affect the package ID. So depending on "pyreq/1.2.3" or "pyreq/1.2.4" will result in identical package IDs (both will be mapped to "pyreq/1.2.Z" in the hash computation). Bump the patch version if you want to change your common code, but you don't want the consumers to be affected or to fire a re-build of the dependants.
- Changes to the minor or major version will produce a different package ID. So if you depend on "pyreq/1.2.3", and you bump the version to "pyreq/1.3.0", then you will need to build new binaries that are using that new python-require. Bump the minor or major version if you want to make sure that packages requiring this python-require will be built using these changes in the code.
- Changing either the minor or the major version requires a new package ID, and then a build from source. You could use changes in the minor to indicate that the change should be source compatible, so consumers wouldn't need to make changes, and changes in the major for source-incompatible changes.

As with the regular requires, this default can be customized.
First, you can customize it at a global level by modifying the conan.conf [general] variable default_python_requires_id_mode, which can take the values unrelated_mode, semver_mode, patch_mode, minor_mode, major_mode, full_version_mode, full_recipe_mode and recipe_revision_mode. For example, if you want to make the package IDs never be affected by any change in the versions of python-requires, you could set default_python_requires_id_mode to unrelated_mode (a minimal sketch of the conan.conf entry appears after this section). Read more about these modes in Using package_id() for Package Dependencies.

It is also possible to customize the effect of python_requires per package, using the package_id() method:

from conans import ConanFile

class Pkg(ConanFile):
    python_requires = "pyreq/[>=1.0]"

    def package_id(self):
        self.info.python_requires.patch_mode()

Resolution of python-requires

There are a few things that should be taken into account when using python-requires:

- Python requires recipes are loaded by the interpreter just once, and they are common to all consumers. Do not use any global state in the python-requires recipes.
- Python requires are private to the consumers. They are not transitive. Different consumers can require different versions of the same python-require.
- python-requires can use version range expressions.
- python-requires can python-require other recipes too, but this should probably be limited to very few cases; we recommend using the simplest possible structure.
- python-requires can conflict if they require other recipes and create conflicts in different versions.
- python-requires cannot use regular requires or build_requires.
- It is possible to use python-requires without user and channel.
- python-requires can use native python import to other python files, as long as these are exported together with the recipe.
- python-requires should not create packages, but use export only.
- python-requires can be used as editable packages too.
- python-requires are locked in lockfiles.
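Picking up the global option mentioned above, a minimal conan.conf sketch; unrelated_mode is just the example choice from the text:

[general]
default_python_requires_id_mode = unrelated_mode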
https://docs.conan.io/en/1.39/extending/python_requires.html
CC-MAIN-2022-33
en
refinedweb
Embedding an Image in a Tkinter Canvas widget using PIL

The Pillow library in Python contains all the basic image processing functionality. It is an open-source library available in Python that adds support to load, process, and manipulate images of different formats. Let's take a simple example and see how to embed an image in a Tkinter canvas using the Pillow package (PIL). Follow the steps given below.

Steps

- Import the required libraries and create an instance of tkinter frame.
- Set the size of the frame using the root.geometry method.
- Next, create a Canvas widget using the Canvas() function and set its height and width.
- Open an image using Image.open() and then convert it to a PIL image using ImageTk.PhotoImage(). Save the PIL image in a variable "img".
- Next, add the PIL image to the Canvas using canvas.create_image().
- Finally, run the mainloop of the application window.

Example

# Import the required Libraries
from tkinter import *
from PIL import Image, ImageTk

# Create an instance of tkinter frame
root = Tk()

# Set the geometry of tkinter frame
root.geometry("700x450")

# Create a canvas widget
canvas = Canvas(root, width=600, height=400)
canvas.pack()

# Load an image
img = ImageTk.PhotoImage(Image.open("camels.jpg"))

# Add image to the Canvas Items
canvas.create_image(250, 250, anchor=CENTER, image=img)

root.mainloop()

Output

When you run this code, it will produce the following output window.
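One pitfall worth noting when adapting this example: if the PhotoImage is created inside a function, Python may garbage-collect it once the function returns, leaving a blank canvas. A minimal sketch of keeping an explicit reference on the widget (reusing the same sample image file name as above):

from tkinter import *
from PIL import Image, ImageTk

root = Tk()
canvas = Canvas(root, width=600, height=400)
canvas.pack()

def show_image(path):
   img = ImageTk.PhotoImage(Image.open(path))
   canvas.create_image(250, 250, anchor=CENTER, image=img)
   # Keep a reference on the widget so the image isn't garbage-collected
   canvas.image = img

show_image("camels.jpg")
root.mainloop()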
https://www.tutorialspoint.com/embedding-an-image-in-a-tkinter-canvas-widget-using-pil
CC-MAIN-2022-33
en
refinedweb
Design patterns offer established solutions for common problems in software engineering. They represent the best practices that have evolved over time. This is the start of a series of posts that I will be creating over common and popular design patterns that developers should be familiar with. I'm going to start with creational patterns, which involve the creation of objects. They help reduce complexity and decouple classes in a standardized manner. For this post, I am going to talk about the Singleton design pattern.

The Singleton design pattern was the first pattern that I was taught in college. Its purpose is to initialize only one instance of an object and provide a method for it to be retrieved. This is done by making the constructor private, with a public method that returns the instance created. If you try to instantiate the class from outside it, the compiler will throw an error. There are different ways of implementing this pattern, and I will provide an example below pulled from tutorialspoint.com.

public class Singleton {
   private static Singleton singleton = new Singleton();

   private Singleton() {}

   public static Singleton getInstance() {
      return singleton;
   }
}

When would you use the Singleton design pattern?

- Creation of objects that are computationally expensive
- Creation of loggers used for debugging
- Classes that are used to configure settings for an application
- Classes that hold or access resources that are shared

The Singleton pattern does have its detractors, however, who believe that it is an anti-pattern. Many believe that it is not used correctly and that novice programmers use it too often. Forums also state that creating a container that holds and accesses the single class object is a much better solution in modern applications. Whether or not this design pattern is useful to new developers seems to be up for debate. Let me know what you think in the comments below!

Sources

- Baeldung. (2019, September 11). Introduction to Creational Design Patterns.
- Tutorialspoint. (n.d.). Java: How to Use Singleton Class?
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jryther/singleton-design-pattern-4o70
CC-MAIN-2022-33
en
refinedweb
Some dependencies are only needed at build time, and they may even need to be taken into account when a package like zlib is created, such as when cross-compiling it to Android (in which case the Android toolchain would be a build requirement too).

Important: build_requires are designed for packaging tools, utilities that only run at build time, but are not part of the final binary code. Anything that is linked into consumer packages, like all types of libraries (header only, static, shared), is most likely not a build_requires but a regular requires. The only exception would be testing libraries and frameworks, as long as the tests are not included in the final package.

To address these needs Conan implements build_requires.

Declaring build requirements

Build requirements can be declared in profiles, like:

[build_requires]
tool1/0.1@user/channel
tool2/0.1@user/channel, tool3/0.1@user/channel
*: tool4/0.1@user/channel
my_pkg*:

Warning: This section refers to the experimental feature that is activated when using --profile:build and --profile:host in the command line. It is currently under development; features can be added or removed in the following versions.

Build requirements in the host context propagate as follows:

- cpp_info: all information will be available in the deps_cpp_info["xxx"] object.
- env_info: won't be propagated.
- user_info: will be available using the deps_user_info["xxx"] object.

Build requirements in the build context will propagate all the env_info, and Conan will also populate the environment variables DYLD_LIBRARY_PATH, LD_LIBRARY_PATH and PATH with the corresponding information from the cpp_info object. All this information will be available in the deps_env_info object. Custom information declared in the user_info attribute will be available in the user_info_build["xxx"] object in the consumer conanfile.

Making build_requires affect the consumers package-ID

Warning: This subsection should be considered a workaround, not a feature, and it might have other side effects that will not be fixed, as this is not recommended production code.

As discussed above, the build_requires do not affect the package ID at all. As they will not be present at all when the package_id is computed, they cannot be part of it. It is possible that this might change in the future in Conan 2.0, but at the moment it is not the case. In the meantime, there is a possible workaround that might be used if this is very much needed: using python_requires to point to the same build_requires package. Something like:

from conans import ConanFile

class Pkg(ConanFile):
    python_requires = "tool/[>=0.0]"
    build_requires = "tool/[>=0.0]"

By using this mechanism, the tool dependency will always be used (the recipe will be fetched from the servers), and the version of tool will be used to compute the package_id, following the default_python_requires_id_mode in conan.conf, or the specific self.info.python_requires.xxxx_mode() in recipes.

Testing build_requires

Warning: This is an experimental feature, subject to future breaking changes.

From Conan 1.36, it is possible to test build_requires with the test_package functionality. What is necessary is to specify in the test_package/conanfile.py that the tested package is a build tool, which can be done with:

from conans import ConanFile

class Pkg(ConanFile):
    test_type = "build_requires"
    ...

The rest of the test conanfile.py should take into account that the reference automatically injected will be a build_require. If for some reason it is necessary to test the same package both as a regular require and a build_require, then it is possible to specify:

test_type = "build_requires", "requires"
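Building on that last point, a minimal sketch of a complete test_package/conanfile.py for a tool package (the tool name and its command line are hypothetical):

from conans import ConanFile

class TestPackageConan(ConanFile):
    settings = "os", "arch"
    # Tell Conan the tested reference is injected as a build requirement
    test_type = "build_requires"

    def test(self):
        # The tool's bin/ directory is in PATH via the propagated
        # env_info, so it can be invoked directly
        self.run("mytool --version")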
https://docs.conan.io/en/1.39/devtools/build_requires.html
CC-MAIN-2022-33
en
refinedweb
Red Hat Training

A Red Hat training course is available for OpenShift Dedicated.

Creating Images: OpenShift Dedicated 3 Image Creation Guide

Abstract

Chapter 1. Overview

This guide provides best practices on writing and testing container images that can be used on OpenShift Dedicated.

Chapter 2. Guidelines

2.1. Overview

When creating container images to run on OpenShift Dedicated, keep the following guidelines in mind.

2.2. General Container Image Guidelines

The following guidelines apply when creating a container image in general, and are independent of whether the images are used on OpenShift Dedicated.

Reuse Images

Maintain Compatibility Within Tags

Avoid Multiple Processes

We recommend that you do not start multiple services, such as a database and SSHD, inside one container. This is not necessary because containers are lightweight and can be easily linked together for orchestrating multiple processes.

Use exec

Clean Temporary Files

Place Instructions in the Proper Order

Mark Important Ports

See the "Always EXPOSE Important Ports" section of the Project Atomic documentation for more information.

Set Environment Variables

Avoid Default Passwords

Avoid SSHD

Installing and running SSHD in your image opens up additional vectors for attack and requirements for security patching.

Use Volumes for Persistent Data

Images should use a Docker volume for persistent data.

2.3. OpenShift Dedicated-Specific Guidelines

The following are guidelines that apply when creating container images specifically for use on OpenShift Dedicated.

Enable Images for Source-To-Image (S2I)

Support Arbitrary User IDs

By default, OpenShift Dedicated runs containers with an arbitrarily assigned user ID. Directories and files that processes in the image may write to should be owned by the root group and be group-writable, for example:

RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory

Because the container user is always a member of the root group, the container user can read and write these files. The root group does not have any special permissions (unlike the root user) so there are no security concerns with this arrangement. Lastly, the final USER declaration in the Dockerfile should specify the user ID (numeric value) and not the user name.

Services provide a static endpoint for access which does not change as containers are stopped, started, or moved. In addition, services provide load balancing for requests.

Provide Common Libraries

Use Environment Variables for Configuration

Set Image Metadata

Clustering

Although pods can communicate directly with each other, their IP addresses change anytime the pod starts, stops, or is moved. Therefore, it is important for your clustering scheme to be dynamic.

Logging

It is best to send all logging to standard out.

Liveness and Readiness Probes

Templates

Consider providing an example template with your image. A template will give users an easy way to quickly get your image deployed with a working configuration. Your template should include the liveness and readiness probes you documented with the image, for completeness.

2.4. External References

Chapter 3. Image Metadata

3.1. Overview

3.2. Defining Image Metadata

For OpenShift Dedicated, the namespace should be set to io.openshift, and for Kubernetes the namespace is io.k8s. See the Docker custom metadata documentation for details about the format.

You can use the OpenShift Dedicated build system for automated testing and continuous integration. Check the S2I Requirements topic to learn more about the S2I architecture before proceeding.

5.2. Testing Requirements

The standard location for the test script is test/run.
This script is invoked automatically when the image is tested.

5.3. Generating Scripts and Tools

5.5. Basic Testing Workflow

5.6. Using OpenShift Dedicated for Building the Image

Once you have a Dockerfile and the other artifacts that make up your new S2I builder image, you can put them in a git repository and use OpenShift Dedicated to build and push the image. Simply define a Docker build that points to your repository.

Chapter 6. Custom Builder

6.1. Overview

By allowing you to define a specific builder image responsible for the entire build process, OpenShift Dedicated makes it possible to customize builds end to end. Key points for custom builder images:

- The Build object definition contains all the necessary information about input parameters for the build.
- Run the build process.
- If your build produces an image, push it to the build's output location if it is defined. Other output locations can be passed with environment variables.
https://access.redhat.com/documentation/en-us/openshift_dedicated/3/html-single/creating_images/index
CC-MAIN-2022-33
en
refinedweb
Copying existing files in a S3 bucket to another S3 bucket

I have an existing S3 bucket which contains a large number of files. I want to run a Lambda function every 1 minute and copy those files to another destination S3 bucket. My function is:

s3 = boto3.resource('s3')
clientname = boto3.client('s3')

def lambda_handler(event, context):
    bucket = 'test-bucket-for-transfer-check'
    try:
        response = clientname.list_objects(
            Bucket=bucket,
            MaxKeys=5
        )
        for record in response['Contents']:
            key = record['Key']
            copy_source = {
                'Bucket': bucket,
                'Key': key
            }
            try:
                destbucket = s3.Bucket('serverless-demo-s3-bucket')
                destbucket.copy(copy_source, key)
                print('{} transferred to destination bucket'.format(key))
            except Exception as e:
                print(e)
                print('Error getting object {} from bucket {}.'.format(key, bucket))
                raise e
    except Exception as e:
        print(e)
        raise e

Now how can I make sure the function is copying new files each time it runs?

Suppose the two buckets in question are Bucket-A and Bucket-B, and the task to be done is to copy files from Bucket-A to Bucket-B. Now you have all records in a DynamoDB table "Copy-Logs" of which files were copied successfully and which were not...
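A minimal sketch of another common approach: skip objects that already exist in the destination bucket, so each run only copies new files (the bucket names are the ones from the question; the helper name is mine):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.resource('s3')
client = boto3.client('s3')

SRC = 'test-bucket-for-transfer-check'
DST = 'serverless-demo-s3-bucket'

def object_exists(bucket, key):
    # head_object raises a 404 ClientError when the key is absent
    try:
        client.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError as e:
        if e.response['Error']['Code'] == '404':
            return False
        raise

def lambda_handler(event, context):
    paginator = client.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=SRC):
        for record in page.get('Contents', []):
            key = record['Key']
            if object_exists(DST, key):
                continue  # already copied on a previous run
            s3.Bucket(DST).copy({'Bucket': SRC, 'Key': key}, key)
            print('{} transferred to destination bucket'.format(key))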
https://www.edureka.co/community/20350/copying-existing-files-in-a-s3-bucket-to-another-s3-bucket
CC-MAIN-2022-33
en
refinedweb
Have you ever tried to cast hand shadows on a wall? It is the easiest thing in the world, and yet to do it well requires practice and just the right setup. To cultivate your #cottagecore aesthetic, try going into a completely dark room with just one lit candle, and casting hand shadows on a plain wall. The effect is startlingly dramatic. What fun!

Even a tea light suffices to create a great effect

In 2020, and now into 2021, many folks are reverting back to basics as they look around their houses, reopening dusty corners of attics and basements and remembering the simple crafts that they used to love. Papermaking, anyone? All you need is a few tools and torn up, recycled paper. Pressing flowers? All you need is newspaper, some heavy books, and patience. And hand shadows? Just a candle.

This TikTok creator has thousands of views for their hand shadow tutorials

But what's a developer to do when trying to capture that #cottagecore vibe in a web app? While exploring the art of hand shadows, I wondered whether some of the recent work I had done for body poses might be applicable to hand poses. What if you could tell a story on the web using your hands, and somehow save a video of the show and the narrative behind it, and send it to someone special? In lockdown, what could be more amusing than sharing shadow stories between friends or relatives, all virtually?

Hand shadow casting is a folk art probably originating in China; if you go to tea houses with stage shows, you might be lucky enough to view one like this!

When you start researching hand poses, it's striking how much content there is on the web on the topic. There has been work since at least 2014 on creating fully articulated hands within the research, simulation, and gaming sphere:

MSR throwing hands

There are dozens of handpose libraries already on GitHub. There are many applications where tracking hands is a useful activity:

• Gaming
• Simulations / Training
• "Hands free" uses for remote interactions with things by moving the body
• Assistive technologies
• TikTok effects :trophy:
• Useful things like Accordion Hands apps

One of the more interesting new libraries, handsfree.js, offers an excellent array of demos in its effort to move to a hands free web experience:

Handsfree.js, a very promising project

As it turns out, hands are pretty complicated things. They each include 21 keypoints (vs PoseNet's 17 keypoints for an entire body). Building a model to support inference for such a complicated grouping of keypoints has proven challenging. There are two main libraries available to the web developer when incorporating hand poses into an app: TensorFlow.js's handposes, and MediaPipe's. HandsFree.js uses both, to the extent that they expose APIs. As it turns out, neither TensorFlow.js nor MediaPipe's handposes are perfect for our project. We will have to compromise.

TensorFlow.js's handposes allow access to each hand keypoint and the ability to draw the hand to canvas as desired. HOWEVER, it only currently supports single hand poses, which is not optimal for good hand shadow shows.

MediaPipe's handpose models (which are used by TensorFlow.js) do allow for dual hands BUT its API does not allow for much styling of the keypoints, so that drawing shadows using it is not obvious.

One other library, fingerposes, is optimized for finger spelling in a sign language context and is worth a look.
Since it's more important to use the Canvas API to draw custom shadows, we are obliged to use TensorFlow.js, hoping that either it will soon support multiple hands OR handsfree.js helps push the envelope to expose a more styleable hand. Let's get to work to build this app.

As a Vue.js developer, I always use the Vue CLI to scaffold an app using vue create my-app and creating a standard app. I set up a basic app with two routes: Home and Show. Since this is going to be deployed as an Azure Static Web App, I follow my standard practice of including my app files in a folder named app and creating an api folder to include an Azure function to store a key (more on this in a minute).

In my package.json file, I import the important packages for using TensorFlow.js and the Cognitive Services Speech SDK in this app. Note that TensorFlow.js has divided its imports into individual packages:

"@tensorflow-models/handpose": "^0.0.6",
"@tensorflow/tfjs": "^2.7.0",
"@tensorflow/tfjs-backend-cpu": "^2.7.0",
"@tensorflow/tfjs-backend-webgl": "^2.7.0",
"@tensorflow/tfjs-converter": "^2.7.0",
"@tensorflow/tfjs-core": "^2.7.0",
...
"microsoft-cognitiveservices-speech-sdk": "^1.15.0",

We will draw an image of a hand, as detected by TensorFlow.js, onto a canvas, superimposed onto a video supplied by a webcam. In addition, we will redraw the hand to a second canvas (shadowCanvas), styled like shadows:

<div id="canvas-wrapper" class="column is-half">
  <canvas id="output" ref="output"></canvas>
  <video id="video" ref="video" playsinline></video>
</div>
<div class="column is-half">
  <canvas class="has-background-black-bis"
    id="shadowCanvas"
    ref="shadowCanvas">
  </canvas>
</div>

Working asynchronously, load the Handpose model. Once the backend is set up and the model is loaded, load the video via the webcam, and start watching the video's keyframes for hand poses. It's important at these steps to ensure error handling in case the model fails to load or there's no webcam available.

async mounted() {
  await tf.setBackend(this.backend);
  // async load model, then load video, then pass it to start landmarking
  this.model = await handpose.load();
  this.message = "Model is loaded! Now loading video";
  let webcam;
  try {
    webcam = await this.loadVideo();
  } catch (e) {
    this.message = e.message;
    throw e;
  }
  this.landmarksRealTime(webcam);
},

Still working asynchronously, set up the camera to provide a stream of images:

async setupCamera() {
  if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
    throw new Error(
      "Browser API navigator.mediaDevices.getUserMedia not available"
    );
  }
  this.video = this.$refs.video;
  const stream = await navigator.mediaDevices.getUserMedia({
    video: {
      facingMode: "user",
      width: VIDEO_WIDTH,
      height: VIDEO_HEIGHT,
    },
  });
  return new Promise((resolve) => {
    this.video.srcObject = stream;
    this.video.onloadedmetadata = () => {
      resolve(this.video);
    };
  });
},

Now the fun begins, as you can get creative in drawing the hand on top of the video. This landmarking function runs on every keyframe, watching for a hand to be detected and drawing lines onto the canvas - red on top of the video, and black on top of the shadowCanvas. Since the shadowCanvas background is white, the hand is drawn as white as well and the viewer only sees the offset shadow, in fuzzy black with rounded corners. The effect is rather spooky!

async landmarksRealTime(video) {
  // start showing landmarks
  this.videoWidth = video.videoWidth;
  this.videoHeight = video.videoHeight;

  // set up skeleton canvas
  this.canvas = this.$refs.output;
  ...
  // set up shadowCanvas
  this.shadowCanvas = this.$refs.shadowCanvas;
  ...

  this.ctx = this.canvas.getContext("2d");
  this.sctx = this.shadowCanvas.getContext("2d");
  ...

  // paint to main
  this.ctx.clearRect(0, 0, this.videoWidth, this.videoHeight);
  this.ctx.strokeStyle = "red";
  this.ctx.fillStyle = "red";
  this.ctx.translate(this.shadowCanvas.width, 0);
  this.ctx.scale(-1, 1);

  // paint to shadow box
  this.sctx.clearRect(0, 0, this.videoWidth, this.videoHeight);
  this.sctx.shadowColor = "black";
  this.sctx.shadowBlur = 20;
  this.sctx.shadowOffsetX = 150;
  this.sctx.shadowOffsetY = 150;
  this.sctx.lineWidth = 20;
  this.sctx.lineCap = "round";
  this.sctx.fillStyle = "white";
  this.sctx.strokeStyle = "white";
  this.sctx.translate(this.shadowCanvas.width, 0);
  this.sctx.scale(-1, 1);

  // now you've set up the canvases, you can frame the landmarks
  this.frameLandmarks();
},

As the keyframes progress, the model predicts new keypoints for each of the hand's elements, and both canvases are cleared and redrawn:

const predictions = await this.model.estimateHands(this.video);
if (predictions.length > 0) {
  const result = predictions[0].landmarks;
  this.drawKeypoints(
    this.ctx,
    this.sctx,
    result,
    predictions[0].annotations
  );
}
requestAnimationFrame(this.frameLandmarks);

Since TensorFlow.js allows you direct access to the keypoints of the hand and the hand's coordinates, you can manipulate them to draw a more lifelike hand. Thus we can redraw the palm to be a polygon, rather than resembling a garden rake with points culminating in the wrist.

Re-identify the fingers and palm:

fingerLookupIndices: {
  thumb: [0, 1, 2, 3, 4],
  indexFinger: [0, 5, 6, 7, 8],
  middleFinger: [0, 9, 10, 11, 12],
  ringFinger: [0, 13, 14, 15, 16],
  pinky: [0, 17, 18, 19, 20],
},
palmLookupIndices: {
  palm: [0, 1, 5, 9, 13, 17, 0, 1],
},

...and draw them to screen:

const fingers = Object.keys(this.fingerLookupIndices);
for (let i = 0; i < fingers.length; i++) {
  const finger = fingers[i];
  const points = this.fingerLookupIndices[finger].map(
    (idx) => keypoints[idx]
  );
  this.drawPath(ctx, sctx, points, false);
}
const palmArea = Object.keys(this.palmLookupIndices);
for (let i = 0; i < palmArea.length; i++) {
  const palm = palmArea[i];
  const points = this.palmLookupIndices[palm].map(
    (idx) => keypoints[idx]
  );
  this.drawPath(ctx, sctx, points, true);
}

With the models and video loaded, keyframes tracked, and hands and shadows drawn to canvas, we can implement a speech-to-text SDK so that you can narrate and save your shadow story. To do this, get a key from the Azure portal for Speech Services by creating a Service. You can connect to this service by importing the sdk:

import * as sdk from "microsoft-cognitiveservices-speech-sdk";

...and start audio transcription after obtaining an API key, which is stored in an Azure function in the /api folder. This function gets the key stored in the Azure portal in the Azure Static Web App where the app is hosted.
async startAudioTranscription() {
  try {
    // get the key
    const response = await axios.get("/api/getKey");
    this.subKey = response.data;

    // sdk
    let speechConfig = sdk.SpeechConfig.fromSubscription(
      this.subKey,
      "eastus"
    );
    let audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
    this.recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

    this.recognizer.recognized = (s, e) => {
      this.text = e.result.text;
      this.story.push(this.text);
    };

    this.recognizer.startContinuousRecognitionAsync();
  } catch (error) {
    this.message = error;
  }
},

In this function, the SpeechRecognizer gathers text in chunks that it recognizes and organizes into sentences. That text is printed into a message string and displayed on the front end.

In this last part, the output cast onto the shadowCanvas is saved as a stream and recorded using the MediaRecorder API:

const stream = this.shadowCanvas.captureStream(60); // 60 FPS recording
this.recorder = new MediaRecorder(stream, {
  mimeType: "video/webm;codecs=vp9",
});
(this.recorder.ondataavailable = (e) => {
  this.chunks.push(e.data);
}),
  this.recorder.start(500);

...and displayed below as a video with the storyline in a new div:

const video = document.createElement("video");
const fullBlob = new Blob(this.chunks);
const downloadUrl = window.URL.createObjectURL(fullBlob);
video.src = downloadUrl;
document.getElementById("story").appendChild(video);
video.autoplay = true;
video.controls = true;

This app can be deployed as an Azure Static Web App using the excellent Azure plugin for Visual Studio Code. And once it's live, you can tell durable shadow stories!

Try Ombromanie here. The codebase is available here.

Take a look at Ombromanie in action:

Learn more about AI on Azure:
- Azure AI Essentials
- Video covering speech and language
- Azure free account sign-up
https://techcommunity.microsoft.com/t5/educator-developer-blog/ombromanie-creating-hand-shadow-stories-with-azure-speech-and/ba-p/1822815
CC-MAIN-2022-33
en
refinedweb
ISchedulerMappingConverter Interface

Enables you to apply custom logic to a mapping.

Namespace: DevExpress.XtraScheduler
Assembly: DevExpress.XtraScheduler.v22.1.Core.dll

Declaration

public interface ISchedulerMappingConverter

Related API Members

The following members return ISchedulerMappingConverter objects.

Remarks

You can associate a mapping converter with a mapping. Create a class that implements the ISchedulerMappingConverter interface, instantiate it, and assign it to a related property as described in the Mapping Converters document.

Example

Note: A complete sample project is available online.

class MappingConverterStart : ISchedulerMappingConverter
{
    public object Convert(object obj, Type targetType, object parameter)
    {
        return DateTime.ParseExact(obj.ToString(), "s",
            System.Globalization.DateTimeFormatInfo.InvariantInfo);
    }

    public object ConvertBack(object obj, Type targetType, object parameter)
    {
        return ((DateTime)obj).ToString("s");
    }
}

Related GitHub Examples

The following code snippet (auto-collected from DevExpress Examples) contains a reference to the ISchedulerMappingConverter interface.

Note: The algorithm used to collect these code examples remains a work in progress. Accordingly, the links and snippets below may produce inaccurate results. If you encounter an issue with code examples below, please use the feedback form on this page to report the issue.
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraScheduler.ISchedulerMappingConverter
CC-MAIN-2022-33
en
refinedweb
Python client for accessing MIT's Moira system

Project description

Python client for accessing MIT's Moira system. This client uses the SOAP API, which has a few unusual limitations, and requires X.509 client certificates for access.

Installation

pip install mit-moira

Usage

from mit_moira import Moira

# Initialize Moira client with X.509 certificate and private key file
moira = Moira("path/to/x509.cert", "path/to/x509.pem")

Full API documentation is on ReadTheDocs.
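As a small usage variation (my own sketch; the environment variable names here are hypothetical), you can keep the certificate paths out of source control:

import os

from mit_moira import Moira

# Hypothetical env vars holding the paths to your X.509 credentials
moira = Moira(os.environ["MOIRA_CERT"], os.environ["MOIRA_KEY"])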
https://pypi.org/project/mit-moira/0.0.4/
CC-MAIN-2022-33
en
refinedweb
§The Logging API

Using logging in your application can be useful for monitoring, debugging, error tracking, and business intelligence. Play provides an API for logging which is accessed through the Logger class and uses Logback as the default logging engine.

When an exception is logged, the output includes the full stack trace, for example:

at controllers.Application.riskyCalculation(Application.java:20) ~[classes/:na]
at controllers.Application.index(Application.java:11) ~[classes/:na]
at Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(routes_routing.scala:69) [classes/:na]
at Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(routes_routing.scala:69) [classes/:na]
at play.core.Router$HandlerInvoker$$anon$8$$anon$2.invocation(Router.scala:203)

A dedicated "access" logger can be factored into an action composition:

import play.Logger;
import play.Logger.ALogger;
import play.libs.F;
import play.mvc.Action;
import play.mvc.Controller;
import play.mvc.Http;
import play.mvc.Http.Request;
import play.mvc.Result;
import play.mvc.With;

public class AccessLoggingAction extends Action.Simple {

  private final ALogger accessLogger = Logger.of("access");

  @Override
  public F.Promise<Result> call(Http.Context ctx) throws Throwable {
    final Request request = ctx.request();
    accessLogger.info("method=" + request.method() + " uri=" + request.uri() + " remote-address=" + request.remoteAddress());
    return delegate.call(ctx);
  }
}

Alternatively, you can intercept requests in global settings:

import java.lang.reflect.Method;

import play.Application;
import play.GlobalSettings;
import play.Logger;
import play.Logger.ALogger;
import play.mvc.Action;
import play.mvc.Http.Request;

public class Global extends GlobalSettings {

  private final ALogger accessLogger = Logger.of("access");

  @Override
  @SuppressWarnings("rawtypes")
  public Action onRequest(Request request, Method method) {
    accessLogger.info("method=" + request.method() + " uri=" + request.uri() + " remote-address=" + request.remoteAddress());
    return super.onRequest(request, method);
  }

  @Override
  public void onStart(Application app) {
    Logger.info("Application has started");
  }

  @Override
  public void onStop(Application app) {
    Logger.info("Application has stopped");
  }
}

Note that the Global class is also a sensible place to use the default logger for events like application start and stop.

§Configuration

See configuring logging for details on configuration.
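As an illustrative addition (not part of the original page): the "access" logger created with Logger.of("access") above can be given its own level and appender in conf/logback.xml, along these lines:

<!-- conf/logback.xml (excerpt), illustrative only -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%date %-5level %logger - %message%n</pattern>
    </encoder>
  </appender>

  <!-- The "access" logger used by the action composition above -->
  <logger name="access" level="INFO" />

  <root level="WARN">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>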
https://www.playframework.com/documentation/ja/2.4.4/JavaLogging
CC-MAIN-2022-33
en
refinedweb
In this article, we’ll be looking at using the Python Copy module, to perform deep and shallow copy operations. Now, what do we mean by deep copy and shallow copy? Let’s take a look, using illustrative examples! Why do we need the Python Copy module? In Python, everything is represented using objects. Therefore, in a lot of cases, we may need to copy objects directly. In these cases, we cannot use the assignment operator directly. The point behind assignment is that multiple variables can point to the same object. This means that if the object changes using any of those variables, changes will be reflected everywhere! The following example illustrates this problem, using a shared list object, which is mutable. a = [1, 2, 3, 4] b = a print(a) print(b) b.append(5) # Changes will be reflected in a too! print(a) print(b) Output [1, 2, 3, 4] [1, 2, 3, 4] [1, 2, 3, 4, 5] [1, 2, 3, 4, 5] As you can see, since both variables point to the same object, when b changes, so does a! To deal with this issue, Python gives us a way using the Copy module. The Python copy module is a part of the standard library, and can be imported using the below statement: import copy Now, in this module, we can perform two types of operations mainly: - Shallow Copy - Deep Copy Let’s take a look at these methods now. Shallow Copy This method is used to perform a shallow copy operation. The syntax for calling this method is: import copy new_obj = copy.copy(old_obj) # Perform a shallow copy This will do two things – - Create a new object - Insert all references of the objects found in the original object Now, since it creates a new object, we can be sure that our new object is different from the old object. However, this will still maintain references to nested objects. So if the object we need to copy has other mutable objects (list, set, etc), this will still maintain references to the same nested object! To understand this, let’s take an example. To illustrate the first point, we will try this with a simple list of integers (no nested objects!) import copy old_list = [1, 2, 3] print(old_list) new_list = copy.copy(old_list) # Let's try changing new_list new_list.append(4) # Changes will not be reflected in the original list, since the objects are different print(old_list) print(new_list) Output [1, 2, 3] [1, 2, 3, 4] [1, 2, 3] As you can see, in case our object is a simple list, there is no problem with shallow copy. Let’s take another case, where our object is a list of lists. import copy old_list = [[1, 2], [1, 2, 3]] print(old_list) new_list = copy.copy(old_list) # Let's try changing a nested object inside the list new_list[1].append(4) # Changes will be reflected in the original list, since the object contains a nested object print(old_list) print(new_list) Output [[1, 2], [1, 2, 3]] [[1, 2], [1, 2, 3, 4]] [[1, 2], [1, 2, 3, 4]] Here, notice that both old_list and new_list have been affected! If we must avoid this behavior, we must copy all objects recursively, along with nested objects. This is called a Deep Copy Operation using the Python copy module. Deep Copy This method is similar to the shallow copy method, but now copies everything from the original object (including nested objects) into a new object. To do a deep copy operation, we can use the below syntax: import copy new_object = copy.deepcopy(old_object) Let’s take our old example, and try using deep copy to solve our issue. 
import copy

old_list = [[1, 2], [1, 2, 3]]
print(old_list)

new_list = copy.deepcopy(old_list)

# Let's try changing a nested object inside the list
new_list[1].append(4)

# Changes will NOT be reflected in the original list, since the objects are different
print(old_list)
print(new_list)

Output

[[1, 2], [1, 2, 3]]
[[1, 2], [1, 2, 3]]
[[1, 2], [1, 2, 3, 4]]

Notice that the old list is unchanged. Since all objects were copied recursively, there is no problem now!

However, because it copies every object, the deepcopy method is a bit more expensive compared to the shallow copy method. So use it wisely, only when you need it!

Conclusion

In this article, we learned about using the Python Copy module to perform shallow copy and deep copy operations.
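One closing pattern worth knowing, as an addition to the article above: a class can customize how the copy module copies it by defining __copy__ and __deepcopy__. A minimal sketch using only the standard library:

import copy

class Config:
    def __init__(self, name, options):
        self.name = name
        self.options = options  # a mutable dict

    def __copy__(self):
        # Shallow: new Config, but the options dict is shared
        return Config(self.name, self.options)

    def __deepcopy__(self, memo):
        # Deep: recursively copy the options dict as well
        return Config(self.name, copy.deepcopy(self.options, memo))

base = Config("base", {"debug": True})
clone = copy.deepcopy(base)
clone.options["debug"] = False

print(base.options)  # {'debug': True} - unaffected by the change to the clone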
https://www.askpython.com/python-modules/python-copy
CC-MAIN-2021-31
en
refinedweb
On 12/02/2017 at 06:02, xxxxxxxx wrote:

hi there,
i am failing to get a working shaderlink / texbox in my GeDialog!
working with resource files allowed me to add a texbox with:

SHADER MY_SHADER { FIT_H; SCALE_H; }

but sadly the gui then doesn't react to any interaction
i tried the following in GeDialogs CreateLayout(self):

self.MyShader = self.FindCustomGui(MY_SHADER,c4d.CUSTOMGUI_TEXBOX)
--> just returns True, so probably doesn't work

self.MyShader = self.FindCustomGui(MY_SHADER,c4d.CUSTOMGUI_LINKBOX)
--> returns a CustomGui but then:

color = c4d.BaseShader(c4d.Xcolor)
self.SetLink(MyShader, color)
--> crashes c4d

so the question is: is it possible to have a working texbox in a GeDialog? and if yes, how would one get there?
best, theo

On 13/02/2017 at 02:03, xxxxxxxx wrote:

Hi,
I'm afraid I haven't good news. It's not possible to use a SHADER gadget inside a dialog with the Python API. TexBoxGui that provides access to SHADER resource is missing in the Python API. This explains why self.FindCustomGui() returns True because the actual TexBoxGui object can't be returned.
Calling self.FindCustomGui() with CUSTOMGUI_LINKBOX (good try but not recommended at all) returns a "false" LinkBoxGui and crashes as the actual dialog gadget isn't a LinkBox.

On 13/02/2017 at 02:10, xxxxxxxx wrote:

hi and thanks for your answer... there is a post about it here, too... but it is from 2010
but can i ask to understand... would it work to add a texbox/shader in a GeDialog in c++? so it is just a problem of the python SDK?
and if fundamentally not possible in GeDialog with python... are there other ways? can you construct the missing parts yourself? It must be possible somehow to open a window, display a texbox, and get some input, not?
my goal is simply to have my own window where i can choose a texture or shader and then set/insert it in a material shader slot.
What about maybe using a combobox gadget to get all of the existing materials rather than a textbox? Then the user just selects the one they want to add the image to, rather than typing the name. There's so many ways to do this kind of thing. There's no way to really answer your question. Don't let the fact that we can't physically have a material link in the dialog get in your way. Try to think about how you can work around it. -ScottA On 16/02/2017 at 11:11, xxxxxxxx wrote: no no... i wanted to code a helper plugin to setup materials for that i must be able to select textures and all shaders, noise, layer, etc etc so i would need a texbox On 16/02/2017 at 12:05, xxxxxxxx wrote: My point is that there are many ways to do what you want without using the shaderlink custom gui. For example. You can iterate through a material/materials and get&set the shader links manually. import c4d shdrlist = [] def ShaderSearch(shader) : next = shader.GetNext() down = shader.GetDown() if down: ShaderSearch(down) if next: ShaderSearch(next) shdrlist.append(shader.GetName()) def main() : mat = doc.GetActiveMaterial() if mat is None: return shdr = mat.GetFirstShader() if shdr is None: return ShaderSearch(shdr) print shdrlist c4d.EventAdd() if __name__=='__main__': main() Where you get the images or shaders that you want to add. And how you want the user to do that task is relative. All depending on your specific desired workflow. The shaderlink custom gui might be the most obvious thing to use. But it's not your only option. For visualizations. I've written dialog plugins that show the active material using a bitmap button to display it. That can be updated by clicking on them. A UserArea could also be used for this. This gets around the limitation of not being able to use a material gadget in a GeDialog. The sdk as so many options in it that you should be able to do almost anything you want to do. But you might have to do them by hand, instead of using a pre made gadget. On 17/02/2017 at 01:26, xxxxxxxx wrote: Yes, the TextBox GUI works with the C++ API. And I've already added TexBoxGui to the Python API todo list On 17/02/2017 at 03:05, xxxxxxxx wrote: @ ScottA: i would need to be able to choose from all available shaders in c4d, not from the shaders of a selected material. the goal is to have the same choice for a shader like when you would be in the material editor itself. @ Yannick: good news ! but that probably wont go that fast, will it? .) On 17/02/2017 at 05:17, xxxxxxxx wrote: Originally posted by xxxxxxxx good news ! but that probably wont go that fast, will it? .) Originally posted by xxxxxxxx good news ! but that probably wont go that fast, will it? .) Missing classes and functions are regularly added to the Python API. On 20/05/2017 at 01:37, xxxxxxxx wrote: Originally posted by xxxxxxxx Originally posted by xxxxxxxx good news ! but that probably wont go that fast, will it? .) Missing classes and functions are regularly added to the Python API. was there a progress regarding texbox/shader? does it work to add a texbox/shader in a GeDialog in python by now? my project is kinda stuck till then... and i dont even know where i could see a change, i guess such details are not in the release notes On 21/05/2017 at 03:11, xxxxxxxx wrote: Such details are in the Python documentation "What's New" page. Additions to the Python API usually only come with a new C4D release, so you'll have to wait until R19. On 06/09/2017 at 00:33, xxxxxxxx wrote: ... mhh... i read the news here ... 
but i think the Shaderlink / Texbox / SHADER gadget inside a dialog wasn't added to the Python API is that correct and is it still on the todo list? On 06/09/2017 at 05:22, xxxxxxxx wrote: unfortunately the TexBox CustomGUI didn't make it into R19. Sorry. To our excuse, we never said it would. But at least I can confirm, it is still on our ToDo list. On 06/09/2017 at 23:09, xxxxxxxx wrote: No excuse needed, it wasn't promised at all But is there a chance that this gets added in between major releases (in other words anytime soon-ish) ? or is it more likely to take a year or two? (i guess it is not tagged with high priority)
https://plugincafe.maxon.net/topic/9959/13411_shaderlink--texbox-in-gedialog-
CC-MAIN-2021-31
en
refinedweb
In this article, we provide an introduction to the use of Pandas, which is an extension of NumPy. Pandas Series, from which DataFrames can be constructed, are built on NumPy arrays. In addition to a wide range of ways Pandas DataFrames can be manipulated, one major advantage that Pandas has over NumPy is that indexes and columns can have labels provided by the user. This allows Pandas Series and DataFrames to be more expressive in the information they convey. As Numpy and Pandas are tightly coupled they are commonly imported together. The typical convention is to alias numpy as np, and pandas as pd. import numpy as np import pandas as pd Pandas Series Before exploring the Pandas DataFrame object we'll first build-up to that with the simpler Pandas Series. You can convert a common Python list, a NumPy array, or a Python dictionary to a Pandas Series. First, let's create those objects and see how they are converted to a Pandas Series. my_list = [2, 23, 42, 538, 1024] # python list my_labels = ['A', 'B', 'C', 'D', 'E'] # another python list my_array = np.array([1, 1, 2, 3, 5]) # numpy array my_dict = {'foo': 15, 'bar': 23, 'hi': 72} # python dictionary We can use the pd.Series method in Pandas and pass in a list to create a Pandas Series object. my_series = pd.Series(data=my_list) my_series 0 2 1 23 2 42 3 538 4 1024 dtype: int64 type(my_series) pandas.core.series.Series We have now created my_series which is the type pandas.core.series.Series, or a Pandas Series. You'll also notice there is an index ranging from 0 to 4. One other thing to note is that we had to assign the function parameter data to the list we passed in. This is required because as mentioned above, we can also pass in labels for our data: my_labeled_series = pd.Series(data=my_list, index=my_labels) my_labeled_series A 2 B 23 C 42 D 538 E 1024 dtype: int64 Now the list values have labels that we have assigned them. The real advantage in this is we can now retrieve values in the Series based on labels that may be more intuitive. Let's change our labels to student names. student_names = ['Grumpy', 'Sneezy', 'Sleepy', 'Happy', 'Bashful'] my_new_series = pd.Series(data=my_list, index=student_names) my_new_series Grumpy 2 Sneezy 23 Sleepy 42 Happy 538 Bashful 1024 dtype: int64 Let's say we wanted the value for Sleepy. In a NumPy Array that would mean we would need to know the index of Sleepy's value. That may not be much to ask in a list of five values, but thousands could pose a problem. Now in Pandas, we can access Sleepy's value by passing his name in as follows: sleepy_value = my_new_series['Sleepy'] sleepy_value 42 With a Pandas Series, we just need to pass in the index name or label to access a value. You can also create a Pandas Series from a Python Dictionary or NumPy Array: dict_to_series = pd.Series(my_dict) # the dictionary is constructed with labels already in place arr_to_series = pd.Series(my_array, my_labels) # NumPy arrays have default numerical labels only Pandas DataFrames DataFrames are a convenient way to interact with Earth observation data in EarthAI. Pandas DataFrames are similar to DataFrames in R if you are familiar with those. They can be thought of as objects that combine multiple Series together. More fundamentally, they are a 2-dimensional grid of data with labeled rows and columns. It may be most useful to just build one and take a look at it. 
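One thing the Series discussion above glosses over: because Series wrap NumPy arrays, arithmetic on them is vectorized and aligned by label. A quick illustration (the values here are arbitrary):

my_s1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
my_s2 = pd.Series([10, 20, 30], index=['a', 'b', 'c'])

my_s1 + my_s2   # element-wise addition, aligned on the 'a'/'b'/'c' labels
my_s1 * 2       # a scalar broadcasts across every element

With that noted, on to building that first DataFrame.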
my_names = ['Larry', 'Moe', 'Curly', 'Shemp'] my_tests = ['test1', 'test2', 'test3', 'test4'] from numpy.random import randn np.random.seed(42) The first two lines of code just create lists that we will use to name the rows and columns of our DataFrame. The third line imports a random number generator from NumPy. The fourth line is not strictly necessary but allows you to generate the same set of random numbers repeatedly by setting the seed. Below we'll use the Pandas method pd.DataFrame to build a 4 by 4 dimension DataFrame with rows and columns named from the lists above: my_df = pd.DataFrame(randn(4,4), index=my_names, columns=my_tests) my_df So now we've created my_df that has the appearance of a spreadsheet with labeled rows and columns. We can check the type of my_df and see that it is of type pandas.core.frame.DataFrame or more simply a Pandas DataFrame object. type(my_df) pandas.core.frame.DataFrame One of the first things you'll likely do is run the head method on a new DataFrame. By default, this returns the first 5 rows of the DataFrame, though you can set as a function argument how many rows it returns. This function allows you quickly to see the structure of a DataFrame, namely what columns are included and what form the data takes (float, string, etc.). For our DataFrame we only have four rows, so it will just return the full DataFrame. This function is particularly useful on very large DataFrames. my_df.head() seriesFromDf = my_df['test1'] seriesFromDf Larry 0.496714 Moe -0.234153 Curly -0.469474 Shemp 0.241962 Name: test1, dtype: float64 type(seriesFromDf) pandas.core.series.Series The above code shows that indeed Pandas DataFrames are made up of a set of Pandas Series. We pulled out the test1 Series and assigned it to a variable, seriesFromDf. But more importantly, is it demonstrates how to subset data in a DataFrame. Earth OnDemand queries often contain a lot of columns or attributes such as spectral bands, ids, datetimes, coordinate reference systems, and much more. Most of the time we are only interested in a smaller selection of the data so it does not make sense for computational, memory, and simple visual reasons to carry extra information around in the complete DataFrames. The above code used bracket notation to select the column test1. If we wanted multiple columns we can just pass a list into the brackets. multiCols = my_df[['test2', 'test4']] multiCols You may want to create a new column in your DataFrame that represents some sort of combination of spectral bands, for instance, which is commonplace in EarthAI analyses. In this case, let's sum all of the tests into a new column: my_df['sum'] = my_df['test1'] + my_df['test2'] + my_df['test3'] + my_df['test4'] my_df Dropping a column is just as simple: my_df.drop('sum', axis = 1) my_df We used the drop method on our DataFrame specifying the column name and axis = 1. axis = 1 refers to columns. axis = 0 refers to rows. But when we take another look at my_df in the following cell the column sum is still there. So the drop was not permanent. To make it permanent you need to add inplace = True as shown below. Another way to perform the drop and make it permanent is to assign it to a new variable. my_df.drop('sum', axis=1, inplace=True) # now the column drop will be permanent my_df We can likewise drop rows: my_df.drop('Shemp', axis=0) We previously selected columns by simply passing the column name(s) in brackets. That does not work for rows. Instead, we need to use a different method as below. 
Let's look at Moe's scores. moe = my_df.iloc[1] moe test1 -0.234153 test2 -0.234137 test3 1.579213 test4 0.767435 Name: Moe, dtype: float64 moe_again = my_df.loc['Moe'] moe_again test1 -0.234153 test2 -0.234137 test3 1.579213 test4 0.767435 Name: Moe, dtype: float64 If we know the index position we can pass that to iloc, the index location method. Or more straightforward we can just pass the row name "Moe" to the loc method. my_df.loc['Moe', 'test1'] -0.23415337472333597 Or if we only were interested in Moe's test1 score we just pass in the ['row', 'column'] coordinates. You can also easily check the DataFrame for values that meet some specified condition. In our DataFrame here we have both positive and negative floats. Let's say we're interested in the positive values. my_df > 0 my_df[my_df > 0] The first code cell above replaces the values with booleans: True for where the condition is met and False otherwise. The second code cell returns the values in place that meet the condition and NaN (not a number) otherwise. We'll see how to handle those NaNs below. You can also perform a conditional selection on individual columns of the DataFrame: my_df[my_df['test1'] > 0] In this case, a DataFrame is returned excluding those rows where test1 failed to meet the condition. my_df Performing a conditional test on a DataFrame does not change the DataFrame itself. The results of the conditional tests can be assigned to new variables if they need to be retained. new_test = my_df[my_df > -0.2] new_test We've constructed a new DataFrame based on the condition that values are greater than -0.2 and see there are a handful of NaN values. A quick way to deal with NaN is to perform the dropna method: new_test.dropna() By default, this only returns rows in the DataFrame without any NaN values. We could do the same for columns by setting axis=1 in dropna. In both cases, a great deal of data is lost. A more common use case is to replace the NaN values with some other value: new_test.fillna(value=2) The above code cell fills any NaN value with the value 2 instead. Or you can fill with something like the mean of the new_test DataFrame. new_test.fillna(value = new_test.mean()) So there are ways of dealing with NaN values in DataFrames that do not compromise other valid data. Next, let's create some basketball data to demonstrate a few other nice properties of Pandas DataFrames. hoopsData = {'Team': ['Bulls', 'Bulls', 'Lakers', 'Lakers', 'Pacers', 'Pacers'], 'Player': ['Jordan', 'Pippen', 'Johnson', 'James', 'Miller', 'Oladipo'], 'Points': [50, 25, 26, 40, 33, 30]} hoopsDF = pd.DataFrame(hoopsData) hoopsDF In this case, we have multiple entries for each team that we can group together and perform some operation. The groupby method is useful when you want to aggregate values in some way by a group. Below we'll figure out the total points for each team. hoopsDF.groupby('Team').sum() hoopsDF.groupby('Team').describe() # perform some basic statistics for each team Pandas also allows you to apply functions that you have written to columns of the DataFrame with the apply method. First, let's write a little function that squares a number: def squared(x): return x ** 2 hoopsDF['Points'].apply(squared) 0 2500 1 625 2 676 3 1600 4 1089 5 900 Name: Points, dtype: int64 Above we've applied our squared function to the Points column of the DataFrame. Finally, we can read and write csv and Excel files easily. The cell below reads in a file called "data/sample.csv", and writes out a file called "new_file.csv". 
df = pd.read_csv('data/sample.csv') # read csv into Pandas

df.to_csv('new_file.csv', index=False) # write Pandas DataFrame to csv file

Similarly, for the Excel file "data/sample.xlsx", we write out the file "new_file.xlsx". To write to xlsx, we need to install the openpyxl library.

!pip install openpyxl

df = pd.read_excel('data/sample.xlsx', sheet_name='Sheet1') # you have to specify a sheet, and this will not read in Excel formulas, etc.

df.to_excel('new_file.xlsx', sheet_name='Sheet1')

These sample files can be downloaded from the attachments to this article below.

This is just scratching the surface of the wide range of functionality in Pandas.
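As one parting example of that wider functionality (my addition, not covered in the article above), SQL-style joins with pd.merge are an everyday operation:

teams = pd.DataFrame({'Team': ['Bulls', 'Lakers'], 'City': ['Chicago', 'Los Angeles']})
players = pd.DataFrame({'Player': ['Jordan', 'James'], 'Team': ['Bulls', 'Lakers']})

# Join the two DataFrames on the shared 'Team' column
pd.merge(players, teams, on='Team')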
https://docs.astraea.earth/hc/en-us/articles/360052337451-Pandas-Primer
CC-MAIN-2021-31
en
refinedweb
The code example below shows storing and retrieving a file handle. You can see this in action over on Glitch (I use the idb-keyval library for brevity).

import { get, set } from '';
const pre = document.querySelector('pre');
const button = document.querySelector('button');
button.addEventListener('click', async () => {
  try {
    // Try retrieving the file handle.
    const fileHandleOrUndefined = await get('file');
    if (fileHandleOrUndefined) {
      pre.textContent = `Retrieved file handle "${fileHandleOrUndefined.name}" from IndexedDB.`;
      return;
    }
    // This always returns an array, but we just need the first entry.
    const [fileHandle] = await window.showOpenFilePicker();
    // Store the file handle.
    await set('file', fileHandle);
    pre.textContent = `Stored file handle for "${fileHandle.name}" in IndexedDB.`;
  } catch (error) {
    alert(error.name, error.message);
  }
});
https://www.scien.cx/2019/08/20/the-file-system-access-api-simplifying-access-to-local-files/
CC-MAIN-2021-31
en
refinedweb
Cross-language transforms

With the samples on this page we will demonstrate how to create and leverage cross-language pipelines. The goal of a cross-language pipeline is to incorporate transforms from one SDK (e.g. the Python SDK) into a pipeline written using another SDK (e.g. the Java SDK). This puts already-developed transforms (e.g. ML transforms in Python), existing libraries (e.g. the vast library of IOs in Java), and the strengths of each language at your disposal, so you can author pipelines in whichever language you are most comfortable with while vastly expanding your toolkit.

In this section we will cover a specific use-case: incorporating a Python transform that does inference on a model but is part of a larger Java pipeline. The section is broken down into 2 parts:

- How to author the cross-language pipeline?
- How to run the cross-language pipeline?

How to author the cross-language pipeline?

This section digs into what changes when authoring a cross-language pipeline:

- "Classic" pipeline in Java
- External transform in Python
- Expansion server

"Classic" pipeline

We start by developing an Apache Beam pipeline like we would normally do if you were using only one SDK (e.g. the Java SDK):

public class CrossLanguageTransform extends PTransform<PCollection<String>, PCollection<String>> {
    private static final String URN = "beam:transforms:xlang:pythontransform";

    private String expansionAddress;

    public CrossLanguageTransform(String expansionAddress) {
        this.expansionAddress = expansionAddress;
    }

    @Override
    public PCollection<String> expand(PCollection<String> input) {
        PCollection<String> output =
            input.apply(
                "ExternalPythonTransform",
                External.of(URN, new byte[] {}, this.expansionAddress));
        return output;
    }
}

public class CrossLanguagePipeline {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        String expansionAddress = "localhost:9097";

        PCollection<String> inputs =
            p.apply(Create.of("features { feature { key: 'country' value { bytes_list { value: 'Belgium' }}}}"));

        inputs.apply(new CrossLanguageTransform(expansionAddress));

        p.run().waitUntilFinish();
    }
}

The main differences from authoring a classic pipeline and transform are:

- The PTransform uses the External transform.
- This has a Uniform Resource Name (URN) which will identify the transform in your expansion service (more below).
- The address on which the expansion service is running.

Check the documentation for a deeper understanding of using external transforms.

External transform

The transform we are trying to call from Java is defined in Python as follows:

URN = "beam:transforms:xlang:pythontransform"

@ptransform.PTransform.register_urn(URN, None)
class PythonTransform(ptransform.PTransform):
    def expand(self, pcoll):
        return (
            pcoll
            | "Parse input" >> beam.Map(
                lambda input: google.protobuf.text_format.Parse(
                    input, tf.train.Example()))
            | "Get predictions" >> RunInference(
                model_spec_pb2.InferenceSpecType(
                    saved_model_spec=model_spec_pb2.SavedModelSpec(
                        model_path=model_path,
                        signature_name=['serving_default']))))

    def to_runner_api_parameter(self, unused_context):
        return URN, None

    @staticmethod
    def from_runner_api_parameter(unused_ptransform, unused_parameter, unused_context):
        return PythonTransform()

Check the documentation for a deeper understanding of creating an external transform.

Expansion service

The expansion service is written in the same language as the external transform. It takes care of injecting the transforms in your pipeline before submitting them to the Runner.
def main(unused_argv):
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '-p', '--port', type=int, help='port on which to serve the job api')
  options = parser.parse_args()
  global server
  server = grpc.server(thread_pool_executor.shared_unbounded_instance())
  beam_expansion_api_pb2_grpc.add_ExpansionServiceServicer_to_server(
      expansion_service.ExpansionServiceServicer(
          PipelineOptions(
              ["--experiments", "beam_fn_api", "--sdk_location", "container"])),
      server)
  server.add_insecure_port('localhost:{}'.format(options.port))
  server.start()
  _LOGGER.info('Listening for expansion requests at %d', options.port)
  signal.signal(signal.SIGTERM, cleanup)
  signal.signal(signal.SIGINT, cleanup)
  signal.pause()


if __name__ == '__main__':
  logging.getLogger().setLevel(logging.INFO)
  main(sys.argv)

How to run the cross-language pipeline?

In this section, the steps to run a cross-language pipeline are set out:

Start the expansion service with your Python transforms:

python expansion_service.py -p 9097

Start the Job Server, which translates the pipeline into stages that will run on your back-end or runner (e.g. Spark):

- From Apache Beam source code: ./gradlew :runners:spark:job-server:runShadow
- Using the pre-built Docker container: docker run --net=host apache/beam_spark_job_server

Run the pipeline:

mvn exec:java -Dexec.mainClass=CrossLanguagePipeline \
    -Pportable-runner \
    -Dexec.args=" \
        --runner=PortableRunner \
        --jobEndpoint=localhost:8099 \
        --useExternal=true \
        --expansionServiceURL=localhost:9097 \
        --experiments=beam_fn_api"
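Before launching the pipeline, you can sanity-check that the expansion service is actually reachable. A small sketch of mine using the standard grpc package (the port matches the -p 9097 above):

import grpc

channel = grpc.insecure_channel("localhost:9097")
# Blocks until the channel connects, or raises after the timeout
grpc.channel_ready_future(channel).result(timeout=10)
print("Expansion service is reachable.")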
https://beam.apache.org/documentation/patterns/cross-language/
CC-MAIN-2021-31
en
refinedweb
01-26-2021 10:23 PM I'm a student doing a project for my studies, I'm trying to control a servo motor to sort items based on their colors. I've written a program so that if the toggle switch is on (the colored LEDs) and the desired color is detected the servo swings and stays until the color is no longer detected on the camera. I tried using select and switch case. both yield the same error. when I hover color over the camera, the color indicator(small green LEDs) stays lit however the servo motor flickers back and forth. I did testing on the part for the servo motor itself by giving a straight boolean signal from a toggle switch, I was able to fully control the servo motor. I suspect that the flickering of the servo motor would be due to an inconsistent signal from the color detection part of the program. Any help would be greatly appreciated! Thanks The program for my color detection are as such: Front panel: Solved! Go to Solution. 01-26-2021 10:32 PM Can you attach your VI? Can you do a block diagram cleanup before taking a picture of it? That 2nd image has more backwards running wires than forwards. It is hard to know what connects to what. You have 3 Express VI's marked red, green, blue for duty cycle. What do each of those do? (If we had a VI, we could open up those Express VI's and look at their settings!) 01-27-2021 12:10 AM - edited 01-27-2021 12:12 AM the program is in the color detection servo control vi. thank you! the 3 express vi are for the PWM frequency is set at 50Hz and the duty cycle is selected through the select function. 01-27-2021 07:50 AM Thank you for attaching your code. Unfortunately I see it is based on the MyRIO which I don't have as an installed module or drivers. So I can't see what is going on with those Express VI's. Can you answer the question I asked about them? What does each of them do? You have 3 of them labelled Red, Blue, Green taking either a .06 or .025 value. My understanding is they control a servo motor. Are they all controlling the same servo motor? If so, I think you'd have a conflict when one blue box gets one value, and the other two blue boxes get the other value. If they all go to the same output, the program is going to bounce that servo motor rapidly between the different values. 01-27-2021 09:40 AM It is to set the state of the servo motor extended out and not. and they control the same servo motor and ohhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh i think I get what you mean. since my other signals are giving 0 while one of it is giving it 1 that's why its flickering between on and off is that why? if so that's a very rookie mistake I'm sorry for wasitng your time XD. but is there any way for me to prioritize the 1 signals? Thanks. 01-27-2021 09:58 AM Yes. That is what I was thinking. Get yourself down to a single PWM Express VI. Use logic before that to determine whether you should send a .06 or .025. I don't know what is the signficance of red vs. green vs. blue in your case. Should you send a .06 if ANY of the those values are true? If so use an OR function on booleans. If there is an order that matters (right now they all put out .06 if true), you could nest case structure so if behaves like an If, elseif, else, endif if this was a text based language This site uses cookies to offer you a better browsing experience. Learn more about our privacy statement and cookie policy.
https://forums.ni.com/t5/LabVIEW/Servo-motor-signal-debouncing/td-p/4116272?profile.language=en
CC-MAIN-2021-31
en
refinedweb
On 03/01/2015 at 13:28, xxxxxxxx wrote:

Sorry for a dumb question, but how do I store plugins first launch date (Day/Month/Year) using WriteRegInfo():

c4d.plugins.WriteRegInfo(pluginid, buffer)

To be honest, I have no idea how to pass string to buffer

Thanks in advance.

On 05/01/2015 at 15:42, xxxxxxxx wrote:

Hi Tomas,

Here's an example of how to pass a byte string to a ByteSeq buffer. You'll have to change the plugin ID and get the properly formatted launch date.

import c4d

def main():
    # Save and load the date Day/Month/Year, so 10 bytes
    WriteDateBuffer = "05/01/2015"

    # Create a byte sequence to hold the string
    WriteDateByteSeq = c4d.storage.ByteSeq(None, 10)

    # Copy the string into the byte sequence
    WriteDateByteSeq[0:10] = WriteDateBuffer

    # Write the user specific data
    c4d.plugins.WriteRegInfo(1000001, WriteDateByteSeq)

    # Read the user specific data to make sure it was written correctly
    ReadDateByteSeq = c4d.plugins.ReadRegInfo(1000001, 10)

    print WriteDateBuffer
    print WriteDateByteSeq
    print ReadDateByteSeq

    return

Setting a byte sequence is explained in the __setitem__ section of the Python SDK documentation: Byte Sequence Python SDK documentation

Joey Gaspe
SDK Support Engineer

On 05/01/2015 at 20:21, xxxxxxxx wrote:

Is there a way to delete this kind of byte sequence data?
The reason I ask is because once someone creates this. It stays there permanently. Even if the plugin is deleted! Just like storing data in the WorldContainer.
I really don't appreciate people filling my memory with permanent garbage like this. Unless there is some mechanism in place to delete it if the plugin is deleted.

-ScottA

On 06/01/2015 at 03:08, xxxxxxxx wrote:

Hi Joey. Appreciate your help. Yes, read your code like 100 and now I get it:) Thank you.

ScottA, in order to delete that info, I think this will work:

emptyByteSequence = c4d.storage.ByteSeq(None, 1)
c4d.plugins.WriteRegInfo(PLUGIN_ID, emptyByteSequence)

ScottA, you have a good point here. However, is there a better way to store that data?

On 06/01/2015 at 07:32, xxxxxxxx wrote:

^I personally prefer to store any data like this in either tags or the document itself. That way it's never permanently stored on the user's system.
These are small bits of data. And if a couple of people store a few things on my system it's really no big deal. But if a lot of people start doing this. And/or someone decides to store a large array of data in there. Then that becomes a problem.

On 06/01/2015 at 08:06, xxxxxxxx wrote:

Glad to help!

I can confirm that your code to delete the info works. I retrieved the data both at the beginning and end of the script with your code and without, and get the expected results of either the data being removed or still being there, respectively.

As for ScottA's question about a better way to store data, I have a few suggestions:

- Store it in a file, perhaps C4D's HyperFile functionality could help? Either way, creating your own data file is probably the best option.
- If you're certain you or your clients only work on Windows, you can perhaps use the registry. I don't know Mac OS X that well, but apparently there are a few roughly equivalent options, such as Property Lists.
Note: Offering cross platform compatibility is the reason why there are various options within C4D that appear to replicate, even if only in a minimal way, other operating system data storage options. On 06/01/2015 at 09:45, xxxxxxxx wrote: Just for clarity. I'm not against people using the WorldContainer or the WriteRegInfo() as long as they do something to delete the data if the plugin gets removed. Which is difficult because the SDK does not have anything that does this. Providing a simple script with your plugin that the users can run is enough (although a bit clumsy). But it would be a nice option to have in the SDK to be able to scan all currently installed plugins. And delete any stored data that previous plugins might have created. Not sure if that's possible or not. On 06/01/2015 at 10:02, xxxxxxxx wrote: Originally posted by xxxxxxxx But it would be a nice option to have in the SDK to be able to scan all currently installed plugins. And delete any stored data that previous plugins might have created.Not sure if that's possible or not.-ScottA Originally posted by xxxxxxxx But it would be a nice option to have in the SDK to be able to scan all currently installed plugins. And delete any stored data that previous plugins might have created.Not sure if that's possible or not.-ScottA Not really viable because if a plugin is temporarily removed its stored info is deleted, then when the user wants to use the plug again all its data is gone. TBH I don't understand the concern about a few bytes or few kbytes in the days of terabyte-sized HDs. Steve On 06/01/2015 at 10:35, xxxxxxxx wrote: ^This thinking is why I need to format my HD roughly every six months Steve. It's the old "It's only 30cents a day pitch". By itself it's not a big deal. But the problem is if everyone does it. Then you no longer pay just 30cents a day. You start paying out big money every day. The amount of data being left on my computer after I delete a program is astounding! Programmers are doing it more and more these days. And now IMO it's an epidemic. So I've become very aware of this issue in my own coding practices. And I've pledged to never leave data behind in anything I write. No matter how small it is. On 06/01/2015 at 11:01, xxxxxxxx wrote: Now think how long it takes you to reformat your HD and reinstall all the data (and OS and software?) and how much that costs if your time is money. Then consider how you could save all that time (money) by buying a larger HD and forgetting about the few bytes a C4D plugin writes to the registry or wherever. On 06/01/2015 at 12:08, xxxxxxxx wrote: I second that Scott. If there's a way NOT to store something in memory, then don't. I raised this question simply because IT CAN BE DONE in Cinema, that's all. Not because I will go and store unnecessary info in c4d.plugins.WriteRegInfo(pluginid, buffer).
https://plugincafe.maxon.net/topic/8399/10975_pluginswritereginfo-solved
CC-MAIN-2021-31
en
refinedweb
Makes vectors normalized and orthogonal to each other. Normalizes normal. Normalizes tangent and makes sure it is orthogonal to normal (that is, angle between them is 90 degrees). See Also: Normalize function. Makes vectors normalized and orthogonal to each other. Normalizes normal. Normalizes tangent and makes sure it is orthogonal to normal. Normalizes binormal and makes sure it is orthogonal to both normal and tangent. Points in space are usually specified with coordinates in the standard XYZ axis system. However, you can interpret any three vectors as "axes" if they are normalized (ie, have a magnitude of 1) and are orthogonal (ie, perpendicular to each other). Creating your own coordinate axes is useful, say, if you want to scale a mesh in arbitrary directions rather than just along the XYZ axes - you can transform the vertices to your own coordinate system, scale them and then transform back. Often, a transformation like this will be carried out along only one axis while the other two are either left as they are or treated equally. For example, a stretching effect can be applied to a mesh by scaling up on one axis while scaling down proportionally on the other two. This means that once the first axis vector is specified, it doesn't greatly matter what the other two are as long as they are normalized and orthogonal. OrthoNormalize can be used to ensure the first vector is normal and then generate two normalized, orthogonal vectors for the other two axes. // Mesh "stretch" effect along a chosen axis. using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { // The axis and amount of scaling. public Vector3 stretchAxis; public float stretchFactor = 1.0F; // MeshFilter component and arrays for the original and transformed vertices. private MeshFilter mf; private Vector3[] origVerts; private Vector3[] newVerts; // Our new basis vectors. private Vector3 basisA; private Vector3 basisB; private Vector3 basisC; void Start() { // Get the Mesh Filter, then make a copy of the original vertices // and a new array to calculate the transformed vertices. mf = GetComponent<MeshFilter>(); origVerts = mf.mesh.vertices; newVerts = new Vector3[origVerts.Length]; } void Update() { // BasisA is just the specified axis for stretching - the // other two are just arbitrary axes generated by OrthoNormalize. basisA = stretchAxis; Vector3.OrthoNormalize(ref basisA, ref basisB, ref basisC); // Copy the three new basis vectors into the rows of a matrix // (since it is actually a 4x4 matrix, the bottom right corner // should also be set to 1). Matrix4x4 toNewSpace = new Matrix4x4(); toNewSpace.SetRow(0, basisA); toNewSpace.SetRow(1, basisB); toNewSpace.SetRow(2, basisC); toNewSpace[3, 3] = 1.0F; // The scale values are just the diagonal entries of the scale // matrix. The vertices should be stretched along the first axis // and squashed proportionally along the other two. Matrix4x4 scale = new Matrix4x4(); scale[0, 0] = stretchFactor; scale[1, 1] = 1.0F / stretchFactor; scale[2, 2] = 1.0F / stretchFactor; scale[3, 3] = 1.0F; // The inverse of the first matrix transforms the vertices back to // the original XYZ coordinate space(transpose is the same as inverse // for an orthogonal matrix, which this is). Matrix4x4 fromNewSpace = toNewSpace.transpose; // The three matrices can now be combined into a single symmetric matrix. Matrix4x4 trans = toNewSpace * scale * fromNewSpace; // Transform each of the mesh's vertices by the symmetric matrix. 
int i = 0; while (i < origVerts.Length) { newVerts[i] = trans.MultiplyPoint3x4(origVerts[i]); i++; } // ...and finally, update the mesh with the new vertex array. mf.mesh.vertices = newVerts; } }
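For a bare-bones illustration of the call itself, separate from the mesh example above (the input vectors are arbitrary):

using UnityEngine;

public class OrthoNormalizeDemo : MonoBehaviour
{
    void Start()
    {
        Vector3 normal = new Vector3(1, 2, 3);
        Vector3 tangent = Vector3.up;

        // normal is normalized; tangent is made unit-length and perpendicular to normal
        Vector3.OrthoNormalize(ref normal, ref tangent);

        Debug.Log(Vector3.Dot(normal, tangent)); // approximately 0
    }
}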
https://docs.unity3d.com/kr/2018.2/ScriptReference/Vector3.OrthoNormalize.html
CC-MAIN-2021-31
en
refinedweb
Question: Just found a bit of code someone here had written to access some DB Entities...

public static OurCustomObject GetOurCustomObject(int primaryKey) {
    return GetOurCustomObject<int>(primaryKey, "usp_GetOurCustomObjectByID");
}

public static OurCustomObject GetOurCustomObject(Guid uniqueIdent) {
    return GetOurCustomObject<Guid>(uniqueIdent, "usp_GetOurCustomObjectByGUID");
}

private static OurCustomObject GetOurCustomObject<T>(T identifier, string sproc) {
    if ((typeof(T) != typeof(int)) && (typeof(T) != typeof(Guid))) {
        throw new ArgumentException("Identifier must be a string or an int");
    }
    //ADO.NET Code to make DB Call with supplied sproc.
}

There's just something about it that doesn't seem very generic. The fact that the sprocs are passed into the inner method feels ugly. But the only way I can see around that is to have an if/else in the private method along the lines of

if(type == int)
    sproc = "GetByID";
else if (type == Guid)
    sproc = "GetByGUID";

Also the exception throwing looks ugly as well... is there anyway to use a where T : clause e.g.

private static OurCustomObject<T>(T identifier) where T : int OR Guid

Any suggestions on how to clean this up a little.

Solution:1

You can't specify a constraint which says "it's one of these two", no. What you could do is:

Dictionary<Type, string> StoredProcedureByType = new Dictionary<Type, string>
{
    { typeof(Guid), "GetByGUID" },
    { typeof(int), "GetByID" }
};

Then use:

string sproc;
if (!StoredProcedureByType.TryGetValue(typeof(T), out sproc))
{
    throw new ArgumentException("Invalid type: " + typeof(T).Name);
}

This is probably overkill for just a couple of types, but it scales well if you have a lot of types involved. Given that both types are value types, you can make it a bit more robust with a constraint of:

where T : struct

but that will still allow byte etc.

Solution:2

The code you provided looks reasonably fine to me, because private static OurCustomObject GetOurCustomObject<T>(T identifier, string sproc) is private. I would even drop the exception checking from this method, because again - it's private, so this class controls what gets passed to the method. The if statement would be rather horrible and over-engineered.

Solution:3

The neatest thing to do is probably doing something like this:

public interface IPrimaryKey { }

public class PrimaryGuidKey(Guid key) : IPrimaryKey

public class PrimaryIntegerKey(int key) : IPrimaryKey

private static OurCustomObject GetOurCustomObject<T>(T identifier) where T : IPrimaryKey

Solution:4

There is no support for such a where clause to limit the type parameter in that way (unfortunately). In this case, you might find that passing identifier as an object, and not using generics, is actually cleaner (when I needed something similar, that is what I ended up doing: it seemed to be the least worst approach). This is an area where C# is weak, being neither a dynamic language nor able to specialise the template (C++ like, only providing implementations for T = int and T = Guid).

Addendum: In this case I would likely stick with the overload, but change the type check to an Assert as that is a private helper method.
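Putting Solution 1 together with the struct constraint it suggests (my own consolidation, not from the thread) gives something like:

using System;
using System.Collections.Generic;

static class OurCustomObjectStore
{
    private static readonly Dictionary<Type, string> StoredProcedureByType =
        new Dictionary<Type, string>
        {
            { typeof(Guid), "usp_GetOurCustomObjectByGUID" },
            { typeof(int), "usp_GetOurCustomObjectByID" },
        };

    public static string ResolveSproc<T>(T identifier) where T : struct
    {
        string sproc;
        if (!StoredProcedureByType.TryGetValue(typeof(T), out sproc))
        {
            throw new ArgumentException("Identifier must be an int or a Guid");
        }
        return sproc; // hand this to the ADO.NET call
    }
}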
http://www.toontricks.com/2018/06/tutorial-using-where-t-something.html
CC-MAIN-2018-43
en
refinedweb
Happy Friday! Today I’d like to shed some light on another brand-new functionality upcoming for PyCharm 4 – Behavior-Driven Development (BDD) Support. You can already check it out in the PyCharm 4 Public Preview builds available on the EAP page. Note: The BDD support is available only in the PyCharm Professional Edition, not in the Community Edition. BDD is a very popular and really effective software development approach nowadays. I’m not going to cover the ideas and principles behind it in this blog post, however I would like to encourage everyone to try it, since it really drives your development in more stable and accountable way. Sure, BDD works mostly for companies that require some collaboration between non-programmers management and development teams. However the same approach can be used in smaller teams that want to benefit from the advanced test-driven development concept. In the Python world there are two most popular tools for behavior-driven development – Behave and Lettuce. PyCharm 4 supports both of them, recognizing feature files and providing syntax highlighting, auto-completion, as well as navigation from specific feature statements to their definitions. On-the-fly error highlighting, automatic quick fixes and other helpful PyCharm features are also available and can be used in a unified fashion. Let me show you how it works in 10 simple steps: 1. To start with BDD development and in order to get the full support from PyCharm, you first need to define a preferred tool for BDD (Behave or Lettuce) in your project settings: 2. You can create your own feature files within your project – just press Alt+Insert while in the project view window or in the editor and select “Gherkin feature file”. It will create a feature file where you can define your own features, scenarios, etc. PyCharm recognizes feature files format and provides syntax highlighting accordingly: 3. Since there is no step definitions at the moment, PyCharm highlights these steps in a feature file accordingly. Press Alt+Enter to get a quick fix on a step: 4. Follow the dialog and create your step definitions: 5. You can install behave or lettuce right from the editor. Just press Alt+Enter on unresolved reference to get the quick-fix suggestion to install the BDD tool: 6. Look how intelligently PyCharm keeps your code in a consistent state when working on step definitions. Use Alt+Enter to get a quick-fix action: 7. In feature files, with Ctrl+Click you can navigate from a Scenario description to the actual step definition: Note: Step definitions may contain wildcards as shown in the step #6 – matched steps are highlighted with blue in feature files. 8. PyCharm also gives you a handy assistance on automatic run configurations for BDD projects. In the feature file, right-click and choose the “create” option, to create an automatic run configuration for behave/lettuce projects: 9. In the run configurations you can specify the scenarios to run, parameters to pass and many other options: 10. Now you’re all set to run your project with a newly created run configuration. Press Shift+F10 and inspect the results: That was simple, wasn’t it? Hope you’ll enjoy the BDD support in PyCharm and give this approach a try in your projects! See you next week! -Dmitry Very nice feature and smart integration. Will there be Gherkin keyword support for other languages (configurable)? Well we have no such plans currently. Let me know for what languages/frameworks do you need this support? Greate Feature! I love it. 
Currently I’m working with guys from another Business Unit. All of them are German native speakers. The support of German Gherkin keywords would be very very helpful. Thanks. I will be pleased to have a French Gherkins vesion This looks great. PyCharm 3.* has changed my world, can’t wait to see what 4 has to offer. The BDD (+1 lettuce) navigation and quick fixes are great. This is a nice feature. Would it be possible to have support for behave’s default parse mode for step parameters instead of using re? We had no such plans, however this sounds as a good feature request. Could you please create a ticket here ? OK thank you, I’ve now added a ticket. Normally I run my tests like this: python run_behave.py testcases/website.feature --browser_name=firefox --target_env= How do I convert this to a Behave Run Configuration in PyCharm? Hi, What is “run_behave.py”? Is it your custom file? What does it do in this case? You can pass any arguments to behave but behave does not have “browser_name” nor “target_env” arguments. If you need to pass some data to your step definitions you may use environment variables (they may be passed to any python configuration in PyCharm including behave). Look: My run_behave does the following: from behave import configuration from behave import __main__ # Adding my wanted option to parser in behave. configuration.parser.add_argument('-b', '--browser_name', help='Browser to use') configuration.parser.add_argument('-vb', '--browser_version', help='Browser version') configuration.parser.add_argument('-os', '--operating_system', help='OS where the browser is running') ... __main__.main() I guess it shouldn’t have been done this way. I will try to rewrite this to environment variables. Hello, PyCharm uses Behave API to run it, so you should not run it directly. I believe env. variable is the best way to pass something to step definitions. Could you make support for different languages? We use Russian in “feature” files but they’re shown as plain text. Hello Zoya, You may use Russian in feature files now, but keywords should be in English. If you need to use Russian keywords, you may create Feature Request: , we will try to implement it in future versions. Thank you, Ilya Unfortunately I have no permission to create a new task why Scenario outlines are not detected by the lettuce runner. I get “Empty test suite.” error message Please file a bug to Hi, I’ve been using Behave BDD framework with PyCharm and it’s great!! One quick question, it seems like the test run terminates if one of the scenarios fails. Is there a way force the feature to run completely even some scenario fails. Hello. *Feature* does not stop when *scenario* fails, but *scenario* fails when one of its *steps* fails. That is how Behave works. Look: If PyCharm behaves differently in your case, please submit a bug. Thank you. Pingback: JetBrains PyCharm Professional 4.5.2 Build 141.1580 They are planned for the next major release. Hello, I’m a tester and test automation programmer for a small python shop. The dev team I work with is heavily invested in Pytest, and as a result, insist on using the Pytest-BDD plugin for Gherkin, rather than Behave. Any chance you folks will ever incorporate support for Pytest-BDD into Pycharm? It’s not a big deal for the dev team (who all use emacs). However, it would be nice if I could have a lot of these IDE conveniences. Thanks, Greg. Hi Greg, Thanks for the request. Could you please create a feature request here ? It will be easier to track and manage it. 
Does it format parameter tables and keep them aligned properly? Hi Terrence, If you’re speaking about the “examples” section for scenario outlines, then the answer is yes. When you call “reformat” (which is Ctrl+Alt+L on Windows, for example) in a Gherkin (.feature) file, it reformats tables, so they look pretty. Hi, I’m having some trouble setting up my environment using lettuce and django. I can run lettuce inside a virtualenv with ‘python manage.py harvest’, but when I try to use a lettuce configuration, I get this error: ValueError: Unable to configure handler ‘mail_admins’: Cannot resolve ‘assettools.common.backend.log.FormattedSubjectAdminEmailHandler’: cannot import name QuerySet when importing (…)\blockOperations-steps.py Here is my django configuration: And my lettuce configuration: Can you help me? Best regards, Veronica Hi. Could you give me access please? I am not sure you can run a lettuce configuration with Django. But you may run the manage.py console from PyCharm and run harvest from there. Hi, sorry about the restricted access, it should work now. I can run harvest from the manage.py console successfully, but then the results are presented in a very inconvenient way (plain text), and it gets hard to track the results with everything in the console. What would be the proper way to configure it so I can use it as a Lettuce configuration? Should I have a separate project just for the tests? Thanks! Unfortunately PyCharm does not have harvest support for now. Please create a feature request: You can use the Django manage.py console in PyCharm to run tests for now. Is this feature available in the community version also? I miss these features in the community version, which I enjoyed in the paid version of RubyMine. Very helpful features. Would love to see it working … in the free or paid version. BDD support is available only in PyCharm Professional Edition. I was crafting a tutorial on BDD in PyCharm and I noticed several things I didn’t like: 1) “Create all steps definition” doesn’t work well. It creates definitions for “some” expressions and I can’t figure out how it chooses them. 2) Automatically created step_impl functions have “pass” in them. That means auto-generated tests will pass by default. That’s not nice. Tests must fail! 3) It adds “use_step_matcher(‘re’)” by default. 99% of the time you won’t need that. I mean, if you have a problem you want to solve with regular expressions, then you have two problems. But overall, nice support and great work! When you start to have a few feature files it seems sensible to produce a directory structure to manage them. This is reasonably easy to establish, and a right-click on a directory name allows you to run all the tests in that directory. However, if the directory structure is deeper than that, you cannot run all the contained feature files. This would be useful and would allow a granular approach to test running. Chris, unfortunately it is not supported now. Please create a feature request. I have downloaded PyCharm Community and installed behave 1.2.5 through pip. In the project interpreter I can see the installed packages. When I try to create a new feature file, I don’t see the “new Gherkin file” option in the context menu itself. Now how can I create a feature file in my project? Do I need to add any other plugin? BDD support is a Professional feature; you can get a free 30-day trial of PyCharm Professional Edition from our website: Let us know if you have any questions! Thank you.
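For context, the “examples” tables discussed at the top of this thread look like this once reformatted (an illustrative scenario outline, not taken from the discussion):

    Feature: Addition
      Scenario Outline: Add two numbers
        Given I have entered <a> and <b>
        When I add them
        Then the result is <sum>

        Examples:
          | a | b | sum |
          | 1 | 2 | 3   |
          | 4 | 5 | 9   |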
So it is not possible in PyCharm Community, right? You can always use the command line and do behavior-driven testing manually. The PyCharm integration is a feature that’s only available in the Professional Edition, though. How can I run multiple feature files in PyCharm Professional Edition? I have used behave “one.feature”, “two.feature” but it fails. Do we need to set up any configuration file or anything else? Actually, I removed the comma separator between the two feature files and it started to work. When I have more feature files, how can I run them another way, apart from the command line/terminal? Any configuration file, .bat file or runner file? Hi, I am using PyCharm with behave. Is there any way that I can navigate from a scenario description in an execute_steps() block to the actual step definition? e.g.:

    @Given('i am on the home page of the website')
    def step_on_home_page(context, publisher):
        context.execute_steps(u'''
            Given I am on the login page
            When I login
            Then I am redirected to Home Page
        ''')
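For readers following the tutorial above, here is a minimal sketch of the kind of feature file and step definitions that steps 2 to 4 produce (file names and step text are illustrative):

    # features/login.feature
    Feature: Login
      Scenario: Successful login
        Given I am on the login page
        When I login
        Then I am redirected to Home Page

    # features/steps/login_steps.py
    from behave import given, when, then

    @given('I am on the login page')
    def step_open_login(context):
        context.page = 'login'  # stand-in for real browser automation

    @when('I login')
    def step_login(context):
        context.page = 'home'

    @then('I am redirected to Home Page')
    def step_home(context):
        assert context.page == 'home'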
https://blog.jetbrains.com/pycharm/2014/09/feature-spotlight-behavior-driven-development-in-pycharm/
CC-MAIN-2018-43
en
refinedweb
I've got this working to print the description of a feature service, but how do I get a search cursor on a feature service out of AGOL?! I'm new at this arcrest bit!

    import arcrest

    un = r'username'
    pw = r'password'
    sh = arcrest.AGOLTokenSecurityHandler(org_url='myOrgURL', username=un, password=pw)
    admin = arcrest.manageorg.Administration(securityHandler=sh)
    content = admin.content
    currentUser = content.users.user()
    fsurl = r'theFeatureServiceURL'
    fs = arcrest.agol.FeatureService(fsurl, sh)
    print fs.description

You would use the REST API query. Here's some sample code which you will need to modify. Some of the places you need to change are lines: 12, 13, 16, 32, 53, 54, 61. There are also some general notes and debugging code, which you can cut out.
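The sample code referenced in the reply above isn't reproduced here. As a rough sketch of the same idea, querying a layer of the feature service, which is the closest ArcREST equivalent of a search cursor, something like the following should work; the layer index, the query() keyword arguments, and the asDictionary property are assumptions based on typical ArcREST usage, so verify them against your installed version:

    import arcrest

    un = r'username'
    pw = r'password'
    sh = arcrest.AGOLTokenSecurityHandler(org_url='myOrgURL', username=un, password=pw)

    # Connect to the feature service and grab its first sublayer.
    # NOTE: layers[0], the query() signature and asDictionary are assumptions.
    fsurl = r'theFeatureServiceURL'
    fs = arcrest.agol.FeatureService(fsurl, sh)
    layer = fs.layers[0]

    # query() wraps the REST API "query" operation and returns a FeatureSet;
    # iterating over its features plays the role of a search cursor.
    fset = layer.query(where="1=1", out_fields="*", returnGeometry=False)
    for feature in fset.features:
        print feature.asDictionary  # Python 2 print, matching the snippet above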
https://community.esri.com/thread/182561-arcrest-search-cursor-against-agol-feature-service
CC-MAIN-2018-43
en
refinedweb
Changes for version 5.24.0 - =head1 Core Enhancements - =head2 Postfix dereferencing is no longer experimental - Using the C<postderef> and C<postderef_qq> features no longer emits a warning. Existing code that disables the C<experimental::postderef> warning category will continue to work. The C<postderef> feature has no effect; all Perl code can use postfix dereferencing, regardless of what feature declarations are in scope. The C<5.24> feature bundle now includes the C<postderef_qq> feature. - =head2 Unicode 8.0 is now supported - For details on what is in this release, see L<>. - =head2 New C<\b{lb}> boundary in regular expressions - C<lb> stands for Line Break. It is a Unicode property that determines where a line of text is suitable to break (typically so that it can be output without overflowing the available horizontal space). This capability has long been furnished by the L<Unicode::LineBreak> module, but now a light-weight, non-customizable version that is suitable for many purposes is in core Perl. - =head2 C<qr/(?[ ])/> now works in UTF-8 locales - L<Extended Bracketed Character Classes|perlrecharclass/Extended Bracketed Character Classes> now will successfully compile when S<C<use locale>> is in effect. - =head2 Integer shift (C<< << >> and C<< >> >>) now more explicitly defined - Negative shifts are reverse shifts: left shift becomes right shift, and right shift becomes left shift. - Shifting by the number of bits in a native integer (or more) is zero, except when the "overshift" is right shifting a negative value under C<use integer>. If you need different behaviour, you can use the C<bigint> pragma, or the C<Bit::Vector> module from CPAN. - =head2 printf and sprintf now allow reordered precision arguments - That is, C<< sprintf '|%.*2$d|', 2, 3 >> now returns C<|002|>. This extends the existing reordering mechanism (which allows reordering for arguments that are used as format fields, widths, and vector separators). - =head2 More fields provided to C<sigaction> callback with C<SA_SIGINFO> - When passing the C<SA_SIGINFO> flag to L<sigaction|POSIX/sigaction>, the C<errno>, C<status>, C<uid>, C<pid>, C<addr> and C<band> fields are now included in the hash passed to the handler, if supported by the platform. - =head2 Hashbang redirection to Perl 6 - Previously perl would redirect to another interpreter if it found a hashbang path, unless the path contains "perl" (see L<perlrun>). To improve compatibility with Perl 6, this behavior has been extended to also redirect if "perl" is followed by "6". - =head1 Security - =head2 Set proper umask before calling C<mkstemp(3)> - In 5.22 perl started setting umask to 0600 before calling C<mkstemp(3)> and restoring it afterwards. This wrongfully tells C<open(2)> to strip the owner read and write bits from the given mode before applying it, rather than the intended negation of leaving only those bits in place. - Systems that use mode 0666 in C<mkstemp(3)> (like old versions of glibc) create a file with permissions 0066, leaving world read and write permissions regardless of current umask. - This has been fixed by using umask 0177 instead. [perl #127322] - =head2 Fix out of boundary access in Win32 path handling - This is CVE-2015-8608. For more information see L<[perl #126755]|> - =head2 Fix loss of taint in canonpath - This is CVE-2015-8607. For more information see L<[perl #126862]|> - =head2 Avoid accessing uninitialized memory in win32 C<crypt()> - Added validation that will detect both a short salt and invalid characters in the salt.
L<[perl #126922]|> - =head2 Remove duplicate environment variables from C<environ> - Previously, if an environment variable appeared more than once in C<environ[]>, C<%ENV> would contain the last entry for that name, while a typical C<getenv()> would return the first entry. We now make sure C<%ENV> contains the same as what C<getenv> returns. - Second, we remove duplicates from C<environ[]>, so if a setting with that name is set in C<%ENV>, we won't pass an unsafe value to a child process. - CVE-2016-2381 - =head1 Incompatible Changes - =head2 The C<autoderef> feature has been removed - The experimental C<autoderef> feature (which allowed calling C<push>, C<pop>, C<shift>, C<unshift>, C<splice>, C<keys>, C<values>, and C<each> on a scalar argument) has been deemed unsuccessful. It has now been removed; trying to use the feature (or to disable the C<experimental::autoderef> warning it previously triggered) now yields an exception. - =head2 Lexical $_ has been removed - C. - =head2 C<qr/\b{wb}/> is now tailored to Perl expectations - This is now more suited to be a drop-in replacement for plain C<\b>, but giving better results for parsing natural language. Previously it strictly followed the current Unicode rules which calls for it to match between each white space character. Now it doesn't generally match within spans of white space, behaving like C<\b> does. See L<perlrebackslash/\b{wb}> - =head2 Regular expression compilation errors - Some regular expression patterns that had runtime errors now don't compile at all. - Almost all Unicode properties using the C<\p{}> and. - =head2 C<qr/\N{}/> now disallowed under C<use re "strict"> - An empty C<\N{}> makes no sense, but for backwards compatibility is accepted as doing nothing, though a deprecation warning is raised by default. But now this is a fatal error under the experimental feature L<re/'strict' mode>. - =head2 Nested declarations are now disallowed - A C<my>, C<our>, or C<state> declaration is no longer allowed inside of another C<my>, C<our>, or C<state> declaration. - For example, these are now fatal: - my ($x, my($y)); our (my $x); - L<[perl #125587]|> - L<[perl #121058]|> - =head2 The C<utf8::encode()> on the string (or a copy) first. - =head2 C<chdir('')> no longer chdirs home - Using C<chdir('')> or C<chdir(undef)> to chdir home has been deprecated since perl v5.8, and will now fail. Use C<chdir()> instead. - =head2 ASCII characters in variable names must now be all visible - It was legal until now on ASCII platforms for variable names to contain non-graphical ASCII control characters (ordinals 0 through 31, and 127, which are the C0 controls and C<$^]> and C<${^GLOBAL_PHASE}>. Details are at L<perlvar>. It remains legal, though unwise and deprecated (raising a deprecation warning), to use certain non-graphic non-ASCII characters in variables names when not under S<C<use utf8>>. No code should do this, as all such variables are reserved by Perl, and Perl doesn't currently define any of them (but could at any time, without notice). - =head2 An off by one issue in C<$Carp::MaxArgNums> has been fixed - C<$Carp::MaxArgNums> is supposed to be the number of arguments to display. Prior to this version, it was instead showing C<$Carp::MaxArgNums> + 1 arguments, contrary to the documentation. - =head2 Only blanks and tabs are now allowed within C<[...]> within C<(?[...])>. - C<\t> and SPACE characters. Previously, it was any white space. See L<perlrecharclass/Extended Bracketed Character Classes>. 
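- The following snippet (an illustrative sketch, not part of the original release notes) shows two of the incompatible changes above in action: nested declarations are now fatal, and only blanks and tabs may separate tokens inside C<(?[ ])>:

    use experimental 'regex_sets';    # (?[ ]) is still experimental in v5.24

    # my ($x, my($y));                # now a fatal compile-time error

    # Only SPACE and TAB may be used as spacing inside (?[ ]):
    my $greek_letter = qr/(?[ \p{Greek} & \p{Alphabetic} ])/;
    print "matched\n" if "\x{3BB}" =~ $greek_letter;  # U+03BB GREEK SMALL LETTER LAMDA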
- =head1 Deprecations - =head2 Using code points above the platform's C<IV_MAX> is now deprecated - Unicode defines code points in the range C<0..0x10FFFF>. Some standards at one time defined them up to 2**31 - 1, but Perl has allowed them to be as high as anything that will fit in a word on the platform being used. However, use of those above the platform's C<IV_MAX> is broken in some constructs, notably C<tr///>, regular expression patterns involving quantifiers, and in some arithmetic and comparison operations, such as being the upper limit of a loop. Now the use of such code points raises a deprecation warning, unless that warning category is turned off. C<IV_MAX> is typically 2**31 -1 on 32-bit platforms, and 2**63-1 on 64-bit ones. - =head2 C<split> and C<map ord>. In the future, this warning will be replaced by an exception. - =head2 C<sysread()>, C<syswrite()>, C<recv()> and C<send()> are deprecated on :utf8 handles - The C<sysread()>, C<recv()>, C<syswrite()> and C<send()> operators are deprecated on handles that have the C<:utf8> layer, either explicitly, or implicitly, eg., with the C<:encoding(UTF-16LE)> layer. - Both C<sysread()> and C<recv()> currently use only the C<:utf8> flag for the stream, ignoring the actual layers. Since C<sysread()> and C<recv()> do no UTF-8 validation they can end up creating invalidly encoded scalars. - Similarly, C<syswrite()> and C<send()> use only the C<:utf8> flag, otherwise ignoring any layers. If the flag is set, both write the value UTF-8 encoded, even if the layer is some different encoding, such as the example above. - Ideally, all of these operators would completely ignore the C<:utf8> state, working only with bytes, but this would result in silently breaking existing code. To avoid this a future version of perl will throw an exception when any of C<sysread()>, C<recv()>, C<syswrite()> or C<send()> are called on handle with the C<:utf8> layer. - =head1 Performance Enhancements - =over 4 - =item * - The overhead of scope entry and exit has been considerably reduced, so for example subroutine calls, loops and basic blocks are all faster now. This empty function call now takes about a third less time to execute: - sub f{} f(); - =item * - Many languages, such as Chinese, are caseless. Perl now knows about most common ones, and skips much of the work when a program tries to change case in them (like C<ucfirst()>) or match caselessly (C<qr//i>). This will speed up a program, such as a web server, that can operate on multiple languages, while it is operating on a caseless one. - =item * - C</fixed-substr/> has been made much faster. - On platforms with a libc C C<memchr()>, e.g. 32-bit ARM Raspberry Pi, there will be a small or little speedup. Conversely, some pathological cases, such as C<"ab" x 1000 =~ /aa/> will be slower now; up to 3 times slower on the rPi, 1.5x slower on x86_64. - =item * -. - =item * - Preincrement, predecrement, postincrement, and postdecrement have been made faster by internally splitting the functions which handled multiple cases into different functions. - =item * - Creating Perl debugger data structures (see L<perldebguts/"Debugger Internals">) for XSUBs and const subs has been removed. This removed one glob/scalar combo for each unique C<.c> file that XSUBs and const subs came from. On startup (C - =item * - On Win32, C<stat>ing or C<-X>ing a path, if the file or directory does not exist, is now 3.5x faster than before. - =item * - Single arguments in list assign are now slightly faster: - ($x) = (...); (...) 
= ($x); - =item * - Less peak memory is now used when compiling regular expression patterns. - =back - =head1 Modules and Pragmata - =head2 Updated Modules and Pragmata - =over - =item * - L<arybase> has been upgraded from version 0.10 to 0.11. - =item * - L<Attribute::Handlers> has been upgraded from version 0.97 to 0.99. - =item * - L<autodie> has been upgraded from version 2.26 to 2.29. - =item * - L<autouse> has been upgraded from version 1.08 to 1.11. - =item * - L<B> has been upgraded from version 1.58 to 1.62. - =item * - L<B::Deparse> has been upgraded from version 1.35 to 1.37. - =item * - L<base> has been upgraded from version 2.22 to 2.23. - =item * - L<Benchmark> has been upgraded from version 1.2 to 1.22. - =item * - L<bignum> has been upgraded from version 0.39 to 0.42. - =item * - L<bytes> has been upgraded from version 1.04 to 1.05. - =item * - L<Carp> has been upgraded from version 1.36 to 1.40. - =item * - L<Compress::Raw::Bzip2> has been upgraded from version 2.068 to 2.069. - =item * - L<Compress::Raw::Zlib> has been upgraded from version 2.068 to 2.069. - =item * - L<Config::Perl::V> has been upgraded from version 0.24 to 0.25. - =item * - L<CPAN::Meta> has been upgraded from version 2.150001 to 2.150005. - =item * - L<CPAN::Meta::Requirements> has been upgraded from version 2.132 to 2.140. - =item * - L<CPAN::Meta::YAML> has been upgraded from version 0.012 to 0.018. - =item * - L<Data::Dumper> has been upgraded from version 2.158 to 2.160. - =item * - L<Devel::Peek> has been upgraded from version 1.22 to 1.23. - =item * - L<Devel::PPPort> has been upgraded from version 3.31 to 3.32. - =item * - L<Dumpvalue> has been upgraded from version 1.17 to 1.18. - =item * - L<DynaLoader> has been upgraded from version 1.32 to 1.38. - =item * - L<Encode> has been upgraded from version 2.72 to 2.80. - =item * - L<encoding> has been upgraded from version 2.14 to 2.17. - =item * - L<encoding::warnings> has been upgraded from version 0.11 to 0.12. - =item * - L<English> has been upgraded from version 1.09 to 1.10. - =item * - L<Errno> has been upgraded from version 1.23 to 1.25. - =item * - L<experimental> has been upgraded from version 0.013 to 0.016. - =item * - L<ExtUtils::CBuilder> has been upgraded from version 0.280221 to 0.280225. - =item * - L<ExtUtils::Embed> has been upgraded from version 1.32 to 1.33. - =item * - L<ExtUtils::MakeMaker> has been upgraded from version 7.04_01 to 7.10_01. - =item * - L<ExtUtils::ParseXS> has been upgraded from version 3.28 to 3.31. - =item * - L<ExtUtils::Typemaps> has been upgraded from version 3.28 to 3.31. - =item * - L<feature> has been upgraded from version 1.40 to 1.42. - =item * - L<fields> has been upgraded from version 2.17 to 2.23. - =item * - L<File::Copy> has been upgraded from version 2.30 to 2.31. - =item * - L<File::Find> has been upgraded from version 1.29 to 1.34. - =item * - L<File::Glob> has been upgraded from version 1.24 to 1.26. - =item * - L<File::Path> has been upgraded from version 2.09 to 2.12_01. - =item * - L<File::Spec> has been upgraded from version 3.56 to 3.63. - =item * - L<Filter::Util::Call> has been upgraded from version 1.54 to 1.55. - =item * - L<Getopt::Long> has been upgraded from version 2.45 to 2.48. - =item * - L<Hash::Util> has been upgraded from version 0.18 to 0.19. - =item * - L<Hash::Util::FieldHash> has been upgraded from version 1.15 to 1.19. - =item * - L<HTTP::Tiny> has been upgraded from version 0.054 to 0.056. - =item * - L<I18N::Langinfo> has been upgraded from version 0.12 to 0.13. 
- =item * - L<if> has been upgraded from version 0.0604 to 0.0606. - =item * - L<IO> has been upgraded from version 1.35 to 1.36. - =item * - IO-Compress has been upgraded from version 2.068 to 2.069. - =item * - L<IPC::Open3> has been upgraded from version 1.18 to 1.20. - =item * - L<IPC::SysV> has been upgraded from version 2.04 to 2.06_01. - =item * - L<List::Util> has been upgraded from version 1.41 to 1.42_02. - =item * - L<locale> has been upgraded from version 1.06 to 1.08. - =item * - L<Locale::Codes> has been upgraded from version 3.34 to 3.37. - =item * - L<Math::BigInt> has been upgraded from version 1.9997 to 1.999715. - =item * - L<Math::BigInt::FastCalc> has been upgraded from version 0.31 to 0.40. - =item * - L<Math::BigRat> has been upgraded from version 0.2608 to 0.260802. - =item * - L<Module::CoreList> has been upgraded from version 5.20150520 to 5.20160506. - =item * - L<Module::Metadata> has been upgraded from version 1.000026 to 1.000031. - =item * - L<mro> has been upgraded from version 1.17 to 1.18. - =item * - L<ODBM_File> has been upgraded from version 1.12 to 1.14. - =item * - L<Opcode> has been upgraded from version 1.32 to 1.34. - =item * - L<parent> has been upgraded from version 0.232 to 0.234. - =item * - L<Parse::CPAN::Meta> has been upgraded from version 1.4414 to 1.4417. - =item * - L<Perl::OSType> has been upgraded from version 1.008 to 1.009. - =item * - L<perlfaq> has been upgraded from version 5.021009 to 5.021010. - =item * - L<PerlIO::encoding> has been upgraded from version 0.21 to 0.24. - =item * - L<PerlIO::mmap> has been upgraded from version 0.014 to 0.016. - =item * - L<PerlIO::scalar> has been upgraded from version 0.22 to 0.24. - =item * - L<PerlIO::via> has been upgraded from version 0.15 to 0.16. - =item * - podlators has been upgraded from version 2.28 to 4.07. - =item * - L<Pod::Functions> has been upgraded from version 1.09 to 1.10. - =item * - L<Pod::Perldoc> has been upgraded from version 3.25 to 3.25_02. - =item * - L<Pod::Simple> has been upgraded from version 3.29 to 3.32. - =item * - L<Pod::Usage> has been upgraded from version 1.64 to 1.68. - =item * - L<POSIX> has been upgraded from version 1.53 to 1.65. - =item * - L<Scalar::Util> has been upgraded from version 1.41 to 1.42_02. - =item * - L<SDBM_File> has been upgraded from version 1.13 to 1.14. - =item * - L<SelfLoader> has been upgraded from version 1.22 to 1.23. - =item * - L<Socket> has been upgraded from version 2.018 to 2.020_03. - =item * - L<Storable> has been upgraded from version 2.53 to 2.56. - =item * - L<strict> has been upgraded from version 1.09 to 1.11. - =item * - L<Term::ANSIColor> has been upgraded from version 4.03 to 4.04. - =item * - L<Term::Cap> has been upgraded from version 1.15 to 1.17. - =item * - L<Test> has been upgraded from version 1.26 to 1.28. - =item * - L<Test::Harness> has been upgraded from version 3.35 to 3.36. - =item * - L<Thread::Queue> has been upgraded from version 3.05 to 3.09. - =item * - L<threads> has been upgraded from version 2.01 to 2.07. - =item * - L<threads::shared> has been upgraded from version 1.48 to 1.51. - =item * - L<Tie::File> has been upgraded from version 1.01 to 1.02. - =item * - L<Tie::Scalar> has been upgraded from version 1.03 to 1.04. - =item * - L<Time::HiRes> has been upgraded from version 1.9726 to 1.9733. - =item * - L<Time::Piece> has been upgraded from version 1.29 to 1.31. - =item * - L<Unicode::Collate> has been upgraded from version 1.12 to 1.14. 
- =item * - L<Unicode::Normalize> has been upgraded from version 1.18 to 1.25. - =item * - L<Unicode::UCD> has been upgraded from version 0.61 to 0.64. - =item * - L<UNIVERSAL> has been upgraded from version 1.12 to 1.13. - =item * - L<utf8> has been upgraded from version 1.17 to 1.19. - =item * - L<version> has been upgraded from version 0.9909 to 0.9916. - =item * - L<warnings> has been upgraded from version 1.32 to 1.36. - =item * - L<Win32> has been upgraded from version 0.51 to 0.52. - =item * - L<Win32API::File> has been upgraded from version 0.1202 to 0.1203. - =item * - L<XS::Typemap> has been upgraded from version 0.13 to 0.14. - =item * - L<XSLoader> has been upgraded from version 0.20 to 0.21. - =back - =head1 Documentation - =head2 Changes to Existing Documentation - =head3 L<perlapi> - =over 4 - =item * - The process of using undocumented globals has been documented, namely, that one should send email to L<[email protected]|mailto:[email protected]> first to get the go-ahead for documenting and using an undocumented function or global variable. - =back - =head3 L<perlcall> - =over 4 - =item * - A number of cleanups have been made to perlcall, including: - =over 4 - =item * - use C<EXTEND(SP, n)> and C<PUSHs()> instead of C<XPUSHs()> where applicable and update prose to match - =item * - add POPu, POPul and POPpbytex to the "complete list of POP macros" and clarify the documentation for some of the existing entries, and a note about side-effects - =item * - add API documentation for POPu and POPul - =item * - use ERRSV more efficiently - =item * - approaches to thread-safety storage of SVs. - =back - =back - =head3 L<perlfunc> - =over 4 - =item * - The documentation of C<hex> has been revised to clarify valid inputs. - =item * - Better explain meaning of negative PIDs in C<waitpid>. L<[perl #127080]|> - =item * - General cleanup: there's more consistency now (in POD usage, grammar, code examples), better practices in code examples (use of C<my>, removal of bareword filehandles, dropped usage of C<&> when calling subroutines, ...), etc. - =back - =head3 L<perlguts> - =over 4 - =item * - A new section has been added, L<perlguts/"Dynamic Scope and the Context Stack">, which explains how the perl context stack works. - =back - =head3 L<perllocale> - =over 4 - =item * - A stronger caution about using locales in threaded applications is given. Locales are not thread-safe, and you can get wrong results or even segfaults if you use them there. - =back - =head3 L<perlmodlib> - =over 4 - =item * - We now recommend contacting the module-authors list or PAUSE in seeking guidance on the naming of modules. - =back - =head3 L<perlop> - =over 4 - =item * - The documentation of C<qx//> now describes how C<$?> is affected. - =back - =head3 L<perlpolicy> - =over 4 - =item * - This note has been added to perlpolicy: - While civility is required, kindness is encouraged; if you have any doubt about whether you are being civil, simply ask yourself, "Am I being kind?" and aspire to that. - =back - =head3 L<perlreftut> - =over 4 - =item * - Fix some examples to be L<strict> clean. - =back - =head3 L<perlrebackslash> - =over 4 - =item * - Clarify that in languages like Japanese and Thai, dictionary lookup is required to determine word boundaries. - =back - =head3 L<perlsub> - =over 4 - =item * - Updated to note that anonymous subroutines can have signatures. - =back - =head3 L<perlsyn> - =over 4 - =item * - Fixed a broken example where C<=> was used instead of C<==> in conditional in do/while example. 
- =back - =head3 L<perltie> - =over 4 - =item * - The usage of C<FIRSTKEY> and C<NEXTKEY> has been clarified. - =back - =head3 L<perlunicode> - =over 4 - =item * - Discourage use of 'In' as a prefix signifying the Unicode Block property. - =back - =head3 L<perlvar> - =over 4 - =item * - The documentation of C<$@> was reworded to clarify that it is not just for syntax errors in C<eval>. L<[perl #124034]|> - =item * - The specific true value of C<$!{E...}> is now documented, noting that it is subject to change and not guaranteed. - =item * - Use of C<$OLD_PERL_VERSION> is now discouraged. - =back - =head3 L<perlxs> - =over 4 - =item * - The documentation of C<PROTOTYPES> has been corrected; they are I<disabled> by default, not I<enabled>. - =back - =head1 Diagnostics - The following additions or changes have been made to diagnostic output, including warnings and fatal error messages. For the complete list of diagnostic messages, see L<perldiag>. - =head2 New Diagnostics - =head3 New Errors - =over 4 - =item * - L<%s must not be a named sequence in transliteration operator|perldiag/"%s must not be a named sequence in transliteration operator"> - =item * - L<Can't find Unicode property definition "%s" in regex;|perldiag/"Can't find Unicode property definition "%s" in regex; marked by <-- HERE in m/%s/"> - =item * - L<Can't redeclare "%s" in "%s"|perldiag/"Can't redeclare "%s" in "%s""> - =item * - L<Character following \p must be '{' or a single-character Unicode property name in regex;|perldiag/"Character following \%c must be '{' or a single-character Unicode property name in regex; marked by <-- HERE in m/%s/"> - =item * - L<Empty \%c in regex; marked by E<lt>-- HERE in mE<sol>%sE<sol> |perldiag/"Empty \%c in regex; marked by <-- HERE in mE<sol>%sE<sol>"> - =item * - L<Illegal user-defined property name|perldiag/"Illegal user-defined property name"> - =item * - L<Invalid number '%s' for -C option.|perldiag/"Invalid number '%s' for -C option."> - =item * - L<<< Sequence (?... not terminated in regex; marked by S<<-- HERE> in mE<sol>%sE<sol>|perldiag/"Sequence (?... not terminated in regex; marked by <-- HERE in mE<sol>%sE<sol>" >>> - =item * - L<<< Sequence (?PE<lt>... not terminated in regex; marked by E<lt>-- HERE in mE<sol>%sE<sol> |perldiag/"Sequence (?PE<lt>... not terminated in regex; marked by <-- HERE in mE<sol>%sE<sol>" >>> - =item * - L<Sequence (?PE<gt>... not terminated in regex; marked by E<lt>-- HERE in mE<sol>%sE<sol> |perldiag/"Sequence (?PE<gt>... not terminated in regex; marked by <-- HERE in mE<sol>%sE<sol>"> - =back - =head3 New Warnings - =over 4 - =item * - L<Assuming NOT a POSIX class since %s in regex; marked by E<lt>-- HERE in mE<sol>%sE<sol>| perldiag/Assuming NOT a POSIX class since %s in regex; marked by <-- HERE in mE<sol>%sE<sol>> - =item * - L<%s() is deprecated on :utf8 handles|perldiag/"%s() is deprecated on :utf8 handles"> - =back - =head2 Changes to Existing Diagnostics - =over 4 - =item * - Accessing the C<IO> part of a glob as C<FILEHANDLE> instead of C<IO> is no longer deprecated. It is discouraged to encourage uniformity (so that, for example, one can grep more easily) but it will not be removed. L<[perl #127060]|> - =item * - The diagnostic C<< Hexadecimal float: internal error >> has been changed to C<< Hexadecimal float: internal error (%s) >> to include more information. 
- =item * - L<Can't modify non-lvalue subroutine call of &%s|perldiag/"Can't modify non-lvalue subroutine call of &%s"> - This error now reports the name of the non-lvalue subroutine you attempted to use as an lvalue. - =item * - When running out of memory during an attempt to increase the stack size, previously, perl would die using the cryptic message C<< panic: av_extend_guts() negative count (-9223372036854775681) >>. This has been fixed to show the prettier message: L<< Out of memory during stack extend|perldiag/"Out of memory during %s extend" >> - =back - =head1 Configuration and Compilation - =over 4 - =item * - C<Configure> now acts as if the C<-O> option is always passed, allowing command line options to override saved configuration. This should eliminate confusion when command line options are ignored for no obvious reason. C<-O> is now permitted, but ignored. - =item * - Bison 3.0 is now supported. - =item * - F<Configure> no longer probes for F<libnm> by default. Originally this was the "New Math" library, but the name has been re-used by the GNOME NetworkManager. L<[perl #127131]|> - =item * - Added F<Configure> probes for C<newlocale>, C<freelocale>, and C<uselocale>. - =item * - C<< PPPort.so/PPPort.dll >> no longer get installed, as they are not used by C<< PPPort.pm >>, only by its test files. - =item * - It is now possible to specify which compilation date to show on C<< perl -V >> output, by setting the macro C<< PERL_BUILD_DATE >>. - =item * - Using the C<NO_HASH_SEED> define in combination with the default hash algorithm C<PERL_HASH_FUNC_ONE_AT_A_TIME_HARD> resulted in a fatal error while compiling the interpreter, since Perl 5.17.10. This has been fixed. - =item * - F<Configure> should handle spaces in paths a little better. - =item * - No longer generate EBCDIC POSIX-BC tables. We don't believe anyone is using Perl and POSIX-BC at this time, and by not generating these tables it saves time during development, and makes the resulting tar ball smaller. - =item * - The GNU Make makefile for Win32 now supports parallel builds. [perl #126632] - =item * - You can now build perl with MSVC++ on Win32 using GNU Make. [perl #126632] - =item * - The Win32 miniperl now has a real C<getcwd> which increases build performance resulting in C<getcwd()> being 605x faster in Win32 miniperl. - =item * - Configure now takes C<-Dusequadmath> into account when calculating the C<alignbytes> configuration variable. Previously the mis-calculated C<alignbytes> could cause alignment errors on debugging builds. [perl #127894] - =back - =head1 Testing - =over 4 - =item * - A new test (F<t/op/aassign.t>) has been added to test the list assignment operator C<OP_AASSIGN>. - =item * - Parallel building has been added to the dmake C<makefile.mk> makefile. All Win32 compilers are supported. - =back - =head1 Platform Support - =head2 Platform-Specific Notes - =over 4 - =item AmigaOS - =over 4 - =item * - The AmigaOS port has been reintegrated into the main tree, based off of Perl 5.22.1. - =back - =item Cygwin - =over 4 - =item * - Tests are more robust against unusual cygdrive prefixes. L<[perl #126834]|> - =back - =item EBCDIC - =over 4 - =item L<[email protected]|mailto:[email protected]>, and we will write a conversion script for you. - =item EBCDIC C<cmp()> and C<sort()> fixed for UTF-EBCDIC strings - Comparing two strings that were both encoded in UTF-8 (or more precisely, UTF-EBCDIC) did not work properly until now. Since C<sort()> uses C<cmp()>, this fixes that as well.
- =item EBCDIC C<tr///> and C<y///> fixed for C<\N{}>, and C<S<use utf8>> ranges - Perl v5.22 introduced the concept of portable ranges to regular expression patterns. A portable range matches the same set of characters no matter what platform is being run on. This concept is now extended to C<tr///>. See C<L<trE<sol>E<sol>E<sol>|perlop/trE<sol>SEARCHLISTE<sol>REPLACEMENTLISTE<sol>cdsr>>. - There were also some problems with these operations under S<C<use utf8>>, which are now fixed - =back - =item FreeBSD - =over 4 - =item * - Use the C<fdclose()> function from FreeBSD if it is available. L<[perl #126847]|> - =back - =item IRIX - =over 4 - =item * - Under some circumstances IRIX stdio C<fgetc()> and C<fread()> set the errno to C<ENOENT>, which made no sense according to either IRIX or POSIX docs. Errno is now cleared in such cases. L<[perl #123977]|> - =item * - Problems when multiplying long doubles by infinity have been fixed. L<[perl #126396]|> - =back - =item MacOS X - =over 4 - =item * - C<export MACOSX_DEPLOYMENT_TARGET=10.N> before. - C<setenv()> function to update the environment. - Perl now uses C<setenv()>/C<unsetenv()> to update the environment on OS X. L<[perl #126240]|> - =back - =item Solaris - =over 4 - =item * -. - =back - =item Tru64 - =over 4 - =item * - Workaround where Tru64 balks when prototypes are listed as C<< PERL_STATIC_INLINE >>, but where the test is build with C<< -DPERL_NO_INLINE_FUNCTIONS >>. - =back - =item VMS - =over 4 - =item * - On VMS, the math function prototypes in C<math.h> are now visible under C++. Now building the POSIX extension with C++ will no longer crash. - =item * - VMS has had C<setenv>/C<unsetenv> since v7.0 (released in 1996), C<Perl_vmssetenv> now always uses C<setenv>/C<unsetenv>. - =item * - Perl now implements its own C<killpg> by C<$pid>. - =item * - For those C<%ENV> elements based on the CRTL environ array, we've always preserved case when setting them but did look-ups only after upcasing the key first, which made lower- or mixed-case entries go missing. This problem has been corrected by making C<%ENV> elements derived from the environ array case-sensitive on look-up as well as case-preserving on store. - =item * - Environment look-ups for C<PERL5LIB> and C<PERLLIB> previously only considered logical names, but now consider all sources of C<%ENV> as determined by C<PERL_ENV_TABLES> and as documented in L<perlvms/%ENV>. - =item * - The minimum supported version of VMS is now v7.3-2, released in 2003. As a side effect of this change, VAX is no longer supported as the terminal release of OpenVMS VAX was v7.3 in 2001. - =back - =item Win32 - =over 4 - =item * - A new build option C<USE_NO_REGISTRY> has been added to the makefiles. This option is off by default, meaning the default is to do Windows registry lookups. This option stops Perl from looking inside the registry for anything. For what values are looked up in the registry see L<perlwin32>. Internally, in C, the name of this option is C<WIN32_NO_REGISTRY>. - =item * - The behavior of Perl using C<HKEY_CURRENT_USER\Software\Perl> and C<HKEY_LOCAL_MACHINE\Software\Perl> to lookup certain values, including C<%ENV> vars starting with C<PERL> has changed. Previously, the 2 keys were checked for entries at all times through the perl process's life time even if they did not exist. For performance reasons, now, if the root key (i.e. 
C<HKEY_CURRENT_USER\Software\Perl> or C<HKEY_LOCAL_MACHINE\Software\Perl>) does not exist at process start time, it will not be checked again for C<%ENV> override entries for the remainder of the perl process's life. This more closely matches Unix behavior in that the environment is copied or inherited on startup and changing the variable in the parent process or another process or editing F<.bashrc> will not change the environmental variable in other existing, running, processes. - =item * - One glob fetch was removed for each C<-X> or C<stat> call whether done from Perl code or internally from Perl's C code. The glob being looked up was C<${^WIN32_SLOPPY_STAT}> which is a special variable. This makes C<-X> and C<stat> slightly faster. - =item * - During miniperl's process startup, during the build process, 4 to 8 IO calls related to the process starting F<.pl> and the F<buildcustomize.pl> file were removed from the code opening and executing the first 1 or 2 F<.pl> files. - =item * - Builds using Microsoft Visual C++ 2003 and earlier no longer produce an "INTERNAL COMPILER ERROR" message. [perl #126045] - =item * - Visual C++ 2013 builds will now execute on XP and higher. Previously they would only execute on Vista and higher. - =item * - You can now build perl with GNU Make and GCC. [perl #123440] - =item * - C<truncate($filename, $size)> now works for files over 4GB in size. - perl #125347 - =item * - Parallel building has been added to the dmake C<makefile.mk> makefile. All Win32 compilers are supported. - =item * - Building a 64-bit perl with a 64-bit GCC but a 32-bit gmake would result in an invalid C<$Config{archname}> for the resulting perl. - perl #127584 - =item * - Errors set by Winsock functions are now put directly into C<$^E>, and the relevant C<WSAE*> error codes are now exported from the L<Errno> and L<POSIX> modules for testing this against. - The previous behavior of putting the errors (converted to POSIX-style C<E*> error codes since Perl 5.20.0) into C<$!> C<$!> against C<E*> constants for Winsock errors to instead test C<$^E> against C<WSAE*> constants. After a suitable deprecation period, the old behavior may be removed, leaving C<$!> unchanged after Winsock function calls, to avoid any possible confusion over which error variable to check. - =back - =item ppc64el - =over 4 - =item floating point - The floating point format of ppc64el (Debian naming for little-endian PowerPC) is now detected correctly. - =back - =back - =head1 Internal Changes - =over 4 - =item * - The implementation of perl's context stack system, and its internal API, have been heavily reworked. Note that no significant changes have been made to any external APIs, but XS code which relies on such internal details may need to be fixed. The main changes are: - =over 4 - =item * - The C<PUSHBLOCK()>, C<POPSUB()> etc. macros have been replaced with static inline functions such as C<cx_pushblock()>, C<cx_popsub()> etc. These use function args rather than implicitly relying on local vars such as C<gimme> and C<newsp> being available. Also their functionality has changed: in particular, C<cx_popblock()> no longer decrements C<cxstack_ix>. The ordering of the steps in the C<pp_leave*> functions involving C<cx_popblock()>, C<cx_popsub()> etc. has changed. See the new documentation, L<perlguts/"Dynamic Scope and the Context Stack">, for details on how to use them. 
- =item * - Various macros, which now consistently have a CX_ prefix, have been added: - CX_CUR(), CX_LEAVE_SCOPE(), CX_POP() - or renamed: - CX_POP_SAVEARRAY(), CX_DEBUG(), CX_PUSHSUBST(), CX_POPSUBST() - =item * - C<cx_pushblock()> now saves C<PL_savestack_ix> and C<PL_tmps_floor>, so C<pp_enter*> and C<pp_leave*> no longer do - ENTER; SAVETMPS; ....; LEAVE - =item * - C<cx_popblock()> now also restores C<PL_curpm>. - =item * - In C<dounwind()> for every context type, the current savestack frame is now processed before each context is popped; formerly this was only done for sub-like context frames. This action has been removed from C<cx_popsub()> and placed into its own macro, C<CX_LEAVE_SCOPE(cx)>, which must be called before C<cx_popsub()> etc. - C<dounwind()> now also does a C<cx_popblock()> on the last popped frame (formerly it only did the C<cx_popsub()> etc. actions on each frame). - =item * - The temps stack is now freed on scope exit; previously, temps created during the last statement of a block wouldn't be freed until the next C<nextstate> following the block (apart from an existing hack that did this for recursive subs in scalar context); and in something like C<f(g())>, the temps created by the last statement in C<g()> would formerly not be freed until the statement following the return from C<f()>. - =item * - Most values that were saved on the savestack on scope entry are now saved in suitable new fields in the context struct, and saved and restored directly by C<cx_pushfoo()> and C<cx_popfoo()>, which is much faster. - =item * - Various context struct fields have been added, removed or modified. - =item * - The handling of C<@_> in C<cx_pushsub()> and C<cx_popsub()> has been considerably tidied up, including removing the C<argarray> field from the context struct, and extracting out some common (but rarely used) code into a separate function, C<clear_defarray()>. Also, useful subsets of C<cx_popsub()> which had been unrolled in places like C<pp_goto> have been gathered into the new functions C<cx_popsub_args()> and C<cx_popsub_common()>. - =item * - C<pp_leavesub> and C<pp_leavesublv> now use the same function as the rest of the C<pp_leave*>'s to process return args. - =item * - C<CXp_FOR_PAD> and C<CXp_FOR_GV> flags have been added, and C<CXt_LOOP_FOR> has been split into C<CXt_LOOP_LIST>, C<CXt_LOOP_ARY>. - =item * - Some variables formerly declared by C<dMULTICALL> (but not documented) have been removed. - =back - =item * - The obscure C<PL_timesbuf> variable, effectively a vestige of Perl 1, has been removed. It was documented as deprecated in Perl 5.20, with a statement that it would be removed early in the 5.21.x series; that has now finally happened. L<[perl #121351]|> - =item * - An unwarranted assertion in C<Perl_newATTRSUB_x()> has been removed. If a stub subroutine definition with a prototype has been seen, then any subsequent stub (or definition) of the same subroutine with an attribute was causing an assertion failure because of a null pointer. L<[perl #126845]|> - =item * - C<::> has been replaced by C<__> in C<ExtUtils::ParseXS>, like it's done for parameters/return values. This is more consistent, and simplifies writing XS code wrapping C++ classes into a nested Perl namespace (it requires only a typedef for C<Foo__Bar> rather than two, one for C<Foo_Bar> and the other for C<Foo::Bar>). - =item * - The C<to_utf8_case()> function is now deprecated. Instead use C<toUPPER_utf8>, C<toTITLE_utf8>, C<toLOWER_utf8>, and C<toFOLD_utf8>. (See L<>.) 
- =item * - Perl core code and the threads extension have been annotated so that, if Perl is configured to use threads, then during compile-time clang (3.6 or later) will warn about suspicious uses of mutexes. See L<> for more information. - =item * - The C<signbit()> emulation has been enhanced. This will help older and/or more exotic platforms or configurations. - =item * - Most EBCDIC-specific code in the core has been unified with non-EBCDIC code, to avoid repetition and make maintenance easier. - =item * - MSWin32 code for C<$^X> has been moved out of the F<win32> directory to F<caretx.c>, where other operating systems set that variable. - =item * - C<< sv_ref() >> is now part of the API. - =item * - L<perlapi/sv_backoff> had its return type changed from C<int> to C<void>. It previously had always returned C<0> since Perl 5.000 stable, but that was undocumented. Although C<sv_backoff> is marked as public API, XS code is not expected to be impacted since the proper API call would be through public API C<sv_setsv(sv, &PL_sv_undef)>, or quasi-public C<SvOOK_off>, or non-public C<SvOK_off> calls, and the return value of C<sv_backoff> was previously a meaningless constant that can be rewritten as C<(sv_backoff(sv),0)>. - =item * - The C<EXTEND> and C<MEXTEND> macros have been improved to avoid various issues with integer truncation and wrapping. In particular, some casts formerly used within the macros have been removed. This means for example that passing an unsigned C<nitems> argument is likely to raise a compiler warning now (it's always been documented to require a signed value; formerly int, lately SSize_t). - =item * - C<PL_sawalias> and C<GPf_ALIASED_SV> have been removed. - =item * - C<GvASSIGN_GENERATION> and C<GvASSIGN_GENERATION_set> have been removed. - =back - =head1 Selected Bug Fixes - =over 4 - =item * - It now works properly to specify a user-defined property, such as - qr/\p{mypkg1::IsMyProperty}/i - with C</i> caseless matching, an explicit package name, and I<IsMyProperty> not defined at the time of the pattern compilation. - =item * - Perl's C<memcpy()>, C<memmove()>, C<memset()> and C<memcmp()> fallbacks are now more compatible with the originals. [perl #127619] - =item * - Fixed the issue where C<< s///r >> with B<< -DPERL_NO_COW >> attempts to modify the source SV, resulting in the program dying. [perl #127635] - =item * - Fixed an EBCDIC-platform-only case where a pattern could fail to match. This occurred when matching characters from the set of C1 controls when the target matched string was in UTF-8. - =item * - Narrow the filename check in F<strict.pm> and F<warnings.pm>. Previously, it assumed that if the filename (without the F<.pmc?> extension) differed from the package name, it was a misspelled use statement (i.e. C<use Strict> instead of C<use strict>). We now check whether there's really a miscapitalization happening, and not some other issue. - =item * - Turn an assertion into a more user-friendly failure when parsing regexes. [perl #127599] - =item * - Correctly raise an error when trying to compile patterns with unterminated character classes while there are trailing backslashes. [perl #126141]. - =item * - Line numbers larger than 2**31-1 but less than 2**32 are no longer returned by C<caller()> as negative numbers. [perl #126991] - =item * - C<< unless ( I<assignment> ) >> now properly warns when syntax warnings are enabled.
[perl #127122] - =item * - Setting an C<ISA> glob to an array reference now properly adds C<isaelem> magic to any existing elements. Previously modifying such an element would not update the ISA cache, so method calls would call the wrong function. Perl would also crash if the C<ISA> glob was destroyed, since new code added in 5.23.7 would try to release the C<isaelem> magic from the elements. [perl #127351] - =item * - C<untie()> would sometimes return the last value returned by the C<UNTIE()> handler as well as its normal value, messing up the stack. [perl #126621] - =item * - Fixed an operator precedence problem when C< castflags & 2> is true. - perl #127474 - =item * - Caching of DESTROY methods could result in a non-pointer or a non-STASH stored in the C<SvSTASH()> slot of a stash, breaking the B C<STASH()> method. The DESTROY method is now cached in the MRO metadata for the stash. [perl #126410] - =item * - The AUTOLOAD method is now called when searching for a DESTROY method, and correctly sets C<$AUTOLOAD> too. [perl #124387] [perl #127494] - =item * - Avoid parsing beyond the end of the buffer when processing a C<#line> directive with no filename. [perl #127334] - =item * - Perl now raises a warning when a regular expression pattern looks like it was supposed to contain a POSIX class, like C<qr/[[:alpha:]]/>, but there was some slight defect in its specification which causes it to instead be treated as a regular bracketed character class. An example would be missing the second colon in the above like this: C<qr/[[:alpha]]/>. This compiles to match a sequence of two characters. The second is C<"]">, and the first is any of: C<"[">, C<":">, C<"a">, C<"h">, C<"l">, or C<"p">. This is unlikely to be the intended meaning, and now a warning is raised. No warning is raised unless the specification is very close to one of the 14 legal POSIX classes. (See L<perlrecharclass/POSIX Character Classes>.) - perl #8904 - =item * - Certain regex patterns involving a complemented POSIX class in an inverted bracketed character class, and matching something else optionally would improperly fail to match. An example of one that could fail is C<qr/_?[^\Wbar]\x{100}/>. This has been fixed. - perl #127537 - =item * - Perl 5.22 added support for the C99 hexadecimal floating point notation, but sometimes misparsed hex floats. This has been fixed. - perl #127183 - =item * - A regression that allowed undeclared barewords in hash keys to work despite strictures has been fixed. L<[perl #126981]|> - =item * - Calls to the placeholder C<&PL_sv_yes> used internally when an C<import()> or C<unimport()> method isn't found now correctly handle scalar context. L<[perl #126042]|> - =item * - Report more context when we see an array where we expect to see an operator and avoid an assertion failure. L<[perl #123737]|> - =item * - Modifying an array that was previously a package C<@ISA> no longer causes assertion failures or crashes. L<[perl #123788]|> - =item * - Retain binary compatibility across plain and DEBUGGING perl builds. L<[perl #127212]|> - =item * - Avoid leaking memory when setting C<$ENV{foo}> on darwin. L<[perl #126240]|> - =item * - C</...\G/> no longer crashes on utf8 strings. When C<\G> is a fixed number of characters from the start of the regex, perl needs to count back that many characters from the current C<pos()> position and start matching from there. However, it was counting back bytes rather than characters, which could lead to panics on utf8 strings.
- =item * - In some cases operators that return integers would return negative integers as large positive integers. L<[perl #126635]|> - =item * - The C<pipe()> operator would assert for DEBUGGING builds instead of producing the correct error message. The condition asserted on is detected and reported on correctly without the assertions, so the assertions were removed. L<[perl #126480]|> - =item * - In some cases, failing to parse a here-doc would attempt to use freed memory. This was caused by a pointer not being restored correctly. L<[perl #126443]|> - =item * - C<< @x = sort { *a = 0; $a <=> $b } 0 .. 1 >> no longer frees the GP for *a before restoring its SV slot. L<[perl #124097]|> - =item * - Multiple problems with the new hexadecimal floating point printf format C<%a> were fixed: L<[perl #126582]|>, L<[perl #126586]|>, L<[perl #126822]|> - =item * - Calling C<mg_set()> in C<leave_scope()> no longer leaks. - =item * - A regression from Perl v5.20 was fixed in which debugging output of regular expression compilation was wrong. (The pattern was correctly compiled, but what got displayed for it was wrong.) - =item * - C<. - =item * - Certain syntax errors in L<perlrecharclass/Extended Bracketed Character Classes> caused panics instead of the proper error message. This has now been fixed. [perl #126481] - =item * - Perl 5.20 added a message when a quantifier in a regular expression was useless, but then caused the parser to skip it; this caused the surplus quantifier to be silently ignored, instead of throwing an error. This is now fixed. [perl #126253] - =item * - The switch to building non-XS modules last in win32/makefile.mk (introduced by design as part of the changes to enable parallel building) caused the build of POSIX to break due to problems with the version module. This is now fixed. - =item * - Improved parsing of hex float constants. - =item * - Fixed an issue with C<< pack >> where C<< pack "H" >> (and C<< pack "h" >>) could read past the source when given a non-utf8 source, and a utf8 target. - perl #126325 - =item * - Fixed several cases where perl would abort due to a segmentation fault, or a C-level assert. [perl #126615], [perl #126602], [perl #126193]. - =item * - There were places in regular expression patterns where comments (C<(?#...)>) weren't allowed, but should have been. This is now fixed. L<[perl #116639]|> - =item * - Some regressions from Perl 5.20 have been fixed, in which some syntax errors in L<C<(?[...])>|perlrecharclass/Extended Bracketed Character Classes> constructs within regular expression patterns could cause a segfault instead of a proper error message. L<[perl #126180]|> L<[perl #126404]|> - =item * - Another problem with L<C<(?[...])>|perlrecharclass/Extended Bracketed Character Classes> constructs has been fixed wherein things like C<\c]> could cause panics. L<[perl #126181]|> - =item * - C<$big_number>; now it will typically raise an exception. L<[perl #125937]|> - =item * - In a regex conditional expression C<(?(condition)yes-pattern|no-pattern)>, if the condition is C<(?!)> then perl failed the match outright instead of matching the no-pattern. This has been fixed. L<[perl #126222]|> - =item * - The special backtracking control verbs C<(*VERB:ARG)> now all allow an optional argument and set C<REGERROR>/C<REGMARK> appropriately as well. 
L<[perl #126186]|> - =item * - Several bugs, including a segmentation fault, have been fixed with the boundary checking constructs (introduced in Perl 5.22) C<\b{gcb}>, C<\b{sb}>, C<\b{wb}>, C<\B{gcb}>, C<\B{sb}>, and C<\B{wb}>. All the C<\B{}> ones now match an empty string; none of the C<\b{}> ones do. L<[perl #126319]|> - =item * - Duplicating a closed file handle for write no longer creates a filename of the form F<GLOB(0xXXXXXXXX)>. [perl #125115] - =item * - Warning fatality is now ignored when rewinding the stack. This prevents infinite recursion when the now fatal error also causes rewinding of the stack. [perl #123398] - =item * - In perl v5.22.0, the logic changed when parsing a numeric parameter to the -C option, such that the successfully parsed number was not saved as the option value if it parsed to the end of the argument. [perl #125381] - =item * - The PadlistNAMES macro is an lvalue again. - =item * - Zero -DPERL_TRACE_OPS memory for sub-threads. - C<perl_clone_using()> was missing Zero init of PL_op_exec_cnt[]. This caused sub-threads in threaded -DPERL_TRACE_OPS builds to spew exceedingly large op-counts at destruct. These counts would print %x as "ABABABAB", clearly a mem-poison value. - =item * - A leak in the XS typemap caused one scalar to be leaked each time a C<FILE *> or a C<PerlIO *> was C<OUTPUT:>ed or imported to Perl, since perl 5.000. These particular typemap entries are thought to be extremely rarely used by XS modules. [perl #124181] - =item * - C<alarm()> and C<sleep()> will now warn if the argument is a negative number and return undef. Previously they would pass the negative value to the underlying C function which may have set up a timer with a surprising value. - =item * - Perl can again be compiled with any Unicode version. This used to (mostly) work, but was lost in v5.18 through v5.20. The property C<Name_Alias> did not exist prior to Unicode 5.0. L<Unicode::UCD> incorrectly said it did. This has been fixed. - =item * - Very large code-points (beyond Unicode) in regular expressions no longer cause a buffer overflow in some cases when converted to UTF-8. L<[perl #125826]|> - =item * - The integer overflow check for the range operator (...) in list context now correctly handles the case where the size of the range is larger than the address space. This could happen on 32-bits with -Duse64bitint. L<[perl #125781]|> - =item * - A crash with C<< %::=(); J->${\"::"} >> has been fixed. L<[perl #125541]|> - =item * - C<qr/(?[ () ])/> no longer segfaults, giving a syntax error message instead. - perl #125805 - =item * - Regular expression possessive quantifier v5.20 regression now fixed. C<qr/>I<PAT>C<{>I<min>,I<max>C<}+>C</> is supposed to behave identically to C<qr/(?E<gt>>I<PAT>C<{>I<min>,I<max>C<})/>. Since v5.20, this didn't work if I<min> and I<max> were equal. [perl #125825] - =item * - C<< BEGIN <> >> no longer segfaults and properly produces an error message. [perl #125341] - =item * - In C<tr///> an illegal backwards range like C<tr/\x{101}-\x{100}//> was not always detected, giving incorrect results. This is now fixed. - =back - =head1 Acknowledgements - XXX: generate this just in time, Ricardo! - . - If the bug you are reporting has security implications which make it inappropriate to send to a publicly archived mailing list, then see L<perlsec/SECURITY VULNERABILITY CONTACT INFORMATION> for details of how to report the issue. - =head1 SEE ALSO - The F<Changes> file for an explanation of how to view exhaustive details on what changed. 
- The F<INSTALL> file for how to build Perl. - The F<README> file for general stuff. - The F<Artistic> and F<Copying> files for copyright information. - =cut Documentation - INSTALL - README.pod - README for the Porting/ directory in the Perl 5 core distribution. - bench.pl - Compare the performance of perl code snippets across multiple perls. - bisect.pl - use git bisect to pinpoint changes - checkURL.pl - Check that all the URLs in the Perl source are valid - checkansi.pl - Check source code for ANSI-C violations - perlepigraphs - list of Perl release epigraphs - expand-macro.pl - expand C macros using the C preprocessor - git-deltatool - Annotate commits for perldelta - how_to_write_a_perldelta - How to write a perldelta - Release - Pumpkin - Notes on handling the Perl Patch Pumpkin And Porting Perl - release_managers_guide - Releasing a new version of perl 5.x - release_schedule - Perl 5 release schedule - sort_perldiag.pl - Sort warning and error messages in perldiag.pod - todo - Perl TO-DO list - valgrindpp.pl - A post processor for make test.valgrind - perlapi - autogenerated documentation for the perl public API - Config - access Perl configuration information - Locale::Codes::Changes - lib - manipulate @INC at compile time - DynaLoader - Dynamically load C libraries into Perl code - Errno - System errno constants - Pod::Functions - Group Perl's functions a la perlfunc.pod - Functions.t - Test Pod::Functions - pod2html - convert .pod files to .html files - anchorify - Test Pod::Html::anchorify() - htmlcrossref - Test HTML cross reference links - crlf - ext::Pod-Html::t::feature - ext::Pod-Html::t::feature2 - - Escape - htmllink - Test HTML links - Test - Test - Test - perlpodspeccopy - Plain Old Documentation: format specification and notes - perlvarcopy - Perl predefined variables - installhtml - converts a collection of POD pages to HTML format. - CORE - Namespace for Perl's core routines - InputObjects.t - The tests for Pod::InputObjects - Select.t - Tests for Pod::Select. - Usage.t - Tests for Pod::Usage - perl5db.pl - the perl debugger - perl5125delta - what is new for perl v5.12.5 - perl5140delta - what is new for perl v5.14.0 - perl5141delta - what is new for perl v5.14.1 - perl5142delta - what is new for perl v5.14.2 - perl5143delta - what is new for perl v5.14.3 - perl5144delta - what is new for perl v5.14.4 - perl5160delta - what is new for perl v5.16.0 - perl5161delta - what is new for perl v5.16.1 - perl5162delta - what is new for perl v5.16.2 - perl5163delta - what is new for perl v5.16.3 - perl5180delta - what is new for perl v5.18.0 - perl5181delta - what is new for perl v5.18.1 - perl5182delta - what is new for perl v5.18.2 - perl5184delta - what is new for perl v5.18.4 - perl5200delta - what is new for perl v5.20.0 - perl5201delta - what is new for perl v5.20.1 - perl5202delta - what is new for perl v5.20.2 - perl5203delta - what is new for perl v5.20.3 - perl5220delta - what is new for perl v5.22.0 - perl5221delta - what is new for perl v5.22.1 - perl5222delta - what is new for perl v5.22.2 - perlapio - perl's IO abstraction interface.
- perlartistic - the Perl Artistic License - perlbook - Books about and related to Perl - perlboot - Links to information on object-oriented programming in Perl - perlbot - Links to information on object-oriented programming in Perl - perlexperiment - A listing of experimental features in Perl - perlfilter - Source Filters - perlopentut - simple recipes for opening files and pipes in Perl - perlpacktut - tutorial on pack and unpack - perlperf - Perl Performance and Optimization Techniques - perlpod - the Plain Old Documentation format - perlpodspec - Plain Old Documentation: format specification and notes - perlpodstyle - Perl POD style guide - perlrepository - Links to current information on the Perl source repository - perltodo - Link to the Perl to-do list - perltooc - Links to information on object-oriented programming in Perl - perltoot - Links to information on object-oriented programming in Perl - perltrap - Perl traps for the unwary - perlunicode - Unicode support in Perl - perlunicook - cookbookish examples of handling Unicode in Perl - perlunifaq - Perl Unicode FAQ - perluniintro - Perl Unicode introduction - perlunitut - Perl Unicode Tutorial - perlutil - utilities packaged with the Perl distribution - perlvar - Perl predefined variables - perlvms - VMS-specific documentation for Perl - feature - Perl pragma to enable new features - CharClass::Matcher - Generate C macros that match character classes efficiently - warnings - Perl pragma to control optional warnings - CPerlBase - a C++ base class encapsulating a Perl interpreter in Symbian - PerlUtil - a C++ utility class for Perl/Symbian - h2ph - convert .h C header files to .ph Perl header files - h2xs - convert .h C header files to Perl extensions - libnetcfg - configure libnet - perlbug - how to submit bug reports on Perl - perlivp - Perl Installation Verification Procedure - pl2pm - Rough tool to translate Perl4 .pl files to Perl5 .pm modules. - exetype Modules - autodie::exception - cpan/autodie/t/lib/autodie/test/au/exception.pm - experimental - I18N::LangTags - functions for dealing with RFC3066-style language tags - I18N::LangTags::Detect - detect the user's language preferences - I18N::LangTags::List - tags and names for human languages - IO - load various IO modules - - Amiga::ARexx - Perl extension for ARexx support - Amiga::Exec - Perl extension for low level amiga support - B - The Perl Compiler Backend - B::Concise - Walk Perl syntax tree, printing concise info about ops - B::Showlex - Show lexical variables used in functions or files - B::Terse - Walk Perl syntax tree, printing terse info about ops - B::Xref - Generates cross reference reports for Perl programs - O - Generic interface to Perl Compiler backends - OptreeCheck - check optrees as rendered by B::Concise - Devel::Peek - A data debugging tool for the XS programmer - ExtUtils::Miniperl - write the C code for perlmain.c - Fcntl - load the C Fcntl.h defines - File::DosGlob - DOS like globbing and then some - File::Find - Traverse a directory tree. - File::Glob - Perl extension for BSD glob routine - FileCache - keep more files open than the system permits - GDBM_File - Perl5 access to the gdbm library.
- Hash::Util::FieldHash - Support for Inside-Out Classes - Hash::Util - A selection of general-utility hash subroutines - - ODBM_File - Tied access to odbm files - Opcode - Disable named opcodes when compiling perl code - ops - Perl pragma to restrict unsafe operations when compiling - POSIX - Perl interface to IEEE Std 1003.1 - - Sys::Hostname - Try every conceivable way to get hostname - Tie::Hash::NamedCapture - Named regexp capture buffers - Tie::Memoize - add data to hash when needed - - ext/XS-APItest/t/Block.pm - ext/XS-APItest/t/Null.pm - XS::Typemap - module to test the XS typemaps distributed with perl - arybase - Set indexing base via $[ - ext/arybase/t/scope_0.pm - attributes - get/set subroutine or variable attributes - mro - Method Resolution Order - re - Perl pragma to alter regular expression behaviour - Haiku - Interfaces to some Haiku API Functions - AnyDBM_File - provide framework for multiple DBMs - B::Deparse - Perl compiler backend to produce perl code - B::Op_private - OP op_private flag definitions - - File::Basename - Parse file paths into directory, filename and suffix. - File::Compare - Compare files or filehandles - File::Copy - Copy files or filehandles - File::stat - by-name interface to Perl's built-in stat() functions - FileHandle - supply object methods for filehandles - FindBin - Locate directory of original perl script - - PerlIO - On demand loader for PerlIO layers and root of PerlIO::* name space - SelectSaver - save and restore selected file handle - Symbol - manipulate Perl symbols and their names - Thread - Manipulate threads in Perl (for old code only) - Tie::Array - base class for tied arrays - Tie::Handle - base class definitions for tied handles - - blib - Use MakeMaker's uninstalled version of a package - bytes - Perl pragma to expose the individual bytes of characters - - open - perl pragma to set default PerlIO layers for input and output - overload - Package for overloading Perl operations - overloading - perl pragma to lexically control overloading - - OS2::ExtAttr - Perl access to extended attributes. - OS2::PrfDB - Perl extension for access to OS/2 setting database. - OS2::Process - exports constants for system() call, and process control on OS2. - OS2::DLL - access to DLLs with REXX calling convention. - OS2::REXX - access to DLLs with REXX calling convention and REXX runtime. 
Provides - Amiga::ARexx::Msg in ext/Amiga-ARexx/ARexx.pm - B::OBJECT in ext/B/B.pm - Class::Struct::Tie_ISA in lib/Class/Struct.pm - Getopt::Std in lib/Getopt/Std.pm - Moped::Msg in symbian/ext/Moped/Msg/Msg.pm - OS2::DLL::dll in os2/OS2/OS2-REXX/DLL/DLL - OS2::localMorphPM in os2/OS2/OS2-Process/Process.pm - POSIX::SigAction in ext/POSIX/lib/POSIX.pm - POSIX::SigRt in ext/POSIX/lib/POSIX.pm - POSIX::SigSet in ext/POSIX/lib/POSIX.pm - Pod::Simple::XHTML::LocalPodLinks in ext/Pod-Html/lib/Pod/Html.pm - Testing in ext/File-Find/t/lib/Testing.pm - Tie::ExtraHash in lib/Tie/Hash.pm - Tie::Hash in lib/DBM_Filter.pm - Tie::Hash in lib/Tie/Hash - diagnostics in lib/diagnostics.pm - overload::numbers in lib/overload/numbers.pm - t::BHK in ext/XS-APItest/t/BHK.pm - t::Markers in ext/XS-APItest/t/Markers.pm Examples Other files - AUTHORS - Changes - Copying - MANIFEST - META.json - META.yml - Porting/acknowledgements.pl - Porting/bisect.pl - Porting/corelist-perldelta.pl - Porting/exec-bit.txt - Porting/exercise_makedef.pl - Porting/newtests-perldelta.pl - Porting/pod_lib.pl - Porting/sync-with-cpan - README - cpan/Devel-PPPort/parts/inc/exception - cpan/Memoize/t/expfile.t - cpan/Memoize/t/expmod_n.t - cpan/Test-Simple/t/exit.t - cpan/Test-Simple/t/explain.t - cpan/Test-Simple/t/lib/Test/Simple/sample_tests/extras.plx - cpan/Test-Simple/t/subtest/exceptions.t - cpan/autodie/t/exception_class.t - cpan/autodie/t/exceptions.t - dist/threads/t/exit.t - ext/Amiga-ARexx/ARexx.xs - ext/Amiga-ARexx/Makefile.PL - ext/Amiga-ARexx/tagtypes.h - ext/Amiga-Exec/Exec.xs - ext/Amiga-Exec/tagtypes.h - ext/Amiga-Exec/typemap - ext/B/hints/darwin.pl - ext/B/hints/openbsd.pl - ext/B/t/f_map.t - ext/B/t/f_sort.t - ext/B/t/optree_check.t - ext/B/t/optree_constants.t - ext/B/t/optree_varinit.t - ext/B/t/sv_stash.t - ext/B/t/terse.t - ext/B/t/walkoptree.t - ext/B/t/xref.t - ext/Devel-Peek/Peek.xs - ext/Devel-Peek/t/Peek.t - ext/DynaLoader/Makefile.PL - ext/DynaLoader/dl_dllload.xs - ext/DynaLoader/dl_dlopen.xs - ext/DynaLoader/dl_dyld.xs - ext/DynaLoader/dl_freemint.xs - ext/DynaLoader/dl_hpux.xs - ext/DynaLoader/dl_none.xs - ext/DynaLoader/hints/aix.pl - ext/DynaLoader/hints/android.pl - ext/DynaLoader/hints/gnukfreebsd.pl - ext/DynaLoader/hints/gnuknetbsd.pl - ext/DynaLoader/hints/netbsd.pl - ext/DynaLoader/hints/openbsd.pl - ext/DynaLoader/t/DynaLoader.t - ext/Errno/ChangeLog - ext/Errno/Makefile.PL - ext/Errno/t/Errno.t - ext/Fcntl/t/autoload.t - ext/Fcntl/t/syslfs.t - ext/File-DosGlob/t/DosGlob.t - ext/File-Glob/Makefile.PL - ext/File-Glob/t/taint.t - ext/FileCache/t/01open.t - ext/GDBM_File/Makefile.PL - ext/GDBM_File/t/fatal.t - ext/GDBM_File/typemap - ext/Hash-Util-FieldHash/Changes - ext/Hash-Util-FieldHash/t/01_load.t - ext/Hash-Util-FieldHash/t/03_class.t - ext/Hash-Util/t/Util.t - ext/I18N-Langinfo/t/Langinfo.t - ext/IPC-Open3/t/IPC-Open2.t - ext/IPC-Open3/t/IPC-Open3.t - ext/NDBM_File/hints/MSWin32.pl - ext/NDBM_File/hints/dec_osf.pl - ext/NDBM_File/hints/dynixptx.pl - ext/NDBM_File/hints/gnu.pl - ext/NDBM_File/hints/gnukfreebsd.pl - ext/NDBM_File/hints/sco.pl - ext/NDBM_File/hints/svr4.pl - ext/NDBM_File/t/ndbm.t - ext/ODBM_File/hints/dec_osf.pl - ext/ODBM_File/hints/gnu.pl - ext/ODBM_File/hints/gnuknetbsd.pl - ext/ODBM_File/hints/solaris.pl - ext/ODBM_File/t/odbm.t - ext/ODBM_File/typemap - ext/Opcode/Opcode.xs - ext/POSIX/POSIX.xs - ext/POSIX/hints/dynixptx.pl - ext/POSIX/hints/gnukfreebsd.pl - ext/POSIX/hints/gnuknetbsd.pl - ext/POSIX/hints/mint.pl - ext/POSIX/t/export.t - ext/POSIX/t/iscrash - 
ext/POSIX/t/math.t - ext/POSIX/t/sysconf.t - ext/POSIX/t/usage.t - ext/POSIX/t/waitpid.t - ext/POSIX/t/wrappers.t - ext/PerlIO-encoding/encoding.xs - ext/PerlIO-encoding/t/nolooping.t - ext/PerlIO-scalar/t/scalar.t - ext/PerlIO-via/hints/aix.pl - ext/PerlIO-via/via.xs - ext/Pod-Html/t/cache.t - ext/Pod-Html/t/crossref.t - ext/Pod-Html/t/htmldir1.t - ext/Pod-Html/t/htmldir4.t - ext/Pod-Html/t/htmldir5.t - ext/Pod-Html/t/htmllink.t - ext/SDBM_File/Makefile.PL - ext/SDBM_File/README - ext/SDBM_File/README.too - ext/SDBM_File/biblio - ext/SDBM_File/dbd.c - ext/SDBM_File/hash.c - ext/SDBM_File/pair.h - ext/SDBM_File/sdbm.c - ext/SDBM_File/sdbm.h - ext/SDBM_File/t/prep.t - ext/SDBM_File/typemap - ext/Sys-Hostname/Hostname.xs - ext/Sys-Hostname/t/Hostname.t - ext/VMS-Filespec/t/filespec.t - ext/VMS-Stdio/0README.txt - ext/VMS-Stdio/Makefile.PL - ext/VMS-Stdio/Stdio.xs - ext/VMS-Stdio/t/vms_stdio.t - ext/Win32CORE/Makefile.PL - ext/Win32CORE/t/win32core.t - ext/XS-APItest/APItest.xs - ext/XS-APItest/Makefile.PL - ext/XS-APItest/XSUB-redefined-macros.xs - ext/XS-APItest/core.c - ext/XS-APItest/exception.c - ext/XS-APItest/numeric.xs - ext/XS-APItest/t/addissub.t - ext/XS-APItest/t/blockhooks-csc.t - ext/XS-APItest/t/blockhooks.t - ext/XS-APItest/t/call.t - ext/XS-APItest/t/caller.t - ext/XS-APItest/t/callregexec.t - ext/XS-APItest/t/check_warnings.t - ext/XS-APItest/t/clone-with-stack.t - ext/XS-APItest/t/cophh.t - ext/XS-APItest/t/coplabel.t - ext/XS-APItest/t/exception.t - ext/XS-APItest/t/extend.t - ext/XS-APItest/t/gotosub.t - ext/XS-APItest/t/gv_autoload4.t - ext/XS-APItest/t/gv_fetchmeth.t - ext/XS-APItest/t/gv_fetchmethod_flags.t - ext/XS-APItest/t/keyword_plugin.t - ext/XS-APItest/t/labelconst.aux - ext/XS-APItest/t/lexsub.t - ext/XS-APItest/t/loopblock.t - ext/XS-APItest/t/looprest.t - ext/XS-APItest/t/lvalue.t - ext/XS-APItest/t/magic.t - ext/XS-APItest/t/mro.t - ext/XS-APItest/t/multicall.t - ext/XS-APItest/t/my_exit.t - ext/XS-APItest/t/newDEFSVOP.t - ext/XS-APItest/t/op.t - ext/XS-APItest/t/op_list.t - ext/XS-APItest/t/overload.t - ext/XS-APItest/t/peep.t - ext/XS-APItest/t/pmflag.t - ext/XS-APItest/t/printf.t - ext/XS-APItest/t/ptr_table.t - ext/XS-APItest/t/refs.t - ext/XS-APItest/t/scopelessblock.t - ext/XS-APItest/t/sort.t - ext/XS-APItest/t/stmtasexpr.t - ext/XS-APItest/t/subcall.t - ext/XS-APItest/t/svcat.t - ext/XS-APItest/t/sviscow.t - ext/XS-APItest/t/svpv.t - ext/XS-APItest/t/swaptwostmts.t - ext/XS-APItest/t/sym-hook.t - ext/XS-APItest/t/utf16_to_utf8.t - ext/XS-APItest/t/weaken.t - ext/XS-APItest/t/whichsig.t - ext/XS-APItest/t/xs_special_subs.t - ext/XS-APItest/t/xs_special_subs_require.t - ext/XS-APItest/t/xsub_h.t - ext/XS-APItest/typemap - ext/XS-Typemap/Makefile.PL - ext/XS-Typemap/README - ext/arybase/arybase.xs - ext/arybase/t/akeys.t - ext/arybase/t/arybase.t - ext/arybase/t/av2arylen.t - ext/arybase/t/lslice.t - ext/arybase/t/splice.t - ext/attributes/attributes.xs - ext/mro/Changes - ext/re/t/qr.t - ext/re/t/reflags.t - ext/re/t/strict.t - regen/op_private - symbian/TODO - utils/c2ph.PL
https://metacpan.org/release/RJBS/perl-5.24.0-RC5
CC-MAIN-2018-43
en
refinedweb
Phoenix v1.3.4 Phoenix.Channel behaviour View Source Defines a Phoenix Channel. Channels provide a means for bidirectional communication from clients that integrate with the Phoenix.PubSub layer for soft-realtime functionality. Topics & Callbacks Every time you join a channel, you need to choose which particular topic you want to listen to. The topic is just an identifier, but by convention it is often made of two parts: "topic:subtopic". Using the "topic:subtopic" approach pairs nicely with the Phoenix.Socket.channel/2 allowing you to match on all topics starting with a given prefix: channel "room:*", MyApp.RoomChannel Any topic coming into the router with the "room:" prefix would dispatch to MyApp.RoomChannel in the above example. Topics can also be pattern matched in your channels’ join/3 callback to pluck out the scoped pattern: # handles the special `"lobby"` subtopic def join("room:lobby", _auth_message, socket) do {:ok, socket} end # handles any other subtopic as the room ID, for example `"room:12"`, `"room:34"` def join("room:" <> room_id, auth_message, socket) do {:ok, socket} end Authorization Clients must join a channel to send and receive PubSub events on that channel. Your channels must implement a join/3 callback that authorizes the socket for the given topic. For example, you could check if the user is allowed to join that particular room. To authorize a socket in join/3, return {:ok, socket}. To refuse authorization in join/3, return {:error, reply}. Incoming Events After a client has successfully joined a channel, incoming events from the client are routed through the channel’s handle_in/3 callbacks. Within these callbacks, you can perform any action. Typically you’ll either forward a message to all listeners with broadcast!/3, or push a message directly down the socket with push/3. Incoming callbacks must return the socket to maintain ephemeral state. Here’s an example of receiving an incoming "new_msg" event from one client, and broadcasting the message to all topic subscribers for this socket. def handle_in("new_msg", %{"uid" => uid, "body" => body}, socket) do broadcast! socket, "new_msg", %{uid: uid, body: body} {:noreply, socket} end You can also push a message directly down the socket: # client asks for their current rank, push sent directly as a new event. def handle_in("current_rank", socket) do push socket, "current_rank", %{val: Game.get_rank(socket.assigns[:user])} {:noreply, socket} end Replies In addition to pushing messages out when you receive a handle_in event, you can also reply directly to a client event for request/response style messaging. This is useful when a client must know the result of an operation or to simply ack messages. For example, imagine creating a resource and replying with the created record: def handle_in("create:post", attrs, socket) do changeset = Post.changeset(%Post{}, attrs) if changeset.valid? do post = Repo.insert!(changeset) response = MyApp.PostView.render("show.json", %{post: post}) {:reply, {:ok, response}, socket} else response = MyApp.ChangesetView.render("errors.json", %{changeset: changeset}) {:reply, {:error, response}, socket} end end Alternatively, you may just want to ack the status of the operation: def handle_in("create:post", attrs, socket) do changeset = Post.changeset(%Post{}, attrs) if changeset.valid? 
do Repo.insert!(changeset) {:reply, :ok, socket} else {:reply, :error, socket} end end Intercepting Outgoing Events When an event is broadcasted with broadcast/3, each channel subscriber can choose to intercept the event and have their handle_out/3 callback triggered. This allows the event’s payload to be customized on a socket by socket basis to append extra information, or conditionally filter the message from being delivered. If the event is not intercepted with Phoenix.Channel.intercept/1, then the message is pushed directly to the client: intercept ["new_msg", "user_joined"] # for every socket subscribing to this topic, append an `is_editable` # value for client metadata. def handle_out("new_msg", msg, socket) do push socket, "new_msg", Map.merge(msg, %{is_editable: User.can_edit_message?(socket.assigns[:user], msg)} ) {:noreply, socket} end # do not send broadcasted `"user_joined"` events if this socket's user # is ignoring the user who joined. def handle_out("user_joined", msg, socket) do unless User.ignoring?(socket.assigns[:user], msg.user_id) do push socket, "user_joined", msg end {:noreply, socket} end Broadcasting to an external topic In some cases, you will want to broadcast messages without the context of a socket. This could be for broadcasting from within your channel to an external topic, or broadcasting from elsewhere in your application like a controller or another process. Such can be done via your endpoint: # within channel def handle_in("new_msg", %{"uid" => uid, "body" => body}, socket) do ... broadcast_from! socket, "new_msg", %{uid: uid, body: body} MyApp.Endpoint.broadcast_from! self(), "room:superadmin", "new_msg", %{uid: uid, body: body} {:noreply, socket} end # within controller def create(conn, params) do ... MyApp.Endpoint.broadcast! "room:" <> rid, "new_msg", %{uid: uid, body: body} MyApp.Endpoint.broadcast! "room:superadmin", "new_msg", %{uid: uid, body: body} redirect conn, to: "/" end Terminate On termination, the channel callback terminate/2 will be invoked with the error reason and the socket. If we are terminating because the client left, the reason will be {:shutdown, :left}. Similarly, if we are terminating because the client connection was closed, the reason will be {:shutdown, :closed}. If any of the callbacks return a :stop tuple, it will also trigger terminate with the reason given in the tuple. terminate/2, however, won’t be invoked in case of errors nor in case of exits. This is the same behaviour as you find in Elixir abstractions like GenServer and others. Typically speaking, if you want to clean something up, it is better to monitor your channel process and do the clean up from another process. Similar to GenServer, it would also be possible :trap_exit to guarantee that terminate/2 is invoked. This practice is not encouraged though. Exit reasons when stopping a channel When the channel callbacks return a :stop tuple, such as: {:stop, :shutdown, socket} {:stop, {:error, :enoent}, socket} the second argument is the exit reason, which follows the same behaviour as standard GenServer exits. 
You have three options to choose from when shutting down a channel: :normal - in such cases, the exit won’t be logged, there is no restart in transient mode, and linked processes do not exit :shutdown or {:shutdown, term} - in such cases, the exit won’t be logged, there is no restart in transient mode, and linked processes exit with the same reason unless they’re trapping exits any other term - in such cases, the exit will be logged, there are restarts in transient mode, and linked processes exit with the same reason unless they’re trapping exits Subscribing to external topics Sometimes you may need to programmatically subscribe a socket to external topics in addition to the internal socket.topic. For example, imagine you have a bidding system where a remote client dynamically sets preferences on products they want to receive bidding notifications on. Instead of requiring a unique channel process and topic per preference, a simpler and more efficient approach would be to subscribe a single channel to relevant notifications via your endpoint. For example: defmodule MyApp.Endpoint.NotificationChannel do use Phoenix.Channel def join("notification:" <> user_id, %{"ids" => ids}, socket) do topics = for product_id <- ids, do: "product:#{product_id}" {:ok, socket |> assign(:topics, []) |> put_new_topics(topics)} end def handle_in("watch", %{"product_id" => id}, socket) do {:reply, :ok, put_new_topics(socket, ["product:#{id}"])} end def handle_in("unwatch", %{"product_id" => id}, socket) do {:reply, :ok, MyApp.Endpoint.unsubscribe("product:#{id}")} end defp put_new_topics(socket, topics) do Enum.reduce(topics, socket, fn topic, acc -> topics = acc.assigns.topics if topic in topics do acc else :ok = MyApp.Endpoint.subscribe(topic) assign(acc, :topics, [topic | topics]) end end) end end Note: the caller must be responsible for preventing duplicate subscriptions. After calling subscribe/1 from your endpoint, the same flow applies to handling regular Elixir messages within your channel. Most often, you’ll simply relay the %Phoenix.Socket.Broadcast{} event and payload: alias Phoenix.Socket.Broadcast def handle_info(%Broadcast{topic: _, event: ev, payload: payload}, socket) do push socket, ev, payload {:noreply, socket} end Logging By default, channel "join" and "handle_in" events are logged, using the level :info and :debug, respectively. Logs can be customized per event type or disabled by setting the :log_join and :log_handle_in options when using Phoenix.Channel. For example, the following configuration logs join events as :info, but disables logging for incoming events: use Phoenix.Channel, log_join: :info, log_handle_in: false Summary Functions Broadcast an event to all subscribers of the socket topic Same as broadcast/3, but raises if broadcast fails Broadcast event from pid to all subscribers of the socket topic Same as broadcast_from/3, but raises if broadcast fails Defines which Channel events to intercept for handle_out/3 callbacks Sends event to the socket Replies asynchronously to a socket push Generates a socket_ref for an async reply Types Functions Broadcast an event to all subscribers of the socket topic. The event’s message must be a serializable map. Examples iex> broadcast socket, "new_message", %{id: 1, content: "hello"} :ok Same as broadcast/3, but raises if broadcast fails. Broadcast event from pid to all subscribers of the socket topic. The channel that owns the socket will not receive the published message.
The event’s message must be a serializable map. Examples iex> broadcast_from socket, "new_message", %{id: 1, content: "hello"} :ok Same as broadcast_from/3, but raises if broadcast fails. Defines which Channel events to intercept for handle_out/3 callbacks. By default, broadcasted events are pushed directly to the client, but intercepting events gives your channel a chance to customize the event for the client to append extra information or filter the message from being delivered. Note: intercepting events can introduce significantly more overhead if a large number of subscribers must customize a message since the broadcast will be encoded N times instead of a single shared encoding across all subscribers. Examples intercept ["new_msg"] def handle_out("new_msg", payload, socket) do push socket, "new_msg", Map.merge(payload, is_editable: User.can_edit_message?(socket.assigns[:user], payload) ) {:noreply, socket} end handle_out/3 callbacks must return one of: {:noreply, Socket.t} | {:stop, reason :: term, Socket.t} Sends event to the socket. The event’s message must be a serializable map. Examples iex> push socket, "new_message", %{id: 1, content: "hello"} :ok reply(socket_ref(), reply()) :: :ok Replies asynchronously to a socket push. Useful when you need to reply to a push that can’t otherwise be handled using the {:reply, {status, payload}, socket} return from your handle_in callbacks. reply/2 will be used in the rare cases you need to perform work in another process and reply when finished by generating a reference to the push with socket_ref/1. Note: In such cases, a socket_ref should be generated and passed to the external process, so the socket itself is not leaked outside the channel. The socket holds information such as assigns and transport configuration, so it’s important to not copy this information outside of the channel that owns it. Examples def handle_in("work", payload, socket) do Worker.perform(payload, socket_ref(socket)) {:noreply, socket} end def handle_info({:work_complete, result, ref}, socket) do reply ref, {:ok, result} {:noreply, socket} end socket_ref(Phoenix.Socket.t()) :: socket_ref() Generates a socket_ref for an async reply. See reply/2 for example usage. Link to this section Callbacks code_change(old_vsn, Phoenix.Socket.t(), extra :: term()) :: {:ok, Phoenix.Socket.t()} | {:error, reason :: term()} when old_vsn: term() | {:down, term()} handle_in(event :: String.t(), msg :: map(), Phoenix.Socket.t()) :: {:noreply, Phoenix.Socket.t()} | {:reply, reply(), Phoenix.Socket.t()} | {:stop, reason :: term(), Phoenix.Socket.t()} | {:stop, reason :: term(), reply(), Phoenix.Socket.t()} handle_info(term(), Phoenix.Socket.t()) :: {:noreply, Phoenix.Socket.t()} | {:stop, reason :: term(), Phoenix.Socket.t()} join(topic :: binary(), auth_msg :: map(), Phoenix.Socket.t()) :: {:ok, Phoenix.Socket.t()} | {:ok, map(), Phoenix.Socket.t()} | {:error, map()} terminate(msg :: map(), Phoenix.Socket.t()) :: {:shutdown, :left | :closed} | term()
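To make the callback shapes above concrete, here is a minimal sketch of a complete channel module; the module, topic, and event names are illustrative, not part of the Phoenix API:
defmodule MyApp.RoomChannel do
  use Phoenix.Channel

  # Authorize every client joining the "room:lobby" subtopic.
  def join("room:lobby", _params, socket) do
    {:ok, socket}
  end

  # Fan an incoming event out to every subscriber of this topic.
  def handle_in("new_msg", %{"body" => body}, socket) do
    broadcast!(socket, "new_msg", %{body: body})
    {:noreply, socket}
  end

  # Acknowledge pushes that expect a reply.
  def handle_in("ping", _payload, socket) do
    {:reply, {:ok, %{pong: true}}, socket}
  end
end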
https://hexdocs.pm/phoenix/Phoenix.Channel.html
CC-MAIN-2018-43
en
refinedweb
- Introduction - Community - User Guide - Contacting Technical Support - Support Options - Severity Definitions and Response Times - Accessing the Support Portal - Creating a Case - What to Include in the Case - Case Submission and Follow-Up - Escalating a Case (Non Severity 1) - Best Practices for Contacting Support - Status Site - Professional Services - Customer Success - Summary Introduction This guide provides an overview of the various help and support options available to you as a Dell Boomi customer or partner. As a new or existing user, you have access to a vast list of resources including the Dell Boomi Community, Support Portal, User Guide, Status site, Services resources, and Customer Success to help you maximize your success with the tool. We are here to help you succeed. Please refer to this guide often to help you ramp-up quickly. You can save this document as PDF. In the upper right corner, go to Actions > View as PDF, then save from your browser. Quick Links Community The Dell Boomi Community is your one-stop-shop to find answers, share best practices and discuss integration topics with fellow users, and stay up to date with Dell Boomi products. Whether you're a customer, partner, or developer, a brand new user or experienced architect, the community is your single destination for all the resources you need to be successful. Community Features - About the Community - Learn how to get started and best leverage the community. - Community Forums - Ask questions and discuss integration topics with your fellow users. - Knowledge Center - Browse Dell Boomi-authored content including knowledge articles, implementation guides, training videos, and more. - Ideas for Boomi - Share your ideas for innovating and enhancing our products. Ideas for Boomi is restricted to registered customer users only. - News - Stay up to date with news and announcements on various topics such as best practices, feature deep dives, integration musings, and community announcements. Accessing the Community The community is publicly accessible and anyone including guests can view the community. However, you will need to log in with your AtomSphere credentials to post questions and comments. To access the community from within AtomSphere, go to Help & Feedback menu (question mark icon) > Community. Alternatively you can access the community directly at. Community Forums The forums are where community members come together to ask and answer questions about integration development and design, error messages, and product features. You can easily search existing discussions to find correct and helpful answers or post a new question. If you find a question or topic that you about, why not respond and share your expertise? Knowledge Center Browse Dell Boomi authored content including knowledge articles, implementation guides, video library, and more. Here you can learn more about our products and get help resolving technical issues. User Guide The User Guide is the official product documentation. It contains the core terms and concepts you need to understand when building integration processes. To access the User Guide from within your AtomSphere account, go to Help & Feedback menu (question mark icon) > User Guide. Alternatively go directly to: - AtomSphere () - MDM () - API Management () Contacting Technical Support Support Services: Dell Boomi’s goal is to provide support according to the tables below, depending on the level of support you’ve purchased. 
Standard Business Hours are defined by region: Asia Pacific (APAC): 8am — 8pm GMT+11, Monday — Friday Americas: 8am — 8pm ET, Monday — Friday Europe, Middle East, Africa (EMEA): 8am — 8pm GMT, Monday — Friday Extended Business Hours are from Sunday 5pm – Friday 8pm ET. Support Options - Portal (Recommended) – Through your AtomSphere Platform account, a client may access the support portal to log a case with the Dell Boomi Technical Support team. - Email – The client may submit an email problem request to [email protected] which will log a support case. Response times are not guaranteed with this method. - Phone – Available 24x5 starting Sunday 8 p.m. ET through Friday, 8 p.m. ET. The client may call the technical support number located within the portal to submit the details of their issue to create a support case or to report a Sev1 issue (24x7 is available with Premier Plus Support, 24x5 is available with Premier Support). - Live Chat – Available 24x5 starting Sunday 8 p.m. ET through Friday, 8 p.m. ET. Live Chats are available for simple problems that can be resolved quickly (24x7 is available with Premier Plus Support, 24x5 is available with Premier Support). Severity Definitions and Response Times Accessing the Support Portal To access the support portal from within AtomSphere, go to Help & Feedback menu (question mark icon) > Support. On the Support home page, look for the Contact Support section with the contact options available to you based upon your support agreement: - Chat – Start a Live Chat with a support agent - Open a Case – Open a new support case - Call Us – Click to display the support phone number for your support level Creating a Case Make sure to fill in all pertinent information in the form. This will help our support team resolve your issue most efficiently. When choosing Severity Level, remember that Severity 1 issues only include security breaches or complete system failures in which significant parts of the system are not secure or are inaccessible or inoperable and there is no viable workaround. Note that Severity 1 only applies to production environments, not development or testing. Most issues, production or not, will be Severity 2. - Product: Select the product to which this case pertains - Type of Issue: Choose "Production Error" or "Development Error" depending on environment - Severity: Leave default of "2" - Phone: *Mandatory* for call back assistance - Status: Leave default of "New" - Parent Case: If this case relates to an existing case, enter the other case number, otherwise leave blank What to Include in the Case Description Information Section In this section please provide as much information as possible. The more information you can provide about the issue, the context in which it occurred, your relevant environment details, and what you have done already to troubleshoot, the quicker we can start working to a resolution. When in doubt, more details are always better. - Subject: Summary of the issue as specifically as possible. Examples: - Less helpful: "Process not working please help" - More helpful: "Unknown host exception when trying to connect to HTTP endpoint" - Description: Provide concise details of the issue and include any helpful information that will more easily point to the error logged in your account. - Error Message: Refer to the error in the Process Reporting screen by clicking on the execution time stamp then clicking on the error below the process name within the Document view. 
Copy & Paste the error message from the Stack trace window. - Steps Taken: Share the steps you have taken to troubleshoot the issue thus far. This can provide additional clarification on the issue and avoid recommendations that you have already tried. Copying URL links from your browser will help point to important components that will help Support assess the issue more quickly and thoroughly. - Business Impact: Provide the business context of the issue, especially for severe or escalated issues. Environment Information Section In this section, please identify and highlight the specific components that are failing. - Component/Process URL: This is a browser URL link pointing to the exact execution or component that you are trying to troubleshoot. Follow these steps to ensure that you are copying & pasting the proper URL. - Find the execution that failed, click on the gear icon and select View Extended Information from the drop down menu 2. Highlight the Execution ID from the menu, Right click & copy the ID string and Close the window 3. Click on the Add a Filter button and Select Execution ID from the Filter Type dropdown 4. Paste the Execution ID into the box and Select Apply 5. Note how it isolates the Execution in the Process Reporting view and Copy & Paste the full URL string into the Component/Process URL box in the Case - Operation System: Choose one from the selection menu: - Core Concept: Select the Boomi Platform category that is most closely related to your question/issue: - Connectors: Select the Connector that is most closely related to your question/issue. This option is only available if you select "Connector" as the Core Concept above. - Process Component: Select the Process Component that is most closely related to your question/issue. This option is only available if you select "Process Component" as the Core Concept above. - Select Submit or Submit & Add Attachment. Case Submission and Follow-Up After submitting the case, you will receive a response according to your Boomi Support Policy. As we troubleshoot the issue, we will communicate with you by posting comments to the case. You will receive email notifications alerting you an update to your case is available and you can view the case updates via the Support Portal > Cases tab. Please do not reply to the email notification. To respond, simply add a comment to the case through the Support Portal. Dell Boomi Support can also add additional team members to your case so that others on your team can receive case comment notification emails as well. If you wish to add additional team members to your case, please add a comment in the case with the email addresses of your team members and request that they also be added to the case. Severity 1 If the platform user interface is inaccessible or your process running on a hosted Atom Cloud environment are failing, please check Status () for the latest platform status before contacting support. Again a Severity 1 issue means Production use of Boomi products on a primary business service, major application, or mission critical system is stopped or so severely impacted that the client cannot reasonably continue work. 
Severity Level 1 problems could have the following characteristics: - Platform down - Atom Cloud down - MDM Cloud down - "Production" processes hang or crash - "Production" data loss or data corruption - Critical functionality not working after a Dell Boomi release - Rollback functionality not working after an atom or connector update was applied during release control For Severity Level 1 problems, we will begin work on the problem within the guidelines of the Boomi Service Description. For issues that may involve a Dell Boomi OEM or Reseller Partner, the expectation is that partner resources will be available in Severity Level 1 situations, and the partner will reasonably cooperate with Boomi to resolve the issue. Severity 1 Call Procedure Call the Premier Support Phone Number for your region (available in the support portal): - Please have your PIN # available. - The support team will respond to the call by creating a new case (if not already created) and starting to investigate the issue. - Weekend calls will be retrieved by Dell Boomi’s call service and immediately relayed to the on-call support team member via phone and email. - The support team member begins the investigation of the issue and updates the case comments within the guidelines of the Dell Boomi SLA. - The support team will engage the backline operations and engineering teams as necessary. - Case comments will be updated at regular intervals to keep the client informed of progress on the case. Comments can be viewed through the support portal. - The support team will communicate the resolution to the client through email and case comments. - The support team reviews any client comments and follows up to verify a complete resolution. Escalating a Case (Non Severity 1) - Contact Boomi Support through the case or the Premier Support number with additional information to pass along to the Boomi Support Team regarding escalating the case. - Based on current case volume, the escalated case will be prioritized for additional focus and the Support Manager will instruct the support team to provide attention to the escalated case. - If the escalated case requires backline engineering attention, you will be notified and the Support Manager will act as a point of contact with the Engineering team. - The Support Team will provide updates regarding the case through the support portal, email and phone. Best Practices for Contacting Support - For faster resolution, enter a case through the support portal and provide as much diagnostic information as possible. This reduces the number of interactions to resolve a case. - For Severity 1 cases, enter the case through the portal with diagnostic information, add ‘Severity 1 / Production Down’ to the case description, and then call the Premier Support phone number (in the support portal) with your PIN # and reference the case number. If your support plan is not Premier or Premier Plus, call the number referenced on the status.boomi.com website, 1-866-407-6599 (Toll free in US), 1-503-470-5056 (International) and reference the case number. - Leverage the Dell Boomi Community resources for knowledge base, community forums, and training materials. - Use Live Chat for simple questions and the support portal for complex questions. - Check status.boomi.com for the current platform status. - Be prepared to share your business impact and intended process flow. Status Site The Status site (status.boomi.com) displays the current status of the AtomSphere platform and various Atom Cloud hosted environments.
In addition, you can find: - Current status and updates to system-wide issues - Live and historical data on system performance - Notifications on scheduled maintenance windows and release schedule - Information on how we secure your data Professional Services With our training, support, and consulting resources, you can accelerate implementation, reduce time-to-value with the Dell Boomi platform, and ensure the success of your business-to-business (B2B) application and data integration initiatives. Training and Certification Our training curriculum includes instructor-led on-site instruction at a Dell Boomi training facility or your location. We also offer a remote training option and other self-service options. - Classroom training at Dell Boomi: Our live, interactive classes focus on real business scenarios and Dell Boomi best practices. Classroom training is relevant for novice users and advanced developers and is the fastest, most thorough way to build your integration skill set. - Remote training: We provide remote training options offered in a public class setting. - Custom training: A Dell Boomi Certified Instructor will work with your staff to develop custom courses and deliver them remotely or in-person at your facility. - Self-service training: Dell Boomi customers have on-demand access to a growing library of recorded training sessions, tutorials, and fully configured integrations. For more information please see Training and Certification. Consulting Services The Dell Boomi professional services team offers expertise in integration development and best practices. We can advise your organization at critical implementation checkpoints, deliver a complete, production-ready integration, or provide custom training for your IT staff. Working with our consultants, you’ll gain confidence, knowing that your implementation will succeed. With Dell Boomi consulting, you can: - Reduce time-to-value through faster implementation - Improve integration design - Avoid costly rework - Identify additional opportunities to increase ROI Our consultants can help your organization with: - Architectural design and validation - Process development - On-site or remote JumpStarts - Team augmentation - Best practices review Our engagement models are flexible, so please click here to learn more. Customer Success The Customer Success team provides assistance to help you get the most out of your Dell Boomi investment. We want you to be a completely satisfied “Customer for Life”. We offer a variety of programs to improve your overall experience, including quarterly check-ins, proactive monitoring, and serving as an escalation point for account and technical issues. For more information please click here. Summary Dell Boomi AtomSphere integration Platform as a Service enables you to integrate any combination of cloud and on-premises applications without software, appliances, or coding. The resources in this guide are available to help you be successful from day one. Thanks for choosing Dell Boomi!
https://community.boomi.com/docs/DOC-2491
CC-MAIN-2018-43
en
refinedweb
Artificially Intelligent - Text Sentiment Analysis By Frank La Vigne | May 2018 | Get the Code One of the truisms of the modern data-driven world is that the velocity and volume of data keeps increasing. We’re seeing more data generated each day than ever before in human history. And nowhere is this rapid growth more evident than in the world of social media, where users generate content at a scale previously unimaginable. Twitter users, for example, collectively send out approximately 6,000 tweets every second, according to tracking site Internet Live Stats (internetlivestats.com/twitter-statistics). At that rate, there are about 350,000 tweets sent per minute, 500 million tweets per day, and about 200 billion tweets per year. Keeping up with this data stream to evaluate content would be impossible even for the largest teams—you just couldn’t hire enough people to scan Twitter to evaluate the sentiment of its user base at any given moment. Fortunately, the use case for analyzing every tweet would be an extreme edge case. There are, however, valid business motives for tracking sentiment, be it against a specific topic, search term or hashtag. While this narrows the number of tweets to analyze significantly, the sheer volume of the data to analyze still makes it impractical to analyze the sentiments of the tweets in any meaningful way. Thankfully, analyzing the overall sentiment of text is a process that can easily be automated through sentiment analysis. Sentiment analysis is the process of computationally classifying and categorizing opinions expressed in text to determine whether the attitude expressed within demonstrates a positive, negative or neutral tone. In short, the process can be automated and distilled to a mathematical score indicating tone and subjectivity. Setting Up an Azure Notebook In February (msdn.com/magazine/mt829269), I covered in detail Jupyter notebooks and the environments in which they can run. While any Python 3 environment can run the code in this article, for the sake of simplicity, I’ll use Azure Notebooks. Browse over to the Azure Notebooks service Web site at notebooks.azure.com and sign in with your Microsoft ID credentials. Create a new Library with the name Artificially Intelligent. Under the Library ID field enter “ArtificiallyIntelligent” and click Create. On the following page, click on New to create a new notebook. Enter a name in the Item Name textbox, choose Python 3.6 Notebook from the Item type dropdown list and click New (Figure 1). Figure 1 Creating a New Notebook with a Python 3.6 Kernel Click on the newly created notebook and wait for the service to connect to a kernel. Sentiment Analysis in Python Once the notebook is ready, enter the following code in the empty cell and run the code in the cell. The results that appear will resemble the following: Polarity refers to how negative or positive the tone of the input text rates from -1 to +1, with -1 being the most negative and +1 being the most positive. Subjectivity refers to how subjective the statement rates from 0 to 1 with 1 being highly subjective. With just three lines of code, I could analyze not just sentiment of a fragment of text, but also its subjectivity. How did something like sentiment analysis, once considered complicated, become so seemingly simple? Python enjoys a thriving ecosystem, particularly in regard to machine learning and natural language processing (NLP). The code snippet above relies on the TextBlob library (textblob.readthedocs.io/en/dev). 
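For reference, a three-line TextBlob analysis looks like the following; this is a sketch using the library's documented API, and the input string is illustrative:
from textblob import TextBlob

opinion = TextBlob("Azure Notebooks makes this workflow pleasantly simple.")
print(opinion.sentiment)  # prints Sentiment(polarity=..., subjectivity=...)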
TextBlob is an open source library for processing textual data, providing a simple API for diving into common natural language processing (NLP) tasks. These tasks include sentiment analysis and much more. In the blank cell below the results, enter the following code and execute it: The results state that the phrase “the sky is blue” has a polarity of 0.0 and a subjectivity of 0.1. This means that the text is neutral in tone and scores low in subjectivity. In the blank cell immediately underneath the results, enter the following code and execute the cell: Note that the algorithm correctly identified that the contents of simple_text1 had a negative sentiment (-0.8) and that the statement is quite subjective (0.9). Additionally, the algorithm correctly inferred the positive sentiment of simple_text2 (0.625) and its highly subjective nature (1.0). However, the algorithm does have significant difficulties parsing the more subtle nuances of human language. Sarcasm, for instance, is not only hard to detect, but may throw off the results. Imagine a scenario where a restaurant extracts reviews from an online review site like Yelp and automatically publishes reviews with a positive sentiment on their Web site and social media. Enter the following code into an empty cell and execute it: sample_customer_review1 = TextBlob("The burgers at this place will make you ill with joy.") print(sample_customer_review1.sentiment) sample_customer_review2 = TextBlob("Whenever I want to take a sick day, I eat here the night before and it is always a sure fire win!") print(sample_customer_review2.sentiment) Clearly the sentiment of these two reviews are negative. Yet the algorithm seems to think otherwise, with both reviews scoring as having positive sentiment scores, 0.15 and 0.26 respectively. In this case, the restaurant would likely not want either of these reviews highlighted on any platform. NLP systems have yet to grasp a good understanding of sarcasm, although there is a lot of research currently being done in this area (thesarcasmdetector.com/about). Connecting to Twitter So far, I have only run small bits of text through the TextBlob analyzer. A more practical use of this technology is to feed it user-generated data, ideally in near-real time. Fortunately Twitter, with its approximately 327 million active users (), provides a constant stream of text to analyze. To connect with Twitter’s API, I need to register an application with Twitter to generate the necessary credentials. In a browser, go to apps.twitter.com and, if needed, log in with your Twitter credentials. Click the Create New App button to bring up the Create an application form as shown in Figure 2. Enter a name, description and a Web site for the app. For the purposes of this article, the Web site address does not matter, so enter a valid URL. Click the checkbox to agree to the terms of the Twitter Developer Agreement and click the Create your Twitter application button. Figure 2 The Twitter Create Application Form On the following screen, look for Consumer Key (API Key) under the Application Settings section. Click on the “manage keys and access tokens” link. On the page that follows, click the Create my access token button, as shown in Figure 3, to create an access token. Make note of the following four values shown on this page: Consumer Key (API Key), Consumer Secret (API Secret), Access Token and Access Token Secret. 
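Rather than pasting these four values straight into a shared notebook, you might load them from environment variables; a small sketch follows, where the variable names are illustrative and not part of the Twitter or Tweepy APIs:
import os

consumer_key = os.environ["TWITTER_CONSUMER_KEY"]
consumer_secret = os.environ["TWITTER_CONSUMER_SECRET"]
access_token = os.environ["TWITTER_ACCESS_TOKEN"]
access_token_secret = os.environ["TWITTER_ACCESS_TOKEN_SECRET"]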
Figure 3 Twitter Application Keys and Access Tokens Screen Using Tweepy to Read Tweets Tweepy is a Python library that simplifies the interaction between Python code and the Twitter API. More information about Tweepy can be found at docs.tweepy.org/en/v3.5.0. At this time, return to the Jupyter notebook and enter the following code to install the Tweepy API. The exclamation mark instructs Jupyter to execute a command in the shell: Once the code executes successfully, the response text in the cell will read: “Successfully installed tweepy-3.6.0,” although the specific version number may change. In the following cell, enter the code in Figure 4 into the newly created empty cell and execute it. import tweepy consumer_key = "[Insert Consumer Key value]" consumer_secret = "[Insert Consumer Secret value]" access_token = "[Insert Access Token value]" access_token_secret = "[Insert Access Token Secret value]" authentication_info = tweepy.OAuthHandler(consumer_key, consumer_secret) authentication_info.set_access_token(access_token, access_token_secret) twitter_api = tweepy.API(authentication_info) spacex_tweets = twitter_api.search("#spacex") for tweet in spacex_tweets: print(tweet.text) analysis = TextBlob(tweet.text) print(analysis.sentiment) The results that come back should look similar to the following: #ElonMusk deletes own, #SpaceX and #Tesla Facebook pages after #deletefacebook Sentiment(polarity=0.0, subjectivity=0.0) RT @loislane28: Wow. did @elonmusk just delete #SpaceX and #Tesla from Facebook? Sentiment(polarity=0.0, subjectivity=0.0) Keep in mind that as the code executes a search on live Twitter data, your results will certainly vary. The formatting is a little confusing to read. Modify the for loop in the cell to the following and then re-execute the code. Adding the pipe characters to the output should make it easier to read. Also note that the sentiment property’s two fields, polarity and subjectivity, can be displayed individually. Load Twitter Sentiment Data Into a DataFrame The previous code created a pipe-delineated list of tweet content and sentiment scores. A more useful structure for further analysis would be a DataFrame. A DataFrame is a two-dimensional-labeled data structure. The columns may contain different value types. Similar to a spreadsheet or SQL table, DataFrames provide a familiar and simple mechanism to work with datasets. DataFrames are part of the Pandas library. As such, you will need to import the Pandas library along with Numpy. Insert a blank cell below the current cell, enter the following code and execute: The results now will display in an easier to read tabular format. However, that’s not all that the DataFrames library can do. Insert a blank cell below the current cell, enter the following code, and execute: print ("Polarity Stats") print ("Avg", tweet_df["Polarity"].mean()) print ("Max", tweet_df["Polarity"].max()) print ("Min", tweet_df["Polarity"].min()) print ("Subjectivity Stats") print ("Avg", tweet_df["Subjectivity"].mean()) print ("Max", tweet_df["Subjectivity"].max()) print ("Min", tweet_df["Subjectivity"].min()) By loading the tweet sentiment analysis data into a DataFrame, it’s easier to run and analyze the data at scale. However, these descriptive statistics just scratch the surface of the power that DataFrames provide. For a more complete exploration of Pandas DataFrames in Python, please watch the webcast, “Data Analysis in Python with Pandas,” by Jonathan Wood at bit.ly/2urCxQX. 
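For concreteness, the tweet_df used in the stats cell can be assembled like this, reusing spacex_tweets and TextBlob from the earlier cells; this is a sketch in which the column names match the stats code above and the construction details are inferred:
import pandas as pd
import numpy as np

rows = []
for tweet in spacex_tweets:
    analysis = TextBlob(tweet.text)
    rows.append({"Tweet": tweet.text,
                 "Polarity": analysis.sentiment.polarity,
                 "Subjectivity": analysis.sentiment.subjectivity})

# Build a two-dimensional, labeled structure from the per-tweet rows.
tweet_df = pd.DataFrame(rows, columns=["Tweet", "Polarity", "Subjectivity"])
tweet_df.head()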
Wrapping Up With the velocity and volume of data continuing to rise, businesses large and small must find ways to leverage machine learning to make sense of the data and turn it into actionable insight. Natural Language Processing, or NLP, is a class of algorithms that can analyze unstructured text and parse it into machine-readable structures, giving access to one of the key attributes of any body of text: sentiment. Not too long ago, this was out of reach of the average developer, but the TextBlob library now brings this technology to the Python ecosystem. While the algorithms can sometimes struggle with the subtleties and nuances of human language, they provide an excellent foundation for making sense of unstructured data. As demonstrated in this article, the effort to analyze a given block of text for sentiment in terms of negativity or subjectivity is now trivial. Thanks to a vibrant Python ecosystem of third-party open source libraries, it’s also easy to source data from live social media sites, such as Twitter, and pull in users’ tweets in real time. Another Python library, Pandas, simplifies the process of performing advanced analytics on this data. With thoughtful analysis, businesses can monitor social media feeds and obtain awareness of what customers are saying and sharing about them. Thanks to the following Microsoft technical expert for reviewing this article: Andy Leonard.
https://msdn.microsoft.com/en-us/magazine/mt846650
CC-MAIN-2018-43
en
refinedweb
Here are the assignment criteria: Implement exception handling to validate user input as was described in class. You need to at least catch type mismatch exception. No late or by emailed homework will be accepted.

We are to have a student enter their name, id and grade. The name is a string, grade a float and id an int. This is what I have so far, and I am having a very hard time with this. Please, any input would be awesome. Please dumb it down as much as possible; I am not a CS major and they force us to take this class for my degree plan.

import java.util.Scanner;

public class tryCatchTest {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        studentInfo studentInfoObject = new studentInfo();
        System.out.println("What is your name?");
        String tempName = input.nextLine();
        studentInfoObject.setName(tempName);
        studentInfoObject.validName();
        try {
            int y = Integer.parseInt(tempName);
            System.out.println(y);
        } catch(NumberFormatException e) {
            System.out.println("Your name is not valid. Please enter a string");
        }
        System.out.println("Please enter your id number:");
        int tempId = input.nextInt();
        studentInfoObject.setId(tempId);
        studentInfoObject.validId();
        try {
            int x = Integer.parseInt(tempId);
            System.out.println(x);
        } catch(NumberFormatException e) {
            System.out.println("Your id is not a valid. Please enter a whole number");
        }
    }
}

public class studentInfo {
    private String studentName;
    private int studentId;
    private float gradeAvg;

    public void setName(String name){
        studentName = name;
    }
    public String getName(){
        return studentName;
    }
    public void validName(){
        System.out.printf("Your name is being validated %s", getName());
    }
    public void setId(int id){
        studentId = id;
    }
    public int getId(){
        return studentId;
    }
    public void validId(){
        System.out.printf("Your id is being validated %s", getId());
    }
    public void setAverage(float average){
        gradeAvg = average;
    }
    public float getAverage(){
        return gradeAvg;
    }
    public void validAverage(){
        System.out.printf("Your average is being validated %s", getAverage());
    }
    public studentInfo() {
    }
}
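Note for other readers: the posted code won't compile as-is, because Integer.parseInt takes a String while tempId is already an int (input.nextInt() does the parsing itself, and throws on bad input). A minimal sketch of one way to satisfy the "catch type mismatch" requirement is below; the class and variable names are illustrative:

import java.util.InputMismatchException;
import java.util.Scanner;

public class TryCatchTest {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);

        System.out.println("Please enter your id number:");
        int id = 0;
        boolean valid = false;
        while (!valid) {
            try {
                id = input.nextInt();     // throws InputMismatchException on non-integer input
                valid = true;
            } catch (InputMismatchException e) {
                System.out.println("Your id is not valid. Please enter a whole number");
                input.nextLine();         // discard the bad token before asking again
            }
        }
        System.out.println("id: " + id);
    }
}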
https://www.daniweb.com/programming/software-development/threads/388465/java-hw-help
CC-MAIN-2018-43
en
refinedweb
ASP.NET Core MVC with Entity Framework Core - Tutorial 1 of 10

Note: This tutorial has not been upgraded to ASP.NET Core 2.1. The ASP.NET Core 2.0 version of this tutorial is available by selecting ASP.NET Core 2.0 above the table of contents or at the top of the page. The ASP.NET Core 2.1 Razor Pages version of this tutorial has many improvements over the 2.0 version. This tutorial teaches ASP.NET Core MVC and Entity Framework Core with controllers and views. Razor Pages is a page-based programming model that makes building web UI easier and more productive. We recommend the Razor Pages tutorial over the MVC version. The Razor Pages tutorial:

- Is easier to follow. For example, the scaffolding code has been significantly simplified.
- Provides more EF Core best practices.
- Uses more efficient queries.
- Is more current with the latest API.
- Covers more features.
- Is the preferred approach for new application development.

If you choose this tutorial over the Razor Pages version, let us know why in this GitHub issue.

By Tom Dykstra and Rick Anderson

Prerequisites

Install one of the following:

- CLI tooling: Windows, Linux, or macOS: .NET Core SDK 2.0 or later
- IDE/editor tooling
  - Windows: Visual Studio for Windows, with the ASP.NET and web development workload and the .NET Core cross-platform development workload
  - Linux: Visual Studio Code
  - macOS: Visual Studio for Mac

Create an ASP.NET Core MVC web application

Open Views/Shared/_Layout.cshtml and change the site title and menu links so that the layout looks like the following:

<body>
    <nav class="navbar navbar-inverse navbar-fixed-top">
        <div class="container">
            <div class="navbar-header">
                <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
                    <span class="sr-only">Toggle navigation</span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                </button>
                <a asp-area="" asp-controller="Home" asp-action="Index" class="navbar-brand">Contoso University</a>
            </div>
            <div class="navbar-collapse collapse">
                <ul class="nav navbar-nav">
                    <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
                    <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
                </ul>
            </div>
        </div>
    </nav>
    ...
    <environment include="Production">
        <script src="~/js/site.min.js" asp-append-version="true"></script>
    </environment>
    @RenderSection("Scripts", required: false)
</body>
</html>

There's a one-to-many relationship between Student and Enrollment entities, and there's a one-to-many relationship between Course and Enrollment entities. In other words, a student can be enrolled in any number of courses, and a course can have any number of students enrolled in it. In the following sections you'll create a class for each one of these entities.

The Student entity

In the Models folder, create a class file named Student.cs and replace the template code with the following code.

using System;
using System.Collections.Generic;

namespace ContosoUniversity.Models
{
    public class Student
    {
        public int ID { get; set; }
        public string LastName { get; set; }
        public string FirstMidName { get; set; }
        public DateTime EnrollmentDate { get; set; }

        public ICollection<Enrollment> Enrollments { get; set; }
    }
}

The ID property will become the primary key column of the database table that corresponds to this class. By default, the Entity Framework interprets a property that's named ID or classnameID as the primary key.

The Enrollments property is a navigation property. Navigation properties hold other entities that are related to this entity. In this case, the Enrollments property of a Student entity will hold all of the Enrollment entities that are related to that Student entity. In other words, if a given Student row in the database has two related Enrollment rows (rows that contain that student's primary key value in their StudentID foreign key column), that Student entity's Enrollments navigation property will contain those two Enrollment entities.

The Enrollment entity

In the Models folder, create Enrollment.cs and replace the existing code with the following code:

namespace ContosoUniversity.Models
{
    public enum Grade
    {
        A, B, C, D, F
    }

    public class Enrollment
    {
        public int EnrollmentID { get; set; }
        public int CourseID { get; set; }
        public int StudentID { get; set; }
        public Grade? Grade { get; set; }

        public Course Course { get; set; }
        public Student Student { get; set; }
    }
}

The Grade property is an enum. The question mark after the Grade type declaration indicates that the Grade property is nullable. A grade that's null is different from a zero grade -- null means a grade isn't known or hasn't been assigned yet.

The StudentID property is a foreign key, and the corresponding navigation property is Student. An Enrollment entity is associated with one Student entity, so the property can only hold a single Student entity (unlike the Student.Enrollments navigation property you saw earlier, which can hold multiple Enrollment entities).

The CourseID property is a foreign key, and the corresponding navigation property is Course. An Enrollment entity is associated with one Course entity.

Entity Framework interprets a property as a foreign key property if it's named <navigation property name><primary key property name> (for example, StudentID for the Student navigation property since the Student entity's primary key is ID). Foreign key properties can also be named simply <primary key property name> (for example, CourseID, since the Course entity's primary key is CourseID).

The Course entity

In the Models folder, create Course.cs and replace the existing code with the following code:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;

namespace ContosoUniversity.Models
{
    public class Course
    {
        [DatabaseGenerated(DatabaseGeneratedOption.None)]
        public int CourseID { get; set; }
        public string Title { get; set; }
        public int Credits { get; set; }

        public ICollection<Enrollment> Enrollments { get; set; }
    }
}

The Enrollments property is a navigation property. A Course entity can be related to any number of Enrollment entities.

We'll say more about the DatabaseGenerated attribute in a later tutorial in this series. Basically, this attribute lets you enter the primary key for the course rather than having the database generate it.

Create the Database Context

The main class that coordinates Entity Framework functionality for a given data model is the database context class. You create this class by deriving from the Microsoft.EntityFrameworkCore.DbContext class:

using ContosoUniversity.Models;
using Microsoft.EntityFrameworkCore;

namespace ContosoUniversity.Data
{
    public class SchoolContext : DbContext
    {
        public SchoolContext(DbContextOptions<SchoolContext> options) : base(options)
        {
        }

        public DbSet<Course> Courses { get; set; }
        public DbSet<Enrollment> Enrollments { get; set; }
        public DbSet<Student> Students { get; set; }
    }
}

This code creates a DbSet property for each entity set. In Entity Framework terminology, an entity set typically corresponds to a database table, and an entity corresponds to a row in the table. You could've omitted the DbSet<Enrollment> and DbSet<Course> statements and it would work the same. The Entity Framework would include them implicitly because the Student entity references the Enrollment entity and the Enrollment entity references the Course entity.

SQL Server Express LocalDB

LocalDB is a lightweight version of the SQL Server Express Database Engine that starts on demand and runs in user mode, so there's no complex configuration. By default, LocalDB creates .mdf database files in the C:/Users/<user> directory.

Add code to initialize the database with test data

The Entity Framework will create an empty database for you. In this section, you write a method that's called after the database is created, in order to populate it with test data.

using ContosoUniversity.Models;
using System;
using System.Linq;

namespace ContosoUniversity.Data
{
    public static class DbInitializer
    {
        public static void Initialize(SchoolContext context)
        {
            context.Database.EnsureCreated();

            // Look for any students.
            if (context.Students.Any())
            {
                return;   // DB has been seeded
            }

            var students = new Student[]
            {
                new Student{FirstMidName="Carson",LastName="Alexander",EnrollmentDate=DateTime.Parse("2005-09-01")},
                new Student{FirstMidName="Meredith",LastName="Alonso",EnrollmentDate=DateTime.Parse("2002-09-01")},
                new Student{FirstMidName="Arturo",LastName="Anand",EnrollmentDate=DateTime.Parse("2003-09-01")},
                new Student{FirstMidName="Gytis",LastName="Barzdukas",EnrollmentDate=DateTime.Parse("2002-09-01")},
                new Student{FirstMidName="Yan",LastName="Li",EnrollmentDate=DateTime.Parse("2002-09-01")},
                new Student{FirstMidName="Peggy",LastName="Justice",EnrollmentDate=DateTime.Parse("2001-09-01")},
                new Student{FirstMidName="Laura",LastName="Norman",EnrollmentDate=DateTime.Parse("2003-09-01")},
                new Student{FirstMidName="Nino",LastName="Olivetto",EnrollmentDate=DateTime.Parse("2005-09-01")}
            };
            foreach (Student s in students)
            {
                context.Students.Add(s);
            }
            context.SaveChanges();

            var courses = new Course[]
            {
                new Course{CourseID=1050,Title="Chemistry",Credits=3},
                new Course{CourseID=4022,Title="Microeconomics",Credits=3},
                new Course{CourseID=4041,Title="Macroeconomics",Credits=3},
                new Course{CourseID=1045,Title="Calculus",Credits=4},
                new Course{CourseID=3141,Title="Trigonometry",Credits=4},
                new Course{CourseID=2021,Title="Composition",Credits=3},
                new Course{CourseID=2042,Title="Literature",Credits=4}
            };
            foreach (Course c in courses)
            {
                context.Courses.Add(c);
            }
            context.SaveChanges();

            var enrollments = new Enrollment[]
            {
                new Enrollment{StudentID=1,CourseID=1050,Grade=Grade.A},
                new Enrollment{StudentID=1,CourseID=4022,Grade=Grade.C},
                new Enrollment{StudentID=1,CourseID=4041,Grade=Grade.B},
                new Enrollment{StudentID=2,CourseID=1045,Grade=Grade.B},
                new Enrollment{StudentID=2,CourseID=3141,Grade=Grade.F},
                new Enrollment{StudentID=2,CourseID=2021,Grade=Grade.F},
                new Enrollment{StudentID=3,CourseID=1050},
                new Enrollment{StudentID=4,CourseID=1050},
                new Enrollment{StudentID=4,CourseID=4022,Grade=Grade.F},
                new Enrollment{StudentID=5,CourseID=4041,Grade=Grade.C},
                new Enrollment{StudentID=6,CourseID=1045},
                new Enrollment{StudentID=7,CourseID=3141,Grade=Grade.A}
            };
            foreach (Enrollment e in enrollments)
            {
                context.Enrollments.Add(e);
            }
            context.SaveChanges();
        }
    }
}

Create a controller and views

If the Add MVC Dependencies dialog appears: update Visual Studio to the latest version. Visual Studio versions prior to 15.5 show this dialog. If you can't update, select ADD, and then follow the add controller steps again.

ASP.NET Core dependency injection takes care of passing an instance of SchoolContext into the controller. The scaffolded Index method gets a list of students from the Students entity set and passes it to the view. The return type Task<IActionResult> represents ongoing work with a result of type IActionResult. The async keyword causes the compiler to generate callbacks for parts of the method body and to automatically create the Task<IActionResult> object that's returned. The await keyword causes the compiler to split the method into two parts. The first part ends with the operation that's started asynchronously. The second part is put into a callback method that's called when the operation completes. Only statements that cause queries or commands to be sent to the database are executed asynchronously. That doesn't include, for example, statements that just change an IQueryable, such as var students = context.Students.Where(s => s.LastName == "Davolio"). An EF context isn't thread safe: don't try to do multiple operations in parallel.
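The scaffolded Index action discussed in the paragraph above isn't reproduced in this excerpt; a minimal sketch of what it looks like in the tutorial's StudentsController follows (the _context field name is the scaffolder's convention):

public async Task<IActionResult> Index()
{
    return View(await _context.Students.ToListAsync());
}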
https://docs.microsoft.com/en-us/aspnet/core/data/ef-mvc/intro?view=aspnetcore-2.1
CC-MAIN-2018-43
en
refinedweb
stephen.teacher - 4 Years Ago

hi all, this is some extra credit from class; we sat around cleaning up code today and this is what we came up with:

public boolean equals2(IntTree t2){
    return equals2(this.overallRoot, t2.overallRoot);
}

private boolean equals2(IntTreeNode r1, IntTreeNode r2){
    if(r1 == null || r2 == null){
        return r1 == null && r2 == null;
    }
    return ((r1.data == r2.data) && equals2(r1.left, r2.left) && equals2(r1.right, r2.right));
}

we get five points extra credit if we can shorten any further. So my question is: can someone explain to me the ^ character, and is that the right path to shortening this code?

Tags: binary, class, java, recursive, shorten, tree
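For context, ^ is Java's exclusive-or operator: (r1 == null) ^ (r2 == null) is true when exactly one of the two references is null, so it can replace the mixed ||/&& null check. A sketch of a slightly shorter version with the same behavior (names taken from the post):

public boolean equals2(IntTree t2) {
    return equals2(this.overallRoot, t2.overallRoot);
}

private boolean equals2(IntTreeNode r1, IntTreeNode r2) {
    if (r1 == null || r2 == null) {
        return r1 == r2;                  // equal only when both are null
    }
    return r1.data == r2.data
        && equals2(r1.left, r2.left)
        && equals2(r1.right, r2.right);
}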
https://www.daniweb.com/programming/software-development/threads/480447/can-someone-help-condence-this-code-recursive-binary-tree
CC-MAIN-2018-43
en
refinedweb
This tutorial guides you through the basic syntax of the Java programming language. To write any program, you need to keep the syntax of that programming language in mind. Given below are the points of Java syntax that apply to all Java programs:

Java is a case-sensitive programming language, which means it distinguishes between uppercase and lowercase letters. For example, case and Case are different identifiers in Java.

According to the Java naming convention, the first letter of a class name should be uppercase. All the other letters can be lowercase or uppercase. If you mistakenly start your class name with a lowercase letter, it will still compile and run without any error, but for better readability it is recommended to follow the convention. Given below is an example which follows the class naming convention of Java:

public class Hello{
    /* By the convention, the class name starts with 'H'.
     * This will print 'Welcome in java world' as the output.
     */
    public static void main(String []args){
        System.out.println("Welcome in java world"); // prints "Welcome in java world"
    }
}

According to the naming convention for methods, the first letter of a method name should be lowercase. Your program will still run if it is uppercase, but for better readability it is recommended to follow the convention.

The name of the source file must be the same as the name of the public class it contains. The letters of the class name and the file name must match exactly; if they do not, the program will give a compiler error.

Every runnable Java program needs a main() method. A class without main() will still compile, but it cannot be run as a program, because the JVM uses main() as its entry point.
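A small sketch pulling these conventions together; the class and method names are illustrative, and the file would have to be saved as Greeter.java to compile:

public class Greeter {                            // class name starts with an uppercase letter
    public static void main(String[] args) {      // required entry point for running the program
        printWelcome();
    }

    static void printWelcome() {                  // method name starts with a lowercase letter
        System.out.println("Welcome in java world");
    }
}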
http://www.roseindia.net/javatutorials/basic_java.shtml
CC-MAIN-2016-30
en
refinedweb
A simulator definition is created by the Composite Application Validation System (CAVS) user to simulate a particular service in an Oracle Application Integration Architecture (AIA) integration or a participating application. A simulator receives data from the tested web service and returns predefined data so that the tested web service can continue processing. This chapter includes the following sections: Section 5.1, "How to Create a Simulator Definition" Section 5.2, "How to Modify a Simulator Definition" Section 5.3, "How to Provide Multiple Request and Response Message Sets in a Single Simulator Definition" Section 5.4, "How to Create a Simulator Definition that Supports Chatty Services" Section 5.5, "How to Send Dynamic Responses in a Simulator Response" To create a simulator definition: Access the AIA Home Page. In the Composite Application Validation System area, click the Go button. Select the Definitions tab. Click the Create Simulator button. The Create Simulator page displays, as shown in Figure 5-1 and Figure 5-2. Use the page elements on the Create Simulator page to create simulator definitions. Available elements are discussed in Table 5-1. Use the Test Messages group box to generate XPath values based on provided request XML message text. By default, SOAP envelope XML text is provided in these fields. You can paste XML text within this default SOAP envelope, or paste your own XML text already enclosed in an envelope into these fields. For more information about how to create simulator request and response messages that hold multiple sets of test data in a single definition, see Section 5.3, "How to Provide Multiple Request and Response Message Sets in a Single Simulator Definition." For more information about how to create simulator request and response messages that support chatty service conversations, see Section 5.4, "How to Create a Simulator Definition that Supports Chatty Services." Available elements in the Test Messages group box are discussed in Table 5-2. To modify a simulator definition: Access the Modify Simulator Definition page, as shown in Figure 5-3, Figure 5-4, Figure 5-5, Figure 5-6, and Figure 5-7. Access the page using one of the following navigation paths: Access the AIA Home Page. In the Composite Application Validation System area, click the Go button. Select the Definitions tab. Click the Create Simulator button. Enter required values on the Create Simulator page and click Next. Access the AIA Home Page. In the Composite Application Validation System area, click the Go button. Select the Definitions tab. Click an Id link for an unlocked simulator definition in the Search Result Selection grid on the Definitions page. Access the AIA Home Page. In the Composite Application Validation System area, click the Go button. Select the Instances tab. Click a Definition Id link for an unlocked simulator definition on the Instances page. Use the page elements on the Modify Simulator Definition page to modify a simulator definition. The page displays values you defined for the simulator definition. You can modify the values in editable fields. Most of the elements on this page also appear on the Create Simulator Definition page and are documented in Section 5.1, "How to Create a Simulator Definition." Any additional elements are discussed here in Table 5-3. For more information about the elements in the Test Messages group box, see Section 5.1, "How to Create a Simulator Definition." 
Prefix and Namespace Selection

Use the Prefix and Namespace Selection grid to define namespace data that will be used in the XPath values defined in the XPath Selection grid. Available elements in the Prefix and Namespace Selection grid are discussed in Table 5-4.

XPath Selection

Use the XPath Selection grid to work with XPath values that are used to match the simulator definition with arriving requests. XPath values can also be used to validate data sent in the test request. The values in this grid use the namespace values set in the Prefix and Namespace Selection grid.

Note: If you are entering XPath values manually, it is important to maintain correlations with the values entered in the Prefix and Namespace Selection grid. Each XPath node must have a prefix that has been defined in the Prefix and Namespace Selection grid, unless it is an XPath expression.

Available elements in the XPath Selection grid are discussed in Table 5-5.

Simulator Instance Selection

Select the Simulator Instances tab to display the Simulator Instance Selection grid, which displays information about simulator instances generated using the simulator definition. Available elements in the Simulator Instance Selection grid are discussed in Table 5-6.

Test Definition Selection

Select the Test Definitions tab to display the Linked Test Definition Selection grid, which displays information about test definitions associated with the simulator definition. Available elements in the Linked Test Definition Selection grid are discussed in Table 5-7.

You can create a simulator definition that contains multiple pairs of request and response message data, as shown in Figure 5-8. This means that simulator definitions only need to be created per usage requirements, not per test data requirements. For example, if you want to simulate a service against five sets of test data, you can create a single simulator definition to simulate the service and include in it all five sets of test data with which you want the service to operate. This is as opposed to creating five separate simulator definitions, one per combination of service and set of test data.

When a simulator definition that includes multiple test data sets is invoked, the appropriate data set is matched for use based on key attributes identified in the request. At this point, the request validation and response provision can occur. Since we would typically use such definitions to handle several sets of data, it is recommended that you choose the same key values for every set of data.

Use the format provided in Example 5-1 to include multiple sets of request data in the simulator definition. The CAVSRequestInputs and CAVSRequestInput_1 envelopes are autogenerated upon the input of the endpoint URL value on the test definition. Use copy and paste commands to create more sets, such as CAVSRequestInput_2 and CAVSRequestInput_3.

Example 5-1 Request Message Format

<cavs:CAVSRequestInputs xmlns:cavs="...">
  <cavs:CAVSRequestInput_1>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body xmlns:ns1="...">
        <ns1:SimpleProcessProcessRequest>
          ...
        </ns1:SimpleProcessProcessRequest>
      </soap:Body>
    </soap:Envelope>
  </cavs:CAVSRequestInput_1>
  <cavs:CAVSRequestInput_2>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body xmlns:ns1="...">
        <ns1:SimpleProcessProcessRequest>
          ...
        </ns1:SimpleProcessProcessRequest>
      </soap:Body>
    </soap:Envelope>
  </cavs:CAVSRequestInput_2>
</cavs:CAVSRequestInputs>

Use the format shown in Example 5-2 to include multiple sets of response data in the simulator definition.
Example 5-2 Response Message Format

<cavs:CAVSResponseOutputs xmlns:cavs="...">
  <cavs:CAVSResponseOutput_1>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body xmlns:ns1="...">
        <ns1:SimpleProcessProcessResponse>
          ...
        </ns1:SimpleProcessProcessResponse>
      </soap:Body>
    </soap:Envelope>
  </cavs:CAVSResponseOutput_1>
  <cavs:CAVSResponseOutput_2>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body xmlns:ns1="...">
        <ns1:SimpleProcessProcessResponse>
          ...
        </ns1:SimpleProcessProcessResponse>
      </soap:Body>
    </soap:Envelope>
  </cavs:CAVSResponseOutput_2>
</cavs:CAVSResponseOutputs>

Envelope text is prepopulated. Enter actual message content within appropriate tags provided within the envelopes.

After entering request and response data sets and clicking the Generate Xpath button on the Modify Simulator Definition page, the XPath Selection grid provides access to available XPath values and enables you to select the XPaths that must be treated as key nodes.

For more information about the Modify Simulator Definition page, see Section 5.2, "How to Modify a Simulator Definition."

If your testing scenario includes test definitions, you can likewise create test definitions that contain multiple request and response message sets that work with the sets defined in your simulator definition. For more information, see Section 4.3, "How to Provide Multiple Request and Response Message Sets in a Single Test Definition."

You can create a simulator definition that can simulate multiple services, each with a different schema. In general, we recommend that you create simulators that simulate a single specific service. However, in the case of chatty conversations, for the ease of maintenance, you may choose to simulate all callouts of an Application Business Connector Service (ABCS) using a single simulator definition. Using this method, you have the advantage of using one simulator for a particular ABCS, regardless of the number of callouts that need to be made. This method also provides ease of maintenance because linked callouts can all be viewed and modified in one place.

For example, in some integration scenarios, participating applications do not provide services at the same level of granularity as operations in Enterprise Business Services (EBSs). In these scenarios, a requester ABCS may need to adopt patterns such as message enrichment, splitting, and aggregation and disaggregation as required by an EBS. Likewise, a provider ABCS may need to adopt patterns as required by participating application services. These ABCSs, which are typically implemented using BPEL process, call out to several services. To test this chatty ABCS using CAVS, there will likely be a need to replace the services that the ABCS calls out to with several simulators. It will also be required that these multiple request/response simulators be correlated, so that they accurately emulate the transaction of the same entity.

When this type of simulator is called, CAVS initiates the following general flow:

1. Selects simulator definition.
2. Validates the first request message based on the selected simulator.
3. Returns the appropriate response message, if the selected simulator is a two-way simulator.
4. Repeats steps 2 and 3 until the chatty service conversation is complete.

Use the format shown in Example 5-3 to create a simulator definition that supports chatty service conversations. This format provides the ability to specify a set of request and response messages, along with success criteria for each of them. This format is the same as that used for multiple requests and responses in a simulator definition.
However, in this case, the schemas for each set will be different.

Example 5-3 Request Message Format

<cavs:CAVSRequestInputs xmlns:cavs="...">
  <cavs:CAVSRequestInput_1>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body xmlns:ns1="...">
        <ns1:Service1Request>
          ...
        </ns1:Service1Request>
      </soap:Body>
    </soap:Envelope>
  </cavs:CAVSRequestInput_1>
  <cavs:CAVSRequestInput_2>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body xmlns:ns2="...">
        <ns2:Service2Request>
          ...
        </ns2:Service2Request>
      </soap:Body>
    </soap:Envelope>
  </cavs:CAVSRequestInput_2>
</cavs:CAVSRequestInputs>

Once you have provided request and response messages, click the Generate Xpath button on the Modify Simulator Definition page to generate XPath values. Modify the generated XPath values, if necessary. For more information about the Modify Simulator Definition page, see Section 5.2, "How to Modify a Simulator Definition."

When this type of simulator is called, separate simulator instances are created for each request and response pair. The evaluation of actual response versus expected response is handled per instance created for the same simulator definition.

CAVS simulator definitions are actually predefined request and response message sets. In some cases, you may not know the values for all the fields in the request message. Additionally, you may want to send these unknown dynamic values in a response to the service that called the simulator.

For example, consider the Enterprise Business Message (EBM) ID. This value is normally generated on the fly by AIA services. If you create a simulator that talks to this AIA service, you do not have a way to validate the value in the EBM ID field of the request message because the value is dynamically generated. You may choose to avoid validations of this value by setting the CAVS XPath validation for the EBM ID field to isValid. However, you may have a requirement in which you need to send this dynamic value back in a particular field of the simulator response.

To meet this requirement, you can let the simulator pick the particular field (such as EBM ID) in the request and send it back as a field in the response.

To send a dynamic response in a simulator response:

Map a field from the request message and add it to the response message. These are two valid formats you can use:

#@#XPATH.{copy the XPath from the request message, e.g. /soap:Envelope/soap:Body/...}#@#
#@#SYSTEM.{SYSDATE}#@#

Before sending the response, the simulator will pick up this ID from the generated XPath, substitute the actual value, and send it in the response. The strings referenced above will form a part of the response message. To know what the request message XPath values are, use the output that was generated by clicking the Generate Xpath button.
For example, let's say that the request SOAP message has the nodes shown in Example 5-4:

Example 5-4 Request SOAP Message Nodes

<corecom:PersonName>
  <corecom:FirstName>CAVS</corecom:FirstName>
  <corecom:MiddleName>FP</corecom:MiddleName>
  <corecom:FamilyName>AIA</corecom:FamilyName>
  <corecom:CreationDateTime></corecom:CreationDateTime>
</corecom:PersonName>

You would define your response SOAP message as shown in Example 5-5:

Example 5-5 Response SOAP Message

<corecom:PersonName>
  <corecom:FirstName>#@#XPATH.{/soap:Envelope/soap:Body/corecom:CreateCustomerPartyListEBM/ebo:DataArea/ebo:CreateCustomerPartyList/corecomx:Contact/corecomx:PersonName/corecomx:FamilyName}#@#2dot1</corecom:FirstName>
  <corecom:MiddleName>#@#XPATH.{/soap:Envelope/soap:Body/corecom:CreateCustomerPartyListEBM/ebo:DataArea/ebo:CreateCustomerPartyList/corecomx:Contact/corecomx:PersonName/corecomx:MiddleName}#@#</corecom:MiddleName>
  <corecom:FamilyName>#@#XPATH.{/soap:Envelope/soap:Body/corecom:CreateCustomerPartyListEBM/ebo:DataArea/ebo:CreateCustomerPartyList/corecomx:Contact/corecomx:PersonName/corecomx:FirstName}#@#</corecom:FamilyName>
  <corecom:CreationDateTime>#@#SYSTEM.{SYSDATE}#@#</corecom:CreationDateTime>
</corecom:PersonName>

In this case, the response would be modified by the CAVS engine by copying values from the request, as shown in Example 5-6.

Example 5-6 Response Message Modified by CAVS

<corecom:PersonName>
  <corecom:FirstName>AIA2dot1</corecom:FirstName>
  <corecom:MiddleName>FP</corecom:MiddleName>
  <corecom:FamilyName>CAVS</corecom:FamilyName>
  <corecom:CreationDateTime>2008-05-12T15:26:43+05:30</corecom:CreationDateTime>
</corecom:PersonName>

Note: 2dot1 is a static string that is always appended to the FamilyName value.
http://docs.oracle.com/cd/E21764_01/doc.1111/e17366/chapter5.htm
CC-MAIN-2016-30
en
refinedweb
So I started this new program and I can't seem to get past the first step. The menu choice option won't work. Any ideas?

Code:
#include <iostream>
using namespace std;

int main()
{
    char Menu, Start, Recall, Exit, choice;

    cout <<"Hello, how are we today?" <<endl;
    cout <<"|===================|" <<endl;
    cout <<"| (M)enu |" <<endl;
    cout <<"| (S)tart |" <<endl;
    cout <<"| (R)ecall |" <<endl;
    cout <<"| (E)xit |" <<endl;
    cout <<"|______________________|" <<endl;
    cout <<"Please choose a destination" <<endl;
    cin >>choice;

    if (choice == Menu)
    {
        cout <<"You are in menu now!" <<endl;
    }

    system("pause");
    return 0;
}
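A likely cause, for anyone else reading: Menu, Start, Recall and Exit are uninitialized char variables, so choice == Menu compares the input against garbage. Comparing against a character literal works; a minimal sketch (toupper makes it case-insensitive):

#include <iostream>
#include <cctype>
using namespace std;

int main()
{
    char choice;
    cout << "Please choose a destination" << endl;
    cin >> choice;

    if (toupper(choice) == 'M')   // compare with the literal 'M', not an uninitialized variable
    {
        cout << "You are in menu now!" << endl;
    }
    return 0;
}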
http://cboard.cprogramming.com/cplusplus-programming/43354-my-new-programm-having-problems-printable-thread.html
CC-MAIN-2016-30
en
refinedweb
Type returned by PR_CreateFileMap and passed to PR_MemMap and PR_CloseFileMap. Syntax #include <prio.h> typedef struct PRFileMap PRFileMap; Description The opaque structure PRFileMap represents a memory-mapped file object. Before actually mapping a file to memory, you must create a memory-mapped file object by calling PR_CreateFileMap, which returns a pointer to PRFileMap. Then sections of the file can be mapped into memory by passing the PRFileMap pointer to PR_MemMap. The memory-mapped file object is closed by passing the PRFileMap pointer to PR_CloseFileMap.
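A minimal usage sketch of the lifecycle described above; the file name, sizes and flags are illustrative, and error handling is omitted:

#include "prio.h"

void example(void)
{
    PRFileDesc *fd = PR_Open("data.bin", PR_RDWR, 0);                 /* open the file to be mapped */
    PRFileMap *fmap = PR_CreateFileMap(fd, 4096, PR_PROT_READWRITE);  /* create the mapping object */
    void *view = PR_MemMap(fmap, 0, 4096);                            /* map the first 4096 bytes */

    /* read and write through 'view' with ordinary pointers ... */

    PR_MemUnmap(view, 4096);
    PR_CloseFileMap(fmap);                                            /* end the association with the file */
    PR_Close(fd);
}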
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PRFileMap
CC-MAIN-2016-30
en
refinedweb
Time::DayOfWeek - calculate which Day-of-Week a date is

This documentation refers to version 1.6.A6FFxZB of Time::DayOfWeek, which was released on Tue Jun 15 15:59:35:11 2010.

#!/usr/bin/perl
use strict;
use warnings;
use Time::DayOfWeek qw(:dow);
my($year, $month, $day)=(2003, 12, 7);
print "The Day-of-Week of $year/$month/$day (YMD) is: ",
  DayOfWeek($year, $month, $day), "\n";
print 'The 3-letter abbreviation of the DoW is: ',
  Dow( $year, $month, $day), "\n";
print 'The 0-based index of the DoW is: ',
  DoW( $year, $month, $day), "\n";

This module just calculates the Day-of-Week for any particular date. It was inspired by the clean Time::DaysInMonth module written by David Muir Sharnoff <[email protected]>. The reason I created DayOfWeek was to support other Time modules which would like to have a Day-of-Week calculated.

DoW($year, $month, $day)

Time::DayOfWeek's core function which does the calculation and returns the weekday index answer in 0..6. If no Year is supplied, 2000 C.E. is assumed. If no Month or Day is supplied, they are set to 1. Months are 1-based in 1..12. DoW() is the only function that is exported from a normal 'use Time::DayOfWeek;' command. Other functions can be imported to local namespaces explicitly or with the following tags:

:all - every function described here
:dow - only DoW(), Dow(), and DayOfWeek()
:nam - only DayNames() and MonthNames()
:day - everything but MonthNames()

Dow($year, $month, $day)

same as above but returns 3-letter day abbreviations in 'Sun'..'Sat'.

DayOfWeek($year, $month, $day)

same as above but returns full day names in 'Sunday'..'Saturday'.

DayNames(@NewDayNames)

can override default day names with the strings in @NewDayNames. The current list of day names is returned, so call DayNames() with no parameters to obtain a list of the default day names. An example call is:

DayNames('Domingo', 'Lunes', 'Martes', 'Miercoles', 'Jueves', 'Viernes', 'Sabado');

MonthNames(@NewMonthNames)

has also been included to provide a centralized name set. Just like DayNames(), this function returns the current list of month names, so call MonthNames() with no parameters to obtain a list of the default month names.

Revision history for Perl extension Time::DayOfWeek:

* had to bump minor version to keep them ascending
* added hack to shift days right one between Feb2008..2009 (still not sure why algorithm skewed)
* added kwalitee && POD tests, bumped minor version
* condensed code && moved POD to bottom
* updated License
* updated DoW param tests to turn zero month or day to one
* updated POD to contain links
* made bin/dow as EXE_FILES && added named month param detection
* removed most eccentric misspellings
* removed indenting from POD NAME field
* added month name data and tidied up for release
* wrote pod and made tests
* original version

Please run: `perl -MCPAN -e "install Time::DayOfWeek"` or uncompress the package && run: `perl Makefile.PL; make; make test; make install` or if you don't have `make` but Module::Build is installed `perl Build.PL; perl Build; perl Build test; perl Build install`

Most source code should be Free! Code I have lawful authority over is && shall be! Copyright: (c) 2003-2007, Pip Stuart. Copyleft: This software is licensed under the GNU General Public License (version 2). Please consult the Free Software Foundation () for important information about your freedom.

Pip Stuart <[email protected]>
http://search.cpan.org/~pip/Time-DayOfWeek-1.6.A6FFxZB/DayOfWeek.pm
CC-MAIN-2016-30
en
refinedweb
These points concern *this* proposal (most downsides discussed here are inherent to any record system).

Alternative update syntax: using tuple selectors

let { r.x = x'; r.y = y'; r.z = z'; } in r

If we allow tuples of selectors:

r.(x, y, z) = (r.x, r.y, r.z)

then one can simply write

let r.(x, y, z) = (x', y', z') in r

We definitely want to maintain f and g at the top-level, but should consider also adding them through the record name-space. See the related discussion below on future directions.

Compatibility with existing records

The new record system is enabled with -XNAMESPACEDATA.

- Should new modules be infectious? That is, if I turn the extension on for my module and export a record, does a user that wants to import the record also have to use the extension?
- Records from modules without this extension can be imported into a module using it. Ideally the old record fields would now only be accessed through a namespace. Also, we would ideally be able to strip any now-useless field prefixes.

module OldModule ( Record(..) ) where
data Prefix = Prefix { prefixA :: String }

module NewModule where
import OldModule ( Prefix(..) strip prefix )

aFunc = let r = Prefix "A" in r.a

Details on the dot
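A hypothetical sketch of the field namespacing this proposal is after; the extension name comes from the page itself, but the syntax below is the proposal's, not something current GHC accepts as-is:

{-# LANGUAGE NAMESPACEDATA #-}
module Example where

data Person  = Person  { name :: String }
data Company = Company { name :: String }   -- no clash: each field lives in its type's namespace

greet :: Person -> String
greet p = "Hello, " ++ p.name               -- field access resolved through Person's namespace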
https://ghc.haskell.org/trac/ghc/wiki/Records/NameSpacing?version=24
CC-MAIN-2016-30
en
refinedweb
/*-
 * Copyright (c) 1990, 1993
 *	The Regents of the University of California.  All rights reserved.
 */

#if defined(LIBC_SCCS) && !defined(lint)
static char sccsid[] = "@(#)memchr.c	8.1 (Berkeley) 6/4/93";
#endif /* LIBC_SCCS and not lint */
#include <sys/cdefs.h>
__FBSDID("$FreeBSD: src/lib/libc/string/memchr.c,v 1.8 2009/02/07 19:34:44 imp Exp $");

#include <string.h>

void *
memchr(const void *s, int c, size_t n)
{
	if (n != 0) {
		const unsigned char *p = s;

		do {
			if (*p++ == (unsigned char)c)
				return ((void *)(p - 1));
		} while (--n != 0);
	}
	return (NULL);
}
http://opensource.apple.com/source/Libc/Libc-825.26/string/FreeBSD/memchr.c
CC-MAIN-2016-30
en
refinedweb
redisbayes 0.1.2 Naïve Bayesian Text Classifier on Redis What Is This? It’s a spam filter. I wrote this to filter spammy comments from a high traffic forum website and it worked pretty well. It can work for you too :) It’s not tied to any particular format like email, it just deals with the raw text. This is probably the only spam filtering library you’ll find for Python that’s simple (170 lines of code), works (30 lines of test code), and doesn’t suck. Installation From folder: sudo python setup.py install From cheeseshop: sudo pip install redisbayes From git: sudo pip install git+git://github.com/jart/redisbayes.git Basic Usage import redis, redisbayes rb = redisbayes.RedisBayes(redis=redis.Redis()) rb.train('good', 'sunshine drugs love sex lobster sloth') rb.train('bad', 'fear death horror government zombie god') assert rb.classify('sloths are so cute i love them') == 'good' assert rb.classify('i fear god and love the government') == 'bad' print rb.score('i fear god and love the government') rb.untrain('good', 'sunshine drugs love sex lobster sloth') rb.untrain('bad', 'fear death horror government zombie god') - Author: Justine Tunney - License: MIT - Categories - Package Index Owner: jart - DOAP record: redisbayes-0.1.2.xml
https://pypi.python.org/pypi/redisbayes/0.1.2
CC-MAIN-2016-30
en
refinedweb
This is V4 of my submission. Here is a list of requested changes:

o Extra commit was added for changing an unsigned short to an int.
o Use of EXPORT_SYMBOL was added to mips-atomic.c and bitops.c, as well as the removal of 'extern' in the functions' declarations.
o Name of funcs changed from atomic_xxx to __mips_xxx in bitops.c.
o The function comments in bitops.c were tweaked to please scripts/kernel-doc.

Here is a list of requested changes that were not done (and why):

o Suggested optimization of _MIPS_SZLONG and others was not needed as mips-atomic.c now includes <asm/irqflags.h>.
o Suggested fixes to please checkpatch.pl for whitespace before newlines in asm strings were attempted, but the result made the assembly code look more cluttered => no change made.

These were unrequested changes:

o Changed order of func listings in irqflags.h so that only one #ifdef/#endif pair was needed instead of three.

Jim Quinlan
https://www.linux-mips.org/archives/linux-mips/2012-09/msg00037.html
CC-MAIN-2016-30
en
refinedweb
/* $Id: GeneralFailureException.java,v 1.1 2002/12/05 22:23:58 gmurray Exp $ */

package com.sun.j2ee.blueprints.waf.controller;

/**
 * This exception is the base class for all the web runtime exceptions.
 */
public class GeneralFailureException extends RuntimeException
    implements java.io.Serializable {

    private Throwable t;

    public GeneralFailureException(String s) {
        super(s);
    }

    public GeneralFailureException(String s, Throwable t) {
        super(s);
        this.t = t;
    }

    public String getThrowable() {
        return ("Received throwable with Message: " + t.getMessage());
    }
}
http://docs.oracle.com/cd/E17802_01/blueprints/blueprints/code/adventure/1.0/src/com/sun/j2ee/blueprints/waf/controller/GeneralFailureException.java.html
CC-MAIN-2016-30
en
refinedweb
Answered by: Save field without displaying it on the VB form

I have a VB 2008 windows application that I'm working on; the data are stored in SQL. I have a lot of fields that the user will need to fill out. Some fields that are not displayed on the form are actually combinations of a few filled-out fields. For example - the user types in first name, middle name and last name in 3 separate boxes on the form, and I'd like to combine those and assign the result to the "FullName" field. I know it's duplicating the data, but this FullName field needs to be merged into a report as-is in a different program. Another example - the user will pick a value in a combo box that is bound to a different table, but I need it to auto-populate 3 fields in the main form (those fields are not displayed). I know that I can actually add all those hidden fields on the form, assign calculated values based on user input, and then everything is saved with the built-in "Update" command. I was wondering if there is a way to specify those few calculated values as parameters and still use the "Update" command without actually typing a long Update command. I have over 400 fields - a very lengthy command. Hope it makes sense. Thanks!

Question

Answers - All replies

- There are several ways of going about this, but I think I know what you're looking for here. Solution #1: Keep the 4 text boxes bound to the SQL data source (e.g. DataSet, TableAdapter, etc.) and set the fourth textbox's binding to the full name data field. Once you have done this, place the textbox out of the way and set the property Visible = False. Then add an event on the last of the (3) to insert a combination of all three values into the fourth textbox (e.g. txtFirstName.Text + " " + txtMiddle.Text + " " + txtLastName.Text). One last note on this method would be to add an OnMouseLeave event handler on the LastName textbox; this will be where the fourth textbox "FullName" gets its full name value set. Solution #2: The proper way of doing this is to have your normal (3) textboxes working and, when you need the full name, pull each value from the database, or keep the fullname field in the data table and use string concatenation to combine the (3) values into a single full name. If you go that route you will most likely have to implement a custom save method using the SqlClient namespace. Let me know if this helps you out. Thanks Charlie

- I briefly read over this and I am not sure I follow 100%. But I thought I would mention something that I have used; if it is something that will help you then good, if not then disregard. I have a particular client application which has several hundred controls the user needs to input data into. Large sections have about 30 or so checkboxes, and 0 to all of these may be checked. This is per section, so there is a lot to handle for the entire record, and it would take a lengthy routine or SQL statement to write. What I found was easiest for me was to loop through the checkboxes and build a comma-delimited string consisting of the checked checkboxes' names. This way only the checked checkboxes are stored, and the names allow me to split the string and check the controls by name in a loop. So basically I can handle 30 checkboxes with a single field, 1 small loop to build the delimited string, a single parameter for my insert or update command, and 1 small loop to read the names when the record is loaded. Using this approach I was able to shorten my column count from about 250 or so down to about 70. I could probably combine more values into single fields, but I try not to handle everything with strings. Well, hope this helps; it might give you some ideas to slim some things down.

- Thank you for the suggestions. Solution 1 is what I currently have. I wanted to see if there is a more efficient way to do it. As I mentioned - I have over 400 fields and am trying not to make the form look too busy (if I have a lot of hidden fields, the design is hard to manage). I have a TabControl with 10 tabs already. I was thinking about implementing solution 2 - writing a SQL command to save only the calculated fields. Again - I was trying to find an alternative. It seems that there has got to be a way to set up the parameters with calculated fields, using the existing parameter names. For example - the built-in Update command already has a @FullName parameter, but I don't know how to assign this parameter a calculated value prior to calling the update command. Thanks again!

- Me again. I've been trying to set the parameters prior to saving and it still doesn't work. Please see the code below. What am I missing? I don't get any errors, but when I add a quick watch for the value of @FacTaxID, nothing shows up and no value is inserted in the field. Please advise! FYI - I don't have a FacTaxID field on the form; the field Me.txtFacTaxID is bound to a related table.

Private Sub TblAdmissDataEntryBindingNavigatorSaveItem_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles TblAdmissDataEntryBindingNavigatorSaveItem.Click
    Me.Validate()
    Me.Cursor = Cursors.WaitCursor
    TblAdmissDataEntryTableAdapter.Adapter.UpdateCommand.Parameters("@FacTaxID").Value = Me.txtFacTaxID.Text
    TblAdmissDataEntryTableAdapter.Adapter.UpdateCommand.Parameters("@LimitedPartnershipName").Value = Me.txtFacLegalName.Text
    Me.TableAdapterManager.UpdateAll(Me.AdmPacketMainDataSet)
    Me.Cursor = Cursors.Default
End Sub

- Hey buddy, I'm getting on a flight right now, but I built you a sample app I can send you or post for download, and there is a method I just figured out today waiting at the airport that will blow your socks off - and it's only 2 lines of code. I will post around 9PM tonight; sorry about the delays. Thanks Charlie
To use just click the little yellow + sign on the toolbar and when your ready to save the data click the button with a default image. It is the last button on the toolbar. It will save the new row including the fullname field behind the scenes in less than 2 lines of code. Thanks Charlie - Oh well. I guess I'll use a separate SQL update code for calculated fields. Unless someone will tell me why this line below doesn't work?! TblAdmissDataEntryTableAdapter.Adapter.UpdateCommand.Parameters("@FacTaxID").Value = Me.txtFacTaxID.Text Thanks again, Alla - Sorry, Charlie - I think there is some miscommunication here. I have setup connection to Data source - SQL database and using all automatically generated Select, Update, Insert, Delete commands - from table adapters. I understand that the method you suggested was to spell out each field and parameter in Insert/Update commands. I wanted to avoid it because of 400+ fields in those forms. It's much easier just to call it with 1 line of code: Me.TableAdapterManager.UpdateAll(Me.AdmPacketMainDataSet). I guess I'm trying to find an easier way. If I check the code behing UpdateCommand in Dataset designer - I can see that all parameters already there and wanted to find a way to modify those parameters prior to calling Update command (see example code from yesterday). Unfortunately that code doesn't give me any errors but doesn't update calculated fields either. If that method is not possible - I'll add the separate Update code (similar to the one in your sample) that I'll run right after built-in Insert for new records or UpdateCommand for existing. Hope I'm making sense. About strongly typed dataset - I learned VB practically by myself (books + some on-line classes) and have enough basic knowledge to create applications, but don't know technical terms to answer your question, sorry.. I really appreciate your assistance! Sincerely, Alla Well I understand your frustration and writing software with minimal knowledge can really be a pain but you can pat yourself on your back for the progress you have already made. So in your project did you use the datasource add-in in visual studio to create your dataset or did you create it manually? What exactly is wrong with the generated update method? Is it not updating the data correctly? Let me know exactly what you need help with and I can put you on the right track. Thanks Charlie - Thank you. I used a VS wizard to add a connection to SQL. The Update command works for all the fields displayed on the form and binded to the table fields. The problem is with the calculated fields that I don't display on the form or with the fields that are binded to different table. I was hoping that there was a way to manipulate the built in UpdateCommand parameters that are = calculated fields (see my code from yesterday - @FacTaxID is a parameter in the main table, but the value is stored in different table and displayed on the form). That code didn't update those 2 fields specified in the code. I can certainly write a separate SQL update code and use it. The other problem I have with date fields - see the reference link from earlier today. I am taking 2 days off - so I might not reply immediately. Have a great Thanksgiving! Thanks, Alla That's great. That is what I have been trying to explain to you. In the sample application I sent you Open the dataset and double click on the accounts table adapter. Once the code view is showing look at the method I have provided. 
This will modify the data before it is inserted or updated and it is a very simple but professional way of completing something like this. Thanks Charlie
https://social.msdn.microsoft.com/Forums/vstudio/en-US/7ffb501e-817d-4865-9145-051ad4485f2c/save-field-without-displaying-it-on-the-vb-form?forum=vbgeneral
CC-MAIN-2016-30
en
refinedweb
This - Protocol::XMPP::Bind - register ability to deal with a specific feature - Protocol::XMPP::Base - base class for Protocol::XMPP - Protocol::XMPP::Stream - handle XMPP protocol stream - 61 more results from Protocol...DAPATRICK/Net-XMPP-1.05 - 22 Dec 2014 21:29:27 GMT - Search in distribution - Net::XMPP - XMPP Perl Library - Net::XMPP::Client - XMPP Client Module - Net::XMPP::Namespaces - In depth discussion on how namespaces are handled TEAM/Net-Async-XMPP-0.003 - 04 Aug 2014 18:29:17 GMT - Search in distribution - Net::Async::XMPP::Protocol - common protocol support for Net::Async::XMPP - Net::Async::XMPP - asynchronous XMPP client based on Protocol::XMPP and IO::Async::Protocol::Stream. - Net::Async::XMPP::Server - asynchronous XMPP server based on Protocol::XMPP and IO::Async::Protocol::Stream. - 2 more results from Net-Async...GURUPERL/Net-XMPP3-1.02 - 13 Oct 2009 13:09:48 GMT - Search in distribution - Net::XMPP3 - XMPP Perl Library - Net::XMPP3::Client - XMPP Client Module - Net::XMPP3::Namespaces - In depth discussion on how namespaces are handled res...MSTPLBG/AnyEvent-XMPP-0.55 5 (1 review) - 01 Mar 2014 21:45:53 GMT - Search in distribution - AnyEvent::XMPP::Ext - Extension baseclass and documentation - AnyEvent::XMPP::Namespaces - XMPP namespace collection and aliasing class - AnyEvent::XMPP::Ext::Disco - Service discovery manager class for XEP-0030 - 1 more result from AnyEvent-XMPP » IM: TCLI is an acronym for Transactional Contextual command Line Interface. Optionally it may stand for Tester's Command Line Interface. TCLI supports the writing of agents (Agents) that interact with their host operating system or the network with a cur...HACKER/Agent-TCLI-0.032 - 03 May 2007 18:13:49 GMT - Search in distribution - Agent::TCLI::User - A User class for Net::CLI. - Agent::TCLI::Transport::Base - Base Class for transports TMHARISH/HTML-Miner-1.03 - 20 Jan 2013 08:53:50 GMT - Search in 5 (1 review) - 04 Aug 2011 08:51:51 GMT - Search in distribution ...REATMON/Net-Jabber-2.0 3 (3 reviews) - 07 Sep 2004 01:57:03 GMT - Search in distribution - Net::Jabber::Component - Jabber Component Library - Net::Jabber::Protocol - Jabber Protocol Library POE::Filter::XML provides POE with a completely encapsulated XML parsing strategy for POE::Wheels that will be dealing with XML streams. The parser is XML::LibXML...NPEREZ/POE-Filter-XML-1.140700 - 11 Mar 2014 22:00:01 GMT - Search in distribution SFINK/Net-Chat-Daemon-0.3 - 28 Jun 2006 06:28:58 GMT - Search in distribution - Net::Chat::Daemon - run a daemon that is controlled via instant messaging33 4.5 (2 reviews) - 15 Jul 2016 09:00:38 GMT - Search in distribution Working with named services can be a pain when you want to go back and forth between the port and its real name. This module helps alleviate some of those pain points by defining some helping hashes, functions, and regular expressions....LESPEA/Net-IANA-Services-0.004000 - 12 May 2014 23:10:49 GMT - Search in distribution MART/DJabberd-0.85 - 13 Jun 2011 22:24 Data::Transform::Zlib provides a filter for performing (de-)compression using Compress::Raw::Zlib. Since it is just a wrapper around that module, it supports the same features....MARTIJN/Data-Transform-Zlib-0.02 - 22 Dec 2011 19:30:34 GMT - Search in distribution TMHARISH/Net-XMPP-Client-GTalk-0.01 - 20 Jan 2013 14:31:52 GMT - Search in distribution
https://metacpan.org/search?q=Protocol-XMPP
CC-MAIN-2016-30
en
refinedweb
>>>>> "Samir" == Samir Patel <spatel@...> writes: Samir> I am trying to save an image without showing it. Here is Samir> small program There are a couple of ways to do it. It would help to know what platform you are on. On linux/unix, there are the postscript and GD backends which can generate images 'offline' w/o showing them. At present, there is no way to generate a GTK image (GTK is the default backend) w/o showing it. You should read the web pages for more information on these issues. Postscript is a high quality output that is more useful on linux/unix platforms where there are usually viewers, converters and printers by default. GD can be used to make PNGs and other image formats without showing the image, but as far as I know there is no good windows installer for gd module yet. I've spent some time working on it, and have been corresponding with the author of gdmodule, and can probably come up with one in the near future. So that's the long answer. The short answer is -- if you are on linux, install gd module following the instructions on the matplotlib web site and use the GD or PS backend. If you are on windows, wait a little bit and I'll see about getting the gd installer going, which is something I've been meaning to do anyway. Cheers, John Hunter PS solution works for me. One thing is that afm fonts are in gnome-print package (Took me some time to find where this fonts came from). Now I am trying to see whether I can make it work with Quixote (web server). I will try GD solution later on. Thanks for your suggestions. - Samir Samir, Either call it with "python test.py -dGD" or use this code, with 2 new lines at the beginning. from matplotlib import use use('GD') from matplotlib.matlab import * plot([1,2,3]) savefig('test.png') ---------------- Instructions for installing stuff for GD on the website. But if you just need an image, do -dPS or use('PS') for postscript output. -C -- Ask not what your computer can do for you; ask.... [ Uh-oh. ] First of all, one of the easiest graph package yet fill with tons of capability and future potential. Now my issue: I am trying to save an image without showing it. Here is small program ************************************************************** from matplotlib.matlab import * plot([1,2,3]) savefig('test.png') #show() *************************************************************** This does not create test.png file, but if I uncomment last line, it create test.png file. How can I create a test.png file without showing it? I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/matplotlib/mailman/matplotlib-users/?viewmonth=200310&viewday=24
CC-MAIN-2016-30
en
refinedweb
bool is_open ( );

Check if a file is open

The function returns true if a previous call to open succeeded and there have been no calls to the member close since, meaning that the filebuf object is currently associated with a file.

// is_open () example
#include <iostream>
#include <fstream>
using namespace std;

int main () {
  ifstream is;
  filebuf * fb;
  fb = is.rdbuf();
  fb->open ("test.txt",ios::in);
  if ( fb->is_open() )
    cout << "file is open.\n";
  else
    cout << "file is not open.\n";
  fb->close();
  return 0;
}
http://www.cplusplus.com/reference/iostream/filebuf/is_open/
crawl-002
en
refinedweb
Enterprise Libraries and OSGi

We start with the library responsible for loading properties files. It contains one abstract class named ALoader with one static final method, getResourceBundle, that returns a ResourceBundle.

package com.jtunisie.osgi.load;

import java.util.ResourceBundle;

public abstract class ALoader {
    public static final ResourceBundle getResourceBundle(String ressource){
        return java.util.ResourceBundle.getBundle(ressource);
    }
}

This class works fine in a simple Java application. Moving on to our client, we create one class named LibCustomer with three methods:

package com.jtunisie.osgi.loader.client.impl;

import com.jtunisie.osgi.loader.client.ILoader;
import com.jtunisie.osgi.loader.lib.ALoader;

public class LibCustomer extends ALoader implements ILoader {

    @Override
    public String getNameFromLib() {
        return getResourceBundle("loader").getString("contry");
    }

    @Override
    public String getNameFromLocale() {
        return java.util.ResourceBundle.getBundle("loader").getString("contry");
    }

    public void init(){
        System.out.println("local contry name : " + getNameFromLocale());
        System.out.println("lib contry name : " + getNameFromLib());
    }
}

The init method will be called by Spring DM as the default init method. Note that we have to add a loader.properties file to the source root; it contains one property: contry=tunisie

When we use Maven to generate the two bundles, we get the following nested exception: java.util.MissingResourceException: Can't find bundle for base name loader, locale en_US. This means the library can't load the properties file, because the file isn't visible to it. Why? The class loader model is not the same as in a simple Java application: the hierarchical (tree) delegation concept isn't available here. The client bundle imports the library package, but the library knows nothing about the client, so it cannot see the client's resource file. Many libraries such as Hibernate, JPA and JAXB implementations rely on the same mechanism, using Class.forName to load classes reflectively, and because such libraries can't see their client bundles, a ClassNotFoundException is thrown. OSGi R4.1 isn't clear about this point, and a workaround is provided by the Eclipse Equinox implementation.

Let us add a second chance to find the resource file: we declare the library bundle with Eclipse-BuddyPolicy: registered, which allows it to look up resources in registered buddy bundles when the normal lookup fails. Our client registers itself by adding the Eclipse-RegisterBuddy: com.jtunisie.osgi.loader.lib header. If the OSGi class loader cycle fails to find the resource, our library will delegate the lookup to the registered bundle, which is our client. Now, if we try our client, both methods work fine.

Conclusion

This is a brief introduction to a recurrent problem encountered when using external jars: we need to understand class loading before working with OSGi concepts. It isn't as easy as a simple Java application, but with some practice we can detect the source of ClassNotFoundExceptions - the workaround isn't always evident.

Source is available here: svn checkout osgienterpriselibs-read-only
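A sketch of the manifest headers involved; the library's symbolic name is taken from the article's Eclipse-RegisterBuddy value, while the client's symbolic name and the exact Export-Package/Import-Package wiring are assumptions:

Library bundle MANIFEST.MF:
    Bundle-SymbolicName: com.jtunisie.osgi.loader.lib
    Export-Package: com.jtunisie.osgi.load
    Eclipse-BuddyPolicy: registered

Client bundle MANIFEST.MF:
    Bundle-SymbolicName: com.jtunisie.osgi.loader.client
    Import-Package: com.jtunisie.osgi.load
    Eclipse-RegisterBuddy: com.jtunisie.osgi.loader.lib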
http://java.dzone.com/articles/enterprise-libs-and-osgi
crawl-002
en
refinedweb
A .NET Memory-Mapped Cache TcpListener Service By Peter A. Bromberg, Ph.D. File Mapping File mapping is the association of a file's contents with a portion of the virtual address space of a process. The system creates a file mapping object to maintain this association. A file view is the portion of virtual address space that the process uses to access the file's contents. Processes read from and write to the file view using pointers, just as they would with dynamically allocated memory. Processes can also manipulate the file view with the VirtualProtect function. File mapping provides two major advantages: faster and easier file access, and shared memory between two or more applications. File mapping allows a process to access files more quickly and easily by using a pointer to a file view. Using a pointer improves efficiency because the file resides on disk, but the file view resides in memory. File mapping allows the process to use both random input and output (I/O) and sequential I/O. It also allows the process to efficiently work with a large data file, such as a database, without having to map the whole file into memory. When the process needs data from a portion of the file other than what is in the current file view, it can unmap the current file view, then create a new file view. The file mapping functions allow a process to create file mapping objects and file views to easily access and share data. When multiple processes use the same file mapping object to create views for a local file, the data is coherent. That is, the views contain identical copies of the file on disk. The file cannot reside on a remote computer if you want to share memory between multiple processes (e.g., you cannot use a UNC path). In short, memory-mapped files provide a way to look at a file as a chunk of memory. You map the file and get back a pointer to the mapped memory. You can simply read or write to memory from any location in the file mapping, just as you would from an array. When you've processed the file and closed the file mapping, the file is automatically updated. The operating system takes care of all the details of file I/O. Many developers are not aware that memory-mapped files are used widely by the operating system itself, and the technique has been available since the first versions of Windows. Memory Mapped Files and .NET On the .NET platform, sharing information across AppDomain boundaries typically offers only two choices: Remoting and Web Services. Both are slow in comparison to file mapping ("memory-mapped files"). As one might guess, the .NET platform does not provide support for memory-mapped files, and as far as I know, none is expected. You must call into the native Windows API methods. There are not many resources available for memory-mapped file management through .NET. One excellent article by Natty Gur on codeproject.com provides an unusually sophisticated "global cache" approach, but it also has security considerations that would have to be solved to make it more usable. At one point, some people called "Metal Wrench" put out a .NET class library which, among other stream-related utility methods, had a MemoryMappedFileStream class. The Microsoft Caching Application Block also has a memory-mapped file option, but there is so much additional code and infrastructure that goes along with it that, unless you are a glutton for punishment, I'd recommend against that route for all except the most patient of developers.
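To make the "call into the native Windows API" point concrete, here is a minimal sketch of the P/Invoke declarations involved. It creates a small named, pagefile-backed mapping rather than mapping a disk file, and it is illustrative only - it is not the API surface of any of the libraries surveyed here.

using System;
using System.Runtime.InteropServices;

public class MmfSketch
{
    const uint PAGE_READWRITE = 0x04;
    const uint FILE_MAP_ALL_ACCESS = 0x000F001F;
    static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);

    [DllImport("kernel32", SetLastError = true)]
    static extern IntPtr CreateFileMapping(IntPtr hFile, IntPtr lpAttributes,
        uint flProtect, uint dwMaximumSizeHigh, uint dwMaximumSizeLow, string lpName);

    [DllImport("kernel32", SetLastError = true)]
    static extern IntPtr MapViewOfFile(IntPtr hFileMappingObject, uint dwDesiredAccess,
        uint dwFileOffsetHigh, uint dwFileOffsetLow, IntPtr dwNumberOfBytesToMap);

    [DllImport("kernel32")]
    static extern bool UnmapViewOfFile(IntPtr lpBaseAddress);

    [DllImport("kernel32")]
    static extern bool CloseHandle(IntPtr hObject);

    public static void Main()
    {
        // Passing INVALID_HANDLE_VALUE maps the system paging file instead of
        // a disk file, giving a named shared-memory block other processes can open.
        IntPtr hMap = CreateFileMapping(INVALID_HANDLE_VALUE, IntPtr.Zero,
            PAGE_READWRITE, 0, 4096, "MySharedBlock");
        IntPtr view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, IntPtr.Zero);

        Marshal.WriteInt32(view, 42);                 // write through the view
        Console.WriteLine(Marshal.ReadInt32(view));   // read it back

        UnmapViewOfFile(view);
        CloseHandle(hMap);
    }
}

With those primitives in mind, back to the survey of existing managed implementations.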
Finally, Michael VanHoutte has an article (also on codeproject.com) in which he "tore apart" the MS Caching Application Block and pared it down to the most basic elements to create a kind of simplified "Cache Service". His code is written in VB.NET, which I generally try to avoid whenever possible, but it is certainly one of the best implementations I've seen. And, not to forget, MVP Thomas Restropo "winterdom" has published his own implementation of Memory Mapped File API PInvoke methods. Initially, using Michael's example as a basis, I rewrote and enhanced his code into C# and added some additional features of my own to produce a new version of a C# "Memory Mapped Cache" service. My service adds one very important new element - besides the fact that different applications (including ASP.NET web applications) on the same machine can share .NET data in a common Cache, my creation also sports an asynchronous TCP Listener which enables applications on remote machines on the network to also make use of the cache located on a single central "cache machine". It operates very much the same way the ASP.NET StateServer service works, except that it is for global, enterprise-wide, inter-application data. My cache is configurable through standard AppSettings in the configuration file, both on the server and the clients, and offers "CacheProxy" and "CacheHelper" classes to make it relatively easy for remote applications, including web applications, to add or retrieve data from a remote cache. Best of all, as described above, it's fast. You can also have more than one named cache in operation simultaneously. However, after a series of stress tests I ran on my own using each developer's code, I found that only the original MetalWrench Toolbox assembly performed without fail under heavy load. This is most likely because it has a MemoryMappedFileStream class that is completely written in unsafe C# code with pointers. Consequently, after several revisions to my code and more testing, that is what I decided to use. You will also find a slightly modified version of the XYSocketLib class library which performs remarkably well under heavy multiple-client loads with zero memory leaks and a "dead man's EKG" in the Task Manager CPU meter. Let's have a look at the architecture of the Memory-Mapped Cache TCPListener Service (the architecture diagram is not reproduced here). As the diagram shows, client applications can talk directly to the Memory Mapped Caches hosted by the Cache Service, or, if the Cache Service is on another machine, they can use the CacheHelper API along with a special CacheProxy class that marshals the TCP socket calls, to talk to the Memory Mapped Caches on the remote Cache Service via TCP sockets. My Cache itself has a Cache.Items(string key) method that returns an object containing your cached type, and it has Add and Remove methods. You can also refer to a specific Cache instance with Cache(cacheName), so multiple instances of Memory Mapped File caches can be run. For instance, you might want to have two Caches - one for small, lightweight objects (for speed) and another named Cache for fewer, but larger objects. My CacheHelper class provides several overloaded methods that make interacting with the Cache class easy: public static object Items(string key, ActionType action,object value) public static object Items(string cacheName, ActionType action, string key, string ipAddress,int port, object payload) public object Add(string key) public object Remove(string key) The cache ipAddress and port parameters are configured in the appSettings section of the CacheService config file. The ActionType enum includes Get, Add, Remove and Result action types, which should be self-explanatory. In order to talk to the Cache on a remote machine, the CacheHelper API wraps your object and request in a CacheProxy class: using System; namespace MemoryMappedCache { public enum ActionType { Get, Add, Remove, Result } [Serializable] public class CacheProxy { public ActionType Action; public Object Payload; public string Key; public string CacheName; public CacheProxy(string cacheName, ActionType action, string key, object payload) { this.CacheName =cacheName; this.Action =action; this.Key =key; this.Payload =payload; } } } The CacheProxy class is essentially a "basket" that holds whatever information is needed to tell the receiving Cache what to do. It is serialized and deserialized to a byte array, and this is what is sent back and forth over the sockets, making for a very compact "packet" each way. Of course, if your Payload is a very large DataSet, for example, don't expect stellar performance from this - or from any other - caching mechanism. I only mention this because I remember how incredulous I was when one reader reported that he had been using my CompressedDataSet infrastructure to send a 22MB dataset back and forth over the wire...
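Before moving on to the listener plumbing, here is a hypothetical appSettings fragment of the kind referred to above. Only the "isLogging" key name is confirmed by the code shown later; the other key names and values are illustrative:

<configuration>
  <appSettings>
    <!-- address and port of the central cache machine (illustrative names) -->
    <add key="CacheServerIP" value="192.168.0.10" />
    <add key="CacheServerPort" value="8050" />
    <!-- confirmed key: set to false to suppress trace logging -->
    <add key="isLogging" value="false" />
  </appSettings>
</configuration>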
My CacheHelper class provides several overloaded methods that make interacting with the Cache class easy: public static object Items(string key, ActionType action,object value) public static object Items(string cacheName, ActionType action, string key, string ipAddress,int port, object payload) public object Add(string key) public object Remove(string key) The cache ipAddress and port parameters are configured in the appSettings section of the CacheService config file. The ActionType enum includes Get, Add, Remove and Result Action Types which should be self-explanatory. In order to talk to the Cache on a remote machine, the CacheHelper API wraps your object and request in a CacheProxy class: using System; namespace MemoryMappedCache { public enum ActionType { Get, Add, Remove, Result } [Serializable] public class CacheProxy { public ActionType Action; public Object Payload; public string Key; public string CacheName; public CacheProxy(string cacheName, ActionType action, string key, object payload) { this.CacheName =cacheName; this.Action =action; this.Key =key; this.Payload =payload; } } } The CacheProxy class is essentially a "basket" that holds whatever information is needed to tell the receiving Cache what to do. It is serialized and deserialized to a byte array, and this is what is sent back and forth over the sockets, making for a very compact "packet" each way. Of course, if your Payload is a very large DataSet, for example, don't expect stellar performance from this- or from any other caching mechanism. I only mention this because I remember how incredulous I was when one reader reported that he had been using my CompressedDataSet infrastructure to send a 22MB dataset back and forth over the wire... We receive a lot of forum posts here about how to create an asynchronous TCP Socket listener which uses the .NET ThreadPool under the hood, and so I am posting some sample code for same below (this is not the final code I decided to use for my socket server): using System; using System.Net.Sockets; using System.Net; using System.Text; using System.Threading ; using System.IO; using System.Diagnostics ; using System.Runtime.Serialization.Formatters.Binary ; using MemoryMappedCache; namespace MemoryMappedCache { public class AsyncListener { public Socket s=null; public bool isLogging=Convert.ToBoolean(System.Configuration.ConfigurationSettings.AppSettings["isLogging"]); public void StartListening(int port) { try { // Resolve local name to get IP address IPHostEntry entry = Dns.Resolve(Dns.GetHostName()); IPAddress ip = entry.AddressList[0]; // Create an end-point for local IP and port IPEndPoint ep = new IPEndPoint(ip, port); if(isLogging)TraceLog.myWriter.WriteLine ("Address: " + ep.Address.ToString() +" : " + ep.Port.ToString(),"StartListening"); EventLog.WriteEntry("MMFCache Async Listener","Listener started on IP: " + ip.ToString() + " and Port: " +port.ToString()+ "."); // Create our socket for listening s = new Socket(ep.AddressFamily, SocketType.Stream, ProtocolType.Tcp); // Bind and listen with a queue of 100 s.Bind(ep); s.Listen(100); // Setup our delegates for performing callbacks acceptCallback = new AsyncCallback(AcceptCallback); receiveCallback = new AsyncCallback(ReceiveCallback); sendCallback = new AsyncCallback(SendCallback); // Set the "Accept" process in motion s.BeginAccept(acceptCallback, s); } catch(SocketException e) { Console.Write("SocketException: "+ e.Message); } } AsyncCallback acceptCallback; AsyncCallback sendCallback ; void AcceptCallback(IAsyncResult ar) { 
try { // Cast the user data back to a socket object Socket s = ar.AsyncState as Socket; // End the accept and get the resulting client socket Socket s2 = s.EndAccept(ar); // Keep the "Accept" process in motion s.BeginAccept(acceptCallback, s); // Create a state object for client (real apps may cache these) StateObject state = new StateObject(); state.workerSocket = s2; // Start an async receive state.workerSocket.BeginReceive(state.buffer, 0, state.buffer.Length, 0, receiveCallback, state); } catch(SocketException e) { Debug.WriteLine(e.Message); if(isLogging)TraceLog.myWriter.WriteLine( "SocketException:"+ e.Message+e.StackTrace,"AcceptCallback"); } return; // Return the thread to the pool } // Async receive method + matching delegate variable AsyncCallback receiveCallback; void ReceiveCallback(IAsyncResult ar) { int i=0; string data=String.Empty; try { StateObject state = ar.AsyncState as StateObject; i = state.workerSocket.EndReceive(ar); if(i==0) { if(isLogging)TraceLog.myWriter.WriteLine("Shutting down socket.","ReceiveCallback"); state.workerSocket.Shutdown(SocketShutdown.Both); state.workerSocket.Close(); } else { state.ms.Write(state.buffer ,0 ,i); state.workerSocket.BeginReceive(state.buffer, 0, state.buffer.Length, 0, receiveCallback, state); if(i <state.buffer.Length) { byte[] result=HandleMessage(state); state.workerSocket.BeginSend(result, 0, result.Length, 0, sendCallback, state); } } } catch(SocketException e) { if(isLogging)TraceLog.myWriter.WriteLine("SocketException: "+ e.Message,"ReceiveCallback"); } return; // Return the thread to the pool } // Async send method + matching delegate variable void SendCallback(IAsyncResult ar) { int i=0; try { // Cast the state to an object StateObject state = ar.AsyncState as StateObject; i = state.workerSocket.EndSend(ar); // Begin another receive on the thread state.workerSocket.BeginReceive(state.buffer, 0, state.buffer.Length, 0, receiveCallback, state); } catch(SocketException e) { Debug.WriteLine(e.Message); if(isLogging)TraceLog.myWriter.WriteLine("SocketException: "+ e.Message,"SendCallback"); } return; // Return the thread to the pool } private static byte[] HandleMessage(StateObject state) { byte[] bytResponse=null; BinaryFormatter b= new BinaryFormatter(); state.ms.Position =0; CacheProxy proxy = (CacheProxy) b.Deserialize(state.ms); if(proxy.Action ==ActionType.Get) { string key=proxy.Key ; string cacheName=proxy.CacheName ; MemoryMappedCache.Cache c= new MemoryMappedCache.Cache(cacheName); object payload =c[key]; /* get the cache item from the key, package into a new CacheProxy, serialize and send out */ CacheProxy proxyResult = new CacheProxy(cacheName,ActionType.Result ,key,payload); MemoryStream ms = new MemoryStream(); b.Serialize(ms, proxyResult); bytResponse=ms.ToArray(); } else if(proxy.Action ==ActionType.Add) { string key=proxy.Key ; string cacheName=proxy.CacheName ; MemoryMappedCache.Cache c= new MemoryMappedCache.Cache(cacheName); c[key]=proxy.Payload ; /* acknowledge the add with an empty Result proxy; this response block was garbled in the source and has been reconstructed to mirror the Get branch */ CacheProxy proxyResult = new CacheProxy(cacheName,ActionType.Result ,key,null); MemoryStream ms = new MemoryStream(); b.Serialize(ms, proxyResult); bytResponse=ms.ToArray(); } else if (proxy.Action ==ActionType.Remove) { string key=proxy.Key ; string cacheName=proxy.CacheName ; MemoryMappedCache.Cache c= new MemoryMappedCache.Cache(cacheName); c.Remove(key); /* acknowledge the remove the same way; also reconstructed */ CacheProxy proxyResult = new CacheProxy(cacheName,ActionType.Result ,key,null); MemoryStream ms = new MemoryStream(); b.Serialize(ms, proxyResult); bytResponse=ms.ToArray(); } return bytResponse; } } // end class } // end namespace And here is the StateObject class I've concocted to hold the async state items. I stuff the buffer contents into the ms MemoryStream each time around: public class StateObject { /* Client socket (renamed from workSocket so it matches the callbacks above) */ public Socket workerSocket = null; // Size of receive buffer. public const int BufferSize = 8192; // Receive buffer.
public byte[] buffer = new byte[BufferSize]; public MemoryStream ms = new MemoryStream(); } With the CacheHelper class, talking to the Memory Mapped File Cache on a remote machine is as simple as: CacheHelper ch= new CacheHelper(); ch["ds"]=DataSet1; DataSet dataset2=(DataSet) ch["ds"]; In the downloadable solution below, you can run the TestServer console app to test out the server without having to install the Windows Service. Then, choose Debug/Start new instance on the TestClient console app by right-clicking the TestClient project in Solution Explorer. You will see a series of test cases repeated via the CacheHelper API talking to an instance of the Memory Mapped Cache via TCP sockets, including the serialization of a complete DataSet containing the Northwind Employees table. NOTE: This is a work in progress and there are more enhancements, bug fixes, and a lot more testing to come, as time permits. For these reasons, I caution readers that I do not yet consider this code ready for production. However, if you are interested in using this as a basis for further exploration of memory-mapped file caching, you can download the full solution below. Be sure to change any instances of the Cache IpAddress and port to match what you need for your own testing, and make sure that the "isLogging" appSettings item is set to "false" in order to suppress debugging log actions and Debugger.Launch() statements. And, please be kind enough to post either below or at our forums whatever discoveries you make! As of 12/20/05, I have completely reworked the code to use the MetalWrench API (the fastest) and added a completely new multithreaded socket server and client. Download the Visual Studio.NET Solution that accompanies this article
http://www.eggheadcafe.com/articles/20050116.asp
crawl-002
en
refinedweb
Hi, I have several custom controls that all work correctly when placed on a form and appear on the toolbox. The problem that I'm seeing is that 2 of the 5 controls disable form inheritance when placed on the base form. If I remove both of these controls from the base form then my form inheritance works again and I can see the other controls on the form. I get the error "Visual inheritance is currently disabled because the base class references a device-specific component or contains p/invoke" on the non-base forms at design time. I have checked and neither of the controls uses p/invoke. What would a device-specific component be? The SIP control? I remarked it out. What other things could cause this? Anyone care to see the code? These are some of the references. using System; using System.Collections.Generic; using System.ComponentModel; using System.Drawing; using System.Data; using System.Text; using System.Windows.Forms; using System.Threading; using Microsoft.WindowsCE.Forms; using System.Diagnostics; I also have a reference to "System.Windows.Forms.DataGrid" in the Solution Explorer. However I can't add "using System.Windows.Forms.DataGrid" because I get the error that it is a type, not a namespace. These 2 controls are the only ones that have a reference to the datagrid (possible clue). So if I have a custom control that refers to the datagrid, does this kill form inheritance? Thanks UPDATE I removed my references to the grid control and remarked out all lines using the grid. Now the custom control works with form inheritance. However this creates a big problem for me. I need to see the datagrid on forms that inherit from the base form so I can set table and column styles at design time :( Any ideas what can be done to allow a datagrid on a base form to work with form inheritance? Thanks "Jen" <[email protected]> wrote in message news:[email protected]... CLARIFICATION These are custom controls that reference the datagrid control. If the references to the datagrid are pulled out of the custom control then form inheritance works again. "Jen" <[email protected]> wrote in message news:[email protected]... There's good information in this article... You may need to tag your custom control(s) with the "DesktopCompatible" attribute. -- Tim Wilson .NET Compact Framework MVP "Jen" <[email protected]> wrote in message news:[email protected]...
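A sketch of what that tagging looks like on a custom control follows; the System.CF.Design namespace is a best guess from memory and should be verified against the linked article:

// Marks the control as safe for the desktop designer to instantiate,
// even though it touches device-specific pieces at runtime.
[System.CF.Design.DesktopCompatible(true)]
public class MyGridControl : System.Windows.Forms.Control
{
    // custom control wrapping a DataGrid goes here
}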
http://www.eggheadcafe.com/forumarchives/netframeworkcompactframework/sep2005/post24403196.asp
crawl-002
en
refinedweb
April 2000 This article introduces the STL standard library and the thread-safety issues that arise when the standard library is used in multi-threaded mode. It explains how the current implementation, which uses coarser-granularity locks, causes a performance penalty. The penalty is assessed using a simple test that creates string objects in multi-threaded mode. This article also describes an alternative locking technique and a possible future enhancement that can improve performance, and compares their relative merits to the current mutex-based critical sections. Finally, it explains how to use the WorkShop compilers with other standard libraries that are currently available. The Sun WorkShop 5.0 compiler currently provides a C++ standard library in archive form (libCstd.a). You can use this library to develop threaded applications. The standard library is based on Rogue Wave standard library version 2.01.01. Rogue Wave documentation on thread safety states that: For example, if you instantiate a string and create a new thread and you want to pass that string to the thread by reference, you will need to lock around write access to that string because you are explicitly sharing the one string object between threads. The library provides a mutex class (mutex_lock and mutex_unlock from the thread library) to accomplish this task. On the other hand, if you pass the string to the new thread by value, then you will not need to lock, even though the string in the two different threads may share a representation through Rogue Wave's "copy on write" technology. The library handles the locking automatically and provides thread safety at a finer granularity level via a separate reference counting class (for example, string_ref for strings). In other words, the reference counting is updated automatically in a thread-safe manner. Hence you are only required to lock when making an object available to multiple threads explicitly, either by passing references between threads or by using global or static objects. Classes (for example, string) use a technique called copy on write to minimize copying. This technique offers the advantage of easy-to-understand value semantics with the speed of a reference-counted pointer implementation. This section illustrates how the technique works. When you initialize a String with another String via the copy constructor: String(const RWCString&); the two strings share the same data until one of them tries to write to it. At that point, a copy of the data is made, and the two strings go their separate ways. Copying only at "write" time makes copies of strings, particularly read-only copies, very inexpensive. The following example shows how four objects share one copy of a string until one of the objects attempts to change the string: #include <string.h> String g; // Global object void setGlobal(String x) { g = x; } main(){ String a("kernel"); // 1 String b(a); // 2 String c(a); // 3 setGlobal(a); // Still only one copy of "kernel"! //4 b += "s"; // Now b has its own data: "kernels" //5 } The reference counter is incremented and decremented under the protection of a lock. The library is safe as long as objects are shared only by value.
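When a string is instead shared explicitly - by reference or as a global - the guideline above requires the user to serialize access with the mutex facilities just mentioned. A minimal sketch using the Solaris threads API (the object names are illustrative):

#include <synch.h>

String shared("kernel");   // explicitly shared between threads
mutex_t shared_lock;       // initialize once with mutex_init(&shared_lock, USYNC_THREAD, NULL)

void writer() {
    mutex_lock(&shared_lock);
    shared += "s";             // write access must be serialized
    mutex_unlock(&shared_lock);
}

void reader(String &out) {
    mutex_lock(&shared_lock);
    out = shared;              // even a copying read of the shared object is guarded
    mutex_unlock(&shared_lock);
}

The next example shows what can go wrong when this guarding is omitted.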
So if you pass std::strings by reference, you will need to lock access and enforce one-thread-at-a-time access yourself. Take the following example: std::string str = "A string"; a() { ... thr_create(NULL, 0, b, (void *)NULL, 0,&tid); str = ""; } b() { std::string strcopy = str; } If both assignments happen at the same time, the first one can see the reference count go to 0 and free the memory, while the second assignment still sees the count at 1 and adds a reference to the now-deallocated representation. This example demonstrates that even a simple operation such as "=" needs a lock to prevent a race condition. In other words, every accessible member function should be guarded, and that can result in a severe performance penalty. To assess the performance implication, a level_2 thread safety model was implemented for the string class. The header files and string class members were modified to enforce one-thread-at-a-time access. Take a look at a sample implementation: #if defined (_RWSTD_MULTI_THREAD) && defined (_RWSTD_STRING_MT_SAFETY_LEVEL_2) # define MT_GUARD(name) _RWSTDGuard name (this->__mutex) #else # define MT_GUARD(name) ((void)0) #endif // _RWSTD_MULTI_THREAD When level_2 thread safety is defined, a Rogue Wave _RWSTDGuard object is constructed and holds the string's mutex for the duration of the call; otherwise the macro expands to a no-op. The actual implementation for the operator "=" method looks like the following: template <class charT, class traits, class Allocator > basic_string<charT, traits, Allocator> & basic_string<charT, traits, Allocator>::operator= ( const basic_string<charT, traits, Allocator>& str) { if (this != &str) { MT_GUARD(guard); if (str.__pref()->__references() > 0) { str.__pref()->__addReference(); __unLink(); __data_ = str.__data_.data(); } else this->operator=(str.c_str()); } return *this; } As you can see, the locking is introduced at a much coarser level, serializing the entire invocation. Another optimization uses atomic updates for counters instead of mutexes. With level 2 guarding at this coarser grain, the critical section is much larger, so the overhead of the mutex-based guard is small compared to the amount of work done inside the critical section. For reference counting, however, the counter increment and decrement operations are more efficient with atomic updates, because the overhead of calling the mutex methods dwarfs the update itself. This section discusses the two optimization techniques. One uses general swap-based spin blocking, while the other uses compare-and-swap (CAS) based updates. You can use spin blocking as a general replacement for mutexes. The CAS-based Fetch_and_Add is specific to counter increments and decrements and requires no guarding at all. The swap-based spin-blocking guard methods are shown below; they were implemented transparently in the regular header files. In the following example, the SWAPINITLOCK, SWAPLOCK and SWAPUNLOCK macros are called in place of pthread_mutex_init, pthread_mutex_lock and pthread_mutex_unlock: extern "C" Test_and_Set(long *,long); #define SWAPINITLOCK(lp) (*(lp) = 0) #define SWAPLOCK(lp) while (Test_and_Set(lp, 1L)); #define SWAPUNLOCK(lp) Test_and_Set(lp,0L) Test_and_Set is implemented as an inline function using swap: .inline Test_and_Set,8 mov %o0,%o2 mov %o1,%o0 swap [%o2],%o0 .end In CAS-based updates, the _RWSTD_MT_INCREMENT macro is defined to be a Fetch_and_Add inline assembler function: .inline fetch_and_add,4 retry: ld [%o0],%l0 add %l0,%o1,%l1 cas [%o0],%l0,%l1 cmp %l0,%l1 bne retry mov %l1,%o0 nop .end A simple multi-threaded server was designed to test and compare performance. For each case, a new library (libCstd.a) was built, and timing measurements were done under the same conditions on the same system. An Ultra 60 repeatedly created string objects in multi-threaded mode. The tests show that coarser-grain locks (level 2) produce a 25-30% performance penalty; sharing objects among threads by value remains more efficient than sharing by reference. Furthermore, the atomic swap-based updates show performance benefits only when the number of CPUs on the system is greater than or equal to the number of concurrent servers. When the number of servers exceeds the number of available CPUs, swap-based spin blocking is inefficient and shows marked performance degradation. A new technique that completely avoids the guard class for counter increments and decrements has been implemented. It uses the SPARC V9 CAS instruction for atomic updates of the reference count. This technique shows by far the best performance of all the schemes, and the results on 8 CPUs show up to a 40-50% performance benefit. The table will be updated as soon as tests are completed. A study was performed using a multi-threaded server to assess the performance of the standard library under different levels of synchronization. The study shows the advantage of providing users with a pluggable STL and standard library so that they can tune locking to their requirements. Evaluation of other STL implementations has shown that they integrate successfully with the WorkShop compilers.
http://developers.sun.com/solaris/articles/stl-new.html
crawl-002
en
refinedweb
Microsoft 12:52:48 PM Hug a Java Developer Today. [Objective] Chris Hollander responds in a much calmer way than I did to Russell Beattie's fiery rant. 12:37:25 PM 2:07:31 PM Of COURSE I Hate Microsoft. ... Furthermore, if you work for - or with - Microsoft, you need to do a reality check. I've said this before, but the question still remains: how do you sleep at night? How do you take pride in your work? Do you like copying other people's innovations? Do you like working with felons? If you're just making a buck, hey, more power to you. If you're one of those gung-ho borged-out drones... Well, all you're doing is making the whole computing industry a worse place to work in. And you KNOW it. ... [Russell Beattie Notebook] I sleep just fine at night. My work is to make the components I work with more featureful, secure and stable. While most of what I work on is just operating system plumbing (http.sys, and now winhttp and wininet), it's hard to ignore the innovations that are built into them. The basic concept of a kernel switch of an http namespace is not something I've seen elsewhere, even though it is such a simple, useful idea. I see enough of what's going on around the company to stay convinced that there is plenty of new stuff that is exciting, cool and original. The technology and products that Microsoft makes are good enough that others copy them, and irreplaceable enough that even you use them. I take pride when I talk to someone who likes Windows XP. I take pride in things like Windows Error Reporting. I feel shame in things like the "Sign up for a Passport" popup notifications. It's a big company doing cool things; some bad, most good. I've only worked here for three and a half years, so I missed most of the stuff that generated the antitrust trials; maybe the company back then was the definition of evil, but today I can't find anything to be pissed about other than which bug I think should be fixed before we ship, and which feature won't be done in time for the next release that I was really looking forward to. 11:15:21 AM OK, let's leave ActiveWords out of it... but reading Scoble's response, I want to comment: First, smart tags in IE weren't meant to be on by default. The feature was turned on in an early beta as a way to get some coverage in the wild, but it wasn't meant to ship that way. Second, I still don't buy the "getting in between the author and the reader" argument, since plugins that do so will get turned off by the reader. Utilities stay and get used only to the degree that they add value for the end user. If the user is reading your site, it's a safe bet that they want to read your content, and he/she won't tolerate something that distorts it. Third, I miss the potential of the idea: every other week or so, a place where the technology would be useful for me pops up. Yesterday, it was stuff like a dictionary lookup on words I don't know the definition of. A week ago it was generating entries in my calendar when a date and time is specified. The week before it was making Keyhole's EarthViewer go to an address, coordinates, or a city+state when one shows up on a web page. Today I imagine that I could have started this weblog entry with a smart tag that understands individual items on Scoble's weblog, knows what blog software I use, and gives me an option similar to Radio's "post" link. I'm still a bit disappointed that some people's reactions were so violently against it that they couldn't see the good stuff and try to find a middle ground that makes everyone happy.
2:27:04 PM One came from Microsoft and the other didn't. It doesn't matter if it was a technology for other people to build on. It doesn't matter that it was on or off by the choice of the end user. Nothing in the design could have mitigated the fear that people had of Microsoft's power over the browser, especially at a time when the antitrust case was still looming. A coworker and I started wondering today if the same lessons and dilemmas that the US is encountering with Iraq are applicable to Microsoft. There is power in going multilateral and building organizations. There are problems when you don't. Also, such organizations can get old and become too slow to be useful, or get torn apart by differing interests. Perhaps the trick is to take a "coalition of the willing", bypass the stop energy, and form a new organization. I expect that there is a delicate balance between being a useful place to get work done and being a shell that is one player hiding under a different name. Sometimes you cannot get anyone to take your side even if what you are doing works fine for most; at that point you need to rethink what you are doing and come back at it from a different direction. 1:16:35 PM Sam tells how IIS uses http.sys. My side note is that you too can use port 80 while IIS is running, via the new Win2k3 Http API. ;) 5:53:02 PM /. somehow expects that Office using XML will somehow make their own Word replacements have the same features as Word. Where the hell do they get these ideas? I expect that at some point people will develop a pretty decent XSL(T) that will help convert with minimal lossiness, but (if the Office team keeps doing their jobs) there will always be a more featureful Office than what the freebie Office clone gives you. 11:44:25. [The Scobleizer Weblog] I think it's more accurate to say that the jury is out on this point. A lot of old stuff will probably still work, but there are a lot of architectural changes going on, and there will be compatibility problems as a result. I hope the majority of the breaking changes will be visible around PDC time. Also, the usual compatibility technologies help reduce problems as they occur. If it's any consolation, I'm a Longhorn selfhoster, and I scream bloody murder if any app I use breaks. ;) 11:20:41 AM 12:17:33 PM Better Bandwidth Utilization. jtorin writes "Daniel Hartmeier (of OpenBSD fame) has written a short but interesting article which explains how to better utilize available bandwidth. In ... [Slashdot] This sounds like a different way to solve a problem we worked on for Windows XP. Part of the problem is the big buffers on the modem that fill up and don't allow new connections to reach a stable state fast enough. We solved this with DRR fair queuing. These folks are doing it via the acks. I think the Windows XP technique is more general, but I wonder which works better. UPDATE: They are somewhat unrelated, in that the BSD work is for faster asymmetrical links, while the XP stuff is for 56K and lower. 11:05:18 AM Bush Offers Taxpayers Another $300 If We Go To War [The Onion] The Onion does a brilliant job tying together three major political threads into a really funny article. 10:48:56 AM
http://radio.weblogs.com/0100529/
crawl-002
en
refinedweb
Paul Murphy asked people to run the following code using default compilers and no special switches, and to report the result: % cc -o sqr -O sqr.c -lm % time ./sqr #include <stdio.h> #include <math.h> main() { register i; double f=0.0; double sqrt(); for(i=0;i<=1000000000;i++) { f+=sqrt( (double) i); } printf("Finished %20.14f\n",f); } The table at the end of the article summarizes some reported results. Studying these suggests the questions: why are the results different for different (IEEE 754 compliant) computers and different compilers, what is the "correct" result, how is it found, why are some results apparently more accurate than others, and why are the results on Sun, IBM, and other RISC CPU based computers about the same? Murphy discovered that one method of computing a "correct" result was the utilization of the Euler-Maclaurin formula and high-precision computation. It gives 21081851083600.37596259382529338 as a good approximation to the correct result. The main points are discussed in detail below. For a good foundation in floating-point computing see the ACM article by David Goldberg, "What Every Computer Scientist Should Know About Floating-Point Arithmetic", especially with the addendum by Douglas Priest that addresses IA32. Also of interest might be Joe Darcy's JavaOne talk on What Everybody Using the Java Programming Language Should Know About Floating-Point Arithmetic. The table presents results from some of the reported runs. Entry #2 shows a result that, in decimal representation, is in error by -.00096259382529338. However, observe that the hex representation shows an error of 0 ULPs - an exact match. Consider now the errors encountered in the survey, based on the previous table. Consider #1 as perfect. Errors can be considered in terms of decimal representation and ULPs. When considering the absolute error alone, SPARC results appear very inaccurate. However, considering the ratio error/correct, the variance between results is in fact very small. There are two sources of error in the algorithm as provided in Murphy's article. The first is due to the sqrt operation, and the second is due to the addition in summing the square roots. Sqrt rounding error: the largest rounding errors for the sqrt operations should occur for the largest numbers. A one ULP error for sqrt(1,000,000,000) is on the order of 2*10^(-12). This rounding error is very small compared to the one that is possible due to the addition that follows the sqrt operations. The sum just before sqrt(1000000000) is 21081851051977.5977. A 1 ULP error in the sum value results in an error of about .0029 - huge compared to the rounding error of the sqrt operation. The SPARC-quad run shows that a lot more precision will yield the correct result. The cost, as Murphy points out, is significantly more compute time. The question is, can something be done to minimize the inaccuracy without pushing precision to quad? Yes. The following algorithm uses the notion of compensated summation, which deals with the rounding error that is associated with each add in the summation. #include <stdio.h> #include <math.h> main() { register i; double sum=0.0,newsum,f; double sumerr=0.0; double err; double sqrt(); for(i=1;i<=1000000000;i++) { f=sqrt( (double) i); newsum = sum + f; err = (newsum - sum ) - f; sumerr += err; sum = newsum; } printf("Finished %20.14f\n",sum); printf("Finished %20.14f\n",sum-sumerr); } This program was compiled and linked with the same command, but returned results matching Correct while taking only about 20% more compute time. The entry "SPARC double+e" in the following table shows the result using compensated summation. We offer a bit more detail about compensated summation in the Notes section at the end of this article. There is a clustering of systems in terms of errors, identified as RISC machines that adhere to the IEEE 754 64-bit format for double precision work. The best results were from systems based on Intel's Pentium model. Pentium systems have extended precision registers, 80 bits versus SPARC's 64 bits in double. In addition, Pentium systems allow intermediate results to be kept in extended format. An explicit call is required to force the Pentium processor to keep intermediate results in 64-bit format. As the preceding section shows, the critical issue of concern here should be the precision used in the problem. As Murphy points out, the correct value was calculated using 200 digits of precision. While the results from the Pentium class machines were better, the question arguably remains: are any of these answers bad? We contend that the answer to this question is no. Given the magnitude of the number and the size of the error, the results are all perfectly reasonable. Finally, what should be learned from this example is that a precision level may work for one algorithm and data set, but fail if the data changes or if the algorithm is modified. Another interpretation of this precision dependence is that C compilers don't have strong guarantees about what "double" means. In some versions of gcc with some compiler options, "double" can mean 80 bits instead of 64 bits. Some of the gcc compilers might be new enough to have some C99 support, so it is conceivable that double_t is being used instead of double. If one wants more accurate results without compensated summation, one should declare f as "long double" or "double_t". If one wants predictable results one can use Java, even on x86. The one pertinent observation in Murphy's analysis of his example is that the results on SPARC systems dating back to 1989 are all the same, whereas the results on x86 systems vary in ways that are not at all obvious. Notes. In the original algorithm: for(i=0;i<=1000000000;i++) { f+=sqrt( (double) i); } whenever the addition is done, there are two possibilities: either the exact sum is representable as a double and is stored exactly, or it is not, and the stored value is the rounded result. With each addition, the rounding error is ignored, and after a billion additions the accumulated effect is noticeable in this problem. The compensating algorithm does the following: newsum = sum + f; err = (newsum - sum) - f; sumerr += err; Here (newsum - sum) recovers the value of f as it was actually absorbed into the sum, so err captures the rounding error of each addition; the final correction sum - sumerr removes the accumulated error. Even the simple example of square-root summation affords the opportunity to appreciate the complexity of floating-point computation. The complexity is inherent in the attempt to model arithmetic with real numbers, which in "true" arithmetic have infinite precision, by a finite-precision approximation. Almost every arithmetic operation, therefore, results in an error and, in fact, it is in addition/subtraction that the largest errors can occur. Thus floating-point computation is epitomized by the need to understand and estimate the error to ensure the validity of what one computes. About the Authors Gregory Tarsy is the manager of the Floating Point and Numerical Computing group at Sun Microsystems. Prior to joining Sun he worked in scientific applications on supercomputers and was a professor of mathematics at the University of California and City University of New York. Neil Toda is a member of Sun's Floating Point and Numerical Computing group. Prior to joining Sun, he worked in Operating Systems Engineering, Database Engineering, and Software Performance Engineering.
http://developers.sun.com/solaris/articles/fp_errors.html
crawl-002
en
refinedweb
strait 0.5.1 Simple Traits for Python A simple implementation of traits for Python Abstract I provide a simple implementation of traits as units of composable behavior for Python. I argue that traits are better than multiple inheritance. Implementing frameworks based on traits is left as an exercise for the reader. Motivation Multiple inheritance is a hotly debated topic. The supporters of multiple inheritance claim that it makes code shorter and easier to read, whereas the opposers claim that it makes code more coupled and more difficult to understand. I have spent some time in the past facing the intricacies of multiple inheritance in Python and I was one of its supporters once; however, since then I have worked with frameworks making large use of multiple inheritance (I mean Zope 2) and nowadays I am among the people who oppose it. Therefore I am interested in alternatives. In recent years, the approach of traits has gained some traction in a few circles and I have decided to write a library to implement traits in Python, for experimentation purposes. The library is meant for framework builders, people who are thinking about writing a framework based on multiple inheritance - typically via the common mixin approach - but are not convinced that this is the best solution and would like to try an alternative. This library is also for authors of mixin-based frameworks who are unsatisfied and would like to convert their framework to traits. Are traits a better solution than multiple inheritance and mixins? In theory I think so, otherwise I would not have written this library, but in practice (as always) things may be different. It may well be that using traits or using mixins does not make a big difference in practice and that the change of paradigm is not worth the effort; or the opposite may be true. The only way to know is to try, to build software based on traits and to see how it scales in the large. In the small, more or less any approach works fine: it is only by programming in the large that you can see the differences. This is the reason why I am releasing this library with a liberal licence, so that people can try it out and see how it works. The library is meant to play well (when possible) with pre-existing frameworks. As an example, I will show here how you could rewrite Tkinter classes to use traits instead of mixins. Of course, I am not advocating rewriting Tkinter: it would be silly and pointless; but it may make sense (or not) to rewrite your own framework using traits, perhaps a framework which is used in house but has not been released yet. I am not the only one to have implemented traits for Python; after finishing my implementation I did a little research and discovered a few other implementations. I have also discovered the Enthought Traits framework, which, however, seems to use the name to mean something completely different (i.e. a sort of type checking). My implementation has no dependencies, is short, and I am committed to keep it short even in the future, according to the principle of less is more. There is also a hidden agenda behind this module: to popularize some advanced features of the Python object model which are little known. The strait module is actually a tribute to the metaprogramming capabilities of Python: such features are usually associated with languages with a strong academic tradition - Smalltalk, Scheme, Lisp - but actually the Python object model is no less powerful.
For instance, changing the object system from a multiple inheritance one to a trait-based one can be done within the fundamental object system. The reason is that the features that Guido used to implement the object system (special method hooks, descriptors, metaclasses) are there, available to the end user to build her own object system. Such features are usually little used in the Python community, for many good reasons: most people feel that the object system is good enough and that there is no reason to change it; moreover there is a strong opposition to changing the language, because Python programmers believe in uniformity and in using common idioms; finally, it is difficult for an application programmer to find a domain where these features are useful. An exception is the domain of Object Relational Mappers, where the Python language is often stretched to mimic the SQL language - a famous example of this tendency being SQLAlchemy. Still, I have never seen a perversion of the object model as big as the one implemented in the strait module, so I wanted to be the first one to perform that kind of abuse ;) What are traits? The word traits has many meanings; I will refer to it in the sense of the paper Traits - Composable Units of Behavior which implements them in Squeak/Smalltalk. The paper appeared in 2003, but most of the ideas underlying traits have been floating around for at least 30 years. There is also a trait implementation for PLT Scheme which is somewhat close in spirit (if not in practice) to what I am advocating here. The library you are reading about is by no means intended as a port of the Smalltalk library: I am just stealing some of the ideas from that paper to implement a Pythonic alternative to mixins which, for lack of a better name, I have decided to call traits. I feel no obligation whatsoever to be consistent with the Smalltalk library. In doing so, I am following a long tradition, since a lot of languages use the name traits to mean something completely different from the Smalltalk meaning. For instance the languages Fortress and Scala use the name trait but with a different meaning (Scala traits are very close to multiple inheritance). For me a trait is a bunch of methods and attributes with the following properties: 1. the methods/attributes in a trait belong logically together; 2. if a trait enhances a class, then all subclasses are enhanced too; 3. if a trait has methods in common with the class, then the methods defined in the class have the precedence; 4. the trait order is not important, i.e. enhancing a class first with trait T1 and then with trait T2, or vice versa, is the same; 5. if traits T1 and T2 have names in common, enhancing a class with both T1 and T2 raises an error; 6. if a trait has methods in common with the base class, then the trait methods have the precedence; 7. a class can be seen both as a composition of traits and as a homogeneous entity. Properties 4 to 7 are the distinguishing properties of traits with respect to multiple inheritance and mixins. In particular, because of 4 and 5, all the complications with the Method Resolution Order disappear and the overriding is never implicit. Property 6 is the most unusual one: typically in Python the base class has the precedence over mixin classes. Property 7 should be understood in the sense that a trait implementation must provide introspection facilities to make seamless the transition between classes viewed as atomic entities and as composed entities.
A hands-on example Let me begin by showing how you could rewrite a Tkinter class to use traits instead of mixins. Consider the Tkinter.Widget class, which derives from the base class BaseWidget and the mixin classes Tkinter.Grid, Tkinter.Pack and Tkinter.Place: I want to rewrite it by using traits. The strait module provides a factory function named include that does the job. It is enough to replace the multiple inheritance syntax: class Widget(BaseWidget, Grid, Pack, Place): pass with the following syntax: class Widget(BaseWidget): __metaclass__ = include(Pack, Place, Grid) I said that the conversion from mixins to traits was easy: but actually I lied, since if you try to execute the code I just wrote you will get an OverridingError: >>> from Tkinter import * >>> class Widget(BaseWidget): ... __metaclass__ = include(Pack, Place, Grid) Traceback (most recent call last): ... OverridingError: Pack overrides names in Place: {info, config, configure, slaves, forget} The reason for the error is clear: both Pack and Place provide methods called {info, config, configure, slaves, forget} and the traits implementation cannot figure out which ones to use. This is a feature, since it forces you to be explicit. In this case, if we want to be consistent with multiple inheritance rules, we want the methods coming from the first class (i.e. Pack) to take precedence. That can be implemented by including those methods directly in the class namespace and relying on rule 3: class TOSWidget(BaseWidget): __metaclass__ = include(Pack, Place, Grid) info = Pack.info.im_func config = Pack.config.im_func configure = Pack.configure.im_func slaves = Pack.slaves.im_func forget = Pack.forget.im_func propagate = Pack.propagate.im_func Notice that we had to specify the propagate method too, since it is a common method between Pack and Grid. You can check that the TOSWidget class works, for instance by defining a label widget as follows (remember that TOSWidget inherits its signature from BaseWidget): >>> label = TOSWidget(master=None, widgetName='label', ... cnf=dict(text="hello")) You may visualize the widget by calling the .pack method: >>> label.pack() This should open a small window with the message "hello" inside it. A few caveats and warnings First of all, let me notice that, despite appearances, include does not return a metaclass. Instead, it returns a class factory function with signature name, bases, dic: >>> print include(Pack, Place, Grid) <function include_Pack_Place_Grid at 0x...> This function will create the class by using a suitable metaclass: >>> type(TOSWidget) <class 'strait.MetaTOS'> In simple cases the metaclass will be MetaTOS, the main class of the trait object system, but in general it can be a different one not inheriting from MetaTOS. The exact rules followed by include to determine the right class will be discussed later. Here I want to remark that according to rule 6 traits take precedence over the base class attributes. Consider the following example: >>> class Base(object): ... a = 1 >>> class ATrait(object): ... a = 2 >>> class Class(Base): ... __metaclass__ = include(ATrait) >>> Class.a 2 In regular multiple inheritance you would do the same by including ATrait before Base, i.e. >>> type('Class', (ATrait, Base), {}).a 2 You should take care not to mix up the order, otherwise you will get a different result: >>> type('Class', (Base, ATrait), {}).a 1 Therefore replacing mixin classes with traits can break your code if you rely on the order. Be careful!
The Trait Object System The goal of the strait module is to modify the standard Python object model, turning it into a Trait Object System (TOS for short): TOS classes behave differently from regular classes. In particular, TOS classes do not support multiple inheritance. If you try to multiple inherit from a TOS class and another class you will get a TypeError: >>> class M: ... "An empty class" ... >>> class Widget2(TOSWidget, M): ... pass ... Traceback (most recent call last): ... TypeError: Multiple inheritance of bases (<class '__main__.TOSWidget'>, <class __main__.M at 0x...>) is forbidden for TOS classes This behavior is intentional: with this restriction you can simulate an ideal world in which Python did not support multiple inheritance. Suppose you want to claim that supporting multiple inheritance was a mistake and that Python would have been better off without it (which is the position I tend to have nowadays): how can you prove that claim? Simply by writing code that does not use multiple inheritance and that is clearer and more maintainable than code using multiple inheritance. I am releasing this trait implementation hoping you will help me to prove (or possibly disprove) the point. You may see traits as a restricted form of multiple inheritance without name clashes, without the complications of the method resolution, and with a limited cooperation between methods. Moreover, the present implementation is slightly less dynamic than usual inheritance. A nice property of inheritance is that if you have a class C inheriting from class M and you change a method in M at runtime, after C has been created and instantiated, automagically all instances of C get the new version of the method, which is pretty useful for debugging purposes. This feature is lost in the trait implementation provided here. Actually, in a previous version, my trait implementation was fully dynamic and if you changed the mixin the instances would be changed too. However, I never used that feature in practice, and it was complicating the implementation and slowing down the attribute access, so I removed it. I think these are acceptable restrictions since they give back in return many advantages in terms of simplicity: for instance, super becomes trivial, since each class has a single superclass, whereas we all know that the current super in Python is very far from trivial. The magic of include Since the fundamental properties of TOS classes must be preserved under inheritance (i.e. the son of a TOS class must be a TOS class), the implementation necessarily requires metaclasses. As of now, the only fundamental property of a TOS class is that multiple inheritance is forbidden, so usually (but not always) TOS classes are instances of the metaclass MetaTOS, which implements a single inheritance check. If you build your TOS hierarchy starting from pre-existing classes, you should be aware of how include determines the metaclass: if your base class was an old-style class or a plain new-style class (i.e. a direct instance of the type metaclass), then include will change it to MetaTOS: >>> type(TOSWidget) <class 'strait.MetaTOS'> In general you may need to build your Trait Based Framework on top of pre-existing classes possessing a nontrivial metaclass, for instance Zope classes; in that case include is smart enough to figure out the right metaclass to use.
Here is an example: class AddGreetings(type): "A metaclass adding a 'greetings' attribute for exemplification purposes" def __new__(mcl, name, bases, dic): dic['greetings'] = 'hello!' return super(AddGreetings, mcl).__new__(mcl, name, bases, dic) class WidgetWithGreetings(BaseWidget, object): __metaclass__ = AddGreetings class PackWidget(WidgetWithGreetings): __metaclass__ = include(Pack) include automatically generates the right metaclass as a subclass of AddGreetings: >>> print type(PackWidget).__mro__ (<class 'strait._TOSAddGreetings'>, <class '__main__.AddGreetings'>, <type 'type'>, <type 'object'>) Incidentally, since TOS classes are guaranteed to be in a straight hierarchy, include is able to neatly avoid the dreaded metaclass conflict. The important point is that _TOSAddGreetings provides the same features as MetaTOS, even if it is not a subclass of it; on the other hand, _TOSAddGreetings is a subclass of AddGreetings which calls AddGreetings.__new__, so the features provided by AddGreetings are not lost either; in this example you may check that the greetings attribute is correctly set: >>> PackWidget.greetings 'hello!' The name of the generated metaclass is automatically derived from the name of the base metaclass; moreover, a register of the generated metaclasses is kept, so that metaclasses are reused if possible. If you want to understand the details, you are welcome to take a look at the implementation, which is pretty short and simple compared to the general recipe to remove the metaclass conflict in a true multiple inheritance situation. Cooperative traits At first sight, the Trait Object System lacks an important feature of multiple inheritance as implemented in the ordinary Python object system, i.e. cooperative methods. Consider for instance the following classes: class LogOnInitMI(object): def __init__(self, *args, **kw): print 'Initializing %s' % self super(LogOnInitMI, self).__init__(*args, **kw) class RegisterOnInitMI(object): register = [] def __init__(self, *args, **kw): print 'Registering %s' % self self.register.append(self) super(RegisterOnInitMI, self).__init__(*args, **kw) In multiple inheritance LogOnInitMI can be mixed with other classes, giving its children the ability to log on initialization; the same is true for RegisterOnInitMI, which gives its children the ability to populate a registry of instances. The important feature of the multiple inheritance system is that LogOnInitMI and RegisterOnInitMI play well together: if you inherit from both of them, you get both features: class C_MI(LogOnInitMI, RegisterOnInitMI): pass >>> c = C_MI() Initializing <__main__.C_MI object at 0x...> Registering <__main__.C_MI object at 0x...> You cannot get the same behaviour if you use the trait object system naively: >>> class C_MI(object): ... __metaclass__ = include(LogOnInitMI, RegisterOnInitMI) ... Traceback (most recent call last): ... OverridingError: LogOnInitMI overrides names in RegisterOnInitMI: {__init__} This is a feature, of course, since the trait object system is designed to avoid name clashes. However, the situation is worse than that: even if you try to mix in a single class you will run into trouble: >>> class C_MI(object): ... __metaclass__ = include(LogOnInitMI) >>> c = C_MI() Traceback (most recent call last): ... TypeError: super(type, obj): obj must be an instance or subtype of type What's happening here?
The situation is clear if you notice that the super call is actually a call of the kind super(LogOnInitMI, c) where c is an instance of C, which is not a subclass of LogOnInitMI. That explains the error message, but does not explain how to solve the issue. It seems that method cooperation using super is impossible for TOS classes. Actually this is not the case: single inheritance cooperation is possible and it is enough, as we will show in a minute. But for the moment let me note that I do not think cooperative methods are necessarily a good idea. They are fragile and cause all of your classes to be strictly coupled. My usual advice is that you should not use a design based on method cooperation if you can avoid it. Having said that, there are situations (very rare) where you really want method cooperation. The strait module provides support for those situations via the __super attribute. Let me explain how it works. When you mix in a trait T into a class C, include adds an attribute _T__super to C, which is a super object that dispatches to the attributes of the superclass of C. The important thing to keep in mind is that there is a well-defined superclass, since the trait object system uses single inheritance only. Since the hierarchy is straight, the cooperation mechanism is much simpler to understand than in multiple inheritance. Here is an example. First of all, let me rewrite LogOnInit and RegisterOnInit to use __super instead of super:

class LogOnInit(object):
    def __init__(self, *args, **kw):
        print 'Initializing %s' % self
        self.__super.__init__(*args, **kw)

class RegisterOnInit(object):
    register = []
    def __init__(self, *args, **kw):
        print 'Registering %s' % self
        self.register.append(self)
        self.__super.__init__(*args, **kw)

Now you can include the RegisterOnInit functionality as follows:

class C_Register(object):
    __metaclass__ = include(RegisterOnInit)

>>> _ = C_Register()
Registering <__main__.C_Register object at 0x...>

Everything works because include has added the right attribute:

>>> C_Register._RegisterOnInit__super
<super: <class 'C_Register'>, <C_Register object>>

Moreover, you can also include the LogOnInit functionality:

class C_LogAndRegister(C_Register):
    __metaclass__ = include(LogOnInit)

>>> _ = C_LogAndRegister()
Initializing <__main__.C_LogAndRegister object at 0x...>
Registering <__main__.C_LogAndRegister object at 0x...>

As you see, the cooperation mechanism works just fine. I will call a class intended for inclusion in other classes and making use of the __super trick a cooperative trait. A class using the regular super directly cannot be used as a cooperative trait, since it must satisfy inheritance constraints; nevertheless, it is easy enough to convert it to use __super. After all, the strait module is intended for framework writers, so it assumes you can change the source code of your framework if you want. On the other hand, if you are trying to re-use a mixin class coming from a third party framework and using super, you will have to rewrite parts of it. That is unfortunate, but I cannot perform miracles. You may see __super as a clever hack to use super indirectly. Notice that since the hierarchy is straight, there is room for optimization at the core language level. The __super trick as implemented in pure Python leverages the name mangling mechanism, and follows closely the famous autosuper recipe, with some improvement. Anyway, if you have two traits with the same name, you will run into trouble.
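To see what the trick looks like, here is a minimal sketch of the autosuper idea, for illustration only (it is not the actual strait source, and the names Greeter, Base and C are invented):

class Greeter(object):
    def greet(self):
        print 'hello from the trait'
        # the compiler mangles __super into _Greeter__super here
        self.__super.greet()

class Base(object):
    def greet(self):
        print 'hello from the base'

class C(Base):
    pass

# roughly what include does: copy the method over and attach an unbound
# super object under the mangled name; the descriptor protocol turns an
# instance lookup of self._Greeter__super into super(C, self)
C.greet = Greeter.greet.im_func
C._Greeter__super = super(C)

>>> C().greet()
hello from the trait
hello from the base

Since the mangled attribute embeds the trait's name, two different traits that happen to share a name would both read the same _Greeter__super slot, which is exactly the name-clash trouble just mentioned.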
To solve this and to have a nicer syntax, one would need more support from the language, but the __super trick is good enough for a prototype and has the serious advantage of working right now for current Python.

Cooperation at the metaclass level

In my experience, the cases where you need method cooperation in multiple inheritance situations are exceedingly rare, unless you are a language implementor or a designer of very advanced frameworks. In such a realm you have a need for cooperative methods; it is not a pressing need, in the sense that you can always live without them, but they are a nice feature to have if you care about elegance and extensibility. For instance, as P. J. Eby points out in this thread on python-dev:

A major use case for co-operative super() is in the implementation of metaclasses. The __init__ and __new__ signatures are fixed, multiple inheritance is possible, and co-operativeness is a must (as the base class methods must be called). I'm hard-pressed to think of a metaclass constructor or initializer that I've written in the last half-decade or more where I didn't use super() to make it co-operative. That, IMO, is a compelling use case even if there were not a single other example of the need for super.

I have always felt the same. So, even if I have been unhappy with multiple inheritance for years, I could never dismiss it entirely because of the concern for this use case. It is only after discovering cooperative traits that I felt the approach powerful enough to replace multiple inheritance without losing anything I cared about. Multiple inheritance at the metaclass level comes up again and again when you are wearing the language implementor hat. For instance, if you try to implement an object system based on traits, you will have to do so at the metaclass level, and there method cooperation has its place. In particular, if you look at the source code of the strait module - which is around 100 lines, a tribute to the power of Python - you will see that the MetaTOS metaclass is implemented as a cooperative trait, so that it can be mixed in with other metaclasses, in case you are interoperating with a framework with a non-trivial meta object protocol. This is performed internally by include. Metaclass cooperation is there to make the life of the users easier. Suppose one of you, users of the strait module, wants to enhance the include mechanism using a metaclass coming from a third party framework and therefore not inheriting from MetaTOS:

class ThirdPartyMeta(type):
    def __new__(mcl, name, bases, dic):
        print 'Using ThirdPartyMeta to create %s' % name
        return super(ThirdPartyMeta, mcl).__new__(mcl, name, bases, dic)

The way to go is simple. First, you should mix MetaTOS into the third party class:

class EnhancedMetaTOS(ThirdPartyMeta):
    __metaclass__ = include(MetaTOS)

Then, you can define your own enhanced include as follows:

def enhanced_include(*traits):
    return include(MetaTOS=EnhancedMetaTOS, *traits)

In simple cases using ThirdPartyMeta directly may work, but I strongly recommend replacing the call to super with __super even in ThirdPartyMeta, to make the cooperation robust.

Discussion of some design decisions and future work

The decision of having TOS classes which are not instances of MetaTOS required some thought. That was my original idea in version 0.1 of strait; however in version 0.2 I wanted to see what would happen if I made all TOS classes instances of MetaTOS.
That implied that if your original class had a nontrivial metaclass, then the TOS class had to inherit both from the original metaclass and MetaTOS, i.e. multiple inheritance and cooperation of methods was required at the metaclass level. I did not like it, since I was arguing that you can do everything without multiple inheritance; moreover, using multiple inheritance at the metaclass level meant that one had to solve the metaclass conflict in a general way. I did so, by using my own cookbook recipe, and all my tests passed. Nevertheless, in the end, in version 0.3 I decided to go back to the original design. The metaclass conflict recipe is too complex, and I see it as a code smell - if the implementation is hard to explain, it's a bad idea - just another indication that multiple inheritance is bad. In the original design it is possible to add the features of MetaTOS to the original metaclass by subclassing it with single inheritance, thus avoiding the conflict. The price to pay is that a TOS class is no longer an instance of MetaTOS, but this is a non-issue: the important thing is that TOS classes perform the dispatch on their traits as MetaTOS would dictate. Moreover, starting from Python 2.6, thanks to Abstract Base Classes, you may satisfy the isinstance(obj, cls) check even if obj is not an instance of cls, by registering a suitable base class (similarly for issubclass). In our situation, that means that it is enough to register MetaTOS as a base class of the original metaclass. Version 0.4 was much more complex than the current version (still short, it was under 300 lines of pure Python), since it had the more ambitious goal of solving the namespace pollution problem. I have discussed the issue elsewhere: if you keep injecting methods into a class (either directly or via inheritance) you may end up having hundreds of methods flattened at the same level. A picture is worth a thousand words, so have a look at the PloneSite hierarchy if you want to understand the horror I wanted to avoid with traits (the picture shows the number of nonspecial attributes defined per class in square brackets): in the Plone Site hierarchy there are 38 classes, 88 overridden names, 42 special names, 648 non-special attributes and methods. It is a nightmare. Originally I wanted to prevent this kind of abuse, but that made my implementation more complex, whereas my main goal was to keep the implementation simple. As a consequence, this version takes the prosaic attitude that you cannot stop programmers from bad design anyway, so if they want to go the Zope way they can. In previous versions I did provide some syntactic sugar for include so that it was possible to write something like the following (using a trick discussed here):

class C(Base):
    include(Trait1, Trait2)

In version 0.5 I decided to remove this feature. Now the plumbing (i.e. the __metaclass__ hook) is exposed to the user, some magic has been removed, and it is easier for the user to write her own include factory if she wants to.

Where to go from here?

For the moment, I have no clear idea about the future. The Smalltalk implementation of traits provides method renaming out of the box. The Python implementation has no facilities in this sense. In the future I may decide to give some support for renaming, or I may not. At present you can just rename your methods by hand.
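For instance, renaming by hand can be as simple as building a fresh trait class that exposes the method under a new name. A small sketch, reusing the BaseWidget and Pack names from the earlier examples (the pack_widget name is invented for the example):

class PackRenamed(object):
    "The Pack functionality, exposed as pack_widget to avoid a clash"
    pack_widget = Pack.pack.im_func  # im_func extracts the plain function

class RenamedWidget(BaseWidget, object):
    __metaclass__ = include(PackRenamed)

This is crude but explicit; declarative renaming support, if it ever arrives, would presumably just automate this kind of rewrapping.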
Also, in the future I may decide to add some kind of adaptation mechanism, or I may not: after all, the primary goal of this implementation is simplicity and I don't want to clutter it with too many features. I am very open to feedback and criticism: I am releasing this module with the hope that it will be used in real life situations to gather experience with the traits concept. Clearly I am not proposing that Python should remove multiple inheritance in favor of traits: considerations of backward compatibility would kill the proposal right from the start. I am just looking for a few adventurous volunteers wanting to experiment with traits; if the experiment goes well, and people start using (multiple) inheritance less than they do now, I will be happy.

Trivia

strait officially stands for Simple Trait object system; however, the name is also a pun on the word "straight", since the difference between multiple inheritance hierarchies and TOS hierarchies is that TOS hierarchies are straight. Moreover, nobody will stop you from thinking that the s also stands for Simionato ;)

- Author: Michele Simionato <michele simionato at gmail com>
- License: BSD License
- Platform: any
- Categories
- Package Index Owner: micheles
- DOAP record: strait-0.5.1.xml
http://pypi.python.org/pypi/strait/0.5.1
crawl-002
en
refinedweb
app-backup/boxbackup-0.10 compiled fine with gcc 3.4.6. After migrating to gcc 4.1.1, however, the emerge fails with the following (I've just shown the tail of the build):

----
g++ -DNDEBUG -O2 -Wall -I../../lib/common -I../../lib/compress -I../../lib/crypto -I../../lib/server -I../../lib/backupclient -DBOX_VERSION="\"0.10\"" -mmmx -march=pentium2 -fomit-frame-pointer -pipe -Wall -c BackupQueries.cpp -o ../../release/bin/bbackupquery/BackupQueries.o
BackupQueries.cpp: In member function 'void BackupQueries::CommandGetObject(const std::vector<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&, const bool*)':
BackupQueries.cpp:818: error: 'LLONG_MIN' was not declared in this scope
BackupQueries.cpp:818: error: 'LLONG_MAX' was not declared in this scope
BackupQueries.cpp: In member function 'void BackupQueries::CommandGet(const std::vector<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&, const bool*)':
BackupQueries.cpp:904: error: 'LLONG_MIN' was not declared in this scope
BackupQueries.cpp:904: error: 'LLONG_MAX' was not declared in this scope
BackupQueries.cpp: In member function 'void BackupQueries::CommandRestore(const std::vector<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&, const bool*)':
BackupQueries.cpp:1697: error: 'LLONG_MIN' was not declared in this scope
BackupQueries.cpp:1697: error: 'LLONG_MAX' was not declared in this scope
make[1]: *** [../../release/bin/bbackupquery/BackupQueries.o] Error 1
make[1]: Leaving directory `/var/tmp/portage/boxbackup-0.10/work/boxbackup-0.10/bin/bbackupquery'
make: *** [parcels/boxbackup-0.10-backup-client-linux-gnu.tgz] Error 2
----

Tracking this down, I believe the root cause is an error during configure, which is designed to determine whether the symbols LLONG_MIN and LLONG_MAX are provided by system include files or whether its own definitions need to be provided. Looking at the resulting config.log (around line 2533 - search for LLONG_MAX), it's because the configure-compiled test C program is returning "unknown unknown" for the LLONG_MIN/LLONG_MAX determined values. Looking at the test program that is shown in the config.log, and separately compiling and debugging it, the problem appears to be with the following piece of code:

/* Sanity check */
if (llmin + 1 < llmin || llmin - 1 < llmin || llmax + 1 > llmax || llmax - 1 > llmax) {
    fprintf(f, "unknown unknown\n");
    exit(2);
}

On gcc 3.4.6 this passes OK, but on gcc 4.1.1 the check 'fails' and causes it to exit with a status of 2 (which ultimately causes configure to build an incorrect configuration header file for the compilation stage). It seems this check falls foul of some change in gcc 4.1.1 - there aren't any compilation warnings that give any clues, unfortunately. I'm in touch with the box backup development list so can probably pass any fixes back upstream, but I'm unsure as to what the fix might be in this case.
Forgot to mention, here's my "emerge --info":

Gentoo Base System version 1.6.14
Portage 2.0.54-r2 (default-linux/x86/2006.0, gcc-4.1.1, glibc-2.3.6-r3, 2.6.16-gentoo-r7 i686)
=================================================================
System uname: 2.6.16-gentoo-r7 i686 Mobile Pentium II2
sys-devel/libtool: 1.5.22
virtual/os-headers: 2.6.11-r2
ACCEPT_KEYWORDS="x86"
AUTOCLEAN="yes"
CBUILD="i386-pc-linux-gnu"
CFLAGS="-mmmx -O2 -march=pentium2 -fomit-frame-pointer -pipe"
CHOST="i386-pc-linux-gnu"
CONFIG_PROTECT="/etc"
CONFIG_PROTECT_MASK="/etc/gconf /etc/terminfo /etc/env.d"
CXXFLAGS="-mmmx -O2 -march=pentium2 -fomit-frame-pointer -pipe"
alsa apache2 bash-completion berkdb bzip2 cli crypt dri eds esd ethereal expat fbcon gdbm gmp gpm gstreamer ipv6 isdnlog jpeg libwww mmx mp3 ncurses nls nptl ogg pam pcmcia pcre perl png pppd python readline reflection session spl ssl symlink tcpd udev usb vorbis xml xorg zlib userland_GNU kernel_linux elibc_glibc"
Unset: CTARGET, INSTALL_MASK, LANG, LC_ALL, LDFLAGS, LINGUAS, PORTAGE_RSYNC_EXTRA_OPTS, PORTAGE_RSYNC_OPTS

Remove -O2 from your CFLAGS and try again. gcc-4.1.x seems to be a bit too aggressive when using optimisation (and no debugging symbols) for more than just this package.

Created an attachment (id=89263): Test C program to illustrate where the configure step fails

Thanks for the suggestion - I tried that but with the same effect (the effect is the same with all of the compiler flags removed as well, so that doesn't seem to be the reason). I've put some instrumentation into the 'sanity check' within that configure step in a separate test C program which I have attached. Comparing the output with the two different compiler versions gives the following:

gcc 4.1.1 (this is the one the build fails on):
compiled with: gcc -mmm
sanity fail: (llmin - 1 < llmin)
sanity fail: (llmax + 1 > llmax)
sanity check failed

gcc-3.4.6-r1 (with this compiler the build works):
compiled with: gcc -mmmx -O

In both cases the 'maths works' with the llmin/llmax values, in that the wrapping occurs as expected; the difference is the behaviour of the comparison. I tried substituting the llmin/llmax variables for the constants shown above in the comparison, with the same effect. I suspect that, even without optimisation, the compiler is being somewhat clever - clearly (X-1<X) is always TRUE... if you ignore wrapping. I'm going to see if I can peek into the generated code from the compiler (never done that before, so not sure how I'll go about it yet), but I thought I'd document my findings here anyway.
I think either gcc is being overzealous with optimisations here (not sure whether ISO C semantics state one way or the other on this), or whether the sanity check makes too many assumptions about what optimisations the compiler *won't* do. Stuart, Thanks for your *very* extensive and exhaustive research into this issue. Really appreciated. I realise my comment was way too simplistic for the problem you found. I'm far from a compiler expert, so I'm affraid I cannot help you in any way with this issue. If there is a patch that gets around this problem, I'm happy to apply it. I think this is a problem that should be discussed at GCC bugzilla or mailing lists to figure out what those folks think about this issue, if they haven't dealt with it already. Based on that, I think either the boxbackup code has to be changed, or GNU should fix their compiler. If GCC porting has another standpoint to this, feel free to take over the bug, I'm clearly not of any use in this issue :) I've been pursuing this with the box backup list and I think what I'll end up with is a patch that I'll get rolled into the next upstream box backup version (beyond 0.10). There may or may not be a compiler issue here, but it seems the easiest route will be to avoid provoking the problem. I'll work that patch into an updated app-backup/box-backup ebuild and record that here, though, so hopefully we could get that applied as a app-backup/box-backup-0.10-r1 bump. Anyway, await a patch that I'll attach here when I'm done. OK, I've written a patch and supplied that to the upstream Box Backup project and they have accepted it into their codebase (r626 in their Subversion repository, for reference). The patch removes the reliance on the LLONG_MAX and LLONG_MIN symbols and the associated configure test that tries to programmatically determine the values if the symbols aren't found (it was this determination that fell foul of a GCC 4.1 optimisation). The STL mechanism std::numeric_limits<long long>::min/max is now used instead. I have written an ebuild bump that includes this patch in addition to the Gentoo patch that is already applied (it seemed sensible to keep them separate as this patch will fall away for the next upstream version while the Gentoo one will probably remain). The new ebuild also tidies the post-install instructions displayed and includes a few notes reminding upgraders that 0.10 is not compatible with 0.09 (so clients and servers must be upgraded together). Finally, the patch bumps the version reported by the tools to "0.10-r1". I hope this updated ebuild+patch can be included in Portage. I believe that without it or an equivalent patch Box Backup people will have trouble if they upgrade to GCC 4.1. If there are any questions then just let me know. ebuild and patch file to follow on this bug. Created an attachment (id=89556) [edit] Bumped boxbackup ebuild (boxbackup-0.10-r1.ebuild) Updated ebuild - note that the "boxbackup-0.10-gentoo.patch" it refers to is the one currently in Portage for the 0.10 ebuild. Created an attachment (id=89557) [edit] Patch to allow the build to compile with GCC 4.1 (boxbackup-0.10-noll.patch) This patch allows Box Backup compilation to succeed with GCC 4.1. I have tested it with the following GCC versions: 3.3.6 3.4.6 4.1.1 For all the above compiler tests I was using glibc-2.3.6. Stuart, thanks for your ebuild. 
A few remarks:
- I removed the ewarn; I felt the info there is too superfluous and unimportant to warrant a large ewarn block: that a client can't talk with the server we find out ourselves through the logs, and that the server can read the old store is not a warning at all.
- I removed the version bump from the patch, as that is a Gentoo revision number, not an upstream version bump. Gentoo revisions use the same sources, so I don't like to touch the version number.
- I see no need for a version bump; the patch changes nothing for users that already could compile, and those that couldn't compile now just can compile. No need to force a recompile for <GCC4.1 users, to me.

Compiles on my Mac with GCC4 :) Many thanks for the patches! In portage now.

That's excellent. Thanks for committing it and explaining the reasons for your changes - I'll remember those rules for the future. I'll try the updated version when it's trickled into the mirrors, then close this bug.

Yep, updated portage now confirmed as compiling with GCC 4.1.1. Many thanks. I'm verifying and closing the bug. Closing.
http://bugs.gentoo.org/136300
crawl-002
en
refinedweb
Hi all, my button won't redirect to the location in my account section. I have been trying for the past 24 hours and I just can't find a way. I am not sure if the button is functioning at all. If you want to take a look: upon login, go to profile and the button will be there, "createProfileButton". Please help, it's driving me crazy. Thank you; the code is posted below, one for the profile page and the other for the sign-in page.

This is the code I have for the my-profile page (I followed a video):

import wixData from 'wix-data';

$w.onReady(function () {
    $w("#updateProfile").onReady(() => {
        let isEmpty = $w("#updateProfile").getCurrentItem();
        let theName = $w("#firstName").value;
        if (isEmpty === null) {
            $w("#createProfileButton").show();
            $w("#updateProfileButton").hide();
            $w("#profileArea").collapse();
            $w("#welcomeUser").text = `Welcome New User. Please create a profile!`;
        } else {
            $w("#updateProfileButton").show();
            $w("#welcomeUser").text = `Welcome ${theName}`;
            $w("#profileArea").expand();
        }
    });

    $w("#updateProfile").onAfterSave(() => {
        let theName = $w("#firstName").value;
        $w("#welcomeUser").text = "Update Completed!";
        const millisecondsToDelay = 2500;
        setTimeout(() => {
            $w("#welcomeUser").text = `Welcome ${theName}`;
        }, millisecondsToDelay);
    });
});

This is the code for the sign-up page:

import wixUsers from 'wix-users';
import wixData from 'wix-data';
import wixLocation from 'wix-location';

$w.onReady(function () {
    let user = wixUsers.currentUser;
    let userId = user.id; // "r5cme-6fem-485j-djre-4844c49"
    let isLoggedIn = user.loggedIn; // true

    user.getEmail()
        .then((email) => {
            let userEmail = email;
            $w("#emailText").value = userEmail;
            let referralCode = userId.split("-")[4];
        });

    $w("#createProfileButton").onClick(() => {
        let toInsert = {
            "firstName": $w("#firstName").value,
            "lastName": $w("#lastName").value,
            "emailAddress": $w("#emailText").value,
        };

        wixData.insert("Members", toInsert)
            .then(() => {
                wixLocation.to("/account/my-profile");
            })
            .catch((err) => {
                let errorMsg = err;
                $w("#error").show();
            });
    });
});

Hey Craig, the Create Profile button isn't connected to anything - so it seems that this button in fact is not functioning. You probably should connect it to "Submit". I hope this helps, Yisrael

Hi Yisrael, and thanks for the reply. When I add the Submit, my button blanks out on the live site. The create profile button is supposed to redirect me to the update profile page; there is then a button there which is connected to the submit function.

This button is not connected to Submit, and it has no action associated with it. So, clicking on this button does just what it was told to do - nothing.

haha I hear you, but I took Submit off as on the live site it greys the button out. I added Submit back if you want to see, and see if you can make sense of it? The action is specified in the code?

I don't see that code on the page. You'll need to provide some sort of action to that button. You can take a look at the article How to Create Custom Member Profile Pages with Wix Code for ideas on how to do this.

I did create the function, which is $w.onReady(function () {, so it is connected that way, but it just does not work, so I am missing something somewhere.
https://www.wix.com/corvid/forum/community-discussion/button-not-working-with-wixlocation-to
CC-MAIN-2019-47
en
refinedweb
marble

#include <RouteSyncManager.h>

Detailed Description
Definition at line 23 of file RouteSyncManager.h.

Constructor & Destructor Documentation
Definition at line 66 of file RouteSyncManager.cpp.
Definition at line 78 of file RouteSyncManager.cpp.

Member Function Documentation

Gathers data from the local cache directory and returns a route list.
- Returns - Routes stored in the local cache
Definition at line 134 of file RouteSyncManager.cpp.

Deletes a route from the cloud.
- Parameters -
Definition at line 231 of file RouteSyncManager.cpp.

Starts the download of the specified route.
- Parameters -
- See also - RouteSyncManager::saveDownloadedToCache()
Definition at line 214 of file RouteSyncManager.cpp.

Generates a timestamp which will be used as a unique identifier.
- Returns - A timestamp.
Definition at line 106 of file RouteSyncManager.cpp.

Checks if the user enabled route synchronization.
- Returns - true if route synchronization is enabled
Definition at line 88 of file RouteSyncManager.cpp.

Returns the CloudRouteModel associated with the RouteSyncManager instance.
- Returns - CloudRouteModel associated with the RouteSyncManager instance
Definition at line 101 of file RouteSyncManager.cpp.

Opens a route.
- Parameters -
Definition at line 219 of file RouteSyncManager.cpp.

Starts preparing a route list by downloading a list of the routes on the cloud and adding the ones in the local cache.
Definition at line 196 of file RouteSyncManager.cpp.

Removes a route from the cache.
- Parameters -
Definition at line 236 of file RouteSyncManager.cpp.

Saves the route displayed in Marble's routing widget to the local cache directory. Uses the RoutingManager passed as a parameter to the constructor.
- Returns - Filename of the saved file.
Definition at line 112 of file RouteSyncManager.cpp.

Setter for enabling/disabling route synchronization.
- Parameters -
Definition at line 93 of file RouteSyncManager.cpp.
Definition at line 83 of file RouteSyncManager.cpp.

Updates the upload progress bar.
- Parameters -
Definition at line 241 of file RouteSyncManager.cpp.

Uploads the currently displayed route to the cloud. Initiates the necessary methods of the backends. Note that this also runs the saveDisplayedToCache() method.
Definition at line 127 of file RouteSyncManager.cpp.

Uploads the route with the given timestamp.
- Parameters -
Definition at line 189 of file RouteSyncManager.cpp.

Property Documentation
Definition at line 27 of file RouteSyncManager.h.
https://api.kde.org/4.14-api/kdeedu-apidocs/marble/html/classMarble_1_1RouteSyncManager.html
CC-MAIN-2019-47
en
refinedweb
Returns a scala.concurrent.Future that will be completed with success (value true) when existing messages of the target actor have been processed and the actor has been terminated. Useful when you need to wait for termination or compose ordered termination of several actors, which should only be done outside of the ActorSystem, as blocking inside Actors is discouraged.

IMPORTANT NOTICE: the actor being terminated and its supervisor being informed of the availability of the deceased actor's name are two distinct operations, which do not obey any reliable ordering. Especially the following will NOT work:

def receive = {
  case msg =>
    Await.result(gracefulStop(someChild, timeout), timeout)
    context.actorOf(Props(...), "someChild") // assuming that that was someChild's name, this will NOT work
}

If the target actor isn't terminated within the timeout, the scala.concurrent.Future is completed with failure akka.pattern.AskTimeoutException.

If you want to invoke specialized stopping logic on your target actor instead of PoisonPill, you can pass your stop command as a parameter:

gracefulStop(someChild, timeout, MyStopGracefullyMessage).onComplete {
  // Do something after someChild being stopped
}
https://doc.akka.io/api/akka/2.2.3/akka/pattern/GracefulStopSupport.html
CC-MAIN-2019-47
en
refinedweb
Python Data Structures and Algorithms: Quick sort

Python Search and Sorting: Exercise-9 with Solution

Write a Python program to sort a list of elements using the quick sort algorithm.

Note: According to Wikipedia, "Quicksort is a comparison sort, meaning that it can sort items of any type for which a "less-than" relation (formally, a total order) is defined. In efficient implementations it is not a stable sort, meaning that the relative order of equal sort items is not preserved. Quicksort can operate in-place on an array, requiring small additional amounts of memory to perform the sorting."

Sample Solution:

Python Code:

def quickSort(data_list):
    quickSortHlp(data_list, 0, len(data_list) - 1)

def quickSortHlp(data_list, first, last):
    if first < last:
        splitpoint = partition(data_list, first, last)
        quickSortHlp(data_list, first, splitpoint - 1)
        quickSortHlp(data_list, splitpoint + 1, last)

def partition(data_list, first, last):
    pivotvalue = data_list[first]
    leftmark = first + 1
    rightmark = last
    done = False
    while not done:
        # advance the left mark past items that belong left of the pivot
        while leftmark <= rightmark and data_list[leftmark] <= pivotvalue:
            leftmark = leftmark + 1
        # move the right mark past items that belong right of the pivot
        while data_list[rightmark] >= pivotvalue and rightmark >= leftmark:
            rightmark = rightmark - 1
        if rightmark < leftmark:
            done = True
        else:
            # swap the two out-of-place items
            temp = data_list[leftmark]
            data_list[leftmark] = data_list[rightmark]
            data_list[rightmark] = temp
    # put the pivot into its final position
    temp = data_list[first]
    data_list[first] = data_list[rightmark]
    data_list[rightmark] = temp
    return rightmark

data_list = [54, 26, 93, 17, 77, 31, 44, 55, 20]
quickSort(data_list)
print(data_list)

Sample Output:

[17, 20, 26, 31, 44, 54, 55, 77, 93]
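For contrast, here is a shorter variant that allocates new lists instead of partitioning in place. This is an addition to the exercise, not part of the original sample solution; it is easier to read but uses more memory than the in-place version above:

def quick_sort(items):
    # base case: a list of zero or one items is already sorted
    if len(items) <= 1:
        return items
    pivot = items[0]
    smaller = [x for x in items[1:] if x <= pivot]
    larger = [x for x in items[1:] if x > pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([54, 26, 93, 17, 77, 31, 44, 55, 20]))
# [17, 20, 26, 31, 44, 54, 55, 77, 93]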
https://www.w3resource.com/python-exercises/data-structures-and-algorithms/python-search-and-sorting-exercise-9.php
CC-MAIN-2019-47
en
refinedweb
marble

#include <EclipsesBrowserDialog.h>

Detailed Description
The eclipse browser dialog. This implements the logic for the eclipse browser dialog.
Definition at line 33 of file EclipsesBrowserDialog.h.

Constructor & Destructor Documentation
Definition at line 24 of file EclipsesBrowserDialog.cpp.
Definition at line 32 of file EclipsesBrowserDialog.cpp.

Member Function Documentation

Accept the dialog. This emits the buttonShowClicked signal.
- See also - buttonShowClicked
Definition at line 60 of file EclipsesBrowserDialog.cpp.

This signal is emitted when the user clicks the "show" button.
- Parameters -

Initialize the object.
Definition at line 85 of file EclipsesBrowserDialog.cpp.

Set whether or not to list lunar eclipses.
- Parameters -
- See also - withLunarEclipses
Definition at line 47 of file EclipsesBrowserDialog.cpp.

Set the year. This sets the year the browser currently shows eclipses for.
Definition at line 37 of file EclipsesBrowserDialog.cpp.

Update the dialog's button states. Disable/enable the show button according to the current selection.
Definition at line 79 of file EclipsesBrowserDialog.cpp.

Update the list of eclipses for the given year.
- Parameters -
Definition at line 73 of file EclipsesBrowserDialog.cpp.

Returns whether or not lunar eclipses are listed.
- Returns - Whether or not lunar eclipses are listed
- See also - setWithLunarEclipses
Definition at line 55 of file EclipsesBrowserDialog.cpp.

Return the year the browser is set to.
- Returns - The year the browser shows eclipses for at the moment
Definition at line 42 of file EclipsesBrowserDialog.cpp.
https://api.kde.org/4.14-api/kdeedu-apidocs/marble/html/classMarble_1_1EclipsesBrowserDialog.html
CC-MAIN-2019-47
en
refinedweb
A Serverless plugin to easily add CloudWatch alarms to functions.

npm i serverless-plugin-aws-alerts

service: your-service
provider:
  name: aws
  runtime: nodejs4.3

custom:
  alerts:
    stages: # Optionally - select which stages to deploy alarms to
      - production
      - staging
    dashboards: true
    nameTemplate: $[functionName]-$[metricName]-Alarm # Optionally - naming template for alarms, can be overwritten in definitions
    topics:
      ok: ${self:service}-${opt:stage}-alerts-ok
      alarm: ${self:service}-${opt:stage}-alerts-alarm
      insufficientData: ${self:service}-${opt:stage}-alerts-insufficientData
    definitions: # these defaults are merged with your definitions
      functionErrors:
        period: 300 # override period
      customAlarm:
        description: 'My custom alarm'
        namespace: 'AWS/Lambda'
        nameTemplate: $[functionName]-Duration-IMPORTANT-Alarm # Optionally - naming template for the alarms, overwrites globally defined one
        metric: duration
        threshold: 200
        statistic: Average
        period: 300
        evaluationPeriods: 1
        datapointsToAlarm: 1
        comparisonOperator: GreaterThanOrEqualToThreshold

You can define several topics for alarms. For example, you may want topics for critical alarms reaching your PagerDuty, and different topics for noncritical alarms, which just send you emails. In each alarm definition you have to specify which topics you want to use. In the following example you get an email for each function error; PagerDuty gets an alarm only if there are more than 20 errors in 60s:

custom:
  alerts:
    topics:
      critical:
        ok:
          topic: ${self:service}-${opt:stage}-critical-alerts-ok
          notifications:
            - protocol: https
              endpoint:
        alarm:
          topic: ${self:service}-${opt:stage}-critical-alerts-alarm
          notifications:
            - protocol: https
              endpoint:
      nonCritical:
        alarm:
          topic: ${self:service}-${opt:stage}-nonCritical-alerts-alarm
          notifications:
            - protocol: email
              endpoint: [email protected]
    definitions: # these defaults are merged with your definitions
      criticalFunctionErrors:
        namespace: 'AWS/Lambda'
        metric: Errors
        threshold: 20
        statistic: Sum
        period: 60
        evaluationPeriods: 10
        comparisonOperator: GreaterThanOrEqualToThreshold
        okActions:
          - critical
        alarmActions:
          - critical
      nonCriticalFunctionErrors:
        namespace: 'AWS/Lambda'
        metric: Errors
        threshold: 1
        statistic: Sum
        period: 60
        evaluationPeriods: 10
        comparisonOperator: GreaterThanOrEqualToThreshold
        alarmActions:
          - nonCritical
    alarms:
      - criticalFunctionErrors
      - nonCriticalFunctionErrors

If a topic name is specified, the plugin assumes that the topic does not exist and will create it. To use existing topics, specify ARNs or use Fn::ImportValue to use a topic exported with CloudFormation.

custom:
  alerts:
    topics:
      alarm:
        topic: arn:aws:sns:${self:region}:${self::accountId}:monitoring-${opt:stage}

custom:
  alerts:
    topics:
      alarm:
        topic:
          Fn::ImportValue: ServiceMonitoring:monitoring-${opt:stage, 'dev'}

Alarms can also be tied to custom CloudWatch log metric filters; the definitions below create one named barExceptions. Any function that included this alarm would have its logs scanned for the pattern exception Bar and if found would trigger an alarm.
custom:
  alerts:
    definitions:
      barExceptions:
        metric: barExceptions
        threshold: 0
        statistic: Minimum
        period: 60
        evaluationPeriods: 1
        comparisonOperator: GreaterThanThreshold
        pattern: 'exception Bar'
      bunyanErrors:
        metric: bunyanErrors
        threshold: 0
        statistic: Sum
        period: 60
        evaluationPeriods: 1
        datapointsToAlarm: 1
        comparisonOperator: GreaterThanThreshold
        pattern: '{$.level > 40}'

Note: For custom log metrics, the namespace property will automatically be set to the stack name (e.g. fooservice-dev).

You can define custom naming templates for the alarms. The nameTemplate property under alerts configures the naming template for all the alarms, while placing nameTemplate under an alarm definition configures (overwrites) it for that specific alarm only. The naming template provides interpolation capabilities, where the supported placeholders are:

$[functionName] - function name (e.g. helloWorld)
$[functionId] - function logical id (e.g. HelloWorldLambdaFunction)
$[metricName] - metric name (e.g. Duration)
$[metricId] - metric id (e.g. BunyanErrorsHelloWorldLambdaFunction for the log-based alarms, $[metricName] otherwise)

Note: All the alarm names are prefixed with the stack name (e.g. fooservice-dev).

The plugin provides some default definitions that you can simply drop into your application. For example:

definitions:
  functionErrors:
    namespace: 'AWS/Lambda'
    metric: Errors
    threshold: 1
    statistic: Sum
    period: 60
    evaluationPeriods: 1
    datapointsToAlarm: 1
    comparisonOperator: GreaterThanOrEqualToThreshold
    treatMissingData: missing
  functionDuration:
    namespace: 'AWS/Lambda'
    metric: Duration
    threshold: 500
    statistic: Average
    period: 60
    evaluationPeriods: 1
    comparisonOperator: GreaterThanOrEqualToThreshold
    treatMissingData: missing
  functionThrottles:
    namespace: 'AWS/Lambda'
    metric: Throttles
    threshold: 1
    statistic: Sum
    period: 60
    evaluationPeriods: 1
    datapointsToAlarm: 1
    comparisonOperator: GreaterThanOrEqualToThreshold
    treatMissingData: missing

The plugin allows users to provide custom dimensions for the alarm. Dimensions are provided in a list of key/value pairs {Name: foo, Value: bar}. The plugin will always apply the dimension {Name: FunctionName, Value: ((FunctionName))}. For example:

alarms: # merged with function alarms
  - name: fooAlarm
    namespace: 'AWS/Lambda'
    metric: errors # define custom metrics here
    threshold: 1
    statistic: Minimum
    period: 60
    evaluationPeriods: 1
    comparisonOperator: GreaterThanThreshold
    dimensions:
      - Name: foo
        Value: bar

In the generated CloudFormation template, this renders as:

'Dimensions': [
  { 'Name': 'foo', 'Value': 'bar' },
]

MIT © A Cloud Guru
https://npm.runkit.com/serverless-plugin-aws-alerts?t=1571001590722
CC-MAIN-2019-47
en
refinedweb
enriquethumar
- 95% Jobs Completed
- 100% On Budget
- 97% On Time
- 23% Repeat Hire Rate

Portfolio
- Enrique Thumar (article)
- Gambling Site
- Full customization in Pintastic script
- Coded a shop in X-Cart and a CMS in Joomla
- Coded a shopping cart in WooCommerce
- Site for golf players
- Site for game lovers
- Coded a Magento shopping cart
- Education website
- Nawbo Orlando
- Social networking plus ecommerce in Zend
- Army Wife Network

Recent Reviews

Help Improve Website Functions on Front End & Admin + Activate SSL Certificate, $175.00 USD
"The reason we always go back to Yogesh is he is good! Knows what he is doing, efficient and very responsive. I look forward to our next project for sure." Darren N., 5 months ago

BuddyPress Extension: Edit Activity Posts & Comments Extension, $333.00 USD
"I searched a lot for a BuddyPress expert & finally I found Yogesh. Outstanding experience, excellent communication, professional developer. I can't wait to work with him again. Highly recommended!" Youssef K., 7 months ago

Project for Yogesh T. -- 3, $160.00 USD
"Very good job as usual. Was flexible during the progress of the project and capable of creating multiple features on our website, iOS and Android mobile platforms." Brice L., 9 months ago

Import a second WordPress website on Mac, ₹333.00 INR
"Knowledgeable. On time. Highly recommended! [25 December, 2018] Amazing knowledge. Work finished on time and on budget. Highly recommended." Santosh A., 10 months ago

Project for Yogesh T. -- 2, $350.00 USD
"Another project well done by Yogesh T. Extremely flexible and open to feedback. Thank you!" Brice L., 10 months ago

Flash to HTML Canvas Job (Expert Only), $280.00 USD
"Great professional to work with. Finished his job on time and will hire again for future projects." Karamjeet Singh B., 11 months ago

Experience
Freelancer, Jan 2009 - Feb 2013 (4 years)
I have been working with a company as well as doing freelancing for the last 4 years.

Education
BCA, 2005 - 2008 (3 years)

Certifications
- Preferred Freelancer Program SLA 97%

Skills
- 43
- WordPress 32
- MySQL 21
- CSS 4
- e-Commerce 4
- JavaScript 3
- AJAX 2
- Graphic Design 1
- Mobile App Development 1
- Joomla 1
https://www.tr.freelancer.com/u/enriquethumar
CC-MAIN-2019-47
en
refinedweb
Testing whether all the links inside a web page are working or not is one of the most important testing scenarios. We can test this scenario very easily with Selenium. As we know, the links will be inside the HTML tag <a>, so we can use the By.tagName("a") locator and an iterator in Java to keep the process simple.

Example - Consider we want to test all the links on the homepage of a website.

Test Case - Check out the test case below and read the descriptions in the comments to understand the flow.

package selenium.tests;

import java.util.List;
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.*;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class TestAllLinks {

    public static void main(String[] args) {

        String baseUrl = "";
        System.setProperty("webdriver.chrome.driver", "C:\\Users\\chromedriver_win32\\chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        String notWorkingUrlTitle = "Under Construction: QAAutomated";
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        driver.get(baseUrl);

        // collect all anchor elements on the page
        List<WebElement> linkElements = driver.findElements(By.tagName("a"));
        String[] linkTexts = new String[linkElements.size()];
        int i = 0;

        // extract the link texts of each link element
        for (WebElement elements : linkElements) {
            linkTexts[i] = elements.getText();
            i++;
        }

        // test each link
        for (String t : linkTexts) {
            driver.findElement(By.linkText(t)).click();
            if (driver.getTitle().equals(notWorkingUrlTitle)) {
                System.out.println("\"" + t + "\"" + " is not working.");
            } else {
                System.out.println("\"" + t + "\"" + " is working.");
            }
            driver.navigate().back();
        }
        driver.quit();
    }
}

Output -
"Home" is working.
"About Me" is working.

If you find this useful, please share the post with your friends using the share options given below. You can write your feedback and suggestions in the comment section.

Reader comment: When I executed the above code I encountered this error: Exception in thread "main" org.openqa.selenium.WebDriverException: Element is not clickable at point (104.5, 30). Other element would receive the click: Command duration or timeout: 87 milliseconds
http://www.qaautomated.com/2016/10/selenium-test-to-check-links-in-web.html
CC-MAIN-2019-47
en
refinedweb
This chapter briefly explains the most popular programming models for parsing and manipulating XML data in use today. XML processing includes a diverse set of tools, which require different approaches but offer distinct advantages and disadvantages.

XML's structured and labeled text can be processed by developers in several ways. Programs can look at XML as text, as a stream of events, as a tree, or as a serialization of some other structure. Tools supporting all of these options are widely available.

At their foundation, XML documents are text. The content and markup are both represented as text, and text-editing tools can be extremely useful for XML document inspection, creation, and modification. XML's textual foundations make it possible for developers to work with XML directly, using XML-specific tools only when they choose. Despite this textual nature, however, XML presents some serious limitations for programs that attempt to process XML documents as text documents. It is possible to process extremely simple XML documents reliably using basic textual tools like regular expressions, but this becomes much more difficult as features such as attribute defaulting, entity processing, and namespaces are added to documents. Using these features is extremely difficult when treating a document purely as text.

Textual tools are a key part of the XML toolset, however. Many developers use text editors such as vi, Emacs, NotePad, WordPad, BBEdit, and UltraEdit to create or modify XML documents. Regular expressions in environments such as sed, grep, Perl, and Python can be used for search and replace or for tweaking documents prior to XML parsing or XSLT processing. These tools can also be very useful for searching and querying the information in XML documents, even without an understanding of the surrounding structure.

Textual tools may also be applied to the results of an XML parser. Regular expressions and similar text-processing tools can be applied usefully to the results of an XML parse, working on the document when its XML-specific nature has already been resolved. The W3C's XML Schema, for instance, includes regular-expression matching as one mechanism for validating data types, as discussed in Chapter 16. A smart search and replace or spell checker might process only the contents of elements (and perhaps attributes), not the markup that defines the structures.

Text-based processing can be performed in conjunction with other XML processing. Parsing and then reserializing XML documents after other processing has taken place doesn't always produce the desired results. XSLT, for instance, will remove entity references and replace them with entity content. Preserving entities requires replacing them in the original document with unique placeholders, and then replacing the placeholder as it appears in the result. With regular expressions, this is quite easy to do. Developers may also need to replace particular characters with references to images; this approach can be very useful where an obscure or nonstandard glyph is needed in XHTML.

As an XML parser reads a document, it moves from the beginning of the document to the end. It may pause to retrieve external resources for a DTD or an external entity, for instance, but it builds an understanding of the document as it moves along.
Enforcing well-formedness and validity constraints and applying namespaces requires keeping track of context; applying attribute defaults and entities requires keeping a list of appropriate content to insert; but the end result is a complete "reading" of the XML document. Event-based parsers report this reading as it happens, in a stream of events representing the information in the document. The "events" are, for example, the start of an element, the content of an element, and the end of an element. For example, given this document:

<name><given>Keith</given><family>Johnson</family></name>

an event-based parser might report events such as this:

startElement:name
startElement:given
content: Keith
endElement:given
startElement:family
content: Johnson
endElement:family
endElement:name

The list and structure of events can become much more complex as features, such as namespaces, attributes, whitespace between elements, comments, processing instructions, and entities are added, but the basic mechanism is quite simple and generally very efficient. Event-based parsers only have to keep track of a limited amount of information. They need to understand the contents of DTDs (and possibly schemas), if the documents use them, and they need to maintain context stacks for element names and namespace declarations. They don't need to build a complete record of the document as they parse it, which minimizes the amount of memory needed for the parse.

Event-based parsers require the consumer of the events to do a lot more work, however. Processing events typically means the creation of a state machine, i.e., code that understands current context and can route the information in the events to the proper consumer. Because events occur as the document is read, applications must be prepared to discard results should a fatal error occur partway through the document. Applications can't depend on information that occurs later in a document to interpret the current event, either, making it hard to use some kinds of XPaths, for instance, in an event-based environment. These factors can make it difficult to work directly with event-based parsers.

Despite the potential difficulty, event-based parsers are very useful for a wide variety of tasks. Filters can process and modify events before passing them to another processor, efficiently performing a wide range of transformations. Filters can be stacked, providing a relatively simple means of building XML processing pipelines, where the information from one processor flows directly into another. Applications that want to feed information directly from XML documents into their own internal structures may find events to be the most efficient means of doing that. Even parsers that report XML documents as complete trees, as described in the next section, typically build those trees from a stream of events.

XML documents, because of the requirements for well-formedness, describe tree structures. Documents typically contain an element that then contains text, attributes, and other elements, and these may contain elements, text, and attributes, and so on. Declarations, comments, and processing instructions enrich the mix, but all basically hold positions in the overall tree. There are a wide variety of tree models for XML documents. XPath (described in Chapter 9), used in XSLT transformations, has a slightly different set of expectations than does the Document Object Model (DOM) API, which is also different from the XML Information Set (Infoset), another W3C project.
XML Schema (described in Chapter 16 and Chapter 21) defines a Post-Schema Validation Infoset (PSVI), which has more information in it (derived from the XML Schema) than any of the others. Developers who want to manipulate documents from their programs typically use APIs that provide access to an object model representing the XML document. Tree-based APIs typically present a model of an entire document to an application once parsing has successfully concluded. Applications don't have to worry about figuring out context or dealing with rollback when an error is encountered, since the tree model and parsing already address those issues. Rather than following a stream of events, an application can just navigate a tree to find the desired pieces of a document. Browsers and editors can present or modify the tree in conformance with user or script requests, using the tree as a persistent reference to the current content of the document. Working with a tree model of a document isn't very different conceptually from working with a document as text. The entire document is always available, and moving around well-formed portions of a document or modifying them is fairly easy. The complete set of context for any given part of the document is always available. Developers can use XPath expressions to locate content and make decisions based on content anywhere in the document where APIs support XPath. (DOM Level 3 adds formal support for XPath, and various implementations provide their own support.) Tree models of documents have a few drawbacks. They can take up large chunks of memory, typically multiplying the original document's size. Navigating documents can require additional processing after the parse, as developers have more options available to them. (Tree models don't impose the same kinds of discipline as event-based processing.) Both of these issues can make it difficult to scale and share applications that rely on tree models, though they may still be appropriate where small numbers of documents or small documents are being used. Another facility available to the XML programmer is a form of the XML transformation library. The Extensible Stylesheet Language Transformation (XSLT) language, covered in Chapter 8, is the most popular tool currently available for transforming XML to HTML, XML, or any other regular language that can be expressed in XSLT. In some cases, using a transformation to perform pre- or post-processing on XML data when processing it with either DOM or SAX might be simpler or more efficient. For instance, XSLT could be used as a preprocessor for a screen-scraping application that starts from XHTML documents. A script could extract the meaningful features from the XHTML document and pour them into an application-specific XML format. Transformations may be used by themselves, in browsers, or at the command line, but many XSLT implementations and other transformation tools offer SAX or DOM interfaces, simplifying the task of using them to build pipelines. Developers who want to take advantage of XML's cross-platform benefits but have no patience for the details of markup can use various tools that rely on XML but don't require direct exposure to XML's structures. Web Services, mentioned in Chapter 15, can be seen as a move in this direction. You can still touch the XML directly if you need to, but toolkits make it easier to avoid doing so. 
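Before looking at how these models interact, it may help to see what tree-style processing looks like in practice. Here is a minimal sketch using xml.dom.minidom from Python's standard library (one DOM implementation among many; the tiny document is invented for the example):

from xml.dom.minidom import parseString

doc = parseString('<name><given>Keith</given><family>Johnson</family></name>')

# the whole tree is in memory, so it can be navigated in any order
family = doc.getElementsByTagName('family')[0]
print(family.firstChild.data)        # Johnson

# the tree can also be modified in place and reserialized
family.firstChild.data = 'Johnsson'
print(doc.documentElement.toxml())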
Such abstraction layers are generally built on top of event- or tree-based processing, presenting their own API to the underlying information. This level of abstraction may be very useful in some cases or an inefficient inconvenience in others. It's probably helpful to understand more direct connections to XML if you need to evaluate the advantages and disadvantages of abstraction, as well as provide a bridge to systems that don't support a particular abstraction layer but still need access to the information.

The SAX and DOM specifications, along with the various core XML specifications, provide a foundation for XML processing. Implementations of these standards, especially implementations of the DOM, sometimes vary from the specification. Some extensions are themselves formally specified; Scalable Vector Graphics (SVG), for instance, specifies extensions to the DOM that are specific to working with SVG. Others are just kind of tacked on, adding functionality that a programmer or vendor felt was important but wasn't in the original specification. The multiple levels and modules of the DOM have also led to developers claiming support for the DOM, but actually supporting particular subsets (or extensions) of the available specifications. Porting standards also leads to variations. SAX was developed for Java, and the core SAX project only defines a Java API. The DOM uses Interface Definition Language (IDL) to define its API, but different implementations have interpreted the IDL slightly differently. SAX2 and the DOM are somewhat portable, but moving between environments may require some unlearning and relearning. Some environments also offer libraries well outside the SAX and DOM interfaces. Perl and Python, for instance, both offer libraries that combine event and tree processing, permitting applications to work on partial trees rather than SAX events or full DOM trees. Microsoft .NET's XMLReader offers similarly flexible processing. These approaches do not make moving between environments easy, but they can be very useful.

While text, events, trees, and transformations may seem very different, it isn't unusual to combine them. Most parsers that produce DOM trees also offer the option of SAX events, and there are a number of tools that can create DOM trees from SAX events or vice versa. Some tools that accept and generate SAX events actually build internal trees; many XSLT processors operate this way, using optimized internal models for their trees rather than the generic DOM. XSLT processors themselves often accept either SAX events or DOM trees as input and can produce these models (or text) for their output. Most programmers who want direct access to XML documents start with DOM trees, which are easier to figure out initially. If they have problems that are better solved in event-based environments, they can either rewrite their code for events (a big change) or mix and match event processing with tree processing.

Non-validating parsers aren't required to retrieve external DTDs or entities, though the parser should at least warn applications that this is happening. While reconstructing an XML document with exactly the same logical structure and content is possible, guaranteeing that it will match the original in a byte-by-byte comparison is not. The only way to ensure that your parser reports documents as you want, and not just the minimum required by the XML 1.0 specification, is to check its documentation and configure (or choose) your parser accordingly.
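To make the event model concrete as well, here is the same toy document run through Python's built-in xml.sax module, which produces roughly the event stream shown earlier. This is a sketch for illustration; SAX implementations differ in how they split character data:

import xml.sax

class EchoHandler(xml.sax.ContentHandler):
    # each callback corresponds to one event in the parser's stream
    def startElement(self, name, attrs):
        print('startElement:' + name)

    def characters(self, content):
        print('content: ' + content)

    def endElement(self, name):
        print('endElement:' + name)

xml.sax.parseString(
    b'<name><given>Keith</given><family>Johnson</family></name>',
    EchoHandler())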
version="1.0"?> <?xml-stylesheet type="text/css" href="test.css"?> An XML-aware application, such as Internet Explorer 5.5, would be capable of recognizing the XML author's intention to display the document using the test.css stylesheet. This processing instruction can also be used for XSLT stylesheets or other kinds of stylesheets not yet developed, though the application needs to understand how to process them to make this work. Applications that do not understand the processing instructions can still parse and use the information in the XML document while ignoring the unfamiliar processing instruction. The furniture example from Chapter 20 (see Figure 20 the XML name immediately after the <? with a notation, as described in the next section but important to custom XML applications. though they are a feature available to applications, they are also rarely used and not generally considered interoperable among XML processors. The linking and referencing tools described in the next section are more commonly used instead. The ability to create links between and within documents is important to XML's long-term success, both on the World Wide Web and for other applications concerned about the relationships between information. The XLink specification, described in Chapter 10, defines the semantics of how these links can be created. Unlike simple HTML links, XLinks can express sophisticated relationships between the source and target elements of a link. If an XML application requires the ability to encode relationships between various parts of an XML document, or between different documents, implementing this functionality using the XLinks recommendation should be considered. Not only would it save the effort of defining a new (and incompatible) linking scheme, the resulting documents would be intelligible to new XML authoring tools and browsers as XLinks support becomes more widespread. RDDL, described in Chapter 14, makes extensive use of XLink for machine-readable linking.
https://flylib.com/books/en/1.133.1.137/1/
CC-MAIN-2019-47
en
refinedweb
Posted 17 Sep 2013 Link to this post
Line 1: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="test.aspx.cs" Inherits="test" %>
Line 2: <%@ Register assembly="Telerik.Web.UI" namespace="Telerik.Web.UI" tagprefix="telerik" %>

Posted 19 Sep 2013 Link to this post

Posted 20 Sep 2013 Link to this post
protected void Page_Load(object sender, EventArgs e)
{
    RadPivotGrid1.OlapSettings.XmlaConnectionSettings.Encoding = System.Text.Encoding.UTF8;
}

Posted 05 Dec 2013 Link to this post

Posted 22 Jul 2014 Link to this post

Posted 24 Jul 2014 Link to this post

Posted 07 Apr 2015 Link to this post
The above did not work for me. How would one go about removing the setting from the markup definition of the grid? Shannon

Posted 10 Apr 2015 Link to this post
<OlapSettings>
    <XmlaConnectionSettings></XmlaConnectionSettings>
</OlapSettings>

Posted 10 Apr 2015 in reply to Eyup Link to this post

Posted 08 Mar 2017 Link to this post
Can you ***PLEASE*** fix this? It's still an issue in the latest release, 2017.1.228.

Posted 10 Mar 2017 Link to this post

Posted 10 Mar 2017 in reply to Eyup Link to this post
While I understand this is a workaround, this really needs to be fixed in the designer. When this error is encountered, it is not obvious what happened: Parser Error Message: 'EncodingConverter' is unable to convert 'System.Text.UTF8Encoding' to 'System.ComponentModel.Design.Serialization.InstanceDescriptor'. This takes a while to remember what the cause is, and in addition, adding this code into every page's code-behind is not a good solution.

Posted 15 Mar 2017 Link to this post

Posted 03 Jun 2017 Link to this post
From September 20, 2013: "Our developers are aware of this issue and will introduce a fix for the future releases." It sounds as if it is planned. Is there really a current road map for PivotGrid? I haven't seen anything besides compatibility/bug fixes in a long time.

Posted 05 Jun 2017 Link to this post
Hi John, The UI for ASP.NET AJAX is a pretty mature product and the roadmap is currently based on enhancing the controls' stability, improving and unifying the appearance of the different components, along with better documentation and how-to articles. I searched for this PivotGrid request in the public portal and wasn't able to find one. This unfortunately means that we cannot measure it and raise its priority in the backlog. Can you please log your inquiry in the feedback portal as suggested by Vessy on Mar 15 -?

Posted 05 Jun 2017 in reply to Rumen Link to this post
Hello Rumen, Thank you for the reply. I am unsure why I have to submit a known issue to a request queue. Especially one that was acknowledged about 4 years ago as being fixed soon. Can you please explain why I have to do this? If so, please do. John

Posted 07 Jun 2017 Link to this post
Hi there, I agree with you that this is a known issue, but its priority is low because only a few users have requested it over the years and there are suggestions for how to overcome it.
The chance of it getting implemented will become higher if the issue is logged by a customer in the feedback portal, where people can vote for it and raise its status in the backlog.

Posted 06 Nov 2017 in reply to Rumen Link to this post
Same here. Just started using PivotGrid. Out of the box it is wrong, so I applied the workaround. Do you recommend another way to go for web development, instead of AJAX?
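To pull the thread's workaround together in one place: remove the Encoding setting from the XmlaConnectionSettings element in the page markup, and set it from the code-behind instead. A minimal sketch assembled from the snippets posted above (RadPivotGrid1 is the thread's control ID; adjust to your own page):

protected void Page_Load(object sender, EventArgs e)
{
    // Setting the encoding in code keeps it out of the serialized markup,
    // which is what triggers the 'EncodingConverter' parser error.
    RadPivotGrid1.OlapSettings.XmlaConnectionSettings.Encoding = System.Text.Encoding.UTF8;
}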
https://www.telerik.com/forums/pivotgrid-problem
CC-MAIN-2019-47
en
refinedweb
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.

Class: Aws::AutoScalingPlans::Types::CustomizedLoadMetricSpecification
Inherits: Struct
Defined in: (unknown)

Overview

When passing CustomizedLoadMetricSpecification as input to an Aws::Client method, you can use a vanilla Ruby Hash. Represents a CloudWatch metric of your choosing that can be used for predictive scaling.

For predictive scaling to work with a customized load metric specification, AWS Auto Scaling needs access to the Sum and Average statistics that CloudWatch computes from metric data. Statistics are calculations used to aggregate data over specified time periods. When you choose a load metric, make sure that the required Sum and Average statistics for your metric are available in CloudWatch and that they provide relevant data for predictive scaling. The Sum statistic must represent the total load on the resource, and the Average statistic must represent the average load per capacity unit of the resource. For example, there is a metric that counts the number of requests processed by your Auto Scaling group. If the Sum statistic represents the total request count processed by the group, then the Average statistic for the specified metric must represent the average request count processed by each instance of the group. For information about terminology, available metrics, or how to publish new metrics, see Amazon CloudWatch Concepts in the Amazon CloudWatch User Guide.

#dimensions ⇒ Array<Types::MetricDimension>
The dimensions of your customized load metric specification.

#metric_name ⇒ String
The name of the metric.

#namespace ⇒ String
The namespace of the metric.

#statistic ⇒ String
The statistic of the metric. Currently, the value must always be Sum. Possible values: Average, Minimum, Maximum, SampleCount, Sum.

#unit ⇒ String
The unit of the metric.
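The vanilla-Hash example from the original page did not survive extraction; the sketch below shows the shape such a Hash typically takes in the v2 Ruby SDK. Every value here is an invented placeholder, and statistic must be "Sum" per the note above:

customized_load_metric_specification = {
  metric_name: "MyRequestCount",      # placeholder metric name
  namespace: "MyCompany/MyService",   # placeholder namespace
  dimensions: [
    {
      name: "MyDimension",            # placeholder dimension name
      value: "MyValue",               # placeholder dimension value
    },
  ],
  statistic: "Sum",                   # currently must always be Sum
  unit: "None",
}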
https://docs.aws.amazon.com/sdkforruby/api/Aws/AutoScalingPlans/Types/CustomizedLoadMetricSpecification.html
CC-MAIN-2019-47
en
refinedweb
kvm_getprocs, kvm_getargv, kvm_getenvv — access user process state

#include <sys/param.h>
#include <sys/sysctl.h>
#include <kvm.h>

struct kinfo_proc *
kvm_getprocs(kvm_t *kd, int op, int arg, size_t elemsize, int *cnt);

The op and arg arguments constitute a predicate which limits the set of processes returned; the value of op describes the filtering predicate as follows:

KERN_PROC_KTHREAD
KERN_PROC_ALL
KERN_PROC_PID
KERN_PROC_PGRP
KERN_PROC_SESSION
KERN_PROC_TTY
KERN_PROC_UID
KERN_PROC_RUID

Only the first elemsize bytes of each array entry are returned. The number of processes found is returned in the reference parameter cnt. The processes are returned as a contiguous array of kinfo_proc structures, the definition for which is available in <sys/sysctl.h>. This memory is locally allocated, and subsequent calls to kvm_getprocs() and kvm_close() will overwrite this storage.

kvm_getprocs() sets the thread ID field accordingly for each thread except for the process (main thread), which has it set to -1.

kvm_getprocs(), kvm_getargv(), and kvm_getenvv() all return NULL on failure.

SEE ALSO: kvm(3), kvm_geterr(3), kvm_nlist(3), kvm_open(3), kvm_read(3)

BUGS: These routines do not belong in the kvm(3) interface.
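A minimal usage sketch, not taken from the manual page itself: it opens the live kernel with kvm_openfiles(3) and prints the PID and command name of every process. The p_pid and p_comm field names follow OpenBSD's struct kinfo_proc; error handling is kept to the basics.

#include <sys/param.h>
#include <sys/sysctl.h>
#include <kvm.h>
#include <limits.h>
#include <stdio.h>

int
main(void)
{
	char errbuf[_POSIX2_LINE_MAX];
	struct kinfo_proc *procs;
	kvm_t *kd;
	int cnt, i;

	/* KVM_NO_FILES: query the running kernel, no crash dump needed. */
	kd = kvm_openfiles(NULL, NULL, NULL, KVM_NO_FILES, errbuf);
	if (kd == NULL) {
		fprintf(stderr, "kvm_openfiles: %s\n", errbuf);
		return 1;
	}

	procs = kvm_getprocs(kd, KERN_PROC_ALL, 0, sizeof(*procs), &cnt);
	if (procs == NULL) {
		fprintf(stderr, "kvm_getprocs: %s\n", kvm_geterr(kd));
		kvm_close(kd);
		return 1;
	}

	for (i = 0; i < cnt; i++)
		printf("%5d %s\n", procs[i].p_pid, procs[i].p_comm);

	kvm_close(kd);
	return 0;
}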
http://man.openbsd.org/OpenBSD-6.1/kvm_getprocs.3
CC-MAIN-2019-47
en
refinedweb
From: Corwin Joy (cjoy_at_[hidden])
Date: 2001-06-22 19:09:18

----- Original Message -----
From: "Beman Dawes" <bdawes_at_[hidden]>
To: <boost_at_[hidden]>; <boost_at_[hidden]>
Sent: Friday, June 22, 2001 6:40 PM
Subject: Re: [boost] Question about leading underscores

> At 06:40 PM 6/22/2001, Greg Colvin wrote:
> >The safe rule is not to use leading underscores, although I
> >think those above are technically OK, if useless.
>
> Why? Lots of programmers (me included) use a single leading underscore in
> private member names. It never causes any problems, and is completely
> standard conforming.

The trouble with putting two underscores in a variable name is that you might end up colliding with a preprocessor / #define macro defined by the standard library. Since these #defines don't have scope you can't be safe from them, and so, as noted below, the standard reserves certain names which it may replace by #define macros. Here is a related post that appeared recently on comp.c++.moderated.

----- Original Message -----
From: "Pete Becker" <petebecker_at_[hidden]>
Newsgroups: comp.lang.c++.moderated
Sent: Monday, June 18, 2001 4:05 PM
Subject: Re: Header protection against forbidden marcos

> Attila Feher wrote:
> >
> > Hi All,
> >
> > I need some help in this trouble. As far as I know the only portable
> > way for protecting header multiply inclusion is the good-old #ifndef,
> > #define, stuff, #endif. However the C++ standard (for whatever reason)
> > reserves _all_ macros for the standard library...
>
> No, it doesn't. Basically, all names that begin with an underscore
> followed by a capital letter or by another underscore are reserved, just
> as in C (it's actually a little broader: any name containing two
> underscores is reserved, not just ones that begin with two underscores).
> Users are free to guard their headers with their own macros, and
> typically name them after the header:
>
> #ifndef Whatever_hh
> #define Whatever_hh
> // ...
> #endif
>
> --
> Pete Becker
> Dinkumware, Ltd. ()
----- End Original Message from comp.c++.moderated -----

<..Beman continues...>
> That choice was based in an experiment some years ago trying several
> candidates (including none, trailing underscore, and some others I can't
> remember.) Leading underscore won.
>
> --Beman

>17.4.3.1.2 Global names [lib.global.names]
>
>1 Certain sets of names and function signatures are always reserved to the
> implementation:
> -- Each name that contains a double underscore (__) or begins with an
> underscore followed by an uppercase letter is reserved to the
> implementation for any use.
> -- Each name that begins with an underscore is reserved to the
> implementation for use as a name in the global namespace.22)
>
> _________________________
> 22) Such names are also reserved in namespace ::std (_lib.reserved.names_).

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
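To make the reserved-name rules quoted above concrete, a few invented C++ examples of what they forbid and permit:

// Reserved to the implementation -- user code must not define these:
#define _MY_GUARD_H   // underscore + uppercase letter: reserved for any use
int my__counter;      // contains a double underscore: reserved for any use
int _tally;           // leading underscore: reserved in the global namespace

// Allowed under the rules:
class widget {
    int _size;        // single leading underscore on a class member is fine
};
#ifndef WIDGET_H      // guard named after the header, per Pete Becker's advice
#define WIDGET_H
#endif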
https://lists.boost.org/Archives/boost/2001/06/13554.php
CC-MAIN-2019-47
en
refinedweb
I've looked for some time for a good explanation of the association between ASP.NET (MVC/WebAPI) as the server side and AngularJS as the client. But I'm still uncertain whether it is the correct way to implement an SPA with both technologies. My idea is to use ASP.NET as a REST service. But I'm impressed by the bundling in ASP.NET; to use it I need ASP.NET MVC and its Razor templates. Is that a good practice for building an application with AngularJS as the client? I guess I only need MVC for authorization, to decide whether the user is allowed to go to the relevant pages. Or is there a better way with Angular to find out the Windows user? Because I need the account of the Windows login user. I hope anyone can help me to find out the right way.

The stack you're suggesting is perfectly good to use. My team has built an application in the past where we had 2 Razor pages; one was basically the external app (by external I mean non-logged-in user), which was quite lightweight and only handled signup, and the other acted as an internal shell for the rest of the app (once logged in). I recently stumbled across a really cool bootstrap project that can help you construct a boilerplate app. My suggestion is just download this, and have a look through the boilerplate app to give you more clarity on its usage. (Then it's up to you if you want to use it or not, but it'll give you more clarity on solution construction.)

Thus your final app would have one (or 2) MVC controller(s) that deliver your shell pages, and all other calls would be done via WebAPI endpoints. See rough example below:

..\Controllers\RootController.cs

public class RootController : Controller
{
    // we only utilize one server side rendered page to deliver our initial
    // payloads; scripts; etc.
    public ActionResult Index()
    {
        // using partial view allows you to get rid of the _layout.cshtml page
        return PartialView();
    }
}

..\Views\Root\Index.cshtml

<html ng-app>
<head>
    @Styles.Render("~/content/css")
    @Scripts.Render("~/bundles/js")
    @Scripts.Render("~/bundles/app")
</head>
<body>
    <div class="container">
        <div ng-view></div>
    </div>
</body>
</html>

..\WebAPI\SomethingController.cs

public class SomethingController : ApiController
{
    private readonly IRepository _repo;

    public SomethingController(IRepository repo)
    {
        _repo = repo;
    }

    // GET: api/Somethings
    public IQueryable<Something> GetSomethings()
    {
        return _repo.GetAll();
    }
}

Tip: The easiest way I've found to construct projects like this is to use Visual Studio's New Project wizard. Select Web Project -> MVC, and make sure to select WebAPI. This will ensure that App_Start has all the required routes & configs pre-generated.
https://codedump.io/share/JkdGisONSnzE/1/combining-aspnet-with-angularjs
CC-MAIN-2018-09
en
refinedweb
make TaggedInputSplit a public class for development of MultipleInput extensions for other DB products
--------------------------------------------------------------------------------------------------

Key: MAPREDUCE-4063
URL:
Project: Hadoop Map/Reduce
Issue Type: Improvement
Affects Versions: 0.23.1
Reporter: Muddy Dixon
Priority: Minor

In trunk, org.apache.hadoop.mapreduce.lib.input.TaggedInputSplit is not a public class. This prevents developing MultipleInput extensions for other DB products. I made a workaround file. So unless there is a reason not to, TaggedInputSplit should be public.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-issues/201203.mbox/%3C569130990.14386.1332664714950.JavaMail.tomcat@hel.zones.apache.org%3E
CC-MAIN-2018-09
en
refinedweb
BuildRequires for 2to3

If your specfile manually invokes /usr/bin/2to3, add an explicit build-time requirement on /usr/bin/2to3. This is currently provided by the python-tools package, but might in the future be provided by the python3-tools package. Remember to add it within the 0%{?with_python3} conditional:

%if 0%{?with_python3}
BuildRequires: /usr/bin/2to3
BuildRequires: python3-setuptools, python3-devel
%endif # with_python3

You don't need to do this if you're using lib2to3 (e.g. implicitly via python-setuptools, a.k.a. Distribute): we ship lib2to3 within the core python and python3 subpackages. Note however that in some cases the setup.py may need 2to3 to be run on it before it is valid Python 3 code.

Questions
- Where to add this on the Python Guidelines page?
http://fedoraproject.org/w/index.php?title=BuildRequires_for_2to3(draft)&oldid=151987
CC-MAIN-2018-09
en
refinedweb
It should be noted that this only supports the MS Excel 5.0/95 format (.xls) file type. The syntax and use are pretty simple for this tool; it has four inputs: the table path (string), the output path (string), use field alias (Boolean, optional), and use domain and sub-type description (Boolean, optional). Let's dive into an example:

import arcpy
import os

if __name__ == '__main__':
    table = r"c:\temp\scratch.gdb\StarbucksAddresses"
    out_excel = r"c:\temp\starbucks.xls"
    if os.path.isfile(out_excel):
        os.remove(out_excel)
    arcpy.TableToExcel_conversion(table, out_excel)

All we have done here is take the simplest case: convert my list of Starbucks locations to an Excel spreadsheet, so I can now use it in Excel. Enjoy!

ArcGIS Tool help link -
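Following up on the two optional Boolean-style inputs mentioned above: in the geoprocessing call they are passed as the third and fourth arguments. The sketch below assumes the documented ALIAS/NAME and CODE/DESCRIPTION keyword values; check the tool help for your ArcGIS version before relying on them:

# Write field aliases as column headers and domain/subtype descriptions
# instead of raw codes (reusing table and out_excel from the example above).
arcpy.TableToExcel_conversion(table, out_excel, "ALIAS", "DESCRIPTION")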
https://anothergisblog.blogspot.in/2013/09/
CC-MAIN-2018-09
en
refinedweb
IDtcTransaction Interface
.NET Framework (current version)

Namespace: System.Transactions
Assembly: System.Transactions (in System.Transactions.dll)

Describes a DTC transaction.

You should not implement this interface, as it is used only by the TransactionInterop class internally to represent the unmanaged version of the ITransaction interface of the System.EnterpriseServices namespace.

.NET Framework: Available since 2.0
https://msdn.microsoft.com/en-us/library/ms149704.aspx?cs-save-lang=1&cs-lang=fsharp
CC-MAIN-2018-09
en
refinedweb
Regex.GroupNameFromNumber Method

Gets the group name that corresponds to the specified group number. (For more information, see Grouping Constructs.)

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

public class Example
{
   public static void Main()
   {
      string pattern = @"(?<city>[A-Za-z\s]+), (?<state>[A-Za-z]{2}) (?<zip>\d{5}(-\d{4})?)";
      string[] cityLines = { "New York, NY 10003", "Brooklyn, NY 11238",
                             "Detroit, MI 48204", "San Francisco, CA 94109",
                             "Seattle, WA 98109" };

      Regex rgx = new Regex(pattern);
      List<string> names = new List<string>();
      int ctr = 1;
      bool exitFlag = false;

      // Get group names.
      do {
         string name = rgx.GroupNameFromNumber(ctr);
         if (! String.IsNullOrEmpty(name))
         {
            ctr++;
            names.Add(name);
         }
         else
         {
            exitFlag = true;
         }
      } while (! exitFlag);

      foreach (string cityLine in cityLines)
      {
         Match match = rgx.Match(cityLine);
         if (match.Success)
            Console.WriteLine("Zip code {0} is in {1}, {2}.",
                              match.Groups[names[3]],
                              match.Groups[names[1]],
                              match.Groups[names[2]]);
      }
   }
}
// The example displays the following output:
//    Zip code 10003 is in New York, NY.
//    Zip code 11238 is in Brooklyn, NY.
//    Zip code 48204 is in Detroit, MI.
//    Zip code 94109 is in San Francisco, CA.
//    Zip code 98109 is in Seattle, WA.

The regular expression pattern is defined by the following expression:

(?<city>[A-Za-z\s]+), (?<state>[A-Za-z]{2}) (?<zip>\d{5}(-\d{4})?)
https://msdn.microsoft.com/en-us/library/system.text.regularexpressions.regex.groupnamefromnumber(v=vs.100).aspx
CC-MAIN-2018-09
en
refinedweb
Given a text txt[0..N-1] and a pattern pat[0..M-1], write a function search(pat, txt) that prints all occurrences of pat[] in txt[]. You may assume that N > M.

Naive Pattern Searching: Slide the pattern over the text one by one and check for a match. If a match is found, then slide by 1 again to check for subsequent matches.

C

// C program for Naive Pattern Searching algorithm
#include <stdio.h>
#include <string.h>

void search(char *pat, char *txt)
{
    int M = strlen(pat);
    int N = strlen(txt);

    /* A loop to slide pat[] one by one */
    for (int i = 0; i <= N - M; i++) {
        int j;

        /* For current index i, check for pattern match */
        for (j = 0; j < M; j++)
            if (txt[i + j] != pat[j])
                break;

        if (j == M)  // if pat[0...M-1] = txt[i, i+1, ...i+M-1]
            printf("Pattern found at index %d\n", i);
    }
}

int main()
{
    char txt[] = "AABAACAADAABAAABAA";
    char pat[] = "AABA";
    search(pat, txt);
    return 0;
}

Python

# Python program for Naive Pattern Searching
def search(pat, txt):
    M = len(pat)
    N = len(txt)

    # A loop to slide pat[] one by one
    for i in xrange(N - M + 1):

        # For current index i, check for pattern match
        for j in xrange(M):
            if txt[i + j] != pat[j]:
                break
            if j == M - 1:  # if pat[0...M-1] = txt[i, i + 1, ...i + M-1]
                print "Pattern found at index " + str(i)

# Driver program to test the above function
txt = "AABAACAADAABAAABAA"
pat = "AABA"
search(pat, txt)
# This code is contributed by Bhavya Jain

Java

// Java program for Naive Pattern Searching
public class NaiveSearch {

    public static void search(String txt, String pat) {
        int M = pat.length();
        int N = txt.length();

        /* A loop to slide pat one by one */
        for (int i = 0; i <= N - M; i++) {
            int j;

            /* For current index i, check for pattern match */
            for (j = 0; j < M; j++)
                if (txt.charAt(i + j) != pat.charAt(j))
                    break;

            if (j == M) // if pat[0...M-1] = txt[i, i+1, ...i+M-1]
                System.out.println("Pattern found at index " + i);
        }
    }

    public static void main(String[] args) {
        String txt = "AABAACAADAABAAABAA";
        String pat = "AABA";
        search(txt, pat);
    }
}
// This code is contributed by Harikishore

Output:
Pattern found at index 0
Pattern found at index 9
Pattern found at index 13

What is the best case? The best case occurs when the first character of the pattern is not present in the text at all.

txt[] = "AABCCAADDEE";
pat[] = "FAA";

The number of comparisons in the best case is O(n).

What is the worst case? The worst case of Naive Pattern Searching occurs in the following scenarios.
1) When all characters of the text and pattern are the same.

txt[] = "AAAAAAAAAAAAAAAAAA";
pat[] = "AAAAA";

2) The worst case also occurs when only the last character is different.

txt[] = "AAAAAAAAAAAAAAAAAB";
pat[] = "AAAAB";

The number of comparisons in the worst case is O(m*(n-m+1)). Although strings which have repeated characters are not likely to appear in English text, they may well occur in other applications (for example, in binary texts). The KMP matching algorithm improves the worst case to O(n). We will be covering KMP in the next post. Also, we will be writing more posts to cover all pattern searching algorithms and data structures.

Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
https://www.geeksforgeeks.org/searching-for-patterns-set-1-naive-pattern-searching/
CC-MAIN-2018-09
en
refinedweb
13 comments:

Hello Gerard, my name is Lucas and I am working on a WebService HTTPS two-way authentication with Oracle WL 10.3. At the moment I have authenticated a Java client class with HTTPS one-way authentication, including the following code:

System.setProperty("javax.net.ssl.trustStore", "C:\\bea103\\wlserver_10.3\\server\\lib\\DemoTrust.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "DemoTrustKeyStorePassPhrase");

This works fine one-way. Then, I enabled two-way on my WL server and added the following source code in the Java client class:

System.setProperty("javax.net.ssl.keyStore", "C:\\bea103\\wlserver_10.3\\server\\lib\\DemoIdentity.jks");
System.setProperty("javax.net.ssl.keyStorePassword", "DemoIdentityKeyStorePassPhrase");

And this gets me the following error:

keyStore is : C:\bea103\wlserver_10.3\server\lib\DemoIdentity.jks
keyStore type is : JKS
keyStore provider is :
init keystore
init keymanager of type SunX509
default context init failed: java.security.UnrecoverableKeyException: Cannot recover key
Respuesta ERROR: ; nested exception is:
java.net.SocketException: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: com.sun.net.ssl.internal.ssl.DefaultSSLContextImpl)

I tried to include a "CertGenCA.der" into "DemoIdentity.jks" but it didn't work. I used this:

C:\bea103\jdk160_05\bin\keytool -import -alias certgenca -file C:/bea103/wlserver_10.3/server/lib/CertGenCA.der -keystore C:/bea103/wlserver_10.3/server/lib/DemoIdentity.jks -keypass DemoIdentityKeyStorePassPhrase -storepass DemoIdentityKeyStorePassPhrase

Can you give me a tip for this problem? Sorry for my English, thanks a lot and special regards from Argentina. Lucas.

The problem here is that you can only use the system properties for keystores where the keystore and key passwords are the same. This is not true for the DemoIdentity store, which has two different passwords. You can either import the identity into a simpler keystore, create your own identity, or use the following API: PersistentSSLInfo, which provides far more control over the SSL configuration. Hope this helps, Gerard

Thanks Gerard for the fast reply, I finally understand the problem.
I'm trying to use the PersistentSSLInfo in a WL client but this import cannot be resolved: import weblogic.wsee.jaxws.sslclient;

Anyway, I implemented another Java class, using an Axis client, where I can set the three passwords for HTTPS two-way authentication. The class looks like:

package bo.socket;

import java.io.FileInputStream;
import java.io.IOException;
import java.security.KeyStore;
import java.util.Hashtable;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

import org.apache.axis.components.net.JSSESocketFactory;
import org.apache.axis.components.net.SecureSocketFactory;

public class MyCustomSSLSocketFactory extends JSSESocketFactory implements SecureSocketFactory {

    public MyCustomSSLSocketFactory(Hashtable attributes) {
        super(attributes);
    }

    protected void initFactory() throws IOException {
        try {
            SSLContext context = getContext();
            sslFactory = context.getSocketFactory();
        } catch (Exception e) {
            if (e instanceof IOException) {
                throw (IOException) e;
            }
            System.out.print(e.getMessage());
            throw new IOException(e.getMessage());
        }
    }

    protected SSLContext getContext() throws Exception {
        try {
            String keystore_type = KeyStore.getDefaultType(); // "JKS"
            KeyStore keyStore = KeyStore.getInstance(keystore_type);
            KeyStore trustStore = KeyStore.getInstance(keystore_type);

            char[] keystore_password = "DemoIdentityKeyStorePassPhrase".toCharArray();
            keyStore.load(new FileInputStream("C:\\bea103\\wlserver_10.3\\server\\lib\\DemoIdentity.jks"), keystore_password);

            char[] trusstore_password = "DemoTrustKeyStorePassPhrase".toCharArray();
            trustStore.load(new FileInputStream("C:\\bea103\\wlserver_10.3\\server\\lib\\DemoTrust.jks"), trusstore_password);

            String algorithmTrust = TrustManagerFactory.getDefaultAlgorithm(); // PKIX
            TrustManagerFactory tmf = TrustManagerFactory.getInstance(algorithmTrust);
            tmf.init(trustStore);

            String algorithmKey = KeyManagerFactory.getDefaultAlgorithm(); // "SunX509"
            KeyManagerFactory kmf = KeyManagerFactory.getInstance(algorithmKey);
            char[] key_password = "DemoIdentityPassPhrase".toCharArray();
            kmf.init(keyStore, key_password);

            SSLContext sslctx = SSLContext.getInstance("SSL");
            sslctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
            return sslctx;
        } catch (Exception e) {
            e.printStackTrace();
            throw new Exception("Error creating context for SSLSocket.", e);
        }
    }
}

And in the main call of a WS from the client I set the new SSLSocketFactory class:

AxisProperties.setProperty("axis.socketSecureFactory", "bo.socket.MyCustomSSLSocketFactory");

This works fine for an HTTPS two-way authentication Axis client implementation. I am now trying to use this in my WL client class. Thanks a lot for your help and for throwing some light on our way... ;) Lucas.

Hello Gerard, how can I use the DemoIdentity and DemoTrust keystores that come with WebLogic in a VB.NET or C# client? How can I set these values in my code? Could you help me please? Thanks, Mauricio

Sorry, no idea, not used .NET.

Hi Gerard, just wanted to let you know that I regularly visit this blog when I need the BEA passphrases and that I've linked to it in our documentation. Very useful, thank you! Best regards, Ganesh

Hi Gerard, in WebLogic 10.3.1, when I started WL, it would load the trusted certificates automatically as below. But in WL 10.3.2, it does not. In my console under Server -> Configuration -> Keystores tab, in my case, the drop-down for keystore is "Demo Identity and Demo Trust". I checked and all the jks and cacerts are there.
Mine is a fresh installation of WL 10.3.2 Do you have any ideas? Thanks Andy I’m trying to send email notification through SOA 11g and I tried to configure Gmail SMTP port 465 SSL enabled and tried to send a notification but I got SSL exception. I got CA root certificate for gmail and I'm trying to insert into Demotrust.jks would you be able to tell me How I could do this. Vic Hoang, Sorry for the very late reply; but I don't. Did you have any resolution for this issue? Gerard Gerard, On Weblogic 9.2, when I only pass -Djavax.net.ssl.trustStore= config/DemoTrust.jks, the following exception was thrown : java.net.SocketException: Default SSL context init failed: Unable to initialize, java.io.IOException: DerInputStream.getLength(): lengthTag=6, too big. What's wrong ? Thanks & Regards Setya Hi, It turns out that the DemoTrust.jks is corrupted. Setya Hello Everyone... I am pretty new to Webservice and SSL. We have to make a call to a webservice via HTTPS, i have tried from the Weblogic but seems some issue with Weblogic. So now trying to configure 2way SSL from java code point of view. We are using axis 2 API. Can anybody share the source code for the same and give some pointers that how exactly it works. Thank you so much for the private key password for the DemoIdentity.jks
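One way to act on the advice above (importing the demo identity into a simpler keystore whose store and key passwords match, so the javax.net.ssl system properties can be used) is keytool's -importkeystore option, available from JDK 6 onward. This is an illustrative sketch, not from the original post: the destination file name and new password are placeholders, and demoidentity is the usual alias in the demo store (verify yours with keytool -list):

keytool -importkeystore \
  -srckeystore DemoIdentity.jks -srcstorepass DemoIdentityKeyStorePassPhrase \
  -srcalias demoidentity -srckeypass DemoIdentityPassPhrase \
  -destkeystore SimpleIdentity.jks -deststorepass MyNewPassPhrase \
  -destalias demoidentity -destkeypass MyNewPassPhrase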
http://kingsfleet.blogspot.com/2008/11/using-demoidentity-and-demotrust.html
CC-MAIN-2018-09
en
refinedweb
I am using os to list the filenames within a directory. I am also using pandas to list the contents of one column in a CSV file. I have printed the results of both and now I want to match the names that appear in both prints and also identify which names are exclusive to one print. Below is my code which gets the names and the contents of the CSV file.

import os, sys
import pandas as pd

path = "/mydir/csvfile"
dirs = os.listdir(path)
for file in dirs:
    print file

fields = ['Column']
df = pd.read_csv('/mydir/csv_file', skipinitialspace=True, usecols=fields)
print df.Column

Instead of

for file in dirs:
    print file

Build a list:

files = [file for file in dirs]

Then use the DataFrame to check:

df.Column.isin(files)  # this will check elementwise
Out:
0    True
1    True
2    True
3    True
Name: Column, dtype: bool

Or

df.Column.isin(files).all()  # if all of them are the same
Out: True
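The answer above covers the matching half of the question. For the names exclusive to one listing, a set difference works; this sketch reuses the files and df names from the answer (they are the answer's variables, not part of pandas itself):

files_set = set(files)
column_set = set(df.Column)

only_on_disk = files_set - column_set  # filenames missing from the CSV column
only_in_csv = column_set - files_set   # column values with no matching file

print only_on_disk
print only_in_csv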
https://codedump.io/share/2RZlIv6OMeaa/1/match-identical-words-in-two-prints
CC-MAIN-2018-09
en
refinedweb
CFUserNotificationDisplayAlert not working

Hi, in my project I include:

#include <CoreFoundation/CoreFoundation.h>

and

#include <QtMacExtras>

and I am trying to use the function CFUserNotificationDisplayAlert, but the linker throws the error "Symbol(s) not found for architecture x86_64". How can I fix this? (I am compiling in an OS X environment.) THX

- SGaist Lifetime Qt Champion
Hi, are you linking your project to the CoreFoundation framework?

Yes, I do:

QT += macextras

- SGaist Lifetime Qt Champion
Using the qtmacextras module doesn't mean that you are linking against the CoreFoundation framework. What if you add

LIBS += -framework CoreFoundation

to your .pro file?
https://forum.qt.io/topic/63626/cfusernotificationdisplayalert-not-working
CC-MAIN-2018-09
en
refinedweb
The Singleton design pattern is an area of C++ philosophy where I happen to be in complete disagreement with the rest of the C++ world. Singleton classes, as described in Gamma Et Al. are unnecessary; they add excess verbiage to code, and they are a symptom of bad design. I have described my reasons for saying this in my writeup called Don't Use Singleton Classes. My challenge to you, the reader: describe for me a class that must be implemented using the form of the Singleton Pattern in Gamma Et Al.. Alternatively, describe one that is simpler, or works more efficiently, using that pattern. Put writeups to answer the challenge in this node and I will do my best to answer them. Let me know you did with a /msg. You're going to have to work really hard to convince me that any singular object should have its behavior and singularity merged into a singleton class. Suggested further reading: Arthur J. Riel, Object-Oriented Design Heuristics, 1996, Addison-Wesley, ISBN 0-201-63385-X. I agree with Core's comments (Although I wish they'd been in the other node) for every OO language except C++. In C++ you can declare any global variable you want, exactly the way you do in C. You make things like singletons to enforce their working correctly. In C, you're working without a net. I must challenge several points put forward by Rose Thorn: The number of classes in your program does not determine maintenance burden all by itself. Remember that small classes are easier to maintain. A singleton class that has two members for each function (one for the actual behavior and one to forward to the singular instance) is twice as complex as the server class or the client class separately. Regardless of whether we separate the client and server or merge them, we have the same amount of code to maintain! The client class is the real thing; it's the class the outside world uses. Suppose that not all of your class's behavior is singular. Where do you put the non-singular behavior? In the client class of course. Separation between the domain model and the implementation model is a Good Thing, not a Bad Thing. If by "re-engineering" you mean supplying a different implmentation, all you have to do is implement the server class interface to perform the behavior the client class expects. This is easier when you have separate classes. If you want to change the interface, you need to change both members under any model. Polymorphism is not the be-all and end-all of Object-Oriented Programming. Many C++ gurus will tell you that encapsulation is far more important. Encapsulation is about presenting an interface to the world, a contract between the class designer and the class user. If the contract can be satisfied without polymorphism, polymorphic member functions need not apply. Remember that Bjarne Stroustrup's original version of C with Classes, which made OOP something more than an academic toy, did not allow for polymorphism. Virtual functions were added later. I'd like to see a real model of a polymorphic class where every object must have its own behavior. For our theoretical Person class, we must be talking about an abstract base class Person from which a Joe class, a Jim class, and a MarySue class inherit. We want to pass Person objects into some other function somewhere, otherwise the Person class is meaningless. Joe has a magnetic personality, and along comes his disciple Bob, who acts exactly like Joe. All of a sudden, our assumption that there can be only one Joe is gone. 
(As an aside, it could be argued that a person's behavior is determined by our internal state, which comes from our initial genetic material [constructor arguments] plus outside environmental influences [mutator functions, input and output]. The internal processes that turn state into behavior (chemistry) remain the same.)

For a different example, suppose that Ford and GM have quite different accounting practices. In an accounting application, we can model this with a FordAccountingSystem class and a GeneralMotorsAccountingSystem class. But suppose Ford hired GM's CFO away from them. The new CFO replaces Ford's accounting system with an exact duplicate of GM's. We might be able to model Ford's new accounting system with a copy of General Motors's accounting system, except that we're then dealing with the same AccountingSystem class as before. But if it's a singleton, we're stuck with separating the interface from the implementation again. The problem lies in having a GeneralMotorsAccountingSystem class rather than a SpiffyNewAccountingSystem class which can be applied to both Ford and GM.

They're nothing more than a hack to get global variables in object oriented languages, and everyone knows it, but no one is willing to admit it.

As far as a time when a singleton is necessary, try having a program where you need an instance of an abstract class Person: you need every person to have their own behaviour, so they need their own class. If any two instances of person have the same class, then you have something seriously wrong with your program: a matter-transporter has probably malfunctioned. Of course, you needn't use a class when there is only one of a kind, as you won't need to allocate versions dynamically. But then you don't get polymorphic behaviour. While in this scenario you could use a variant of the visitor pattern to do the kind of things you might do with mixins, you lose the ability to very strongly enforce your uniqueness constraints. If you don't want polymorphism, don't use an OO implementation. Polymorphism is what gives OO its encapsulation, because method calls are referentially transparent. The use of classes as namespaces is something that is pretty much idiosyncratic to C++ derived languages. Certainly Objective C and Common Lisp don't do it. Namespace division is done by a separate mechanism. The use of private members is not the same as information hiding, and inheritance and polymorphism are not the same thing.

You CAN implement singleton classes nicely: every public method delegates to a private method which is invoked on the single "real" member of the class. Java example follows:

class Foo {
    private static Foo theInstance;

    public Foo() {
        if (theInstance == null) {
            theInstance = this;
        }
    }

    public void frob(int i) {
        theInstance.priv_frob(i);
    }

    private void priv_frob(int i) {
        // do whatever
    }
}
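Since much of the surrounding debate is about C++, here is a rough C++ rendering of the same delegating idea; it is an illustrative sketch in pre-C++11 style with invented names, not a canonical implementation from any of the writeups above:

class Frobber {
public:
    Frobber() {
        if (theInstance == 0)
            theInstance = this;   // first instance becomes the "real" one
    }
    void frob(int i) { theInstance->privFrob(i); }
private:
    void privFrob(int i) { /* do whatever */ }
    static Frobber* theInstance;
};

Frobber* Frobber::theInstance = 0;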
https://everything2.com/title/Singleton+Classes+are+necessary%252C+you+idiot+cheese%2521
CC-MAIN-2018-09
en
refinedweb
In our first article on policy-based network management and metadirectory technologies, we defined some terminology and introduced key concepts. Now that we've laid the groundwork, we can get down to the business of specifying the goals of this nascent technology and examine some of the hurdles that must be overcome in its implementation. For an introduction to metadirectory technologies, check out Mark Kaelin’s first article in this series, "The future of network administration is here, and it’s called a metadirectory." Goals The promise of policy-based network management is to cut costs by optimizing network usage and automating the chores associated with day-to-day operations. The technology that implements the network policies of management is the directory. Directories, of course, are the main gatekeepers for giving employees, customers, and business partners access to enterprise networks, applications, intranets, and extranets. The goal of directory-enabled networking is to establish a common management interface for all network resources in an enterprise. Prime candidates for the technology are white pages and yellow pages applications, e-commerce and security, e-mail and messaging, network and systems management, and policy-based networking. The data is represented as objects in a hierarchical tree and typically includes names, e-mail addresses, phone numbers, passwords, access rights, details on network devices, and applications. The theory behind the metadirectory is that consolidating multiple directories within an enterprise will lower costs, increase directory interoperability, and reduce administration time. The metadirectory links the individual directories within an organization via the “join,” which is software that integrates heterogeneous data from multiple repositories to provide a common view of all resources. A network manager should, with proper directory synchronization and multiple-namespace support, be able to make changes in one directory and have them automatically update all directories in the organization. The initial best use for metadirectory technology may be in electronic commerce. Metadirectory functions will allow a company to publish an always-current subset of its enterprise directory to an extranet. In addition, companies can use the technology to integrate internal customer directories in an environment where different business units conduct business with the same customers, creating a single point of management and access for customer information. By providing the repository for maintaining customer identity and policy, and the means to integrate information across the extranet, directory services will play an increasingly important role in e-business, point-to-point relationships, and architectures. Implementation A directory capable of fully supporting an array of enterprise applications must possess characteristics such as advanced naming and location functions, sophisticated administration and management mechanisms, and substantial security features. A hierarchical namespace is necessary in this context because it enables the property of inheritance, which means that a change to an entry is automatically propagated to subordinate entries in the directory system. Establishing a directory infrastructure in an organization requires not only choosing a standard technology but also the consideration of numerous political mechanisms within that organization. 
Management must be persuaded that such a project has enough merit to justify the substantial cost and each department must buy into an effort that appears to lessen their power over their own data. The proliferation of legacy directories increases the complexity of these systems and hinders their implementation. Directory management systems require that companies retrofit legacy data into a directory infrastructure. Unfortunately, the standards for accomplishing this retrofit are yet to be ironed out. Another potential problem with the directory management system is multimaster replication, which is endorsed by Microsoft and Novell Directory Services (NDS). While the feature is good for administration and access, it can also create issues regarding data integrity. In a multimaster system, a number of directory replicas are available throughout a network. The system provides fault tolerance, reduces wide-area traffic, and improves performance by keeping information close to those who need it. However, because the data can be updated and stored in multiple places, problems with data integrity can arise when two or more administrators make changes to the same information within a replication cycle. Microsoft and Novell attack this problem in different ways. Microsoft’s Active Directory uses an Update Sequence Number system, which assigns a number to each update. Novell uses a time-stamp to distinguish directory changes. Some of the early examples of directory-enabled network administration demonstrate why such systems are needed. Instead of relying on standards or vendor-specific product integration, the University of Clemson decided to develop its own system for instituting single-user identification numbers and passwords for all servers and applications. Students can now access both the e-mail system and the class registration servers with the same username and password. Brigham Young University has implemented a similar system for their students and staff. Using a prerelease version of Windows 2000 and its Active Directory component has allowed Compaq Computer Corp. to combine all of its resources, including thousands of machines and 85,000 employees, as objects in a single directory. Active Directory gives Compaq a centralized repository for information and facilitates access to that information, as well as the enforcement of company-wide standards. Because the user company’s resources must be organized hierarchically in Active Directory, preparation is particularly critical and time-consuming. The software not only makes it possible to centralize resource administration for an entire company, but it also allows delegation of administrative privileges to lower levels. Decisions Tom Nolle, president of CIMI Corporation, a technology assessment and consulting company, believes that organizations should ask themselves several questions before implementing a directory-enabled network solution: - Do you have relatively frequent instances in which problems such as congestion impair the performance of your most important applications? - Do you have persistent problems maintaining multiple databases that describe your users, applications, and the network? - Is your network based on a well-planned, cohesive, switched-LAN architecture? Nolle advises that if you answer “no” to two of these questions, there is a good possibility that implementing directory-enabled networking will cause more trouble than it is worth. 
As you can see, the implementation of a directory-enabled network management system with policy-based administration and metadirectories is not a simple undertaking. The bottom-line question to consider is whether the benefits of such an organizational system outweigh the initial costs and the potential for problems in implementation. Stay tuned The final part of this series will discuss the major players in this budding technology, including Microsoft, Cisco, Novell, IBM, and Oracle. We'll also discuss the battle over standards. Whatever so-called standard prevails will determine the direction of this technology for years to come. or.
http://www.techrepublic.com/article/metadirectory-anyone-whats-best-for-your-firm/
CC-MAIN-2016-44
en
refinedweb
Greetings, I am hoping the someone can point me in the right direction; as I have been unable to pinpoint the source of this error: Event Type: Warning Event Source: DNS EventID: 7062 "The DNS Server encountered a packet addressed to itself on IP address x.x.x.x. The packet is for the DNS name _ldap._tcp.dc._msdcs.mydomain2.com. The packet will be discarded" I have a W2K3 AD environment. Two DNS servers. I have checked the Forwarders, Root Hints, Master and Notify Lists, etc. Any help would be appreciated. Thanks. 1 Reply Apr 29, 2008 at 1:13 UTC Greetings. Have you gone through Microsoft's thing? http:/ How to troubleshoot 7062 errors logged in DNS event log This article was previously published under Q235689 Article ID : 235689 Last Review : February 26, 2007 Revision : 3.3 SUMMARY This article describes how to troubleshoot the cause or causes of event ID 7062 on a DNS server that is running Microsoft Windows 2000 or Microsoft Windows NT Server 4.0. If event ID 7062 logs on your DNS server, it will appear as follows: EVENT message 7062: DNS Server encountered a packet addressed to itself -- IP address<actual IP address>. The DNS server should never be sending a packet to itself. This situation DNS server. As part of your troubleshooting, you may find that none of the steps that are listed in event ID 7062 apply to your DNS server. However, event ID 7062 may continue to log on your DNS server. This article discusses some of the other reasons why this issue may occur. Step 4 of event ID 7062 may lead you to conclude that for the event to be triggered, a primary DNS server must create a delegation of a subdomain. However, the root DNS servers maintain the .com, .net, and other domains. Additionally, the root DNS servers delegate the namespace under those domains to other DNS servers. Therefore, although your DNS server may be the primary DNS server for example.com, your DNS server has been delegated that responsibility by the ".com" DNS server or servers. This means that if you have registered a domain with the NSFnet Network Information Center (InterNIC), and they delegate that domain to your DNS server, it is your responsibility to make sure that your DNS server can handle all requests for the registered domain. For example, you have a DNS server at dns.example.com, and you have recently registered example.org with InterNIC. After InterNIC delegates example.org to your DNS server, you must create a zone file that can answer queries for example.org. The following hypothetical sequence of events describes how event ID 7062 may continue to log to your DNS server if you have not configured it to maintain zone files for domains that you have registered. 1. A client computer tries to contact. The client computer sends a query for to the root servers. The root servers determine that the Start of Authority (SOA) for example.org is the DNS server dns.example.com at IP address 10.1.1.1. IP address 10.1.1.1 is your DNS server. 2. The client computer sends a request to 10.1.1.1 for. Your DNS server examines its zone files and determines that it is not the SOA for example.org because it does not have a zone file for. 3. Your DNS server sends an iterative query for example.org to the root servers. 4. The root servers respond to the iterative query from your DNS server by telling your DNS server that the SOA, or owner of the domain, for example.org is at 10.1.1.1. Your DNS server examines itself for the answer to the query, and does not find one. 5. 
If the root hints that are in Windows 2000 point to the same computer, event ID 7062 will log. For additional information about replacing the existing root hints with the default root hints, click the following article number to view the article in the Microsoft Knowledge Base: 249868 (http:/ Note Event ID 7062 will log even when zone transfers are disabled. For more information about this behavior, see "Event ID 7062 logs even when zone transfers are disabled" in the More Information section. MORE INFORMATION How to find the actual requested domains To find the actual requested domains, you must debug your DNS server. After you have created a log file, open it in Microsoft Word or Microsoft Excel. We do not recommend that you open the file in Notepad because Notepad does not parse characters into an easily readable format. When you have the log file open, search for "7062." This search should bring you to the first error. After you find the error, scroll up. A DNS query log looks similar to the following code. dns_ProcessMessage() for packet at 00A5E524. dns_AnswerQuestion() for packet at 00A5E524. Node for (3)www(10)mycompany(3)com(0) NOT in database. Closest node found"com." Encountered non-authoritative node with no matching RRs. dns_ProcessMessage() for packet at 00A5EAC4. Processing response packet at 00A5EAC4. Packet contains RR for authoritative zone node: "dns.hello.com." -- ignoring RR.dns_ContinueCurrentLookup() for query at 00A5E524. dns_AnswerQuestion() for packet at 00A5E524. dns_AnswerQuestionFromDatabase() for query at 00A5E524 node label = www question type = 0x0001 ERROR: Self-send to address 10.1.1.1!!! Log EVENT message 7062 (80001B96): The following describes what occurs during the important phases of this log: 1. The DNS query log starts with dns_ProcessMessage(). 2. The node for the request is. 3. Your DNS server cannot handle the request Encountered non-authoritative node. 4. Your DNS server sends a dns_ProcessMessage to the root servers. 5. The root servers send a response packet, Processing response packet, to your DNS server. This response indicates that your DNS server is the SOA for example.org. 6. Your DNS server ignores the response packet. 7. Event ID 7062 logs on your DNS server. Event ID 7062 logs even when zone transfers are (http:/ Event ID 7062 will log even if zone transfers are disabled if the Notify option has been configured to notify a DNS server or servers that are listed on the Name Servers tab. By default, a Windows 2000-based primary DNS server that has multiple zones is configured to notify the servers that are listed on the Name Servers tab. Note Disabling zone transfers does not disable the Notify option. If the Notify option is set to notify a DNS server or servers that are listed on the Name Servers tab, it will continue to do this. To disable zone transfers, follow these steps: 1. Click Start, click Run, type regedit, and then click OK. 2. Locate and then click the following subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNS\Zones\ZoneName 3. In the right pane, right-click NotifyLevel, and then click Modify. 4. Type 0, and then click OK. 5. Quit Registry Editor. REFERENCES For more information, see the following Microsoft Knowledge Base article: 218814 (http:/ APPLIES TO • Microsoft Windows 2000 Server • Microsoft Windows 2000 Advanced Server • Microsoft Windows 2000 Professional Edition • Microsoft Windows NT Server 4.0 Standard Edition
https://community.spiceworks.com/topic/14038-dns-error-packet-addressed-to-itself
CC-MAIN-2016-44
en
refinedweb
XmStringCompare - A compound string function that compares two strings

#include <Xm/Xm.h>

Boolean XmStringCompare(XmString s1, XmString s2);

XmStringCompare compares two compound strings. In general, if two compound strings were created with XmStringCreate using the same (char *) string and the same font list element tag other than XmFONTLIST_DEFAULT_TAG, the strings compare as equal.

s1: Specifies a compound string to be compared with s2
s2: Specifies a compound string to be compared with s1

Returns True if the two compound strings are equivalent.

XmStringCreate(3X), XmStringCreateLocalized(3X)
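A small usage sketch, not part of the reference page itself; it builds two compound strings from the same text and an arbitrary made-up tag, then prints the comparison result. Compile against the Motif libraries (e.g. -lXm -lXt -lX11):

#include <stdio.h>
#include <Xm/Xm.h>

int main(void)
{
    /* Same (char *) string and same non-default tag, so these
       should compare as equal per the description above. */
    XmString a = XmStringCreate("hello", "myTag");
    XmString b = XmStringCreate("hello", "myTag");

    if (XmStringCompare(a, b))
        printf("strings compare as equal\n");
    else
        printf("strings differ\n");

    XmStringFree(a);
    XmStringFree(b);
    return 0;
}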
http://backdrift.org/man/tru64/man3/XmStringCompare.3X.html
CC-MAIN-2016-44
en
refinedweb
could anyone tell me how can I use bcb6 to program the cy7c68013 with cyapi.lib? | Cypress Semiconductor

Summary: 1 Reply, Latest post by andrewsobotka on 01 Feb 2010 12:39 PM PST. Verified Answers: 0

I want to design a data acquisition card, and to use C++ Builder to write the host-side code. I have loaded cyapi.lib into my project. In the code I declare

#include "cyioctl.h"
#include "CyAPI.h"

first, and in the window initialize function I write only one line of code:

CCyUSBDevice *USBDevice = new CCyUSBDevice();

Then on running, the compiler reports the error:

[Linker Error] Unresolved external 'CCyUSBDevice::CCyUSBDevice(void *, _GUID)' referenced from D:\WORK\USB开发板\BCB\UNIT1.OBJ

Who can help me? Many thanks.

It looks like the linker is having problems finding the library file. There's probably a dialog box with a whole bunch of linker options, one of which deals with external libraries. Make sure cyapi.lib is listed and make sure it's in a directory that the linker can find.
http://www.cypress.com/forum/usb-high-speed-peripherals/could-anyone-tell-me-how-can-i-use-bcb6-program-cy7c68013-cyapilib
CC-MAIN-2016-44
en
refinedweb
/PSP In directory sc8-pr-cvs1:/tmp/cvs-serv29753/PSP

Modified Files:
    ServletWriter.py

Log Message:
Modifying PSP/ServletWriter.py to use mkstemp() instead of mktemp(). mkstemp() was added in Python 2.3 for improved security over mktemp(). Making MiscUtils/Funcs.py simply pull the mktemp and mkstemp from Python if the version is 2.3 or greater. Otherwise, we define our own versions of those functions. Note that our mkstemp is not as secure as the one that comes with Python 2.3.

Index: ServletWriter.py
===================================================================
RCS file: /cvsroot/webware/Webware/PSP/ServletWriter.py,v
retrieving revision 1.12
retrieving revision 1.13
diff -C2 -d -r1.12 -r1.13
*** ServletWriter.py	20 Jan 2003 07:17:34 -0000	1.12
--- ServletWriter.py	30 Jan 2003 21:17:58 -0000	1.13
***************
*** 26,30 ****
  from Context import *
! from MiscUtils.Funcs import mktemp
  import string, os, sys, tempfile
--- 26,30 ----
  from Context import *
! from MiscUtils.Funcs import mkstemp
  import string, os, sys, tempfile
***************
*** 39,44 ****
  def __init__(self,ctxt):
  	self._pyfilename = ctxt.getPythonFileName()
! 	self._temp = mktemp('tmp', dir=os.path.dirname(self._pyfilename))
! 	self._filehandle = open(self._temp,'w+')
  	self._tabcnt = 0
  	self._blockcount = 0	# a hack to handle nested blocks of python code
--- 39,44 ----
  def __init__(self,ctxt):
  	self._pyfilename = ctxt.getPythonFileName()
! 	fd, self._temp = mkstemp('tmp', dir=os.path.dirname(self._pyfilename))
! 	self._filehandle = os.fdopen(fd, 'w')
  	self._tabcnt = 0
  	self._blockcount = 0	# a hack to handle nested blocks of python code
https://sourceforge.net/p/webware/mailman/message/4464344/
CC-MAIN-2016-44
en
refinedweb
x86 Disassembly/Print Version The Wikibook of Using C and Assembly Language From Wikibooks: The Free Library Introduction. We are going to look at the way programs are made using assemblers and compilers, and examine the way that assembly code is made from C or C++ source code. Using this knowledge, we will try to reverse the process. By examining common structures, such as data and control structures, we can find patterns that enable us to disassemble and decompile programs quickly. Who Is This Book For? This book is for readers at the undergraduate level with experience programming in x86 Assembly and C or C++. This book is not designed to teach assembly language programming, C or C++ programming, or compiler/assembler theory. What Are The Prerequisites? The reader should have a thorough understanding of x86 Assembly, C Programming, and possibly C++ Programming. This book is intended to increase the reader's understanding of the relationship between x86 machine code, x86 Assembly Language, and the C Programming Language. If you are not too familar with these topics, you may want to reread some of the above-mentioned books before continuing. What is Disassembly? Computer programs are written originally in a human readable code form, such as assembly language or a high-level language. These programs are then compiled into a binary format called machine code. This binary format is not directly readable or understandable by humans. Many programs -- such as malware, proprietary commercial programs, or very old legacy programs -- may not have the source code available to you. Programs frequently perform tasks that need to be duplicated, or need to be made to interact with other programs. Without the source code and without adequate documentation, these tasks can be difficult to accomplish. This book outlines tools and techniques for attempting to convert the raw machine code of an executable file into equivalent code in assembly language and the high-level languages C and C++. With the high-level code to perform a particular task, several things become possible: - Programs can be ported to new computer platforms, by compiling the source code in a different environment. - The algorithm used by a program can be determined. This allows other programs to make use of the same algorithm, or for updated versions of a program to be rewritten without needing to track down old copies of the source code. - Security holes and vulnerabilities can be identified and patched by users without needing access to the original source code. - New interfaces can be implemented for old programs. New components can be built on top of old components to speed development time and reduce the need to rewrite large volumes of code. - We can figure out what a piece of malware does. We hope this leads us to figuring out how to block its harmful effects. Unfortunately, some malware writers use self-modifying code techniques (polymorphic camouflage, XOR encryption, scrambling)[1], apparently to make it difficult to even detect that malware, much less disassemble it. Disassembling code has a large number of practical uses. One of the positive side effects of it is that the reader will gain a better understanding of the relation between machine code, assembly language, and high-level languages. Having a good knowledge of these topics will help programmers to produce code that is more efficient and more secure. Tools Assemblers and Compilers Assemblers. MASMASM. 
NASM

NASM, the Netwide Assembler, is a free, portable assembler for the x86 architecture that uses Intel syntax.

FASM

FASM, the "Flat Assembler", is an open source assembler that supports the x86 and IA-64 Intel architectures.

(x86) AT&T Syntax Assemblers

AT&T syntax for x86 microprocessor assembly code is not as common as Intel syntax, but the GNU Assembler (GAS) uses it, and it is the de facto assembly standard on Unix and Unix-like operating systems.

GAS

The GNU Assembler (GAS) is the assembler used by the GNU project and the back end of GCC; it uses AT&T syntax.

HLA

HLA ("High Level Assembly") is an assembler front end that accepts high-level control constructs and translates them into x86 assembly.

Compilers

A compiler translates a high-level language, such as C or C++, into machine code.

Metrowerks CodeWarrior

This compiler is commonly used for classic MacOS and for embedded systems. If you try to reverse-engineer a piece of consumer electronics, you may encounter code generated by Metrowerks CodeWarrior.

Green Hills Software Compiler

This compiler is commonly used for embedded systems. If you try to reverse-engineer a piece of consumer electronics, you may encounter code generated by Green Hills C/C++.

Disassemblers and Decompilers

What is a Disassembler?

A disassembler translates a compiled binary back from machine code into assembly-language mnemonics. Several are worth mentioning:

- One widely used commercial disassembler is used to analyse and understand native x86 and x64 Windows software. It provides interactive code and data views.
- Binary Ninja - Binary Ninja is a commercial, cross-platform (Linux, OS X, Windows) reverse engineering platform that aims to offer a similar feature set to IDA at a much cheaper price point. It is currently in a semi-private beta (anyone requesting access is allowed on the beta) and a precursor written in Python is open source. Currently advertised pricing is $99 for student/non-commercial use, and $399 for commercial use.

As we have alluded to before, there are a number of issues and difficulties associated with the disassembly process. The two most important difficulties are the division between code and data, and the loss of text information.

Separating Code from Data

Since code and data are stored in the same memory and look identical as raw bytes, a disassembler can easily misinterpret one as the other, making it even harder to determine what is going on. Another challenge is posed by modern optimising compilers; they inline small subroutines, then combine instructions over call and return boundaries. This loses valuable information about the way the program is structured.

Decompilers

Decompilers go one step further and attempt to reconstruct high-level source code, such as C, from machine code [2][3]. Some have boasted pretty good results in the past.

- snowman - Snowman is an open source native code to C/C++ decompiler.

A General view of Disassembling 8 bit CPU code

On 8-bit CPUs, calculated jumps are often implemented by pushing a calculated "return" address to the stack, then jumping to that address using the "return" instruction. For example, the RTS Trick uses this technique to implement jump tables (w:branch table). Some 8-bit routines also read their parameters from the bytes placed directly after the call instruction, locating them through the return address.

- [4] has some tips on reverse engineering programs in JavaScript, Flash Actionscript (SWF), Java, etc.
- the Open Source Institute occasionally has reverse engineering challenges among its other brainteasers.[5]
- The Program Transformation wiki has a Reverse engineering and Re-engineering Roadmap, and discusses disassemblers, decompilers, and tools for translating programs from one high-level language to another high-level language.
- Other disassemblers with multi-platform support

Analysis Tools

Resource Monitors

- SysInternals Freeware - This page has a large number of excellent utilities, many of which are very useful to security experts, network administrators, and (most importantly to us) reversers. Specifically, check out Process Monitor, FileMon, RegMon, TCPView, and Process Explorer.

API Monitors

- API monitors record the API calls a running program makes, along with their arguments and return values.

Platforms

Microsoft Windows

The Windows operating system is a popular reverse engineering target.

Windows Versions

Windows operating systems can be easily divided into 2 categories: Win9x and WinNT. The Win9x line includes Windows 95, 98, and ME. The WinNT line includes Windows NT 3.x and 4.0, Windows 2000 (NT 5.0), Windows XP (NT 5.1), Windows Server 2003 (NT 5.2), Windows Vista (NT 6.0), Windows 7 (NT 6.1), Windows 8 (NT 6.2), Windows 8.1 (NT 6.3), and Windows 10 (NT 10.0).
The Microsoft XBOX and XBOX 360 also run a variant of NT, forked from Windows 2000. Most future Microsoft operating system products are based on NT in some shape or form.

Native API

The functions of the NT Native API are prefixed with "Nt" or "Zw"; it is rumored that the "Zw" prefix was chosen due to its having no significance at all. In actual implementation, the system call stubs merely load two registers with values required to describe a native API call, and then execute a software interrupt (or the sysenter instruction).

The Win32 API also provides functions to load, manipulate and retrieve data from DLLs and other module resources.

User Mode Versus Kernel Mode

Differences

Windows CE/Mobile, and other versions

Windows CE is the Microsoft offering on small devices. It largely uses the same Win32 API as the desktop systems, although it has a slightly different architecture. Some examples in this book may consider WinCE.

Windows Executable Files

MS-DOS COM Files

COM files are raw binary executables with no header; DOS loads the entire file at offset 0x100 of a segment and begins execution at the first byte.

The PE portable executable file format includes a number of informational headers, and is arranged in the following format:

The basic format of a Microsoft PE file

MS-DOS header

Every PE file begins with an MS-DOS header:

struct DOS_Header
{
 char signature[2] = "MZ";
 short lastsize;
 short nblocks;
 short nreloc;
 short hdrsize;
 short minalloc;
 short maxalloc;
 void *ss;
 void *sp;
 short checksum;
 void *ip;
 void *cs;
 short relocpos;
 short noverlay;
 short reserved1[4];
 short oem_id;
 short oem_info;
 short reserved2[10];
 long e_lfanew;
}

The last field, e_lfanew, gives the file offset of the PE header itself. The "PE Optional Header" is not "optional" per se, because it is required in Executable files, but not in COFF object files.

PE Optional Header presented as a C data structure:

struct PEOptHeader
{
 /* ... */
}

Common sections include:

- .text/CODE/TEXT - Contains executable code (code sections)
- .testbss/TEXTBSS - Present if Incremental Linking is enabled
- .data/.idata/DATA/IDATA - Contains initialised data
- .bss/BSS - Contains uninitialised data

Section Flags

What is linking?

When a function is exported by name, the loader searches the export table's AddressOfNames array for the name, and takes the matching ordinal from the AddressOfNameOrdinals array. This ordinal is then used to get an index to a value in AddressOfFunctions.

Forwarding

Resource structures

Alternate Bound Import Structure

Windows DLL Files
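Before leaving the PE format, here is a minimal sketch tying the headers above together (the function name is illustrative; it assumes a little-endian host and omits most error handling). It checks the DOS "MZ" signature, follows e_lfanew, and verifies the "PE\0\0" signature:

 #include <stdio.h>
 #include <string.h>

 /* Returns 1 if the file begins with a DOS header whose e_lfanew
    field points at a valid PE signature, 0 otherwise. */
 int LooksLikePE(const char *path)
 {
     unsigned char sig[4];
     unsigned int e_lfanew = 0;
     FILE *f = fopen(path, "rb");
     if (!f)
         return 0;
     if (fread(sig, 1, 2, f) != 2 || memcmp(sig, "MZ", 2) != 0) {
         fclose(f);
         return 0;
     }
     fseek(f, 0x3C, SEEK_SET);           /* file offset of e_lfanew     */
     fread(&e_lfanew, 4, 1, f);
     fseek(f, (long)e_lfanew, SEEK_SET);
     fread(sig, 1, 4, f);                /* should be 'P','E','\0','\0' */
     fclose(f);
     return memcmp(sig, "PE\0\0", 4) == 0;
 }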
Linux

The Print Version page of the X86 Disassembly Wikibook is a stub. You can help by expanding this section.

The GNU/Linux operating system is open source, but at the same time there is so much that constitutes "GNU/Linux" that it can be difficult to stay on top of all aspects of the system. Here we will attempt to boil down some of the most important concepts of the GNU/Linux Operating System, especially from a reverser's standpoint.

System Architecture

The concept of "GNU/Linux" is mostly a collection of a large number of software components that are based off the GNU tools and the Linux kernel. GNU/Linux is itself broken into a number of variants called "distros" which share some similarities, but may also have distinct peculiarities. In a general sense, all GNU/Linux distros are based on a variant of the Linux kernel. However, since each user may edit and recompile their own kernel at will, and since some distros may make certain edits to their kernels, it is hard to proclaim any one version of any one kernel as "the standard". Linux kernels are generally based off the philosophy that system configuration details should be stored in aptly-named, human-readable (and therefore human-editable) configuration files. The Linux kernel implements much of the core API, but certainly not all of it. Much API code is stored in external modules (although users have the option of compiling all these modules together into a "Monolithic Kernel"). On top of the kernel generally runs one or more shells. Bash is one of the more popular shells, but many users prefer other shells, especially for different tasks. Beyond the shell, Linux distros frequently offer a GUI (although many distros do not have a GUI at all, usually for performance reasons). Since each GUI often supplies its own underlying framework and API, certain graphical applications may run on only one GUI. Some applications may need to be recompiled (and a few completely rewritten) to run on another GUI.

Configuration Files

Shells

Here are some popular shells:

- Bash - An acronym for "Bourne Again SHell."
- Bourne - A precursor to Bash.
- Csh - C Shell
- Ksh - Korn Shell
- TCsh - A Terminal oriented Csh.
- Zsh - Z Shell

GUIs

Some of the more-popular GUIs:

- KDE - K Desktop Environment
- GNOME - GNU Network Object Modeling Environment

Debuggers

- gdb - The GNU Debugger. It comes pre-installed on most Linux distributions and is primarily used to debug ELF executables. manpage
- edb - A fully featured plugin-based debugger inspired by the famous OllyDbg. Project page

File Analyzers

- strings - Finds printable strings in a file. When, for example, a password is stored in the binary itself (defined statically in the source), the string can then be extracted from the binary without ever needing to execute it. manpage
- file - Determines a file type, useful for determining whether an executable has been stripped and whether it's been dynamically (or statically) linked. manpage
- objdump - Disassembles object files, executables and libraries. Can list internal file structure and disassemble specific sections. Supports both Intel and AT&T syntax.
- nm - Lists symbols from executable files. Doesn't work on stripped binaries. Used mostly on debugging versions of executables.

Linux Executable Files

The Print Version page of the X86 Disassembly Wikibook is a stub. You can help by expanding this section.

ELF Files

Relocatable ELF files are created by compilers. They need to be linked before running. Those files are often found in .a archives, with a .o extension.

a.out Files

File Format

Code Patterns

The Stack

The following lines of ASM code are basically equivalent:

sub esp, 4
mov [esp], eax

and

push eax

but the single command actually performs much faster than the alternative. It can be visualized that the stack grows from right to left, and esp decreases as the stack grows in size.

ESP In Action

Functions and Stack Frames

Standard Entry Sequence

A function that follows the standard entry sequence begins like this:

_MyFunction:
push ebp
mov ebp, esp
sub esp, X ; where X is the size, in bytes, of the local variables

For example, after a call such as MyFunction2(10, 5, 2), the stack has the following layout:

: :
| 2 | [ebp + 16] (3rd function argument)
| 5 | [ebp + 12] (2nd argument)
| 10 | [ebp + 8] (1st argument)
| RA | [ebp + 4] (return address)
| FP | [ebp] (old ebp value)
| | [ebp - 4] (1st local variable)
: :
: :
| | [ebp - X] (esp - the current stack pointer. The use of push / pop is valid now)

The stack pointer value may change during the execution of the current function. In particular this happens when:

- parameters are passed to another function;
- the pseudo-function "alloca()" is used.
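A short, hedged C sketch of the second case (the function and variable names are illustrative; alloca.h is the glibc-style header): after alloca() moves esp, locals are still addressed at fixed negative offsets from ebp, which is one reason the frame pointer is kept.

 #include <alloca.h>

 int Sum2(int a, int b)
 {
     int local = a + b;        /* stored at a fixed [ebp - X] offset   */
     char *buf = alloca(64);   /* esp drops by at least 64 bytes here  */
     buf[0] = 0;               /* buf lives between esp and the locals */
     return local;             /* still reachable as [ebp - X]         */
 }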
[FIXME: When parameters are passed into another function the esp changing is not an issue. When that function returns the esp will be back to its old value. So why does ebp help there? This needs better explanation. (The real explanation is here; ESP is not really needed, as the following shows:)]

_MyFunction3:
push ebp
mov ebp, esp
sub esp, 12 ; sizeof(a) + sizeof(b) + sizeof(c)
;x = [ebp + 8], y = [ebp + 12], z = [ebp + 16]
;a = [ebp - 4] = [esp + 8], b = [ebp - 8] = [esp + 4], c = [ebp - 12] = [esp]
mov esp, ebp
pop ebp
ret 12 ; sizeof(x) + sizeof(y) + sizeof(z)

Non-Standard Stack Frames

Frequently, reversers will come across a subroutine that doesn't set up a standard stack frame. Keep this in mind when looking at a subroutine that does not start with a standard sequence.

Local Static Variables

Local static variables cannot be created on the stack, since the value of the variable is preserved across function calls. We'll discuss local static variables and other types of variables in a later chapter.

Functions and Stack Frame Examples

Example: Number of Parameters

Given the following disassembled function (in MASM syntax), how many 4-byte parameters does this function receive? How many variables are created on the stack? What does this function do?

_Question1:
push ebp
mov ebp, esp
sub esp, 4
mov eax, [ebp + 8]
mov ecx, 2
mul ecx
mov [esp + 0], eax
mov eax, [ebp + 12]
mov edx, [esp + 0]
add eax, edx
mov esp, ebp
pop ebp
ret

The function above takes 2 4-byte parameters, accessed by offsets +8 and +12 from ebp. The function also has 1 variable created on the stack, accessed by offset +0 from esp. The function is nearly identical to this C code:

int Question1(int x, int y)
{
 int z;
 z = x * 2;
 return y + z;
}

Example: Standard Entry Sequences

Does the following function follow the Standard Entry and Exit Sequences? If not, where does it differ?

_Question2:
call _SubQuestion2
mov ecx, 2
mul ecx
ret

The function does not follow the standard entry sequence, because it doesn't set up a proper stack frame with ebp and esp. The function basically performs the following C instructions:

int Question2()
{
 return SubQuestion2() * 2;
}

Although an optimizing compiler has chosen to take a few shortcuts.

Calling Conventions

There are three standard calling conventions typically used with the C language: STDCALL, CDECL, and FASTCALL. In addition, there is another calling convention typically used with C++: THISCALL. There are other calling conventions as well, including PASCAL and FORTRAN conventions, among others. We will not consider those conventions in this book.

Notes on Terminology

THISCALL

C++ requires that non-static methods of a class be called by an instance of the class. Therefore it uses its own standard calling convention to ensure that pointers to the object are passed to the function: THISCALL. For example, a public method declared as int MyClass::MyFunction(int) and called as myclass.MyFunction(2) receives a pointer to the object in addition to its integer argument (under Microsoft compilers, this pointer travels in ecx, as we will see in the examples below). And here is the resultant mangled name:

?MyFunction@MyClass@@QAEHH@Z

Extern "C"

- x86 Disassembly/Calling Convention Examples
- Embedded Systems/Mixed C and Assembly Programming describes calling conventions on other CPUs.

Calling Convention Examples

Microsoft C Compiler

Here is a simple function in C:

int MyFunction(int x, int y)
{
 return (x * 2) + (y * 3);
}

Using cl.exe, we are going to generate 3 separate listings for MyFunction, one with CDECL, one with FASTCALL, and one with STDCALL calling conventions.
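As a hedged aside (these are MSVC-style keywords, and the function names are illustrative), the convention can also be requested per-function in the source rather than for the whole compilation:

 /* The convention is part of each function's type, so the caller and
    the callee must agree on it. */
 int __cdecl    AddC(int x, int y) { return x + y; } /* caller cleans the stack   */
 int __stdcall  AddS(int x, int y) { return x + y; } /* callee cleans via "ret 8" */
 int __fastcall AddF(int x, int y) { return x + y; } /* x and y arrive in ecx,edx */

When no keyword is given, the compiler's default applies, which is what the command-line switches below control.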
On the commandline, there are several switches that you can use to force the compiler to change the default:

/Gd: The default calling convention is CDECL
/Gr: The default calling convention is FASTCALL
/Gz: The default calling convention is STDCALL

Using these commandline options, here are the listings:

CDECL

int MyFunction(int x, int y)
{
 return (x * 2) + (y * 3);
}

becomes:

PUBLIC _MyFunction
_TEXT SEGMENT
_x$ = 8 ; size = 4
_y$ = 12 ; size = 4
_MyFunction PROC NEAR
; Line 4
 push ebp
 mov ebp, esp
; Line 5
 mov eax, _y$[ebp]
 imul eax, 3
 mov ecx, _x$[ebp]
 lea eax, [eax+ecx*2]
; Line 6
 pop ebp
 ret 0
_MyFunction ENDP
_TEXT ENDS
END

On entry of a function, ESP points to the return address pushed on the stack by the call instruction (that is, the previous contents of EIP). Any argument in the stack at a higher address than the entry ESP was pushed by the caller before the call was made; in this example, the first argument is at offset +4 from the entry ESP (EIP is 4 bytes wide), plus 4 more bytes once EBP is pushed on the stack. Thus, at line 5, ESP points to the saved frame pointer EBP, and arguments are located at addresses ESP+8 (x) and ESP+12 (y). For CDECL, the caller pushes arguments onto the stack in right-to-left order. Because "ret 0" is used, it must be the caller who cleans up the stack. As a point of interest, notice how lea is used in this function to simultaneously perform the multiplication (ecx * 2), and the addition of that quantity to eax. Unintuitive instructions like this will be explored further in the chapter on unintuitive instructions.

FASTCALL

int MyFunction(int x, int y)
{
 return (x * 2) + (y * 3);
}

becomes:

PUBLIC @MyFunction@8
_TEXT SEGMENT
_y$ = -8 ; size = 4
_x$ = -4 ; size = 4
@MyFunction@8 PROC NEAR
; _x$ = ecx
; _y$ = edx
; Line 4
 push ebp
 mov ebp, esp
 sub esp, 8
 mov _y$[ebp], edx
 mov _x$[ebp], ecx
; Line 5
 mov eax, _y$[ebp]
 imul eax, 3
 mov ecx, _x$[ebp]
 lea eax, [eax+ecx*2]
; Line 6
 mov esp, ebp
 pop ebp
 ret 0
@MyFunction@8 ENDP
_TEXT ENDS
END

This function was compiled with optimizations turned off. Here we see the arguments are first saved to the stack and then fetched from the stack, rather than used directly. This is because the compiler wants a consistent way to access all arguments via the stack; more than one compiler behaves this way. No argument is accessed at a positive offset from the entry SP, which tells us the caller did not push them, and thus the function can use "ret 0". Let's do further investigation:

int FastTest(int x, int y, int z, int a, int b, int c)
{
 return x * y * z * a * b * c;
}

and the corresponding listing:

PUBLIC @FastTest@24
_TEXT SEGMENT
_y$ = -8 ; size = 4
_x$ = -4 ; size = 4
_z$ = 8 ; size = 4
_a$ = 12 ; size = 4
_b$ = 16 ; size = 4
_c$ = 20 ; size = 4
@FastTest@24 PROC NEAR
; _x$ = ecx
; _y$ = edx
; Line 2
 push ebp
 mov ebp, esp
 sub esp, 8
 mov _y$[ebp], edx
 mov _x$[ebp], ecx
; ...
 mov esp, ebp
 pop ebp
 ret 16 ; 00000010H

Now we have 6 arguments: four are pushed by the caller from right to left, and the first two (x and y) are passed in ecx/edx and handled the same way as in the previous example. Stack cleanup is done by "ret 16", which corresponds to the 4 arguments pushed before the call was executed. For FASTCALL, the compiler will try to pass arguments in registers; if there are not enough registers, the caller pushes the rest onto the stack, still in right-to-left order. Stack cleanup is done by the callee. It is called FASTCALL because if arguments can be passed in registers (for a 64-bit CPU the maximum number is 6), no stack push/clean-up is needed. The name-decoration scheme of the function is @MyFunction@n, where n is the stack size needed for all the arguments.
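A trivial worked check of that decoration rule (plain C; the program is illustrative): six 4-byte integer arguments give n = 24, matching the @FastTest@24 label in the listing above.

 #include <stdio.h>

 int main(void)
 {
     int arg_count = 6;               /* FastTest takes six ints */
     int bytes = arg_count * 4;       /* each int is 4 bytes     */
     printf("@FastTest@%d\n", bytes); /* prints "@FastTest@24"   */
     return 0;
 }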
STDCALL

int MyFunction(int x, int y)
{
 return (x * 2) + (y * 3);
}

becomes:

PUBLIC _MyFunction@8
_TEXT SEGMENT
_x$ = 8 ; size = 4
_y$ = 12 ; size = 4
_MyFunction@8 PROC NEAR
; Line 4
 push ebp
 mov ebp, esp
; Line 5
 mov eax, _y$[ebp]
 imul eax, 3
 mov ecx, _x$[ebp]
 lea eax, [eax+ecx*2]
; Line 6
 pop ebp
 ret 8
_MyFunction@8 ENDP
_TEXT ENDS
END

The STDCALL listing has only one difference from the CDECL listing: it uses "ret 8" to clean up its own stack. Let's do an example with more parameters:

int STDCALLTest(int x, int y, int z, int a, int b, int c)
{
 return x * y * z * a * b * c;
}

Let's take a look at how this function gets translated into assembly by cl.exe:

PUBLIC _STDCALLTest@24
_TEXT SEGMENT
_x$ = 8 ; size = 4
_y$ = 12 ; size = 4
_z$ = 16 ; size = 4
_a$ = 20 ; size = 4
_b$ = 24 ; size = 4
_c$ = 28 ; size = 4
_STDCALLTest@24 PROC NEAR
; Line 2
 push ebp
 mov ebp, esp
; ...
 pop ebp
 ret 24 ; 00000018H
_STDCALLTest@24 ENDP
_TEXT ENDS
END

Yes, the only difference between STDCALL and CDECL is that the former does stack cleanup in the callee, the latter in the caller. This saves a little bit on x86 due to its "ret n" instruction.

GNU C Compiler

We will be using 2 example C functions to demonstrate how GCC implements calling conventions:

int MyFunction1(int x, int y)
{
 return (x * 2) + (y * 3);
}

and

int MyFunction2(int x, int y, int z, int a, int b, int c)
{
 return x * y * (z + 1) * (a + 2) * (b + 3) * (c + 4);
}

GCC does not have commandline arguments to force the default calling convention to change from CDECL (for C), so they will be manually defined in the text with the directives: __cdecl, __fastcall, and __stdcall.

CDECL

The first function (MyFunction1) provides the following assembly listing:

_MyFunction1:
 pushl %ebp
 movl %esp, %ebp
 movl 8(%ebp), %eax
 leal (%eax,%eax), %ecx
 movl 12(%ebp), %edx
 movl %edx, %eax
 addl %eax, %eax
 addl %edx, %eax
 leal (%eax,%ecx), %eax
 popl %ebp
 ret

First of all, we can see the name-decoration is the same as in cl.exe. We can also see that the ret instruction doesn't have an argument, so the calling function is cleaning the stack. However, since GCC doesn't provide us with the variable names in the listing, we have to deduce which parameters are which. After the stack frame is set up, the first instruction of the function is "movl 8(%ebp), %eax". Once we remember (or learn for the first time) that GAS instructions have the general form:

instruction src, dest

we realize that the value at offset +8 from ebp (the last parameter pushed on the stack) is moved into eax. The leal instruction is a little more difficult to decipher, especially if we don't have any experience with GAS instructions. The form "leal (reg1,reg2), dest" adds the values in the parentheses together, and stores the value in dest. Translated into Intel syntax, we get the instruction:

lea ecx, [eax + eax]

Which is clearly the same as a multiplication by 2. The first value accessed must then have been the last value passed, which would seem to indicate that values are passed right-to-left here. To prove this, we will look at the next section of the listing:

movl 12(%ebp), %edx
movl %edx, %eax
addl %eax, %eax
addl %edx, %eax
leal (%eax,%ecx), %eax

The value at offset +12 from ebp is moved into edx. edx is then moved into eax. eax is then added to itself (eax * 2), and then is added back to edx (edx + eax). Remember though that eax = 2 * edx, so the result is edx * 3. This then is clearly the y parameter, which is furthest on the stack, and was therefore the first pushed. CDECL then on GCC is implemented by passing arguments on the stack in right-to-left order, same as cl.exe.
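One practical consequence of caller clean-up is worth a hedged sketch here (standard C; the function name is illustrative): only a caller-cleans convention like CDECL can support functions with a variable number of arguments, since a callee-cleans "ret n" would need a fixed n at compile time.

 #include <stdarg.h>

 /* The caller pushed "count" integers and will pop them again itself,
    so the callee never needs to know the total stack size. */
 int SumAll(int count, ...)
 {
     va_list ap;
     int total = 0;
     va_start(ap, count);
     while (count-- > 0)
         total += va_arg(ap, int);
     va_end(ap);
     return total;
 }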
FASTCALL

.globl @MyFunction1@8
 .def @MyFunction1@8; .scl 2; .type 32; .endef
@MyFunction1@8:
 pushl %ebp
 movl %esp, %ebp
 subl $8, %esp
 movl %ecx, -4(%ebp)
 movl %edx, -8(%ebp)
 movl -4(%ebp), %eax
 leal (%eax,%eax), %ecx
 movl -8(%ebp), %edx
 movl %edx, %eax
 addl %eax, %eax
 addl %edx, %eax
 leal (%eax,%ecx), %eax
 leave
 ret

Notice first that the same name decoration is used as in cl.exe. The astute observer will already have realized that GCC uses the same trick as cl.exe, of moving the fastcall arguments from their registers (ecx and edx again) onto a negative offset on the stack. Again, optimizations are turned off. ecx is moved into the first position (-4) and edx is moved into the second position (-8). Like the CDECL example above, the value at -4 is doubled, and the value at -8 is tripled. Therefore, -4 (ecx) is x, and -8 (edx) is y. It would seem from this listing then that values are passed left-to-right, although we will need to take a look at the larger MyFunction2 example:

.globl @MyFunction2@24
 .def @MyFunction2@24; .scl 2; .type 32; .endef
@MyFunction2@24:
 pushl %ebp
 movl %esp, %ebp
 subl $8, %esp
 movl %ecx, -4(%ebp)
 movl %edx, -8(%ebp)
 movl -4(%ebp), %eax
 imull -8(%ebp), %eax
 movl 8(%ebp), %edx
 incl %edx
 imull %edx, %eax
 movl 12(%ebp), %edx
 addl $2, %edx
 imull %edx, %eax
 movl 16(%ebp), %edx
 addl $3, %edx
 imull %edx, %eax
 movl 20(%ebp), %edx
 addl $4, %edx
 imull %edx, %eax
 leave
 ret $16

By following the fact that in MyFunction2, successive parameters are added to increasing constants, we can deduce the positions of each parameter. -4 is still x, and -8 is still y. +8 gets incremented by 1 (z), +12 gets increased by 2 (a), +16 gets increased by 3 (b), and +20 gets increased by 4 (c). Let's list these values then:

z = [ebp + 8]
a = [ebp + 12]
b = [ebp + 16]
c = [ebp + 20]

c is the furthest down, and therefore was the first pushed. z is the highest to the top, and was therefore the last pushed. Arguments are therefore pushed in right-to-left order, just like cl.exe.

STDCALL

Let's compare then the implementation of MyFunction1 in GCC:

.globl _MyFunction1@8
 .def _MyFunction1@8; .scl 2; .type 32; .endef
_MyFunction1@8:
 pushl %ebp
 movl %esp, %ebp
 movl 8(%ebp), %eax
 leal (%eax,%eax), %ecx
 movl 12(%ebp), %edx
 movl %edx, %eax
 addl %eax, %eax
 addl %edx, %eax
 leal (%eax,%ecx), %eax
 popl %ebp
 ret $8

The name decoration is the same as in cl.exe, so STDCALL functions (and CDECL and FASTCALL for that matter) can be assembled with either compiler, and linked with either linker, it seems. The stack frame is set up, then the value at [ebp + 8] is doubled. After that, the value at [ebp + 12] is tripled. Therefore, +8 is x, and +12 is y. Again, these values are pushed in right-to-left order. This function also cleans its own stack with the "ret 8" instruction.

Looking at a bigger example:

.globl _MyFunction2@24
 .def _MyFunction2@24; .scl 2; .type 32; .endef
_MyFunction2@24:
 pushl %ebp
 movl %esp, %ebp
 movl 8(%ebp), %eax
 imull 12(%ebp), %eax
 movl 16(%ebp), %edx
 incl %edx
 imull %edx, %eax
 movl 20(%ebp), %edx
 addl $2, %edx
 imull %edx, %eax
 movl 24(%ebp), %edx
 addl $3, %edx
 imull %edx, %eax
 movl 28(%ebp), %edx
 addl $4, %edx
 imull %edx, %eax
 popl %ebp
 ret $24

We can see here that values at +8 and +12 from ebp are still x and y, respectively. The value at +16 is incremented by 1, the value at +20 is incremented by 2, etc., all the way to the value at +28. We can therefore create the following table:

x = [ebp + 8]
y = [ebp + 12]
z = [ebp + 16]
a = [ebp + 20]
b = [ebp + 24]
c = [ebp + 28]

With c being pushed first, and x being pushed last. Therefore, these parameters are also pushed in right-to-left order. This function then also cleans 24 bytes off the stack with the "ret 24" instruction.
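The offset tables above all follow one fixed rule, which this small, hedged C sketch reproduces (the program and its names are illustrative): with the saved ebp at [ebp] and the return address at [ebp + 4], the i-th 4-byte stack parameter sits at [ebp + 8 + 4*i].

 #include <stdio.h>

 int main(void)
 {
     const char *params[] = { "x", "y", "z", "a", "b", "c" };
     int i;
     for (i = 0; i < 6; i++)
         printf("%s = [ebp + %d]\n", params[i], 8 + 4 * i);
     return 0;
 }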
Example: C Calling Conventions

Identify the calling convention of the following C function:

int MyFunction(int a, int b)
{
 return a + b;
}

The function is written in C, and has no other specifiers, so it is CDECL by default.

Example: Named Assembly Function

Identify the calling convention of the function MyFunction:

:_MyFunction@12
push ebp
mov ebp, esp
...
pop ebp
ret 12

The function includes the decorated name of an STDCALL function, and cleans up its own stack. It is therefore an STDCALL function.

Example: Unnamed Assembly Function

This code snippet is the entire body of an unnamed assembly function. Identify the calling convention of this function.

push ebp
mov ebp, esp
add eax, edx
pop ebp
ret

The function sets up a stack frame, so we know the compiler hasn't done anything "funny" to it. It accesses registers which aren't initialized yet, in the edx and eax registers. It is therefore a FASTCALL function.

Example: Another Unnamed Assembly Function

push ebp
mov ebp, esp
mov eax, [ebp + 8]
pop ebp
ret 16

The function has a standard stack frame, and the ret instruction has a parameter to clean its own stack. Also, it accesses a parameter from the stack. It is therefore an STDCALL function.

Example: Name Mangling

What can we tell about the following function call?

mov ecx, x
push eax
mov eax, ss:[ebp - 4]
push eax
mov al, ss:[ebp - 3]
call @__Load?$Container__XXXY_?Fcii

Two things should get our attention immediately. The first is that before the function call, a value is stored into ecx. Also, the function name itself is heavily mangled. This example must use the C++ THISCALL convention. Inside the mangled name of the function, we can pick out two English words, "Load" and "Container". Without knowing the specifics of this name-mangling scheme, it is not possible to determine which word is the function name, and which word is the class name.

We can pick out two 32-bit variables being passed to the function, and a single 8-bit variable. The first is located in eax, the second is originally located on the stack at offset -4 from ebp, and the third is located at ebp offset -3. In C++, these would likely correspond to two int variables, and a single char variable. Notice at the end of the mangled function name are three lower-case characters "cii". We can't know for certain, but it appears these three letters correspond to the three parameters (char, int, int). We do not know from this whether the function returns a value or not, so we will assume the function returns void.

Assuming that "Load" is the function name and "Container" is the class name (it could just as easily be the other way around), here is our function definition:

class Container
{
 void Load(char, int, int);
};

Branches

Branching

Branch Examples

Example: Number of Parameters

What parameters does this function take? What calling convention does it use? What kind of value does it return? Write the entire C prototype of this function. Assume all values are unsigned values.

This function accesses parameters on the stack at [ebp + 8] and [ebp + 12]. Both of these values are loaded into ecx, and we can therefore assume they are 4-byte values. This function doesn't clean its own stack, and the values aren't passed in registers, so we know the function is CDECL. The return value in eax is a 4-byte value, and we are told to assume that all the values are unsigned.
Putting all this together, we can construct the function prototype:

unsigned int CDECL MyFunction(unsigned int param1, unsigned int param2);

Example: Identify Branch Structures

How many separate branch structures are in this function? What types are they? Can you give more descriptive names to _Label_1, _Label_2, and _Label_3, based on the structures of these branches? Stripping away the entry and exit sequences, here is the code we have left:

mov ecx, [ebp + 8]
cmp ecx, 0
jne _Label_1
inc eax
jmp _Label_2
:_Label_1
dec eax
:_Label_2
mov ecx, [ebp + 12]
cmp ecx, 0
jne _Label_3
inc eax
:_Label_3

Looking through, we see 2 cmp statements. The first cmp statement compares ecx to zero. If ecx is not zero, we go to _Label_1, decrement eax, and then fall through to _Label_2. If ecx is zero, we increment eax, and go directly to _Label_2. Writing out some pseudocode, we have the following result for the first section:

if(ecx doesn't equal 0) goto _Label_1
eax++;
goto _Label_2
:_Label_1
eax--;
:_Label_2

Since _Label_2 occurs at the end of this structure, we can rename it to something more descriptive, like "End_of_Branch_1", or "Branch_1_End". The first comparison tests ecx against 0, and then jumps on not-equal. We can reverse the conditional, and say that _Label_1 is an else block:

if(ecx == 0) ;ecx is param1 here
{
 eax++;
}
else
{
 eax--;
}

So we can rename _Label_1 to something else descriptive, such as "Else_1". The rest of the code block, after Branch_1_End (_Label_2), is as follows:

mov ecx, [ebp + 12]
cmp ecx, 0
jne _Label_3
inc eax
:_Label_3

We can see immediately that _Label_3 is the end of this branch structure, so we can immediately call it "Branch_2_End", or something else. Here, we are again comparing ecx to 0, and if it is not equal, we jump to the end of the block. If it is equal to zero, however, we increment eax, and then fall out the bottom of the branch. We can see that there is no else block in this branch structure, so we don't need to invert the condition. We can write an if statement directly:

if(ecx == 0) ;ecx is param2 here
{
 eax++;
}

Example: Convert To C

Write the equivalent C code for this function. Assume all parameters and return values are unsigned values.

push ebp
mov ebp, esp
mov eax, 0
mov ecx, [ebp + 8]
cmp ecx, 0
jne _Label_1
inc eax
jmp _Label_2
:_Label_1
dec eax
:_Label_2
mov ecx, [ebp + 12]
cmp ecx, 0
jne _Label_3
inc eax
:_Label_3
mov esp, ebp
pop ebp
ret

Starting with the C function prototype from answer 1, and the conditional blocks in answer 2, we can put together a pseudo-code function, without variable declarations, or a return value:

unsigned int CDECL MyFunction(unsigned int param1, unsigned int param2)
{
 if(param1 == 0)
 {
  eax++;
 }
 else
 {
  eax--;
 }
 if(param2 == 0)
 {
  eax++;
 }
}

Now, we just need to create a variable to store the value from eax, which we will call "a", and we will declare as a register type:

unsigned int CDECL MyFunction(unsigned int param1, unsigned int param2)
{
 register unsigned int a = 0;
 if(param1 == 0)
 {
  a++;
 }
 else
 {
  a--;
 }
 if(param2 == 0)
 {
  a++;
 }
 return a;
}

Granted, this function isn't a particularly useful function, but at least we know what it does.
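As a hedged bridge between the two views (this version is illustrative, not compiler output), the same function can be written in C with explicit tests and gotos that mirror the assembly one-for-one, using the label names chosen in the answers above:

 unsigned int MyFunction(unsigned int param1, unsigned int param2)
 {
     register unsigned int a = 0;
     if (param1 != 0) goto Else_1;        /* cmp ecx, 0 / jne _Label_1 */
     a++;                                 /* inc eax                   */
     goto Branch_1_End;                   /* jmp _Label_2              */
 Else_1:
     a--;                                 /* dec eax                   */
 Branch_1_End:
     if (param2 != 0) goto Branch_2_End;  /* cmp ecx, 0 / jne _Label_3 */
     a++;                                 /* inc eax                   */
 Branch_2_End:
     return a;
 }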
Loops

C only has Do-While, While, and For Loops, but some other languages may very well implement their own types. Also, a good C programmer could easily "home brew" a new type of loop using a series of good macros, so they bear some consideration:

Do-Until Loop

A common Do-Until Loop will take the following form:

do
{
 //loop body
} until(x);

which essentially becomes the following Do-While loop:

do
{
 //loop body
} while(!x);

Until Loop

Like the Do-Until loop, the standard Until-Loop looks like the following:

until(x)
{
 //loop body
}

which (likewise) gets translated to the following While-Loop:

while(!x)
{
 //loop body
}

Do-Forever Loop

Loop Examples

Example: Identify Purpose

What does this function do? What kinds of parameters does it take, and what kind of results (if any) does it return?

This function loops through an array of 4-byte integer values, pointed to by esi, and adds each entry. It returns the sum in eax. The only parameter (located in [ebp + 8]) is a pointer to an array of integer values. The comparison between ebx and 100 indicates that the input array has 100 entries in it. The pointer offset [esi + ebx * 4] shows that each entry in the array is 4 bytes wide.

Example: Complete C Prototype

What is this function's C prototype? Make sure to include parameters, return values, and calling convention.

Notice how the ret function cleans its parameter off the stack? That means that this function is an STDCALL function. We know that the function takes, as its only parameter, a pointer to an array of integers. We do not know, however, whether the integers are signed or unsigned, because the je command is used for both types of values. We can assume one or the other, and for simplicity, we can assume unsigned values (unsigned and signed values, in this function, will actually work the same way). We also know that the return value is a 4-byte integer value, of the same type as is found in the parameter array. Since the function doesn't have a name, we can just call it "MyFunction", and we can call the parameter "array" because it is an array. From this information, we can determine the following prototype in C:

unsigned int STDCALL MyFunction(unsigned int *array);

Example: Decompile To C Code

Decompile this code into equivalent C source code.

Starting with the function prototype above, and the description of what this function does, we can start to write the C code for this function. We know that this function initializes eax, ebx, and ecx before the loop. However, we can see that ecx is being used as simply an intermediate storage location, receiving successive values from the array, and then being added to eax. We will create two unsigned integer values, a (for eax) and b (for ebx). We will define both a and b with the register qualifier, so that we can instruct the compiler not to create space for them on the stack. For each loop iteration, we are adding the value of the array, at location ebx*4, to the running sum, eax. Converting this to our a and b variables, and using C syntax, we see:

a = a + array[b];

The loop could be either a for loop or a while loop. We see that the loop control variable, b, is initialized to 0 before the loop, and is incremented by 1 each loop iteration. The loop tests b against 100, after it gets incremented, so we know that b never equals 100 inside the loop body. Using these simple facts, we will write the loop in 3 different ways: First, with a while loop.
unsigned int STDCALL MyFunction(unsigned int *array)
{
 register unsigned int b = 0;
 register unsigned int a = 0;
 while(b != 100)
 {
  a = a + array[b];
  b++;
 }
 return a;
}

Or, with a for loop:

unsigned int STDCALL MyFunction(unsigned int *array)
{
 register unsigned int b;
 register unsigned int a = 0;
 for(b = 0; b != 100; b++)
 {
  a = a + array[b];
 }
 return a;
}

And finally, with a do-while loop:

unsigned int STDCALL MyFunction(unsigned int *array)
{
 register unsigned int b = 0;
 register unsigned int a = 0;
 do
 {
  a = a + array[b];
  b++;
 } while(b != 100);
 return a;
}

Data Patterns

Variables

Some points about global variables are worth summarizing: when disassembling, a hard-coded memory address should be considered to be an ordinary global variable unless you can determine from the scope of the variable that it is static or extern.

Constants

Variable Examples

Example: Identify C++ Code

Data Structures

Few programs can work by using simple memory storage; most need to utilize complex data objects, including pointers, arrays, structures, and other complicated types. This chapter will talk about how compilers implement complex data objects, and how the reverser can identify these objects.

Arrays

Arrays are simply a storage scheme for multiple data objects of the same type. Data objects are stored sequentially, often as an offset from a pointer to the beginning of the array. Consider the following C code:

x = array[25];

Which is identical to the following asm code:

mov ebx, $array
mov eax, [ebx + 25] ; assuming 1-byte elements; wider elements scale the offset
mov $x, eax

Now, consider the following example:

int MyFunction1()
{
 int array[20];
 ...

This (roughly) translates into the following asm pseudo-code:

:_MyFunction1
push ebp
mov ebp, esp
sub esp, 80 ;the whole array is created on the stack!!!
lea $array, [esp + 0] ;a pointer to the array is saved in the array variable
...

The entire array is created on the stack, and the pointer to the bottom of the array is stored in the variable "array". An optimizing compiler could ignore the last instruction, and simply refer to the array via a +0 offset from esp (in this example), but we will do things verbosely. Likewise, consider the following example:

void MyFunction2()
{
 char buffer[4];
 ...

This will translate into the following asm pseudo-code:

:_MyFunction2
push ebp
mov ebp, esp
sub esp, 4
lea $buffer, [esp + 0]
...

Which looks harmless enough. But, what if a program inadvertently accesses buffer[4]? What about buffer[5]? What about buffer[8]? This is the makings of a buffer overflow vulnerability, and might be discussed in a later section. However, this section won't talk about security issues, and instead will focus only on data structures.

Spotting an Array on the Stack

To spot an array on the stack, look for large amounts of local storage allocated on the stack ("sub esp, 1000", for example), and look for large portions of that data being accessed by an offset from a different register from esp. For instance:

:_MyFunction3
push ebp
mov ebp, esp
sub esp, 256
lea ebx, [esp + 0x00]
mov [ebx + 0], 0x00

is a good sign of an array being created on the stack. Granted, an optimizing compiler might just want to offset from esp instead, so you will need to be careful.
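For reference, here is a hedged sketch of the kind of C source (the names are illustrative) that produces the stack-array pattern above, along with the scaled-index accesses discussed next:

 void ZeroBuffer(void)
 {
     int local_array[64];      /* "sub esp, 256": 64 * 4 bytes          */
     int i;
     for (i = 0; i < 64; i++)
         local_array[i] = 0;   /* emits "mov [ebx + eax*4], 0"-style    */
                               /* scaled-index stores in a loop         */
 }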
Spotting an Array in Memory

Arrays in memory, such as global arrays, or arrays which have initial data (remember, initialized data is created in the .data section in memory), will be accessed as offsets from a hardcoded address in memory:

:_MyFunction4
push ebp
mov ebp, esp
mov esi, 0x77651004
mov ebx, 0x00000000
mov [esi + ebx], 0x00

It needs to be kept in mind that structures and classes might be accessed in a similar manner, so the reverser needs to remember that all the data objects in an array are of the same type, that they are sequential, and that they will often be handled in a loop of some sort. Also (and this might be the most important part), each element in an array may be accessed by a variable offset from the base. Since most times an array is accessed through a computed index, not through a constant, the compiler will likely use the following to access an element of the array:

mov [ebx + eax], 0x00

If the array holds elements larger than 1 byte (for char), the index will need to be multiplied by the size of the element, yielding code similar to the following:

mov [ebx + eax * 4], 0x11223344 # access to an array of DWORDs, e.g. arr[i] = 0x11223344
...
imul eax, 20 # access to an array of structs, each 20 bytes long
lea edi, [ebx + eax] # e.g. ptr = &arr[i]

This pattern can be used to distinguish between accesses to arrays and accesses to structure data members.

Structures

All C programmers are going to be familiar with the following syntax:

struct MyStruct
{
 int FirstVar;
 double SecondVar;
 unsigned short int ThirdVar;
}

It's called a structure (Pascal programmers may know a similar concept as a "record"). Structures may be very big or very small, and they may contain all sorts of different data. Structures may look very similar to arrays in memory, but a few key points need to be remembered: structures do not need to contain data fields of all the same type, structure fields are often 4-byte aligned (not sequential), and each element in a structure has its own offset. It therefore makes no sense to reference a structure element by a variable offset from the base. Take a look at the following structure definition:

struct MyStruct2
{
 long value1;
 short value2;
 long value3;
}

Assuming the pointer to the base of this structure is loaded into ebx, we can access these members in one of two schemes:

Aligned fields:
value1 = [ebx + 0]
value2 = [ebx + 4]
value3 = [ebx + 8] (offsets +6 and +7 are left unused as padding)

Sequential fields:
value1 = [ebx + 0]
value2 = [ebx + 4]
value3 = [ebx + 6]

The first arrangement is the most common, but it clearly leaves open an entire memory word (2 bytes) at offset +6, which is not used at all. Compilers occasionally allow the programmer to manually specify the offset of each data member, but this isn't always the case. The second example also has the benefit that the reverser can easily identify that each data member in the structure is a different size.

Consider now the following function:

:_MyFunction
push ebp
mov ebp, esp
lea ecx, SS:[ebp + 8]
mov [ecx + 0], 0x0000000A
mov [ecx + 4], ecx
mov [ecx + 8], 0x0000000A
mov esp, ebp
pop ebp

The function clearly takes a pointer to a data structure as its first argument. Also, each data member is the same size (4 bytes), so how can we tell if this is an array or a structure? To answer that question, we need to remember one important distinction between structures and arrays: the elements in an array are all of the same type, the elements in a structure do not need to be the same type. Given that rule, it is clear that one of the elements in this structure is a pointer (it points to the base of the structure itself!)
and the other two fields are loaded with the hex value 0x0A (10 in decimal), which is certainly not a valid pointer on any system I have ever used. We can then partially recreate the structure and the function code below:

struct MyStruct3
{
 long value1;
 void *value2;
 long value3;
}

void MyFunction2(struct MyStruct3 *ptr)
{
 ptr->value1 = 10;
 ptr->value2 = ptr;
 ptr->value3 = 10;
}

As a quick aside note, notice that this function doesn't load anything into eax, and therefore it doesn't return a value.

Advanced Structures

Let's say we have the following situation in a function:

:MyFunction1
push ebp
mov ebp, esp
mov esi, [ebp + 8]
lea ecx, SS:[esi + 8]
...

What is happening here? First, esi is loaded with the value of the function's first parameter (ebp + 8). Then, ecx is loaded with a pointer to the offset +8 from esi. It looks like we have 2 pointers accessing the same data structure! The function in question could easily be one of the following 2 prototypes:

struct MyStruct1
{
 DWORD value1;
 DWORD value2;
 struct MySubStruct1
 {
  ...

struct MyStruct2
{
 DWORD value1;
 DWORD value2;
 DWORD array[LENGTH];
 ...

One pointer offset from another pointer in a structure often means a complex data structure. There are far too many combinations of structures and arrays, however, so this wikibook will not spend too much time on this subject.

Identifying Structs and Arrays

Array elements and structure fields are both accessed as offsets from the array/structure pointer. When disassembling, how do we tell these data structures apart? Here are some pointers:

- Array elements are not meant to be accessed individually. Array elements are typically accessed using a variable offset.
- Arrays are frequently accessed in a loop. Because arrays typically hold a series of similar data items, the best way to access them all is usually a loop. Specifically, for(x = 0; x < length_of_array; x++) style loops are often used to access arrays, although there can be others.
- All the elements in an array have the same data type.
- Struct fields are typically accessed using constant offsets.
- Struct fields are typically not accessed in order, and are also not accessed using loops.
- Struct fields are not typically all the same data type, or the same data width.

Linked Lists and Binary Trees

Two common structures used when programming are linked lists and binary trees. These two structures in turn can be made more complicated in a number of ways. Shown in the images below are examples of a linked list structure and a binary tree structure. Each node in a linked list or a binary tree contains some amount of data, and a pointer (or pointers) to other nodes. Consider the following asm code example:

loop_top:
cmp [ebp + 0], 10
je loop_end
mov ebp, [ebp + 4]
jmp loop_top
loop_end:

At each loop iteration, a data value at [ebp + 0] is compared with the value 10. If the two are equal, the loop is ended. If the two are not equal, however, the pointer in ebp is updated with a pointer at an offset from ebp, and the loop is continued. This is a classic linked-list search technique. This is analogous to the following C code:

struct node
{
 int data;
 struct node *next;
};
struct node *x;
...
while(x->data != 10)
{
 x = x->next;
}

Binary trees are the same, except two different pointers will be used (the right and left branch pointers).
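To make that concrete, here is a hedged sketch of the binary-tree analogue of the linked-list search above (the names are illustrative, and it assumes an ordered tree):

 struct tnode
 {
     int data;
     struct tnode *left;   /* at [node + 4], by analogy with "next" */
     struct tnode *right;  /* at [node + 8]                         */
 };

 struct tnode *FindNode(struct tnode *t, int key)
 {
     /* Follow one of the two branch pointers at each step, just as
        the list loop followed its single "next" pointer. */
     while (t != 0 && t->data != key)
         t = (key < t->data) ? t->left : t->right;
     return t;
 }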
Objects and Classes

The Print Version page of the X86 Disassembly Wikibook is a stub. You can help by expanding this section.

Object-Oriented Programming

Object-Oriented (OO) programming provides for us a new unit of program structure to contend with: the Object. This chapter will look at disassembled classes from C++. This chapter will not deal directly with COM, but it will work to set a lot of the groundwork for future discussions in reversing COM components (Windows users only).

Classes

A basic class that has not inherited anything can be broken into two parts, the variables and the methods. The non-static variables are shoved into a simple data structure while the methods are compiled and called like every other function. When you start adding in inheritance and polymorphism, things get a little more complicated. For the purposes of simplicity, the structure of an object will be described in terms of having no inheritance. At the end, however, inheritance and polymorphism will be covered.

Variables

All static variables defined in a class reside in the static region of memory for the entire duration of the application. Every other variable defined in the class is placed into a data structure known as an object. Typically when the constructor is called, the variables are placed into the object in sequential order; see Figure 1.

A:
class ABC123
{
public:
 int a, b, c;
 ABC123():a(1), b(2), c(3) {};
};

B:
0x00200000 dd 1 ;int a
0x00200004 dd 2 ;int b
0x00200008 dd 3 ;int c

However, the compiler typically needs the variables to be separated into sizes that are multiples of a word (2 bytes) in order to locate them. Not all variables fit this requirement, namely char arrays; some unused bits might be used to pad the variables so they meet this size requirement. This is illustrated in Figure 2.

A:
class ABC123
{
public:
 int a;
 char b[3];
 double c;
 ABC123():a(1),c(3) { strcpy(b,"02"); };
};

B:
0x00200000 dd 1 ;int a ; offset = abc123 + 0*word_size
0x00200004 db '0' ;b[0] = '0' ; offset = abc123 + 2*word_size
0x00200005 db '2' ;b[1] = '2'
0x00200006 db 0 ;b[2] = null
0x00200007 db 0 ;<= UNUSED BYTE
0x00200008 dd 0x00000000 ;double c, lower 32 bits ; offset = abc123 + 4*word_size
0x0020000C dd 0x40080000 ;double c, upper 32 bits

In order for the application to access one of these object variables, an object pointer needs to be offset to find the desired variable. The offset of every variable is known by the compiler and written into the object code wherever it's needed. Figure 3 shows how to offset a pointer to retrieve variables.

;abc123 = pointer to object
mov eax, [abc123] ;eax = &a ;offset = abc123+0*word_size = abc123
mov ebx, [abc123+4] ;ebx = &b ;offset = abc123+2*word_size = abc123+4
mov ecx, [abc123+8] ;ecx = &c ;offset = abc123+4*word_size = abc123+8

Figure 3: This shows how to offset a pointer to retrieve variables. The first line places the address of variable 'a' into eax. The second line places the address of variable 'b' into ebx. And the last line places the address of variable 'c' into ecx.

Methods

At a low level, there is almost no difference between a function and a method. When decompiling, it can sometimes be hard to tell a difference between the two. They both reside in the text memory space, and both are called the same way. An example of how a method is called can be seen in Figure 4.

A:
//method call
abc123->foo(1, 2, 3);

B:
push 3 ; int c
push 2 ; int b
push 1 ; int a
push [ebp-4] ; the address of the object
call 0x00434125 ; call to method

A notable characteristic in a method call is the address of the object being passed in as an argument. This, however, is not always a good indicator.
Figure 5 shows a function with the first argument being an object passed in by reference. The result is a function call that looks identical to a method call.

A:
//function call
foo(abc123, 1, 2, 3);

B:
push 3 ; int c
push 2 ; int b
push 1 ; int a
push [ebp+4] ; the address of the object
call 0x00498372 ; call to function

Inheritance & Polymorphism

Inheritance and polymorphism completely change the structure of a class: the object no longer contains just variables, it also contains pointers to the inherited methods. This is due to the fact that polymorphism requires the address of a method or inner object to be figured out at runtime. Take Figure 6 into consideration. How does the application know to call D::one or C::one? The answer is that the compiler figures out a convention in which to order variables and method pointers inside the object, such that when they're referenced, the offsets are the same for any object that has inherited its methods and variables. The abstract class A acts as a blueprint for the compiler, defining an expected structure for any class that inherits it. Every variable defined in class A and every virtual method defined in A will have the exact same offset for any of its children. Figure 7 declares a possible inheritance scheme as well as its structure in memory. Notice how the offset to C::one is the same as D::one, and the offset to C's copy of A::a is the same as D's copy. In this way, our polymorphic loop can just iterate through the array of pointers and know exactly where to find each method.

A:
class A
{
public:
 int a;
 virtual void one() = 0;
};

class B
{
public:
 int b;
 int c;
 virtual void two() = 0;
};

class C: public A
{
public:
 int d;
 void one();
};

class D: public A, public B
{
public:
 int e;
 void one();
 void two();
};

B:
;Object C
0x00200000 dd 0x00423848 ; address of C::one ;offset = 0*word_size
0x00200004 dd 1 ; C's copy of A::a ;offset = 2*word_size
0x00200008 dd 4 ; C::d ;offset = 4*word_size

;Object D
0x00200100 dd 0x00412348 ; address of D::one ;offset = 0*word_size
0x00200104 dd 1 ; D's copy of A::a ;offset = 2*word_size
0x00200108 dd 0x00431255 ; address of D::two ;offset = 4*word_size
0x0020010C dd 2 ; D's copy of B::b ;offset = 6*word_size
0x00200110 dd 3 ; D's copy of B::c ;offset = 8*word_size
0x00200114 dd 5 ; D::e ;offset = 10*word_size

Classes Vs. Structs

Floating Point Numbers

This page will talk about how floating point numbers are used in assembly language constructs. This page will not talk about new constructs, it will not explain what the FPU instructions do, how floating point numbers are stored or manipulated, or the differences in floating-point data representations. However, this page will demonstrate briefly how floating-point numbers are used in code and data structures that we have already considered. The x86 architecture does not have any registers specifically for floating point numbers, but it does have a special stack for them. The floating point stack is built directly into the processor, and has access speeds similar to those of ordinary registers. Notice that the FPU stack is not the same as the regular system stack.

Calling Conventions

With the addition of the floating-point stack, there is an entirely new dimension for passing parameters and returning values. We will examine our calling conventions here, and see how they are affected by the presence of floating-point numbers.
These are the functions that we will be assembling, using both GCC and cl.exe:

__cdecl double MyFunction1(double x, double y, float z)
{
 return (x + 1.0) * (y + 2.0) * (z + 3.0);
}

__fastcall double MyFunction2(double x, double y, float z)
{
 return (x + 1.0) * (y + 2.0) * (z + 3.0);
}

__stdcall double MyFunction3(double x, double y, float z)
{
 return (x + 1.0) * (y + 2.0) * (z + 3.0);
}

CDECL

Here is the cl.exe assembly listing for MyFunction1:

PUBLIC _MyFunction1
_TEXT SEGMENT
_MyFunction1 PROC NEAR
; Line 2
 push ebp
 mov ebp, esp
; Line 3
 ; ...
; Line 4
 pop ebp
 ret 0
_MyFunction1 ENDP
_TEXT ENDS

Our first question is this: are the parameters passed on the stack, or on the floating-point register stack, or some place different entirely? Key to this question, and to this function, is a knowledge of what fld and fstp do. fld (Floating-Point Load) pushes a floating point value onto the FPU stack, while fstp (Floating-Point Store and Pop) moves a floating point value from ST0 to the specified location, and then pops the value from ST0 off the stack entirely. Remember that double values in cl.exe are treated as 8-byte storage locations (QWORD), while floats are only stored as 4-byte quantities (DWORD). It is also important to remember that floating point numbers are not stored in a human-readable form in memory, even if the reader has a solid knowledge of binary. Remember, these aren't integers. Unfortunately, the exact format of floating point numbers is well beyond the scope of this chapter. x is at offset +8, y at offset +16, and z at offset +24 from ebp. Therefore, z is pushed first, x is pushed last, and the parameters are passed right-to-left on the regular stack, not the floating point stack.

To understand how a value is returned, however, we need to understand what fmulp does. fmulp is the "Floating-Point Multiply and Pop" instruction. It performs the instructions:

ST1 := ST1 * ST0
FPU POP ST0

This multiplies ST(1) and ST(0) and stores the result in ST(1). Then, ST(0) is marked empty and the stack pointer is incremented. Thus, the contents of ST(1) are on the top of the stack. So the top 2 values are multiplied together, and the result is stored on the top of the stack. Therefore, in our instruction above, "fmulp ST(1), ST(0)", which is also the last instruction of the function, we can see that the last result is stored in ST0. Therefore, floating point parameters are passed on the regular stack, but floating point results are passed on the FPU stack.

For comparison, here is the GCC listing:

LC1:
 .long 0
 .long 1073741824
 .align 8
LC2:
 .long 0
 .long 1074266112
 .globl _MyFunction1
 .def _MyFunction1; .scl 2; .type 32; .endef
_MyFunction1:
 pushl %ebp
 movl %esp, %ebp
 subl $16, %esp
 fldl 8(%ebp)
 fstpl -8(%ebp)
 fldl 16(%ebp)
 fstpl -16(%ebp)
 fldl -8(%ebp)
 fld1
 faddp %st, %st(1)
 fldl -16(%ebp)
 fldl LC1
 faddp %st, %st(1)
 fmulp %st, %st(1)
 flds 24(%ebp)
 fldl LC2
 faddp %st, %st(1)
 fmulp %st, %st(1)
 leave
 ret
 .align 8

This is a very difficult listing, so we will step through it (albeit quickly). 16 bytes of extra space is allocated on the stack. Then, using a combination of fldl and fstpl instructions, the first 2 parameters are moved from offsets +8 and +16, to offsets -8 and -16 from ebp. Seems like a waste of time, but remember, optimizations are off.
fld1 loads the floating point value 1.0 onto the FPU stack. faddp then adds the top of the stack (1.0) to the value in ST1 ([ebp - 8], originally [ebp + 8]).

FASTCALL

Here is the cl.exe listing for MyFunction2:

PUBLIC @MyFunction2@20
_TEXT SEGMENT
@MyFunction2@20 PROC NEAR
; Line 7
 push ebp
 mov ebp, esp
; Line 8
 ; ...
; Line 9
 pop ebp
 ret 20 ; 00000014H
@MyFunction2@20 ENDP
_TEXT ENDS

We can see that this function is taking 20 bytes worth of parameters, because of the @20 decoration at the end of the function name. This makes sense, because the function is taking two double parameters (8 bytes each), and one float parameter (4 bytes each). This is a grand total of 20 bytes. We can notice at a first glance, without having to actually analyze or understand any of the code, that there is only one register being accessed here: ebp. This seems strange, considering that FASTCALL passes its regular 32-bit arguments in registers. However, that is not the case here: all the floating-point parameters (even z, which is a 32-bit float) are passed on the stack. We know this, because by looking at the code, there is no other place where the parameters could be coming from. Notice also that fmulp is the last instruction performed again, as it was in the CDECL example. We can infer then, without investigating too deeply, that the result is passed at the top of the floating-point stack. Notice also that x (offset [ebp + 8]), y (offset [ebp + 16]) and z (offset [ebp + 24]) are pushed in reverse order: z is first, x is last. This means that floating point parameters are passed in right-to-left order, on the stack. This is exactly the same as CDECL code, although only because we are using floating-point values. One final note is that MyFunction2 cleans its own stack, as referenced by the "ret 20" command at the end of the listing. Because none of the parameters were passed in registers, this function appears to be exactly what we would expect an STDCALL function would look like: parameters passed on the stack from right-to-left, and the function cleans its own stack. We will see below that this is actually a correct assumption.

Here is the GCC assembly listing for MyFunction2:

LC5:
 .long 0
 .long 1073741824
 .align 8
LC6:
 .long 0
 .long 1074266112
 .globl @MyFunction2@20
 .def @MyFunction2@20; .scl 2; .type 32; .endef
@MyFunction2@20:
 pushl %ebp
 movl %esp, %ebp
 subl $16, %esp
 fldl 8(%ebp)
 fstpl -8(%ebp)
 fldl 16(%ebp)
 fstpl -16(%ebp)
 fldl -8(%ebp)
 fld1
 faddp %st, %st(1)
 fldl -16(%ebp)
 fldl LC5
 faddp %st, %st(1)
 fmulp %st, %st(1)
 flds 24(%ebp)
 fldl LC6
 faddp %st, %st(1)
 fmulp %st, %st(1)
 leave
 ret $20

This is a tricky piece of code, but luckily we don't need to read it very closely to find what we are looking for. First off, notice that no other registers are accessed besides ebp. Again, GCC passes all floating point values (even the 32-bit float, z) on the stack. Also, the floating point result value is passed on the top of the floating point stack. We can see again that GCC is doing something strange at the beginning, taking the values on the stack from [ebp + 8] and [ebp + 16], and moving them to locations [ebp - 8] and [ebp - 16], respectively. Immediately after being moved, these values are loaded onto the floating point stack and arithmetic is performed. z isn't loaded till later, and isn't ever moved to [ebp - 24], despite the pattern. LC5 and LC6 are constant values that most likely represent floating point values (because the numbers themselves, 1073741824 and 1074266112, don't make any sense in the context of our example functions). Notice though that both LC5 and LC6 contain two .long data items, for a total of 8 bytes of storage? They are therefore most definitely double values.
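Those two constants can be decoded without a disassembler. The following hedged C sketch (a standalone program with illustrative names; it assumes a little-endian x86 host) reinterprets the two .long values of LC6 as a double:

 #include <stdio.h>
 #include <string.h>

 int main(void)
 {
     /* LC6 is ".long 0" then ".long 1074266112": low word first,
        high word second, exactly as GAS lays them out in memory. */
     unsigned int words[2] = { 0, 1074266112 };  /* high word 0x40080000 */
     double d;
     memcpy(&d, words, sizeof d);  /* reinterpret the raw bits */
     printf("%f\n", d);            /* prints 3.000000          */
     return 0;
 }

Swapping in 1073741824 (0x40000000) as the high word prints 2.000000, which agrees with the values derived in the floating-point example later in this chapter.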
We can also notice that the implementation of this function looks exactly like the FASTCALL version of this function. This is true because FASTCALL only passes DWORD-sized parameters in registers, and floating-point numbers do not qualify. This means that our assumption above was correct.

Here is the GCC listing for MyFunction3:

.align 8
LC9:
  .long 0
  .long 1073741824
.align 8
LC10:
  .long 0
  .long 1074266112
.globl @MyFunction3@20
  .def @MyFunction3@20; .scl 2; .type 32; .endef
@MyFunction3@20:
  pushl %ebp
  movl %esp, %ebp
  subl $16, %esp
  fldl 8(%ebp)
  fstpl -8(%ebp)
  fldl 16(%ebp)
  fstpl -16(%ebp)
  fldl -8(%ebp)
  fld1
  faddp %st, %st(1)
  fldl -16(%ebp)
  fldl LC9
  faddp %st, %st(1)
  fmulp %st, %st(1)
  flds 24(%ebp)
  fldl LC10
  faddp %st, %st(1)
  fmulp %st, %st(1)
  leave
  ret $20

Here we can also see, after all the opening nonsense, that [ebp - 8] (originally [ebp + 8]) is value x, and that [ebp + 24] is value z. These parameters are therefore passed right-to-left. Also, we can deduce from the final fmulp instruction that the result is passed in ST0. Again, the STDCALL function cleans its own stack, as we would expect.

Conclusions

Floating-point values are passed as parameters on the stack, and are passed on the FPU stack as results. Floating-point values do not get put into the general-purpose integer registers (eax, ebx, etc...), so FASTCALL functions that only have floating-point parameters collapse into STDCALL functions instead. double values are 8 bytes wide, and therefore will take up 8 bytes on the stack. float values, however, are only 4 bytes wide.

Float to Int Conversions

FPU Compares and Jumps

Floating Point Examples

Example: Floating Point Arithmetic

Here is the C source code, and the GCC assembly listing of a simple C language function that performs simple floating-point arithmetic. Can you determine what the numerical values of LC5 and LC6 are?

__fastcall double MyFunction2(double x, double y, float z)
{
   return (x + 1.0) * (y + 2.0) * (z + 3.0);
}

For this, we don't even need a floating-point number calculator, although you are free to use one if you wish (and if you can find a good one). LC5 is added to [ebp - 16], which we know to be y, and LC6 is added to [ebp - 24], which we know to be z. Therefore, LC5 is the number "2.0", and LC6 is the number "3.0". Notice that the fld1 instruction automatically loads the top of the floating-point stack with the constant value "1.0".

Difficulties

Code Optimization

An optimizing compiler is perhaps one of the most complicated, most powerful, and most interesting programs in existence. This chapter will talk about optimizations, although this chapter will not include a table of common optimizations.

Stages of Optimizations

There are two times when a compiler can perform optimizations: first, in the intermediate representation, and second, during the code generation.

Intermediate Representation Optimizations

While in the intermediate representation, a compiler can perform various optimizations, often based on dataflow analysis techniques. For example, consider the following code fragment:

x = 5;
if(x != 5)
{
   // if body
}

An optimizing compiler might notice that at the point of "if (x != 5)", the value of x is always the constant "5". This allows substituting "5" for x, resulting in "5 != 5". Then the compiler notices that the resulting expression operates entirely on constants, so the value can be calculated now instead of at run time, resulting in optimizing the conditional to "if (false)". Finally the compiler sees that this means the body of the if conditional will never be executed, so it can omit the entire body of the if conditional altogether.
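To make that sequence of transformations explicit, here is the same fragment at each stage. This is a sketch of what the dataflow passes accomplish, not the output of any particular compiler:

x = 5;                      /* original */
if(x != 5) { /* body */ }

if(5 != 5) { /* body */ }   /* after constant propagation */

if(0) { /* body */ }        /* after constant folding: 5 != 5 is false */

                            /* after dead-code elimination: nothing remains */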
Consider the reverse case:

x = 5;
if(x == 5)
{
   // if body
}

In this case, the optimizing compiler would notice that the IF conditional will always be true, and it won't even bother writing code to test x.

Control Flow Optimizations

Another set of optimizations which can be performed either at the intermediate or at the code generation level are control flow optimizations. Most of these optimizations deal with the elimination of useless branches. Consider the following code:

if(A)
{
   if(B)
   {
      C;
   }
   else
   {
      D;
   }
   end_B:
}
else
{
   E;
}
end_A:

In this code, a simplistic compiler would generate a jump from the C block to end_B, and then another jump from end_B to end_A (to get around the E statements). Clearly jumping to a jump is inefficient, so optimizing compilers will generate a direct jump from block C to end_A. This unfortunately will make the code more confusing and will prevent a clean recovery of the original code. For complex functions, it's possible that one will have to consider the code as made up of only if()-goto; sequences, without being able to identify higher-level statements like if-else or loops. The process of identifying high-level statement hierarchies is called "code structuring".

Code Generation Optimizations

Once the compiler has sifted through all the logical inefficiencies in your code, the code generator takes over. Often the code generator will replace certain slow machine instructions with faster machine instructions. For instance, the instruction:

beginning:
  ...
  loopnz beginning

operates much slower than the equivalent instruction set:

beginning:
  ...
  dec ecx
  jne beginning

So then why would a compiler ever use a loopxx instruction? The answer is that most optimizing compilers never use a loopxx instruction, and therefore, as a reverser, you will probably never see one used in real code.

What about the instruction:

mov eax, 0

The mov instruction is relatively quick, but a faster part of the processor is the arithmetic unit. Therefore, it makes more sense to use the following instruction:

xor eax, eax

because xor operates in very few processor cycles (and saves three bytes at the same time), and is therefore faster than a "mov eax, 0". The only drawback of a xor instruction is that it changes the processor flags, so it cannot be used between a comparison instruction and the corresponding conditional jump.

Loop Unwinding

When a loop needs to run for a small but definite number of iterations, it is often better to unwind the loop in order to reduce the number of jump instructions performed, and in many cases prevent the processor's branch predictor from failing. Consider the following C loop, which calls the function MyFunction() 5 times:

for(x = 0; x < 5; x++)
{
   MyFunction();
}

Converting to assembly, we see that this becomes, roughly:

  mov eax, 0
loop_top:
  cmp eax, 5
  jge loop_end
  call _MyFunction
  inc eax
  jmp loop_top
loop_end:

Each loop iteration requires the following operations to be performed:

- Compare the value in eax (the variable "x") to 5, and jump to the end if greater than or equal
- Increment eax
- Jump back to the top of the loop.

Notice that we remove all these instructions if we manually repeat our call to MyFunction():

call _MyFunction
call _MyFunction
call _MyFunction
call _MyFunction
call _MyFunction

This new version not only takes up less disk space because it uses fewer instructions, but also runs faster because fewer instructions are executed. This process is called Loop Unwinding.
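Loop unwinding is not limited to compile-time-constant trip counts. When the count is only known at run time, compilers often unroll the body a fixed number of times and add a small cleanup loop for the remainder. The following C sketch is illustrative only, and is not taken from any particular compiler's output:

void call_n_times(int n)
{
   int x = 0;
   for( ; x + 4 <= n; x += 4)   /* main body unrolled 4x: one compare/jump per 4 calls */
   {
      MyFunction();
      MyFunction();
      MyFunction();
      MyFunction();
   }
   for( ; x < n; x++)           /* remainder loop: at most 3 iterations */
   {
      MyFunction();
   }
}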
Inline Functions

The C and C++ languages allow the definition of an inline type of function. Inline functions are functions which are treated similarly to macros. During compilation, calls to an inline function are replaced with the body of that function, instead of performing a call instruction. In addition to using the inline keyword to declare an inline function, optimizing compilers may decide to make other functions inline as well. Function inlining trades code size for speed, in much the same way that loop unwinding does: the call overhead disappears, at the cost of duplicating the function body at every call site. It is not necessarily possible to determine whether identical portions of code were created originally as macros, inline functions, or were simply copied and pasted. However, when disassembling it can make your work easier to separate these blocks out into separate inline functions, to help keep the code straight.

Optimization Examples

Example: Optimized vs Non-Optimized Code

The following example is adapted from an algorithm presented in Knuth (vol 1, chapt 1) used to find the greatest common divisor of 2 integers. Compare the listing file of this function when compiler optimizations are turned on and off.

/* find the greatest common divisor of m and n */
int EuclidsGCD(int m, int n)
{
   int q, r;
   while(1)
   {
      q = m / n;
      r = m % n;
      if(r == 0)
      {
         return n;
      }
      m = n;
      n = r;
   }
}

Compiling with the Microsoft C compiler, we generate a listing file using no optimization:

PUBLIC _EuclidsGCD
_TEXT SEGMENT
_r$ = -8     ; size = 4
_q$ = -4     ; size = 4
_m$ = 8      ; size = 4
_n$ = 12     ; size = 4
_EuclidsGCD PROC NEAR
; Line 2
  push ebp
  mov ebp, esp
  sub esp, 8
$L477:
; Line 4
  mov eax, 1
  test eax, eax
  je SHORT $L473
; Line 6
  mov eax, DWORD PTR _m$[ebp]
  cdq
  idiv DWORD PTR _n$[ebp]
  mov DWORD PTR _q$[ebp], eax
; Line 7
  mov eax, DWORD PTR _m$[ebp]
  cdq
  idiv DWORD PTR _n$[ebp]
  mov DWORD PTR _r$[ebp], edx
; Line 8
  cmp DWORD PTR _r$[ebp], 0
  jne SHORT $L479
; Line 10
  mov eax, DWORD PTR _n$[ebp]
  jmp SHORT $L473
$L479:
; Line 12
  mov ecx, DWORD PTR _n$[ebp]
  mov DWORD PTR _m$[ebp], ecx
; Line 13
  mov edx, DWORD PTR _r$[ebp]
  mov DWORD PTR _n$[ebp], edx
; Line 14
  jmp SHORT $L477
$L473:
; Line 15
  mov esp, ebp
  pop ebp
  ret 0
_EuclidsGCD ENDP
_TEXT ENDS
END

Notice how there is a very clear correspondence between the lines of C code and the lines of the ASM code. The addition of the "; Line x" directives is very helpful in that respect.

Next, we compile the same function using a series of optimizations to stress speed over size:

cl.exe /Tceuclids.c /Fa /Ogt2

and we produce the following listing:

As you can see, the optimized version is significantly shorter than the non-optimized version. Some of the key differences include:

- The optimized version does not prepare a standard stack frame. This is important to note, because many times new reversers assume that functions always start and end with proper stack frames, and this is clearly not the case. EBP isn't being used, ESP isn't being altered (because the local variables are kept in registers, and not put on the stack), and no subfunctions are called. 5 instructions are cut by this.
- The "test EAX, EAX" series of instructions in the non-optimized output, under "; Line 4", is all unnecessary. The while-loop is defined by "while(1)" and therefore the loop always continues. This extra code is safely cut out. Notice also that there is no unconditional jump in the loop like would be expected: the "if(r == 0) return n;" instruction has become the new loop condition.
- The structure of the function is altered greatly: the division of m and n to produce q and r is performed in this function twice: once at the beginning of the function to initialize, and once at the end of the loop. Also, the value of r is tested twice, in the same places.
- The compiler is very liberal with how it assigns storage in the function, and readily discards values that are not needed.

Example: Manual Optimization

The following lines of assembly code are not optimized, but they can be optimized very easily. Can you find a way to optimize these lines?

mov eax, 1
test eax, eax
je SHORT $L473

The code in this line is the code generated for the "while( 1 )" C code; to be exact, it represents the loop break condition. Because this is an infinite loop, we can assume that these lines are unnecessary. "mov eax, 1" initializes eax. The test immediately afterwards tests the value of eax to ensure that it is nonzero. Because eax will always be nonzero (eax = 1) at this point, the conditional jump can be removed along with the "mov" and the "test". The assembly is actually checking whether 1 equals 1. Another fact is that the C code for an infinite FOR loop:

for( ; ; )
{
   ...
}

would not create such a meaningless assembly code to begin with, and is logically the same as "while( 1 )".

Example: Trace Variables

Here are the C code and the optimized assembly listing from the EuclidsGCD function, from the example above. Can you determine which registers contain the variables r and q?

At the beginning of the function, eax contains m, and esi contains n. When the instruction "idiv esi" is executed, eax contains the quotient (q), and edx contains the remainder (r). The instruction "mov ecx, edx" moves r into ecx, while q is not used for the rest of the loop, and is therefore discarded.

Example: Decompile Optimized Code

Below is the optimized listing file of the EuclidsGCD function, presented in the examples above. Can you decompile this assembly code listing into equivalent "optimized" C code? How is the optimized version different in structure from the non-optimized version?

Altering the conditions to maintain the same structure gives us:

int EuclidsGCD(int m, int n)
{
   int r;
   r = m % n;
   if(r != 0)
   {
      do
      {
         m = n;
         n = r;
         r = m % n;
      } while(r != 0);
   }
   return n;
}

It is up to the reader to compile this new "optimized" C code, and determine if there is any performance increase. Try compiling this new code without optimizations first, and then with optimizations. Compare the new assembly listings to the previous ones.

Example: Instruction Pairings

- Q - Why does the dec/jne combo operate faster than the equivalent loopnz?
- A - The dec/jnz pair operates faster than a loopnz for several reasons. First, dec and jnz pair up in the different modules of the netburst pipeline, so they can be executed simultaneously. Top that off with the fact that dec and jnz both require few cycles to execute, while the loopnz (and all the loop instructions, for that matter) instruction takes more cycles to complete. loop instructions are rarely seen output by good compilers.

Example: Avoiding Branches

Below is an assembly version of the expression c ? d : 0. There is no branching in the code, so how does it work?

; ecx = c and edx = d
; eax will contain c ? d : 0 (eax = d if c is not zero, otherwise eax = 0)
neg ecx
sbb eax, eax
and eax, edx
ret

This is an example of using various arithmetic instructions to avoid branching. The neg instruction sets the carry flag if c is not zero; otherwise, it clears the carry flag. The next line depends on this. If the carry flag is set, then sbb results in eax = eax - eax - 1 = 0xffffffff. Otherwise, eax = eax - eax = 0. Finally, performing an and on this result ensures that if ecx was not zero in the first place, eax will contain edx, and zero otherwise.
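The same mask trick can be written in C, which may make the three instructions easier to follow. This is a sketch for illustration; real compilers emit their own variations:

unsigned int conditional_select(unsigned int c, unsigned int d)
{
   /* mask is 0xFFFFFFFF when c != 0 (carry set by neg), and 0 otherwise */
   unsigned int mask = (c != 0) ? 0xFFFFFFFFu : 0u;  /* neg ecx ; sbb eax, eax */
   return mask & d;                                  /* and eax, edx */
}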
Example: Duff's Device

What does the following C code function do? Is it useful? Why or why not?

void MyFunction(int *arrayA, int *arrayB, int cnt)
{
   switch(cnt % 6)
   {
      while(cnt != 0)
      {
         case 0: arrayA[--cnt] = arrayB[cnt];
         case 5: arrayA[--cnt] = arrayB[cnt];
         case 4: arrayA[--cnt] = arrayB[cnt];
         case 3: arrayA[--cnt] = arrayB[cnt];
         case 2: arrayA[--cnt] = arrayB[cnt];
         case 1: arrayA[--cnt] = arrayB[cnt];
      }
   }
}

This piece of code is known as a Duff's device or "Duff's machine". It is used to partially unwind a loop for efficiency. Notice the strange way that the while() is nested inside the switch statement? Two arrays of integers are passed to the function, and at each iteration of the while loop, 6 consecutive elements are copied from arrayB to arrayA. The switch statement, since it is outside the while loop, only occurs at the beginning of the function. The modulo is taken of the variable cnt with respect to 6. If cnt is not evenly divisible by 6, then the modulo statement is going to start the loop off somewhere in the middle of the rotation, thus preventing the loop from causing a buffer overflow without having to test the current count after each iteration. Duff's Device is considered one of the more efficient general-purpose methods for copying strings, arrays, or data streams.

Code Obfuscation

Debugger Detectors

Detecting Debuggers

Timeouts

OllyDbg is a popular 32-bit usermode debugger. Unfortunately, the last few releases, including the latest version (v1.10), contain a vulnerability in the handling of the Win32 API function OutputDebugString().

Resources

Wikimedia Resources

Wikibooks
- X86 Assembly
- Subject:Assembly languages
- Compiler Construction
- Floating Point
- C Programming
- C++ Programming

Wikipedia

External Resources

External Links
- The MASM Project
- Randall Hyde's Homepage
- Borland Turbo Assembler
- NASM Project Homepage
- FASM Homepage
- DCC Decompiler
- Boomerang Decompiler Project
- Microsoft debugging tools main page
- Solaris observation and debugging tools main page
- Free Debugging Tools, Static Source Code Analysis Tools, Bug Trackers
- Microsoft Developers Network (MSDN)
- Gareth Williams: http://gareththegeek.ga.funpic.de/
- B. Luevelsmeyer, "PE Format Description"
- TheirCorp, "The Unofficial TypeLib Data Format Specification"
- MSDN Calling Convention page
- Dictionary of Algorithms and Data Structures
- Charles Petzold's Homepage
- Donald Knuth's Homepage
- "THE ISA AND PC/104 BUS" by Mark Sokos 2000
- "Practically Reversing CRC" by Bas Westerbaan 2005
- "CRC and how to Reverse it" by anarchriz 1999
- "Reverse Engineering is a Way of Life" by Matthew Russotto
- "the Reverse and Reengineering Wiki"
- F-Secure Khallenge III: 2008 Reverse Engineering competition (is this an annual challenge?)
- "Breaking Eggs And Making Omelettes: Topics On Multimedia Technology and Reverse Engineering"
- "Reverse Engineering Stack Exchange"

Books
- Yurichev, Dennis, "An Introduction To Reverse Engineering for Beginners". Online book.
- Eilam, Eldad. "Reversing: Secrets of Reverse Engineering." 2005. Wiley Publishing Inc. ISBN 0764574817
- Hyde, Randall. "The Art of Assembly Language," No Starch, 2003. ISBN 1886411972
- Aho, Alfred V. et al. "Compilers: Principles, Techniques and Tools," Addison Wesley, 1986. ISBN 0321428900
- Steven Muchnick, "Advanced Compiler Design & Implementation," Morgan Kaufmann Publishers, 1997.
ISBN 1-55860-320-4
- Kernighan and Ritchie, "The C Programming Language," 2nd Edition, Prentice Hall, 1988.
- Petzold, Charles. "Programming Windows, Fifth Edition," Microsoft Press, 1999
- Hart, Johnson M. "Win32 System Programming, Second Edition," Addison Wesley, 2001
- Gordon, Alan. "COM and COM+ Programming Primer," Prentice Hall, 2000
- Nebbett, Gary. "Windows NT/2000 Native API Reference," Macmillan, 2000
- Levine, John R. "Linkers and Loaders," Morgan-Kauffman, 2000
- Knuth, Donald E. "The Art of Computer Programming," Vol 1, Addison Wesley, 1997.
- "MALWARE: Fighting Malicious Code," by Ed Skoudis; Prentice Hall, 2004
- "Maximum Linux Security, Second Edition," by Anonymous; Sams, 2001.
https://en.wikibooks.org/wiki/X86_Disassembly/Print_Version
CC-MAIN-2016-44
en
refinedweb
David,

Appreciated so much for your sharing.

Best Regards,
Adele

On 10/25/2011 10:23 AM, David Magda wrote:

On Tue, October 25, 2011 09:42, [email protected] wrote:

Hi all, I have a customer who wants to know what is the max number of characters allowed when creating a name for a zpool. Are there any restrictions on using special characters?

255 characters. Try doing a 'man zpool'.

Or, use the source, Luke. Going to the source repository and searching for "zpool" turns up the relevant code. Inside of it we have a zpool_do_create() function, which defines a 'char *poolname' variable. From there we call zpool_create() in libzfs/common/libzfs_pool.c, then zpool_name_valid(), then pool_namecheck(), where we end up with the following code snippet:

/*
 * Make sure the name is not too long.
 *
 * ZPOOL_MAXNAMELEN is the maximum pool length used in the userland
 * which is the same as MAXNAMELEN used in the kernel.
 * If ZPOOL_MAXNAMELEN value is changed, make sure to cleanup all
 * places using MAXNAMELEN.
 */
if (strlen(pool) >= MAXNAMELEN) {
        if (why)
                *why = NAME_ERR_TOOLONG;
        return (-1);
}

Check the function for further restrictions.

_______________________________________________
zfs-discuss mailing list
[email protected]
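In other words, because the check rejects any name whose strlen() is greater than or equal to MAXNAMELEN (256 in the Solaris headers), the longest name that passes is MAXNAMELEN - 1 = 255 characters. A trivial standalone version of the check, for illustration only:

#include <string.h>

#define MAXNAMELEN 256  /* as in the Solaris/ZFS headers */

int pool_name_too_long(const char *pool)
{
        return strlen(pool) >= MAXNAMELEN;  /* rejects 256 characters and up */
}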
https://www.mail-archive.com/[email protected]/msg47500.html
CC-MAIN-2016-44
en
refinedweb
First off, this is my first article on Code Project. Actually, it is my first article ever! I can't wait to get some feedback, and I welcome any criticism. I am by no means a C# expert, so please feel free to express your expert opinions.

The problem that I am going to address in this article is how to pass some Transact-SQL text to a specified SQL Server instance, and ask it to parse the code, returning any syntax errors. This would be useful in a case where your application allows the user to enter some T-SQL text to execute, or where T-SQL gets executed dynamically from script files, or whatever. The possibilities are endless; just bear in mind the security implications that this might have if this article inspires you to implement such a design.

Say, for example, you have a system which enables one user (with special access privileges, of course) to write T-SQL code and store it on the system in the form of scripts (in the database or in files). Then another user of the system would come in and choose one of these scripts based on the name and description provided by the programmer, and then click a button to execute it. Obviously you need some mechanism to check the validity of the code before allowing it to be stored on the system. This is where my solution would hopefully be useful.

I did some research and decided to include some background on the inner workings of SQL Server, or any other DBMS for that matter. So what really happens when your applications execute queries on the database? Is there a specific process that the DBMS follows to return the requested data, or to update or delete a subset of data? What happens under the hood of your preferred DBMS is quite complicated, and I will only explain or mention key processes on a high level.

SQL Server is split into multiple components, and most of these components are grouped to form the Relational Engine and the Storage Engine. The Relational Engine is responsible for receiving, checking and compiling the code, and for managing the execution process, while the Storage Engine is responsible for retrieving, inserting, updating or deleting the underlying data in the database files. The component that I want to touch base with is the Query Processor, which is part of the Relational Engine. As the name suggests, it is the Query Processor's job to prepare submitted SQL statements before they can be executed by the server. The Query Processor will go through three processes before it can provide an Execution Plan. This execution plan is the most optimal route chosen by the DBMS for servicing the query. The three processes are:

- Parsing. The parser checks for syntax errors, including correct spelling of keywords.
- Normalization. The normalizer performs binding, which involves checking if the specified tables and columns exist, gathering metadata about the specified tables and columns, and performing some syntax optimizations.
- Compilation and Optimization. Programmers frequently use the term Compilation to refer to the compilation and optimization process. True compilation only affects special T-SQL statements such as variable declarations and assignments, loops, conditional processing, etc. These statements provide functionality to SQL code, but they do not form part of DML statements such as SELECT, INSERT, UPDATE or DELETE. On the other hand, only these DML statements need to be optimized. Optimization is by far the most complex process of the Query Processor.
It employs an array of algorithms to first gather a sample of suitable execution plans, and then filters through them until the best candidate is chosen. After the optimal execution plan is determined and returned by the Query Processor, it is stored in a cache. SQL Server will automatically determine how long to keep this execution plan within the cache, as it might get reused often. When an application executes a query, SQL Server checks if an execution plan exists in the cache for the query. SQL Server generates a cache key based on the query text, and searches for the same key in the cache. Queries need to be recompiled and reoptimized when metadata changes, such as column definitions or indexes, but not for changes in parameters, system memory, data in the data cache, etc. Finally, the Query Processor communicates the execution plan to the Storage Engine and the query is executed.

I have created and included a simple code editor application, but keep in mind that the main purpose of this article is to provide you with a parse function, and only code snippets and notes revolving around this point will be covered here. There are lots of very useful articles out there for building WPF applications. I will assume that you have some experience with Visual Studio and C#. I have included the example app, which was written in Visual C# Express 2010 as a WPF Application. Knowledge of WPF is not required, as I will explain the relevant C# code in detail.

Essentially, I want my application to have an Execute button and a Parse button (like in MS SQL Server Management Studio). Pressing the Execute button, SQL Server will go through the whole process as described above to first prepare the statements and then determine the execution plan before it will be executed. For the parse button, naturally it should only parse the query. I am going to create a class that will encapsulate all my ADO.NET objects, and provide methods Execute and Parse for wiring functionality to the buttons. I would also provide methods for connecting and disconnecting to the SQL Server instance with a specified connection string. The class is called SqlHandler.

This class encapsulates and hides the following objects:

- SqlConnection conn
- SqlCommand cmd
- SqlDataAdapter adapter
- List<SqlError> errors

You need to include using System.Data.SqlClient; and using System.Data; in the using directives list at the top of the code file, as I am sure you know. The conn object is used for connecting to the database. The ConnectionString property directly gets and sets the conn.ConnectionString property. This allows you to get or set the connection string from outside the class. The cmd object is used to execute commands, and adapter is used to obtain query results from the database. The object errors is a generic list of type SqlError. This list will be used to capture and return errors generated while executing or parsing T-SQL code.

Most of you reading this article would already be familiar with these ADO.NET classes. Most of the time I am developing applications with ADO.NET, I only use a few selected properties and methods. The SqlConnection class contains a FireInfoMessageEventOnUserErrors property and an InfoMessage event that are less well known and less often used (in my opinion at least).
I had to discover them myself by digging through the objects, as I could not find a relevant article explaining how to accomplish what I wanted. Eventually, through trial and error, I got a working solution.

FireInfoMessageEventOnUserErrors is a boolean property. When set to false (default), the InfoMessage event will not be fired when an error occurs, and an Exception will be raised by the ADO.NET API. When set to true, an Exception will not be thrown, but the InfoMessage event will be fired. For my code to work, I had to enable this event to catch all the messages through the SqlInfoMessageEventArgs event argument object. The following code snippet shows how to set this property and event in the constructor:

conn.FireInfoMessageEventOnUserErrors = true;
conn.InfoMessage += new SqlInfoMessageEventHandler(conn_InfoMessage);

conn_InfoMessage is the name of the event handler method that will be called when the event fires. It is important to note that although this looks like an asynchronous operation, it is in fact synchronous. This means that when the T-SQL query is executed by passing it to cmd.ExecuteNonQuery or to adapter.Fill, the event will be fired before continuing execution. This allows us to suck up all the messages into the errors list before returning from the Execute and Parse methods of our class where ExecuteNonQuery and Fill are called. The snippet below describes how the messages are caught in the event handler.

private void conn_InfoMessage(object sender, SqlInfoMessageEventArgs e)
{
    //ensure that all errors are caught
    SqlError[] errorsFound = new SqlError[e.Errors.Count];
    e.Errors.CopyTo(errorsFound, 0);
    errors.AddRange(errorsFound);
}

It is important to mention that the event will be fired for every error that the T-SQL script might contain. For instance, if your script contains two errors, the conn_InfoMessage event handler will be called twice! I only discovered this while testing my application, where I tried to parse a script containing multiple errors. The initial result was that my Parse method always returned only one error, while SSMS reported the correct number of errors for the same script. Only when I inserted a message box in the event handler did I discover how it works. The reason why this was misleading is because the second argument of our event handler, the e object, which is of type SqlInfoMessageEventArgs, has an Errors property. This property is of type SqlErrorCollection, which to me implied that it contains multiple SqlError objects. Naturally, I assumed that this collection would contain all the errors at once. After a few code modifications, I got the desired result. What happens now is that every time the event is fired, an SqlError array is created and the e.Errors collection of SqlError objects will be copied to this array. Even though this collection contained exactly one item every time I have tested my code, I make sure that all the SqlError objects are captured just to be safe. This whole array is then copied to the errors list, which is a private field within my class definition. This list is used to aggregate all the errors before returning them to the client code. Another point worth mentioning is that the errors list has to be cleared every time Parse or Execute is called.
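Putting the pieces described so far together, the skeleton of the class looks roughly like this. This is a sketch based on the description above; the attached sample project is the authoritative version:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class SqlHandler
{
    private SqlConnection conn = new SqlConnection();
    private SqlCommand cmd;
    private SqlDataAdapter adapter;
    private List<SqlError> errors = new List<SqlError>();

    public SqlHandler()
    {
        cmd = conn.CreateCommand();
        adapter = new SqlDataAdapter(cmd);

        // Report errors through InfoMessage instead of throwing exceptions.
        conn.FireInfoMessageEventOnUserErrors = true;
        conn.InfoMessage += new SqlInfoMessageEventHandler(conn_InfoMessage);
    }

    private void conn_InfoMessage(object sender, SqlInfoMessageEventArgs e)
    {
        // Ensure that all errors are caught; fired once per error.
        SqlError[] errorsFound = new SqlError[e.Errors.Count];
        e.Errors.CopyTo(errorsFound, 0);
        errors.AddRange(errorsFound);
    }

    // Execute, Parse, Connect, Disconnect, ConnectionString and IsConnected
    // are covered below.
}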
The Execute Method

The first parameter of this method, sqlText, contains the T-SQL code to be executed. The second parameter is an SqlError array object. Take notice of the out keyword. This means that the parameter is an out parameter, and we have to set its value somewhere in the method. This allows the method to return both a DataTable object (through the normal return type and return statement) and an array containing our SqlError objects. The client code will be responsible for checking the length of the array to determine if any errors were generated.

public DataTable Execute(string sqlText, out SqlError[] errorsArray)
{
    if (!IsConnected)
        throw new InvalidOperationException("Can not execute Sql query while the connection is closed!");
    errors.Clear();
    cmd.CommandText = sqlText;
    DataTable tbl = new DataTable();
    adapter.Fill(tbl);
    errorsArray = errors.ToArray();
    return tbl;
}

First we need to test whether the connection is open or not using the IsConnected property, and throw an exception if it is not. Next, the errors list is cleared to prevent reporting errors previously encountered. The query is then executed using adapter.Fill(tbl), where tbl is a reference to a new DataTable object. This table will be filled with data if the T-SQL code returns any data. As mentioned earlier, the InfoMessage event will be raised synchronously, so the next line after calling Fill will only be executed after all errors were raised through the event. All errors (if any) are copied to a new array of SqlError objects. This array is assigned to the out parameter errorsArray, allowing the client of our class to check if any errors were encountered. Remember that no exceptions will be thrown when you set FireInfoMessageEventOnUserErrors to true.

The Parse Method

This method accepts one parameter, sqlText, which contains the T-SQL code to be parsed. It returns an array containing SqlError objects. The client code should test the length of this array to determine if any errors were generated.

public SqlError[] Parse(string sqlText)
{
    if (!IsConnected)
        throw new InvalidOperationException("Can not parse Sql query while the connection is closed!");
    errors.Clear();
    cmd.CommandText = "SET PARSEONLY ON";
    cmd.ExecuteNonQuery();
    cmd.CommandText = sqlText;
    cmd.ExecuteNonQuery();
    //conn_InfoMessage is invoked for every error, e.g. 2 times for 2 errors
    cmd.CommandText = "SET PARSEONLY OFF";
    cmd.ExecuteNonQuery();
    return errors.ToArray();
}

Again, we throw an exception if the connection is not open, and we clear the errors list. SQL Server has an option "PARSEONLY" that we will use to prevent further processing of our T-SQL code beyond the parse phase. Before our sqlText string is executed, the PARSEONLY option is set to ON. Afterwards it is set back to OFF. There is a potential pitfall here: what if the client code is a console-type application, and the user executed the command SET PARSEONLY ON to explicitly prevent further execution beyond the parse phase? When the client code then calls the Parse method, PARSEONLY will be set back to OFF before the method returns, without the user's knowledge. Workarounds for this problem will not be explored further in this article, because the implementation will differ as per the requirements of the project.

Finally, the ConnectionString property of our SqlHandler class "forwards" the ConnectionString property on the SqlConnection object that it encapsulates.
In the constructor, the ConnectionString is initialized to a "template" connection string. You have to manually insert the Data Source and Initial Catalog values in the string. The Connect method accepts a string argument containing a connection string. This connection string will replace the existing connection string on the SqlConnection object.

My sample project contains the SqlHandler class and a small test application. The application provides some basic text editor functionality such as opening files, saving files, cut, copy and paste. Furthermore, it implements the SqlHandler object's methods to enable connecting and disconnecting from a SQL Server instance, and executing and parsing SQL code. The layout of the main window was designed to be familiar looking, with the menu and toolbar at the top, the text area in the middle, and an error grid and status bar at the bottom.

When you build and run the application, a Connection dialog window will pop up. On this window you have to enter a valid connection string to connect to a SQL Server instance. Keep in mind that this application is not multi-threaded. As a result, entering a bad connection string will cause the interface to "hang" while the connection times out and eventually returns with an error message.

I have created a region in the SqlHandler class for housing custom RoutedUICommand objects for binding my own commands to the user interface. I put them in their own separate region because they have nothing to do with the rest of the class. These command objects are all static, and the class also defines a static constructor for initializing them. These commands could also have been placed in a separate class.

Type your T-SQL text in the text area in the middle of the window. To parse the code, press the Parse button, or press the F6 key on the keyboard. To execute the code, press the Execute button, or press F5 on the keyboard. Both the Parse and Execute functions will report errors in the errors grid at the bottom of the application. The errors grid is nested within an expander which will pop up automatically when errors are generated. When you execute a query that returns a result set, a result viewer window will appear. Parsing and executing will be disabled when the application is not connected to a SQL Server instance, as defined by the command bindings.

When you parse a query that references invalid database objects, such as tables or columns that do not exist, no errors will be returned. Remember from the Background section that parsing does not include binding.

Compliments to the author of the icons set, which can be downloaded here for free.

Visual Studio has some nifty little tools that can make your life easier. One of them is the tool that inserts appropriate code snippets where it is expected by pressing the Tab key. This is useful, for example, when you are registering the InfoMessage event. Type the following line of code: conn.InfoMessage +=. You should see a little pop-up box... Press Tab once and it will complete the line for you based on the required delegate for the event. Press Tab again and it will generate the event handler method for you. The event handler will already be set up to contain the correct arguments; all you have to do is add your code.

SQL Server Pro, 23/10/1999, Inside SQL Server: Parse, Compile, and Optimize [online]. Available at: [Accessed on 20th June.
http://www.codeproject.com/Articles/410081/Parse-Transact-SQL-to-Check-Syntax?fid=1737157&df=90&mpp=10&sort=Position&spc=None&tid=4294106&noise=1&prof=True&view=None
CC-MAIN-2016-44
en
refinedweb
It is no real surprise that one of the primary purposes of computer languages is processing lists (indeed, one of the oldest programming languages - Lisp - is a contraction of the term "list processing"). In Javascript, lists are managed with the Array object. The last few years have seen a significant beefing up of what arrays can do as part of the EcmaScript 6 development, to the extent that even many programmers aren't aware of the full capabilities that arrays offer. The following is a mixed bag of tricks, focusing both on some of the cooler ES6 code and on some of the more esoteric functional programming tricks of ES5. One thing that both of these improvements do is to establish a unifying principle for iterating through lists of items, a problem that's emerged from twenty years of different people implementing what was familiar to them in JavaScript. That's admittedly a testimony to how flexible Javascript is.

Avoiding the Index

The traditional way of iterating over an array has been to use indexes, such as the following (Listing 1).

var colors = ["red","orange","yellow","green","blue","violet"];
for (var index=0; index<colors.length; index++){
    var color = colors[index];
    // do something with color
}

Listing 1. Using an index to iterate over an array

The problem with this is that it first requires the declaration and creation of an index variable, and you still have to resolve the value that the array has at that index. It's also just aesthetically unpleasing - the emphasis is on the index, not the value. ES6 introduces the of keyword, which lets you iterate to retrieve an object directly (Listing 2).

var colors = ["red","orange","yellow","green","blue","violet"];
for (const color of colors){
    // do something with color
}

Listing 2. Using the of keyword to iterate over an array.

The use of this construction is both shorter and with a much clearer intent, retrieving each color from the list without having to resolve an array position. The use of the const keyword also provides a bit of optimization - by declaring the variable color as constant, JavaScript can reduce the number of pointer resolutions because a new variable isn't declared each time.

If you've ever had to process web pages to retrieve specific elements, you likely also know that the result of functions such as document.getElementsByClassName() is array-like, but not strictly an array. (You have to use the item() keyword on the resulting object instead.) The ES6 from() function is a static function of the Array object that lets you convert such objects into JavaScript arrays (Listing 3).

var colorNodes = Array.from(document.getElementsByClassName("color"));
for (const colorNode of colorNodes){
    // do something with colorNode
}

Listing 3. Converting an array-like object into an array.

By the way, how do you know that colorNodes is an array? You use the static Array.isArray() function (Listing 4):

if (Array.isArray(colorNodes)) {console.log("I'm an Array!")}

Listing 4. Testing for an array.

Array-like objects, on the other hand, will self-identify (using the typeof keyword) as objects, or will return false to the .isArray() function. In most cases, so long as an interface exposes the length property, it should be possible to convert it into an array. This can be used to turn a string into an array of characters with a single call (Listing 5).

function strReverse(str){
    return Array.from(str).reverse().join("");
};
console.log(strReverse("JavaScript"))
> "tpircSavaJ"

Listing 5. Inverting a string
In this example, the strReverse() function uses from() to convert a string into an array of characters, then uses the Array reverse() function to reverse the order, followed by the join("") function to convert the array back into a string.

Prepositional Soup: From In to Of

The of keyword is easy to confuse with the in keyword, though they do different things. The of statement, when applied to an array, returns the items of that array in the order of that array (Listing 6).

var colors = ["red","orange","yellow","green","blue","violet"];
for (const color of colors){console.log(color)}
> "red"
> "orange"
> "yellow"
> "green"
> "blue"
> "violet"

Listing 6. The of keyword returns the values of an array.

The in keyword, on the other hand, returns the index keys of the array.

var colors = ["red","orange","yellow","green","blue","violet"];
for (const colorIndex in colors){console.log(colorIndex)}
> 0
> 1
> 2
> 3
> 4
> 5

Listing 7. The in keyword returns the keys or indices of an array.

Note that Javascript arrays, unlike Java arrays, can be sparse. This means that you can have an element a[0] and a[5] without having an a[1] through a[4]. Iterating over the length of an array in this case can cause problems, as the length of the Array does not necessarily correspond to the number of populated entries (Listing 8).

var a = [];
a[0] = 10;
a[5] = 20;
for(var index=0; index<a.length; index++){console.log(a[index])};
// Problematic: a.length is 6, but a[1] through a[4] don't exist,
// so undefined is logged for the missing entries.
for(var index in a){console.log(index + ": " + a[index])};
// Here, index takes the values 0 and 5 respectively, without any intervening values.
> 0: 10
> 5: 20

Listing 8. Sparse arrays can give problems for iterated indexes, while in provides better support.

The Joy of Higher Order Functions

Callbacks have become an indispensable part of Javascript, especially with the popularity of jQuery and related browser frameworks and the node.js server. In a callback, a function is passed as an object to another function. Sometimes these callbacks are invoked by asynchronous operations (such as those used to make asynchronous calls to databases or web services), but they can also be passed to a plethora of Array-related iteration functions.

Perhaps the simplest of such callbacks is that used by forEach(). The principal argument is the function to be invoked, passing as parameters both an object and its associated key or index:

colors.forEach(function(obj,key){console.log(key + ": " + obj)})
> 0: red
> 1: orange
> 2: yellow
> 3: green
> 4: blue
> 5: violet

Listing 9. Invoking a function using the forEach() function.

The forEach() function, while perhaps the most commonly known of the "functional" Array functions, is intended primarily to execute an expression without providing a resulting output. If, on the other hand, you want to chain functions together, a more useful Array function is the map() function, which takes the output of each function and adds it to a new array. For instance, the following function (Listing 10) takes an array of lower case characters and returns an array where the first letter of each word has been capitalized.

colors.map(function(obj,index){return obj[0].toUpperCase() + obj.substr(1);})
> ["Red", "Orange", "Yellow", "Green", "Blue", "Violet"]

Listing 10. Using the map() function.

Note that this function can be extended to turn a sentence into "title case" where every word is capitalized (listing 11).
function titleCase(str){
    return str.split(" ")
        .map(function(obj,index){return obj[0].toUpperCase() + obj.substr(1);})
        .join(" ")
};
var expr = "This is a test of capitalization.";
titleCase(expr);
> "This Is A Test Of Capitalization."

Listing 11. Using array functions to capitalize expressions.

As should be obvious, the split() and join() functions bridge the gap between strings and arrays, where split() converts a string into an array using a separator expression, and join() takes an array and joins the strings within the array back together, with a given separator. What's even cooler about split() is that it can also be used with regular expressions. A common situation that arises with input data is that you may have multiple spaces between words, and you want to remove all but a single space. You can use split and join (listing 12) to do precisely that.

var expr = "This  is an   example of how  you clean up text with embedded tabs, white spaces and even carriage returns.";
expr.split(/\s+/).join(" ")
> "This is an example of how you clean up text with embedded tabs, white spaces and even carriage returns."

Listing 12. Using arrays to clean up "dirty" text.

While touching on dirty data, another useful "functional" Array function is the filter() function, which will iterate over an array and compare the item against a particular expression. If the expression is true, then the filter() function will pass this along, while if it's false, nothing gets past. If you have data coming from a database, certain field values may have the value null. Getting rid of these (or some similar "marker" value) can go a long way towards making that data more usable without hiccups (Listing 13).

var data = [10,12,5,9,22,18,null,21,17,null,3,12];
data.filter(function(obj,index){return obj != null})
> [10, 12, 5, 9, 22, 18, 21, 17, 3, 12]

Listing 13. Filter cleans up dirty data.

This notation is pretty typical of Javascript - powerful but confusingly opaque with all of these callback functions. However, it turns out that there are some new notational forms in ES6 that can make these kinds of functions easier to read. The predicate construct (arg1, arg2, …) => expression-involving-arg1, arg2, … can be used to make small anonymous functions. Thus,

(obj, index) => (obj != null)

is the same as

function(obj, index){return (obj != null)};

With this capability, you can write Listing 13 as:

data.filter((obj) => obj != null);

You can even use this notation to create named functions (Listing 14).

dropNulls = (obj) => (obj != null)
data.filter(dropNulls)

Listing 14. Using predicate notation to define named functions for filters.

A counterpart to filter() is find(), which returns the first item (not all items) where the predicate is true. For instance, suppose that you wanted to find from an array the first value that is greater than a given threshold. The find() function can do exactly that (Listing 15).

var data = [10, 12, 5, 9, 22, 18, 21, 17, 3, 12];
data.find((item) => item>10)
> 12

Listing 15. Use find to get the first item that satisfies a predicate.

Note that if you wanted to get all items where this condition is true, then simply use a filter (Listing 16).

var data = [10, 12, 5, 9, 22, 18, 21, 17, 3, 12];
data.filter((item) => (item>10))
> [12, 22, 18, 21, 17, 12]

Listing 16. find retrieves the first value satisfying a predicate; filter retrieves all of them.

The findIndex() function is related, except that it returns the location of the first match, or -1 if nothing satisfies the predicate.
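A quick illustration of the difference, using the same data array as above:

var data = [10, 12, 5, 9, 22, 18, 21, 17, 3, 12];
data.findIndex((item) => item>20)
> 4
data.findIndex((item) => item>100)
> -1

The first call returns 4 because data[4] (the value 22) is the first element greater than 20, while the second returns -1 because nothing matches.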
Did you know that JavaScript Arrays can do map/reduce? This particular process, made famous by Hadoop, involves a two-step process where a dataset (an array) is initially mapped to another "processed" array. This array is then passed to a reducer function, which takes the array and converts it into a single processed entity. You can see this in action in Listing 17, which simply sums up the array values, but shows each of the arguments in the process:

var data = [10, 12, 5, 9, 22, 18, 21, 17, 3, 12];
data.reduce(function(prev,current,index,arr){
    console.log(prev + ", " + current + ", " + index + ", " + arr);
    return prev + current}, 0);
> 0, 10, 0, 10,12,5,9,22,18,21,17,3,12
> 10, 12, 1, 10,12,5,9,22,18,21,17,3,12
> 22, 5, 2, 10,12,5,9,22,18,21,17,3,12
> 27, 9, 3, 10,12,5,9,22,18,21,17,3,12
> 36, 22, 4, 10,12,5,9,22,18,21,17,3,12
> 58, 18, 5, 10,12,5,9,22,18,21,17,3,12
> 76, 21, 6, 10,12,5,9,22,18,21,17,3,12
> 97, 17, 7, 10,12,5,9,22,18,21,17,3,12
> 114, 3, 8, 10,12,5,9,22,18,21,17,3,12
> 117, 12, 9, 10,12,5,9,22,18,21,17,3,12
> 129

Listing 17. The reduce() function takes an array and processes content into an accumulator.

The first column shows the accumulator so far (initially set to the 0 value given as the second argument of the reduce() function), the second column gives the current value itself, the third item is the index, and the final item is the array.

A perhaps more realistic example would be a situation where a tax of 6% is added to the portion of each cost above $10. Again, you can use predicate maps to simplify things (Listing 18).

var data = [10, 12, 5, 9, 22, 18, 21, 17, 3, 12];
total = data.reduce((prev,current) => (prev + current + ((current>10)?(current-10)*.06:0)), 0);
console.log("$" + total.toLocaleString("en"))
> "$131.52"

Listing 18. Using the reduce() function to calculate a total with a complex tax.

The .toLocaleString() function is very useful for formatting output to a given linguistic locale, especially for currency. If the argument value "de" is passed to the function, the output would be given as "131,52", where the comma is used as the decimal delimiter and the period is used for the thousands delimiter.

How Do Your Arrays Stack Up?

When I was in college, cafeterias made use of special boxes that had springs mounted in the bottom and a flat plate on which you could put quite a number of trays. The weight of each tray pushed the spring down just enough to keep the stack of trays level, and when you took a tray off, it rose so that the last tray placed on the stack was always the first tray off. This structure inspired programmers who discovered that there were any number of situations where you might want to save the partial state of something by pushing it down on an array, then when you were done processing, popping the previous item back off. Not surprisingly, such arrays became known as, well, stacks.

JavaScript defines four functions - push(), pop(), shift(), and unshift() respectively. push() places an item on the end of an array, and pop() removes and returns it; unshift() and shift() do the same at the front of the array. You can see this in Listing 19.

var stack = ["a","b","c"];
console.log(stack)
> ["a", "b", "c"]
stack.push("d")
> 4
console.log(stack)
> ["a", "b", "c", "d"]
stack.pop()
> "d"
console.log(stack)
> ["a", "b", "c"]
stack.unshift("d")
> 4
console.log(stack)
> ["d", "a", "b", "c"]
stack.shift()
> "d"
console.log(stack)
> ["a", "b", "c"]

Listing 19. Pushing and popping the stack.
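Note that combining push() with shift() instead of pop() turns the same array into a queue (first in, first out) rather than a stack:

var queue = [];
queue.push("a");
queue.push("b");
queue.push("c");
queue.shift()
> "a"
queue.shift()
> "b"

Each shift() removes and returns the oldest item, so elements leave the queue in the order they arrived.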
Stack-based arrays have a number of uses, but before digging into them, you can extend the Array() functionality with a new function called peek() that will let you retrieve the last item in the array (which is the top of the stack), as shown in Listing 20.

Array.prototype.peek = function(){return this[this.length-1]};
arr = ["a","b","c","d","e"]
console.log(arr.peek())
> "e"

Listing 20. Peeking at the stack.

(Note that you can always get the first item in an array with arr[0].)

One of the more common uses for stacks is to maintain a "browse" history in a Javascript application. For instance, suppose that you have a data application where render(id) will draw a "page" of content without doing a server-side reload. Listing 21 shows a simple application that keeps a stack which will let you add new items as you continue browsing, but also lets you back up to previously visited "pages".

<!DOCTYPE html>
<html>
<head>
<title></title>
<style>
#container,#list{color:white;}
</style>
<script>
Array.prototype.peek=function(){return this[this.length-1]};
App = {
    stack:[],
    color:"white",
    colorSelect:null,
    popButton:null,
    list:null,
    container:null,
    init:function(){
        App.container = document.getElementById("container");
        App.colorSelect = document.getElementById("colorSelect");
        App.popButton = document.getElementById("popButton");
        App.list = document.getElementById("list");
        App.colorSelect.addEventListener("change",App.display);
        App.popButton.addEventListener("click",App.goBack);
        return false;
    },
    display:function(){
        color = App.colorSelect.value;
        oldColor = App.color;
        App.color = color;
        container.innerHTML=color;
        document.body.style.backgroundColor=color;
        App.stack.push(oldColor);
        App.list.innerHTML = App.stack;
    },
    goBack:function(){
        var oldColor = App.stack.pop();
        if (oldColor != null){
            container.innerHTML=oldColor;
            document.body.style.backgroundColor=oldColor;
            App.color=oldColor;
            App.list.innerHTML = App.stack;
        }
        return false;
    }
};
window.addEventListener("load",App.init);
</script>
</head>
<body>
<div id="container"></div>
<div>
<select id="colorSelect">
<option>White</option>
<option>Red</option>
<option>Green</option>
<option>Blue</option>
<option>Orange</option>
<option>Purple</option>
</select>
<button id="popButton">Pop</button>
</div>
<div id="list"></div>
</body>
</html>

Listing 21. Creating a stack that remembers color states.

When you select an item from the <select> box, it changes the background to the new color, but then pushes the old color onto the stack (shown as the comma-separated list). When you pop the item, the stack removes the last item and makes that the active color. This is very similar to the way that a web browser retains its internal history.

Pop!

Contemporary Javascript arrays are far more powerful than they have been in the past. They provide a foundation for higher-order manipulation of functions, let you do filtering and searching, and can even turn complex functions (often ones that relied heavily upon recursion) into chained map/reduce operations. This should make arrays and array functions a staple for both text processing and data analysis. Learning how to work with arrays properly can give you a distinct advantage as a Javascript programmer.

One final note: While ES6 is rapidly making its way into most contemporary browsers and environments such as node.js, these features may not necessarily be available in older browsers (especially constructs like of and predicates). There are a number of polyfill libraries that will let you get close, most notably the Babel polyfill.

Kurt Cagle is the founder and chief ontologist for Semantical, LLC, a smart data company.
He has been working with Javascript since 1996.
http://www.shellsec.com/news/16208.html
CC-MAIN-2016-44
en
refinedweb
This blog post has been updated to reflect changes in VS 2010 RC or later build. If you are on VS 2010 Beta 2, please upgrade before trying this post.

One question that keeps coming up is "How to configure search properties used by the recorder\code generation for identifying the control?" For example, informing the recorder that the Name of a certain control is dynamic and not to use it to identify the control. Well, this feature per se is missing in the VS 2010 release of Coded UI Test. However, there is a fairly simple workaround to this issue using the extensibility support of Coded UI Test.

Configuring search properties may mean adding, removing, or changing search properties. Using the extensibility support, remove and change of search properties is possible but not add (unless you know the property value too). The extensibility sample for this is attached to this post as RemoveUnwantedProperties.zip. Before starting with this sample, ensure you have read at least the first two posts in the Coded UI Test Extensibility series.

Explaining the Sample -
- The sample hooks into the UITest.Saving event. This event is raised before the UITest file is saved and code is generated for it.
- It then iterates through all the UI objects. The UI objects are stored in a hierarchical tree. The sample uses recursion to do a pre-order traversal of the tree. (Check for the calls to the RemoveUnwantedSearchCondition() method.)
- During traversal, for each UI object, it calls the GetRedundantSearchProperties() method in the sample. Based on the property names returned by this method, it removes those properties, if present, from the UI object's search criteria.
- The current implementation of the GetRedundantSearchProperties() method simply removes the "Name" property for all but a few Win32\WinForms controls from the search criteria. Depending on your need, you can customize this function accordingly.

Please note that since the sample is removing a search property, it might make the search criteria weaker and result in playback failure. This is the reason for claiming this is only a "workaround".

Hi, what VS2010 version do I need to use your example? RC? It seems the namespace 'CodeGeneration' doesn't exist in Beta 2. :( Thanks.

Hi Buck, The sample is for Beta 2 only. The possibility could be that some reference is not resolved properly. Check for any warning in the references, particularly Microsoft.VisualStudio.TeamTest.UITest.CodeGeneration.dll. You might have to fix the path to this dll. Thanks.

Hi, you're right. Thanks. :)

Great content! The extensibility series totally rocks! I wish the comments on some of the older posts were still enabled for folks to ask questions on relevant posts. Is there a reason you want to close commenting on older posts?

Anu - All the posts are open to commenting. I have not done anything deliberate to block comments.

Hi Gautam, I need help on using separate UIMAP.uitest files. Here is my scenario.
1) I have created a new folder "Screen1" under my project "TSApp".
2) I added a new CodedUI test by right-clicking the "Screen1" folder -> "Screen1TEST.cs"
3) Then added a new CodedUIMap item by right-clicking the "Screen1" folder -> "Screen1UIMAP.uitest" and recorded some actions -> saved as "Method_Add". Here it created the "Screen1UIMAP.cs" and "Screen1UIMAP.Designer.cs" files.
4) Then I went to the "Screen1TEST.cs" file and tried to call the recorded "Method_Add" as TSApp.Screen1.Screen1UIMAP.Method_Add(), but here it is not listing out this "Screen1UIMAP" file under TSApp.Screen1. Anything I did wrong?
RemoveUnwantedProperties.zip

Hi, what VS 2010 version do I need to use your example? RC? It seems the namespace 'CodeGeneration' doesn't exist in Beta 2. :( Thanks.

Hi Buck, the sample is for Beta 2 only. The possibility could be that some reference is not resolved properly. Check for any warnings on the references, particularly Microsoft.VisualStudio.TeamTest.UITest.CodeGeneration.dll. You might have to fix the path to this dll. Thanks.

Hi, you're right. Thanks. :)

Great content! The extensibility series totally rocks! I wish the comments on some of the older posts were still enabled for folks to ask questions on relevant posts. Is there a reason you want to close commenting on older posts?

Anu - All the posts are open to commenting. I have not done anything deliberate to block comments.

Hi Gautam, I need help on using separate UIMap.uitest files. Here is my scenario. 1) I created a new folder "Screen1" under my project "TSApp". 2) I added a new Coded UI test by right-clicking the "Screen1" folder –> "Screen1TEST.cs". 3) Then I added a new Coded UI Map item by right-clicking the "Screen1" folder –> "Screen1UIMAP.uitest", recorded some actions, and saved them as "Method_Add". Here it created the "Screen1UIMAP.cs" and "Screen1UIMAP.Designer.cs" files. 4) Then I went to the "Screen1TEST.cs" file and tried to call the recorded "Method_Add" as TSApp.Screen1.Screen1UIMAP.Method_Add(), but it does not list this "Screen1UIMAP" file under TSApp.Screen1. Did I do anything wrong? Can you give me an idea on this, or tell me the approach to using a separate UIMap file and creating a test based on it? Thanks in advance, Shanmugavel.

Please use the Coded UI Test forum – social.msdn.microsoft.com/…/threads – for such questions. Thanks.

Thanks for the great post; I am trying this for my project to customize href/PageUrl, but I am not sure if I got it right. Even though I have copied this demo to the required folder, I am still not getting the desired result. Any help on how I can debug this?

@subbu – What is the error that you are seeing? Note that the paths to copy the files to are different on a 32-bit machine vs. a 64-bit machine. Check for details in blogs.msdn.com/…/2-hello-world-extension-for-coded-ui-test.aspx. Try the above Hello, World plugin to see if you are able to get that working.

Hi Gautam, can you elaborate some more on "Using the extensibility support, remove and change of search properties is possible but not add (unless you know the property value too)."? I was having trouble with Coded UI Test recognizing some MFC controls after receiving a new test build. I want to help Coded UI Test find the controls, perhaps by adding search properties. Most likely this will need to be done by hand coding and not recording. Can you point me in the right direction? Thanks in advance.

The above extensibility hook runs during saving of the generated file. Hence it can only remove (or make certain changes to) the properties captured by the recorder. To add properties, we would need a hook into the recorder to ask it to capture the properties in the first place, which is not there today. Thanks.
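One hedged workaround for the hand-coding route asked about above (illustrative only; this is not from the original post): after code generation, you can adjust a control's search properties directly in your test code or in the UIMap's partial class. The SearchProperties collection is part of the public Coded UI Test playback API (namespace Microsoft.VisualStudio.TestTools.UITesting); the control, parent, and property values below are hypothetical.

// Hypothetical example: help playback find an MFC/Win32 button whose
// recorded search criteria are too weak by adding a property by hand.
// "mainWindow" is an assumed parent UITestControl; "Save" is a made-up value.
WinButton saveButton = new WinButton(mainWindow);
saveButton.SearchProperties.Add(UITestControl.PropertyNames.Name, "Save");

// A value captured by the recorder can also be overwritten via the indexer:
saveButton.SearchProperties[UITestControl.PropertyNames.ClassName] = "Button";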
https://blogs.msdn.microsoft.com/gautamg/2010/02/02/configure-search-properties-used-by-recordercode-generation/
CC-MAIN-2016-44
en
refinedweb
Great post! I think blocks are also difficult to understand.

"Don't test because a segment of the Ruby community forces you to, do it because it is so damn easy there is no reason not to." As a Ruby newbie who's just started playing with test/unit, this seems to be a perfectly valid statement.

Thanks for the feedback and the challenge. Automate things. Write tests and have them executed in the background every time you save a file. What might seem like more work in the beginning turns out to be less work in the end.

Felipe: Dunno if this helps or not, but I didn't really grok blocks until I coded the equivalent in prototype.js (which mimics much of Ruby in JavaScript). Consider the "doubler" mapping in Ruby:

[1, 2, 3].map { |x| x * 2 }

The equivalent in prototype.js:

[1, 2, 3].map(function(x) { return x * 2 })

Looking at it this way, I realized that Ruby blocks are just anonymous functions with less verbosity. There are some semantic differences between blocks and methods in Ruby, but 90% of the time you can think of them as equivalent. Of course, you can do some interesting stuff with the other 10%, but that's something to look forward to :)

I love the comment you made above about how close Ruby allows you to get to the domain of the problem and how clearly it lets you express the intent. The newbie challenge was a very interesting problem, and one that I actually spent three hours on, just trying to do the math rather than actually coding. Once I finally had the math figured out, the Ruby came almost effortlessly. I spent about 15 minutes coding and testing. But the remarkable thing is that the solution decidedly took the exact shape of the hand-written worksheet where I solved the problem mathematically. Powerful stuff!

I can't get the tests to run without placing them in a class which is a subclass of Test::Unit::TestCase. Is it possible to run test/unit tests without doing so? Could you provide a gist? Thanks! Also, test_simple_times is missing a paren.

nstielau - Thanks! Two copy & paste errors in three lines of code is not a good ratio :-/ You are correct, the test cases do need to be a subclass of Test::Unit::TestCase (corrected above). Not sure how I missed that, but thanks for pointing it out!

In the symbol-to-proc blog posting you mentioned, there is a comment that the symbol-to-proc method was over 3x slower than the {|i| i.to_time} version. I don't know if this is still the case (2009 vs 2006), but it's worth considering. Isn't symbol-to-proc essentially syntactic sugar, or is there a more substantial / less obvious benefit that might outweigh the slower performance?

Pete - There is no overhead for symbol-to-proc... at least not in Ruby 1.8.7+. The compiler remembers the initial symbol-to-proc conversion and uses it in subsequent iterations. So you can have clearer intent without worrying about a performance penalty.

Hi Chris, great document - a must-read for all newbies. After reading the inject, chaining, and symbol-to-proc sections, I consider myself a newbie again! :D

There is a subtle issue with your time summing: in case the array is empty you get nil and not 0 (omitting the map here):

irb(main):006:0> [].inject(&:+)
=> nil

You rather want this form:

irb(main):007:0> [].inject(0, &:+)
=> 0

I don't believe the remark about adding a ! to average_time_of_day is quite correct. To begin with, that particular method shouldn't have any side effects anyway. If you really want to manipulate an object passed into a method, call #dup on it first. The second thing is that adding a ! to methods that change data only really applies to situations where there is also a method without ! that works on/returns a copy. There are exceptions, of course, supposedly for "dangerous" situations. But suffixing ! to every method that changes data would be a bit excessive.

Excellent advice about testing that method. Using TDD on functions makes it so easy, I always feel it's practically cheating.

Mark -- I agree. I would not suggest adding a bang to average_time_of_day. I think it more appropriate to leave it side-effect free and not use any bang methods inside it. The rest of your point is well taken.

In the interest of correcting mistakes... In Ruby, parameters are NOT passed by reference. They are passed by value. You are confusing the concepts of immutability and pass by reference. In pass by reference, the formal parameter becomes an *alias* for the actual argument in the method body, and so any changes made inside the method will be reflected afterwards. Simple proof of Ruby's not-pass-by-reference-ness:

irb(main):018:0> def changeme(a)
irb(main):019:1>   a = 10
irb(main):020:1> end
=> nil
irb(main):021:0> b = 20
=> 20
irb(main):022:0> changeme(b)
=> 10
irb(main):023:0> b
=> 20

Under pass-by-reference semantics, b's value would be 10. But in Ruby it clearly isn't; it's still 20. However, Ruby *is* an impure or mutable language (unlike Haskell, Erlang, etc.) and so does have side effects. This means that if a method body is given a copy of a reference to an object, and makes a change to that object, that too will be reflected after the method call. It is for these types of methods that Ruby uses the bang convention. The reason this is confusing to many people is that in Ruby everything is an object. But it's still pass by value; you're just passing around references to objects. Subtle, but different. (In fact, it's the exact same argument-passing semantics as Java, minus Java's primitive types.)

El Popa - Agreed. You are completely right. I've updated the post accordingly. Thanks!

hey Chris, I was wondering about performance when you chain things like x.map(blah).inject(blah). I know it is really practical and cool to use and nice to read, but aren't we adding lots of avoidable iterations (compared with using a simple while loop), slowing down the code?

Nice post... Now I wish I had done the exercise before reading the post to see how close we were :)

# A method that generates a side effect:
def changeme!(a)
  puts a.object_id
  a[3] = "yep"
end

b = {}
b.object_id  # => 12860680
changeme!(b) # prints 12860680
b            # => {3=>"yep"}

# A method with no side effect:
def changeme(a)
  puts a.object_id
  a = "rudolph"
  puts a.object_id
end
# => nil

b = {1=>2}
# => {1=>2}
b.object_id
# => 12535460
changeme(b)  # prints 12535460, then 12459832
# => nil
b
# => {1=>2}
b.object_id
# => 12535460
https://japhr.blogspot.com/2009/10/newbie-feedback.html
CC-MAIN-2016-44
en
refinedweb