What: mpexpr
Where: If compiling and adding mpexpr to your Tcl environment sounds too much like work and you happen to be running Linux or Unix, check out Bignums Fast and Dirty.
Description: Tcl 7.6/8.0 extension (adding mpexpr and mpformat) that supports multiple precision math for Tcl. Tested on Solaris and Linux; a Windows port has begun. Currently at version 1.0.
Updated: 09/2000
Contact: mailto:[email protected] (Tom Poindexter)

TC - Mpexpr is now being hosted at SourceForge.

TR - Mpexpr (Version 1.1) only compiles properly up to Tcl 8.3. After the CONSTification [1] that took place for Tcl 8.4, compilation fails with an error. You can, however, work around this problem without changing the source code of Mpexpr. Just run the configure script first, then edit the resulting Makefile. In the line saying

TCL_SHLIB_CFLAGS = -fPIC

add another entry to make the line look like this:

TCL_SHLIB_CFLAGS = -fPIC -DUSE_NON_CONST

You can then compile properly and get the extension running for Tcl 8.4 (tested on Linux only).

A note of caution: It is not always a good idea to just replace the original expr command by mpexpr like so:

package require Mpexpr
interp alias {} expr {} mpexpr

This is dangerous because the mpexpr arguments do not have exactly the same syntax as the original expr command. For example, wide is missing. Also, it can interfere with Tk: the panedwindow from Tk 8.4 will throw an error when trying to move the sash if the above alias is used ... so just in case, be cautious!

LV - Does Mpexpr provide any functionality not included in the Tcl 8.5 larger number support? Just curious whether this code will evolve, or whether it will no longer be needed, or what.

AM - Tcl 8.5 adds support for large integers, but not, as far as I know, for arbitrary-precision reals. So that is where this package will still be useful.

escargo 6 Dec 2005 - Would this package be faster for multiprecision arithmetic as needed for the Programming Language Shootout? [2]

AM - Sure, if you compare it to the Tcl-only solution in Tcllib. Whether it is faster than the library incorporated in Tcl 8.5, I do not know.

escargo 8 Dec 2005 - Somebody tried to use this package for the Shootout, but it wasn't available on the host system, so it got an error and bombed out. Thanks for trying.
During Feb, 2002, Gerald Lester posted some basic Tcl code to news:comp.lang.tcl:

package provide packedDecimal 0.1

namespace eval packedDecimal {
    namespace export add subtract multiply divide setDecimals
    variable decimals 2
    variable formatString {%d.%2.2d}
    variable carry 100
}

proc packedDecimal::add {a b} {
    variable decimals
    variable formatString
    scan $a %d.%d a1 a2
    scan $b %d.%d b1 b2
    incr a2 $b2
    if {[string length $a2] > $decimals} then {
        incr a1 1
        set a2 [string range $a2 1 end]
    }
    incr a1 $b1
    return [format $formatString $a1 $a2]
}

proc packedDecimal::subtract {a b} {
    variable decimals
    variable formatString
    variable carry
    scan $a %d.%d a1 a2
    scan $b %d.%d b1 b2
    incr a2 -$b2
    if {$a2 < 0} then {
        incr b1 1
        set a2 [expr {$carry + $a2}]
    }
    incr a1 -$b1
    return [format $formatString $a1 $a2]
}

proc packedDecimal::multiply {a b} {
    variable decimals
    variable formatString
    return -code error {Sorry, Multiply is not yet implemented!}
}

proc packedDecimal::divide {a b} {
    variable decimals
    variable formatString
    return -code error {Sorry, Divide is not yet implemented!}
}

proc packedDecimal::setDecimals {a} {
    variable decimals
    variable formatString
    variable carry
    set formatString [format {%%d.%%%d.%dd} $a $a]
    set decimals $a
    set carry [format "1%${a}.${a}d" 0]
    return
}

proc packedDecimal::getDecimals {} {
    variable decimals
    return $decimals
}
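A quick usage sketch (my own, not from the original post), assuming the code above has been sourced. Note that the scan-based parsing is fragile (a fraction like .05 loses its leading zero), so treat this as illustrative only:

packedDecimal::setDecimals 2
puts [packedDecimal::add 1.25 2.50]       ;# prints 3.75
puts [packedDecimal::subtract 5.00 1.25]  ;# prints 3.75 (borrows via the carry variable)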
http://wiki.tcl.tk/1358
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hey all,

A quick addendum: Mark D. Baushke from the CVS development team suggested that ~0 might be a better value than -1 as it is more obviously positive. He also suggested that it wouldn't require the typecast, but I believe this would not be the case, as I think ((size_t) ~0) and, say, ((long long) ~0) may not be the same number, and allowing the compiler to pick how many bytes to shove ~0 into based on the context of each use of SIZE_MAX is almost certainly wrong here. I can see the desirability of ((size_t) ~0) for the simple sake of looking more obviously positive than ((size_t) -1), however.

Derek

Derek Robert Price wrote:

>Hey all, here's a patch for lib/xsize.h: Basically, a default value for
>SIZE_MAX needs to be provided under Windows. I did the same thing it
>looks like Paul did in lib/quotearg.c and defined it to ((size_t) -1)
>when no value has been found in the includes.
>
>Index: lib/xsize.h
>===================================================================
>RCS file: /cvs/ccvs/lib/xsize.h,v
>retrieving revision 1.1
>diff -u -p -r1.1 xsize.h
>--- lib/xsize.h 15 Feb 2004 22:55:53 -0000 1.1
>+++ lib/xsize.h 7 Mar 2004 22:29:44 -0000
>@@ -28,6 +28,11 @@
># include <stdint.h>
>#endif
>
>+/* Windows does not supply a definition for SIZE_MAX. */
>+#ifndef SIZE_MAX
>+# define SIZE_MAX ((size_t) -1)
>+#endif
>+
>/* The size of memory objects is often computed through expressions of
>type size_t. Example:
>void* p = malloc (header_size + n * element_size).
>
>Thanks,
>
>Derek

- --
*8^)

Email: address@hidden

Get CVS support at <>!
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.1 (GNU/Linux)
Comment: Using GnuPG with Netscape

iD8DBQFAS9cbLD1OTBfyMaQRAvudAJ4hiHTaitEnJvPNC2mD88U8yjxCawCfZSSb
9yGNvVFMuZJRqdsGybdSYSs=
=rwLc
-----END PGP SIGNATURE-----
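As a side note illustrating the point about the cast (my sketch, not part of the thread): the type into which ~0 or -1 is converted is what fixes the resulting value, which is why the explicit (size_t) cast matters.

#include <stdio.h>
#include <stddef.h>

/* Fallback in the style of the patch above: (size_t)-1 converts to the
 * maximum value of size_t by C's unsigned-conversion rules, regardless
 * of how wide size_t happens to be on the platform. */
#ifndef SIZE_MAX
# define SIZE_MAX ((size_t) -1)
#endif

int main(void)
{
    /* Both expressions yield the maximum value of size_t. */
    printf("(size_t)-1 = %zu\n", (size_t) -1);
    printf("(size_t)~0 = %zu\n", (size_t) ~0);
    /* Without the cast, ~0 is a plain int with value -1; its width
     * depends on int, not on the context it is used in -- which is
     * Derek's objection to dropping the typecast. */
    return 0;
}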
http://lists.gnu.org/archive/html/bug-gnulib/2004-03/msg00013.html
Implements logging facilities. D provides a standard interface for logging. The easiest way to create a log message is to write:

import std.experimental.logger;
void main() {
    log("Hello World");
}

This will print a message to the stderr device. Individual Loggers and the global log functions share commonly named functions to log data. The names of the functions are as follows: log, trace, info, warning, critical, fatal.

The default Logger will log to stderr and has a default LogLevel of LogLevel.all. The default Logger can be accessed by using the property called sharedLog. This property is a reference to the current default Logger, and this reference can be used to assign a new default Logger:

sharedLog = new FileLogger("New_Default_Log_File.log");

Additional Loggers can be created by creating a new instance of the required Logger class.

The LogLevel of a log call can be defined in two ways. The first is by calling log and passing the LogLevel explicitly as the first argument. The second way is to call either trace, info, warning, critical, or fatal; the log call will then have the respective LogLevel. If no LogLevel is defined, the log call will use the current LogLevel of the used Logger. If data is logged with LogLevel.fatal, by default an Error will be thrown. This behaviour can be modified by using the member fatalHandler to assign a custom delegate to handle log calls with LogLevel.fatal.

Conditional logging is done by passing a bool as the first argument to a log function. If conditional logging is used, the condition must be true in order for the log message to be logged. In order to combine an explicit LogLevel with conditional logging, the LogLevel has to be passed as the first argument, followed by the bool.

A message is logged if the LogLevel of the log message is greater than or equal to the LogLevel of the used Logger, and additionally if the LogLevel of the log message is greater than or equal to the global LogLevel. If a condition is passed into the log call, that condition must also be true. The global LogLevel is accessible by using globalLogLevel. To assign the LogLevel of a Logger, use the logLevel property of the logger.

By default, log calls are not processed directly by the sharedLog Logger. Instead, a thread-local Logger of type StdForwardLogger processes the log call and then, by default, forwards the created Logger.LogEntry to the sharedLog Logger. The thread-local Logger is accessible through the stdThreadLocalLog property, which also allows assigning a user-defined Logger. The default LogLevel of the stdThreadLocalLog Logger is LogLevel.all, so it will forward all messages to the sharedLog Logger. The LogLevel of stdThreadLocalLog can be used to filter log calls before they reach the sharedLog Logger.

To customize Logger behaviour, create a new class that inherits from the abstract Logger class and implements the writeLogMsg method. As an alternative to overriding that method, the methods beginLogMsg, logMsgPart and finishLogMsg can be overridden.

Logging can be disabled at compile time by passing StdLoggerDisableLogging as a version argument to the D compiler when compiling your program code. This will disable all logging functionality. Specific LogLevels can be disabled at compile time as well; for example, in order to disable logging with the trace LogLevel, pass StdLoggerDisableTrace as a version. The following table shows which version statement disables which LogLevel. Such a version statement will only disable logging in the associated compile unit.

Several ready-made Logger implementations are provided. The FileLogger logs data to files. It can also be used to log to stdout and stderr, as these devices are files as well. A Logger that logs to stdout can therefore be created by new FileLogger(stdout). The MultiLogger is basically an associative array of strings to Loggers; it propagates log calls to its stored Loggers. The ArrayLogger contains an array of Loggers and also propagates log calls to its stored Loggers. The NullLogger does not do anything: it will never log a message and will never throw on a log call with LogLevel.error.

© 1999–2019 The D Language Foundation. Licensed under the Boost License 1.0.
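As a concrete sketch of the customization hook described above (the class name TimestampLogger is my own invention; the constructor argument and the LogEntry field names follow the documented API, though exact signatures may vary between compiler releases):

import std.experimental.logger;
import std.stdio : writefln;

// A minimal custom Logger: prints one timestamped line per message.
class TimestampLogger : Logger
{
    this(LogLevel lv) @safe
    {
        super(lv);
    }

    // The abstract method every Logger subclass must implement.
    override void writeLogMsg(ref LogEntry payload)
    {
        writefln("%s [%s] %s:%d %s", payload.timestamp, payload.logLevel,
                 payload.file, payload.line, payload.msg);
    }
}

void main()
{
    sharedLog = new TimestampLogger(LogLevel.all);
    info("custom logger in action");
}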
https://docs.w3cub.com/d/std_experimental_logger
Stack-like allocator. This allocator allocates single elements of POD type from a stack-like pool. The allocator is intentionally not thread safe and is intended to be used in multi-threaded applications where each thread is allocating and freeing elements in a stack-like order from its own allocator. Definition at line 21 of file StackAllocator.h.

#include <StackAllocator.h>

Type of values being allocated. Definition at line 23 of file StackAllocator.h.

Free unused large blocks. Definition at line 44 of file StackAllocator.h.

Reserve space for objects. Reserves space for nElmts elements of type T and returns a pointer to the first one. Although this reserves space for objects, the memory is not yet usable to store an object. The next allocation request will give back the same address as returned here, unless the allocator is reverted to an earlier address in the meantime. Definition at line 54 of file StackAllocator.h.

Allocate one element. If this allocation is below the amount previously reserved then it is guaranteed to be adjacent to the previous allocation. Definition at line 64 of file StackAllocator.h.

Free all elements back to the specified element. Reverts the state of this allocator so that the specified ptr would be the return value from allocNext. If this results in whole large blocks becoming unused, they are not returned to the system (call prune if that's desired). Definition at line 73 of file StackAllocator.h.
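Putting the pieces above together as a hypothetical usage sketch: the documentation names allocNext, revert, and prune explicitly, while the spelling of the reservation method (reserve here), the header path, and the constructor signature are my assumptions.

#include <Sawyer/StackAllocator.h>      // header path assumed from the ROSE/Sawyer docs

void example() {
    Sawyer::StackAllocator<int> alloc;  // constructor arguments, if any, omitted

    int *first = alloc.reserve(3);      // reserve space for three ints (not yet usable)
    int *a = alloc.allocNext();         // same address as `first`
    int *b = alloc.allocNext();         // adjacent to `a` (within the reservation)
    int *c = alloc.allocNext();         // adjacent to `b`
    *a = 1; *b = 2; *c = 3;

    alloc.revert(b);                    // stack-like free: b and c are released
    alloc.prune();                      // return wholly-unused large blocks to the system
}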
http://rosecompiler.org/ROSE_HTML_Reference/classSawyer_1_1StackAllocator.html
Badger::Class::Methods - metaprogramming module for adding methods to a class

package My::Module;

# using the module directly
use Badger::Class::Methods
    accessors => 'foo bar',
    mutators  => 'wiz bang';

# or via Badger::Class
use Badger::Class
    accessors => 'foo bar',
    mutators  => 'wiz bang';

This module can be used to generate methods for a class. It can be used directly, or via the accessors, mutators and slots export hooks in Badger::Class.

generate()
This method is a central dispatcher to the other methods.

Badger::Class::Methods->generate(
    accessors => 'foo bar',
);

accessors()
This method can be used to generate accessor (read-only) methods for a class (Badger::Class object) or package name. You can pass a list, a reference to a list, or a whitespace-delimited string of method names as arguments.

# these all do the same thing
Badger::Class::Methods->accessors('My::Module', 'foo bar');
Badger::Class::Methods->accessors('My::Module', 'foo', 'bar');
Badger::Class::Methods->accessors('My::Module', ['foo', 'bar']);

A method will be generated in the target class for each name, returning the object member data of the same name. The method itself is generated by calling the accessor() method.

accessor()
This method generates an accessor method for accessing the item in an object denoted by $name. The method is returned as a code reference. It is not installed in the symbol table of any package - that's up to you (or use the accessors() method).

my $coderef = Badger::Class::Methods->accessor('foo');

The code generated is equivalent to this:

sub foo {
    $_[0]->{ foo };
}

mutators()
This method can be used to generate mutator (read/write) methods for a class (Badger::Class object) or package name. You can pass a list, a reference to a list, or a whitespace-delimited string of method names as arguments.

# these all do the same thing
Badger::Class::Methods->mutators('My::Module', 'foo bar');
Badger::Class::Methods->mutators('My::Module', 'foo', 'bar');
Badger::Class::Methods->mutators('My::Module', ['foo', 'bar']);

A method will be generated in the target class for each name, returning the object member data of the same name. If an argument is passed to the method then the member data is updated and the new value returned. The method itself is generated by calling the mutator() method.

mutator()
This method generates a mutator method for accessing and updating the item in an object denoted by $name. The method is returned as a code reference. It is not installed in the symbol table of any package - that's up to you (or use the mutators() method).

my $coderef = Badger::Class::Methods->mutator('foo');

The code generated is equivalent to this:

sub foo {
    @_ == 2
        ? ($_[0]->{ foo } = $_[1])
        :  $_[0]->{ foo };
}

Ugly, isn't it? But of course you wouldn't ever write it like that, being a conscientious Perl programmer concerned about the future readability and maintainability of your code. Instead you might write it something like this:

sub foo {
    my $self = shift;
    if (@_) {
        # an argument implies a set
        return ($self->{ foo } = shift);
    }
    else {
        # no argument implies a get
        return $self->{ foo };
    }
}

Or perhaps like this:

sub foo {
    my $self = shift;
    # update value if an argument was passed
    $self->{ foo } = shift if @_;
    return $self->{ foo };
}

Or even like this (my personal favourite):

sub foo {
    my $self = shift;
    return @_
        ? ($self->{ foo } = shift)
        :  $self->{ foo };
}

Whichever way you do it is a waste of time, both for you and anyone who has to read your code at a later date. Seriously, give it up! Let us generate the methods for you.
We'll not only save you the effort of typing pages of code that no-one will ever read (or want to read), but we'll also generate the most efficient code for you, the kind that you wouldn't normally want to write by yourself. So in summary, using this method will keep your code clean, keep your code efficient, and free up the rest of the afternoon so you can go out skateboarding. Tell your boss I said it was OK.

hash()
This method generates methods for accessing or updating items in a hash reference stored in an object. In the following example we create a users() method for accessing the internal users hash reference.

package Your::Module;
use base 'Badger::Base';
use Badger::Class::Methods
    hash => 'users';

sub init {
    my ($self, $config) = @_;
    $self->{ users } = $config->{ users } || { };
    return $self;
}

The init() method copies any users passed as a configuration parameter, or creates an empty hash reference.

my $object = Your::Module->new(
    users => {
        tom => '[email protected]',
    }
);

When called without any arguments, the generated users() method returns a reference to the users hash array.

print $object->users->{ tom };   # [email protected]

When called with a single non-reference argument, it returns the entry in the hash corresponding to that key.

print $object->users('tom');     # [email protected]

When called with a single reference to a hash array, or a list of named parameters, the method will add the new items to the internal hash array. A reference to the hash array is returned.

$object->users({                 # single hash ref
    dick  => '[email protected]',
    harry => '[email protected]',
});

$object->users(                  # list of named parameters
    dick  => '[email protected]',
    harry => '[email protected]',
);

initialiser()
This method can be used to create a custom init() method for your object class. A list, a reference to a list, or a string of whitespace-delimited method names should be passed as argument(s). A method will be generated which calls each named method in turn, passing a reference to a hash array of configuration parameters.

Badger::Class::Methods->initialiser(
    'My::Module',
    'init_foo init_bar'
);

The above example will generate an init() method in My::Module equivalent to:

sub init {
    my ($self, $config) = @_;
    $self->{ config } = $config;
    $self->init_foo($config);
    $self->init_bar($config);
    return $self;
}

It's up to you to implement the init_foo() and init_bar() methods, or to inherit them from a base class or mixin.

slots()
This method can be used to define methods for list-based object classes.

Badger::Class::Methods->slots('My::Module', 'foo bar');
Badger::Class::Methods->slots('My::Module', 'foo', 'bar');
Badger::Class::Methods->slots('My::Module', ['foo', 'bar']);

It is usually called indirectly via the slots export hook in Badger::Class.

auto_can()
This method can be used to define a method that automatically generates other methods on demand. Suppose you have a view class that renders a view of a tree. In classic double-dispatch style, each node in the tree calls a method against the view object corresponding to the node's type. A text node calls $view->view_text($self), a bold node calls $view->view_bold($self), and so on (we're assuming that this is some kind of document object model we're rendering, but it could apply to anything). Our view methods might look something like this:

sub view_text {
    my ($self, $node) = @_;
    print "TEXT: $node\n";
}

sub view_bold {
    my ($self, $node) = @_;
    print "BOLD: $node\n";
}

This can get rather repetitive and boring if you've got lots of different node types.
So instead of defining all the methods manually, you can declare an auto_can method that will create methods on demand.

use Badger::Class
    auto_can => 'can_view';

sub can_view {
    my ($self, $name) = @_;
    my $NAME = uc $name;
    return sub {
        my ($self, $node) = @_;
        print "$NAME: $node";
    }
}

The method should return a subroutine reference, or any false value if it declines to generate a method. For example, you might want to limit the generator method to only creating methods that match a particular format.

sub can_view {
    my ($self, $name) = @_;

    # only create methods that are prefixed with 'view_'
    if ($name =~ s/^view_//) {
        my $NAME = uc $name;
        return sub {
            my ($self, $node) = @_;
            print "$NAME: $node";
        }
    }
    else {
        return undef;
    }
}

The auto_can() method adds AUTOLOAD() and can() methods to your class. The can() method first looks to see if the method is pre-defined (i.e. it does what the default can() method does). If it isn't, it then calls the can_view() method that we've declared using the auto_can option (you can call your method auto_can() if you like, but in this case we're calling it can_view() just to be different). The end result is that you can call can() and it will generate any missing methods on demand.

# this calls can_view() which returns a CODE sub
my $method = $object->can('view_italic');

The AUTOLOAD() method is invoked whenever you call a method that doesn't exist. It calls the can() method to automatically generate the method and then installs the new method in the package's symbol table. The next time you call the method it will be there waiting for you. There's no need for the AUTOLOAD() method to get involved from that point on.

# this calls can_view() to create the method and then calls it
$object->view_cheese('Camembert');   # CHEESE: Camembert

# this directly calls the new method
$object->view_cheese('Cheddar');     # CHEESE: Cheddar

If your can_view() method returns a false value, then AUTOLOAD() will raise the familiar "Invalid method..." error that you would normally get from calling a non-existent method.

These methods inspect the arguments and perform the necessary validation for the accessors(), mutators() and slots() methods.

Andy Wardley

This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~abw/Badger-0.09/lib/Badger/Class/Methods.pm
W3C's SOAP provides a way to package messages or requests in XML envelopes. SOAP is a W3C framework or protocol for the exchange of information between "peers in a decentralized, distributed environment". SOAP is managed by the W3C's XML Protocol Working Group. Originally, SOAP was an acronym for Simple Object Access Protocol, but that interpretation has been dropped. The original SOAP attracted a lot of interest early on, but as applications of SOAP rolled out (such as the API for Google), many people complained that non-SOAP interfaces were easier to use. Amazon's Web Services developer kit offers SOAP and REST (or Representational State Transfer, which is basically XML plus HTTP) interfaces, but the uptake for the REST interface seems to have been greater than for SOAP. Some prefer to avoid SOAP and commercial web services solutions altogether. Nevertheless, SOAP remains an approach at least worth examining.

The main concept behind SOAP is the remote procedure call (RPC), whereby one computer on a network can send a message or command to another computer on the network. The computer that receives the call can hand such an RPC to a local program. Then, if desired, the local program can send a reply (such as a return value) back to the original machine it came from.

SOAP is much more than just an XML vocabulary, though an XML vocabulary is an important part of it. The information in a SOAP envelope can be represented in forms other than XML across the wire. SOAP has a basic data model that represents structures as "a directed edge-labeled graph of nodes." SOAP also has a set of optional encoding rules that may be used to encode messages that conform to the SOAP data model. This hack will demonstrate what a SOAP envelope looks like in XML and some of the features that surround it. Example 4-6 shows soap.xml, an XML representation of a SOAP message.

<?xml version="1.0" encoding="UTF-8"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
 <env:Header>
  <dt:instant
      xmlns:dt="urn:example:datetime"
      env:role="http://www.w3.org/2003/05/soap-envelope/role/next"
      env:mustUnderstand="true">
   <dt:date><!-- date value --></dt:date>
  </dt:instant>
 </env:Header>
 <env:Body>
  <tz:time
      xmlns:tz="urn:example:timezone"
      env:encodingStyle="http://www.w3.org/2003/05/soap-encoding">
   <tz:hour>11</tz:hour>
   <tz:minute>59</tz:minute>
   <tz:second>59</tz:second>
   <tz:meridiem>p.m.</tz:meridiem>
   <tz:atomic/>
  </tz:time>
 </env:Body>
</env:Envelope>

The document element is Envelope, the container for the SOAP message. The env: prefix is the customary prefix for SOAP envelopes using the http://www.w3.org/2003/05/soap-envelope namespace. The Envelope element has two children, Header and Body. Header is optional, but Body is not. Both of these elements contain XML elements from some other namespace. SOAP messages are expected to be sent to at least one SOAP receiver, though there may be more. SOAP intermediaries may be included along the path of a SOAP message, and each of those intermediaries is expected to have some role in processing or handling the message. The role attribute, with the standard "next" role value, indicates that the application receiving the message must look over and perhaps process the message before sending it on to the next destination. The mustUnderstand attribute with a value of true means the header block that contains it must be understood by (i.e., properly handled by) the application receiving the message. The information contained in the Body element is considered the payload of the message. The encodingStyle attribute, whose value is the SOAP encoding namespace, tells the processing application that the contents of the structure follow SOAP's optional encoding rules. Finally, SOAP may be transported by a variety of protocols, most prominently HTTP. For example, you can grab SOAP messages with HTTP GET or post a message with HTTP POST.
Other transportation methods for SOAP include web architectures that identify SOAP messages with URIs, and email. For a more thorough introduction to SOAP, read:
- The SOAP v1.2 primer (Part 0)
- The SOAP v1.2 messaging framework (Part 1), including information on the SOAP processing model and message constructs
- The SOAP v1.2 adjuncts (Part 2), including information on the SOAP data model, encoding, and HTTP binding
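For illustration, here is roughly what the envelope from Example 4-6 looks like when posted over HTTP (a sketch: the host, path, and length are invented; the application/soap+xml media type is the one defined for SOAP 1.2):

POST /time-service HTTP/1.1
Host: example.com
Content-Type: application/soap+xml; charset=utf-8
Content-Length: 154

<?xml version="1.0" encoding="UTF-8"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
  <env:Body>
    <!-- payload as in Example 4-6 -->
  </env:Body>
</env:Envelope>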
https://etutorials.org/XML/xml+hacks/Chapter+4.+XML+Vocabularies/Hack+63+Create+a+SOAP+1.2+Document/
OpenCV skills | Saving pictures in common formats as transparent background pictures (with Python source code) - teach you to easily make a logo

2021-08-23 【Color Space】

Reading guide
This article introduces a method, with implementation code, for using OpenCV to save pictures in common formats as transparent-background pictures.

Goals
① Convert a common-format [jpg/png/bmp] white-background picture into, and save it as, a transparent-background picture;
② Convert a common-format [jpg/png/bmp] complex-background picture into, and save it as, a transparent-background picture.

Implementation steps
① Load the picture in color mode;
② Convert the image from the BGR color space to the BGRA color space;
③ Set the A channel to 0 for every pixel that is white in the original image;
④ Save the processed image in PNG format.

Compare the result with the source image carefully to see the difference (white background versus transparent background). In the resulting alpha channel, the white part is retained and the black part ends up as transparent background.

Python-OpenCV implementation code:

import cv2
import numpy as np

img = cv2.imread("opencv.jpg")
cv2.imshow('src', img)
print(img.shape)
result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
for i in range(0, img.shape[0]):      # visit all rows
    for j in range(0, img.shape[1]):  # visit all columns
        if img[i,j,0] > 200 and img[i,j,1] > 200 and img[i,j,2] > 200:
            result[i,j,3] = 0
cv2.imwrite('result.png', result, [int(cv2.IMWRITE_PNG_COMPRESSION), 0])
cv2.waitKey(0)  # keep the preview window open

What if the background of the picture is a little more complex? The idea is the same: set the alpha channel to 255 for the part you want to keep and to 0 for the part you do not, then save the result as a PNG. Take the following picture as an example: the goal is to extract the flower in the middle and give it a transparent background. The flower can be extracted by thresholding the R channel and using the binarized result directly as the alpha channel.

import cv2
import numpy as np

img = cv2.imread("flower.jpg")
cv2.imshow('src', img)
print(img.shape)
result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
B, G, R = cv2.split(img)
_, Alpha = cv2.threshold(R, 200, 255, cv2.THRESH_BINARY)
cv2.imshow('thres', Alpha)
B2, G2, R2, A2 = cv2.split(result)
A2 = Alpha
result = cv2.merge([B2, G2, R2, A2])  # channel merging
cv2.imwrite('result.png', result)
cv2.waitKey(0)  # keep the preview windows open

This article is from the WeChat official account OpenCV 与 AI 深度学习 (OpenCV_AI_DL), author: Color Space.
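A note on the pixel loop above: iterating pixel by pixel in Python is slow for large images. A vectorized NumPy equivalent (my sketch, using the same 200 threshold) does the same thing in one step:

import cv2
import numpy as np

img = cv2.imread("opencv.jpg")
result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
# Boolean mask of near-white pixels (all three channels above 200),
# then zero the alpha channel wherever the mask is true.
white = np.all(img > 200, axis=2)
result[white, 3] = 0
cv2.imwrite('result.png', result)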
https://en.pythonmana.com/2021/08/20210823044803156v.html
TextInput in qml: How to detect that user has completed typing

Nelson_Piquet: I am new to Qt & QML. I am using QtQuick 2.4. I have a TextInput item with a signal defined in a qml file like below:

import QtQuick 2.4

TextInput {
    text: "Text"
    cursorVisible: true
    signal qmlSignal(string msg)
}

I also have a slot tied to the qmlSignal. I want to trigger the signal when the user completes typing in the TextInput field, or closes my qml page to go to another page in the application. What is the correct way to achieve this desired functionality? I see that there is onEditingFinished(). How can I use onEditingFinished() here? Can someone please provide a sample?

Reply: Hi! I guess the editingFinished signal is what you're looking for.

Reply: Hello @Nelson_Piquet, you can use a signal handler like this:

TextInput {
    text: "Text"
    cursorVisible: true
    signal qmlSignal(string msg)

    onEditingFinished: {
        qmlSignal(text);
    }
    onQmlSignal: {
        console.log(msg)
    }
}

- Nelson_Piquet
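Note that editingFinished only fires when the field loses focus or Return/Enter is pressed. If "completed typing" should mean "stopped typing for a moment", a common pattern (a sketch, not from the thread; the one-second interval is arbitrary) is a restartable Timer driven by onTextChanged:

import QtQuick 2.4

TextInput {
    id: input
    signal qmlSignal(string msg)

    // Restart the timer on every keystroke; it only fires once the
    // user has paused long enough.
    onTextChanged: typingPause.restart()

    Timer {
        id: typingPause
        interval: 1000   // ms of inactivity that counts as "done typing"
        repeat: false
        onTriggered: input.qmlSignal(input.text)
    }
}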
https://forum.qt.io/topic/72764/textinput-in-qml-how-to-detect-that-user-has-completed-typing
Single Round Match 736 Editorials

Div2 Easy: A0Paper
In this task, there are many possible solutions. One of them simply greedily combines as many A(i+1) papers into A(i) as possible, for i decreasing. If there are any A0 papers after the process, the answer is clearly "Possible", otherwise it is "Impossible". This solution runs in O(n).

class A0Paper {
    string canBuild(vector<int> a) {
        for (int i = a.size()-1; i > 0; --i)
            a[i-1] += a[i]/2;
        return a[0] ? "Possible" : "Impossible";
    }
};

Another possibility is to sum the areas of the papers in stock (multiplied by 2^20, so they are integers instead of doubles). If the total area is at least 1 square meter, it is always possible to build an A0. Keep in mind that 32-bit integers might overflow using this approach.

Div2 Medium: Reroll
The limits are small enough that we can try all possible subsets. For each subset, we can calculate the range of values the reroll can produce as the sum of the ranges of all individual dice. For a die that is left as is, the range is [a_i, a_i], and for a rerolled die it is [1,6]. If target is inside this range, then it is achievable. This solution works in O(n * 2^n).

There is in fact a greedy solution. Without loss of generality let target be larger than the current sum. While this is the case, remove the smallest die and replace it with a 6. Once the current sum is larger than or equal to target, we are done. Similarly, if the target is smaller than the current sum, we replace the dice with the largest rolls with ones.

class Reroll {
    int minimumDice(int target, vector<int> rolls) {
        int N = rolls.size();
        int total = 0;
        for (int roll: rolls) total += roll;
        sort(rolls.begin(), rolls.end());
        if (target == total) return 0;
        if (target > total) {
            for (int i = 0; i < N; ++i)
                if ((total += 6 - rolls[i]) >= target) return i+1;
        } else {
            for (int i = 0; i < N; ++i)
                if ((total += 1 - rolls[N-i-1]) <= target) return i+1;
        }
        return N; // unreachable for valid inputs
    }
};

The complexity of this approach is O(n log n), or O(n) if counting sort is used.

Div2 Hard: MazeWithKeys
Let's try all possible starting points, and for each of them determine whether the target is reachable. If there were no keys, we could have solved the problem using any graph traversal algorithm, such as DFS. How do keys and doors affect the problem? Let's break down the cases:
- We reach a key k, but we have not visited the door K yet: remember in a global variable (i.e. a bitset) that the key k has been found.
- We reach the door K, but we have not found the key k yet: remember in a global variable (i.e. a bitset) that the door K has been found, and do not traverse the neighbours of this vertex.
- We reach the door K and the key k has been found: continue the DFS as normal.
- We reach a key k and the door K has already been found: start the DFS from the door K.

This algorithm works because the edges are undirected, and thus if a key k and the door K are both reachable, we can always select a path that first visits the key and then the door, thus unlocking the door. The steps above ensure that the DFS continues from the door vertex only after the appropriate key has been found. If we reach the target point from a given start point, we know that such a level would be doable, and we can count all the doable levels. Note that all boring levels are also doable, so we just need to subtract the number of boring levels to get the answer. To count the number of boring levels, we run DFS from the target point, ignoring all doors. All reachable cells represent starting points that would create a boring level.
class MazeWithKeys {
    vector<string> a;
    int R, C;
    vector<int> doorR, doorC;
    vector<bool> hasKey, hasDoor;
    vector<vector<bool>> visited;

    bool dfs(int r, int c) {
        if (r < 0 || r >= R || c < 0 || c >= C || visited[r][c]) return false;
        char w = a[r][c];
        if (w == '#') return false;
        if (w == '*') return true;
        if (w >= 'A' && w <= 'Z') {
            hasDoor[w-'A'] = true;
            if (!hasKey[w-'A']) return false;
        }
        visited[r][c] = true;
        if (w >= 'a' && w <= 'z') {
            if (!hasKey[w-'a'] && hasDoor[w-'a']) {
                hasKey[w-'a'] = true;
                if (dfs(doorR[w-'a'], doorC[w-'a'])) return true;
            } else {
                hasKey[w-'a'] = true;
            }
        }
        return dfs(r+1, c) || dfs(r-1, c) || dfs(r, c-1) || dfs(r, c+1);
    }

    int dfs2(int r, int c) {
        if (r < 0 || r >= R || c < 0 || c >= C || visited[r][c]) return 0;
        char w = a[r][c];
        if (w >= 'A' && w <= 'Z') return 0;
        if (w == '#') return 0;
        visited[r][c] = true;
        return (w=='.') + dfs2(r+1,c) + dfs2(r-1,c) + dfs2(r,c-1) + dfs2(r,c+1);
    }

public:
    int startingPoints(vector<string> a) {
        this->a = a;
        R = a.size();
        C = a[0].size();
        int ans = 0;
        doorR = doorC = vector<int>(26);
        visited = vector<vector<bool>>(R, vector<bool>(C, false));
        hasKey = hasDoor = vector<bool>(26, false);
        for (int r = 0; r < R; ++r) {
            for (int c = 0; c < C; ++c) {
                char w = a[r][c];
                if (w >= 'A' && w <= 'Z') {
                    doorR[w-'A'] = r;
                    doorC[w-'A'] = c;
                } else if (w == '*') {
                    ans -= dfs2(r, c);
                }
            }
        }
        for (int r = 0; r < R; ++r) {
            for (int c = 0; c < C; ++c) {
                if (a[r][c] != '.') continue;
                for (int i = 0; i < 26; ++i) hasKey[i] = hasDoor[i] = false;
                for (int i = 0; i < R; ++i)
                    for (int j = 0; j < C; ++j) visited[i][j] = false;
                if (dfs(r, c)) ans++;
            }
        }
        return ans;
    }
};

If we store the locations of all doors in an array so that we don't have to look for them each time we find a key, simulating one starting point takes O(RC) time, and there are at most O(RC) starting points. Hence the total complexity is O(R^2 C^2).

Div1 Easy: DigitRotation
Evaluation of a large integer in modular arithmetic can be done using Horner's schema in O(N). We could try all possible triples of indices, generate the respective rotated number and sum all evaluations, but that would be too slow. Notice that each rotated number differs from the original number in at most 3 positions. We can thus generate all triples of indices and calculate their difference from the original number. To do that in O(1), we simply need to precompute all powers of 10.

constexpr int MOD = 998244353;

class DigitRotation {
    int sumRotations(string S) {
        int N = S.size();
        vector<Field<998244353>> W(N, 1);
        for (int i = N-2; i >= 0; --i) W[i] = 10 * W[i+1];
        Field<998244353> sum = 0, ans = 0;
        for (int i = 0; i < N; ++i) sum = 10 * sum + (S[i]-'0');
        for (int a = 0; a < N; ++a) {
            for (int b = a+1; b < N; ++b) {
                for (int c = b+1; c < N; ++c) {
                    if (a == 0 && S[c] == '0') continue;
                    int d = S[a]-'0', e = S[b]-'0', f = S[c]-'0';
                    auto cur = (f-d)*W[a] + (d-e)*W[b] + (e-f)*W[c];
                    ans += sum + cur;
                }
            }
        }
        return ans;
    }
};

The time complexity of the above is O(N^3).

Div1 Medium: MinDegreeSubgraph
The smallest graph (in terms of the number of edges) that is locally k-dense is a clique on k+1 vertices. Thus, if m < k(k+1)/2, the answer is clearly NONE. By induction on n we can prove that all graphs having m >= t(n,k) := k(k+1)/2 + (n-(k+1))(k-1) edges are locally k-dense. The base case where n = k+1 is trivial, as it is a (k+1)-clique. Consider a graph G with more than k+1 vertices and let v be the vertex of minimum degree. If deg(v) >= k, we are done. Otherwise, remove v from G to form G'.
G has n’ = n – 1 vertices and at least m’ >= m – (k-1) >= k(k+1)/2 + (n’-(k+1))(k-1) = t(n’,k) vertices. By induction, G’ is locally k-dense. On the other hand, for m < t(n,k) we can construct a graph G with n vertices, m edges and no mindegree k subgraph. Clearly it suffices to show that there is such graph with t(n,k) – 1 edges, as graphs with fewer edges can be obtained simply by removing edges. Start with a (k-1)-clique on vertices 1,..,k-1. Connect each vertex with index k,..,n to each of the vertices 1,..,k-1. There are exactly t(n,k)-1 edges in this graph. Additionally, there are only k-1 vertices with degree at least k, so G is not locally k-dense. This means that if m < t(n,k), the answer is SOME and if m >= t(n,k), the answer is ALL. class MinDegreeSubgraph { string exists(int n, ll m, int k) { if (m < ll(k) * (k+1) / 2) return "NONE"; else if (m < ll(k) * (k+1) / 2 + ll(n - (k+1))*(k-1)) return "SOME"; else return "ALL"; } } The time complexity is O(1). Div1 Hard: Subpolygon It is evident that as P is convex, each choice of W yields a different convex polygon Q. We need to sum the areas across all such polygons. Recall that one can compute the area of the polygon as half of the sum of X[i]*Y[i+1] – X[i+1]*Y[i], where (X[0],Y[0]), (X[1],Y[1]), … , (X[M],Y[M]) = (X[0],Y[0]) are the coordinates of vertices in counter-clockwise order. Consider an edge between P[i] and P[i+j] separately. If we knew the number of subpolygons in which it appears, we could calculate its contribution towards the sum of areas. In order for this edge to appear in Q, none of the vertices of the form P[i+k] (0 < k < j) may be present in Q, and at least one of the vertices of form P[i+k] (j < k < N) has to be present in Q (so that Q does not degenerate to a line segment). There are thus 2^{N-j-1}-1 such polygons. If we precompute all powers of 2, we can compute the sum of areas in O(n^2). To make this faster, note that the total area is, up to some multiplicative factors, a sum of two convolutions and can be thus computed with FFT. In the sample source below, the FFT implementation is omitted for brevity. Also note that the template class Field is used for modular arithmetic (i.e. performing all additions, subtractions and multiplication modulo given prime, and using modular exponentiation for division). class Subpolygon { public: int sumOfAreas(int n) { vector<Field<998244353>> x(8*n), y(8*n), revX(8*n), revY(8*n); // generate polygon P for (int i = 0; i < n; ++i) { x[i+1] = x[i] + (n-i); y[i+1] = y[i] + i; } for (int i = 0; i < n; ++i) { x[n+i+1] = x[n+i] - i; y[n+i+1] = y[n+i] + (n-i); } for (int i = 0; i < n; ++i) { x[2*n+i+1] = x[2*n+i] - (n-i); y[2*n+i+1] = y[2*n+i] - i; } for (int i = 0; i < n; ++i) { x[3*n+i+1] = x[3*n+i] + i; y[3*n+i+1] = y[3*n+i] - (n-i); } // concatenate x (resp. y) with itself, and let revX,revY be reverse of x,y for (int i = 0; i < 4 * n; ++i) { x[4*n+i] = revX[4*n-1-i] = x[i]; y[4*n+i] = revY[4*n-1-i] = y[i]; } // convolve x with revY, y with revX fft(x); fft(y); fft(revX); fft(revY); for (int i = 0; i < x.size(); ++i) { x[i] *= revY[i]; y[i] *= revX[i]; } fftInverse(x); fftInverse(y); // calculate answer Field<998244353> ans = 0, p = 1; for (int i = 8*n-3; i >= 4*n; --i) { ans += p*(y[i]-x[i]); p = 2*p+1; } return ans/2; } }; The total complexity is O(n log n).
https://www.topcoder.com/blog/single-round-match-736-editorials/
NAME
     setfib -- set the default FIB (routing table) for the calling process

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <sys/socket.h>

     int setfib(int fib);

DESCRIPTION
     The setfib() system call sets the associated FIB for all sockets opened
     subsequent to the call to be that of the argument fib. The fib argument
     may be between 0 and the current system maximum, which may be retrieved
     by the net.fibs sysctl. The default FIB of the process will be applied
     to all protocol families that support multiple FIBs, and ignored by
     those that do not. The default FIB for a process may be overridden for
     a socket with the use of the SO_SETFIB socket option.

RETURN VALUES
     The setfib() function returns the value 0 if successful; otherwise the
     value -1 is returned and the global variable errno is set to indicate
     the error.

ERRORS
     The setfib() system call will fail, and no action will be taken,
     returning EINVAL if the fib argument is greater than the current system
     maximum.

SEE ALSO
     setfib(1), setsockopt(2)

STANDARDS
     The setfib() system call is a FreeBSD extension; however, similar
     extensions have been added to many other UNIX style kernels.

HISTORY
     The setfib() function appeared in FreeBSD 7.1.
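A minimal usage sketch based on the synopsis above (the FIB number 1 is arbitrary and assumes the kernel is configured with more than one FIB, i.e. net.fibs >= 2):

#include <sys/socket.h>
#include <err.h>

int main(void)
{
    /* All sockets opened after this call use FIB 1 by default.
     * Fails with EINVAL if 1 exceeds the system maximum (net.fibs). */
    if (setfib(1) == -1)
        err(1, "setfib");

    /* ... open sockets here; they inherit FIB 1 ... */
    return 0;
}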
http://manpages.ubuntu.com/manpages/oneiric/man2/setfib.2freebsd.html
I use C#. How do I get an image to be downloaded from a MySQL database at runtime upon request? Here is a bit of my declaration code and conditional construct code:

byte[] f = (byte[])(read["Passport"]); // reads the byte data from the Passport column and assigns it to byte[] f

if (f == null)
{
    admin1pass.ImageUrl = null;
}
else
{
    MemoryStream ms = new MemoryStream(f); // this holds the image gotten from the database
    admin1pass.ImageUrl = System.Drawing.Image.FromStream(ms);
}

Now that's where I'm stuck. I have used the using System.Drawing; and using System.Data; namespaces, but neither gets me there; Image.FromStream is what I often use in Windows Forms apps, but it doesn't work here.

Reply: They'll help you over here: Microsoft C# ASP.Net forum. Regards, Dave Patrick .... Microsoft Certified Professional, Microsoft MVP [Windows]. Disclaimer: This posting is provided "AS IS" with no warranties or guarantees, and confers no rights.
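For what it's worth, the assignment in the question cannot compile because ImageUrl is a string property, not a System.Drawing.Image. A common Web Forms workaround (a sketch; the control and column names come from the question, everything else is an assumption) is to hand the browser the bytes as a base64 data URI:

using System;

public partial class ProfilePage : System.Web.UI.Page
{
    // Normally declared in the designer file; shown here so the sketch is self-contained.
    protected System.Web.UI.WebControls.Image admin1pass;

    // Hypothetical helper: bind the blob read from the Passport column.
    private void ShowPassport(byte[] f)
    {
        if (f == null || f.Length == 0)
        {
            admin1pass.ImageUrl = null; // or point at a placeholder image
            return;
        }
        // Embed the bytes directly as a data URI (assumes the blob is a JPEG;
        // adjust the MIME type if the stored format differs).
        admin1pass.ImageUrl = "data:image/jpeg;base64," + Convert.ToBase64String(f);
    }
}

For large images, a dedicated handler (e.g. an .ashx that writes the bytes with Response.BinaryWrite and a proper Content-Type header, with ImageUrl pointing at the handler URL) is the more scalable choice, since data URIs inflate page size.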
https://social.microsoft.com/Forums/en-US/3d56bacd-1770-4ecd-916b-7b95a0581d72/how-to-display-images-at-runtime-in-a-web-appplication?forum=whatforum
Investors in Genuine Parts Co. (Symbol: GPC) saw new options begin trading today for the November 20th expiration. One contract of particular interest in the new November GPC options chain is the put at the $45.00 strike, which could represent an attractive alternative to paying $53.53/share today: because the strike sits below the current share price, the premium collected for selling the put represents a rate of return on the cash commitment (at Stock Options Channel we call this the YieldBoost). Below is a chart showing the trailing twelve month trading history for Genuine Parts Co., highlighting in green where the $45.00 strike is located relative to that history.

Turning to the calls side of the option chain, the call contract at the $55.00 strike price has a current bid of $5.90. If an investor were to purchase shares of GPC stock at the current price level of $53.53/share and then sell-to-open that call contract as a "covered call," they would be committing to sell the stock at $55.00. Considering that the call seller also collects the premium, that would drive a total return (excluding dividends, if any) of 13.77% if the stock gets called away at the November 20th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if GPC shares really soar, which is why looking at the trailing twelve month trading history for Genuine Parts Co., as well as studying the business fundamentals, becomes important. Below is a chart showing GPC's trailing twelve month trading history. Should the covered call expire worthless, the premium collected would represent a boost of extra return to the investor, or 16.62% annualized, which we refer to as the YieldBoost.

The implied volatility in the put contract example is 76%, while the implied volatility in the call contract example is 53%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $53.53) to be 35%.
https://www.nasdaq.com/articles/gpc-november-20th-options-begin-trading-2020-03-23
Social Connected requires Sitecore 6.5 or above. The DMS module is optional, but greatly extends its functionality. In my testing I have found usability bugs when using DMS 2.0.0, so I recommend DMS 2.0.1, which is available with Sitecore 6.5 build 4 and above. The module is installed as a Sitecore package, and no additional configuration changes are required post installation to get the module working. In addition to installing DLLs, config files, and sublayouts, the module creates a /sitecore/system/social content tree (see figure 1). The module is now functional, but to be useful, you need to tie it to a social network. In this article I will use Facebook as an example, but the process with the other networks is similar, and is documented in the Social Connected Administrator and Developer's Guide.

To set up Facebook, you need a regular Facebook user account (no special API developer account is needed). On the Facebook developers site, click the "Create New App" button, and at the pop-up window, give your application a name. This name can be changed later on, but should be chosen with care, as it will be visible to website visitors at sign-on and on timeline posts. This brings up the screen in figure 2, which displays an Application ID and Application Secret, which are analogous to a user name and password and will be used by Sitecore to access the application. You will need to (1) enter the App Domain of your site (this is not documented, but in my testing login did not work without this set up), (2) select the option "Website with Facebook Login", and (3) enter the URL of the website login page.

Now we need to set up this application in Sitecore. This is done by adding an "Application" item to /sitecore/system/social/applications/default (Figure 3). The Application item has three fields: Application ID and Application Secret, which take the values from the Facebook App screen (figure 2), and a Network droplist for selecting Facebook, Twitter, LinkedIn, or Google+. Save, publish, and we are ready to set up Facebook sign-in.

Note: You can only add one application for each social network to the default folder. If you want to enable sign-in on multiple sites, you will need to create the additional applications as children of the /sitecore/system/social/applications item, and you will need to write custom code to wire up your sign-in handler to the proper application item.

Once the Facebook app has been set up in Sitecore, wiring up Facebook login is simply a matter of dropping a sublayout on a page. The control itself requires no configuration; it automatically uses the application defined in the social/applications/default folder to communicate with Facebook. The sublayout is located in the folder "/sitecore/layout/Sublayouts/Social/Connector/Login with Facebook", and appears as a small Facebook button. For an anonymous user, the button has the tooltip "Login with Facebook", and for a logged-in user the tooltip states "Attach Facebook account." If a visitor clicks on the button, the following steps occur:

- If the user is not logged in to Facebook on the current browser, she is directed to a Facebook login screen that states, "Login to use your Facebook account with <App Name>".
- She is alerted that the app requires access to a large number of profile properties ("... description, birthday, education history, hometown, interests, likes, location, relationship status, relationship details, religious and political views, website and work history"). All of this information will be stored in the Sitecore user profile and can be used for personalization. If you wish to limit this list, you can switch off specific pieces of information in Sitecore.Social.ProfileMapping.config.
- The user is prompted to allow or disallow the application to post to her timeline. This will come into play with goal-triggered messaging.
- The user is logged on to the website.

Note: Sitecore's documentation refers to the login functionality as "Social Connector," which is part of the "Social Connected" module.

Personalization
When an account is linked to Facebook, a rich collection of attributes is pulled into the Sitecore user profile, which opens up a range of personalization possibilities. These go from the broadly demographic, such as showing targeted content to women over 50, to the finely targeted, such as displaying ads relating to games or movies that the user has expressed an interest in. To facilitate this, the personalization engine adds the following Rules Editor conditions:

- where the gender of the current user is value (new with SC 1.2)
- where the current user is interested in value on any social network (new with SC 1.2)
- where the current user is connected to the specific social network
- where the network profile specific field compares to value

Triggered Messages
Social Connected allows two kinds of messaging:

- Sending updates to users' Facebook and Twitter timelines when they complete a DMS goal
- Sending updates to the organization's Facebook and Twitter timelines, manually or on content publishing

Goal Triggered Messages
The steps for making a goal-triggered message appear are somewhat complex, so I've included a number of screenshots to help visualize the process. First, you need to have a goal defined, and you need to be able to have it triggered. The simplest way is to directly link the goal to a content item, so that whenever a visitor lands on the item, the goal is achieved. You can do this by selecting the goal on the Analyze ribbon of the content item.

To add the Facebook message, go to the goal item, located under /sitecore/system/marketing center/goals, and click the Publish tab. You will see a new menu item, Messages (Figure 4). Click this to pull up the messages tab (Figure 5). Note that in Social Connected 1.1, this only worked if you went through the Content Editor, not the Marketing Center application. This was fixed with 1.2.

Clicking on the New Facebook message pulls up a screen (Figure 10) that prompts you to set the text of the message, and allows you to create a link back to the site. Especially worth noting is the Campaign field: this allows DMS to identify traffic that comes to the website as a result of the link included in the Facebook message. Create Campaign is a nice feature that allows you to create a campaign on the fly to associate with this goal. The new campaign is created already in a "Deployed" workflow step, allowing you to use it without having to leave this form. After saving the message, you will be able to see a preview of what the goal message will look like on the visitor's Facebook wall (figure 7).

Before this will work on the production site, you will need to publish the item, the goal, and the message itself, which is located under /sitecore/system/social/messages. Figure 8 shows how the message appears on the wall of a Facebook test account:
Before this will work on the production site, you will need to publish the item, the goal, and the message itself, which is located under /sitecore/system/social/messages. Figure 8 shows how the message appears on the wall of a Facebook test account: One final point to mention. When you add a message to a goal, the goal item has this rule automatically added: Without this rule in place, the message will not be sent, which gives considerable flexibility in tailoring conditions for who will receive the message. For example, you could add a profile value when the message is sent to a user, and only send messages to users who have not received them before, or who have not opted out of receiving social network updates. Message level Opt-In/Opt-Out In addition, Social Connected provides a mechanism for allowing visitors to opt out of receiving specific messages. However, this requires significant developer involvement. The class Sitecore.Social.Security.Managers.PublishPreferenceManager contains a method SetPreference(messageId, user, PublishPreference) which sets a profile property indicating whether the visitor is to receive a given message. GetPreference(messageId, user) can be used to query the current status (see Social Connected documentation, section 3.3). You can control how the system interprets messages that do not have a preference set with a config file setting: <!-- Allows publishing if the message PublishPreference has default value --> <setting name="Social.AllowPublishByDefault" value="true"/> Changing this setting from the initial value of "true" will disable sending goal messages unless they have been explicitly enabled for a user. Although this mechanism is functional, it presents challenges for the implementor, since there is no straightforward way of obtaining a list of goal messages, and no obvious way of presenting them to a user for action. Goal Message Token Replacement Social Connected provides a mechanism for sending individualized content to users. If the message includes one or more tokens in the format $field (dollar-sign + fieldname), you can use the method GoalUtil.RegisterEventParameters(string pageEventName, IDictionary parameters) to trigger the goal, and pass the dictionary of parameters to the message. For example, suppose you want to have an online poll of favorite Beatles, and you want to provide the user an option to post their choice to their Facebook wall. To implement this you would:: var parameters = new Dictionary<string, string>(); parameters["favoriteBeatle"] = ddlBeatles.Value; GoalUtil.RegisterEventParameters("Beatle Poll Completed", parameters); The user will receive the message with the token replaced by the selected value. It is important to note that while the token is entered in the message with a dollar sign, the dollar sign is omitted when the value is passed to the parameters dictionary. Content Messages Content messages are similar to goal-triggered messages, but instead of being tied to a goal, they are tied to a content item, and instead of going to a visitor's Facebook or Twitter timeline, they go to the timeline of a Facebook or Twitter account controlled by the website organization. A typical use would be to promote new website content on the organization's Facebook page and Twitter feed. Link field is set to the related content item. - The Campaign field is set to Facebook Content Messages. - There is a new checkbox field, "Post when the item is published" - There is a new multilist field, "Accounts" The "Post when ... 
The "Post when the item is published" checkbox will cause the message to be posted whenever the item is published. Note that an item that is skipped due to a smart publish will not trigger the message, but an unchanged item that is published due to a republish will generate a new message, leading to duplicate messages. One solution to this issue would be to write a pipeline step to set this field to unchecked after the message has been published.

The Accounts field will require us to take a detour. Whereas goal messages are sent to the Context.User's Facebook account, content messages are sent to accounts that are predefined. These accounts are set up through a wizard that is triggered by inserting an Account item under /sitecore/system/social/accounts. Accounts allow you to determine where the update is sent. The wizard will provide a drop-down listing every application created by the owner of the application(s) defined under /sitecore/system/social/applications. For example, suppose developer A is an administrator of organization B's application. A also has a fan page for a musical group, and B has Facebook pages for several subsidiaries. The wizard would provide a drop-down list of all of developer A's Facebook pages and apps and all of organization B's pages. The developer would create account items for all of organization B's pages, and then would have the option of choosing which one should receive each message via the message's Accounts field. In this example, I've created a "Sitecore Fan Page" with my Facebook account, which allows me to define it as an Account using the Account creation wizard, and to select it in the message's Accounts field. Sitecore presents the following preview.

One additional point to note: unlike goal messages, content messages are issued from the Content Management site. If the URL of the CM site is on a different domain from the content delivery site, use the configuration setting Social.LinkDomain to correct this. According to the documentation, in a multisite installation Social Connected uses the path of the content item to select the site root, and uses the hostName attribute from the Web.config <sites> section. It does not appear that Social Connected supports a multisite installation with different domains on the CD server, since the LinkDomain setting only accepts one value:

<!--When this setting is set all links generated by the Social module will contain this hostName-->
<setting name="Social.LinkDomain" value="" />

I have not tested how message publishing works in a multi-target environment, but I suspect that publishing to either target will cause the message to be sent to Facebook. A pipeline modification may be able to handle this as well, by aborting the Facebook message if the production target is not selected.

Support for Analytics

Configuration and Customization
Social Connected follows Sitecore's development practice of making most of the internal workings of the product configurable. The product works without any changes to the initial settings, but you may wish to modify it to, for example, capture offline changes to Facebook user accounts, or to modify which profile fields are captured. The module also exposes five pipelines:

- MatchUser: Called when the login button is clicked; you may wish to modify this to change the way existing users are matched. For example, you could modify this to prevent Sitecore domain users from being linked to a Facebook account.
- CreateSocialMessage: Called when a message is created; this could be modified to prompt users for metadata useful in managing opt-in/opt-out interactions with visitors.
- ReadSocialMessage: Called when Sitecore finds the messages attached to a content item. This could be modified to add custom filtering logic, such as only showing messages created in the last seven days.
- BuildMessage: This creates the message and performs token replacement. It can be modified to supplement customization of messages.
- PublishMessage: This does the actual transmission to the social network. This pipeline could be modified to log communication actions, or to implement customized opt-in/opt-out logic.

For More Information

The best sources of information for Social Connected are the "Administrator's and Developer's Guide" and the "Release Notes". There is also a Mike Casey blog post introducing the module and explaining how it can be used in personalization.

Comments:

Awesome post Dan! And thank you for the awesome presentation last night at the Sitecore User Group! It was really enjoyable and informative. I made a plug for you and for this post on my blog. Hope to see you at the next Sitecore User Group!

Thanks for the post Dan, good stuff. A question for you. For the simple Like Button and Tweet Button (sublayouts), does the user need to be logged in for those to appear? I see the element in the source but it doesn't appear. I am wondering if the end user needs to log in through Facebook for the button to show. And if that is the case, then the Sitecore Social Connector wouldn't be a true replacement for ShareThis (sharethis.com) modules, right? ~james

I'm not seeing this behavior. I added Like Button and Tweet Button sublayouts to a test item, published it, and opened the item in a Chrome Incognito session. The links appeared, and when I clicked them, I was prompted to log in to Facebook or Twitter. Have you made sure to publish the "/sitecore/system/Social" and "/sitecore/layout/Sublayouts/Social" hierarchies? Omitting the latter will cause the elements to appear in preview mode but not on the published site.

Great post Dan! You may be interested in the unofficial Sitecore MVC port:

Hi Herskind, I've tried using SocialConnectedMvc for my site. I'm getting the below error:

The controller for path '/' was not found or does not implement IController.
Description: An unhandled exception occurred.
Exception Details: Sitecore.Mvc.Diagnostics.ExceptionWrapper: The controller for path '/' was not found or does not implement IController.
Source Error:
Line 11: }
Line 12:
Line 13: @Html.Sitecore().Rendering("{853B9105-DD1C-4163-882E-E5BB8925D87F}", new { Parameters = "NetworkName=Facebook" })

Please help. Regards, Chandana

Same procedure for a Twitter account too?

Great article Dan. I've read through the Administrator's and Developer's Guide for Social Connected 1.2 and I've implemented the web controls to allow login. However, when I attempt to log in, I'm sent to the social media site (FB or Twitter) and prompted to allow access as expected, but once I log in and I'm redirected back to my website, the URL shows "authResult=error" and the Sitecore log file lists the following error:

----------------------------------------------------------------------------------------
4172 14:12:55 ERROR The error in ConnectManager occured
Exception: Sitecore.Social.Exceptions.SocialException
Message: The username supplied is invalid.
Nested Exception
Exception: System.Web.Security.MembershipCreateUserException
Message: The username supplied is invalid.
Source: Sitecore.Social
at Sitecore.Social.Connector.Pipelines.MatchUser.CreateUser.CreateSitecoreUser(String domainUser, String email, String fullName)
at Sitecore.Social.Connector.Pipelines.MatchUser.CreateUser.Process(SelectUserPipelineArgs args)
at (Object , Object[] )
at Sitecore.Pipelines.CorePipeline.Run(PipelineArgs args)
at Sitecore.Social.Users.UserSelector.ChooseUserForNetworkAccount(AccountBasicData accountBasicData)
at Sitecore.Social.Connector.Managers.ConnectManager.Connect(Account account, Boolean attachAccountToLoggedInUser, Boolean isAsyncProfileUpdate)
at Sitecore.Social.Connector.ConnectorAuthCompleted.AuthCompleted(AuthCompletedArgs args)
4172 14:12:55 ERROR Thread was being aborted.
----------------------------------------------------------------------------------------

Is this an issue within Social Connected, or is this an issue that I need to resolve with Sitecore user management? Any thoughts on how to resolve the error? Thanks in advance.

Hi Dan, I'm very new to Sitecore and trying to retrieve profile data using Sitecore Social Connected. Which namespace or methods should I use to retrieve user profile details? Please guide me. Thanks, Chandana

Hi Dan, adding more clarity on my question here: I've used the connectedUserManager.LogOnUser and connectedUserManager.AttachUser methods, but my database doesn't get the details of the Facebook user updated in it. What might be missing? How do I retrieve user information after I'm successfully logged in using Facebook credentials? Thanks, Chandana

Hi Dan, I have a question. If we are managing multiple sites in Sitecore 6.6 with Social Connected 1.2 and we need to use the Facebook "Like" button, how do we use different layouts of it on different sites/pages? Note: the Like button has different layouts ('box_count', 'standard', etc.). If I change the layout in the Like button under Sublayouts -> Social -> Share, it affects all the Like buttons used. Is there any solution for this, or does Social Connected provide only one layout?

Hi Dan, how does Sitecore Social Connected work with a custom membership provider? Thanks, Jinesh
http://www.dansolovay.com/2012/09/a-look-at-sitecore-social-connected.html
fields_view_get not working as expected

I am using the fields_view_get function to change certain properties of the form. However, I am facing two issues:

1) The fields_view_get function gets invoked every time I click on the tree view, but only the first time I click on the form view. I want it to be invoked each time I click on the form view.

2) Using context.get('active_ids') always returns None when I use it inside fields_view_get.

Code snippet:

def fields_view_get(self, cr, uid, view_id=None, view_type=False, context=None, toolbar=False, submenu=False):
    res = super(hr_travel_allowance_request, self).fields_view_get(cr, uid, view_id=view_id, view_type=view_type, context=context, toolbar=toolbar, submenu=submenu)
    active_id = context.get('active_ids')
    print "Context", active_id
    print res
    return res

Is there something I am doing wrong when I invoke it? P.S. Please tell me how to add formatted code to my question. I couldn't find an option to do it.

Answer: What you have noticed is correct. fields_view_get is a method which is called only once, to load the view. As the name indicates, it loads the XML view and the fields associated with that object. Once the view is loaded, the data is fetched from the database; after that the view is unaltered. So when you open a form, the function is invoked only once. The same applies to the tree view; however, since you click on the action menu, it is like invoking the action anew, which in turn invokes fields_view_get again, making it look like it is called repeatedly, but technically it is not.

Comment: Thank you for the response. However, I have also observed that when I select the 'Tree' view from the available modes on the right side, fields_view_get gets invoked each time. But it does not happen in the case of the form view. Also, do you know how to get context.get('active_ids') to work?
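For reference, a minimal defensive version of the override is sketched below. This is an illustration only: the model name is taken from the question, and the note about active_ids reflects standard Odoo behavior, namely that the key is only present when the method is reached through an action whose context carries it (for example a contextual action over selected records); a plain menu click supplies no active_ids, which is why context.get('active_ids') returns None.

def fields_view_get(self, cr, uid, view_id=None, view_type=False,
                    context=None, toolbar=False, submenu=False):
    # Guard against the common case where no context is passed at all;
    # calling .get() on None would raise an AttributeError.
    if context is None:
        context = {}
    res = super(hr_travel_allowance_request, self).fields_view_get(
        cr, uid, view_id=view_id, view_type=view_type,
        context=context, toolbar=toolbar, submenu=submenu)
    # 'active_ids' is only present when the calling action put it there,
    # e.g. an <act_window> defined with context="{'active_ids': active_ids}".
    active_ids = context.get('active_ids') or []
    if active_ids:
        # ... adjust res['arch'] / res['fields'] based on the records ...
        pass
    return res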
https://www.odoo.com/forum/help-1/question/fields-view-get-not-working-as-expected-71139
Hi all, I am a newbie at using the Blynk app in conjunction with the Arduino IDE. The context of my question is the Blynk app and the ESP8266-based Wemos D1 Mini board. I am trying to control the board's digital pin D2 using virtual pin V5 with the Button widget (used in switch mode) in the Blynk app. I have an external LED connected to D2. Here is my sketch. The button presses get registered in the serial monitor, since I have print statements for debug purposes. This means that the virtual pin's state data is getting transmitted from the Blynk app to the Arduino serial monitor. However, the digitalWrite to D2 is not happening. I do not see the LED glow. What am I missing? I have attached below the sample sketch, the serial monitor output, and a mapping between Wemos D1 Mini, Blynk, and ESP8266 pins (it may not be useful for this question, but I am including it anyway). Is there a sensitivity to the sequence of actions, such as uploading the sketch, opening the Blynk app, playing the project, or something along those lines? I just did it the conventional way, i.e. upload sketch, open Blynk app, open project, play the app, push the button, check for LED response. To conclude, as an FYI: if I use digital pin D2 directly from the Blynk app to control the LED and just run the Blynk connection from the Arduino sketch, then the LED responds without issues. The issue shows up only when I use a virtual pin's state to control a digital pin.

Arduino IDE sketch:

#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>

// You should get Auth Token in the Blynk App.
// Go to the Project Settings (nut icon).
char auth[] = "XXX";

// Your WiFi credentials.
// Set password to "" for open networks.
char ssid[] = "YYY";
char pass[] = "ZZZ";

int LED = D2; // Define LED as an Integer (whole numbers) and pin D8 on Wemos D1 Mini Pro

void setup() {
  // Debug console
  Serial.begin(115200);
  // pinMode(LED, OUTPUT); //Set the LED (D1) as an output
  Blynk.begin(auth, ssid, pass);
}

void loop() {
  Blynk.run();
}

// This function will be called every time button Widget
// in Blynk app writes values to the Virtual Pin V5
BLYNK_WRITE(V5) {
  int pinValue = param.asInt(); // Assigning incoming value from pin V5 to a variable
  Serial.print("Pin number: ");
  Serial.println(LED);
  Serial.print("Value read from Blynk ButtonPress: ");
  Serial.println(pinValue);
  if (pinValue == 1) {
    digitalWrite(LED, HIGH); // Turn LED on.
  }
  else {
    digitalWrite(LED, LOW); // Turn LED off.
  }
}

Serial monitor output:

21:19:12.743 -> $⸮e⸮d`⸮⸮n⸮[68] Connecting to XXX
21:19:13.291 -> [572] Connected to WiFi
21:19:13.291 -> [572] IP: A.B.C.D
21:19:13.291 -> [572]
21:19:13.291 ->     ___  __          __
21:19:13.291 ->    / _ )/ /_ _____  / /__
21:19:13.291 ->   / _  / / // / _ \/ '_/
21:19:13.291 ->  /____/_/\_, /_//_/_/\_\
21:19:13.291 ->         /___/ v0.6.1 on ESP8266
21:19:13.291 ->
21:19:13.291 -> [578] Connecting to blynk-cloud.com:80
21:19:13.495 -> [769] Ready (ping: 98ms).
21:19:24.399 -> Pin number: 4
21:19:24.399 -> Value read from Blynk ButtonPress: 1
21:19:25.279 -> Pin number: 4
21:19:25.279 -> Value read from Blynk ButtonPress: 0
https://community.blynk.cc/t/problem-with-blynk-apps-virtual-pin-trying-to-control-a-digital-pin-in-arduino-ide/40030
by Dan Ruta

Using WebGL shaders in WebAssembly

Setting up

We'll very briefly go through setting up an example project, then we'll look at how an image can be loaded as a texture. Then, in a separate context, we'll apply an edge detection GLSL shader to the image. All the code is in a repo here, if you'd prefer to jump straight to that. Note that you have to serve your files via a server for WebAssembly to work.

As a prerequisite, I'm going to assume you already have your WebAssembly project set up. If not, you can check out the article here on how to do it, or just fork the repo linked above. For demoing the code below, I'm using a basic HTML file which serves only to load an image, get its imageData, and pass it to the WebAssembly code using the ccallArrays function. As for the C++ code, there is an emscripten.cpp file which manages and routes method calls to context instances created in the Context.cpp file. The Context.cpp file is structured as follows:

Compilation

WebGL is based on and follows the OpenGL ES (Embedded Systems) spec, which is a subset of OpenGL. When compiling, emscripten will map our code to the WebGL API. There are a couple of different versions we can target. OpenGL ES 2 maps to WebGL 1, whereas OpenGL ES 3 maps to WebGL 2. By default you should target WebGL 2, as it comes with some free optimizations and improvements.

To do this, we must add the USE_WEBGL2=1 flag to the compilation. If you are planning to use some OpenGL ES features not present in the WebGL spec, you can use the FULL_ES2=1 and/or FULL_ES3=1 flags. To be able to handle large textures/images, we can also add the ALLOW_MEMORY_GROWTH=1 flag. This removes the memory limit of the WebAssembly program, at the cost of some optimizations. If you know ahead of time how much memory you'll need, you can instead use the TOTAL_MEMORY=X flag, where X is the memory size.

So we're going to end up with something like this:

emcc -o ./dist/appWASM.js ./dev/cpp/emscripten.cpp -O3 -s ALLOW_MEMORY_GROWTH=1 -s USE_WEBGL2=1 -s FULL_ES3=1 -s WASM=1 -s NO_EXIT_RUNTIME=1 -std=c++1z

Finally, we need the following imports in our code:

#include <emscripten.h>
#include <string>
#include <GLES2/gl2.h>
#include <EGL/egl.h>

extern "C" {
    #include "html5.h" // emscripten module
}

Implementation

If you have previous experience with WebGL or OpenGL, then this bit may seem familiar. When writing OpenGL, the API will not work until you create a context. This is normally done using platform-specific APIs. However, the web is not platform bound, and we can instead use an API integrated into OpenGL ES. The majority of the legwork, however, can be more easily done using emscripten's APIs in the html5.h file. The functions we're interested in are:

- emscripten_webgl_create_context — This will instantiate a context for the given canvas and attributes
- emscripten_webgl_destroy_context — This is needed for cleaning up memory when destructing context instances
- emscripten_webgl_make_context_current — This will assign and switch which context WebGL will render to

Create the context

To start implementing, you have to first create the canvas elements in your JavaScript code. Then, when using the emscripten_webgl_create_context function, you pass the id of the canvas as the first parameter, with any configurations as the second. The emscripten_webgl_make_context_current function is used to set the new context as the one currently in use.
Next, the vertex shader (to specify coordinates) and the fragment shader (to calculate the colour at each pixel) are both compiled, and the program is built. Finally, the shaders are attached to the program, which is then linked and validated. Though that sounds like a lot, the code for this is as follows:

The shader compilation is done within the CompileShader helper function, which performs the compilation and prints out any errors:

Create the shader

The shader code for this example is minimal, and it just maps each pixel to itself, to display the image as a texture:

You can access the canvas' context in JavaScript in addition to the context in the C++ code, but it must be of the same type, 'webgl2'. While defining multiple context types does nothing when just using JavaScript, if you do it before creating the webgl2 context in WebAssembly, it will throw an error when the code execution gets there.

The first thing to do when applying the shader is to call the emscripten_webgl_make_context_current function to make sure that we are still using the correct context, and glUseProgram to make sure we are using the correct program. Next, we get the indices of the GLSL variables (similar to getting a pointer) via the glGetAttribLocation and glGetUniformLocation functions, so we can assign our own values to those locations. The function used to do that depends on the value type. For example, an integer, such as the texture location, needs glUniform1i, whereas a float would need glUniform1f. This is a good resource for seeing which function you need to use.

Next, we get the texture object via glGenTextures, assign it as the active texture, and load the imageData buffer. The vertex and indices buffers are then bound, to set the boundaries of the texture to fill the canvas. Finally, we clear the existing content, define our remaining variables with data, and draw to the canvas.

Detect edges using a shader

To add another context, where the edge detection is done, we load a different fragment shader (which applies the Sobel filter), and we bind the width and height as extra variables, in the code. To pick between different fragment shaders, for the different contexts, we just add an if-else statement in the constructor, like so:

And to load the width and height variables, we add the following to the run function:

If you run into an error similar to ERROR: GL_INVALID_OPERATION : glUniform1i: wrong uniform function for type, then there's a mismatched assignment function for the given variable. One thing to look out for when sending the imageData is to use the correct heap, unsigned integer (the Uint8Array typed array). You can learn more about those here, but if you're using the ccallArrays function, set the 'heapIn' config to "HEAPU8", as seen above. If the type is not correct, the texture will still load, but you're going to be seeing strange renderings, like these:

Conclusion

We've gone through a mini "Hello World!"-style project to show how to load textures and apply GLSL shaders to them in WebAssembly. The complete code is hosted on GitHub here, for further reference. For a real project, you may want to add some additional error handling. I omitted it here, for clarity. It may also be more efficient (in the above example) to share data such as the imageData texture between contexts. You can read more about this and more here.
For some further reading, you can check out this link for common mistakes, or you can look through some demo projects in emscripten's glbook folder on GitHub.

Update

To see what GPU compute using shaders would look like in WebAssembly, you can check out the repo for GPGPU, a small library I'm working on, with both JavaScript and WebAssembly versions.
https://www.freecodecamp.org/news/how-to-use-webgl-shaders-in-webassembly-1e6c5effc813/
You know that feeling of driving away from your house, almost getting to work, and saying, "Now, did I close the garage door?" I hate that feeling, and I set out to resolve it in the simplest/cheapest way I could. The starting point was, of course, Arduino. This project ended up being simpler than I could have imagined, but it was not my best effort (it was my first real project making something useful with the Arduino).

Step 0: Sync the garage door monitor to the base station. Then unplug everything.

Step 1: The first thing to do is unscrew the two screws in the back and the one under the battery cover. Then you can pry the back off. (Be careful, as getting the battery contacts out is really sort of a pain.)

Step 2: Flip it over and find the LED for zone 1 (labeled 1!). Then turn it back to the PCB side and find the contacts. I soldered one wire where I found the ground (green) and another red wire for the -. I found which was which by trial and error using my Arduino. NOTE: Soldering these wires is hard. I don't know if it was my crappy soldering iron or what, but it was hard to heat up the metal already on the PCB and get my wires soldered to it in a solid way. If you are enterprising enough, you can solder two wires to each of the zone alarms and have this tweet you when, say, someone is detected by their wireless motion sensor, or the door sensor goes off, etc.

Step 3: I used a Dremel to edge out some of the plastic on the battery cover so the wires could come out from the PCB into the battery area. This is where I also attached a (superficial, perhaps) board to connect more wires that ultimately lead to the screw terminals attached to the Arduino. The idea of the board was to make sure there was slack in the system, so the wires don't get yanked out of the delicate soldering on the LED pins on the PCB of the alarm base station.

Step 4: Drilling out the battery cover. This is a simple drill; I used it to make the holes so the wires could pop out.

Step 5: Mark where the screw terminal should go in the project box and drill the holes; then, on the other side, solder in the screw terminal with the wires that will go to the Arduino/Ethernet controller.

Step 6: Finally, put the Arduino and Ethernet controller shield into the project box and connect the wires to ground and analog 0, respectively. Then close the box up.

#if defined(ARDUINO) && ARDUINO > 18 // Arduino 0019 or later
#include <SPI.h>
#endif
#include <Ethernet.h>
//#include <EthernetDNS.h> Only needed in Arduino 0022 or earlier
#include <Twitter.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
byte ip[] = { 192, 168, 0, 23 };
Twitter twitter("read on how to set this");
char msg[] = "Garage Door is OPEN";
char msgStartup[] = "Garage door monitor is online!";
boolean failed = false;
boolean blinkTime = false;
boolean doorOpen = false;
int alertcounter = 0;
int resetcounter = 0;
int diodePin = 0;
int val;
int randomValue;

void setup() {
  delay(1000);
  Ethernet.begin(mac, ip);
  Serial.begin(9600);
  pinMode(13, OUTPUT);
  Serial.println("connecting ...");
  sendStartupTweet();
}

void sendStartupTweet() {
  if (twitter.post(msgStartup)) {
    int status = twitter.wait();
    if (status == 200) {
      Serial.println("OK.");
    } else {
      Serial.print("failed : code ");
      Serial.println(status);
      failed = true;
    }
  } else {
    Serial.println("connection failed. Startup");
    failed = true;
  }
}

Parts list:

1. Project enclosure for the Arduino + Ethernet shield, from Amazon.
2. Arduino, in my case the old Diecimila. You all know and love the Arduino; found online for about $23.
3. Arduino Ethernet shield, ~$40.
4. GE Choice Alert wireless control center.
5. GE Choice Alert wireless garage door sensor.
6. Screw terminal.

For more detail: Twitter garage door using the GE Choice ALERT system & Arduino
https://duino4projects.com/twitter-garage-door-using-the-ge-choice-alert-system-arduino/
Terms:

Scope
- local scope: a name declared within a block is accessible only within that block and blocks enclosed by it, and only after the point of declaration.
- global scope: the outermost namespace scope of a program, in which objects, functions, types, and templates can be defined. It extends from the place of declaration until the file ends.

Loops (a control structure that causes a statement or statements to repeat)
- looping (3 parts of a loop!):
  - set up a loop control variable (LCV)
  - set up a correct condition for the loop using the LCV (test the LCV)
  - change the value of the LCV inside the loop
- iterative loop: a loop that executes a specified number of times or over a specific range of values.
- sentinel-controlled loop: a loop that continues until a specific data value is seen (see the short sketch below).
- loop-control variable: used to regulate the number of times the body of a program loop is executed; it's incremented (or decremented, when counting down) each time the loop body is executed.
- infinite loops, errors that can lead to an infinite loop: a sequence of instructions in a computer program that is repeated over and over without end, due to a mistake in the programming.
- loop body
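To make the loop terms concrete, here is a small illustrative sketch of a sentinel-controlled loop with the three parts of a loop (set up the LCV, test the LCV, change the LCV inside the loop). It is shown in Python rather than the course's C++, purely for brevity; the sentinel value -1 is an arbitrary choice.

SENTINEL = -1  # the specific data value that ends the loop

total = 0
value = int(input("Enter a number (-1 to stop): "))      # set up the LCV
while value != SENTINEL:                                 # test the LCV
    total += value
    value = int(input("Enter a number (-1 to stop): "))  # change the LCV
print("sum =", total)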
https://www.coursehero.com/file/6129388/Exam-2-Study-Guide/
When you have an outer and an inner loop, how do you continue the outer loop from a condition inside the inner loop? Consider the following code:

for i in range(10):
    for j in range(9):
        if i <= j:
            # break out of inner loop
            # continue outer loop
            pass
        print(i, j)
    # don't print unless inner loop completes,
    # e.g. outer loop is not continued
    print("inner complete!")

Here, we want to print for all i ∈ [0,10) all numbers j ∈ [0,9) that are less than or equal to i, and we want to print complete once we've found an entire list of j that meets the criteria. While this seems like a fairly contrived example, I've actually encountered this exact situation in several places in code this week, and I'll provide a real example in a bit.

My first instinct simply uses a function so that return can do a "hard break" out of the loop. This allows us to short-circuit functionality by exiting the function, but doesn't actually provide continue functionality, which is the goal in the above example. The technique does work, however, and in multi-loop situations is probably the best bet.

def inner(i):
    for j in range(9):
        if i <= j:
            # Note if this was break, the print statement would execute
            return
        print(i, j)
    print("inner complete")

for i in range(10):
    inner(i)

Much neater, however, is using for/else. The else block fires iff the for loop it is connected with completes. This was very weird to me at first; I thought else should trigger on break. Think of it this way, though: you're searching through a list of things, for item in collection, and you plan to break when you've found the item you're looking for; else, you do something if you exhaust the collection and didn't find what you were looking for. Therefore we can code our loop as follows:

for i in range(10):
    for j in range(9):
        if i <= j:
            break
        print(i, j)
    else:
        # Outer loop is continued
        continue
    print("inner complete!")

This is a little strange, because it would probably be more appropriate to put our print in the else block, but this was the spec: continue the outer loop from the inner loop. Here's a better example with date parsing:

# Try to parse a timestamp with a bunch of formats
for fmt in (JSON, PG, ISO, RFC, HUMAN):
    try:
        ts = datetime.strptime(ts, fmt)
        break
    except ValueError:
        continue
else:
    # Could not parse with any of the formats required
    raise ValueError("could not parse timestamp")

Is this better or worse than the function version of this?

def parse_timestamp(ts):
    for fmt in (JSON, PG, ISO, RFC, HUMAN):
        try:
            return datetime.strptime(ts, fmt)
        except ValueError:
            continue
    raise ValueError("could not parse timestamp")

ts = parse_timestamp(ts)

Let's go to the benchmarks. So basically, there is no meaningful difference, but depending on the context of the implementation, using for/else may be a bit more meaningful or easier to test than having to implement another function. Benchmark code can be found here.
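For the curious, a self-contained version of the kind of comparison behind that claim, using timeit, might look like this. Nothing here is taken from the original benchmark script beyond the two parser variants; the format strings and iteration count are arbitrary choices.

import timeit

setup = """
from datetime import datetime

FORMATS = ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S", "%b %d, %Y")

def parse_for_else(ts):
    for fmt in FORMATS:
        try:
            ts = datetime.strptime(ts, fmt)
            break
        except ValueError:
            continue
    else:
        raise ValueError("could not parse timestamp")
    return ts

def parse_function(ts):
    for fmt in FORMATS:
        try:
            return datetime.strptime(ts, fmt)
        except ValueError:
            continue
    raise ValueError("could not parse timestamp")
"""

# Each call fails on the first format and succeeds on the second,
# so both variants do the same amount of work per call.
print(timeit.timeit("parse_for_else('2018-05-01 12:00:00')", setup=setup, number=10000))
print(timeit.timeit("parse_function('2018-05-01 12:00:00')", setup=setup, number=10000))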
https://bbengfort.github.io/2018/05/continuing-outer-loops-for-else/
My code:

class c:
    def __init__(self, format):
        self.format = format

    def process(self, formatting=self.format):
        print formatting

Error: name 'self' is not defined

Desired behavior:

c("abc").process()      # prints "abc"
c("abc").process("xyz") # prints "xyz"

Answer: You can't really define this as the default value, since the default value is evaluated when the method is defined, before any instances exist. An easy work-around is to do something like this:

class C:
    def __init__(self, format):
        self.format = format

    def process(self, formatting=None):
        formatting = formatting if formatting is not None else self.format
        print formatting

self.format will only be used if formatting is equal to None.

To demonstrate the point of how default values work, see this example:

def mk_default():
    print "mk_default has been called!"

def myfun(foo=mk_default()):
    print "myfun has been called."

print "about to test functions"
myfun("testing")
myfun("testing again")

And the output here:

mk_default has been called!
about to test functions
myfun has been called.
myfun has been called.

Notice how mk_default was called only once (and before the function was ever called!)
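A common variation on the same work-around, useful when None is itself a meaningful value for formatting, is a private sentinel object. This is a general Python idiom, not something from the original question (Python 2 syntax to match the code above):

_UNSET = object()  # unique sentinel; no caller can pass this by accident

class C:
    def __init__(self, format):
        self.format = format

    def process(self, formatting=_UNSET):
        # The real default is resolved at call time, per instance.
        if formatting is _UNSET:
            formatting = self.format
        print formatting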
https://codedump.io/share/a3s3l3gMHN1s/1/python--how-to-pass-default-argument-to-instance-method-with-an-instance-variable
Anton Pevtsov wrote:
> The attached file contains the updated version of the test for
> 21.string.capacity.
> I modified the code and pass the tested string length to the widen
> function and basic_string ctor to avoid the bug with strings containing
> embedded NULs.

Okay. There still are a small number of improvements that I think would make the test better. For one, it would be helpful to line up the arguments to the TEST macro to make them easier to read. Also, it would make the test for each member function more readable if you added arrows pointing to each argument and documenting what each stands for (as is already done in test_reserve). I'm also not sure I understand the purpose of widening "a" when the str argument is 0 in test_string_capacity. It seems that widen should be called with the 0 pointer in this case (widen handles NULL pointers just fine). Finally, there is no difference between invoking the test function like this

    TEST ("Test\0string", 11);

or like this:

    TEST ("Test\000string", 11);

Either way the effect is identical since \0 and \000 both represent the NUL character. Instead I would exercise sequences of consecutive NULs such as "\0", "\0\0", and "\0\0\0" and similar sequences but with non-NULs thrown in, e.g., "\0a\0", "\0ab\0", etc.

> But I suspect a bug in the rw_assert formatting output for {#*S}.
> The wchar string "abc" is displayed as "a\0b" (maybe the function
> dumps each byte, not symbol here).

Yes. That's the expected result. In general, the extended formatting directives (such as %{#*S}) format their arguments so that they are human readable even when the arguments contain non-printable characters. Here's an example:

$ cat t.cpp && ./t

#include <string>
#include <rw_printf.h>

int main ()
{
    const std::string s ("abc\0def", 7);
    const std::wstring ws (L"abc\0def", 7);

    rw_printf ("%{#1S}\n", &s);
    rw_printf ("%{#*S}\n", sizeof (wchar_t), &ws);
}

"abc\0def"
L"abc\0def"

> Btw., maybe it would be useful to move the widen function to rw_char
> header (or some another place in the test driver).

Definitely. I suspect three overloads of rw_widen() will actually be useful (and already are being used in various forms throughout the test suite):

    char* rw_widen (char*, const char*, size_t);
    wchar_t* rw_widen (wchar_t*, const char*, size_t);
    UserChar* rw_widen (UserChar*, const char*, size_t);

where the first is equivalent to memcpy().

Martin
http://mail-archives.apache.org/mod_mbox/stdcxx-dev/200603.mbox/%[email protected]%3E
I'm transferring Matlab's imresize code to Python. Here is the Python version:

from scipy.misc import imresize
import numpy as np

dtest = np.array(([1,2,3],[4,5,6],[7,8,9]))
scale = 1.4
dim = imresize(dtest, 1/scale)

And here is the Matlab version:

dtest = [1,2,3; 4,5,6; 7,8,9];
scale = 1.4;
dim = imresize(dtest, 1/scale);

The scipy.misc.imresize function is a bit odd for me. For one thing, this is what happens when I specify the sample 2D image you provided to a scipy.misc.imresize call on this image with a scale of 1.0. Ideally, it should give you the same image, but what we get is this (in IPython):

In [35]: from scipy.misc import imresize

In [36]: import numpy as np

In [37]: dtest = np.array(([1,2,3],[4,5,6],[7,8,9]))

In [38]: out = imresize(dtest, 1.0)

In [39]: out
Out[39]:
array([[  0,  32,  64],
       [ 96, 127, 159],
       [191, 223, 255]], dtype=uint8)

Not only does it change the type of the output to uint8, but it scales the values as well. For one thing, it looks like it makes the maximum value of the image equal to 255 and the minimum value equal to 0. MATLAB's imresize does not do this, and it resizes an image in the way we expect:

>> dtest = [1,2,3;4,5,6;7,8,9];
>> out = imresize(dtest, 1)

out =

     1     2     3
     4     5     6
     7     8     9

However, you need to be cognizant that MATLAB performs the resizing with anti-aliasing enabled by default. I'm not sure what scipy.misc.imresize does here, but I'll bet that there is no anti-aliasing enabled. As such, I probably would not use scipy.misc.imresize. The closest thing to what you want is either OpenCV's resize function or scikit-image's resize function. Both of these have no anti-aliasing. If you want to make both Python and MATLAB match each other, use the bilinear interpolation method. imresize uses bicubic interpolation by default, and I know for a fact that MATLAB uses custom kernels to do so, so it will be much more difficult to match their outputs. See this post for some more informative results: MATLAB vs C++ vs OpenCV - imresize

For the best results, don't specify a scale - specify a target output size to reproduce results. MATLAB, OpenCV, and scikit-image act differently from each other when specifying a floating point scale. I did some experiments, and by specifying a floating point size, I was unable to get the results to match. Besides which, scikit-image does not support taking in a scale factor. As such, 1/scale in your case is close to a 2 x 2 output size, and so here's what you would do in MATLAB:

>> dtest = [1,2,3;4,5,6;7,8,9];
>> out = imresize(dtest, [2,2], 'bilinear', 'AntiAliasing', false)

out =

    2.0000    3.5000
    6.5000    8.0000

With Python OpenCV:

In [93]: import numpy as np

In [94]: import cv2

In [95]: dtest = np.array(([1,2,3],[4,5,6],[7,8,9]), dtype='float')

In [96]: out = cv2.resize(dtest, (2,2))

In [97]: out
Out[97]:
array([[ 2. ,  3.5],
       [ 6.5,  8. ]])

With scikit-image:

In [100]: from skimage.transform import resize

In [101]: dtest = np.array(([1,2,3],[4,5,6],[7,8,9]), dtype='uint8')

In [102]: out = resize(dtest, (2,2), order=1, preserve_range=True)

In [103]: out
Out[103]:
array([[ 2. ,  3.5],
       [ 6.5,  8. ]])
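Note that scipy.misc.imresize has since been deprecated and removed from SciPy, with Pillow suggested as the replacement. A rough Pillow-based sketch of the same 2 x 2 resize is shown below; be aware that Pillow's BILINEAR filter applies antialiasing-style resampling when downscaling, so the numbers will not exactly match the OpenCV/scikit-image results above (the float32 dtype is needed because Pillow's 'F' mode does not accept float64 arrays):

import numpy as np
from PIL import Image

dtest = np.array(([1,2,3],[4,5,6],[7,8,9]), dtype='float32')
out = np.array(Image.fromarray(dtest).resize((2, 2), Image.BILINEAR))
print(out)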
https://codedump.io/share/xw6TsW7PMzUE/1/how-to-use-matlab39s-imresize-in-python
A clone of pdb, fast and with remote debugging and attach features.

Features

pdb-clone runs the same source code on all the supported versions of Python, which are:
- Python 3: from version 3.2 onward.
- Python 2: version 2.7.

See also the project home page. Report bugs to the issue tracker.

Usage

Invoke pdb-clone as a script to debug other scripts. For example:

$ pdb-clone myscript.py

Or use one of the different ways of running pdb described in the pdb documentation and replace:

import pdb

with:

from pdb_clone import pdb
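For illustration, a minimal session using the documented import substitution might look like this (the script name and its contents are arbitrary):

# myscript.py
from pdb_clone import pdb

def buggy(n):
    total = 0
    for i in range(n):
        pdb.set_trace()   # drop into the pdb-clone debugger here
        total += i
    return total

print(buggy(3))

Run it with python myscript.py and step through with the usual pdb commands (n, s, c, ...), since pdb-clone is a drop-in clone of pdb.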
https://pypi.org/project/pdb-clone/
For Pan and/or Tilt movement, the following APIs are provided by the Pi-Pan library:

do_tilt(y)
Tilt the Pi-Pan head to position 'y'.
Parameters:
y: the tilt position of Pi-Pan, ranging from 80 to 220, where
- 80 = looking down
- 220 = looking up
- 150 = straight ahead (neutral position)

do_pan(x)
Pan the Pi-Pan head to position 'x'.
Parameters:
x: the pan position of Pi-Pan, ranging from 50 to 250.

neutral_pan(x)
Bring the Pi-Pan head to the neutral pan position (straight ahead).

neutral_tilt(x)
Bring the Pi-Pan head to the neutral tilt position (straight ahead).

The Pi-Pan Servo Controller board can control up to 6 servos. The servo pins are marked S0 to S5. Depending on your product configuration, some pin headers may not be populated. If you need to extend the board's functionality, solder standard (0.1 inch pitch) header pins on the board.

pwm(pin, position)
Parameters:

Pi-Light can be turned on (or off) with any combination of three basic colors (red, green, and blue). Turning it on with all three colors at equal intensity will get you white light. To turn it off, set all color values to zero.

createPiLight(red, green, blue)
Parameters:

The program below moves the Pi-Pan head up/down in a while loop.

import time
import os, sys
import pipan

p = pipan.PiPan()

x = 150
while 1:
    # move head down
    while x < 180:
        p.do_tilt (int(x))
        time.sleep(0.1)
        x += 2
    # move head up
    while x > 90:
        p.do_tilt (int(x))
        time.sleep(0.1)
        x -= 2

The program below turns Pi-Light on with white, then changes to the basic colors at 1-second intervals, and then switches it off.

import time
import os, sys
import pilight

pl = pilight.PILIGHT()

pl.createPiLight(255,255,255)
time.sleep(1)
pl.createPiLight(255, 0, 0)
time.sleep(1)
pl.createPiLight(0, 255, 0)
time.sleep(1)
pl.createPiLight(0, 0, 255)
time.sleep(1)
pl.createPiLight(255,255,255)
time.sleep(1)
pl.createPiLight(0, 0, 0)

There are several APIs and programs for taking pictures using the Pi Camera. To take a picture with Pi-Pan, the 'raspistill' program is used, as follows:

# take the picture and save as test.jpg in /var/tmp folder.
subprocess.call(["raspistill", "-o", "/var/tmp/test.jpg", "-rot", "180"])

For more info on taking pictures with the Pi Camera module, visit Matt's page at:
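By analogy with the tilt example above, a pan sweep using the documented 50-250 range would look like this. This is a sketch: the step size, delays, and sweep limits are arbitrary choices, not values from the reference.

import time
import pipan

p = pipan.PiPan()

x = 150  # start at the neutral pan position
while 1:
    # sweep the head one way
    while x < 240:
        p.do_pan(int(x))
        time.sleep(0.1)
        x += 2
    # sweep the head back the other way
    while x > 60:
        p.do_pan(int(x))
        time.sleep(0.1)
        x -= 2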
http://www.mindsensors.com/content/29-pi-pan-python-programming-reference
When raw, gold is yellow and bright in color. Shade it from the sun and its brightness remains the same. It is one of the most beautiful metals in the world: a good conductor of electricity, it doesn't rust, and you can pound it and shape it into different forms without it breaking. That's how I see Python. Python is gold. The language does things that wow you, and one of them is using it to track a phone number. Write eight lines of code and you get the country the phone number belongs to and the name of the service provider. No hacking. Here's how you can do it.

How To Track Phone Number Location With Python

1. Open PyCharm and create a new project

To create a new project, click on File at the top-left corner of your screen. Select New Project. Give your new project a name, then click Create. For example, the name of my project is tracking. PyCharm will display your project name at the left side of the screen with its location. It will look like this: C:\Users\hp\PyCharmProject\tracking.

2. Right-click on the project name

Right-click on the project name. Then click New. Now click on Python file.

3. Give the Python file a name

Name the Python file. Ensure the name ends with .py, for example body.py. Press Enter.

4. Go to Terminal

At the bottom-left of the screen, you will see Terminal. Click on it. Now type the following:

pip install phonenumbers

Run it. This will install the Python phonenumbers library. The phonenumbers library is used for parsing, formatting, storing, and validating international phone numbers. Remember to add the "s" at the end of phonenumbers. The installation takes a few minutes. Close the Terminal when the library is installed successfully.

5. Write in body.py

Click on body.py (or the name you gave the file). Now write the following code:

import phonenumbers

6. Create another Python file

Again, right-click on the project name. Click New. Click Python file. Give your new file a different name, for instance text.py. Let this name also end with .py. Press Enter. The purpose of the new file is to store the number you want to track. You will see the full information below.

7. Write in text.py

In text.py (or the name you gave the file), write the number you want to track with the country code:

number = "+000000000000000000"

The variable number stores the phone number you want to track. Remember to include the country code starting with a +. The number above is just a random example.

8. Click on body.py

Write:

from text import number
from phonenumbers import geocoder

The first line of the code imports the phone number you want to track into the body.py file. geocoder here is a module in phonenumbers. It provides a geographical description, such as the country or region, corresponding to a number.

9. Parse the phone number with two parameters

ch_number = phonenumbers.parse(number, "CH")

Note that the second argument is a default region hint: an ISO 3166-1 two-letter country code ("CH" is Switzerland). It only matters when the number is written without a leading + country code; since our number includes one, the hint is effectively ignored.

10. Run your code

Also in body.py, write:

print(geocoder.description_for_number(ch_number, "en"))

"en" means English: you want the info displayed in English. Note: don't write "eng"; you will get a blank display. Run the code. You will find the Run option at the top of the screen. Click Run, then press Enter. Or click Run 'body'. You will see the country name of the tracked phone number displayed below on your screen.

How To Track the Name of the Network Provider

For every phone number, there is a network provider. Include the code below to find out the name of the network provider of the phone number.
- Click on body.py and write:

from phonenumbers import carrier

service_provider = phonenumbers.parse(number, "RO")
print(carrier.name_for_number(service_provider, "en"))

carrier is a module that helps you get the name of the service provider of the phone number you're tracking. (As with "CH" above, "RO" is just a default region hint, the ISO code for Romania, and is ignored because the number already carries a + country code.) Run your code again. You will see the name of the service provider on your screen.

Here's what all the code looks like:

import phonenumbers
from text import number
from phonenumbers import geocoder

ch_number = phonenumbers.parse(number, "CH")
print(geocoder.description_for_number(ch_number, "en"))

from phonenumbers import carrier

service_provider = phonenumbers.parse(number, "RO")
print(carrier.name_for_number(service_provider, "en"))

Python is the king of automation. It does a lot of things you could think of: WhatsApp message automation, email automation, web scraping, machine learning, artificial intelligence, cryptocurrency, web development, and a lot more.
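As a small extension, not part of the steps above, the same phonenumbers library can also report the time zones associated with a number, which pairs nicely with the country and carrier lookups (the example output is illustrative):

import phonenumbers
from phonenumbers import timezone
from text import number

parsed = phonenumbers.parse(number)
print(timezone.time_zones_for_number(parsed))
# e.g. ('Europe/Zurich',) for a Swiss number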
https://plainenglish.io/blog/how-to-track-phone-number-location-with-python
We begin by importing some helper functions.

from helper import *

Now, let's get the data from the List of helicopter prison escapes Wikipedia article.

url = str('')
data = data_from_url(url)

Let's print the first three rows.

print(data[:3])

[['August 19, 1971', 'Santa Martha Acatitla', 'Mexico', 'Yes', 'Joel David Kaplan Carlos Antonio Contreras Castro', "Joel David Kaplan was a New York businessman who had been arrested for murder in 1962 in Mexico City and was incarcerated at the Santa Martha Acatitla prison in the Iztapalapa borough of Mexico City. Joel's sister, Judy Kaplan, arranged the means to help Kaplan escape, and on August 19, 1971, a helicopter landed in the prison yard. The guards mistakenly thought this was an official visit. In two minutes, Kaplan and his cellmate Carlos Antonio Contreras, a Venezuelan counterfeiter, were able to board the craft and were piloted away, before any shots were fired.[9] Both men were flown to Texas and then different planes flew Kaplan to California and Castro to Guatemala.[3] The Mexican government never initiated extradition proceedings against Kaplan.[9] The escape is told in a book, The 10-Second Jailbreak: The Helicopter Escape of Joel David Kaplan.[4] It also inspired the 1975 action movie Breakout, which starred Charles Bronson and Robert Duvall.[9]"], ['October 31, 1973', 'Mountjoy Jail', 'Ireland', 'Yes', "JB O'Hagan Seamus TwomeyKevin Mallon", 'On October 31, 1973 an IRA member hijacked a helicopter and forced the pilot to land in the exercise yard of Dublin\'s Mountjoy Jail\'s D Wing at 3:40\xa0p.m., October 31, 1973. Three members of the IRA were able to escape: JB O\'Hagan, Seamus Twomey and Kevin Mallon. Another prisoner who also was in the prison was quoted as saying, "One shamefaced screw apologised to the governor and said he thought it was the new Minister for Defence (Paddy Donegan) arriving. I told him it was our Minister of Defence leaving." The Mountjoy helicopter escape became Republican lore and was immortalized by "The Helicopter Song", which contains the lines "It\'s up like a bird and over the city. There\'s three men a\'missing I heard the warder say".[1]'], ['May 24, 1978', 'United States Penitentiary, Marion', 'United States', 'No', 'Garrett Brock TrapnellMartin Joseph McNallyJames Kenneth Johnson', "43-year-old Barbara Ann Oswald hijacked a Saint Louis-based charter helicopter and forced the pilot to land in the yard at USP Marion. While landing the aircraft, the pilot, Allen Barklage, who was a Vietnam War veteran, struggled with Oswald and managed to wrestle the gun away from her. 
Barklage then shot and killed Oswald, thwarting the escape.[10] A few months later Oswald's daughter hijacked TWA Flight 541 in an effort to free Trapnell."]]

Next, we drop the last column (the long description) from each row.

index = 0
for row in data:
    data[index] = row[:-1]
    index = index + 1

print(data[:3])

[['August 19, 1971', 'Santa Martha Acatitla', 'Mexico', 'Yes', 'Joel David Kaplan Carlos Antonio Contreras Castro'], ['October 31, 1973', 'Mountjoy Jail', 'Ireland', 'Yes', "JB O'Hagan Seamus TwomeyKevin Mallon"], ['May 24, 1978', 'United States Penitentiary, Marion', 'United States', 'No', 'Garrett Brock TrapnellMartin Joseph McNallyJames Kenneth Johnson']]

Then we replace each date with just its year.

for row in data:
    date = fetch_year(row[0])
    row[0] = date

print(data[:3])

[[1971, 'Santa Martha Acatitla', 'Mexico', 'Yes', 'Joel David Kaplan Carlos Antonio Contreras Castro'], [1973, 'Mountjoy Jail', 'Ireland', 'Yes', "JB O'Hagan Seamus TwomeyKevin Mallon"], [1978, 'United States Penitentiary, Marion', 'United States', 'No', 'Garrett Brock TrapnellMartin Joseph McNallyJames Kenneth Johnson']]

min_year = min(data, key=lambda x: x[0])[0]
max_year = max(data, key=lambda x: x[0])[0]

years = []
for y in range(min_year, max_year + 1):
    years.append(y)

attempts_per_year = []
for y in years:
    attempts_per_year.append([y, 0])

print(attempts_per_year)

[[1971, 0], [1972, 0], [1973, 0], [1974, 0], [1975, 0], [1976, 0], [1977, 0], [1978, 0], [1979, 0], [1980, 0], [1981, 0], [1982, 0], [1983, 0], [1984, 0], [1985, 0], [1986, 0], [1987, 0], [1988, 0], [1989, 0], [1990, 0], [1991, 0], [1992, 0], [1993, 0], [1994, 0], [1995, 0], [1996, 0], [1997, 0], [1998, 0], [1999, 0], [2000, 0], [2001, 0], [2002, 0], [2003, 0], [2004, 0], [2005, 0], [2006, 0], [2007, 0], [2008, 0], [2009, 0], [2010, 0], [2011, 0], [2012, 0], [2013, 0], [2014, 0], [2015, 0], [2016, 0], [2017, 0], [2018, 0], [2019, 0], [2020, 0]]

for row in data:
    for ya in attempts_per_year:
        y = ya[0]
        if row[0] == y:
            ya[1] += 1

print(attempts_per_year)

[[1971, 1], [1972, 0], [1973, 1], [1974, 0], [1975, 0], [1976, 0], [1977, 0], [1978, 1], [1979, 0], [1980, 0], [1981, 2], [1982, 0], [1983, 1], [1984, 0], [1985, 2], [1986, 3], [1987, 1], [1988, 1], [1989, 2], [1990, 1], [1991, 1], [1992, 2], [1993, 1], [1994, 0], [1995, 0], [1996, 1], [1997, 1], [1998, 0], [1999, 1], [2000, 2], [2001, 3], [2002, 2], [2003, 1], [2004, 0], [2005, 2], [2006, 1], [2007, 3], [2008, 0], [2009, 3], [2010, 1], [2011, 0], [2012, 1], [2013, 2], [2014, 1], [2015, 0], [2016, 1], [2017, 0], [2018, 1], [2019, 0], [2020, 1]]

%matplotlib inline
barplot(attempts_per_year)

2009, 2007, 2001, and 1986 had the most prison break attempts, at 3 per year.

countries_frequency = df["Country"].value_counts()
print_pretty_table(countries_frequency)

(This final cell assumes the data has also been loaded into a pandas DataFrame df with a Country column, a step not shown above.)
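As an aside, the year-counting cells above can be collapsed into a few lines with collections.Counter. This is an equivalent alternative for reference, not what the notebook ran:

from collections import Counter

# Count how many rows carry each year, then expand over the full range.
counts = Counter(row[0] for row in data)
attempts_per_year = [[y, counts.get(y, 0)] for y in range(min_year, max_year + 1)]
print(attempts_per_year)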
https://nbviewer.org/urls/community.dataquest.io/uploads/short-url/y0EvjbGfZqjabCw28DNDDtUdx4W.ipynb
I'm very happy with the progress of this right now!

- Last week was my first SPI project!
- This week is my first I2C project!
- I can't tell you how happy I am that both are working side by side.
- I think I bricked a Teensy!
- Check out the fast I2C mouse action in the videos! (Turn the sound off: annoying fan noise in the background.)

First up is one mouse: a Teensy 3.1 with Win7 on the left, and a Teensy-LC on an Ubuntu 14 computer on the right. The two microprocessors simultaneously act like mice, and the mouse movement data that is scrolling on both serial monitors is identical. The second video is set up for high contrast. Up until now I was skeptical; I thought there would be some sort of noticeable delay or sluggishness in the mouse movement. Both mouse cursors seem to move identically, as if they were from the same microprocessor. I was also afraid that SPI and I2C would have eternal conflicts, but both work great, even when hot swapping the computers' USB ports.

Saved the best for last. They are not going from one screen to the next. They are each individually just wrapping the mouse cursor from one side of the same screen to the other. (Firefox is displaying Hackaday message notifications on both computers :)

This is a big milestone for me. This means all the software will work. It won't be long until I have it switching automatically. There is no data being sent from the computers at all, so it maintains the "driverless" feature. Not only are the mice hot swappable, but the computers are also.

My first I2C experience

This was kind of scary because of what looked like an infinite number of I2C problems online, some requiring expensive test equipment to diagnose. There were a variety of pullup resistor suggestions online, ranging from 190 ohms to 10K. I found Paul's Wire library information to be simple and straightforward at PJRC.com.

My desk is a mess :) I had two 4.7K resistors, but one looked deformed and the other I broke a lead off of. I had some others ready in case I had to experiment with different values. I soldered in two 2.2K resistors and it worked the first time with no hardware adjustments. I will be ordering better headers for the farm soon.

Bottom view:

Oops! Is this better? Resistors are on the top side.

I bricked a Teensy-LC :(

I am hoping I might find a cure. I have not researched it yet because I have been so busy with the software. My meter says it has 5V and 3.3V on the board; otherwise it is very brickish. It seemed to happen when I was screwing up the I2C bytes being transferred.

My workarounds, for now

The Teensy 3.1 acts like the master and as the first slave at the same time, by mashing the software together; then the Teensy-LC is added as a second slave. It works for now, but I want to eventually separate them, to leave as much spare processing power as possible on the master. That will allow more flexibility for hardware accessories like encoders and other sensors, or software patches like global acceleration and cross-platform gestures, for example.

I used the basic Arduino Wire library. Having very small functions that are coordinated together worked the best. I used: write three bytes, end, on receive get three bytes, repeat. I didn't know I would have to work out my own method of transferring bytes on both ends for the 8-bit positive and negative mouse data (-127 to +127 for each axis). The Arduino site is usually not much help, but this worked. I kept it really simple.
I2C and big numbers, a translation of the previous paragraph into code:

#include <Wire.h>

Master:

int actualX;
int getX;
char type = 'x';
uint8_t x = 0;
uint8_t y = 0;

void wiresend() {
  Wire.beginTransmission(1);
  Wire.write(type);
  Wire.write(x);
  Wire.write(y);
  Wire.endTransmission(); // See Wire.onReceive
}

void loop() {
  // actualX is the x axis mouse data received
  getX = actualX + 127;
  x = highByte(getX);
  y = lowByte(getX);
  wiresend();
  //Plus other data...
}

Slave:

void getMouse (int x) {
  type = Wire.read();
  x = Wire.read();
  y = Wire.read();
}

void setup() {
  Serial.begin(115200);
  Wire.begin(1);
  Wire.onReceive(getMouse);
}

void loop() {
  if (type == 'x') {
    actualX = (word(x, y)) - 127;
    //Plus the y axis and other data...
  }
}

Comments:

If I were you, I would buy the chip on the Teensy and just solder a new one on, if you have access to rework tools, that is. I did this with my ChipKit board when I bricked the PIC32 with a short. I just sample-ordered a new micro and soldered it on.

Great suggestion!

That's a lot of exclamation points! I'm sure I could find a way to scare you about your I2C setup! This is a great write-up!

Thank you! The first step is always the scariest.
https://hackaday.io/project/2872-driverless-mouse-and-keyboard-sharing/log/20789-i2c-challenge/discussion-30740
java.lang.Object
  java.io.Reader
    oracle.adfnmc.java.io.BufferedReader
      oracle.adfnmc.java.io.LineNumberReader

public class LineNumberReader

LineNumberReader is a buffered character input reader which counts line numbers as data is being read. The line number starts at 0 and is incremented any time '\r', '\n', or '\r\n' is read.

public LineNumberReader(java.io.Reader in)
Constructs a new buffered LineNumberReader on the Reader in. The default buffer size (8K) is allocated and all reads can now be filtered through this LineNumberReader.
Parameters:
  in - the Reader to buffer reads on.

public LineNumberReader(java.io.Reader in, int size)
Constructs a new buffered LineNumberReader on the Reader in. The buffer size is specified by the parameter size and all reads can now be filtered through this LineNumberReader.
Parameters:
  in - the Reader to buffer reads on.
  size - the size of buffer to allocate.

public int getLineNumber()

public void mark(int readlimit) throws java.io.IOException
The parameter readLimit indicates how many characters can be read before a mark is invalidated. Sending reset() will reposition the reader back to the marked position provided readLimit has not been surpassed. The lineNumber associated with this marked position will also be saved and restored when reset() is sent, provided readLimit has not been surpassed.
Overrides:
  mark in class BufferedReader
Parameters:
  readlimit - an int representing how many characters must be read before invalidating the mark.
Throws:
  java.io.IOException - If an error occurs attempting to mark this LineNumberReader.

public int read() throws java.io.IOException
Overrides:
  read in class BufferedReader
Throws:
  java.io.IOException - If the reader is already closed or another IOException occurs.

public int read(char[] buffer, int offset, int count) throws java.io.IOException
Reads at most count chars from this LineNumberReader and stores them in char array buffer starting at offset offset. Answers the number of chars actually read or -1 if no chars were read and end of reader was encountered. This implementation reads chars from the target stream. The line number count is incremented if a line terminator is encountered. A line delimiter sequence is determined by '\r', '\n', or '\r\n'. In this method, the sequence is always translated into '\n'.
Overrides:
  read in class BufferedReader
Parameters:
  buffer - the char array in which to store the read chars.
  offset - the offset in buffer to store the read chars.
  count - the maximum number of chars to store in buffer.
Throws:
  java.io.IOException - If the reader is already closed or another IOException occurs.

public java.lang.String readLine() throws java.io.IOException
Answers a String representing the next line of text available in this LineNumberReader. A line is represented by 0 or more characters followed by '\n', '\r', "\n\r" or end of stream. The String does not include the newline sequence.
Overrides:
  readLine in class BufferedReader
Throws:
  java.io.IOException - If the LineNumberReader is already closed or some other IO error occurs.

public void reset() throws java.io.IOException
Resets this reader to the last marked location. If readlimit has been passed or no mark has been set, throw IOException. This implementation resets the target reader. It also resets the line count to what it was when this reader was marked.
Overrides:
  reset in class BufferedReader
Throws:
  java.io.IOException - If the reader is already closed or another IOException occurs.

public long skip(long count) throws java.io.IOException
Skips count number of chars in this LineNumberReader. Subsequent read()'s will not return these chars unless reset() is used. This implementation skips count number of chars in the target stream and increments the lineNumber count as chars are skipped.
Overrides:
  skip in class BufferedReader
Parameters:
  count - the number of chars to skip.
Throws:
  java.io.IOException - If the reader is already closed or another IOException occurs.
http://docs.oracle.com/cd/E21764_01/apirefs.1111/e17503/oracle/adfnmc/java/io/LineNumberReader.html
Building a Covid-19 Chatbot powered by SAP BTP (Part 2/4): Accelerating Data Transformation and Governance with SAP Data Intelligence

Introduction

This article is the second of a four-part blog post series by Sebastian Schuermann and me. We met in the 2021 Career Starter Program, and together we took the opportunity to discover and work with the latest technologies as part of the SAP Innovator Challenge 2021. Together, we developed a COVID-19 chatbot (Vyri) as a solution to keep users updated regarding regulations, news, and statistics about the current pandemic. In this series, we will present our chatbot solution, which we implemented based on SAP Business Technology Platform (SAP BTP) components. The goal of the blog posts is to share our personal experience with the SAP BTP solution portfolio and its integration. This second blog post, “Building a Covid-19 Chatbot powered by SAP BTP (Part 2/4): Accelerating Data Transformation and Governance with SAP Data Intelligence”, will highlight how SAP Data Intelligence’s data extraction, data transformation, and data governance capabilities are incorporated within the Covid-19 use case presented. The other articles in this series can be accessed here:

- Setting the Stage (Part 1/4)
- Creating a Chatbot with SAP Conversational AI (Part 3/4)
- Data Modeling and Advanced Analytics with SAP Data Warehouse Cloud and SAP Analytics Cloud (Part 4/4)

Agenda

- What is SAP Data Intelligence?
- Integration with SAP Data Warehouse Cloud
- Data Ingestion Pipelines
- Custom Python Operator with Dockerfile
- RESTful Web Service with SAP Data Intelligence
- Data Governance Capabilities
- Conclusion

1. What is SAP Data Intelligence?

SAP Data Intelligence is a comprehensive data management solution that aims to transform distributed data sources into vital insights and deliver innovation at scale. Data Intelligence is much more than an ETL tool. SAP Data Intelligence allows you to manage metadata across different data landscapes and to create a comprehensive data catalog. Data processing from different SAP and 3rd party sources can be orchestrated by Data Intelligence via powerful data pipelines. Furthermore, Data Intelligence supports the operationalization of machine learning. SAP Data Intelligence can be implemented both on-premises and in the cloud. SAP Data Intelligence is container-based software and scales via Docker containers. More information about the SAP Data Intelligence product can be found in this blog post or on the official website.

2. Integration with SAP Data Warehouse Cloud

SAP Data Warehouse Cloud is a powerful Software as a Service (SaaS) solution that combines data warehousing, data integration, and analytic capabilities, built on the SAP HANA Cloud database. The integration of SAP Data Intelligence and SAP Data Warehouse Cloud is seamless. Data Warehouse Cloud can be used as a persistence layer into which the data pipelines in Data Intelligence ingest data, or from which they read it. Furthermore, the SAP Data Warehouse Cloud data can be managed within the Metadata Explorer in Data Intelligence. The following figure shows how the SAP Data Warehouse Cloud tables generated by Data Intelligence in our scenario can be consumed in the Metadata Explorer of Data Intelligence:

SAP Data Warehouse Cloud tables in Metadata Explorer in SAP Data Intelligence (Image Source: Own Image)

The technical connection between the solutions is not the focus of this article. There is a very good blog article by Yuliya Reich.
I would just like to mention that the SAP Data Warehouse instance can be connected via the connection type “HANA_DB” in Data Intelligence, and almost all HANA operators can also be used for SAP Data Warehouse Cloud (see my blog post on HANA operators in Data Intelligence).

SAP Data Warehouse Cloud Connection in SAP Data Intelligence (Image Source: Own Image)

3. Data Ingestion Pipelines

As described in chapter 1, SAP Data Intelligence enables the orchestration and integration of disparate data sources via powerful data pipelines. In our first blog post (link here), we already described the use case and from which data sources we retrieve information. The pipelines developed for the Vyri chatbot are described in more detail below:

- Covid-19 Regulations

The current legal situation regarding Covid-19 is subject to rapid changes in Germany. With this pipeline, the current regulations in various areas of public life (gastronomy, tourism, sports, …) are ingested per county. For this purpose, an Excel file, in which the rules were prototypically maintained, is stored in the SAP Data Intelligence internal data lake. The pipeline looks as follows:

Pipeline for retrieving the Covid-19 Measures per German County (Image Source: Own Image)

With the Read File operator, the current version of the Excel file with the regulations is retrieved from the data lake. With the SAP HANA Client, the CSV file is written into a table in SAP Data Warehouse Cloud. The Graph Terminator terminates the pipeline. The Wiretap operator can be used to monitor and debug the messages.

- Covid-19 Statistics

The current statistics (number of cases, number of deaths, incidence rate, etc.) for the Covid-19 pandemic in Germany are provided daily by the Robert Koch Institute (RKI) per county (Ger.: Landkreis). The following pipeline is used to retrieve the data from the Robert Koch Institute API on a daily basis and then persist it in SAP Data Warehouse Cloud. The pipeline looks as follows:

Pipeline for consuming the current Covid-19 Statistics per German County (Image Source: Own Image)

In the RKI Request Generator (a JavaScript operator), the parameters that are sent to the RKI API are defined. In the OpenAPI Client, the connection to the RKI API is configured. The operator sends the request to the API and transmits the response of the interface (a JSON file) to its output port. In the JSON Formatter operator (a base Python operator), the JSON file is formatted so that it can be written to a table in SAP Data Warehouse Cloud via the SAP HANA Client. I have published a blog post on how a JSON file can be written to SAP HANA or SAP Data Warehouse Cloud via the SAP HANA Client operator.

- Covid-19 News

Since the reporting on the Covid-19 pandemic is constantly changing, the following pipeline is intended to ingest the latest news from the German media on the pandemic. A News API is connected to extract the news. The pipeline looks like this:

Pipeline for ingesting current Covid-19 News (Image Source: Own Image)

The pipeline is structured analogously to the Covid-19 Statistics pipeline. The difference is that the parameters for the News API are defined in the Covid News Request Generator operator and the connection to the News API is stored in the OpenAPI Client. The JSON Formatter and SAP HANA Client are adapted to the data model provided by the News API.

- Covid-19 Tweets

In addition to current Covid-19 news, current tweets on the topic of coronavirus are also collected for Vyri.
For this purpose, Twitter is connected via the Python client Tweepy. The pipeline looks like this:

Pipeline for receiving current Tweets related to Covid-19 (Image Source: Own Image)

The Twitter Reader operator is a Python operator that reads the latest tweets via the Python library Tweepy. For this, it explicitly filters for keywords related to Covid-19 and for the German language. Within the operator, the response of the Twitter client is converted into a suitable JSON format, so that the results can be written directly via the SAP HANA Client operator into an SAP Data Warehouse Cloud table.

- Sentiment Analysis of Covid-19 News and Covid-19 Tweets

To get a sense of the sentiment in the German media and Twitter community on the development of the Covid-19 pandemic, the sentiment of news stories and tweets is analyzed via a pipeline. For this purpose, the data is sent to Amazon Web Services’ (AWS) natural language processing service Comprehend and analyzed. The pipeline looks like this:

Pipeline for analysing the sentiment of current Covid-19 News and Tweets with AWS Comprehend

The SQL Statement Generator generates a SQL statement that reads the latest messages or tweets from the respective SAP Data Warehouse Cloud tables, and the processing is then done by the SAP HANA Client. The AWS Comprehend Sentiment Analyzer (a custom Python operator) takes a list of messages or tweets as input and sends them to AWS Comprehend. The result is further processed (e.g., the average is calculated) and then sent to the SAP HANA Client, which persists the results into an SAP Data Warehouse Cloud table. The AWS Comprehend Sentiment Analyzer is described in more detail in chapter 4.

4. Custom Python Operator with Dockerfile

In addition to the standard operators available for SAP Data Intelligence, it is possible to implement custom operators in various runtimes with their own logic. Currently, operators can be implemented in Python 3.6, Node.js, C++, ABAP, R, JavaScript and Go. The documentation for this can be accessed here and here. Each programming language has its own base operator. Custom operators can then be created, which extend the respective base operator. A custom icon can then be selected for the operators, input and output ports can be defined with data types, custom fields can be created for configuration, and documentation can also be created. Yuliya Reich has already written a helpful blog post on the creation of a Python operator. In our use case, we mainly extended the Python operator with custom logic (see chapter 3). In the following, the operator AWS Comprehend Sentiment Analyzer is presented as an example.

The command api.set_port_callback("input", on_input) is used to define that a message routed into the operator through the input port will call the on_input method. Then a client for the AWS service Comprehend is created and the input message (a list of strings) is sent to AWS for sentiment analysis. The result of this analysis is used as the body for the output message. The api.send("output", api.Message([body], input_message_attributes)) command sends the output message to the output port of the operator.
def on_input(data):

    # Credentials
    ACCESS_KEY = 'XXX'
    SECRET_KEY = 'XXX'

    # Generate AWS Comprehend Client
    comprehend = boto3.client(
        service_name='comprehend',
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
        region_name='eu-west-2'
    )

    # Get the input message
    input_message_body = data.body
    input_message_attributes = data.attributes

    # Convert the message to a text list
    textList = [i["CONTENT"] for i in input_message_body]

    # Load sentiment from AWS Comprehend
    sentiment = comprehend.batch_detect_sentiment(TextList=textList, LanguageCode='de')

    # Calculate Sentiment Average and prepare output JSON for DWC table
    body = calculate_sentiment_average(sentiment)

    api.send("output", api.Message([body], input_message_attributes))

api.set_port_callback("input", on_input)

The base Docker image of the base Python3 operator already contains some Python libraries by default, such as Tweepy, the Python client for Twitter. The library "boto3", the Python client for using AWS services, is not included in this base image. Since the data pipelines in Data Intelligence run in Docker containers (see chapter 1), the custom operator for AWS sentiment analysis therefore cannot run in the base container for the Python3 operator. Instead, a Dockerfile must be created that extends the base image for Python3 with the boto3 library (see image below). A detailed description of how to create a Dockerfile and build the Docker image can be found in Thorsten Hapke's blog post and in the official documentation.

Simple Dockerfile for installing the „boto3“ library (Image Source: Own Image)

In this Dockerfile, the added library must also be declared in the tags (optionally with a version number). The Dockerfile must then be built once initially.

Tag the “boto3” library within the Dockerfile (Image Source: Own Image)

The custom operator must reference the required library's tag in the Tags tab (see next image). This way, the correct Docker image for the operator's container is selected based on the operator's tags when the pipeline is started.

Custom Python Operator with “boto3” tag (Image Source: Own Image)

5. RESTful Web Service with SAP Data Intelligence

With the OpenAPI Servlow operator, RESTful web services can be offered in SAP Data Intelligence, which are built according to the OpenAPI specification. These provide a programmatic interface to a publicly exposed endpoint that serves messages in JSON format. Ian Henry and Jens Rannacher have written very good blog posts about how a RESTful web service can be offered via SAP Data Intelligence. Furthermore, there is an openSAP Microlearning about it.

In our use case, the OpenAPI Servlow operator is integrated in a pipeline which runs 24/7. The pipeline is shown below. The OpenAPI Servlow operator accepts authorized requests through the exposed endpoint. The RKI Statement Generator (a custom Python operator) detects which of the endpoint's services was requested and with which parameters, and generates an SQL statement from that. In the SAP HANA Client, this SQL statement is then sent to the SAP Data Warehouse Cloud instance. The result of the query is sent back to the API endpoint in JSON format. The Wiretap operators are used in this use case to monitor and debug the queries.

Pipeline with “OpenAPI Servlow“ Operator (Image Source: Own Image)

There are several configuration options for the operator; details can be found in the documentation of the operator here. The “Base Path” option can be used to set the path under which the API endpoint can later be reached.
Under “Swagger Specification” the offered RESTful web service can be defined and documented using Swagger. Swagger is an interface description language that specifies the offered API using JSON. Under “Max Concurrency” you can specify how many concurrent requests the service accepts. The following figure shows the prototype implementation of the operator in our graph:

Configuration Options of “OpenAPI Servlow” Operator (Image Source: Own Image)

The following shows, for the service “getLKData” with the parameter “name” (which delivers the Covid-related statistics per county in Germany), how the services of an API endpoint can be documented using Swagger:

"/getLKData/{name}": {
  "get": {
    "description": "Show the RKI data for a specific German district (Landkreis)",
    "produces": [
      "application/json"
    ],
    "summary": "Show the RKI data for a specific German district (Landkreis)",
    "operationId": "getLKData",
    "parameters": [
      {
        "type": "string",
        "description": "Landkreis Name (district)",
        "name": "name",
        "in": "path",
        "required": true
      }
    ],
    "responses": {
      "200": {
        "description": "RKI Data Landkreis"
      }
    }
  }
},

As soon as a pipeline with the OpenAPI Servlow operator is running, the web service can be reached via HTTP. The URL at which a service can be addressed is structured as follows:

URL for API Access (Image Source: Own Image)

Currently, only basic authentication with an SAP Data Intelligence user can be used as an authentication option (username/password). Attention: the username must be specified as follows: <tenant name>/<username>. An HTTP request of a service via the tool “Postman” and the response as JSON are shown below:

HTTP Request of API Service of Vyri (Get Covid data for one county) via the tool POSTMAN (Image Source: Own Image)
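To round off this section, here is a minimal sketch (not from the original post) of how such a request could be scripted instead of using Postman. The host name and service path below are placeholders, since the real URL follows the structure shown in the figure above; only the "<tenant>/<username>" credential format is taken from the post itself.

import requests

# Assumptions: host, base path, county name and credentials are placeholders
# for your own SAP Data Intelligence instance.
TENANT = "default"
USER = "my-di-user"
PASSWORD = "my-password"

# hypothetical URL; substitute the structure shown in the figure above
url = "https://my-di-host.example.com/vyri/getLKData/Heidelberg"

response = requests.get(url, auth=(TENANT + "/" + USER, PASSWORD))
print(response.status_code)
print(response.json())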
Selection of related objects for the respective glossary entry (Image Source: Own Image)

The glossary entries can subsequently also be found in the Data Intelligence data catalog in the datasets. Furthermore, they can also be found in the corresponding linked column details. Thus, the relationship between the business term and the technical term is directly visible in the dataset.

Glossary entry linked to a database object (Image Source: Own Image)

Glossary entry linked to a table column (Image Source: Own Image)

Data quality rules

To make it possible for us to monitor the quality of our data, we have defined rules and created dashboards to monitor the quality of the data and to view the trend of the data quality. Rules help us align data with business constraints and requirements. In combination with the Rule Books from Data Intelligence, the rules can be bound to the datasets. We did this by assigning a column to a specific parameter. For our rules, we used the existing rule category Accuracy and did not create a new one. We created a rulebook and, as an example, we created two rules, AGS_Team4131 and R-Value_Team4131. In the real world, of course, there would be several rules; this is just an example to show what is possible.

Rulebook in Data Intelligence (Image Source: Own Image)

For the rule R-Value_Team4131 we used the parameter R-Value. The condition here is that the R-Value is equal to or greater than 0.

Rule creation in Data Intelligence (Image Source: Own Image)

With Rule Bindings, we associated the rule with our dataset GV_RKI_COVDATA. By mapping the parameter to the corresponding column RS, we can apply the rule to the record and the corresponding column.

Rule Binding in Data Intelligence (Image Source: Own Image)

The rule helped us to constantly check the quality of our data, so that we could make valid statements, for example about the number of COVID-infected people.

Results after the rules have been executed (Image Source: Own Image)

We also created a dashboard for our rules that shows whether the dataset has passed the rules or not.

Rulebook (Image Source: Own Image)

7. Conclusion

As shown in this blog post, SAP Data Intelligence offers very powerful possibilities to integrate and manage data from various sources (SAP sources like SAP Data Warehouse Cloud, but also external data like APIs). By developing your own operators using Python, JavaScript, Go or R, very broad scenarios can be implemented. Through the OpenAPI Servlow operator, an API can be provided without much effort, which can offer data from multiple sources (in our case Data Warehouse Cloud). The data governance capabilities of Data Intelligence are also very extensive and have only been glimpsed by us. The machine learning capabilities offered by Data Intelligence were not considered in our use case but would certainly be an exciting addition.

Thank you for reading! We hope you find this post helpful and will also read the following posts in our blog post series. For any questions or feedback just leave a comment below this post.

Best wishes,
Tim & Sebastian

Find more information and related blog posts on the topic page for SAP Data Intelligence. If you have questions about SAP Data Intelligence you can submit them in the Q&A area for SAP Data Intelligence in the SAP Community.

Dear Tim & Sebastian 🙂 This is a very comprehensive one for SAP Data Intelligence. I will bookmark this, and interesting info with AWS Comprehend.
Thanks for the documentation.
Regards,
Leena

Hi Leena, thanks for your feedback 🙂
Best wishes,
Tim
FeaturePython Objects

Contents

Introduction

FeaturePython objects (also often referred to as 'Scripted Objects') provide users the ability to extend FreeCAD with objects that integrate seamlessly into the FreeCAD framework. This encourages:

- Rapid prototyping of new objects and tools with custom Python classes.
- Serialization through 'App::Property' objects, without embedding any script in the FreeCAD document file.
- Creative freedom to adapt FreeCAD for any task!

This wiki will provide you with a complete understanding of how to use FeaturePython objects and custom Python classes in FreeCAD. We're going to construct a complete, working example of a FeaturePython custom class, identifying all of the major components and gaining an intimate understanding of how everything works as we go.

How Does It Work?

FreeCAD comes with a number of default object types for managing different kinds of geometry. Some of them have 'FeaturePython' alternatives that allow for user customization with a custom Python class. The custom Python class simply takes a reference to one of these objects and modifies it in any number of ways. For example, the Python class may add properties directly to the object, modify other properties when it's recomputed, or link it to other objects. In addition, the Python class implements certain methods to enable it to respond to document events, making it possible to trap object property changes and document recomputes.

It's important to remember, however, that for as much as one can accomplish with custom classes and FeaturePython objects, when it comes time to save the document, only the FeaturePython object itself is serialized. The custom class and its state are not retained between document reloads. Doing so would require embedding script in the FreeCAD document file, which poses a significant security risk, much like the risks posed by embedding VBA macros in Microsoft Office documents.

Thus, a FeaturePython object ultimately exists entirely apart from its script. The inconvenience posed by not packing the script with the object in the document file is far less than the risk posed by running a file embedded with an unknown script. However, the script module path is stored in the document file. Therefore, a user need only install the custom Python class code as an importable module following the same directory structure to regain the lost functionality.

Setting up your development environment

To begin, FeaturePython Object classes need to act as importable modules in FreeCAD. That means you need to place them in a path that exists in your Python environment (or add it specifically). For the purposes of this tutorial, we're going to use the FreeCAD user Macro folder, though if you have another idea in mind, feel free to use that instead!

If you don't know where the FreeCAD Macro folder is, type 'FreeCAD.getUserMacroDir(True)' in FreeCAD's Python console. The location is configurable but, by default, you can go there as follows:

- Windows: Type '%APPDATA%/FreeCAD/Macro' in the filepath bar at the top of Explorer
- Linux: Navigate to /home/USERNAME/.FreeCAD/Macro
- Mac: Navigate to /Users/USERNAME/Library/Preferences/FreeCAD/Macro

Now we need to create some files.

- In the Macro folder create a new folder called fpo.
- In the fpo folder create an empty file: __init__.py.
- In the fpo folder, create a new folder called box.
- In the box folder create two files: __init__.py and box.py (leave both empty for now)

Notes:

- The fpo folder provides a nice spot to play with new FeaturePython objects and the box folder is the module we will be working in.
- __init__.py tells Python that the folder is an importable module, and box.py will be the class file for our new FeaturePython Object.

Your directory structure should look like this:

.FreeCAD
|--> Macro
    |--> fpo
        |--> __init__.py
        |--> box
            |--> __init__.py
            |--> box.py

With our module paths and files created, let's make sure FreeCAD is set up properly:

- Start FreeCAD (if it isn't already open)
- Enable Python Console and Report Views (View -> Panels -> Report view and Python console) (learn more about it here)
- In your favorite code editor, navigate to the /Macro/fpo/box folder and open box.py

It's time to write some code!

A Very Basic FeaturePython Object

Let's get started by writing our class and its constructor:

class box():

    def __init__(self, obj):
        """
        Constructor

        Arguments
        ---------
        - obj: a variable created with FreeCAD.Document.addObject('App::FeaturePython', '{name}').
        """

        self.Type = 'box'
        obj.Proxy = self

The __init__() method breakdown

In the box.py file at the top, add the following code:

import FreeCAD as App

def create(obj_name):
    """
    Object creation method
    """

    obj = App.ActiveDocument.addObject('App::FeaturePython', obj_name)
    fpo = box(obj)

    return obj

The create() method breakdown

The create() method is not required, but it provides a nice way to encapsulate the object creation code.

Testing the Code

Now we can try our new object. Save your code and return to FreeCAD, making sure you've opened a new document. You can do this by pressing CTRL+n or selecting File -> New.

In the Python Console, type the following:

>>> from fpo.box import box

Now, we need to create our object:

>>> box.create('my_box')

You should see a new object appear in the tree view at the top left labelled my_box. Note that the icon is gray. FreeCAD is simply telling us that the object is not able to display anything in the 3D view... yet. Click on the object and note what appears in the property panel under it. There's not very much - just the name of the object. We'll need to add some properties in a bit.

Let's also make referencing our new object a little more convenient:

>>> mybox = App.ActiveDocument.my_box

And then we should take a look at our object's attributes:

>>> dir(mybox)
['Content', 'Document', 'ExpressionEngine', 'InList', 'InListRecursive', 'Label', 'MemSize', 'Module', 'Name', 'OutList', 'OutListRecursive', 'PropertiesList', 'Proxy', 'State', 'TypeId', 'ViewObject', '__class__', ... 'setEditorMode', 'setExpression', 'supportedProperties', 'touch']

There are a lot of attributes there because we're accessing the native FreeCAD FeaturePython object that we created in the first line of our create() method. The Proxy property we added in our __init__() method is there, too. Let's inspect that by calling dir() on the Proxy object:

>>> dir(mybox.Proxy)
['Object', 'Type', '__class__', '__delattr__', '__dict__', '__dir__', ... '__str__', '__subclasshook__', '__weakref__']

Once we inspect the Proxy property, we can see our Object and Type properties. This means we're accessing the custom Python object defined in box.py. Call the Type property and look at the result:

>>> mybox.Proxy.Type
'box'

Sure enough, it returns the value we assigned, so we know we're accessing the custom class itself through the FeaturePython object.
Likewise, we can access the FreeCAD object (not our Python object) by using the Object attribute:

>>> mybox.Proxy.Object

That was fun! But now let's see if we can make our class a little more interesting... and maybe more useful.

Adding Properties

Properties are the lifeblood of a FeaturePython class. Fortunately, FreeCAD supports a number of property types for FeaturePython classes. These properties are attached directly to the FeaturePython object itself and fully serialized when the file is saved. That means, unless you want to serialize the data yourself, you'll need to find some way to wrangle it into a supported property type.

Adding properties is done quite simply using the addProperty() method. The syntax for the method is:

addProperty(type, name, section, description)

Let's try adding a property to our box class. Switch to your code editor and move to the __init__() method. Then, at the end of the method, add:

obj.addProperty('App::PropertyString', 'Description', 'Base', 'Box description').Description = ""

Note how we're using the reference to the (serializable) FeaturePython object, obj, and not the (non-serializable) Python class instance, self. Anyway, once you're done, save the changes and switch back to FreeCAD.

Before we can observe the changes we made to our code, we need to reload the module. This can be accomplished by restarting FreeCAD, but restarting FreeCAD every time we make a change to the Python class code can get a bit inconvenient. To make it easier, try the following in the Python console:

>>> from importlib import reload
>>> reload(box)

This will reload the box module, incorporating the changes you made to the box.py file, just as if you'd restarted FreeCAD.

With the module reloaded, now let's see what we get when we create an object:

>>> box.create('box_property_test')

- Select it and look at the Property Panel. There, you should see the 'Description' property.
- Hover over the property name at left and see the tooltip appear with the description text you provided.
- Select the field and type whatever you like. You'll notice that Python update commands are executed and displayed in the console as you type letters and the property changes.

But before we leave the topic of properties for the moment, let's go back and add some properties that would make a custom box object *really* useful: namely, length, width, and height. Return to your source code and add the following properties to __init__():

obj.addProperty('App::PropertyLength', 'Length', 'Dimensions', 'Box length').Length = 10.0
obj.addProperty('App::PropertyLength', 'Width', 'Dimensions', 'Box width').Width = '10 mm'
obj.addProperty('App::PropertyLength', 'Height', 'Dimensions', 'Box height').Height = '1 cm'

Note that when an object is created or changed, it's "touched" and needs to be recomputed. Clicking the "recycle" arrows (the two arrows forming a circle) will accomplish this. But we can accomplish that automatically by adding the following line to the end of the create() method:

App.ActiveDocument.recompute()

Now, test your changes as follows:

- Save your changes and return to FreeCAD.
- Delete any existing objects and reload your module.
- Finally, create another box object from the command line by calling >>> box.create('myBox').

Once the box is created (and you've checked to make sure it's been recomputed!), select the object and look at your properties. You should note two things:

- Three new properties (length, width, and height)
- A new property group, Dimensions.
Note also how the properties have dimensions. Specifically, they take on the linear dimension of the units set in the user preferences (see Edit -> Preferences... -> Units tab). In fact, if you were paying attention when you were entering the code, you will have noticed that the three dimensions were entered in three different ways. The length was a floating-point value (10.0), the width was a string specifying millimeters ('10 mm'), and the height was a string specifying centimeters ('1 cm'). Yet, the property rendered all three values the same way: 10 mm. Specifically, a floating-point value is assumed to be in the current document units, and the string values are parsed according to the units specified, then converted to document units.

The nice thing about the App::PropertyLength type is that it's a 'unit' type - values are understood as having specific units. Therefore, whenever you create a property that uses linear dimensions, use App::PropertyLength as the property type.

Event Trapping

The last element required for a basic FeaturePython object is event trapping. Specifically, we need to trap the execute() event, which is called when the object is recomputed. There are several other document-level events that can be trapped in our object as well, both in the FeaturePython object itself and in the ViewProvider, which we'll cover in another section.

Add the following after the __init__() function:

def execute(self, obj):
    """
    Called on document recompute
    """

    print('Recomputing {0:s} ({1:s})'.format(obj.Name, self.Type))

Test the code as follows:

- Save changes and reload the box module in the FreeCAD Python console.
- Delete any objects in the Treeview.
- Re-create the box object.

You should see the printed output in the Python Console, thanks to the recompute() call we added to the create() method. Of course, the execute() method doesn't do anything here (except tell us that it was called), but it is the key to the magic of FeaturePython objects.

So that's it! You now know how to build a basic, functional FeaturePython object!

The Completed Code

import FreeCAD as App

def create(obj_name):
    """
    Object creation method
    """

    obj = App.ActiveDocument.addObject('App::FeaturePython', obj_name)
    fpo = box(obj)

    App.ActiveDocument.recompute()

    return obj

class box():

    def __init__(self, obj):
        """
        Default Constructor
        """

        self.Type = 'box'

        obj.addProperty('App::PropertyString', 'Description', 'Base', 'Box description').Description = ""
        obj.addProperty('App::PropertyLength', 'Length', 'Dimensions', 'Box length').Length = 10.0
        obj.addProperty('App::PropertyLength', 'Width', 'Dimensions', 'Box width').Width = '10 mm'
        obj.addProperty('App::PropertyLength', 'Height', 'Dimensions', 'Box height').Height = '1 cm'

        obj.Proxy = self

    def execute(self, obj):
        """
        Called on document recompute
        """

        print('Recomputing {0:s} {1:s}'.format(obj.Name, self.Type))
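As a closing sketch (not part of the original tutorial), the other event trapping mentioned above works the same way. FreeCAD calls an onChanged(self, obj, prop) method on the proxy whenever a property value changes; a minimal handler for the properties defined in this class might look like this:

def onChanged(self, obj, prop):
    """
    Called when a property value changes; 'prop' is the property name
    """

    # React only to the dimension properties defined in __init__()
    if prop in ('Length', 'Width', 'Height'):
        print('{0:s} changed on {1:s}'.format(prop, obj.Name))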
This page shows how to configure the DNS settings for a domain using Cloud DNS and Cloud Tools for PowerShell. It walks through a simple example of creating a managed zone to govern a domain and its subdomains, and then adding resource records to the zone to provide information that determines the behavior of the DNS server when handling requests to the zone’s domains. This document assumes you have a domain name and an IP address that you can point the domain name to. If you do not, you can register a domain name through Google Domains or another domain registrar of your choice.

Read the Cloud Tools for PowerShell cmdlet reference to learn more about Cloud DNS cmdlets. To learn more about Cloud DNS in general, read the Overview of Cloud DNS.

Creating a managed zone for your domain

The entirety of the DNS namespace is composed of many domains, which will soon include your own domain name. Managed zones in Cloud DNS model DNS zones and serve as containers to organize DNS records (such as A, CNAME, or TXT entries) for the same DNS name suffix. For example, the records for "example.com." and subdomains such as "first.example.com." could be in the same zone since they share the "example.com." suffix. Note the trailing dot, which is necessary and signifies an absolute DNS name.

To get started, set up a managed zone to organize the DNS records that you will create for your domain name. You can create a new managed zone and add it to your GCP Console project by using the Add-GcdManagedZone cmdlet:

Add-GcdManagedZone `
    -Name "my-new-zone" `
    -DnsName "example.com." `
    -Description "This is my first zone."

This creates a new zone with the specified details and adds it to the current project of the active Cloud SDK configuration, though you also have the option to specify a different project ID if desired. This additionally creates default NS and SOA records in the zone for you. However, to publish your new records in the zone to the internet, you also need to update your domain’s name servers to use Cloud DNS. Even if your domain is registered with Google Domains, you still need to update its name servers.

You can find the Cloud DNS name servers assigned to your domain by using the Get-GcdManagedZone cmdlet on the managed zone governing the domain to return information about the zone:

Get-GcdManagedZone -Zone "my-new-zone"

Adding and removing resource record sets

DNS resource records provide information that dictates the behavior of the DNS server when it is handling requests sent to a domain. For example, DNS records can be used to tell the server which IP address a domain resolves to, indicate the usable mail exchange servers for a domain, and much more.

Resource record sets are immutable, so to add or remove them you do not operate on the records in a zone directly. Rather, you create independent resource records or retrieve existing ones using the New-GcdResourceRecordSet and Get-GcdResourceRecordSet cmdlets respectively, and then send change requests with these records to a specific zone by using the Add-GcdChange cmdlet.

Creating resource record sets

You can use the helper cmdlet New-GcdResourceRecordSet to create a resource record set that you can put within a change and then add to or remove from a managed zone.
For example, if you wanted to create an A record to point your domain to an external IPv4 address in the format #.#.#.# (if you have an IPv6 address, use an AAAA record), you could use the following command:

$ARecord = New-GcdResourceRecordSet `
    -Name "example.com." -Rrdata "107.1.23.134" -Type "A"

Similarly, to create a CNAME record for the www subdomain such that "www.example.com." resolves to the same IP and behaves the same as "example.com.", you could use the following command:

$CNAMERecord = New-GcdResourceRecordSet `
    -Name "www.example.com." -Rrdata "example.com." -Type "CNAME"

The supported resource record types that you can create and include in changes to a zone are A, AAAA, CNAME, MX, NAPTR, NS, PTR, SOA, SPF, SRV, and TXT. For help deciding which records you need and how to create them, see supported resource record formats for Cloud DNS.

Retrieving resource record sets

If you want to retrieve an existing resource record set in a zone, you can use the Get-GcdResourceRecordSet cmdlet to return all records in a zone and then index the results:

$allRecords = Get-GcdResourceRecordSet -Zone "my-new-zone"
$record0 = $allRecords[0]

If you only want records of a specific type, you can filter the results accordingly:

$ARecord = Get-GcdResourceRecordSet -Zone "my-new-zone" -Filter "A"

Applying changes to a managed zone

The cmdlets New-GcdResourceRecordSet and Get-GcdResourceRecordSet both return resource record sets, but they do not add records to or remove them from anything. For that, use the Add-GcdChange cmdlet:

Add-GcdChange `
    -Zone "my-new-zone" -Add $record1,$record2 -Remove $record0

For example, to add the A and CNAME records that you created above to your zone, do the following:

Add-GcdChange -Zone "my-new-zone" -Add $ARecord,$CNAMERecord

If you later wanted to change the IPv4 address that your domain resolves to, you could create a new A-type record and then add it to the managed zone while deleting the old A record:

$oldARecord = Get-GcdResourceRecordSet -Zone "my-new-zone" -Filter "A"
$newARecord = New-GcdResourceRecordSet `
    -Name "example.com." -Rrdata "104.1.34.167" -Type "A"

Add-GcdChange -Zone "my-new-zone" -Remove $oldARecord -Add $newARecord

Each call to the Add-GcdChange cmdlet returns the newly executed change request object. You can also pass the Add-GcdChange cmdlet a change request object to execute directly instead of lists of resource record sets:

Add-GcdChange -Zone "my-new-zone" -ChangeRequest $change0

Creating change request objects manually is not recommended, but they are useful if you want to re-apply a change made earlier or one made in a different zone. You can retrieve all past change request objects that were applied to a zone by using the Get-GcdChange cmdlet:

$allChanges = Get-GcdChange -Zone "my-new-zone"

You can choose a specific change by indexing the previous result, or specify which change to retrieve with a ChangeId. Change requests are usually numbered from 0 onwards based on the order they were sent to the zone:

$firstChange = Get-GcdChange -Zone "my-new-zone" -ChangeId 0
Add-GcdChange -Zone "my-new-zone" -ChangeRequest $firstChange

Deleting managed zones

Sometimes, you may want to remove a managed zone altogether. This could be for a variety of reasons. Perhaps you have managed zone "user1-zone" for subdomain "user1.example.com." but user1 deletes their account, in which case you want to remove this subdomain and all associated DNS records.
To remove a managed zone from your project, use the Remove-GcdManagedZone cmdlet:

Remove-GcdManagedZone -Zone "user1-zone"

If successful, the command returns nothing. However, this cmdlet does not immediately work on what Cloud DNS considers "non-empty" managed zones, or zones that contain non-NS or non-SOA type records (non-default records). For example, you would not be able to delete "my-new-zone," to which you added A and CNAME type records, without either granting permission to the cmdlet during processing or using the -Force switch:

Remove-GcdManagedZone -Zone "my-new-zone" -Force

The Remove-GcdManagedZone cmdlet also accepts pipeline input for the zone(s) to delete. For example, the following command forcibly deletes all the managed zones in the current project. This might be useful if you are no longer maintaining any of the domains set up in a project:

Get-GcdManagedZone | Remove-GcdManagedZone -Force
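To tie the pieces together, here is a compact recap sketch (not part of the original page) that runs the full flow with the cmdlets introduced above; the zone name, domain, and IP address are placeholders:

# Hypothetical end-to-end flow: create a zone, point the domain at an IP,
# confirm the records, and clean up. All names and addresses are placeholders.
Add-GcdManagedZone `
    -Name "demo-zone" `
    -DnsName "example.org." `
    -Description "Demo zone for the full flow."

$ARecord = New-GcdResourceRecordSet `
    -Name "example.org." -Rrdata "203.0.113.10" -Type "A"

Add-GcdChange -Zone "demo-zone" -Add $ARecord

# Verify the zone's records, then remove the zone.
Get-GcdResourceRecordSet -Zone "demo-zone"
Remove-GcdManagedZone -Zone "demo-zone" -Force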
Welcome back to the final part of our Vue with GraphQL series. Continuing from the GraphQL server we've built in Part 2 of this series, we are going to focus on the client-side of the equation. In particular, we'll create a Vue app as a separate project, and use Apollo Client to connect it to our API server for data.

Preparing the Vue app

First, we'll create a new Vue app:

npm init vue-app my-app

Then go to the App.vue file, and replace the template with the following code:

📃src/components/App.vue

<template>
  <div id="app">
    <p class="username">{{ currentUser.username }}'s posts:</p>
    <ul>
      <li v-for="post in posts">{{ post.content }}</li>
    </ul>
    <div>
      <input v-model="newPostContent">
      <button @click="addPost">Add Post</button>
    </div>
  </div>
</template>

It's just a basic UI with a p element, a ul element, a textbox input, and a button. The list will display all the posts that belong to the user, while the textbox and the button act as a form for adding new posts.

Next, we'll add data and the addPost event method to the component options:

📃src/components/App.vue

export default {
  name: 'app',
  data: function(){
    return {
      currentUser: { username: 'user' },
      posts: [],
      newPostContent: ''
    }
  },
  methods: {
    addPost() {
      this.posts.push({ content: this.newPostContent })
      this.newPostContent = '';
    }
  },
}

Now, you can run the app:

cd my-app
npm run dev

It should look like a typical offline todo app. It works, but we want to sync the data with our GraphQL server. So, we're not done yet.

Setting up Apollo Client

To tie the Apollo server and the Vue app together, we have to set up Apollo Client in our Vue project. First, install the required packages:

npm install -s graphql vue-apollo apollo-boost

Apollo itself isn't specific to Vue, but the vue-apollo package is the Vue-specific version of Apollo Client. apollo-boost is a package that makes it easier to set up a GraphQL client without getting into too much configuration.

Inside index.js, import the Apollo-related utilities and create the apolloProvider:

📃src/index.js

import ApolloClient from 'apollo-boost'
import VueApollo from "vue-apollo";

const apolloProvider = new VueApollo({
  defaultClient: new ApolloClient({
    uri: 'http://localhost:4001'
  })
});

Here, we're configuring it to talk to our API server, which is listening at port 4001.

To finish the setup, we'll use VueApollo as a middleware, and add apolloProvider to the Vue options:

📃src/index.js

Vue.use(VueApollo); // use middleware

new Vue({
  el: '#app',
  apolloProvider, // add option
  render: h => h(App)
})

Now our Vue app is bootstrapped as a GraphQL client. All components in the app will be able to send GraphQL-style queries to our API server.

Sending queries

Back in App.vue, let's import gql and create our first query:

📃src/components/App.vue

import gql from 'graphql-tag'

const CURRENT_USER = gql`query {
  currentUser {
    id
    username
  }
}`;

export default {
...

This query will get us the id and username of the current user. (The current user is the one with the id abc-1, which we hard-coded in the server code in the previous article.)

To bind the query to our component, we have to use the apollo option:

export default {
  name: 'app',
  data: function(){
    return {
      currentUser: { username: 'user' },
      posts: [],
      newPostContent: ''
    }
  },
  methods: {
    addPost() {
      this.posts.push({ content: this.newPostContent })
      this.newPostContent = ''
    }
  },
  // NEW
  apollo: {
    currentUser: CURRENT_USER,
  }
}

Notice that we're using the same name currentUser in both data and apollo. Having the same name is how the currentUser state can be synced to the currentUser query's result.
When you refresh the Vue app, you should see the actual username from our GraphQL server. (Make sure your API server is still running at port 4001.)

We'll repeat the same process for posts data. Create another query:

📃src/components/App.vue

const POSTS_BY_USER = gql`query ($userId: String!) {
  postsByUser(userId: $userId) {
    id
    content
  }
}`;

Since posts is also a field in the User type, we can actually query the posts data through the currentUser query. But using the postsByUser query, we can demonstrate how to send arguments (also called variables) to the server. We're sending the userId variable with the postsByUser query. Our server code will be able to extract the userId from the args parameter, and use that to gather the posts data.

Since the POSTS_BY_USER query requires a variable (argument), binding it to the component will be a little more complicated. First, add a new query to the apollo option as an object:

📃src/components/App.vue

apollo: {
  currentUser: CURRENT_USER,
  posts: {
    query: POSTS_BY_USER
  }
}

With this object syntax, we can specify the variables we want to send along with this query:

📃src/components/App.vue

apollo: {
  currentUser: CURRENT_USER,
  posts: {
    query: POSTS_BY_USER,
    variables() {
      return {
        userId: this.currentUser.id
      }
    },
  }
}

Notice that variables is a function that returns the variables as an object. The function syntax allows us to refer to the component instance with this. We're getting the user id from this.currentUser and setting the userId variable with it.

Apollo Client will automatically match the name of the returned data with the name of the state in the component. In this case, the returned data will have a field called postsByUser. But since our state is called posts, they won't be matched automatically. One workaround is to use the update method to map to the postsByUser field in the returned data:

📃src/components/App.vue

apollo: {
  currentUser: CURRENT_USER,
  posts: {
    query: POSTS_BY_USER,
    variables() {
      return {
        userId: this.currentUser.id
      }
    },
    update(data) {
      return data.postsByUser
    }
  }
}

Now refresh the Vue app. You should see the posts that we defined on the server-side.
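One detail this section glosses over: while the queries are still in flight, the bound state is empty. As a hedged sketch (assuming vue-apollo's standard $apollo.loading flag; this block is not part of the original tutorial), the template could show a simple loading indicator:

<!-- Hypothetical addition: $apollo.loading is true while any Apollo
     query on this component is still in flight. -->
<p v-if="$apollo.loading">Loading posts...</p>
<ul v-else>
  <li v-for="post in posts">{{ post.content }}</li>
</ul>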
We do that inside the addPost event handler: 📃src/components/App.vue methods: { addPost() { // this.posts.push({ content: this.newPostContent }) this.$apollo.mutate({ mutation: ADD_POST, variables: { content: this.newPostContent }, }) this.newPostContent = '' } }, Every time we send a mutation to add something on the server, we have to also update the locally cached copy of the data. Otherwise, the frontend will not be updated even when the backend data is changed. We can update the cache using the update option: 📃src/components/App.vue methods: { addPost() { this.$apollo.mutate({ mutation: ADD_POST, variables: { content: this.newPostContent }, // NEW update: (cache, result) => { // the new post returned from the server let newPost = result.data.addPost // an "identification" needed to locate the right data in the cache let cacheId = { query: POSTS_BY_USER, variables: { userId: this.currentUser.id }, } // get the cached data const data = cache.readQuery(cacheId) const newData = [ ...data.postsByUser, newPost ] // update the cache with the new data cache.writeQuery({ ...cacheId, data: { postsByUser: newData } }) } } this.newPostContent = ''; } }, To make the code cleaner, we can extract the function to somewhere else: 📃src/components/App.vue function updateAddPost(cache, result) { let newPost = result.data.addPost let cacheId = { query: POSTS_BY_USER, variables: { userId: this.currentUser.id }, } const data = cache.readQuery(cacheId) const newData = [ ...data.postsByUser, newPost ] cache.writeQuery({ ...cacheId, data: { postsByUser: newData } }) } And then bind the function with this (since we’re using this inside the function): 📃src/components/App.vue methods: { addPost() { this.$apollo.mutate({ mutation: ADD_POST, variables: { content: this.newPostContent }, // NEW update: updateAddPost.bind(this) } this.newPostContent = ''; } }, Now using the app, you should be able to add new posts and see the post list updated immediately. And because the data are stored on the server, you can refresh the app and the old data will still be there. (Since we’re only storing the data in an in-memory object, the data will get reset once the GraphQL server is restarted.) Optimistic Update Although seemingly the new post gets added to the DOM immediately, things are not always this smooth. For example, if the data requires more time-consuming processing on the server, our Vue app will have to wait for that whole time before the DOM can be updated. The app is basically out of sync during this waiting period. We didn’t see this problem in our current app only because that waiting period is very, very short. To make it future-proof, we’ll use a technique called the optimistic UI update. We would just optimistically assume the data gets updated on the server without incident, so we would update the UI immediately with the available data at hand. This will eliminate the need to wait for a server response on the success/failure of the mutation. Optimistic UI update is a general programming concept, so it isn’t exclusive to GraphQL or Vue.js. But, Apollo Client has a built-in support for this. 
All we have to do is to supply an object through the optimisticResponse option, which will pretend to be the actual server response: 📃src/components/App.vue methods: { addPost() { this.$apollo.mutate({ mutation: ADD_POST, variables: { content: this.newPostContent }, update: updateAddPost.bind(this), // NEW optimisticResponse: { __typename: 'Mutation', addPost: { __typename: 'Post', id: 'xyz-?', content: this.newPostContent, userId: this.currentUser.id }, } }) this.newPostContent = '' } }, This object that we set with optimisticResponse will be sent to our updateAddPost function. This will be the result.data in that function. Only after the server responded that we get to swap out this object with the actual server data. Basically, this is a placeholder. The optimisticResponse object is supposed to be a response of a mutation request, that’s why it’s typed Mutation. Aside from the __typename property, it has an addPost property, which is named after the mutation request that we want to map to. The addPost property is used to set the new Post data’s placeholder. Notice that this Post object has xyz-? as its id. Since the actual id of a new post will be decided on the server-side, we don’t have this information before the actual mutation response, so we’re just using xyz-? here as a placeholder. So, our updateAddPost will get called twice, first with the optimistic response, then with the actual server response. We can test it by printing a log message inside the updateAddPost function: 📃src/components/App.vue function updateAddPost(cache, result) { let newPost = result.data.addPost // ADD THIS console.log(newPost.id) let cacheId = { query: POSTS_BY_USER, variables: { userId: this.currentUser.id }, } const data = cache.readQuery(cacheId) const newData = [ ...data.postsByUser, newPost ] cache.writeQuery({ ...cacheId, data: { postsByUser: newData } }) } Now refresh the app in the browser, try to add a new post. In the browser’s console, you should see the xyz-? post id, and right after that you see the actual post id from the server. This confirms that the updateAddPost function gets hit twice with different data at different times. Our GraphQL-powered app is now completed and optimized. But, you can easily extend the app by adding more schema types, resolvers, queries, or data sources. The journey ahead GraphQL is a huge step forward for frontend development. It’s a technology that comes with its own ecosystem of tools. After going through this three-part GraphQL introduction, you should now have a solid foundation to explore more advanced techniques and tools that the GraphQL community has to offer.
Would it be possible for someone to evaluate an assignment that I completed? I had to write two classes and I want to see what I need to work on and stuff like that. If so, I will post the code after someone says they will.

Post what you have and we'll take a look at it :)

Ok, so the specs are:

The challenge is to make a class named car that has 3 fields: yearModel as an int, make as a String, and speed as an int. Then it says to make a constructor that accepts the car's year model and make as arguments. It says these values should be assigned to the object's year model and make fields, and to assign speed to 0. It says to make appropriate accessors, a method called accelerate that adds 5 to the speed field every time it's called, and a method called brake that subtracts 5 every time it's called. It says I need to demonstrate a class that creates a car object and calls the accelerate method five times, after each call display the speed of it, then call brake 5 times and display the speed after each call.

My class is:

Code :
public class Car {
    private int yearModel;
    private String make;
    private int speed;

    public Car(int model, String m) {
        yearModel = model;
        make = m;
        speed = 0;
    }

    public int getYearModel() {
        return yearModel;
    }

    public String getMake() {
        return make;
    }

    public int getSpeed() {
        return speed;
    }

    public void accelerate() {
        speed = speed + 5;
    }

    public void brake() {
        speed -= 5;
    }
}

and the driver is:

Code :
import javax.swing.JOptionPane;

public class CarDemo {
    public static void main(String[] args) {
        Car c = new Car(1992, "Mustang");
        int s = 0;

        for(int i = 0; i < 5; i++) {
            c.accelerate();
            s = c.getSpeed();
            System.out.println("The " + c.getYearModel() + " " + c.getMake() + "\n is going " + s);
        }

        for(int i = 0; i < 5; i++) {
            s = c.getSpeed();
            System.out.println("The speed is now " + s);
            c.brake();
        }
    }
}

1. Comment, comment, comment. While comments don't add any functionality to your program, commenting allows others (and you) to quickly and easily understand what the code should do. There are three types of comments in Java: block, line, and Javadoc.

Code :
/*
 * This is a block comment. It can span several lines
 *
 */

// This is a line comment. It's only this line

/**
 * This is a Javadoc comment. It's similar to a block comment, but it's special in that it describes what methods/fields are for
 */

2. Code looks pretty well formatted; there is one line out of place (I'm assuming it's a copy/paste problem). If you have an IDE like Netbeans/Eclipse (sure there are others with this feature, too), you can quickly auto-format your code (on Eclipse the hotkey is CTRL+SHIFT+F, or you can set it to auto-format every time you save the file).

3. Choose better names for your variables. Variable names c and s aren't very descriptive. Something like myCar and speed could help to describe the function of that variable (see the Car class for some good variable naming). There are some exceptions to this rule (ex. i and j as loop indices are fairly common; x, y, z for 2d or 3d math).

4. Your CarDemo class has an extra import (javax.swing.JOptionPane). Generally it's a good idea to not import anything you're not using (of course, "blanket imports" are an exception), but this is a minor issue.
Again, some IDEs can automatically manage which packages/classes to import, and even remove unnecessary imports.

5. You may want to split up the first printing command unless you do want the car make and year displayed 5 times. Not really necessary, but it would match the output of how you have the deceleration portion.

6. In your second for-loop where the car is decelerating, you should probably be calling brake() before querying the speed. This way you don't get this kind of output:

(accelerating)
speed is now 5
speed is now 10
speed is now 15
speed is now 20
speed is now 25
(decelerating) (off by 5)
speed is now 25
speed is now 20
speed is now 15
speed is now 10
speed is now 5

something like this:

Code :
for (int i = 0; i < 5; i++) {
    c.brake();
    s = c.getSpeed();
    System.out.println("The speed is now " + s);
}

All in all, nice work :) Just remember to comment. Also, as a last note, more commenting does not always mean better commenting. ex.:

bad
Code :
// times vector.x by 5
vector.x = vector.x * 5;
// times vector.y by 5
vector.y = vector.y * 5;

ex.: better
Code :
// scale vector by 5
vector.x = vector.x * 5;
vector.y = vector.y * 5;

This is especially apparent when you have loops or a recursive algorithm.

alright thanks, I use JGrasp, where there is no help but the errors the compiler gives, haha.

jGRASP does come with plugins for Checkstyle and FindBugs, which may help you automatically find style, formatting, and some other bugs. Just install Checkstyle and/or FindBugs and jGRASP will find them the next time you start it.
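As a hedged illustration of points 1 and 3 above (this block is not from the thread itself), the start of the Car class with Javadoc and descriptive comments applied could look like:

Code :
/**
 * Models a car with a year, make, and current speed.
 */
public class Car {
    private int yearModel;  // the car's model year
    private String make;    // the manufacturer/model name, e.g. "Mustang"
    private int speed;      // current speed, changed by accelerate()/brake()

    /**
     * Creates a car with the given model year and make, at speed 0.
     */
    public Car(int model, String m) {
        yearModel = model;
        make = m;
        speed = 0;
    }
}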
What Is XQuery
October 16, 2002

An Expression Language

Here is a conditional expression that evaluates to a string:

if (3 < 4) then "yes!" else "no!"

You can define local variables using a let expression:

let $x := 5
let $y := 6
return 10*$x+$y

Primitive Data Types

The primitive data types in XQuery are the same as for XML Schema.

- Numbers, including integers and floating-point numbers.
- The boolean values true and false.
- Strings of characters, for example: "Hello world!". These are immutable - i.e. you cannot modify a character in a string.
- Various types to represent dates, times, and durations.
- A few XML-related types. For example a QName is a pair of a local name (like template) and a URL, which is used to represent a tag name like xsl:template after it has been namespace-resolved.

Derived types are variations or restrictions of other types. Primitive types and the types derived from them are known as atomic types, because an atomic value does not contain other values. Thus a string is considered atomic because XQuery does not have character values.

Node Values and Expressions

XQuery also has the data types needed to represent XML values. It does this using node values, of which there are 7 kinds: document, element, attribute, text, namespace, processing instruction, and comment nodes. You can embed XQuery expressions inside element constructors. Thus,

let $i := 2 return
let $r := <em>Value </em> return
<p>{$r} of 10*{$i} is {10*$i}.</p>

creates

<p><em>Value </em> of 10*2 is 20.</p>

Popular template processors cannot nest templates like this.

Sequences

A sequence is an ordered list of values; for example, (1, 2, 3) is a sequence of 3 integers. Note that a sequence containing just a single value is the same as that value by itself. You cannot nest sequences. To illustrate this, we'll use the count function, which takes one argument and returns the number of values in that sequence. So count(((1,2),3)) returns 3, because the inner sequence is flattened.

The children function returns a sequence of the child nodes of the argument. Thus,

children(<p>This is <em>very</em> cool.</p>)

returns this sequence of 3 values: "This is ", <em>very</em>, " cool."

Path Expressions and Relationship to XPath

The following simple example assumes an XML file "mybook.xml" whose root element is a <book>, containing <para> elements among its descendants. The example includes a predicate:

$book//para[@class="warning"]

The double slash is a convenience syntax to select all descendants (rather than just children) of $book, selecting only <para> element nodes that have an attribute node named class whose value is "warning".

Iterating Over Sequences

A for expression lets you "loop" over the elements of a sequence:

for $x in (1 to 3) return ($x,10+$x)

The for expression first evaluates the expression following the in. Then for each value of the resulting sequence, the variable (in this case $x) is bound to the value, and the return expression evaluated using that variable binding. The value of the entire for expression is the concatenation of all values of the return expression, in order. So the example evaluates to this 6-element sequence: 1,11,2,12,3,13.

Here is a more useful example. Assume again that mybook.xml is a <book> that contains some <chapter> elements. Each <chapter> has a <title>. The following will create a simple page that lists the chapter titles. A for expression may also include a where clause, so that only the items for which the where expression is true are processed. The next example has a nested loop, allowing us to combine two sequences: one of customer elements and one of order elements.

The following is an example of a constructed sequence that mixes nodes and strings, and which evaluates to this sequence of length 4: <a>X<b>Y</b></a>; "X"; <b>Y</b>; "Y"

Sorting and Context

If you want to sort a sequence you can use a sortby expression; the sort criteria are evaluated with each item of the sequence as the context item. In author/name, for example, the name children that are returned are those of the context item, which is an author item.
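The sortby example itself did not survive extraction; as a hedged sketch using the October 2002 draft syntax described here (later drafts replaced sortby with an order by clause in FLWOR expressions), sorting authors by their name might look like this:

(: minimal sketch, assuming the 2002 draft sortby syntax :)
$book//author sortby (name)

Here name is evaluated once per author, with that author as the context item, which is exactly the context behavior described above.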
Type Specification. XQuery is a strongly typed programming language. Like Java and C#, for example, it's a mix of static typing (type consistency checked at compile time) and dynamic typing (run-time type tests). However, the types in XQuery are different from the classes familiar from object-oriented programming. Instead, it has types to match XQuery's data model, and it allows you to import types from XML Schemas. The following converts a set of tag names to a different set:

define function convert($x) {
  typeswitch ($x)
    case element para return <p>{process-children($x)}</p>
    case element emph return <em>{process-children($x)}</em>
    default return process-children($x)
}

define function process-children($x) {
  for $ch in children($x)
  return convert($ch)
}

There's only one XQuery book so far, mainly because there are significant loose ends in the specification: Early Adopter XQuery from Wrox. I am co-authoring (with James McGovern, Kurt Cagle, James Linn and Vaidyanathan Nagarjan) XQuery Kick Start for Sams Publishing, due to be released in 2003. There are no complete standards-conforming implementations either, but the XQuery site lists known implementations, some of which have executable demos. The only open-source implementation currently available seems to be my Qexo. (The Qexo implementation is interesting in that it compiles XQuery programs on-the-fly directly to Java bytecodes.) I recommend considering XQuery when you need a powerful and convenient tool for analyzing or generating XML.
https://www.xml.com/pub/a/2002/10/16/xquery.html
Gazebo spawn_model hangs (wrong ros namespace) [closed]

Hello, I have a problem with Gazebo. When I try to launch the model of a robot, I get the following message: "loading xml from ros parameter model", but I never get to see the robot in the simulator. The funny thing is that yesterday it loaded the model correctly. Also, when I try to launch one of the models that ships with Gazebo (a table or something simple) I get the same message and I still see nothing. So I think it's not a problem with my model. Where is the problem? Thanks in advance! I'm working on Ubuntu 12.04 and Fuerte. Regards.

With roswtf I get the following error:

ERROR Communication with [/empty_world_server] raised an error:
ERROR The following nodes should be connected but aren't:
* /empty_world_server->/empty_world_server (/clock)

With roslaunch gazebo_worlds empty_world_paused.launch I don't have this problem... Any clue?

I have the same problem on a machine with Ubuntu Lucid and the latest Fuerte from debs.

Hi g.sterido, can you update the title of this question to something more specific, e.g. "Gazebo spawn_model hangs (wrong ros namespace)" or similar? Thanks.

Title updated! Thanks for the advice ;)
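(A hypothetical diagnostic sketch, not from the thread: spawn_model reads the robot XML from the parameter server, so a quick rospy check can confirm whether the model parameter actually resolved in the namespace the node is looking in. The node and parameter names here are assumptions based on the question, and the script targets the Python 2 rospy of the Fuerte era.)

import rospy

rospy.init_node('spawn_debug')
# Which namespace is this node running in?
print rospy.get_namespace()
# Did the URDF/model XML actually reach the parameter server?
print rospy.has_param('model')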
https://answers.ros.org/question/40587/gazebo-spawn_model-hangs-wrong-ros-namespace/
Recent Notes. Displaying keyword search results 101 - 110

By default, heights for jQuery UI tab panels expand or contract depending on the height of each tab. The code snippet here sets the height of all tabs to be equal to that of the container. <!DOCTYPE html> <html> <head> <title>jQu...

This is a bare bones, no frills, just the facts tutorial on JSTL. I will not bother you with theories, principles, best practices, anecdotes, or any other junk, because JSTL is shallow and simple, and I don't want to make it sound deep or complex. Getting ready: JSTL/JSP Expression Language, JSTL implicit variables, a simple test application for JSTL, expanding the simple JSTL test application. Core tags; basic tags: <c:out>, <c:set>, <c:remove>, <c:catch>. Flow control tags: <c:if>, <c:choose>, <c:when>, <c:otherwise>, <c:forEach>, <c:forTokens>. URL tags: <c:import>, <c:url>, <c:redirect>, <c:param>. Internationalization (i18n) and formatting tags: I18N overview; set locale: <fmt:setLocale>, <fmt:requestEncoding>; format messages: <fmt:message>, <fmt:bundle>, <fmt:setBundle>, <fmt:param>...

Read from stdin and print out rot13: import java.io.BufferedReader; import java.io.I... The ROT13-translated Fourth Amendment reads: Gur evtug bs gur crbcyr gb or frpher va gurve crefbaf, ubhfrf, cncref, naq rssrpgf, ntnvafg haernfbanoyr frnepurf naq frvmherf, funyy abg or ivbyngrq, naq ab Jneenagf funyy vffhr, ohg hcba cebonoyr pnhfr, fhccbegrq ol Bngu be nssvezngvba, naq cnegvphyneyl qrfpevovat gur cynpr gb or frnepurq, naq gur crefbaf be guvatf gb or frvmrq.
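(For comparison with the truncated Java note above; this is an addition, not part of the original notes. Python ships a built-in 'rot_13' codec that performs the same stdin-to-stdout translation.)

import codecs
import sys

# ROT13 each line of standard input and echo it to standard output.
for line in sys.stdin:
    sys.stdout.write(codecs.encode(line, 'rot_13'))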
http://www.xinotes.net/notes/keywords/length/p,11/
Opened 5 years ago. Closed 5 years ago.

#17728 closed Bug (fixed): Filtering of annotated querysets broken with timezone-aware datetimes

Description

There appears to be a regression with timezone-aware datetimes that causes them to not match "exact" queries properly when added as an annotation. This is easiest explained with an example. Say we have a series of "events" which are grouped into "sessions". We wish to find the start time of the session (i.e. the time of the session's earliest event).

models.py:

from django.db import models
from django.contrib.auth.models import User

class Event(models.Model):
    time = models.DateTimeField()
    session = models.ForeignKey('Session', related_name='events', null=True)

class Session(models.Model):
    user = models.ForeignKey(User, related_name='km_sessions', null=True)

With USE_TZ = False (or with Django 1.3) it works as expected:

>>> from datetime import datetime
>>> from django.db import models
>>> from django.conf import settings
>>> print settings.USE_TZ
False
>>> from aggregate_fields.models import Session, Event
>>> now = datetime.now()
>>> s = Session.objects.create()
>>> e = Event.objects.create(time=now, session=s)
>>> now_tz = Event.objects.get(pk=e.pk).time  # ensure we're querying with the as-saved datetime
>>> now == now_tz  # time zones are disabled, so these should be equivalent
True
>>> Session.objects.annotate(start=models.Min('events__time')).filter(start=now_tz)
[<Session: Session object>]
>>> now_tz in Session.objects.annotate(start=models.Min('events__time')).values_list('start', flat=True)
True
>>> Session.objects.annotate(start=models.Min('events__time')).filter(start=now_tz).count()
1
>>> Session.objects.annotate(start=models.Min('events__time')).count()
1
>>> Session.objects.annotate(start=models.Min('events__time')).filter(start__lt=now_tz).count()
0
>>> Event.objects.filter(time=now_tz).count()
1
>>> Event.objects.get(time=now_tz)
<Event: Event object>

But with USE_TZ = True the results are inconsistent:

>>> from datetime import datetime
>>> from django.db import models
>>> from django.conf import settings
>>> print settings.USE_TZ
True
>>> from aggregate_fields.models import Session, Event
>>> now = datetime.now()
>>> s = Session.objects.create()
>>> e = Event.objects.create(time=now, session=s)
RuntimeWarning: DateTimeField received a naive datetime (2012-02-19 15:03:55.892547) while time zone support is active.
  RuntimeWarning)
>>> now_tz = Event.objects.get(pk=e.pk).time  # ensure we're querying with the timezone-aware datetime
>>> now == now_tz  # these shouldn't be comparable
Traceback (most recent call last):
  File "<console>", line 1, in <module>
TypeError: can't compare offset-naive and offset-aware datetimes
>>> Session.objects.annotate(start=models.Min('events__time')).filter(start=now_tz)
[]
>>> now_tz in Session.objects.annotate(start=models.Min('events__time')).values_list('start', flat=True)
True
>>> Session.objects.annotate(start=models.Min('events__time')).filter(start=now_tz).count()
0
>>> Session.objects.annotate(start=models.Min('events__time')).count()
1
>>> Session.objects.annotate(start=models.Min('events__time')).filter(start__lt=now_tz).count()
1
>>> Event.objects.filter(time=now_tz).count()
1
>>> Event.objects.get(time=now_tz)
<Event: Event object>

When the QuerySet is annotated with the start time, it seems to not use the timezone in the subsequent filtering, though it is present when the list of values is reconstituted as Python objects.
These tests were done on SQLite, in case this is a database-specific problem. I don't have another DB easily accessible to me at the moment, but I can set up a test later, if needed.

Attachments (3)

Change History (17)

comment:1 Changed 5 years ago by

comment:2 Changed 5 years ago by

comment:3 Changed 5 years ago by

I can reproduce the problem under SQLite and MySQL. Under PostgreSQL everything works as expected.

comment:4 Changed 5 years ago by

It appears that connection.ops.value_to_db_datetime isn't called when filtering on an annotation. Strictly speaking, it's a bug of annotations rather than a bug of time zones. At this point, I haven't managed to fix it without introducing other regressions. The core of the problem is the fallback to Field.get_db_prep_lookup that happens in two places in WhereNode:

146         params = [connection.ops.value_to_db_datetime(params_or_value)]
147     else:
148:        params = Field().get_db_prep_lookup(lookup_type, params_or_value,
149             connection=connection, prepared=True)
150     if isinstance(lvalue, tuple):
...
339     # (we don't really want to waste time looking up the associated
340     # field object at the calling location).
341:    params = Field().get_db_prep_lookup(lookup_type, value,
342         connection=connection, prepared=True)
343     db_type = None

When the aggregation happens on a DateTimeField, we should call DateTimeField.get_db_prep_lookup instead, so that DateTimeField.get_db_prep_value is invoked, and calls connection.ops.value_to_db_datetime. Overall, ignoring the subclasses of Field entirely is approximative, and in this case, wrong.

comment:5 Changed 5 years ago by

The attached patch solves this issue at least on SQLite, but I must say it is just ugly. It fixes the immediate symptom without doing anything for the real problem.

comment:6 Changed 5 years ago by

I'm attaching a patch that solves the general problem: converting the value used to perform a lookup on the results of an annotation from a Python object to its database representation. With the patch, Django uses the methods provided by the Field that describes the output of the aggregate, as if it were a regular model field. There's a little bit of noise in the patch because I renamed value_annotation consistently (sometimes it was confusingly called annotation or value_annot).

comment:7 Changed 5 years ago by

I looked through the patch and it looks good. It not only solves this problem but also cleans up the is_ordinal aggregate handling. I was wondering whether this should be about the type of the param_or_value, not about the field you are comparing against. After checking the code I think what is done in the patch is good, as DateField does its own transformations for the param_or_value so that it will be a date string in the query. There could be a problem if you compared a datetime field against something which doesn't expect datetime values, but I can't give any sane example. It doesn't make sense to compare datetime values against IntegerFields, for example...

I am not at all sure that what DateField does for datetime values is sane. An example, where datef is a DateField and dtf is a DateTimeField:

def test_date_compare_datetime(self):
    time = datetime.datetime(2012, 02, 22, 01, 00, 00).replace(tzinfo=EAT)
    WithDate.objects.create(datef=time, dtf=time)
    with override_settings(DEBUG=True):
        print len(WithDate.objects
                  .filter(datef__gte=time)
                  .filter(dtf__lte=time))
        print connection.queries

Output:

SELECT ...
FROM "timezones_withdate" WHERE ("timezones_withdate"."datef" >= 2012-02-22 AND "timezones_withdate"."dtf" <= 2012-02-21 22:00:00 ) So, what happens? The timestamp is 2012-02-21 22:00:00 in the query when viewed as a datefield, but it is 2012-02-22 as a date. Note that the date of the datetime value would not be equal to the date value. I think this boils down to in whose timezone the date is. One viewpoint is that it should be in UTC, as in Python and in the database all datetimes should be viewed in UTC, so why not dates taken from datetimes, too? Another viewpoint is to save in the time zone of the active time zone. I think this is what PostgreSQL does: if you save now() to a date column, the date that gets saved is dependent on what time zone is active. However, I don't think just turning the aware datetime into a date in the time zone the datetime happens to be is correct choice from any viewpoint. I think that is what happens now, at least according to my tests using SQLite3. Anyways, this is probably another ticket's problem. comment:8 Changed 5 years ago by comment:9 Changed 5 years ago by Reviewed the patch, and it looks excellent to me. Confirmed that without the patch, the tests fail on MySQL and SQLite and pass on Postgres, and with the patch they pass on all three. Marking RFC; you can do the commit, Aymeric. I do have two questions/comments; neither is a blocker for the patch as they're both pre-existing issues, but I'll note them here as they're related: - I think the docstring for WhereNode.make_atomhas been out of date for some time (the tuple it claims to handle is not the tuple it actually handles); this might be a good time to fix it. - If there are a number of tests (all of them?) that should be shared verbatim between NewDatabaseTests and LegacyDatabaseTests, why not put those test methods on a mixin class that both inherit, to reduce code duplication and ensure that the tests actually do stay in sync? Anssi, I haven't entirely gotten my head around what should happen in the case you're discussing, but I think it's clear that it should be its own ticket. comment:10 Changed 5 years ago by I'll open another ticket for the datetime as date one. comment:11 Changed 5 years ago by @Carl: 1. yes, I'll update the docstring of WhereNode.make_atom. @Carl: 2. in fact, the tests are subtly different between NewDatabaseTests and LegacyDatabaseTests. It may be possible to factor them in a mixin and use different class variables; I remember decided against that when I initially wrote them because it made them hard to read. I'll look again, but outside of the scope of this ticket. @Anssi: yes, the fact that DateTimeFields can be set or compared to date values is a problem when time zone support is enabled, since dates are naive objects. I've created #17742 to track this. comment:12 Changed 5 years ago by I think this is related to this ticket: why aren't aware datetimes converted to non-aware UTC datetimes using .register_adapter for SQLite3? MySQL has something similar (a "conv" dicitionary). This way the bug for this ticket would not exists. Granted that the patch should be committed even without any bugs in this ticket. For example if you are using raw cursors datetimes aren't converted correctly. If there was an adapter, they would work correctly in raw queries, too. Am I missing some design choice here? comment:13 Changed 5 years ago by By design, the ORM's puts the responsibility of conversions between Python objects and their database representation in Field classes. 
These classes provide more flexibility, on a per-field basis, than registering converters and adapters. My feeling is that we should walk away from adapters/converters, since their job is already performed by fields, and their existence could hide bugs like this one. I can't think of a use case where the ORM would send to the database a value (a Python object) that isn't either associated with a Field or a known type (typically, an int for a LIMIT clause). I think the way forward is to improve DateField and DateTimeField so they handle both datetimes and dates appropriately. I haven't assessed the bug report yet, but it's probably a release blocker.
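(A minimal workaround sketch implied by comment:4; this is not from the ticket, and it targets the Django of that era, whose backends exposed connection.ops.value_to_db_datetime. The idea is to pre-convert the aware datetime to the backend's storage string so the annotation filter compares against exactly what SQLite stored. The variable e and the model names reuse the ticket's example session.)

from django.db import connection, models
from aggregate_fields.models import Session, Event

now_tz = Event.objects.get(pk=e.pk).time
# Convert up front, since the buggy code path skips this conversion
# for lookups against annotations:
db_value = connection.ops.value_to_db_datetime(now_tz)
matches = (Session.objects
           .annotate(start=models.Min('events__time'))
           .filter(start=db_value))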
https://code.djangoproject.com/ticket/17728
brooksuk, I've updated the plugin to be a bit more powerful. I believe the issue you were having with alignment is that the plugin was intended for aligning things like equal signs in the middle of lines, so it always inserted spaces. The way that Sublime measures columns of characters, only a single space would have been inserted if you use tabs for indenting. That said, I've updated the plugin to do what I think you were intending to do, which is to align the left edge of each line to the same indent level. If you have a multi-line selection, the plugin will cause all lines to be indented to the same level. As a bonus, if you run it a second time, the first set of equal signs will be aligned using spaces. The original functionality for multiple selections still exists and functions the same way. If any of what I described is confusing, please see the readme for examples.

New update: the multi-line selection mode now properly handles equal signs that do not have a space before them. Equal signs are aligned to the left-most column possible while leaving at least one space to the left of each equal sign/operator. github.com/wbond/sublime_alignm ... all/master

It would be useful for JavaScript to check not only for '=' but for ':' too. The equal sign could have precedence.

Nice plugin. I love that plugin! I really like having my "="s aligned. As senzo said, ":" alignment would be great for JavaScript and JSON.

A quick modification for ":" alignment. Replace

equal_pt = view.find('=', pt).a

with

import re
if re.search("=", view.substr(view.line(pt))):
    equal_pt = view.find('=', pt).a
elif re.search(":", view.substr(view.line(pt))):
    equal_pt = view.find(':', pt).a

I made it work with the syntax below:

var test = "sadsad",
    asd = "Asdasd",
    asdasdasd = "asdasd";

by commenting out this part of the code:

# for pt in points:
#     pt += adjustment
#     length = max_col - view.rowcol(pt)[1]
#     max_length = max([max_length, length])
#     adjustment += length
#     view.insert(edit, pt, (' ' if use_spaces else '\t') * length)

You don't use this plugin the way it was designed; take a look at the documentation from wbond. The code you commented out removes the left alignment and only keeps the alignment of "=", which is probably what you want. This plugin should probably be split into two commands, one for the left alignment and one for the "=" (or ":") alignment.
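(A hedged consolidation of the ':' modification quoted above; this is a sketch, not the plugin author's code. It keeps the '=' precedence senzo asked for by testing '=' first, and assumes the same view/pt variables as the plugin.)

# Sketch: extend the alignment search to ':' while preferring '='.
line_str = view.substr(view.line(pt))
for ch in ('=', ':'):
    if ch in line_str:
        equal_pt = view.find(ch, pt).a
        break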
https://forum.sublimetext.com/t/multi-selection-alignment/1749/5
The ILDASM tool parses any .NET Framework EXE/DLL module and shows the information in a human-readable format.

I am writing my first paper on C# technologies, so I started to search the net for what topic papers are available on various C# sites. I found that programmers and developers have presented almost all topics, but nothing was available for the IL Disassembler, a very useful tool for .NET programmers. Users will find it their best friend once they start using it. You can download .NET SDK beta 1 from the MSDN site. Once you have installed the .NET SDK on your system, you can find the IL disassembler tool, named ILDasm.exe, in the directory C:\Program Files\Microsoft.NET\FrameworkSDK\bin (Windows OS).

A question arises in the mind of the programmer: "Why should I use the ILDASM tool?" The answer I found in the tutorial supplied with the .NET SDK: "The ILDASM tool parses any .NET Framework EXE/DLL module and shows the information in a human-readable format. It allows the user to see the pseudo assembly language for .NET." The IL disassembler tool shows not only namespaces but also types, including their interfaces. As its name suggests, IL is an intermediate language, so it has its own specification. You can also write programs using this intermediate language; it's very similar to the assembly languages of olden days.

I will use one simple example with ILDASM.exe. Here is the C# Hello World program:

using System;

class HelloWorld
{
    static void Main()
    {
        Console.WriteLine("Hello, world!");
    }
}

Compile it on the command line using: csc HelloWorld.cs. HelloWorld.exe will be generated. Then use the command: ildasm HelloWorld.exe. You will get a screen like this. Here you can see all of the symbols. The table below explains what each graphic symbol means. Some of them you can find in HelloWorld's members.
https://www.c-sharpcorner.com/article/vs-net-tools-intermediate-language-disassembler-ildam/
I am trying to wrap my noodle around the whole pointer concept here. I want to call the function fun1 with a pointer. What syntax do I use? What am I doing wrong? I get this error when I try to compile:

Code:
boohoo.cpp: In function 'int main(int, char**)':
boohoo.cpp:21: error: expected primary-expression before ')' token

Any help would be great.

Code:
#include <iostream>
using namespace std;

struct myStruct
{
    long * stuff1;
    long stuff2;
    long stuff3;
};

void fun1(myStruct* workplz);

int main(int argc, char* argv[])
{
    myStruct workplz;
    fun1(workplz*);
}//end main

void fun1 (myStruct* workplz)
{
    cout<<"it works."<<endl;
}
https://cboard.cprogramming.com/cplusplus-programming/83654-calling-function-pointer-struct.html
02 January 2009 19:22 [Source: ICIS news]

TORONTO (ICIS news)--JP Morgan on Friday cut its target price and 2009 earnings estimate for Dow Chemical because of pressure in the company's basic plastics segment. The bank cut the target for Dow's shares by $5 to $15, and it reduced the earnings per share (EPS) estimate by 50 cents to $1.50. Dow's shares were up 1.2% to $15.27.

"We project that Dow's basic plastics segment is likely to face pressures in 2009 related to lower product pricing and lower estimated cash margins," JP Morgan said in a research note. "Additionally, we believe that lower projected operating rates will contribute to poor fixed cost absorption," it added.

By contrast, JP Morgan expects Dow's agricultural sciences segment to generate modest sales growth and greater profits in 2009, it said. The bank maintained its 2008 fourth-quarter EPS estimate of 34 cents and its full-year 2008 estimate of $2.65.

Commenting on speculation that Dow may try to seek better terms for its pending acquisition of Rohm and Haas, the bank said: "It is difficult to envision clearly how Dow can reach better terms with Rohm. There are no issues of possible material adverse events or solvency issues as in the case of Huntsman."

"The stress of the transaction is not a matter of Rohm and Haas but of the pressures of the global slowdown, commodity overcapacity, and the frozen and expensive character of the credit markets," the bank said.

Competition authorities in the

($1 = €0.71)
http://www.icis.com/Articles/2009/01/02/9181222/jp-morgan-cuts-dow-chemical-targets-on-tough-2009-outlook.html
Draft Arc Description The Arc tool creates a circular arc in the current work plane by entering four points, the center, a point that defines the radius, the first end point and the second end point, or by picking tangents, or any combination of those. It uses the Draft Linestyle set on the Draft Tray. This tool works the same way as the Draft Circle tool, but adds start and end angles. To draw an elliptical arc use Draft Ellipse. You can also approximate a circular arc by using the Draft BSpline, Draft BezCurve, and Draft CubicBezCurve tools. Arc defined by four points, center, radius, initial point of arc and final point of arc. How to use - Press the Draft Arc button, or press A then R keys. - Click a first point on the 3D view, or type a coordinate and press the add point button. - Click a second point on the 3D view, or enter a radius value. - Click a third point in the 3D view, or enter a start angle. - Click a fourth point in the 3D view, or enter an aperture angle. The arc can be edited by double clicking on the element in the tree view, or by pressing the Draft Edit button. Then you can move the center point to a new position. The arc can be turned into a circle after creation, by setting its first angle and last angle properties to the same value. Options - The primary use of the arc tool is by picking four points: the centre, a point on the circumference, the start of the arc, and its end. - By pressing Alt, you can select a tangent instead of picking point, to define the base circle of the arc. You can therefore construct several types of circles by selecting one, two or three tangents. - The direction of the arc depends on the movement you are doing with your mouse. If you start to move clockwise after the third point is entered, your arc will go clockwise. To go counter-clockwise, simply move your mouse back over the third point, until the arc starts drawing in the other direction. - Arc tool will restart after you finish the arc, allowing you to draw another one without pressing the tool button again. - Hold Ctrl while drawing to force snapping your point to the nearest snap location, independently of the distance. - Hold Shift while drawing to constrain your point horizontally or vertically in relation to the center. - Press Esc or the Close button to abort the current command. Properties An Arc object shares all properties from a Draft Circle, but some properties only make sense for the circle. Data - DataFirst Angle: specifies the angle of the first point of the arc. - DataLast Angle: specifies the angle of the last point of the arc. - DataRadius: specifies the radius of the arc. - DataMake Face: specifies if the arc makes a face or not. This property only works if the shape is a full circle. - To make an arc a full circle set the DataFirst Angle and DataLast Angle to the same value. The values 0° and 360° aren't considered the same, so if these two values are used, the circle will not form a face. View - ViewPattern: specifies a Draft Pattern with which to fill the face of the arc. This property only works if the arc is a full circle, and if DataMake Face is true, and if ViewDisplay Mode is "Flat Lines". - ViewPattern Size: specifies the size of the Draft Pattern. Scripting See also: Draft API and FreeCAD Scripting Basics. The Arc tool can be used in macros and from the Python console by using the same function to create circles, with additional arguments. See the information in Draft Circle. 
Example:

import Draft

Arc1 = Draft.makeCircle(200, startangle=0, endangle=90)
Arc2 = Draft.makeCircle(500, startangle=20, endangle=160)
Arc3 = Draft.makeCircle(750, startangle=-30, endangle=-150)
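(A hedged follow-up sketch, not from the wiki page: as described under Properties above, an arc becomes a full circle when its first and last angles are given the same value. The property names here follow the Data properties listed above and are assumed to be FirstAngle/LastAngle in scripting.)

import FreeCAD
import Draft

arc = Draft.makeCircle(200, startangle=0, endangle=90)
# Equal angles turn the arc into a full circle; note that 0 and 360
# are not treated as equal, so use the same number for both.
arc.FirstAngle = 0
arc.LastAngle = 0
FreeCAD.ActiveDocument.recompute()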
https://wiki.freecadweb.org/index.php?title=Draft_Arc&oldid=509130
Editor's note: Last week's sample recipes from ActionScript Cookbook covered using a unique depth when creating a new movie clip, performing actions at set intervals, and more. This week we conclude this series with recipes on pausing and resuming a sound, saving a local shared object, and searching XML. Just a sampling of the hundreds of solutions to common ActionScript problems that you'll find in this book.

You want to save local shared object data to the client computer.

Use the SharedObject.flush( ) method in the Flash movie.

Flash automatically attempts to save local shared object data to disk when the movie is unloaded from the Player (such as when the Player closes). However, it is not a good practice to rely on the automatic save functionality, as there are several reasons why the data might not save successfully. Instead, you should explicitly instruct the local shared object to write the data to disk using the SharedObject.flush( ) method:

flushResult = my_l_so.flush( );

When the flush( ) method is invoked, it attempts to write the data to the client computer. The result of a flush( ) invocation can be one of three possibilities:

- If the user set the local storage for the domain to "Never", the data is not saved and the method returns false.
- If the amount of disk space required to save the local shared object's data is less than the local storage setting for the domain, the data is written to disk and the method returns true.
- If the user has not allotted as much space as the shared object data requires, he is prompted to allow enough space or to deny access to save the data. When this happens, the method returns "pending". If the user chooses to grant access, the extra space is automatically allotted and the data is saved.

In the third case, in which the flush( ) method returns "pending", there is an additional step you can take to determine whether the user grants or denies access to save the data. When the user makes a selection from the automatic prompt, the onStatus( ) method of the shared object is automatically invoked. It is up to you to define the method to handle the results in the way that is appropriate for your application. When the callback method is invoked, it is passed a parameter. The parameter is an object with a code property that is set to "SharedObject.Flush.Success" if the user granted access or "SharedObject.Flush.Failed" if the user denied access. Here is an example that invokes flush( ) to save the data explicitly and then handles the possible responses:

my_l_so = SharedObject.getLocal("myFirstLSO");
my_l_so.data.val = "a value";
result = my_l_so.flush( );

// If the flush operation completes, check the result.
// If the operation is pending, the onStatus( ) method of the
// shared object is invoked before any result is returned.
if (result) {
  // Saved successfully. Place any code here that you want to execute after the data
  // was successfully saved.
} else if (!result) {
  // This means the user has set the local storage for the domain to 'Never.' If it is
  // important to save your data, you may want to alert the user here. Also, if you
  // want to make it easy for the user to change his settings, you can open the local
  // storage tab of the Player Settings dialog box with the following code:
  // System.showSettings(1);.
}

// Define the onStatus( ) method for the shared object.
// It is invoked automatically after the user makes a selection
// from the prompt that occurs when flush( ) returns "pending."
my_l_so.onStatus = function (infoObj) {
  if (infoObj.code == "SharedObject.Flush.Success") {
    // If the infoObj.code property is "SharedObject.Flush.Success", it means the
    // user granted access. Place any code here that you want to execute when the
    // user grants access.
  } else if (infoObj.code == "SharedObject.Flush.Failed") {
    // If the infoObj.code property is "SharedObject.Flush.Failed", it means the user
    // denied access. Place any code here that you want to execute when the user
    // denies access.
  }
};

If you know in advance that a shared object is likely to continue to increase in size with each session, it is prudent to request a larger amount of local storage space when the shared object is created. Otherwise, each time the current allotted space is exceeded, the user is prompted again to accept or deny the storage request. Setting aside extra space avoids repeatedly asking the user for permission to store incrementally more data. You can request a specific amount of space when you call the flush( ) method by passing it a number of bytes to set aside for the shared object:

// Request 500 KB of space for the shared object.
result = mySO.flush(1024 * 500);

You want to search an XML object for nodes based on keywords and other criteria such as node hierarchy.

Use the third-party XPath class from XFactorStudio.com.

Thus far in this chapter you've read recipes on how to work with XML objects using the DOM, or Document Object Model. This means that if you want to locate a particular node in the XML tree, you need to know the relationship of that node to the whole (i.e., first child, next sibling, etc.). However, when you want a more flexible way of looking for nodes, the DOM can become tedious. XPath is a language that allows you a much more intuitive and flexible way to find nodes within an XML object. XPath is a W3C standard (see) that is supported on many platforms, but it is not natively supported in Flash. However, Neeld Tanksley of XFactorStudio.com has created an ActionScript XPath class that you can download from. You should download the .zip file and extract all the .as files into your Flash Include directory (make sure they are extracted into the Include directory, and not into subdirectories). Once you have downloaded and installed the custom XPath class, you can include it in your Flash movies and begin using XPath to work with XML, as follows:

#include "XPath.as"

XPath uses path expressions to denote the node or nodes you want to find. For example, if the root node in your XML object is named books, then you can find that root node using:

/books

If books contains child nodes named book, then you can return all the book nodes using:

/books/book

If you don't know or care about the full path from the root node to the node or nodes for which you are searching, you can use a double slash to indicate that you want to locate all matching nodes at any level in the XML tree. For example, the following returns all author nodes regardless of their hierarchy:

//author

An asterisk (*) is a wildcard. For example, the following matches all author nodes that are children of any nodes that are, in turn, children of the books node:

/books/*/author

You can also use square brackets ([]) to indicate criteria that the nodes must match. For example, you can match all book nodes that contain author nodes with the following:

//book[author]

Notice that the preceding is different from //book/author in that the former returns book nodes and the latter returns author nodes.
You can also use expressions with equality operators such as the following, which returns all book nodes containing a child title node with a text value of "ActionScript Cookbook":

//book[title='ActionScript Cookbook']

The @ sign can be used to signify an attribute. The following example matches all author nodes containing a name attribute:

//author[@name]

There are also many other built-in functions, operators, and keywords in XPath that you can read about in the documentation.

The XPath class has one method that we are interested in for this recipe. The XPath.selectNodes( ) method is a static method, which means you invoke it from the class itself, not from an instance of XPath. The method takes two parameters - the XMLNode object to search and the XPath expression to use - and returns an array of matching nodes:

matches = XPath.selectNodes(my_xml, "/books/book");

Now let's take a look at an example of XPath in use. For this example you should make sure that you have installed all the .as files for the XPath class. Create an XML document using a simple text editor such as WordPad. Add the following XML code to the document, and then save it as books.xml.

Open a new Flash document. Copy the PushButton component symbol into the Library by dragging an instance from the Components panel onto the Stage and then deleting the instance. The symbol remains in the Library. Then add the following code to the first frame of the main timeline:

#include "XPath.as"

function doSearch ( ) {
  // Use the selectNodes( ) method to find all matches to the XPath expression the
  // user has entered into xpath_txt. Display the results in output_txt. Use a
  // newline character between each match.
  var results = XPath.selectNodes(my_xml, xpath_txt.text);
  output_txt.text = results.join("\n");
}

// Create the input text field for the user to enter the XPath expression.
this.createTextField("xpath_txt", 1, 20, 20, 300, 20);
xpath_txt.border = true;
xpath_txt.type = "input";

// Create the text field for the output of the results.
this.createTextField("output_txt", 2, 20, 70, 300, 300);
output_txt.border = true;
output_txt.multiline = true;
output_txt.wordWrap = true;

// Add the push button instance so the user can perform the search.
this.attachMovie("FPushButtonSymbol", "search_pb", 3, {_x: 20, _y: 45});
search_pb.setClickHandler("doSearch");
search_pb.setLabel("Find Matches");

// Set the ignoreWhite property to true for all XML objects.
XML.prototype.ignoreWhite = true;

// Create an XML object and load books.xml into it.
my_xml = new XML( );
my_xml.load("books.xml");

Save the Flash document to the same directory as the books.xml file. Test the movie. Try entering the following XPath expressions, and you should get the results as shown:

//book/title
<title>ActionScript Cookbook</title>
<title>Flash Cookbook</title>
<title>Flash Remoting: The Definitive Guide</title>
<title>ActionScript for Flash MX: The Definitive Guide</title>

//book[contains(title, 'ActionScript')]/title
<title>ActionScript Cookbook</title>
<title>ActionScript for Flash MX: The Definitive Guide</title>

//book[authors[author[@name='Joey Lott']]]/title
<title>ActionScript Cookbook</title>
<title>Flash Cookbook</title>

XPath is too large a subject to cover in detail in this book. For more information, refer to the online tutorial or check out the book XPath and XPointer by John E. Simpson (O'Reilly).

Joey Lott is a founding partner of The Morphic Group.
He has written many books on Flex and Flash-related technologies, including Programming Flex 3, ActionScript 3 Cookbook, Adobe AIR in Action, and Advanced ActionScript 3 with Design Patterns.
http://oreillynet.com/lpt/a/4251
NUMA, binding, and OpenMP
By Darryl Gove on Mar 31, 2009

One of my colleagues did an excellent bit of analysis recently; it pulls together a fair number of related topics, so I hope you'll find it interesting. So most systems with more than one CPU will see some element of NUMA. Solaris has contained some memory placement optimisations (MPO) since Solaris 9. These optimisations attempt to allocate memory locally to the processor that is running the application. OpenSolaris has the lgrpinfo command that provides an interface to see the levels of memory locality in the system. Solaris will attempt to schedule threads so that they remain in their locality group, taking advantage of the local memory.

Another way of controlling performance is to use binding to keep processes or threads on a particular processor. This can be done through the pbind command. Processor sets can perform a similar job (as can zones, or even logical domains), or it can be done directly through processor_bind.

One situation where binding is commonly used is for running OpenMP programs. In fact, it is so common that the OpenMP library has built-in support for binding through the environment variable SUNW_MP_PROCBIND. This variable enables the user to specify which threads are bound to which logical processors.

We put together an LD_PRELOAD library to prove it. The following code has a parallel section in it which gets called during initialisation. This ensures that binding has already taken place by the time the master thread starts.

#include <stdio.h>

#pragma init(s)

void s()
{
  #pragma omp parallel sections
  {
    #pragma omp section
    {
      printf("Init");
    }
  }
}

The code is compiled and used with:

$ cc -O -xopenmp -G -Kpic -o par.so par.c
$ LD_PRELOAD=./par.so ./a.out
https://blogs.oracle.com/d/tags/mpo
Well, as you may guess, I've got all of them too, so maybe someone else could give it a try?
-Fred

-----Message d'origine----- From: [email protected]
Sent: Saturday, July 13, 2013 3:54 PM
To: [email protected]
Subject: AW: AW: Trying to mavenize project flex-asjs with IntelliJ fails

Would be cool if you could give it a test-drive ;-) Strangely I have all the stuff available in all my repos ... would be cool if someone could check if it really worked.

Chris

-----Ursprüngliche Nachricht-----
Von: Frédéric THOMAS [mailto:[email protected]]
Gesendet: Samstag, 13. Juli 2013 15:52
An: [email protected]
Betreff: Re: AW: Trying to mavenize project flex-asjs with IntelliJ fails

Nice, thanks :-)

-----Message d'origine----- From: [email protected]
Sent: Saturday, July 13, 2013 3:48 PM
To: [email protected]
Subject: AW: Trying to mavenize project flex-asjs with IntelliJ fails

Ok ... so it's done ... Flexmojos 6.0.1 is available in the Sonatype public repo ... think it might take some time to reach the Maven central repo.

Have phun, Chris

PS: Releasing FM is a real PITA ... hopefully this will become a lot easier with the new plugin.

-----Ursprüngliche Nachricht-----
Von: [email protected] [mailto:[email protected]]
Gesendet: Freitag, 12. Juli 2013 16:33
An: [email protected]
Betreff: AW: Trying to mavenize project flex-asjs with IntelliJ fails

Ok ok, you got me ;-) think I'll go home now and start the release process for FM 6.0.1. I see this is a serious problem for people and, after all, I want people to be able to use it without problems ;-)

Chris

________________________________________
Von: [email protected] [[email protected]] im Auftrag von Carlos Rovira [[email protected]]
Gesendet: Freitag, 12. Juli 2013 10:33
An: [email protected]
Betreff: Re: Trying to mavenize project flex-asjs with IntelliJ fails

Hi Frederic,

I just saw your comment in YouTrack first and responded there. As I stated there, I think my environment is slightly different since:

a) I have an internal flexmojos version 6.1 deployed with the fix Chris Dutz made. Jose Barragan deployed it to our internal repository for testing since he's working closely with Chris. This fixes the namespace dependency problem from "com.adobe*" to "org.apache".

b) I have all SDKs from com.adobe and org.apache deployed, starting from 2.x.x.x to 4.9.1.x.

At the Maven level it seems to all work OK, but at the IntelliJ level I have the commented problem with the autogeneration of the flexmojos SDK. If I use com.adobe and 4.6 all works. If I use org.apache and 4.9.1, IntelliJ is not capable of generating the internal reference of flexmojos SDK 4.9.1.

I think it will be important to have flexmojos 6.1 released soon since the solved bug is very important: it is a real stopper for people wanting to use flexmojos with the Apache Flex SDK.

Thanks

Carlos

2013/7/12 Frédéric THOMAS <[email protected]>
> The good links can be found here [1] [2], maybe someone would like to
> add them to the wiki.
>
> [1]
> Find & Use FM6:
> index.html#nexus-search;gav~net.flexmojos.oss~flexmojos-maven-plugin~~~
> repos/asf?p=flex-utilities.git;a=blob;f=mavenizer/README.txt;h=d962139fb50f787f608a6f84a943355153306edc;hb=HEAD
> .net/wiki/display/FLEXMOJOS/Flexmojos+6.x
>
> [2]
> Building FM6:
> flexmojos
> display/FLEXMOJOS/Building+Flexmojos+from+sources
>
> And as I stated before, you can check my whiteboard (fthomas) to get a
> model of pom.xml built for air/swf/swc using FM6 and a recent Apache
> Flex SDK, that's an Air project but still, it might help.
>
> -Fred
>
> -----Message d'origine----- From: Gary Young
> Sent: Friday, July 12, 2013 4:34 AM
> To: [email protected]
> Subject: Re: Trying to mavenize project flex-asjs with IntelliJ fails
>
> I am still using Flexmojos 3.9 + Flex 4.6 + AIR 3.1. I want to use the
> most recent version, it just won't work. I have to exclude the old
> playerglobal.swc and include the new version in the pom. -Gary
>
> On Thu, Jul 11, 2013 at 7:43 PM, Frédéric THOMAS
> <[email protected]> wrote:
>> It has been answered
>> issue/IDEA-110462
>>
>> Check my whiteboard (fthomas) to get a model of a pom.xml building
>> for air/swf/swc using FM6 and a recent Apache Flex SDK (replace it by
>> 4.9.1), that's an Air project but still, it might help.
>>
>> -Fred
>>
>> -----Message d'origine----- From: Carlos Rovira
>> Sent: Thursday, July 11, 2013 10:47 PM
>> To: [email protected]
>> Subject: Trying to mavenize project flex-asjs with IntelliJ fails
>>
>> Hi,
>>
>> I'm trying to create a project in IntelliJ for flex-asjs with
>> mavenized SDK 4.9.1.1447119. But it seems that IDEA can't generate the Apache
>> Flex SDK from pom.xml. It works for Adobe Flex SDKs but not for
>> Apache Flex.
>>
>> I created a YouTrack ticket:
>>
>> issues/IDEA#
>>
>> Am I missing something, or is it really a bug?
>>
>> In the ticket I attached a draft of the pom.xml
>>
>> If you get the same error please vote for the ticket
>>
>> --
>> Carlos Rovira

--
Carlos Rovira
Director de Tecnología
M: +34 607 22 60 05
F: +34 912 94 80 80
http://mail-archives.apache.org/mod_mbox/flex-dev/201307.mbox/%[email protected]%3E
I know what you are thinking: I find slow spots and in turn it turns out to be me. I hope this will turn out the same way. Currently it can take up to 50% of my Python "App" time - that is not good. Most of the other tasks did take up more time, but I have significantly cleaned them up. So let's cut to the point. First is a Python task: Audio3DManager.update, which calls "getSoundVelocity" for each of its known sounds. If that call is removed, this task takes no time.

def getSoundVelocity(self, sound):
    """ Get the velocity of the sound. """
    if (self.vel_dict.has_key(sound)):
        vel = self.vel_dict[sound]
        if (vel!=None):
            return vel
    else:
        for known_object in self.sound_dict.keys():
            if self.sound_dict[known_object].count(sound):
                return known_object.getPosDelta(self.root)/globalClock.getDt()
    return VBase3(0, 0, 0)

I don't really know which part is causing the problem. I see keys() used a lot in the code base - aren't iterkeys() faster? In Py3k iterkeys() will become keys().

OK, now on to the next task, which is much slower. That one is in ShowBase:

def __audioLoop(self, state):
    if (self.musicManager != None):
        self.musicManager.update()
    for x in self.sfxManagerList:
        x.update()
    return Task.cont

As it happens, both musicManager and the sfxManagerList contain one OpenALAudioManager. At first I thought it was the same one, but it turns out there are 2 different OpenALAudioManagers to update. Without question this update takes a long time, maybe even duplicating the effort:

////////////////////////////////////////////////////////////////////
// Function: OpenALAudioManager::update
// Access: Public
// Description: Perform all per-frame update functions.
////////////////////////////////////////////////////////////////////
void OpenALAudioManager::
update() {
  // See if any of our playing sounds have ended
  // we must first collect a seperate list of finished sounds and then
  // iterated over those again calling their finished method. We
  // can't call finished() within a loop iterating over _sounds_playing
  // since finished() modifies _sounds_playing
  SoundsPlaying sounds_finished;

  double rtc = TrueClock::get_global_ptr()->get_short_time();
  SoundsPlaying::iterator i=_sounds_playing.begin();
  for (; i!=_sounds_playing.end(); ++i) {
    OpenALAudioSound *sound = (*i);
    sound->pull_used_buffers();
    sound->push_fresh_buffers();
    sound->restart_stalled_audio();
    sound->cache_time(rtc);
    if (sound->status()!=AudioSound::PLAYING) {
      sounds_finished.insert(*i);
    }
  }

  i=sounds_finished.begin();
  for (; i!=sounds_finished.end(); ++i) {
    (**i).finished();
  }
}

I don't see anything particularly slow here - maybe you do? Another funny thing I noticed is that I can stop __audioLoop from running at all (saving lots of CPU cycles) and the sound still plays! Yes, really, it does play - but only for a second or so, then it dies out unless I force another sound to play (with a click or something)... maybe I can just force a sound to play every once in a while to keep the sound system moving and save about 50% of the app time - I think I will find other drawbacks to this method, though.
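(A hedged sketch of the dictionary optimization hinted at in the post; this is my rewrite, not Panda3D's code. It assumes the same vel_dict/sound_dict attributes as the original Audio3DManager and Python 2-era dict methods, replacing has_key plus indexing with a single get() and list.count with the in operator.)

def getSoundVelocity(self, sound):
    """Get the velocity of the sound (faster lookup sketch)."""
    vel = self.vel_dict.get(sound)      # one hash lookup instead of has_key + index
    if vel is not None:
        return vel
    # Note: unlike the original, this falls through to the search when a
    # None velocity is cached for the sound.
    dt = globalClock.getDt()
    for known_object, sounds in self.sound_dict.iteritems():
        if sound in sounds:             # 'in' stops at the first hit; count() scans it all
            return known_object.getPosDelta(self.root) / dt
    return VBase3(0, 0, 0)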
https://discourse.panda3d.org/t/openal-sound-system-is-slow/3531
ActionScript is the internal programming language that Flash designers and developers use to add interactivity to projects. Sometimes a linear progression through the Timeline with animations that never vary is not enough. ActionScript can add variety, randomness, and user input and control to the mix.

Introduced in Flash 2, interactive control of Flash has been around for a long time. Flash 4 included support for written scripts. Flash 5, unveiled in 2000, contained the first reasonably full-featured version of a scripting language. This language was called ActionScript and was retroactively named ActionScript 1.0 (AS1) later on. Since that time, there have been two major architectural changes to the language. Flash MX 2004 (actually released in September of 2003) included ActionScript 2.0 (AS2), a more robust iteration of ActionScript and the first to introduce formal object-oriented programming capabilities to Flash. Later, in 2007, Flash CS3 rebuilt ActionScript from scratch when it let ActionScript 3.0 (AS3) out of the cage. Rather than enhancing the codebase of AS1 and AS2, and continuing with any baggage or flaws ActionScript adopted through its early existence, the code was reinvented for AS3. The prior code base was just too entrenched to accommodate sweeping improvements without breaking backward compatibility. Instead, an entirely new codebase was developed and added to Flash Player alongside the legacy player code.

The split codebases mean that AS3 can't intermingle with older versions of ActionScript in a single FLA, but it also means that current Flash Players still support projects created in any version of the language. Projects that use AS1 or AS2 will play back on virtually every Flash-enabled computer, while files that rely on ActionScript 3.0 require a contemporary player (version 9 or later) to function.

Although AS3 is a restart of sorts, it still shares some characteristics with AS1 and AS2, and even with other languages based on ECMAScript, the standard from which ActionScript evolved. ECMAScript actually began its life as JavaScript so, if you have any experience coding web pages with JavaScript, you have a leg up when it comes to learning AS3. In many ways, however, AS3 is entirely different. Here are a few examples of changes and improvements that have reshaped AS3 into the fastest and most powerful version of ActionScript in any Flash release to date:

Consistent syntax makes AS3 easier to pick up as you go along. Many rambling issues from prior versions of the language have been clarified and conventions followed more reliably. For example, in prior versions of ActionScript, properties (ways of describing objects, such as width and height) were sometimes preceded by underscores, and sometimes not. Once you gain a little bit of experience writing basic scripts, you'll find yourself hunting for specific, task-related solutions. At that point, you'll find the language consistent enough that you can actually start guessing at syntax and finding yourself right a lot of the time. This consistency also makes using help and other resources easier.

More detailed error reporting makes finding problems in AS3 programs significantly easier. No longer will your code silently fail at runtime, leaving you to wonder where your troubles lay. Instead, AS3 not only improves runtime error reporting, it also introduces error and warning reports when your file is compiled to SWF for distribution.
By learning about these errors at authoring time, you can usually catch and fix your bugs prior to introducing your application into the wild.

Stronger data typing is the greatest aid to better error reporting. By declaring the type of your data as a number, for instance, you'll be notified if you accidentally use a text string instead. The consistent use of strong data typing makes AS3 more verbose than prior versions of the language but, again, this enables better error reporting and more reliably sound projects.

New display architecture features consolidate and simplify the many ways that prior versions of ActionScript controlled visual assets. Using the new display list, it's now much simpler to control the visual stacking order of visual assets and to manipulate the familial relationships (parent, child, and sibling) between these assets.

New event architecture features unify a method of processing events. All events are now handled with event listeners, which listen for specific events and react when those events occur. For example, a listener assigned to listen for a mouse event will ignore keyboard events. This makes event handling more efficient.

Improved handling of external assets simplifies the task of loading data at runtime. Consistent approaches to loading assets can be applied to images, text, sound, video, and even other SWF files. It's much easier to work with XML in AS3, and you can even work with raw data, such as the bytes that make up a JPEG or sound.

Better sound management provides greater control over sound playback in 32 discrete sound channels. You can also poll data from sounds during playback to visualize sounds with waveforms and similar graphics.

There are several more compelling attributes of AS3 that are likely of interest to intermediate to advanced coders; however, these fall outside the scope of this chapter. Some of these topics are discussed in the companion volume, Learning ActionScript 3.0: A Beginner's Guide (O'Reilly), described in the next section.

AS3 is a robust language with significant breadth and depth. It also has a learning curve steeper than those associated with other versions of ActionScript. In this author's humble opinion, it's unrealistic to cover both the Flash authoring application and the ActionScript language simultaneously with any degree of effectiveness. As such, this book is one of a pair that has been conceived to focus on these two areas of interest. Each volume is organized to make it easy to acquire and digest the material most appropriate for your needs. The book you're reading now focuses primarily on the Flash authoring application, while its companion volume, Learning ActionScript 3.0, focuses almost exclusively on ActionScript. However, despite dividing this material into two separate volumes, ActionScript can't be ignored in the pages herein. Unless you intend to use Flash strictly for linear animations, you'll need a minimum amount of ActionScript to expand your Flash skills. You certainly need ActionScript to add interactivity, and you need at least one line of code just to stop a Timeline animation from looping forever.

Furthermore, this book is project-based, teaching topics both with isolated examples and the ongoing development of a single portfolio. The isolated examples require no long-term investment to grasp the ideas behind them, and the project allows you to apply your newly learned skills to an example real-world application. Therefore, this chapter aims to provide an ActionScript primer, of sorts.
It consists of material condensed from several chapters of Learning ActionScript 3.0, as well as content directly related to the portfolio project. This primer will cover the basic skills required to support each topic and project task through the remainder of this book. Additional ActionScript will be introduced periodically to cover chapter-specific content. Try to remember that this material is here to get you started. Don't expect to learn the language from a primer, and don't worry if it seems like a lot to take in upon first reading. Feel free to use this chapter as a small reference, reading it in segments and coming back to it when you need additional help with language fundamentals. You can then decide whether you'd like to explore ActionScript further, and whether to acquire this book's companion volume. Learning ActionScript 3.0 covers a great deal more coding concepts, has considerably more detail, and is supported by its own companion website.

Although much of the Flash interface was covered in Chapter 1, INTERFACE ESSENTIALS, ActionScript-related interface elements have been reserved for discussion in this chapter. At the entry level, we'll focus on two primary panels for ActionScript development: the Actions panel and the Output panel.

The Actions panel is where you'll be writing your scripts (Figure 6.1, "The Actions panel"). While it's also possible to write scripts in external files (discussed at length in the companion volume Learning ActionScript 3.0), this book concentrates on originating your scripts in the Flash Timeline, so the Actions panel will be your home. Here's an overview of tools found within the Actions panel. In some cases, their functionality may not be self-explanatory, but relevant topics will be explained later in the chapter:

- The main pane on the right is the Script pane where you write your scripts.
- The pane in the lower-left corner is the Script Navigator, used to select any script written in your FLA.
- The pane in the upper-left corner is the Actions Toolbox. From here, you can drag or double-click to add ActionScript to your script in progress.
- The left two panes can be minimized to provide more room for the Script pane.
- This menu provides access to the content in the Actions Toolbox for use when that pane is minimized.
- This button opens a find and replace dialog for editing your script.
- This button opens a graphical browser for selecting objects in your FLA, such as movie clips. It will insert a path to that object in your script.

Figure 6.1. The Actions panel

- This button checks your scripts for errors.
- This button formats error-free scripts, or alerts you to problems in your code.
- When possible, this button shows you a floating syntax tip for the ActionScript surrounding the cursor while editing your script.
- Here you can insert, remove, or enable/disable breakpoints for use in the ActionScript debugger. This is a tool for intermediate users and beyond and requires a bit of comfort and experience to use.
- Using these three tools, you can temporarily hide code by collapsing multiple lines into one closed marker. You can collapse between braces ({}), collapse selected code, or expand all previously collapsed segments.
- These three tools help to add comments, or disable/enable code in blocks of multiple lines, or in a single line, as well as uncomment selected commented code.
- This button expands and minimizes the Actions Toolbox.
- Each time you change scripts, the previous script is replaced in the Script pane.
This button will force scripts to stay in tabs at the bottom of the panel, allowing you to switch between them easily.

This button launches a rigid interface-driven script authoring system that is not discussed in this book.

This button opens the Actions panel menu, discussed later in this chapter.

The Output panel is a very simple, but very useful, text output panel that can be used to monitor your own understanding of a script (Figure 6.2, “The Output panel”). Its sole purpose is to display text generated by a script at authortime. You can't enter a script in the Output panel, and the panel doesn't exist at runtime. You will use the Output panel only as a means of getting quick feedback from an example, or as a testing and debugging technique when writing scripts. You'll likely find the trace command very helpful, both in this book and in your own scripts, for sending information to the panel.

Figure 6.2. The Output panel

Because AS3 is a vast language, it can sometimes be a bit difficult to explain in a strictly linear fashion. For example, to understand how to effectively manipulate visual assets, you must learn about properties, methods, events, and event listeners. These are the basic building blocks of most scripted tasks that allow you to get and set characteristics of, issue instructions to, and react to input from many Flash objects. To go into a reasonable depth, particularly with real-world samples, you typically must discuss two or more of these topics at the same time. For example, to create an interactive exercise that allows you to experiment with properties or methods, you need events. Similarly, to understand events, you usually need to set properties or call methods.

Therefore, this section will introduce you to some basic terms and an example use or two of these topics. Later, you'll follow up with more examples and additional discussions. For all three of these introductory passages, code samples will be provided for a movie clip that has been given an instance name of mc.

Properties are somewhat akin to adjectives in that they describe the object being modified or queried. For example, you can check or set the width of a movie clip. Most properties are read-write, meaning that you can both get and set their values. Some properties, however, are read-only, which means you can ask for, but not change, their values. Here are examples of setting the width, height, and rotation of a movie clip. You can see these properties in use in the properties.fla source file.

mc.width = 100;
mc.height = 50;
mc.rotation = 90;

Methods are a bit like verbs. They are used to tell objects to do something, such as play and stop. In some cases, methods can simplify the setting of properties. You might use a method called setSize(), for example, to simultaneously set the width and height of something. Other methods are more unique, such as navigateToURL(), which will instruct a browser to display a web page. Here are examples of telling a movie clip to play, stop, and go to the next frame. You can see these in use in the methods.fla source file.

mc.play();
mc.stop();
mc.nextFrame();

Events are the catalysts that trigger the actions you write, such as setting properties and calling methods. For instance, a user might click the mouse button, which results in a mouse event. That event then causes a function to execute, performing the desired actions.
Event handlers are the ActionScript middlemen that trap the events and actually call the functions. ActionScript 3.0 unifies event handling into a consistent system of event listeners, which are set up to listen for the occurrence of a specific event and react accordingly. Here is an example of an event listener, designed to listen for a mouse up event. You can see this in use in the events.fla source file:

mc.addEventListener(MouseEvent.MOUSE_UP, doIt);
function doIt(evt:MouseEvent):void {
    trace("do it");
}

You'll learn more about some important concepts in action here, such as how the block of code called a function works, what's inside the parentheses of the function name, and what void means, but here is the basic idea. An event listener is listening for a specific mouse event—a mouse up event only, or when the mouse button is released after being pressed. The listener is attached to the movie clip, so when the movie clip is clicked, a mouse up event will be detected. When that happens, the function called doIt() is executed, and the letters "do it" are displayed in the Flash Output panel.

This flow of information—from the click of the mouse, to the trapping (receipt) of the event, to the execution of the function and the eventual output of the text message—is the cycle of event processing. Additional details will be discussed later in this chapter, but this is the mechanism by which interactivity is controlled.

The infrastructure beneath a programming language is often overlooked, but understanding what may seem like smaller topics will make it easier for you to adapt to ActionScript and form good habits. If you look over the code examples earlier in the section called “Basic Script Grammar”, you'll see that a dot (.) separates the movie clip instance name from its properties and methods. This is sometimes referred to as dot syntax or dot notation. Essentially, this system strings together a series of items, from highest to lowest in the object hierarchy, including only items relevant to the task at hand. In this case, the first relevant item is the movie clip instance and the last is the property or method. Considering another example, in which you want to check the width of a movie clip that is inside another movie clip, the first item will be the parent, or container movie clip, then comes the nested movie clip, and then comes its width:

mc1.mc2.width;

This dot syntax will be used in virtually every example for the rest of this book, and it will soon become as familiar as your own language.

Simply put, ActionScript is case-sensitive. For any word that is already part of the ActionScript lexicon, you must replicate that case exactly. For example, neither "True" nor "TRUE" will work when you're trying to represent the Boolean (true/false) value of true. Another example is keyboard input. When you want to verify a user's key input, "Claire" and "CLAIRE" are not the same.

For words you make up, such as variable names (a variable is a container for storing data, discussed later in the chapter), you must be consistent. If you name a variable myMovieClip, you can't refer to it later as MyMovieClip. Although you can use any case you like for your own names, a few conventions exist to make code more readable—particularly among multiple programmers working on the same project, where standardized practices really help.
A few examples of conventions that are widely adopted include:

Camel case. This is a naming convention in which variable names are lowercase, except for the first letter of compound words. For example, "movie" is lowercase, but "myMovieClip" capitalizes the second and third words. Camel case is typically the default naming convention, and the next two examples are exceptions to the rule.

Class names. Classes are often used to create instances of objects, such as movie clips, for a net result similar to dragging a movie clip from the Library to the Stage. Classes are, essentially, collections of related code responsible for the creation and behavior of objects. You'll learn more about classes throughout this chapter and use them throughout this book. The first letter of a class name is capitalized to set it apart from other possible variable or instance names. For example, instead of using "movieClip" as the name of the class responsible for movie clip behavior, the actual class is called MovieClip.

Constants. Constants are variables that don't change their values. Constants are typically written in all uppercase. Constants that represent keys on the keyboard, for example, include SPACE and TAB. In ActionScript 3.0, constants are usually organized into classes and are referenced through the class name. The aforementioned keyboard constants, for instance, are part of the Keyboard class and are referred to by the usage Keyboard.SPACE and Keyboard.TAB.

In general, ActionScript executes in a top-to-bottom, left-to-right order. That is, each line executes one after another, working from left to right. While this is typically a reliable rule of thumb, several things can change this order in subtle ways. For example, subroutines of one type or another can be called in the middle of a script. This causes the execution order of the original script to pause while the remote routine is executed. When the subroutine has completed, the execution of the original script continues where it left off. Execution flow will be explained in context in all scripts in this book.

The official use of the semicolon in most ECMAScript languages is simply to allow execution of more than one instruction on a single line. This is rare in the average script, and we will look at this technique when discussing loops. However, the semicolon is also used to indicate the end of a line. Typically, this is not required, but there are cases in AS3 that rely on the semicolon to indicate the end of a line. These are cases in which a single line would be too difficult to read, so the line is broken up by carriage returns, even though it's essentially a single execution. A good example of this is writing XML in a human-readable form in a script. The ActionScript compiler is smart enough to understand that line breaks in XML are just for readability, and it looks for the semicolon to indicate the end of a line. For this reason, and because forging this habit makes it easier to transition into learning other languages where the semicolon is required, place a semicolon at the end of every line.

It's helpful to note that you are usually not solving an equation when you see an expression with like values on the left and right of an equals sign. For example, if you see something like x = x + 1, it's unlikely that you will be solving for the value of x. Instead, this line is assigning a new value to x by adding 1 to its previous value.
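To make the distinction concrete, here is a minimal sketch (the variable name is arbitrary) showing that such a line reassigns a value rather than solving for one:

var x:Number = 5;
// the old value of x (5) is read, 1 is added, and the result is stored back in x
x = x + 1;
trace(x); // 6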
Much like a computer operating system's directory or the file structure of a website, ActionScript refers to the location of its objects in a hierarchical fashion. You can reference an object, such as a movie clip, using an absolute or relative path. Absolute paths can be easy because you most likely know the exact path to any object starting from the main Timeline. However, they are quite rigid and will break if you change the nested relationship of any of the referenced objects. Relative paths can be a bit harder to call to mind at any given moment, but they are more flexible. Working from a movie clip and going up one level to its parent and down one level to a sibling will work from anywhere, be it in the main Timeline, another movie clip, or nested even deeper, because the various stages aren't named. Table 6.1, “Absolute paths from main Timeline to nested movie clip” and Table 6.2, “Relative paths from a third movie clip, up to the root, and down to the child of a sibling” draw parallels to the operating system and website analogies. Table 6.1, “Absolute paths from main Timeline to nested movie clip” references the root, which in this case is another way to refer to the main Timeline. The companion website has more information about how to use root.

Table 6.1. Absolute paths from main Timeline to nested movie clip

ActionScript: root.mc1.mc2
Windows OS: c:\folder1\folder2
Mac OS: Macintosh/folder1/folder2

Table 6.2. Relative paths from a third movie clip, up to the root, and down to the child of a sibling

ActionScript: this.parent.mc1.mc2
Windows OS: ..\folder1\folder2
Mac OS: ../folder1/folder2
Website: ../dir/dir

Comments are lines of text within your scripts that are not executed and are invaluable programmer's tools. The obvious purpose of a comment is to add a brief descriptive note that will explain the purpose, expected outcome, or possibly a caveat of a segment of your script. Comments are really important when working with other programmers, but they're also very helpful when archiving your own code. It's not uncommon to revisit code long after writing it and have to figure out what it does for your project. A single-line comment begins with two slashes:

//set width to size of left column
mc.width = 100;

There's another, possibly lesser-known purpose of comments, however. It's quite convenient to use comments to temporarily disable code. For example, you may want to quickly disable ActionScript sound playback or change navigation options. You can also use comments to try two different approaches to a programming task without deleting and rewriting. By completing two alternate versions of a code segment, you can test either at any time by commenting one out and the other back in. Symmetrical slash-asterisk pairs surround multiline comments:

/*
mc.width = 100;
mc.height = 50;
*/
mc.width = 50;
mc.height = 100;

This is obviously a very simple example just to demonstrate the comment toggling process. In this case, it may be as easy to switch the values each time you test, but it's also easy to become distracted and lose track of the values you tested. As a proof of concept, this example switches a movie clip between horizontal and vertical sizes. To try this code the other way, you could uncomment the first block and comment the second block:

mc.width = 100;
mc.height = 50;
/*
mc.width = 50;
mc.height = 100;
*/

As you write your scripts, it's helpful to check your progress. As Adobe's John Dowdell likes to say, "test early, test often."
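One low-tech way to follow that advice is to trace interim values as you work, then comment the line out once you're satisfied; a minimal sketch:

// temporary check while testing; comment out when finished
trace("current width: " + mc.width);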
There are a few tools built into Flash's interface that can help you quickly examine the health of your scripts: A quick and easy way to see if your script is in good shape is to click the Check Syntax button at the top of the Actions panel (see Figure 6.1, “The Actions panel”). If your script is OK, a dialog will tell you as much. If there are problems, the compiler will alert you via the Compiler Errors panel. When the compiler detects errors or warnings, it will add them to the Compiler Errors panel (Figure 6.3, “The Compiler Errors panel”) so you can try to find the problem and correct it. Error reports include the error, a brief description of the problem, and even a line number where the error is thought to have occurred. If you double-click the error, the Flash interface will switch to the Actions panel and scroll to the errant line number. If you ever want to learn more about the warnings or errors that may appear during development or runtime, here are a few resources to explore:

Figure 6.3. The Compiler Errors panel

As insignificant as it sounds, formatting a script can help you find problems. Formatting can help because Flash will check the integrity of your scripts before proceeding. If errors exist, formatting will be abandoned. In this way, you're giving your code a once-over every time you format it. In addition, formatting a script will indent it properly, place braces and spaces where specified, and more. These assists help you to spot lines of code that may be out of place. To format a script, click the Auto Format button at the top of the Actions panel (see Figure 6.1, “The Actions panel”). Flash's Preferences (Flash→Preferences on Mac, Edit→Preferences on Windows) allow you to customize (to some extent) how the script is formatted (Figure 6.4, “The Preference dialog's ActionScript formatting preferences”). Additional basic options, such as word wrapping, are available from the Actions panel's menu (Figure 6.5, “The Actions panel menu options”), accessible from the upper-right corner of the panel.

Figure 6.4. The Preference dialog's ActionScript formatting preferences

Figure 6.5. The Actions panel menu options

To home in on a particular segment of a longer script, improving focus and clarity, you can use the code collapse feature (Figure 6.6, “Before and after code collapse in the Actions panel”). Highlight the lines of code you want to hide temporarily and click on the open arrow next to the first or last selected line number. To show the code again, click on the closed arrow.

Figure 6.6. Before and after code collapse in the Actions panel

Variables are best described as containers into which you place information for later recall. Imagine if you were unable to store any information for later use. You would not be able to compare values against previously stored information (such as usernames or passwords), your scripts would suffer performance lags due to repeated unnecessary calculations, and you wouldn't be able to carry any prior experiences through to the next possible implementation of a task. In general, you wouldn't be able to do anything that required data that your application had to "remember." You'll learn on the next page that variables used for the first time must be declared with the var keyword. Variables make all this and more possible, and are relatively straightforward.
In basic terms, you need only create a variable with a unique name so it can be referenced separately from other variables and the ActionScript language itself, and then populate it with a value. Ignoring usage for a moment, a simple example is remembering the number 1 with the following:

myVariable = 1;

There are just a few rules and best practices to consider when naming variables. They must:

Be one word

Include only alphanumeric characters, dollar signs ($), or underscores (_)

Not start with a number

Not already be a keyword or reserved word in ActionScript

To help you catch bugs and unexpected uses of data, ActionScript can monitor what's placed into variables. You can tell the ActionScript compiler that you want a certain type of data to be stored in a variable, and when your file is compiled into a SWF, you will be warned if the variable contains a different kind of data. This compile-time error checking can prevent problems from sneaking into your projects and is one of the best things about AS3. For example, if you try to perform a mathematical operation on a passage of text, Flash will issue a warning so you can correct the problem before you distribute your work to a client or the public. To make this work, you must indicate what you intend to store in each variable—this is called declaring the variable. To declare a variable, precede its first use with the var keyword and use the syntax <variable name>:<data type> to specify the type of data to be stored. For instance, the previous example of remembering the number 1 should be written this way:

var myVariable:Number = 1;

There are several native data types including, but not limited to, those listed in Table 6.3, “Variable types”.

Table 6.3. Variable types

Number (example: 4.5): Any number, including floating point values (decimals)
int (example: −5): Any integer or whole number
uint: Unsigned integer, or any nonnegative whole number
String (example: "hello"): Text, or a string of characters
Boolean (example: true): True or false
Array (example: [2, 9, 17]): More than one value in a single variable
Object (example: myObject): The basic structure of every ActionScript entity, but also a custom form that can store multiple values as an alternative to Array

There are also dozens of additional data types that describe which kind of object is used. For example, the following line of code uses the MovieClip class (the built-in code that makes a movie clip behave the way it does) to create an empty movie clip at runtime:

var myMC:MovieClip = new MovieClip();

It's impractical to list every possible data type here, but this book will reference data types frequently, and using them will soon become second nature to you.

In some cases, you will need to change one data type to another. This is called casting. A good example of the need for casting data types is when using a text field to capture numeric input from the user. When a user types into a text field, the data captured is, logically, text. However, when the intended use of that information is mathematical, you must convert the input from text data to numeric data. There are two ways to cast from one data type to another. The first is by using the as operator. Applying the as operator to existing data, followed by the desired data type, will make the conversion.

var num:Number = userAnswer.text as Number;

The second method is to use the desired data type class as a method and place the original data inside the parentheses.
The following code, for example, converts a number to an integer:

var num2:int = int(num);

An important basic idea in any programming language is that of operators. Operators are symbols that represent action taken upon objects, such as setting, comparing, or testing values. Some operators will seem like common sense, such as arithmetic operators: addition (+), subtraction (−), multiplication (*), division (/), and more. Others, however, may not be as obvious. For example, did you know that, in addition to adding two numbers, the plus operator (+) can join two strings of text? A single equals sign (x = 1) is an assignment operator, because it is used to assign a value to something, but did you know there is also a double equals sign (==) operator? The latter is used to compare two values to determine whether they are equal. There are also a few categories of operators that you may never have seen before:

Shortcut arithmetic operators combine two tasks into one operator. The following are equivalent to the standard operators shown beside them:

x++ is equivalent to x = x + 1
x-- is equivalent to x = x - 1
x += n is equivalent to x = x + n
x -= n is equivalent to x = x - n
x *= n is equivalent to x = x * n
x /= n is equivalent to x = x / n

Comparison operators are typically used in conditional (if) statements for comparing values. The following examples include equal to, not equal to, greater than, greater than or equal to, less than, and less than or equal to.

if (x == 1) { }
if (x != 1) { }
if (x > 1) { }
if (x >= 1) { }
if (x < 1) { }
if (x <= 1) { }

Logical operators are also used in conditional statements. They group tests together using and (&&), or or (||) to create a new test. The combined test relies on either one or the other original tests passing (or) or both tests passing (and). Another logical operator tests for falsehood using not (!). You'll learn about conditionals later, but it's good to be able to recognize these operators:

if (x == 1 && y == 2) { }
if (x == 1 || y == 2) { }
if (!myClip.visible) { }

Scope is the realm or space within which an object lives. As you learn more about ActionScript, particularly when you start writing classes, you'll find scope a more central issue. Scope is still notable when you're just starting out, however, when it comes to use of this.

The keyword this is essentially shorthand for "whichever object or scope you're working with now." For example, think of a movie clip inside Flash's main timeline. The main timeline and the movie clip each have a unique scope, so a variable or function defined inside the movie clip will not exist in the main timeline, and vice versa. It's easiest to understand the usage of this in context, but here are a couple of examples to get you started. If you want to refer to the width of a movie clip, from within that same movie clip, you write:

this.width;

The this identifier refers to the movie clip in which the script was written. Similarly, if you want to check the width of the movie clip's parent, this is still the basis of the code:

this.parent.width;

In both cases, this is a reference point from which you start your path. It's fairly common to drop the keyword when going down the chain of objects from the current scope (as in the first example), but it's usually used, or even required, when going up to a higher scope (as in the second example). This is because Flash must understand which ancestor is needed when traversing through the hierarchy. Imagine a family reunion in which several extended family members, including cousins and multiple generations, are present, and you are looking for your mother, father, or grandparent.
If you just said "parent," any number of parents might answer. If you instead said "my parent" or "my mother's parent," that would be specific enough to get you headed in the right direction.

Functions are an indispensable part of programming in that they wrap code into blocks that can be executed only when needed. They also allow you to reuse and edit code blocks efficiently, without having to copy, paste, and edit repeatedly. Without functions, all code would be executed in a linear progression from start to finish, and edits would require changes to every single occurrence of any repeated code. Creating a basic function requires little more than surrounding the code you wish to trigger with a wrapper that allows you to give the block a name. Triggering that function later requires only that you call the function by name. The following syntax shows a function that traces a string to the Output panel. The function is defined and then, to illustrate the process, immediately called. In a real-world scenario, the function is usually called at some later time or from some other place, such as when the user clicks a button with the mouse. The output is depicted in the comment that follows the function call:

function showMsg() {
    trace("hello");
}
showMsg();
//hello

If efficiently reusing code and executing code only when needed were the only advantages of functions, you'd already have a useful improvement over linear execution of a script. This allows you to group your code into subroutines that you can trigger at any time and in any order. However, you can do much more with functions to gain even greater power. Assume you need to vary the purpose of the previous function slightly. Let's say you need to trace 10 different messages. To do that using only what you've learned so far, you'd have to create 10 functions and vary the string that is sent to the Output panel in each function. However, you can accomplish this more easily using arguments. Arguments are like variables that have life only within their own functions. By adding an argument to the parentheses next to the function name, you can pass a value into that argument when you call the function. In the following case, the argument is called msg and is expected to contain a data type of String:

function showMsg(msg:String) {
    trace(msg);
}
showMsg("goodbye");
//goodbye

By using msg in the body of the function, it takes on the value that is sent into the argument. In this example, the function no longer traces "hello" every time it's called. Instead, it traces the text sent into its argument when the function is called. When using arguments, it's ideal to type the data coming in so Flash knows how to react and can issue any warnings or errors.

It's also possible to return a value from a function, increasing its usefulness. Returning a value to the script from which it was called means a function can vary its input and its output. The following examples convert temperature values from Celsius to Fahrenheit and Fahrenheit to Celsius. In both cases, a value is sent into the function and a calculated value is returned to the script. The first example immediately traces the result sent back from the function, while the second example stores the value returned from the function in a variable. This mimics real-life usage in that you can immediately act upon the returned value or store and process the data at a later time.
function celToFar(cel:Number):Number {
    return (9 / 5) * cel + 32;
}
trace(celToFar(20));
//68

function farToCel(far:Number):Number {
    return (5 / 9) * (far - 32);
}
var celDeg:Number = farToCel(68);
trace(celDeg);
//20

When returning a value from a function, you should also declare the data type of the return value. As with applying other data types, use a colon followed by the type specific to that function, but place the data type between the argument's close parenthesis and the opening function brace. Once you get used to this practice, it's best to specify void as a return type when your function does not return a value. This will cause an alert if you attempt to return a value after originally planning not to do so.

You will often need to make a decision in your script, choosing to do one thing under one circumstance and another thing under a different circumstance. These situations are usually addressed by conditionals. Put simply, a conditional is a test that determines whether a particular condition is met. If the condition is met, the test evaluates to true, and specific code is executed accordingly. If the condition is not met, either no further action is taken or an alternate set of code is executed.

The most common form of conditional is the if statement. The statement's basic structure is the if keyword, followed by the conditional test in parentheses and the code to be executed (if the statement evaluates to true) in braces. The first three lines in the following example establish a set of facts. The if statement evaluates the given facts (this initial set of facts will be used for this and subsequent examples in this section).

var a:Number = 1;
var b:String = "hello";
var c:Boolean = false;
if (a == 1) {
    trace("option a");
}

To evaluate the truth of the test inside the parentheses, conditionals often make use of comparison and logical operators. A comparison operator compares two values, such as equals (==), less than (<), and greater than or equal to (>=), to name a few. The test in the preceding example uses a double equals sign. This is a comparison operator that asks, "Is this equal to?" This distinction is very important because the accidental use of a single equals sign will cause unexpected results. A single equals sign is an assignment operator that assigns the value on the right side of the equation to the object on the left side of the equation. Because this assignment occurs, the test will always evaluate to true.

A logical operator evaluates the logic of an expression. Included in this category are and (&&), or (||), and not (!). These allow you to ask if "this and that" are true, or if "this or that" is true, or if "this" is not true. For example, the following code would return false because both conditions must be true. The value of a is 1, but the value of b is "hello." Because the second test fails, the combined test fails. As a result, nothing would appear in the Output panel.

if (a == 1 && b == "goodbye") {
    trace("options a and b");
}

In the next example, the test would evaluate to true, because one of the two conditions (the first) is true. As a result, "option a or b" would appear in the Output panel.

if (a == 1 || b == "goodbye") {
    trace("option a or b");
}

Finally, the following would also evaluate to true because the not operator correctly determines that c is not true (remember that every if statement, at its core, is testing for truth).

if (!c) {
    trace("not option c");
}

You can also use the not operator in a comparison.
When combined with a single equals sign, the pair means "not equal to." Therefore, the following will fail because a does equal 1, and nothing will appear in the Output panel:

if (a != 1) {
    trace("a does not equal 1");
}

Additional power can be added to the if statement by adding an unconditional alternative (true no matter what). In this case, an alternative set of code is executed no matter what the value being tested is, simply because the test did not pass. With the following new code added to the previous example, the second (else) trace will occur:

if (a != 1) {
    trace("a does not equal 1");
} else {
    trace("a does equal 1");
}

Finally, you can make the statement even more robust by adding a conditional alternative (or an additional test) to the structure. In this example, the second trace will occur:

if (a == 2) {
    trace("a does not equal 1");
} else if (a == 1) {
    trace("a does equal 1");
}

The if statement requires one if. You can use only one optional else, but you can use any number of optional additional else if tests. In all cases, however, only one result can come from the structure. Consider the following example, in which all three results could potentially execute—the first two because they are true, and the last because it's an unconditional alternative:

if (a == 1) {
    trace("option a");
} else if (b == "hello") {
    trace("option b");
} else {
    trace("option other");
}

In this case, only "option a" would appear in the Output panel, because the first truth would exit the if structure. If you needed more than one execution to occur, you would need to use two or more conditionals. The following structure, for example, executes the first trace in each if, by design:

if (a == 1) {
    trace("option a");
}
if (b == "hello") {
    trace("option b");
} else {
    trace("option other");
}

An if statement can be as simple or as complex as needed. However, long if structures can be difficult to read and are sometimes better expressed using the switch statement. The switch statement also has a unique feature that lets you control which, if any, instructions are executed, even when a test evaluates to false. Imagine an if statement asking if a variable is 1, else if it's 2, else if it's 3, and so on. A test like that can become difficult to read quickly. An alternate structure appears as follows:

switch (a) {
    case 1 :
        trace("one");
        break;
    case 2 :
        trace("two");
        break;
    case 3 :
        trace("three");
        break;
    default :
        trace("other");
        break;
}

In this case, "one" would appear in the Output panel. The switch line contains the object or expression you want to test. Each case line offers a possible value. Following the colon are the instructions to execute upon a successful test, and each break line prevents any following instructions from executing. When break is not used, the next instructions in line will execute, even if that test is false. For example, if a equals 1, the following will place both "one" and "two" in the Output panel, even though a does not equal 2:

switch (a) {
    case 1 :
        trace("one");
    case 2 :
        trace("two");
        break;
}

This break feature does not exist with the if statement and, if used with care, makes switch an efficient alternative to a more complex series of multiple if statements. Switch statements must have one switch and at least one case, an optional unconditional alternative in the form of default, and an optional break for each case and default. The final break is not needed, but you may prefer to include it for consistency.
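Used deliberately, this fall-through behavior also lets several cases share one result. A minimal sketch, assuming a String variable named grade (a placeholder, not from the project files):

var grade:String = "B";
switch (grade) {
    case "A" :
    case "B" :
        // "A" has no break, so it falls through to the same trace as "B"
        trace("pass");
        break;
    default :
        trace("fail");
        break;
}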
It's quite common to execute many repetitive instructions in your scripts. However, including them line by line, one copy after another, is inefficient and difficult to edit and maintain. Wrapping repetitive tasks in an efficient structure is the role of loops. A programming loop is probably just what you think it is: you use it to go through the structure and then loop back to the start and do it again. There are a few kinds of loops, and the type you choose to use can help determine how many times your instructions are executed.

The first type of loop structure is the for loop. This loop executes its contents a finite number of times. For example, you may wish to create a grid of 25 movie clips or check to see which of 5 radio buttons a user has selected. For our purposes, suppose you want to trace content to the Output panel three times. To loop through a process effectively, you must first start with an initial value, such as 0, so you know you have not yet traced anything to the Output panel. The next step is to test to see if you have exceeded your limit. The first time through, 0 does not exceed the limit of three times. The next step is to trace the content once, and the final step is to increment the initial value, registering that you've traced the desired content once. The process then starts over until, ultimately, you will exceed the limit of the loop. The syntax for a basic for loop is as follows:

for (var i:int = 0; i < 3; i++) {
    trace("hello");
}

You may notice the declaration and typing of the counter, i. This is a common technique because the i variable is often used only for counting and, therefore, is created on the spot and not used again. If you have already declared and typed the counter, you can omit this step. Next is the loop test. In this case, the counter variable must have a value that is less than 3. Finally, the double plus sign (++) is equivalent to i = i + 1, or "add 1 to the current value of i." Note the use of the semicolon to execute more than one step in a single line. The result is three occurrences of the word "hello" in the Output panel.

It's also possible to count down by reversing the values in steps 1 and 2, and then decrementing the counter:

for (var i:int = 3; i > 0; i--) {
    trace("hello");
}

In other words, as long as the value of i is greater than 0, execute the loop and subtract one from the counter each time. This is less common, and works in this case because the loop only traces a string. However, if you need to use the actual value of i inside the loop, that need may dictate whether you count up or down. For example, if you created 10 movie clips and called them mc0, mc1, mc2, and so on, it may be clearer to count up.

The other loop you'll likely use is the while loop. Instead of executing its contents a finite number of times, it executes as long as something remains true. For example, look at a very simple case of choosing a random number. ActionScript generates a random number using a method of the Math class called random(). This method chooses a random number between 0 and 1. So, say you want to choose a random number greater than or equal to 0.5. For the sake of discussion, you have a 50% chance of choosing a desired number each time, so you may end up with the wrong choice several times in a row.
To be sure you get a qualifying number, you can add this to your script:

var num:Number = 0;
while (num < 0.5) {
    num = Math.random();
}

Starting with a default value of 0, num will be less than 0.5 the first time into the loop. A random number is then put into the num variable, and if it's less than 0.5, the loop will execute again. This will go on until a random number that is greater than or equal to 0.5 is chosen, thus exiting the loop.

Use while loops with caution until you are comfortable with them. It's very easy to accidentally write an infinite loop with no exit, which will crash your machine. Do not try this code yourself, but here is a significantly simplified example of an infinite loop:

var flag:Boolean = true;
while (flag) {
    trace("I am an infinite loop");
}

In this case, the flag variable remains true and, thus, the loop can never fail.

It's very important to understand that although they are compact and convenient, loop structures are not always the best method to achieve an outcome. This is because loops are very processor-intensive. Once a loop begins its process, nothing else will execute until the loop has been exited. For this reason, it may be wise to avoid for and while loops when interim visual updates are required. In other words, when a loop serves as an initialization for a process that is updated only once upon its completion, such as creating the aforementioned grid of 25 movie clips, you are less likely to have a problem. The script enters the loop, 25 clips are created, the loop is completed, a frame update can then occur, and you see all 25 clips. However, if you want each of the 25 clips to appear one by one, those interim visual updates of the playhead cannot occur while the processor is consumed by the loop. In this situation, it's more desirable to create a loop using methods that do not interfere with the normal playhead updates. A frame loop is just such a method. A frame loop is simply a repeating frame event, executing an instruction each time the playhead is updated. The events occur concurrently with any other events in the ordinary functioning of the file, so visual updates, for example, can continue while the frame loop is executing. Frame loops will be explained in greater detail later in this chapter. For now, the important thing is to remember that frame loops offer a possible alternative to for and while loops.

Basic variables can contain only one value. If you set a variable to 1 and then later set that same variable to 2, the value will be reassigned to 2. However, there are times when you need one variable to contain more than one value. Think of a set of groceries, including 50 items. A variable approach to this task would be to define 50 variables and populate each with a grocery item. That's the equivalent of 50 pieces of paper, each containing one grocery item. That's not a shopping list you are likely to use. It's unwieldy, can only be created at authortime, and you'd have to recall and manage all variable names every time you wanted to access the grocery items. An array equivalent, however, resembles the way we handle this situation in real life. You can write a list of 50 groceries on one piece of paper. You can add to the list while at the store, cross each item off as it's acquired, and you only have to manage one piece of paper. Creating an array is quite easy. You can prepopulate an array by setting a variable (typed as an Array) to a comma-separated list of items, surrounded by brackets.
You can also create an empty array by using the Array class. Both techniques are illustrated here:

var myArray:Array = [1, 2, 3];
var yourArray:Array = new Array();

In both cases, you can add to or remove from the array at runtime. For example, you can add a value to an array using the push() method, which pushes the value into the array at the end. You can remove an item from the end of an array using the pop() method.

var myArray:Array = new Array();
myArray.push(1);
trace(myArray);
// 1 appears in the Output panel
myArray.push(2);
// the array now has two items: 1, 2
trace(myArray.pop());
// the pop() method removes the last item, displaying its value of 2
trace(myArray);
// the lone remaining item in the array, 1, is displayed

There are a dozen or so other array methods, allowing you to add to or remove from the front of an array, sort its contents, check for the position of a found item within the array, compare each value against a control value, and more. You can also add to or retrieve values from locations within the array by using brackets and including the index, or position, of the array you need. Keep in mind that ActionScript uses zero-based arrays, which means that the first value is at position 0, the second is at position 1, the next at position 2, and so on. So, to retrieve the existing fifth value from an array, you must request the item at position 4:

var myArray:Array = ["a", "b", "c", "d", "e"];
trace(myArray[4]);
//"e" appears in the Output panel

Arrays can even contain other arrays. The result is called a multidimensional array. Arrays within arrays can resemble database structures or tables. Here are two examples of creating multidimensional arrays:

var multiArray:Array = new Array();
multiArray.push([1, 2, 3]);
multiArray.push([4, 5, 6]);

var multiArray2:Array = new Array();
var array1:Array = [1, 2, 3];
var array2:Array = [4, 5, 6];
multiArray2.push(array1);
multiArray2.push(array2);

The example in the first three lines pushes two arrays that are created on the fly into a multidimensional array. The second example pushes two existing arrays, which have already been stored in their own variables, into a multidimensional array. In both cases, the resulting array looks like this:

// [[1,2,3],[4,5,6]]

As stated earlier, this is nothing more than an array of arrays. As such, accessing values from the multidimensional array is similar to accessing values from one-dimensional arrays. First, you follow the array name with brackets and an index that identifies which array you want to query further. Because that, too, is an array, you follow it with another set of brackets and index. The following example contains the syntax required to pull 4 out of the first position of the second array in either multidimensional array in the example (the first example, multiArray, is used here):

trace(multiArray[1][0]);
// 4

Associative arrays store a pair of items—the value and an associated property name, or key, to describe that value. For example, a student might be represented this way:

var student1:Array = new Array();
student1["name"] = "Jodi";
student1["email"] = "[email protected]";
student1["phone"] = "212-555-1212";

You can access the value the same way you set the value:

trace(student1["name"]);
// Jodi

Although the values can be any data type, each property in an associative array must be a string. Another way to accomplish this task is to use an object.
Creating a custom object allows you to get and set your property values using the familiar dot syntax employed throughout ActionScript. There are two ways to define objects. The first is to write them out explicitly, as shown in the following example. The syntax looks a bit like an array, with two differences. First, the entity is wrapped with braces, not brackets. Second, instead of single values, commas separate property/value pairs. To access or populate the values, simply reference the property using dot syntax:

var student2:Object = {name:"Sally", email:"[email protected]", phone:"212-555-1212"};
trace(student2.name);
// Sally

You can create an array of objects this way. You will see this approach in Chapter 9, COMPONENTS, when you work with components. To access an object's property, use the same syntax, but first determine which item in the array you want to manipulate. Pull that object out of the array using brackets, then access its property using dot syntax as previously described:

var studentGroup:Array = new Array();
studentGroup.push({name:"Jodi", email:"[email protected]", phone:"212-555-1212"});
studentGroup.push({name:"Sally", email:"[email protected]", phone:"212-555-1212"});
studentGroup.push({name:"Claire", email:"[email protected]", phone:"212-555-1212"});
trace(studentGroup[0].name);
// Jodi

Finally, the clearest way to create and populate an object is to create a new instance of the object and add its properties as needed:

var student3:Object = new Object();
student3.name = "Claire";
student3.email = "[email protected]";
student3.phone = "212-555-1212";
trace(student3.name);
// Claire

If you think of properties as ways of describing an object, using them becomes second nature. Asking where a movie clip is, for example, or setting the size of a movie clip are both descriptive steps and both use properties. When referencing a property, you must begin with an instance, because you must decide which element to query or change. If you consider a test file with only one movie clip, instantiated as "box," all that remains is referencing the property and either getting or setting its value. Table 6.4, “Movie clip properties” contains syntax for making five changes to movie clip properties. Later, when you learn more about events, you'll change these properties interactively.

Table 6.4. Movie clip properties

Location: properties x and y. Syntax: box.x = 100; box.y = 100; Units/range: pixels
Scale (1): properties scaleX and scaleY. Syntax: box.scaleX = 0.5; box.scaleY = 0.5; Units/range: percent / 0−1
Scale (2): properties width and height. Syntax: box.width = 72; box.height = 72; Units/range: pixels
Rotation: property rotation. Syntax: box.rotation = 45; Units/range: degrees / 0−360
Transparency: property alpha. Syntax: box.alpha = 0.5; Units/range: percent / 0−1
Visibility: property visible. Syntax: box.visible = false; Units/range: true or false

Figure 6.7, “Changes to five movie clip properties” shows the visual change made by each property included in Table 6.4, “Movie clip properties”. The light-colored square is the original state and the darker color represents the square after a property change (the alpha property shows only the final state). The dashed stroke for the visible property indicates that the box is not visible.

Figure 6.7. Changes to five movie clip properties

A few changes have simplified and unified the way properties are referenced in AS3. First, the properties do not begin with an underscore. Rather than varying property syntax, some with and some without leading underscores, no properties begin with the underscore character. Second, some value ranges that used to be 0–100 are now 0–1. Examples include scaleX, scaleY, and alpha.
Instead of using 50 to set a 50% value, specify 0.5. Finally, the first scaling method uses the properties scaleX and scaleY rather than _xscale and _yscale, which are their AS1/AS2 equivalents. Typically, AS3 properties will cite the x and y versions of a property as a suffix to make the code easier to read.

Table 6.4, “Movie clip properties” shows syntax for setting a property. Querying the value of a property, also known as getting the property, is just as easy. For example, to trace the box's alpha value or store it in a variable, you can write either of the following:

trace(box.alpha);
var bAlpha:Number = box.alpha;

You can also set the properties of a class instance. For example, the following code creates an instance of the Point class, and then sets the x and y values of that point:

var myPoint:Point = new Point();
myPoint.x = 20;
myPoint.y = 20;

This code is usually more readable (or at least virtually self-commenting) than trying to make all possible property assignments when the instance is created. For example, in Chapter 7, FILTERS AND BLEND MODES, you'll apply filters, such as a drop shadow filter, to movie clips with ActionScript. You can customize a filter in one line using this syntax:

var ds:DropShadowFilter = new DropShadowFilter(5, 45, 0x000000, 0.5, 5, 5, 1, 1, false, false, false);

However, you may not remember all of the properties that are being set in that syntax, or you may not remember the order in which they are set. If readability and clarity are more important than brevity, you can accomplish the same task using this:

var ds:DropShadowFilter = new DropShadowFilter();
ds.distance = 5;
ds.angle = 45;
ds.color = 0x000000;
ds.alpha = 0.5;
ds.blurX = 5;
ds.blurY = 5;
ds.strength = 1;
ds.quality = 1;
ds.inner = false;
ds.knockout = false;
ds.hideObject = false;

Methods, the verbs of the ActionScript language, instruct their respective objects to act. Like properties, methods appear consistently in the dot syntax that is the foundation of ActionScript, following the object calling the method. For example, if the movie clip "box" in the main Timeline issues the stop() method, the syntax would appear like this:

box.stop();

Methods also sometimes require values to be passed to the object being manipulated. For example, although the stop() method will stop a movie clip from playing at the visible frame, another method will stop playback after first going to a specific frame. The following example tells the movie clip "box" to go to frame 3 and stop:

box.gotoAndStop(3);

Events make the Flash world go 'round. They are responsible for setting your scripts in motion, causing them to execute. A button can be triggered by a mouse event, and text fields react to keyboard events. Even calling your own custom functions is a means of issuing a custom event. Events come in many varieties. In addition to the obvious events like mouse and keyboard input, most ActionScript classes have their own events. For example, events are fired when watching a video, working with text, and resizing the Stage. To take advantage of these events to drive your application, you need to be able to detect their occurrences. In previous versions of ActionScript, there were a variety of ways to trap events. In AS3, trapping events is simplified by relying on one approach for all event handling: the use of event listeners. The concept of event listeners is pretty simple. Imagine that you are in a lecture hall that holds 100 people.
Only one person in the audience has been given instructions about how to respond when the lecturer asks a specific question. In this case, one person has been told to listen for a specific event and to act on the provided instructions when this event occurs. Now imagine that many more responses are required. For example, when the lecturer takes the stage, someone must dim the lights. When the lecturer clicks a hand-held beeping device, an audio/visual technician must advance to the next slide in the presentation. When each video ends, the lecturer must react by introducing the next exhibit in the lecture. Finally, when an audience member raises a hand, an usher must bring a microphone to assist the audience member in asking his or her question. These are all reactions to specific events that are occurring throughout the lecture. Some are planned and directed to a specific recipient, such as the beeping that triggers the technician to advance to the next video in the series. Others are unplanned, such as when, or even if, an audience member has a question. Yet each appropriate party in the mix has been told which event to listen for and how to react when that event occurs.

Creating an event listener, in its most basic form, is also fairly straightforward. The first step is to identify the host of the listener. That is, which object should be told to listen for a specific event. One easy-to-understand example is instructing a button to listen for mouse events that might trigger its scripted behavior. Once you have identified an element that should listen for an event, the next step is choosing an event appropriate for that element. For example, it makes sense for a button to listen for a mouse event, but it makes less sense for the same button to listen for the end of a video or the resizing of the Stage. It would be more appropriate for the video player to listen for the end of the video and for the Stage to listen for any resize event. Each respective element could then act or instruct others to act when that event occurs, which leads to the third main step in setting up a listener. To identify the instructions that must be executed when an event occurs, you simply write a function, then tell the event listener to call that function when the event is heard. That function uses an argument to receive information about the event that called it, allowing the function to use key bits of data during its execution.

To tie it all together, the addEventListener() method puts the listener into service and assigns the function to be executed when the event is heard. Suppose you want a button called rotate_right_btn to listen for a mouse up event and call the function onRotateRight() when the event is heard. The code to accomplish this looks like the following script:

rotate_right_btn.addEventListener(MouseEvent.MOUSE_UP, onRotateRight);
function onRotateRight(evt:MouseEvent):void {
    box.rotation += 20;
}

In the first line, addEventListener() is attached to the button. The method requires two mandatory parameters. The first is the event for which you want to listen. In AS3, similar events are grouped together into classes to make it easier for you to check against their data types. Checking to make sure the incoming event is of type MouseEvent prevents a KeyboardEvent from triggering the listener function. The MouseEvent class contains constants that refer to mouse events like mouse up and mouse down.
This example uses the MOUSE_UP constant to reference the mouse up event. The second parameter is the function that should be called when the event is received. In this example, it is a reference to the onRotateRight() function, which begins on the second line. The function used in an event listener is just like any other function, with one exception: the argument in a listener function is not optional. In the following code, for example, the argument is named evt and receives information about the element that triggered the event. You can use information from this argument in the function, which you'll do in a moment. The argument should be typed to the expected data to improve error checking. In this case, because the function listens for a MouseEvent, that is the data type used for the argument.

To illustrate this, look at the impact of another mouse event, with more than one listener in play.

1 myMovieClip.addEventListener(MouseEvent.MOUSE_DOWN, onStartDrag);
2 myMovieClip.addEventListener(MouseEvent.MOUSE_UP, onStopDrag);
3 function onStartDrag(evt:MouseEvent):void {
4     evt.target.startDrag();
5 }
6 function onStopDrag(evt:MouseEvent):void {
7     evt.target.stopDrag();
8 }

In this code, two event listeners are assigned to a movie clip. One listens for a mouse down event and another listens for mouse up. They each invoke different functions. In both functions, however, the target property of the event, which is retrieved from the function argument, identifies which element received the mouse event. This allows the instruction in line 4 to start dragging the selected movie clip, and allows the instruction in line 7 to stop dragging the selected movie clip without specifying the movie clip using its instance name in either case. This generic approach is very useful because it makes the function much more flexible. It means the function can act upon any appropriate item that is clicked and passed into its argument. In other words, the same function could start and stop dragging any movie clip to which the same listener was added. In the companion source files, the start_stop_drag.fla file shows this by adding the following lines to the previous example:

9 myMovieClip2.addEventListener(MouseEvent.MOUSE_DOWN, onStartDrag);
10 myMovieClip2.addEventListener(MouseEvent.MOUSE_UP, onStopDrag);

You can drag and drop each movie clip simply by adding another movie clip to the exercise and specifying the same listeners.

Now you can use syntax from the section called “Properties”, the section called “Methods”, and the section called “Events” sections of this chapter to set up interactive control over a movie clip. In the companion source code, you'll find a file called props_methods_events.fla. It contains nothing more than the example movie clip "box," and two buttons in the Library that will be used repeatedly to change the five properties discussed earlier. The movie clip contains numbers to show which of its frames is visible at any time, and the instance names of each button will reflect its purpose. Included are move_up_btn, scale_down_btn, rotate_right_btn, fade_in_btn, and toggle_visible_btn, among others. The start of the main chapter project consists of several buttons that will modify properties of the center movie clip. Figure 6.8, “Layout of the props_methods_events.fla file” shows the layout of the file.

Figure 6.8. Layout of the props_methods_events.fla file
Starting with movement, you need to define one or more functions to update the location of the movie clip. In this case, you'll define a separate function for each type of movement.

1 function onMoveLeft(evt:MouseEvent):void {
2     box.x -= 20;
3 }
4 function onMoveRight(evt:MouseEvent):void {
5     box.x += 20;
6 }
7 function onMoveUp(evt:MouseEvent):void {
8     box.y -= 20;
9 }
10 function onMoveDown(evt:MouseEvent):void {
11     box.y += 20;
12 }

You can see that the structure of the functions is consistent. Once you have defined the functions, all you have to do is add the listeners to the appropriate buttons:

13 move_left_btn.addEventListener(MouseEvent.MOUSE_UP, onMoveLeft);
14 move_right_btn.addEventListener(MouseEvent.MOUSE_UP, onMoveRight);
15 move_up_btn.addEventListener(MouseEvent.MOUSE_UP, onMoveUp);
16 move_down_btn.addEventListener(MouseEvent.MOUSE_UP, onMoveDown);

This simple process is then repeated for each of the buttons on stage. The remainder of the script collects the aforementioned properties and event listeners in the same pattern used with the four movement listeners to complete the demo.

17 scale_up_btn.addEventListener(MouseEvent.MOUSE_UP, onScaleUp);
18 scale_down_btn.addEventListener(MouseEvent.MOUSE_UP, onScaleDown);
19
20 rotate_left_btn.addEventListener(MouseEvent.MOUSE_UP, onRotateLeft);
21 rotate_right_btn.addEventListener(MouseEvent.MOUSE_UP, onRotateRight);
22
23 fade_in_btn.addEventListener(MouseEvent.MOUSE_UP, onFadeIn);
24 fade_out_btn.addEventListener(MouseEvent.MOUSE_UP, onFadeOut);
25
26 toggle_visible_btn.addEventListener(MouseEvent.MOUSE_UP, onToggleVisible);
27
28 function onScaleUp(evt:MouseEvent):void {
29     box.scaleX += 0.2;
30     box.scaleY += 0.2;
31 }
32 function onScaleDown(evt:MouseEvent):void {
33     box.scaleX -= 0.2;
34     box.scaleY -= 0.2;
35 }
36
37 function onRotateLeft(evt:MouseEvent):void {
38     box.rotation -= 20;
39
40 }
41 function onRotateRight(evt:MouseEvent):void {
42     box.rotation += 20;
43 }
44
45 function onFadeIn(evt:MouseEvent):void {
46     box.alpha += 0.2;
47 }
48 function onFadeOut(evt:MouseEvent):void {
49     box.alpha -= 0.2;
50 }
51
52 function onToggleVisible(evt:MouseEvent):void {
53     box.visible = !box.visible;
54 }

Finally, you can add an example of a mouse event triggering a method. The final 10 lines of this script will toggle the movie clip between playing and stopped states. Because the stop() and play() methods are not Boolean opposites, you can't use a single not (!) operator to switch the states. Instead, you need to use a conditional statement that tests the value of a Boolean variable. Each time the play state is reversed, the Boolean variable can be toggled using the not (!) operator to prepare for the next test.

55 var isPlaying:Boolean = true;
56 box.addEventListener(MouseEvent.CLICK, onTogglePlay);
57 function onTogglePlay(evt:MouseEvent):void {
58     if (isPlaying) {
59         box.stop();
60     } else {
61         box.play();
62     }
63     isPlaying = !isPlaying;
64 }

Line 55 creates the variable isPlaying and sets it to true. The default state of the movie clip is playing. Lines 56 through 64 contain the event listener. The listener triggers the onTogglePlay() function when a user clicks the box movie clip. Line 58 tests to see if isPlaying is true. If so, line 59 stops the movie clip. Otherwise, line 61 plays the movie clip. In either case, the value of isPlaying is reversed (if it was false, it's set to true, and vice versa).
Frame events are not triggered by user input the way mouse events are. Instead, they occur without intervention as the Flash file plays. Each time the playhead enters a frame that contains a frame script, that script is executed. This means that frame scripts execute only once for the life of the frame, making them excellent for seldom-executed tasks such as initializations. For a frame script to execute more than once, the playhead must leave the frame and return, either because of an ActionScript navigation instruction or a playback loop that returns the playhead to frame 1 when it reaches the end of the Timeline.

However, using an event listener, you can listen for a recurring enter frame event in a movie clip that is independent from the playhead. This type of event can trigger even when the playhead is stopped; this is a great way to run continuous animation. An enter frame event is fired at the same pace as the document frame rate. For example, if the default frame rate is 12 frames per second, the default enter frame frequency is 12 times per second.

The frame_events.fla file in the companion source code demonstrates this event by updating the position of a unicycle every enter frame. It places the unicycle at the horizontal location of the mouse and rotates the wheel child movie clip. Figure 6.9, "Visual depiction of the unicycle movements" shows the effect. As the user moves the mouse to the right on the Stage, the unicycle will move to the right, and the wheel will rotate clockwise.

Figure 6.9. Visual depiction of the unicycle movements

Here is the code for this example. The first line adds an enter frame event listener to the main Timeline. The listener function then sets the unicycle's x location to the mouse's x location using the mouseX property. It also sets the rotation property of the wheel clip (nested inside cycle) to the same value.

stage.addEventListener(Event.ENTER_FRAME, onFrameLoop);
function onFrameLoop(evt:Event):void {
    cycle.x = mouseX;
    cycle.wheel.rotation = mouseX;
}

This example also demonstrates a scripting shortcut aided by ActionScript. When specifying a rotation greater than 360 degrees, ActionScript will understand and use the correct value; that is, 360 degrees is one full rotation around a circle, returning to degree 0 (720 degrees is twice around the circle and also equates to 0). Similarly, 370 degrees is equivalent to 10 degrees, as it's 10 degrees past degree 0, and so on. This allows you to set the rotation of the wheel movie clip to the x coordinate of the mouse without worrying about moving past the 360-pixel point on the Stage.

While event listeners make most event handling easy to add and maintain, leaving them in place when unneeded can wreak havoc. From a logic standpoint, consider what could happen if you kept an unwanted listener in operation. Imagine a week-long promotion for radio station 101 FM, which rewards the 101st customer to enter a store each day of that week. The manager of the store is set up to listen for "customer enter" events and when customer 101 enters the store, oodles of prizes and cash are bestowed upon the lucky winner. Now imagine if you left that listener in place after the promo week was over. Oodles of prizes and cash would continue to be awarded at great, unexpected expense.

Unwanted events are not the only problem, however. Every listener occupies a small amount of memory. Injudiciously creating many event listeners without cleaning up after yourself can result in dwindling memory.
Therefore, it's a good idea to remove listeners when you know they will no longer be needed. To do so, simply use the removeEventListener() method. By specifying the owner of the relevant event and the listener function that is triggered, you can remove that listener so it no longer reacts to future events. The removeEventListener() method requires two parameters: the event and function specified when the listener was created. Specifying the event and function is important, as you may have multiple listeners set up for the same event.

The following example rotates the hand of a watch 2 degrees every time an enter frame event is fired. When the hand rotates to 90 degrees, the listener is removed and the rotation stops. (See the remove_listener.fla source file.)

this.addEventListener(Event.ENTER_FRAME, onLoop);
function onLoop(evt:Event):void {
    watch.hand.rotation += 2;
    if (watch.hand.rotation >= 90) {
        this.removeEventListener(Event.ENTER_FRAME, onLoop);
    }
}

One of the most dramatic changes introduced by AS3, particularly for designers accustomed to prior versions of the language, is the way in which code is used to access visual elements. AS3 brings with it an entirely new way of handling visual assets, called the display list. It's literally a hierarchical list of everything you can see, including movie clips, buttons, text fields, and more. There's a lot of power and flexibility available to a scripter who uses the display list to its full potential. This book's companion volume includes deeper coverage of the display list, but here you'll focus on the tasks you'll find yourself repeating over and over: adding assets to the list, referencing assets in the list by position or name, and removing assets from the list.

If you have experience with prior versions of ActionScript, you'll probably be excited to hear that the many varied ways of adding an asset to the visual world of Flash have been unified and simplified. If you're new to ActionScript, the news is even better. You only have to learn one approach to getting an asset into the public eye.

Objects that you can add to the display list are called, appropriately, display objects. For all display objects, you'll use a variant of new MovieClip(): the new keyword followed by the object class for which you want to create an instance. Other examples include new TextField(), new Video(), and new Bitmap(). Even adding an existing movie clip from the Library is consistent with this syntax, as you'll see in a moment.

While there are many object types that you can add to the display list, you'll focus on movie clips initially. Just like any other class, the new keyword creates an instance of the display object, and you can use the object class (MovieClip) as a data type:

var mc:MovieClip = new MovieClip();

But that's only the first half of the story. This code creates an object (in this case, a movie clip), but it only places that object in memory. It hasn't yet become a part of the list of assets the user can see. The second half of the story is adding the object to the display list. The primary way to add a movie clip to the display list is by using the addChild() method. To add the movie clip you created a moment ago to the main Timeline, you can place this statement after the prior instruction to create the movie clip:

addChild(mc);

Despite its simplicity, this code does imply a destination for the movie clip.
By omitting an explicit destination, you cause the movie clip to be added to the scope in which the script was written, in this case the main Timeline. You can specify a particular location to which the movie clip will be added, but not all display objects will be happy to adopt a child. For example, neither the Video nor Bitmap display object types can have children. To include a child, the destination must be a display object container. Examples of display object containers include Stage, MovieClip, and Sprite (a one-frame movie clip with no Timeline), but for the purposes of this chapter, you'll continue to work with movie clips. So, if you wanted to add the mc movie clip nested inside another movie clip, called mc2, you would provide a destination object for the addChild() method to act upon:

mc2.addChild(mc);

You don't even have to specify a depth (visible stacking order), because the display list automatically handles that for you. In fact, you can even use the same code for changing the depths of existing display objects; a short sketch of this technique follows at the end of this section.

Thus far you've either referenced display objects using instance names that have been applied on the Stage (through the Properties panel) or limited the dynamic creation of display objects to empty movie clips. However, you will likely find a need to dynamically create and use instances of movie clips that already exist in your library. This process is nearly identical to creating and using empty movie clips, but one additional step is required: you must first set up the symbol by adding a linkage class. In its most basic use, this is nothing more than a unique name that allows you to create an instance of the symbol dynamically.

To see this process, look at the companion source file addChild.fla. In the Library, you will find a unicycle. Select the movie clip in your library, then click the Symbol Properties button (it looks like an "i" at the bottom of the Library) for access to all the symbol properties.

In the resulting dialog, shown in Figure 6.10, "Entering a class name for a movie clip in the library Linkage settings", enable the Export for ActionScript option and add Unicycle to the Class field. You can then instantiate the symbol using the Unicycle class rather than the generic MovieClip class: the first line of the following script creates the movie clip, and the second line adds the newly created movie clip to the display list.

var cycle:MovieClip = new Unicycle();
addChild(cycle);

Figure 6.10. Entering a class name for a movie clip in the library Linkage settings

The addChild() method adds the display object to the end of the display list, which always places the child on top of other display objects. In some cases, however, you may need to add a child at a specific position in the display list. For example, you may wish to insert an item into the middle of a vertical stack of display objects. This example, found in the addChildAt.fla source file, adds a Library movie clip with the class name Ball to the start of the display list with every mouse click. The ultimate effect is that a new ball is added below the previous balls and positioned down and to the right 10 pixels every time the mouse is clicked.

1 var inc:uint = 0;
2
3 stage.addEventListener(MouseEvent.CLICK, onClick);
4 function onClick(evt:MouseEvent):void {
5     var ball:MovieClip = new Ball();
6     ball.x = 100 + inc * 10;
7     ball.y = 100 + inc * 10;
8     addChildAt(ball, 0);
9     inc++;
10 }

Each click creates a new ball, derives its x and y values by adding inc * 10 to 100 so that each new ball appears 10 pixels down and to the right of the last, adds the ball at position 0 of the display list (the bottom of the stack), and increments inc for the next click.

Choosing when to use addChild() and when to use addChildAt() depends entirely on your needs. If you only need to add the display object to the display list, or if you want the object to appear on top of all other objects, use addChild(). If you need to insert the object anywhere below the top of the visual stacking order, use addChildAt() and specify a depth.
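As promised above, here is a short sketch of reordering existing children with these same methods. This example is not from the book's source files, and the instance names are hypothetical, but the calls are standard display list methods: re-adding a child that is already in a container does not duplicate it; it simply moves it to the requested position.

// Assume redClip and blueClip are existing children of this timeline,
// with blueClip currently stacked above redClip.
addChild(redClip);      // re-adding an existing child moves it to the top of the stack
addChildAt(redClip, 0); // ...or use addChildAt() to send it to the bottom instead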
Any time you do specify a depth, the new object will be sandwiched between the surrounding objects, rather than overwriting whatever is in the specified depth. The display list can have no gaps in it, so everything above an insertion level is automatically moved up a notch in the list. For example, assume a file started with objects in levels 0 through 9 by virtue of adding 10 objects to the display list. Then assume you need to insert a new display object into level 5. All objects in levels 5 through 9 will automatically move up to levels 6 through 10 to accommodate the insert.

It's equally important to know how to remove objects from the display list. The process for removing objects is nearly identical to the process for adding objects to the display list. To remove a specific display object from the display list, use the removeChild() method:

removeChild(ball);

To remove a display object at a specific level, use the removeChildAt() method:

removeChildAt(0);

The following example is the reverse of the addChildAt() script discussed in the prior section. It starts by using a for loop to add 20 balls to the stage, positioning them with the same technique used in the prior script. It then uses an event listener to remove the children with each click.

1 for (var inc:uint = 0; inc < 20; inc++) {
2     var ball:MovieClip = new Ball();
3     ball.x = 100 + inc * 10;
4     ball.y = 100 + inc * 10;
5     addChild(ball);
6 }
7
8 stage.addEventListener(MouseEvent.CLICK, onClick);
9 function onClick(evt:MouseEvent):void {
10     removeChildAt(0);
11 }

This script will work correctly as long as there is something in the display list. If, after removing the last ball, you click the Stage again, there will be a warning that "the supplied index is out of bounds." This makes sense, because you are trying to remove a child from position 0 of the display list, when there is nothing in the display list at all. To avoid this problem, you can first check to see if there are any children in the display object container that you are trying to empty. Making sure that the number of children exceeds zero will prevent the aforementioned error from occurring. The following is an updated onClick() function, replacing lines 9–11 in the previous code, with a new conditional test added:

9 function onClick(evt:MouseEvent):void {
10     if (numChildren > 0) {
11         removeChildAt(0);
12     }
13 }

As discussed previously for event listeners, it's always a good idea to try to keep track of your objects and remove them from memory when you are sure you will no longer need them. Keeping track of objects is particularly relevant when discussing the display list because it's easy to remove an object from the display list and forget to remove it from RAM. When this is the case, the object will not be displayed but will still linger in memory. The following script, a simplification of the previous example, will remove a movie clip from both the display list and RAM:

1 var ball:MovieClip = new Ball();
2 ball.x = 100;
3 ball.y = 100;
4 addChild(ball);
5
6 stage.addEventListener(MouseEvent.CLICK, onClick);
7
8 function onClick(evt:MouseEvent):void {
9     this.removeChild(ball);
10     //ball removed from display list but still exists
11     trace(ball);
12     ball = null;
13     //ball now entirely removed
14     trace(ball);
15
16     stage.removeEventListener(MouseEvent.CLICK, onClick);
17 }

Lines 1 through 5 create and position the ball, then add it to the display list. Line 6 adds a mouse click listener to the Stage.
The first line of function content, line 9, removes the ball from the display list using the removeChild() method. Although it's no longer displayed, it's still around, as shown by line 11, which traces the object to the Output panel. Line 12, however, sets the object to null, removing it entirely from memory; this is again shown by tracing the object to the Output panel in line 14. As an added review of best practices, line 16 removes the event listener.

Many of the example scripts in this chapter demonstrate working with children that have been previously stored in a variable. However, you will likely need to find children in the display list with little more to go on than their position or name. Finding a child by position is consistent with adding or removing children at a specific location in the display list. Using the getChildAt() method, you can work with the first child of a container using this familiar syntax:

var dispObj:DisplayObject = getChildAt(0);

If you don't know the location of a child that you wish to manipulate, you can try to find it by name using its instance name, instead of using the display list index. Assuming a child has an instance name of circle, you can store a reference to that child using this syntax:

var dispObj:DisplayObject = getChildByName("circle");

Finally, if you need to know the location of a display object in the display list but have only its name, you can use the getChildIndex() method to accomplish your goal.

var dispObj:DisplayObject = getChildByName("circle");
var dispObjIndex:int = getChildIndex(dispObj);

In the preceding discussion, you used DisplayObject as the data type when retrieving a reference to a display object, rather than another type, like MovieClip, for example. This is because you may not know if the child is a movie clip or another type of display object. In fact, Flash may not even know the data type, such as when referencing a parent movie clip created using the Flash interface (rather than ActionScript). Without the data type information supplied in the ActionScript creation process, Flash sees only the parent Timeline as a display object container. To tell Flash that the container in question is a movie clip, you can cast it as such; that is, you can change the data type of that object to MovieClip.

For example, consider a movie clip created in the Flash authoring interface that needs to tell its parent, the main timeline, to go to frame 20. A simple line of ActionScript is all that would ordinarily be required:

parent.gotoAndStop(20);

However, since Flash doesn't know that gotoAndStop() is a legal method of the display object container (the Stage, for example, can't go to frame 20, and neither can a sprite), you will get the following error:

Call to a possibly undefined method gotoAndStop through a reference with static type flash.display:DisplayObjectContainer.

To tell Flash the method is legal for the main timeline, you need to state that the parent is of a data type that supports the method. In this case, the main timeline is a movie clip, so you can say:

MovieClip(parent).gotoAndStop(20);

This will prevent the error from occurring, and the movie clip will be able to successfully send the main timeline to frame 20.

One of the most basic ActionScript skills you need to embrace is navigating within your Flash movies. You will often use these skills to control the playback of the main Timeline or movie clips nested therein.
The first thing to learn is how to start and stop playback of the main Timeline or a movie clip, and then add an initial jump to another frame. Figure 6.11, "navigation_01.fla demonstrates simple navigation" shows navigation_01.fla, which contains four Timeline tweens of black circles. For added visual impact, the circles use the Invert blend mode to create an interesting optical illusion of rotating cylinders. You can start and stop playback at any point, as well as start and stop at a specific frame (frame 1, in this example). Initially, you'll rely on frame numbers to specify where to start and stop.

Figure 6.11. navigation_01.fla demonstrates simple navigation

You've already seen the stop() action at work in a frame script as a passive means of halting playback at the end of an animation or, perhaps, to support a menu screen or similar single frame. In the following code, look at invoking the stop() action via user input, such as clicking a button. In the first frame of the actions layer, you'll find the following code:

1 stopBtn.addEventListener(MouseEvent.CLICK, onStopClick);
2
3 function onStopClick(evt:MouseEvent):void {
4     stop();
5 }

This code does not introduce anything new, other than the aforementioned use of stop() as a method triggered by user interaction. Line 1 is an event listener added to a button named stopBtn. It uses a mouse click to call onStopClick. The effect of this setup is to give the stopBtn the functionality to stop the main movie. All playback of the main Timeline will cease when the user clicks the button.

Adding new lines to the script (line 2 and lines 7 through 9 below) will allow you to restart playback. The code structure is similar to the previous example, but invokes the play() method on the playBtn instead. Using this pair of buttons, you can start and stop playback at any time without relocating the playback head in the process.

1 stopBtn.addEventListener(MouseEvent.CLICK, onStopClick);
2 playBtn.addEventListener(MouseEvent.CLICK, onPlayClick);
3
4 function onStopClick(evt:MouseEvent):void {
5     stop();
6 }
7 function onPlayClick(evt:MouseEvent):void {
8     play();
9 }

Using stop() and play() in this fashion is useful for controlling a linear animation, much the way a controller bar might control audio or video playback. However, it's less common in the case of menus or other navigation devices because typically you must jump to a specific point in your Timeline before stopping or playing. For example, you might have generic sections that could apply to any project, such as home (start), about (info), and help. If restricted to the use of stop() and play(), you'd be forced to play through one section to get to another.

Adding again to the example script, the following additions introduce a slight variation. The buttons in the new script function in similar ways, but instead of stopping in or playing from the current frame, the new buttons go to a specified frame first. For example, if you had previously stopped playback in frame 20, triggering play() again would begin playback at frame 20. However, if you use gotoAndPlay() and specify frame 1 as a destination (shown in the script that follows), you will resume playback at frame 1 rather than at frame 20.
There are no structural differences in this code, so simply add the new listeners and functions (lines 3, 4, and 12 through 17) to your ongoing script:

1 stopBtn.addEventListener(MouseEvent.CLICK, onStopClick);
2 playBtn.addEventListener(MouseEvent.CLICK, onPlayClick);
3 gotoPlayBtn.addEventListener(MouseEvent.CLICK, onGotoPlayClick);
4 gotoStopBtn.addEventListener(MouseEvent.CLICK, onGotoStopClick);
5
6 function onStopClick(evt:MouseEvent):void {
7     stop();
8 }
9 function onPlayClick(evt:MouseEvent):void {
10     play();
11 }
12 function onGotoPlayClick(evt:MouseEvent):void {
13     gotoAndPlay(1);
14 }
15 function onGotoStopClick(evt:MouseEvent):void {
16     gotoAndStop(1);
17 }

To add a nice level of diagnostic reporting to your playback, you can add two new properties to this script. Using the trace() method to send text to the Output panel, you can reference totalFrames to display the number of frames in your movie, and reference currentFrame to tell you which frame the playback head is displaying at the time the script is executed.

trace("This movie has " + totalFrames + " frames.");
trace(currentFrame);

The companion sample file, navigator_02.fla, demonstrates the use of these properties. It uses totalFrames at the start of playback, and uses currentFrame each time a button is clicked.

There are specific advantages to using frame numbers with goto methods, including simplicity and use in numeric contexts (such as with a loop or other type of counter). However, frame numbers also have specific disadvantages. The most notable disadvantage is that edits that you make to your file subsequent to the composition of your script may result in a change to the frame sequence in your timeline.

For example, your help section may start at frame 100, but you may then insert or delete frames in a section of your timeline prior to that frame. This change may cause the help section to shift to a new frame, and your navigation script will no longer send the playback head to the help section.

One way around this problem is to use frame labels to mark the location of a specific segment of your timeline. As long as you shift content by inserting or deleting frames to all layers in your timeline, therefore maintaining sync among your layers, a frame label will move with your content. For example, if your help section, previously at frame 100, is marked with a frame label called "help," adding 10 frames to all layers in your Timeline panel will not only shift the help content, but will also shift the frame label used to identify its location. So, although you will still be navigating to the "help" frame label after the addition of frames, you will correctly navigate to frame 110.

This is a useful feature when you are relying heavily on timeline tweens for file structure or transitions, or when you think you may be adding or deleting sections in your file. In fact, frame labels free you to simply rearrange your timeline if desired. The capability to go to a specific frame label, no matter where it is, means that you don't have to arrange your file linearly, and you are free to add last-minute changes to the end of your timeline without having to remember an odd sequence of frame numbers to jump to content.

The sample file, frame_labels_01.fla, demonstrates the use of frame labels instead of frame numbers when using a goto method.
It also illustrates another important and useful concept, which is that you can use these methods to control the playback of movie clips as well as the main timeline.

Instead of controlling the playback of a linear animation, frame_labels_01.fla moves the playback head between the frames of a movie clip called pages. This is a common technique for swapping content in Flash because you can keep your main Timeline simple and just jump the movie clip from frame to frame to reveal each new screen. Figure 6.12, "The "page1" frame of frame_labels_01.fla" displays the "page1" frame of frame_labels_01.fla. The Timeline inset shows the frame labels.

Figure 6.12. The "page1" frame of frame_labels_01.fla

The initial setup of this example requires that you prevent the movie clip from playing on its own so that you can exert the desired control over its playback. There are several ways to do this. The first, and perhaps most obvious, approach is to put a stop() action in the first frame of the movie clip. You will see this technique used often. The second is to add the stop() method to a main timeline script, but to target a movie clip instead of the main timeline. To do this, precede the method with the object you wish to stop, as shown in line 1 of the following script. In this case, you're stopping the movie clip called pages. You'll look at a third method for stopping movie clips at the end of this chapter, but for now, let's focus on the simple changes this file introduces. In addition to stopping the pages movie clip in line 1, this script adds listeners to buttons one, two, and three, which cause the movie clip to change frames in lines 8, 11, and 14, respectively.

1 pages.stop();
2
3 one.addEventListener(MouseEvent.CLICK, onOneClick);
4 two.addEventListener(MouseEvent.CLICK, onTwoClick);
5 three.addEventListener(MouseEvent.CLICK, onThreeClick);
6
7 function onOneClick(evt:MouseEvent):void {
8     pages.gotoAndStop("page1");
9 }
10 function onTwoClick(evt:MouseEvent):void {
11     pages.gotoAndStop("page2");
12 }
13 function onThreeClick(evt:MouseEvent):void {
14     pages.gotoAndStop("page3");
15 }

The code is essentially the same as the ActionScript you've seen before. To test the effectiveness of using frame labels, simply add or delete frames across all layers before one of the existing frame labels. Despite changing the frame count, you will find that the navigation still works as desired.

Also new to AS3 is the ability to dynamically change the frame rate at which your file plays at runtime. The default frame rate of a Flash CS4 movie is 24 frames per second. Previously, whichever frame rate you chose was the frame rate you were stuck with for the life of your SWF. It's now possible to update the speed at which your file plays by changing the frameRate property of the stage, as demonstrated in the sample file frame_rate.fla.

This simple script increments or decrements the frame rate by 5 frames per second with each click of a button. You may also notice another simple example of error checking, in the function used by the slower button, to prevent a frame rate of zero or below. Start the file and watch it run for a second or two at the default frame rate of 24 frames per second. Then, experiment with additional frame rates to see how they change the movie clip animation.
1 info.text = stage.frameRate;
2
3 faster.addEventListener(MouseEvent.CLICK, onFasterClick);
4 slower.addEventListener(MouseEvent.CLICK, onSlowerClick);
5
6 function onFasterClick(evt:MouseEvent):void {
7     stage.frameRate += 5;
8     info.text = stage.frameRate;
9 }
10 function onSlowerClick(evt:MouseEvent):void {
11     if (stage.frameRate > 5) {
12         stage.frameRate -= 5;
13     }
14     info.text = stage.frameRate;
15 }

This frameRate property requires little explanation, but its impact should not be underestimated. Other interactive environments have long been able to vary playback speed, and this is a welcome change to ActionScript for many enthusiastic developers, especially animators. Be it for a Matrix parody or a sports game, slow mo has never been easier.

You won't add anything new to the portfolio project in this chapter, but it'll help to review the scripts you wrote in previous chapters now that you have a little more experience with AS3. To begin, you wrote a script in Chapter 3, USING SYMBOLS that animated the symbols created by the Deco tool (Figure 6.13, "Art created with the Deco tool in Chapter 3, USING SYMBOLS").

Figure 6.13. Art created with the Deco tool in Chapter 3, USING SYMBOLS

The Deco tool quickly and automatically adds movie clips to a parent movie clip container, arranging the children in circular patterns. However, you did not give each of the child ovals instance names, so you might wonder how they can be controlled with ActionScript. The answer is by using display list methods and properties. The script you entered in Chapter 3, USING SYMBOLS uses the numChildren property to determine how many ovals are in the parent movie clip, and then loops through that number, gaining access to each child using the getChildAt() method. The combination of these tools makes it possible for the script to manipulate each oval individually.

1 ovals.addEventListener(Event.ENTER_FRAME, onEnter);
2 function onEnter(evt:Event):void {
3     var numOvals:int = ovals.numChildren;
4     for (var i:int = 0; i < numOvals; i++) {
5         ovals.getChildAt(i).rotation += 10;
6     }
7     ovals.rotation = mouseX;
8 }

Line 1 adds an enter frame listener to the parent movie clip, which has an instance name of ovals, applied through the Properties panel. Lines 2 through 8 comprise the listener function, onEnter(). This function includes the mandatory event argument, typed to the Event class, and returns no value. Line 3 checks the number of children in the parent movie clip and places that integer into the numOvals variable. Lines 4 through 6 contain a for loop that loops through those children one at a time. Each time through the loop, the child at the next highest level in the display list is rotated 10 degrees. That is, the first time through the loop, the loop counter is 0. As such, the child in the ovals movie clip that is at level 0 is rotated. The next time through the loop, the ovals child at level 1 is rotated, and so on.

After the for loop fully executes, the parent movie clip (not the individual oval-shaped children) is also rotated, this time to the same value as the x position of the mouse. The entire process is repeated every time the enter frame event is fired or, by default, 24 times per second. This means that you can move the mouse left and right to control the rotation of the container movie clip. All the while, the individual ovals inside the container continue to rotate, contributing to an interesting visual effect.
In Chapter 5, ANIMATION, you added a script to control the navigation of the portfolio project. With every click of a button, this script populates a variable and sets the portfolio in motion. A corresponding script then sends the playhead to the frame chosen by the button click, and a stop() action halts the playhead on the desired screen.

1 var nextSection:String;
2
3 navigation.home.addEventListener(MouseEvent.CLICK, onNavigate);
4 navigation.gallery.addEventListener(MouseEvent.CLICK, onNavigate);
5 navigation.lab.addEventListener(MouseEvent.CLICK, onNavigate);
6 navigation.help.addEventListener(MouseEvent.CLICK, onNavigate);
7
8 function onNavigate(evt:MouseEvent):void {
9     nextSection = evt.target.name;
10     play();
11 }

Line 1 creates the needed variable and types its data as a string of text. Lines 3 through 6 add event listeners to the navigation buttons that sit within the navigation bar container. Each button calls the same function when a mouse click is detected. Lines 8 through 11 define the listener function, which expects a MouseEvent and returns no value. Line 9 parses the incoming event information to get the name of the target object that received the click, and then puts that name into the nextSection variable. The function then sets the playhead in motion.

At the end of the section, a separate frame script tells the playhead to go to the desired frame and play through the section so the transitions can complete their visual updates.

gotoAndPlay(nextSection);

Finally, after the transitions are complete, the playhead is stopped in the middle of the content frame span by another independent script:

stop();

The next chapter will introduce filters and blend modes, both of which you will apply to portfolio assets.
http://archive.oreilly.com/pub/a/flash/excerpts/flash-learning-cs4/actionscript-basics.html
HTML::String::TT - HTML string auto-escaping for Template Toolkit

    my $tt = HTML::String::TT->new(\%normal_tt_args);

or, if you're using Catalyst::View::TT:

    use HTML::String::TT; # needs to be loaded before TT to work

    __PACKAGE__->config(
        CLASS => 'HTML::String::TT',
    );

Then, in your template:

    <h1>
        [% title %] <-- this will be automatically escaped
    </h1>
    <div id="main">
        [% some_html | no_escape %] <-- this won't
    </div>
    [% html_var = '<foo>'; html_var %] <-- this won't anyway

(but note that the content key in wrappers shouldn't need this).

HTML::String::TT is a wrapper for Template Toolkit that installs the following overrides: a no_escape filter is added to mark outside data that you don't want to be escaped. The override happens to all of the plain strings in your template, so even things declared within directives such as [% html_var = '<foo>' %] will not be escaped, but any string coming from anywhere else will be. This can be a little bit annoying when you then pass it to things that don't respond well to overloaded objects, but is essential to HTML::String's policy of "always fail closed": I'd rather it throws an exception than lets a value through unescaped, and if you care about your HTML not having XSS (cross site scripting) vulnerabilities then I hope you'll agree.

We mark a number of TT internals namespaces as "don't escape when called by these", since TT has a tendency to do things like

    open FH, "< $name";

which really doesn't work if it gets converted to "&lt; $name" while you aren't looking.

Additionally, since TT often calls ref to decide e.g. if something is a string or a glob, it's important that UNIVERSAL::ref is loaded before TT is. We check to see if the latter is loaded and the former not, and warn loudly that you're probably going to get weird errors. This warning is not joking. "Probably" is optimistic. Load this module first.

The no_escape filter marks the filtered input to not be escaped, so that you can provide HTML chunks from externally and still render them within the TT code.

See HTML::String for authors.

See HTML::String for the copyright and license.
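As a quick illustration of the auto-escaping behavior, a minimal sketch (not from the original documentation) might look like the following. It assumes HTML::String::TT supports the standard Template Toolkit process() interface, which seems reasonable given that it is a TT wrapper; the template text and variable name here are made up for the example.

    use strict;
    use warnings;
    use HTML::String::TT;

    my $tt = HTML::String::TT->new;
    my $out;

    # title comes from "outside" (e.g. user input), so it gets escaped
    $tt->process(
        \'<h1>[% title %]</h1>',
        { title => '<script>alert("xss")</script>' },
        \$out,
    ) or die $tt->error;

    print $out; # the injected markup renders as &lt;script&gt;..., not as live HTML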
http://search.cpan.org/~mstrout/HTML-String-1.000006/lib/HTML/String/TT.pm
This article is based on Free Code Camp Basic Algorithm Scripting "Reverse a String"

Reversing a string is one of the most frequently asked JavaScript questions in the technical round of interviews. Interviewers may ask you to write different ways to reverse a string, or they may ask you to reverse a string without using in-built methods, or they may even ask you to reverse a string using recursion. There are potentially tens of different ways to do it, excluding the built-in reverse function, as JavaScript does not have one. Below are my three most interesting ways to solve the problem of reversing a string in JavaScript.

Algorithm Challenge

Reverse the provided string. You may need to turn the string into an array before you can reverse it. Your result must be a string.

function reverseString(str) {
    return str;
}
reverseString("hello");

Provided test cases

- reverseString("hello") should become "olleh"
- reverseString("Howdy") should become "ydwoH"
- reverseString("Greetings from Earth") should return "htraE morf sgniteerG"

1. Reverse a String With Built-In Functions

For this solution, we will use three methods: the String.prototype.split() method, the Array.prototype.reverse() method and the Array.prototype.join() method.

- The split() method splits a String object into an array of strings by separating the string into substrings.
- The reverse() method reverses an array in place. The first array element becomes the last and the last becomes the first.
- The join() method joins all elements of an array into a string.

function reverseString(str) {
    // Step 1. Use the split() method to return a new array
    var splitString = str.split("");
    // var splitString = "hello".split("");
    // ["h", "e", "l", "l", "o"]

    // Step 2. Use the reverse() method to reverse the new created array
    var reverseArray = splitString.reverse();
    // var reverseArray = ["h", "e", "l", "l", "o"].reverse();
    // ["o", "l", "l", "e", "h"]

    // Step 3. Use the join() method to join all elements of the array into a string
    var joinArray = reverseArray.join("");
    // var joinArray = ["o", "l", "l", "e", "h"].join("");
    // "olleh"

    // Step 4. Return the reversed string
    return joinArray; // "olleh"
}
reverseString("hello");

Chaining the three methods together:

function reverseString(str) {
    return str.split("").reverse().join("");
}
reverseString("hello");

2. Reverse a String With a Decrementing For Loop

function reverseString(str) {
    // Step 1. Create an empty string that will host the new created string
    var newString = "";

    // Step 2. Create the FOR loop
    /* The starting point of the loop will be (str.length - 1), which corresponds to the
       last character of the string, "o"
       As long as i is greater than or equal to 0, the loop will go on
       We decrement i after each iteration */
    for (var i = str.length - 1; i >= 0; i--) {
        newString += str[i]; // or newString = newString + str[i];
    }
    /* Here hello's length equals 5
       For each iteration: i = str.length - 1 and newString = newString + str[i]
       First iteration:  i = 5 - 1 = 4, newString = "" + "o" = "o"
       Second iteration: i = 4 - 1 = 3, newString = "o" + "l" = "ol"
       Third iteration:  i = 3 - 1 = 2, newString = "ol" + "l" = "oll"
       Fourth iteration: i = 2 - 1 = 1, newString = "oll" + "e" = "olle"
       Fifth iteration:  i = 1 - 1 = 0, newString = "olle" + "h" = "olleh"
       End of the FOR Loop */

    // Step 3.
    // Return the reversed string
    return newString; // "olleh"
}
reverseString('hello');

Without comments:

function reverseString(str) {
    var newString = "";
    for (var i = str.length - 1; i >= 0; i--) {
        newString += str[i];
    }
    return newString;
}
reverseString('hello');

3. Reverse a String With Recursion

For this solution, we will use two methods: the String.prototype.substr() method and the String.prototype.charAt() method.

- The substr() method returns the characters in a string beginning at the specified location through the specified number of characters.

"hello".substr(1); // "ello"

- The charAt() method returns the specified character from a string.

"hello".charAt(0); // "h"

The depth of the recursion is equal to the length of the String. This solution is not the best one and will be really slow if the String is very long and the stack size is of major concern.

function reverseString(str) {
    if (str === "") // This is the terminal case that will end the recursion
        return "";
    else
        return reverseString(str.substr(1)) + str.charAt(0);
    /* First part of the recursion method
       You need to remember that you won't have just one call, you'll have several nested calls

       Each call: str === "?"            reverseString(str.substr(1)) + str.charAt(0)
       1st call – reverseString("hello") will return reverseString("ello") + "h"
       2nd call – reverseString("ello")  will return reverseString("llo") + "e"
       3rd call – reverseString("llo")   will return reverseString("lo") + "l"
       4th call – reverseString("lo")    will return reverseString("o") + "l"
       5th call – reverseString("o")     will return reverseString("") + "o"

       Second part of the recursion method
       The method hits the if condition and the most highly nested call returns immediately

       5th call will return reverseString("") + "o" = "o"
       4th call will return reverseString("o") + "l" = "o" + "l"
       3rd call will return reverseString("lo") + "l" = "o" + "l" + "l"
       2nd call will return reverseString("llo") + "e" = "o" + "l" + "l" + "e"
       1st call will return reverseString("ello") + "h" = "o" + "l" + "l" + "e" + "h" */
}
reverseString("hello");

Without comments:

function reverseString(str) {
    if (str === "")
        return "";
    else
        return reverseString(str.substr(1)) + str.charAt(0);
}
reverseString("hello");

Conditional (Ternary) Operator:

function reverseString(str) {
    return (str === '') ? '' : reverseString(str.substr(1)) + str.charAt(0);
}
reverseString("hello");

Reversing a String in JavaScript is a small and simple algorithm that can be asked on a technical phone screening or a technical interview. You could take the short route in solving this problem, or take the approach by solving it with recursion or even more complex solutions. I hope you found this helpful. This is part of my "How to Solve FCC Algorithms" series of articles on the Free Code Camp Algorithm Challenges, where I propose several solutions and explain step-by-step what happens under the hood.
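To check any of the three implementations against the provided test cases, a quick console sketch like the following works (a browser console or Node.js is assumed):

console.log(reverseString("hello"));                // "olleh"
console.log(reverseString("Howdy"));                // "ydwoH"
console.log(reverseString("Greetings from Earth")); // "htraE morf sgniteerG"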
https://www.freecodecamp.org/news/how-to-reverse-a-string-in-javascript-in-3-different-ways-75e4763c68cb/
How Lino applications use setup.py

This document describes our trick for keeping the metadata about a Python package in a single place. It does not depend on Lino and we recommend it for any Python project which contains a package.

The classical layout is to store the setup information directly in the setup.py file of your project. The problem with this layout is that the setup.py file is not available at runtime. Take the version number as an example: you need it in setup.py, of course, but quite a few projects also want to display their version at runtime. And that number can change quickly and can be critical. You don't want to store it in two different places.

Is there a way to have setup information both in a central place and accessible at runtime? It is an old problem, and e.g. Single-sourcing the package version describes a series of answers.

Our solution

To solve this problem, we store the setup information in a separate file which we usually name setup_info.py and which we load ("execute") from both our setup.py and our package's main __init__.py file.

That's why the setup.py of a package "xxyyzz" contains just this:

from setuptools import setup
fn = 'xxyyzz/setup_info.py'
exec(compile(open(fn, "rb").read(), fn, 'exec'))
if __name__ == '__main__':
    setup(**SETUP_INFO)

And the __init__.py file of the main module contains this:

from os.path import join, dirname
fn = join(dirname(__file__), 'setup_info.py')
exec(compile(open(fn, "rb").read(), fn, 'exec'))
__version__ = SETUP_INFO.get('version')

Note that exec(compile(open(fn, "rb").read(), fn, 'exec')) is equivalent to execfile(fn), except that it works in both Python 2 and 3.

Usage example:

>>> from lino import SETUP_INFO
>>> print(SETUP_INFO['description'])
A framework for writing desktop-like web applications using Django and ExtJS or React

>>> from lino_xl import SETUP_INFO
>>> print(SETUP_INFO['description'])
Lino Extensions Library

Setup information

The setup() function has a lot of keyword parameters which are documented elsewhere.

install_requires

See the setuptools documentation.

tests_require

See the setuptools documentation.

long_description

This contains the description to be published on PyPI. Some projects insert this in the api/index.rst file of their docs tree. This is also used by inv bd as the source text for generating the project's README.rst.

How to suggest changes to a README file

We assume that you have installed a development environment as explained in Installing a Lino developer environment.

1. Open the setup_info.py file of your project and find the long_description. Edit its content.
2. Run inv bd in the root of the project you want to change. This will ask you:

   Overwrite /path/to/my/project/README.rst [Y,n]?

   Hit ENTER.
3. Open the README.rst file and check that it contains your changes.
4. Submit a pull request with the two modified files setup_info.py and README.rst.
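The document doesn't show the contents of setup_info.py itself. For illustration, a minimal version along these lines would satisfy both loaders above; the field values here are hypothetical, and the only hard requirement implied by the snippets is that executing the file defines a dict named SETUP_INFO:

# setup_info.py -- hypothetical minimal example
SETUP_INFO = dict(
    name='xxyyzz',
    version='0.1.0',
    description='An example package using the setup_info.py trick',
    packages=['xxyyzz'],
    install_requires=['django'],
)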
https://lino-framework.org/dev/setup.html
Drawing attractive figures is important. When making figures for yourself, as you explore a dataset, it's nice to have plots that are pleasant to look at. Visualizations are also central to communicating quantitative insights to an audience, and in that setting it's even more necessary to have figures that catch the attention and draw a viewer in.

Matplotlib is highly customizable, but it can be hard to know what settings to tweak to achieve an attractive plot. Seaborn comes with a number of customized themes and a high-level interface for controlling the look of matplotlib figures.

%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
np.random.seed(sum(map(ord, "aesthetics")))

Let's define a simple function to plot some offset sine waves, which will help us see the different stylistic parameters we can tweak.

def sinplot(flip=1):
    x = np.linspace(0, 14, 100)
    for i in range(1, 7):
        plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)

This is what the plot looks like with matplotlib defaults:

sinplot()

To switch to seaborn defaults, simply call the set() function.

sns.set()
sinplot()

(Note that in versions of seaborn prior to 0.8, set() was called on import. On later versions, it must be explicitly invoked.)

To control the style, use the axes_style() and set_style() functions. To scale the plot, use the plotting_context() and set_context() functions. In both cases, the first function returns a dictionary of parameters and the second sets the matplotlib defaults.

There are five preset seaborn themes: darkgrid, whitegrid, dark, white, and ticks. They are each suited to different applications and personal preferences. The whitegrid theme is well suited to plots with heavy data elements:

sns.set_style("whitegrid")
data = np.random.normal(size=(20, 6)) + np.arange(6) / 2
sns.boxplot(data=data);

For many plots, (especially for settings like talks, where you primarily want to use figures to provide impressions of patterns in the data), the grid is less necessary.

sns.set_style("dark")
sinplot()

sns.set_style("white")
sinplot()

Sometimes you might want to give a little extra structure to the plots, which is where ticks come in handy:

sns.set_style("ticks")
sinplot()

Both the white and ticks styles can benefit from removing the top and right axes spines, which are not needed. It's impossible to do this through the matplotlib parameters, but you can call the seaborn function despine() to remove them:

sinplot()
sns.despine()

Some plots benefit from offsetting the spines away from the data, which can also be done when calling despine(). When the ticks don't cover the whole range of the axis, the trim parameter will limit the range of the surviving spines. You can also control which spines are removed with additional arguments to despine():

sns.set_style("whitegrid")
sns.boxplot(data=data, palette="deep")
sns.despine(left=True)

Although it's easy to switch back and forth, you can also use the axes_style() function in a with statement to temporarily set plot parameters. This also allows you to make figures with differently-styled axes:

with sns.axes_style("darkgrid"):
    plt.subplot(211)
    sinplot()
plt.subplot(212)
sinplot(-1)

If you want to customize the seaborn styles, you can pass a dictionary of parameters to the rc argument of axes_style() and set_style().

A separate set of parameters control the scale of plot elements, which should let you use the same code to make plots that are suited for use in settings where larger or smaller plots are appropriate.

First let's reset the default parameters by calling set():

sns.set()

The four preset contexts, in order of relative size, are paper, notebook, talk, and poster.
The notebook style is the default, and was used in the plots above.

sns.set_context("paper")
sinplot()

sns.set_context("talk")
sinplot()

sns.set_context("poster")
sinplot()

Most of what you now know about the style functions should transfer to the context functions. You can call set_context() with one of these names to set the parameters, and you can override the parameters by providing a dictionary of parameter values. You can also independently scale the size of the font elements when changing the context. (This option is also available through the top-level set() function.)

sns.set_context("notebook", font_scale=1.5, rc={"lines.linewidth": 2.5})
sinplot()

Similarly (although it might be less useful), you can temporarily control the scale of figures nested under a with statement; a short sketch follows below.

Both the style and the context can be quickly configured with the set() function. This function also sets the default color palette, but that will be covered in more detail in the next section of the tutorial.
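The tutorial mentions two options above without showing code: offsetting spines via despine() and scoping a context with a with statement. A short sketch of both, reusing the sinplot() helper defined earlier (the offset value of 10 is arbitrary, chosen just for illustration):

# Offset and trim the spines of a ticks-style plot
sns.set_style("ticks")
sinplot()
sns.despine(offset=10, trim=True)  # move spines 10 points away from the axes

# Temporarily switch to the smaller "paper" context for one figure only
with sns.plotting_context("paper"):
    sinplot()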
http://seaborn.pydata.org/tutorial/aesthetics.html
Tutorial - Getting Started with the Innovation Station

Contents
- What do you need?
- What is an Innovation Station?
- Why the Desktop NanoBoard?
- Capturing the FPGA Design
- Creating the FPGA Project
- Adding the Schematic and OpenBus Source Documents
- Defining the OpenBus System
- Placing OpenBus Components
- Connecting the OpenBus Components
- Interconnect Components
- Completing the OpenBus System
- Configuring OpenBus Components
- Configuring the Port I/O Component
- Configuring the Terminal Instrument
- Configuring the Audio Streaming Controller (Audio CODEC)
- Configuring the SPI (Audio CODEC Control)
- Configuring the Memory Controller
- Configuring the Processor
- Managing the Memory Map
- Configuring the Memory Using Interconnect Components
- Automating Memory Configuration from within the Processor
- Managing the Lower Level Signals
- Using the OpenBus Signal Manager
- Specifying Interrupts in the OpenBus Signal Manager
- Linking the OpenBus Document to its Parent Schematic
- Creating a Sheet Symbol from the OpenBus Document
- Placing the Final Components in Schematic
- Configuring the Digital IO Instrument
- Completing the Schematic Wiring
- Configuring projects to run on the Desktop NanoBoard
- Auto-configuring projects running on the Desktop NanoBoard
- Creating User Constraints
- Building the FPGA Design
- Developing the Embedded Code
- Linking an Embedded Project to its Target Processor
- Adding the Source Code to the Embedded Project
- Building the Software Platform
- Adding main.C
- Writing some C Source Code
- Developing the Complete Application
- Considering your Deployment Options
- Level 1: Development of pure 'Device Intelligence'
- Level 2: OTS Hardware Platform, OTS Enclosure
- Level 3: OTS Hardware Platform, Custom Enclosure
- Level 4: Mixture of OTS and Custom Hardware Platform, OTS or Custom Enclosure
- Level 5: Custom Hardware Platform, OTS or Custom Enclosure
- Transferring the Design to a Deployment Platform

The term Innovation Station represents the powerful combination of the Altium Designer software and the Desktop NanoBoard reconfigurable hardware platform. This combination gives you all of the tools and technology necessary to capture, implement, test and debug your embedded / FPGA designs, in real-time. With Altium's Innovation Station, the low level detail is managed for you, leaving you free to focus on device intelligence and functionality - the source of true and sustainable product differentiation.

In this tutorial, we will first define the Innovation Station, then implement a simple processor-based FPGA design, getting it running on a physical FPGA device plugged into the Desktop NanoBoard, showing just how these two component systems work in concert with one another. During the course of this tutorial, you will gain knowledge of the basics of FPGA design the Altium way, introducing you to:

- An overview of the Desktop NanoBoard
- FPGA project creation within Altium Designer
- Use of design hierarchy within an FPGA project
- Implementing an OpenBus-based FPGA design, including sourcing and placing components, connecting them together, and interfacing them to peripherals on the Desktop NanoBoard.
- Creating and linking an embedded project to a soft processor on the target FPGA
- Targeting a design to a daughter board FPGA on the Desktop NanoBoard using the auto-configuration feature
- Processing of a design - compiling, synthesizing and building the design to obtain the programming file used to program the target device
- Using Virtual Instrumentation to control your system

For the example design featured in this tutorial, we will create a basic, reverb-like audio effect, in software, executing on a processor on the target FPGA device. Reverb, like any echo effect, repeats the incoming audio signal; however, the echo is decayed to produce an effect that sounds closer to audio echoing in a large area. The decayed signal is then fed back into the input and summed with the current incoming audio, producing a fuller sound that gives us audible clues as to the size of an area. Our focus for this tutorial is to create the delay and decay components of the reverb effect. A more complete reverb might include options to control the delay and dampening for specific frequency bands.

Overview of the audio system.

The source files can be found in the \Examples\Tutorials\Audio Effects tutorial folder of your Altium Designer installation. Refer to this example at any time to get further insight, or to skip some of the steps.

What do you need?

In order to complete this tutorial, you will need:

- Altium Designer Summer 08 (or later) installed
- A Desktop NanoBoard with PB01 peripheral board and DB30 Spartan3 daughter board (or similar) installed
- An audio source (could be a PC headphone output, MP3 player, etc)
- The loopback audio cable included with your Desktop NanoBoard (or similar cable to connect your audio source to the audio input on the Desktop NanoBoard)
- Vendor build tools. The free version of Xilinx's ISE (8.2.03i or later) will be sufficient if you are using the DB30 daughter board.

What is an Innovation Station?

The term Innovation Station represents the combination of Altium's Desktop NanoBoard and the award-winning Altium Designer software. As a concept, the Innovation Station was conceived to provide the full range of tools required to design new and innovative electronic products in ways never before possible; truly unlocking the creative potential of every engineer.

The hardware implementation platform, the Desktop NanoBoard, has been specifically designed to work with Altium Designer and makes the process of developing and testing complete systems on real hardware faster and easier than ever before. Included on the Desktop NanoBoard are the complete range of physical-layer peripherals required to exercise the array of FPGA-based soft IP cores included with Altium Designer. Packaged on removable peripheral boards, these peripherals can be swapped at any time, and user-designed systems can even be substituted, connecting into the NanoBoard subsystem via one of the 3 high-density peripheral board connectors available.

Altium Designer, the software workhorse that comprises the second half of this powerful pairing, is the first and only fully unified electronics design software tool, spanning Schematic Capture, PCB Layout, FPGA Development, Embedded Software Development, and even CAM.
The fundamental difference between Altium Designer and other software tools continues to be Altium's holistic approach to design, looking at electronics design not as a series of individual tasks to be addressed in isolation, but rather as a singular task comprised of many component parts, each of which needs to feed off of, and work in concert with, the others.

Altium Designer includes a complete range of reusable IP, packaged as a part of your Innovation Station. This includes soft IP cores for use in FPGA development, the low level software drivers to support the IP from the processor side, and complete schematics of each of the subsystems, to make implementing your design in the target system a snap.

Why the Desktop NanoBoard?

The Desktop NanoBoard is a completely reconfigurable hardware development platform that includes a wide range of hardware peripherals, intended to make it easy to leverage the range of soft IP cores supplied with Altium Designer. The peripherals are organized on a series of removable peripheral boards, each connected to the main board by way of a 100-pin connector. By pairing the hardware with Altium Designer and supplying the complete schematics for the Desktop NanoBoard, Altium has made it possible to leverage the NanoBoard design in your own design. So when you select an IP core from the libraries supplied with Altium Designer, you know that not only do you have access to that piece of IP, but you also have a completed physical system to reference in your target board.

Furthermore, each of the NanoBoard subsystems has been captured using Device Sheets, which are reusable documents that can be instantiated in your target system using hierarchical symbols. This means that as quickly as you can add and wire in a sheet symbol into your hierarchical schematics, you can include the physical layer components used to create the Desktop NanoBoard in your own board.

Capturing the FPGA Design

The first thing we need to do is capture our FPGA design within the Altium Designer environment. Altium Designer provides a range of capture modes including OpenBus, Schematic, VHDL, Verilog, and even C. Deciding on the best method depends on the project and the level of control and customization you are looking for. All of Altium's IP cores are supplied both as OpenBus symbols and schematic components, so if you intend to use only the IP supplied with Altium Designer, then either of these two capture methods is preferred. Of the two, schematic and OpenBus, OpenBus is the fastest to implement, eliminating virtually all of the complicated wiring required in schematic. Because a top level schematic is required anyway, if there is a need to add components not included in the OpenBus palette, these components can be added at the top level and wired into the sheet symbol as required.

The fact that these capture modes, OpenBus and schematic, can be mixed and matched makes it possible to develop a wide range of complex systems with the right balance of ease and low-level detail. For example, a design created with Altium IP using OpenBus can be extended using VHDL or Verilog by writing your source code and linking the two documents together at another layer in the project hierarchy.

No matter what method you choose, every FPGA project must contain a top level schematic sheet. This top level sheet will include ports, or port plug-in components (from one of Altium's port plug-in libraries, discussed later), as these ports / components represent the pins of the target FPGA device.
The top level schematic sheet can also include soft-core IP components and/or the interfaces to lower level source documents, represented by sheet symbols. FPGA designs must be hierarchical, and the same rules of hierarchy apply to FPGA design as to any other design in Altium Designer (for complete details on hierarchical design in Altium Designer, refer to Connectivity and Multi-Sheet Design).

Since our design uses only Altium IP, the simplest capture method will be OpenBus. This will involve adding the required components to an OpenBus document and connecting them accordingly. However, before we can address the OpenBus document and its contents, we must create a project. The following sections take you through the steps required to capture our design.

Creating the FPGA Project

The basis for every design created in the Altium Designer environment is a project file. For an FPGA design, we need to create a new FPGA project (*.PrjFpg). The project document itself is an ASCII file that stores the project information, such as the documents that belong to the project, output settings, compilation settings, error checking settings, and so on. Let's go ahead and create the FPGA project.

- Create a new FPGA project using the File»New»Project»FPGA Project command.
- Right-click on the name for this new project (FPGA_Project1.PrjFpg) in the Projects panel and choose the Save Project command. Save the project at a location of your choice with the name Audio_Effects.PrjFpg, and in a new folder called Audio Effects Tutorial.

Note that spaces and/or dashes must not be used in FPGA project or document filenames. Doing so will result in synthesis errors during design processing. Underscores (_) can be used instead to provide readability.

Adding the Schematic and OpenBus Source Documents

The next step is to add the source documents required by our FPGA design. Though the design will be captured in OpenBus, recall that every FPGA project requires a top level schematic document. Thus, we will be adding both a new schematic and a new OpenBus document to our project.

- Add a new schematic document by right-clicking on the FPGA project entry in the Projects panel and choosing the Add New to Project»Schematic command. A blank schematic sheet will open as the active document in the main design window.
- Save this document (File»Save) with the name Effects_Sch.SchDoc, in the same folder as the parent project.
- Add a new OpenBus document by right-clicking on the FPGA project entry in the Projects panel and choosing the Add New to Project»OpenBus System Document command. A blank OpenBus document will open as the active document in the main design window.
- Save this document with the name Effects_OB.OpenBus, in the same folder as the parent project.
- The project itself will appear as modified in the Projects panel. Save the project too (right-click on its name and choose Save Project).

Defining the OpenBus System

OpenBus is a new way of doing system-level FPGA design. Offering a much lighter interface than schematic based implementations, OpenBus is hardly lightweight in its capabilities. By automatically taking care of much of the low-level detail, OpenBus lets you focus on the high-level system and interconnection of major components. You'll find all of the components you need in the OpenBus Palette. You can display the Palette by clicking on the OpenBus panel control in the lower right portion of the main editor and then selecting OpenBus Palette from the popup menu.
Click the OpenBus button at the bottom right of the workspace to open the Palette.

Placing OpenBus Components

Now that we have our blank OpenBus canvas, it's time to add in the required components that comprise our design circuitry. The OpenBus Palette panel contains all OpenBus components that can be used in an OpenBus document. They have been categorized in the palette into groups of Connectors, Processors, Memories and Peripherals. The table below identifies the components required by our design.

Required OpenBus components.

To place an OpenBus component onto an OpenBus document:

- Select the OpenBus component that you want to place by left-clicking on its icon in the OpenBus Palette panel.
- The component will be locked to the mouse cursor. At this point you can use the Spacebar to rotate the component, or the X or Y keys to flip the component along the X or Y axis respectively.
- Move the mouse to where you want the component, and left-click to place it.
- Continue placing the components listed in the table, as per the OpenBus diagram shown in the figure below.

OpenBus components required by the design.

To edit the names of the OpenBus components that you have placed:

- Left-click once on the text associated with the OpenBus component that you want to rename. This selects the text.
- Click a second time on the text, or press the F2 key, to enter the edit text mode.
- Edit the names of the components to match the names shown in the figure above.
- Press the Enter key, or click somewhere else in the editor window, to leave the editing mode and accept your changes.
- Once you have placed and named all of the OpenBus components, save your work by selecting File»Save All from the main menu.

Connecting the OpenBus Components

In order to control the flow of data between the components on your OpenBus document, you will need to place connection links between them. These links indicate bus connections between master ports and slave ports. The arrow on the connection link indicates the direction of control.

To place a connection link between a master and slave port:

- Select Place»Link OpenBus Ports or click on the Link OpenBus Ports icon in the OpenBus toolbar.
- Click the master port that you want to create the link from.
- Click the slave port that you want to connect to.
- Repeat steps 2 and 3 for any additional links that you wish to create. To exit the link placement mode, press the Esc key or right-click the mouse.
- To remove a connection link between a master and slave port, left-click to select the link and press Delete.
- To change the position of the ports on an OpenBus component, left-click and drag the port, repositioning it around the body of the component.

Interconnect Components

Since connection links can only be made between a single master and a single slave, OpenBus Interconnect components are required to allow you to connect multiple components together. OpenBus Interconnect components have a single slave port and one or more master ports. This allows a master device (connected to the OpenBus Interconnect's slave port) to control multiple other slave devices (connected to the OpenBus Interconnect's master ports).

To add an additional port to an OpenBus Interconnect:

- Select Place»Add OpenBus Port or click on the Add OpenBus Port icon in the OpenBus toolbar.
- Hover the mouse over the outer edge of the component you wish to add a new port to. Ports are added at 4 points around the component: the top, bottom, left, and right.
  Clicking on either side of the component will add the port to the respective side.
- Click to add the new port.
- Repeat steps 2 and 3 for any additional ports that you wish to create. To exit port placement mode, press the Esc key or right-click the mouse.

To remove a port from an OpenBus Interconnect, left-click to select the port and press Delete. To change the position of ports on an OpenBus Interconnect, simply left-click and drag the port, repositioning it around the body of the component.

Completing the OpenBus System

- Complete the creation of the OpenBus system using the techniques outlined above. The completed system can be seen in the figure below.
- Save your work.

Completed OpenBus connectivity.

Configuring OpenBus Components

The vast majority of components found in the OpenBus Palette translate directly to similarly named components found within the FPGA Peripherals and FPGA Processor libraries used for schematic-based FPGA design. In the same way that several of the components in the schematic-based libraries are configurable, so too are their OpenBus counterparts. In this tutorial, we are using several configurable components that will need to be adjusted.

Configuring the Port I/O Component

The Port I/O component, GPIO, will be used to create a simple interface to the processor, for use by an instrument we will connect later in this tutorial. For now, we need to configure the port I/O and specify the number of ports, their direction, and their width.

To configure the Port I/O component for this tutorial:

- Double-click the GPIO component to open the Configure OpenBus Port I/O dialog.
- In the dialog:
  - Set the Component Designator to GPIO
  - Set the Kind to Input/Output
  - Set the Port Count to 2
  - Set the Bus Width to 8
  - Set the Interface Type to Signal Harness
- Click OK to save your changes.

Configuring the Terminal Instrument

The terminal instrument provides a simple way of outputting data in an FPGA design.

- Double-click the TERM component to open the Configure OpenBus Terminal Instrument dialog.
- In the dialog:
  - Set the Component Designator to TERM
  - Set the Interface Type to Signal Harness
- Click OK to save your changes.

Configuring the Audio Streaming Controller (Audio CODEC)

As part of its high-quality audio sub-system, the NanoBoard includes a CS4270 24-bit, 192kHz stereo audio CODEC (from Cirrus Logic). The CODEC caters for both analog and digital audio I/O. The Audio Streaming Controller transfers audio data over the inter-IC sound (I2S) bus.

- Double-click the AUDIO component to open the Configure OpenBus Audio Streaming Controller dialog.
- In the dialog:
  - Set the I2S Channels to Receive and Transmit
  - Set the I2S Hardware Buffer to Include Hardware Buffer, 1K samples
  - Set the Component Designator to AUDIO
  - Set the Interface Type to Signal Harness
- Click OK to save your changes.

Configuring the SPI (Audio CODEC Control)

Internal registers for the audio codec, which are used to determine the required functionality of the device, are accessed over the SPI bus.

- Double-click the SPI component to open the Configure OpenBus SPI dialog.
- In the dialog:
  - Set the Component Designator to SPI
  - Set the Interface Type to Signal Harness
- Click OK to save your changes.

Configuring the Memory Controller

The SRAM Controller provides a simple, generic interface to Asynchronous Static RAM. As such, it requires that we configure the component for the amount and type of external memory we intend to use.
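A quick size check shows why this external memory matters for our design: the audio history buffer declared later in main.C (process_buf) holds 65,536 16-bit samples, or 128 KB, which is four times the 32 KB of internal processor memory we will configure for the TSK3000A in the next section. A buffer of that size can only live in the external SRAM managed by this controller, and the 1MB (256K x 32-bit) array selected in the steps below leaves comfortable headroom for it.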
- Right-click the SRAM component and select Configure SRAM (SRAM Controller) from the right-click menu.
- In the Configure (Memory Controller) dialog box:
  - Set the Memory Type to Asynchronous SRAM
  - Set the Size of Static RAM array to 1MB (256K x 32-bit)
  - Set the Memory Layout to 2 x 16-bit Wide Devices
- Click OK to save your changes.

Configuring the Processor

The TSK3000A is a configurable soft processor core, and as such requires that you specify a few key parameters prior to using it. These include the size of the internal processor memory, whether or not to use a hardware multiply/divide unit, whether to include an on-chip debugging system, and how to manage breakpoints on reset.

To configure the processor for this tutorial:

- Right-click the MCU component and select Configure MCU (TSK3000A) from the right-click menu.
- In the Configure (32-bit Processors) dialog box:
  - Set the Internal Processor Memory to 32 K Bytes (8K x 32-Bit Words)
  - Set the Multiply/Divide Unit (MDU) to Hardware MDU
  - Set the On-Chip Debug System to Include JTAG-Based On-Chip Debug System
  - Set the final option to Disable Breakpoints on Hard Reset
- Click OK to save the changes.

You may receive an error message indicating that there are "2 Nexus JTAG Parts Found but no NEXUS_JTAG_CONNECTOR was found on the top sheet". This message can be safely ignored, as we have not yet completed the top level schematic.

Managing the Memory Map

One of the key benefits of developing our system with OpenBus is the level of automation that OpenBus brings to the management of the system memory map. Ultimately, all peripherals and memory devices sit within a 32-bit memory space that spans 4GBytes. To make the management of this memory space easier, OpenBus intelligently interprets the design and automatically allocates memory spaces for each of the peripherals and memory controllers. In most situations these memory allocations will be sufficient, but in some rare cases you may wish to manually edit the memory allocations yourself. You can still do this with OpenBus.

Configuring the Memory Using Interconnect Components

Under normal circumstances, the interconnect component will probe the settings and memory requirements of each of its connected devices and will update the memory map automatically. You can see (and edit) the memory mapping of the interconnect component from the Configure OpenBus Interconnect dialog, which can be accessed from the component's right-click menu.

OpenBus interconnect configuration.

The interconnect configuration information is automatically propagated to the memory map of the processor. Do not be concerned if your Configure OpenBus Interconnect dialog does not appear exactly as it does in the figure above. This memory map is handled automatically and may appear slightly different.

Automating Memory Configuration from within the Processor

In addition to the memory that is managed by each of the interconnect components within the OpenBus document, you can also centrally manage memory from the processor's memory and peripheral configuration dialog boxes. These can be accessed by right-clicking the processor. The Configure Processor Memory dialog provides a pictorial representation of where the peripherals and memory controllers will be positioned within the processor's memory. Checking the hardware.h (C Header File) option will cause a hardware.h header file to be created when the project is compiled.
This file contains #defines that specify the addresses and sizes of any peripherals and the details of any interrupts used. This ensures that all OpenBus memory settings are incorporated and synchronized in the design each time it is compiled. By ensuring this box is checked, any changes you make to the OpenBus document will be propagated through to the embedded project we will create later in this tutorial. Enabling the hardware.h option automates how the peripherals are mapped into the processor memory space.

To set the project to automatically import the settings from the OpenBus document:

- Right-click the MCU component and select Configure Processor Peripheral from the right-click menu.
- In the Configure Peripherals dialog box, check the hardware.h (C Header File) checkbox.
- Click OK to exit.

Managing the Lower Level Signals

Though OpenBus makes strides in simplifying the complex interconnections between components that use a standard bus interface, it is sometimes necessary to have a view into, and control over, the lower level signals in the design. This is useful, if for nothing else, to understand how the underlying signals in the OpenBus document get exposed to the top level schematic. All of the signals used for the bus interconnects (links) are visible from the OpenBus Signal Manager.

Using the OpenBus Signal Manager

The OpenBus Signal Manager (shown in the figure below, accessed from the Tools menu) allows you to take finer control over which signals are to be exposed externally to the OpenBus document. The Clocks and Resets tabs will rarely need your attention, as the default settings for these are usually adequate. The Interrupts tab will need your attention if you are planning on using any peripherals as interrupt sources. From this dialog you can allocate interrupts to the available interrupt channels on the main processor. Since we will be using several interrupts in this design, we will need to specify those connections here.

The list of signals in the External connection summary cannot be edited directly; however, this dialog serves as an excellent reference. All of the signals listed in this dialog will be exported to the parent schematic. The signals are grouped according to the component that controls them, making it easier to identify and locate the source of the different signals. For instance, when you link an OpenBus document to a parent schematic, it may not be immediately apparent where certain signals on the sheet symbol have come from. It is in this dialog box that you will find your answers.

Specifying Interrupts in the OpenBus Signal Manager

The TSK3000A supports a total of 32 interrupt inputs, each of which can be configured to be either level sensitive (active High) or edge triggered (active on a rising edge).

To configure interrupts for the design:

- Select Tools»OpenBus Signal Manager.
- From the OpenBus Signal Manager dialog, select the Interrupts tab.
- Locate the Audio Streaming Controller (AUDIO) in the list of OpenBus peripherals at the top of the dialog and click once in the Interrupt column to the right of the signal INT_O to expose the drop down.
- Assign this signal to the interrupt pin INT_I1 by selecting this item in the list of available interrupts. Notice that the Kind and Polarity columns are populated automatically. The OpenBus system is aware of the interrupt requirements of each of the OpenBus peripherals, so there is no reason to manually configure the pin kind or polarity.
- Locate SPI (SPI) in the list of OpenBus peripherals at the top of the dialog and assign the signal INT_O to the interrupt pin INT_I2.
- Locate Terminal Instrument (TERM) in the list of OpenBus peripherals at the top of the dialog and assign the signal INT_O0 to the interrupt pin INT_I3.
- Finally, assign the terminal's INT_O1 signal to the interrupt pin INT_I4. The table of interrupts should now appear as it does in the figure below.
- Click OK to exit.

Specifying interrupts from the OpenBus Signal Manager.

Linking the OpenBus Document to its Parent Schematic

As mentioned earlier in this tutorial, the top level document in an FPGA project needs to be a schematic. Now that we have created an OpenBus document, we must link that document back to the top level schematic sheet that we created previously.

Creating a Sheet Symbol from the OpenBus Document

To link the OpenBus document to a parent schematic, you need to create a sheet symbol from the OpenBus document and place it on the parent schematic.

- Open Effects_Sch.SchDoc.
- Select Design»Create Sheet Symbol From Sheet or HDL.
- When the Choose Document to Place dialog box appears, select the Effects_OB.OpenBus document and click OK.
- A sheet symbol will be attached to the cursor. Position it where you want to place it on the schematic page and click once to commit the placement.
- Resize the sheet symbol and reposition the sheet entries with inputs and bidirectional signals on the left and outputs on the right, as shown in the figure below.

Organize the sheet entries into logical positions.

Placing the Final Components in Schematic

To complete the placement process and finalize the top level schematic, you'll need to first place a number of components. The relevant integrated libraries in which these components can be found are located in the \Libraries\FPGA folder of your Altium Designer installation. These libraries are installed in Altium Designer by default, so they will all be available in the Libraries panel. Position the components approximately as shown in the schematic below.

- From the FPGA NB2DSK01 Port-Plugin.IntLib library:
  - CLOCK_BOARD
  - TEST_BUTTON
- From the FPGA PB01 Port-Plugin.IntLib library:
  - AUDIO_CODEC
  - AUDIO_CODEC_CTRL
- From the FPGA DB Common Port-Plugin.IntLib library:
  - SRAM_DAUGHTER0
  - SRAM_DAUGHTER1
- From the FPGA Instruments.IntLib library:
  - DIGITAL_IO
- From the FPGA Generic.IntLib library:
  - OR2N1S
  - FPGA_STARTUP8

Position the components approximately as shown.

The Audio Codec components and wiring for the NB2 (left) and NB3000 (right).

Configuring the Digital IO Instrument

With the components placed, the next step is to configure the Digital IO instrument. This instrument is a configurable device that provides an efficient and uncomplicated means by which to monitor or generate digital signals in your design. Once programmed, interaction with this device is done live from within the Altium Designer software, allowing you to read and manipulate FPGA signal values as they execute on the FPGA, in-system and in real time. In this design, the Digital IO instrument will be used to mimic the user interface to the system. Instead of building a complete implementation platform, we will use the Digital IO instrument to trigger events in the processor, ensuring we can continue on with development though we don't yet have a PCB prototype. For details on the Digital IO instrument, refer to the article Configurable Digital IO Module.
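To preview where this instrument fits in the finished system: the values you set on the instrument's output channels (the slider AOUT[7..0] and the bit buttons BOUT[7..0]) will arrive at the processor through the two 8-bit ports of the GPIO peripheral we configured earlier, and the application will simply poll them. Below is a minimal sketch of that polling, using the ioport driver calls that appear in the complete listing near the end of this tutorial; the device name DRV_IOPORT_1 is the one generated for this design, so treat the sketch as illustrative rather than definitive.

    #include <stdint.h>
    #include <drv_ioport.h>
    #include "devices.h"

    #define PORT_A 0   /* slider  AOUT[7..0] -> delay setting */
    #define PORT_B 1   /* buttons BOUT[7..0] -> effect on/off */

    void poll_instrument(void)
    {
        ioport_t *port = ioport_open(DRV_IOPORT_1);        /* open the GPIO peripheral */
        uint8_t effect = ioport_get_value(port, PORT_B);   /* read the button bits     */
        uint8_t delay  = ioport_get_value(port, PORT_A);   /* read the slider position */
        /* ... act on effect and delay, as main.C will do ... */
    }

This read/write pattern is exactly what the final application relies on: read a port to sample what the instrument is driving in, write a port to drive signals back out to the instrument's displays.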
To configure the Configurable Digital IO instrument:

- Right-click the DIGITAL_IO component and select Configure U? (DIGITAL IO) to launch the Digital I/O Configuration dialog.
- Press the Add button to the right of the Input Signals section at the top of the dialog to add a second group of signals named BIN[7..0].
- Click once in the Style column to the right of the signal AIN[7..0] to expose the arrow, and select the Bar style.
- Click once in the Color column to the right of the signal AIN[7..0] to expose the arrow, and select the color Orange.
- Press the Add button to the right of the Output Signals section at the bottom of the dialog to add a second group of signals named BOUT[7..0].
- Click once in the Style column to the right of the signal AOUT[7..0] to expose the arrow, and select the Slider style.
- Click once in the Style column to the right of the signal BOUT[7..0] to expose the arrow, and select the LED Digits style. The dialog should now appear as it does in the figure below.
- Click OK.

Configuring the Digital IO instrument.

Completing the Schematic Wiring

With the components configured, the next step is to complete the final placement and wiring of the top level schematic.

To complete the wiring of the design:

- A completed version of the Effects_Sch.SchDoc schematic is shown in the figure below. Use this as a guide to position the port-plugin components around the sheet symbol, then reorder the sheet entries so they will line up nicely with them.
- Using the figure below as a guide, complete the wiring of the components in your design.
- Once you have completed wiring up the schematic, select Tools»Annotate Schematics Quietly to annotate the design, giving each component a unique designator.
- Compile the design by selecting Project»Compile FPGA Project Audio_Effects.PrjFpg. Fix any compilation or wiring errors as necessary.
- Save your work.

Finalizing the schematic wiring.

Configuring projects to run on the Desktop NanoBoard

At this point, we have completed the bulk of the FPGA design, but there is one additional step required before we can run it on the Desktop NanoBoard. Constraining an FPGA design is the process of defining the specific FPGA pins that you want each of the signals in your design to appear on. This is an important step, as it ensures that the FPGA design is able to interact with NanoBoard resources that have been hardwired to the FPGA daughter board. When defining constraints, it is possible to hardcode them into the top-level schematic sheet, but this is not advisable, because it binds the design to a specific device and limits your ability to retarget a different FPGA should the need arise. Best practice is to store constraint information in a separate location from the schematic. Altium Designer implements this approach using a set of pre-built and user-definable constraint files which can be added to the FPGA project.

Auto-configuring projects running on the Desktop NanoBoard

Auto configure the FPGA project.

In order to make the process of targeting your design to the Desktop NanoBoard easier, Altium Designer includes a handy auto-configuration feature. By utilizing some smarts that have been built into the NanoBoard's firmware, Altium Designer is able to probe the Desktop NanoBoard and determine exactly what daughter and peripheral boards are connected. A set of pre-defined constraint files will then be loaded and grouped together into a configuration that targets your specific hardware setup.
To auto-configure your FPGA design to run on a Desktop NanoBoard:

- Make sure your Desktop NanoBoard is connected to your PC and powered on.
- Select View»Devices or click on the Devices View icon in the toolbar.
- In the Devices view, ensure that the Live checkbox is checked. You should see a picture of the Desktop NanoBoard in the upper region of the display.
- Right-click the Desktop NanoBoard icon and select Configure FPGA Project»Audio_Effects.PrjFpg.
- Altium Designer will take just a few moments to probe the Desktop NanoBoard and create a new configuration. Click OK to accept the new configuration.

You may notice that a new Settings folder has been added to the project, as shown in the figure below. In this folder you will find a Constraint Files folder, with all of the newly added constraint files.

Note the Settings folder that has been added to the project.

Several of the constraint files will have a 'shortcut' symbol. Altium Designer uses this notation to indicate files which are not stored in the main project folder. These particular files are all pre-defined constraint files that are shipped with Altium Designer. They are specific to the various peripheral and daughter boards that they represent and should NOT be edited, as changes made to these files will affect all other projects that you build for the Desktop NanoBoard. The constraint file that has been highlighted in the figure above was automatically created by the auto-configure process and is stored with the project. This file defines where the peripheral boards are located on the Desktop NanoBoard.

Creating User Constraints

The auto-configuration process deals with the mapping of ports defined on the top-level FPGA schematic to their target FPGA pins. There are, however, additional constraints (such as the clock frequency) that are important for the design but which cannot be handled automatically. In order to capture this information, it is best to create another constraint file that is reserved for this information and add it to the configuration.

To create a new user constraint file and add it to the configuration:

- Right-click the Audio_Effects.PrjFpg project in the Projects panel and select Add New to Project»Constraint File.
- Select File»Save As to save the file. Give it a meaningful name such as MyConstraints.Constraint and click OK.
- Go to Project»Configuration Manager, or right-click the project in the Projects panel and select Configuration Manager; the Configuration Manager dialog will open.
- Locate MyConstraints.Constraint in the Constraint Files column and check the box in the Configurations column to add the constraint file to the existing configuration.
- Click OK to close the Configuration Manager and save your changes.

Adding a new constraint file to the configuration.

To add a clock constraint to the CLK_BRD signal:

- Open MyConstraints.Constraint.
- Select Design»Add Modify Constraint»Port.
- In the Add/Modify Port Constraint dialog:
  - Set the Target to CLK_BRD
  - Set the Constraint Kind to FPGA_CLOCK_FREQUENCY
  - Set the Constraint Value to 50MHz
- Click OK to close the Add/Modify Port Constraint dialog.
- Observe that a new constraint record has been added to MyConstraints.Constraint, as shown below. Save your work.

Building the FPGA Design

Once the FPGA design has been defined along with its constraints, you are ready to build it.
Building an FPGA design is the process of compiling and synthesizing your entire FPGA design into a configuration bit file that can be downloaded and run on the target FPGA device. Altium Designer standardizes the way you build an FPGA design so that it is vendor independent. You'll recall that a copy of the vendor tools for the specific device you are targeting was listed in the What do you need? section of this tutorial. Altium Designer needs these vendor tools in order to place and route the design, but the interaction with these back end tools will be largely transparent to the user.

To build an FPGA design:

- Make sure your Desktop NanoBoard is connected to your PC and powered up.
- Select View»Devices View or click on the Devices View icon in the toolbar.
- Ensure the Live checkbox is checked. You should see a picture of the Desktop NanoBoard in the upper region of the display and an icon of the Spartan 3 FPGA in the middle region.
- In the drop down list just below the Spartan 3 icon, be sure that the Audio_Effects / NB2DSK01_08_DB30_04 project / configuration pair is selected.
- Locate the Compile, Synthesize, Build, and Program FPGA buttons running left to right just below the Desktop NanoBoard icon. As this is the first time you have built your design, the colored indicators on each of the buttons will appear RED. Click once on the words Program FPGA to begin the build process. As the build process progresses, the colored indicator for each stage will turn yellow while it is processing and then green when completed successfully. The process of building the design may take several minutes to complete. You can observe the progress of the build from the Messages and Output panels, which can be accessed from the System panel tab in the lower right section of the main workspace.

FPGA Build process in the Devices View.

If any errors occur, you will need to rectify them before you can proceed. Try to locate the source of the error by retracing your steps through the instructions of the tutorial.

- A summary dialog will be displayed once the design has been built and downloaded successfully. Click OK to close this dialog.
- Once the FPGA design has been downloaded to the Desktop NanoBoard, you should notice that the status of the TSK3000A processor has changed from Missing to Running. We can now begin work on the embedded code that this processor will run.

Developing the Embedded Code

Altium Designer uses an Embedded Project as the container for all of the source code intended to execute on a given target.

To create a new embedded project:

- Select File»New»Project»Embedded Project from the menus, or click Blank Project (Embedded) in the New section of the Files panel.
- The Projects panel will display a new embedded project with the default name Embedded_Project1.PrjEmb. Select File»Save Project, or right-click the project in the Projects panel and select Save Project. Save the file as Audio_Effects_Emb.PrjEmb. If you want to keep your embedded project documents separate from your FPGA project documents, you may wish to save the embedded project in a sub-folder called Embedded under your FPGA project folder.

Linking an Embedded Project to its Target Processor

An embedded project can be developed in isolation, but pretty soon you're going to want to run it on a target processor. Altium Designer gives you the ability to link your embedded project to an FPGA project containing an embedded processor. The embedded project is linked to the target processor in the Structure Editor.
To link an embedded project to its target processor:

- Make sure both the embedded project and the FPGA project that contains the target processor are loaded in the Projects panel.
- Enable the Structure Editor option at the top of the Projects panel to switch to the Structure Editor mode.
- Left-click and drag the embedded project over the top of the FPGA project. Any valid processor targets will be highlighted in light blue. Drop the project on the MCU (TSK3000A) processor.
- Switch the Projects panel back to File View. Observe that the hierarchy of the projects has been updated, placing Audio_Effects_Emb.PrjEmb as a child of Audio_Effects.PrjFpg.

Adding the Source Code to the Embedded Project

When a new embedded project is first created, it will be created as an empty container. You must then add or create the relevant source files for the project. Altium Designer can compile C source files, C header files, or Assembly files as part of your project. Altium Designer also includes a powerful productivity aid for developing an embedded application, namely the Software Platform Builder, which is used to build a Software Platform for your application.

Building the Software Platform

The Software Platform is a software framework that facilitates writing software to access peripheral devices on the NanoBoard that are part of your FPGA design. It also facilitates the implementation of software protocols, and provides extra functionality that you can use in your application, such as multithreading. In essence, it is a collection of software modules, delivered as source code. These modules are automatically added to your embedded project to take care of various low level routines that are required to control or access peripherals. The modules also provide an interface to the application, offering specific functionality (for example, set_baudrate(), a function to dynamically change the baud rate).

The Software Platform Builder is the graphical user interface that you use to configure and add modules to your project, building up the Software Platform. The Software Platform Builder becomes available when you add a special document to your embedded project: a Software Platform file with the extension .SwPlatform. This document both represents the Software Platform for your project and provides you with a graphical interface to select and configure the modules you need. What you need, of course, depends on the peripheral devices in your FPGA design that you wish to access from your application. The Software Platform Builder can read your FPGA design and import the appropriate low-level modules for the peripherals it finds. You can use this import as a starting point and add more (higher-level) modules to the Software Platform file.

To add a Software Platform and build it:

- Right-click the embedded project in the Projects panel and select Add New to Project»SwPlatform File. A blank Software Platform document named Software Platform1.SwPlatform will be added to the embedded project and displayed in the main editor window.
- Rename the newly created file by selecting File»Save As. Navigate to the same folder as your embedded project, type the name Audio_Effects_Emb.SwPlatform and click on Save.
- The next step is to build up the device stacks in the Software Platform. To start this, click the Import from FPGA button. This will add a low-level hardware wrapper for each of the hardware modules detected in the OpenBus document, as shown by the green icons in the figure below.
- Next you must grow each device stack up, one by one. To do this:
  - Click once on the green General Purpose IO Port wrapper, then click the Grow Stack Up button. The Grow Stack dialog will open; click once on the orange GPIO Port Driver and click OK to add it to the stack.
  - Click once on the green I2S Master Controller wrapper, click the Grow Stack Up button, adding the I2S Driver.
  - Click once on the green SPI Master Controller wrapper, click the Grow Stack Up button, adding the CS4270 Audio Codec Driver. After closing the Grow Stack dialog you will notice that both the SPI Driver and the CS4270 Audio Codec Driver have been added to the SPI stack.
  - Click once on the green Virtual Terminal Instrument driver, click the Grow Stack Up button, adding the Serial Device IO Services context.

That's it: the low-level hardware wrappers, device drivers and context code have been added to your embedded project, and you can now focus on the high level code in the application. To learn more about the Software Platform, read the Introduction to the Software Platform.

The completed Software Platform.

Adding main.C

To create a new C file and add it to the project:

- Right-click the embedded project in the Projects panel and select Add New to Project»C File. A blank text document named Source1.C will be added to the embedded project and displayed in the main editor window.
- Rename the newly created file (with a .C extension) by selecting File»Save As. Navigate to the same folder as your embedded project, type the name main.C and click on Save.

Writing some C Source Code

Now that our embedded project has been linked to a hardware platform that it can execute on, we are ready to start writing some C code. We'll take things slowly and, using this small section of code, write the value 0x55 to the input at Port A of the Digital IO instrument.

To add some simple code to the embedded project:

Open the auto-generated hardware.h file that is now part of the Audio_Effects_Emb.PrjEmb project. Observe that an entry for the Port I/O component's base address has already been made. The base address Base_GPIO may be slightly different in your design.

    //..............................................................................
    #define Base_GPIO 0xFF000000
    #define Size_GPIO 0x00000002
    //..............................................................................

Open main.C and enter the following source code:

    #include "hardware.h"

    #define DIGIO_PORTA (*(unsigned char*)Base_GPIO)

    void main (void)
    {
        DIGIO_PORTA = 0x55;
    }

- Switch to the Devices view by selecting View»Devices View or clicking on the Devices View button in the toolbar.
- Click the arrow to the left of the words Program FPGA on the Program FPGA button to rerun the FPGA build process.
- From the Devices view, locate the Digital IO instrument in the soft JTAG chain. Right-click the instrument and select Instrument to bring up the device's Instrument Rack. The value for the signal AIN[7..0] should display 0x55, as shown in the figure below.

Digital IO instrument rack.

- If nothing appears on the AIN[7..0] input of the Digital IO instrument, be sure that the OR2N1S component is correctly positioned on the schematic (changing it will require a re-build of the design from the Devices view). The TEST_BUTTON port plugin must be connected to the inverted input, or the device will remain in a reset state.
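The same memory-mapped technique works for the second port. hardware.h reports Size_GPIO as 2, and we configured the peripheral with two 8-bit ports, so it is reasonable to assume the second port register sits at the next byte address; that offset is an assumption made for this sketch, not something taken from the generated header. With it, a variation on the test program echoes whatever you set on the instrument's BOUT[7..0] channel back onto its BIN[7..0] display, assuming the port behaves as the driver calls in the full listing suggest (reads sample the inputs, writes drive the outputs):

    #include "hardware.h"

    #define DIGIO_PORTA (*(volatile unsigned char*)Base_GPIO)
    #define DIGIO_PORTB (*(volatile unsigned char*)(Base_GPIO + 1))  /* assumed: port B at byte offset 1 */

    void main (void)
    {
        while (1)
        {
            unsigned char b = DIGIO_PORTB;   /* reading samples the pins driven by the instrument */
            DIGIO_PORTB = b;                 /* writing drives the port's outputs back to it      */
        }
    }

In practice you would use the ioport driver from the Software Platform instead of raw pointers, as the full listing in the next section does; volatile is added here so the compiler re-reads the hardware register on every pass of the loop.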
Developing the Complete Application

At the beginning of this tutorial, we indicated that we would be developing a complete application capable of accepting an incoming audio signal and producing an audio output with a reverb-style effect added in software. So far we have laid the foundations for this application and only a few steps remain. The final step is to complete the software required to create the reverb effect. The box below contains the complete listing of main.C.

    #include <drv_cs4270.h>
    #include <drv_i2s.h>
    #include <drv_ioport.h>
    #include "devices.h"
    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PORT_A 0
    #define PORT_B 1
    #define I2S_BUF_SIZE 512
    #define AUDIO_BUF_SIZE 65536                 // this number MUST be a power of 2
    #define I2S_SAMPLERATE 48000
    #define MS_SAMPLES (I2S_SAMPLERATE / 1000)   // millisecond samples

    int32_t i2s_inbuf[I2S_BUF_SIZE] = {0};
    int16_t in_temp_buf[I2S_BUF_SIZE / 2] = {0};
    int16_t process_buf[AUDIO_BUF_SIZE] = {0};

    cs4270_t *cs4270_drv;
    i2s_t *i2s_drv;
    ioport_t *ioport_drv;

    void init_audio(void);
    void get_audio(void);
    void process_audio_echo(uint8_t delay);
    void passthrough(void);
    void put_audio(void);

    void main(void)
    {
        uint8_t effect_enable;
        uint8_t delay_coefficient = 0;

        // initialize the audio
        init_audio();
        ioport_drv = ioport_open(DRV_IOPORT_1);

        // output a list of instructions on the terminal explaining how to use
        // the digital IO instrument to control the audio
        printf("\n\nAudio Reverb Example:\n");
        printf("\n1. Set Bit 0 of BOUT[7..0] on the digital IO instrument for audio pass\n through.\n");
        printf("\n2. Set Bits 1 - 7 to initiate the audio Reverb Effect.\n");
        printf(" The Slider AOUT[7..0] will control the delay used by the reverb effect.\n");
        printf("3. Clear all bits on BOUT[7..0] to stop audio.\n");

        while (1)
        {
            // read the value from the digital IO connected into GPIO port B
            effect_enable = ioport_get_value(ioport_drv, PORT_B);
            // loop the value back to the output of GPIO port B (display to user)
            ioport_set_value(ioport_drv, PORT_B, effect_enable);

            // create a coefficient to control the delay from the value of the
            // digital IO slider at port A of the GPIO (aka Port 0 or AOUT[7..0])
            delay_coefficient = ioport_get_value(ioport_drv, PORT_A);
            // loop delay_coefficient back to the A input of the digital IO
            // to display on the digital IO input channel A
            ioport_set_value(ioport_drv, PORT_A, delay_coefficient);

            // go and get the audio -- always gets audio when available and
            // tries to fill the input buffer
            get_audio();

            // test the IO port B status to decide what type of effect to create
            if (effect_enable == 1)
            {
                // simple fetch and put audio function
                passthrough();
                // put the audio in the output buffer
                put_audio();
            }
            else if (effect_enable > 1)
            {
                // process the audio and create the echo
                process_audio_echo(delay_coefficient);
                // put the audio in the output buffer
                put_audio();
            }
        }
    }

    /*
     * get audio and place into audio buffer
     */
    void get_audio(void)
    {
        uint32_t rx_size;

        // if the incoming buffer holds fewer than 256 samples (1/2 buffer size), get more samples
        while (i2s_rx_avail(i2s_drv) < I2S_BUF_SIZE / 2)
        {
            i2s_rx_start(i2s_drv);    // if no samples available, make sure the receiver is running
        }
        rx_size = i2s_rx_avail(i2s_drv) & ~1;    // make even, the same number of samples for both channels
        rx_size = rx_size > I2S_BUF_SIZE ? I2S_BUF_SIZE : rx_size;
        i2s_read16(i2s_drv, in_temp_buf, rx_size);    // read samples into the incoming buffer
    }

    /*
     * accept incoming audio and create a reverb effect
     */
    void process_audio_echo(uint8_t delay)
    {
        static int16_t *prcs_insert_ptr = process_buf;
        // create 2 pointers, slightly offset from one another, to read data at
        // different times in the history of the data acquisition process.
        // The delta between the two corresponds to the length of the delay.
        int16_t *prcs_echo_ptr = prcs_insert_ptr - ((MS_SAMPLES * ((delay) * 5)) + 1);
        int16_t *curr_ptr = in_temp_buf;

        if (prcs_echo_ptr <= process_buf)
            prcs_echo_ptr += AUDIO_BUF_SIZE;

        for (int i = 0; i < I2S_BUF_SIZE / 2; i++)
        {
            *prcs_insert_ptr = (*prcs_echo_ptr >> 1) + *curr_ptr;
            prcs_insert_ptr++;
            if (prcs_insert_ptr == &process_buf[AUDIO_BUF_SIZE])
                prcs_insert_ptr = process_buf;
            curr_ptr++;
            prcs_echo_ptr++;
            if (prcs_echo_ptr == &process_buf[AUDIO_BUF_SIZE])
                prcs_echo_ptr = process_buf;
        }
    }

    /*
     * passthrough audio triggered from the digital IO
     */
    void passthrough(void)
    {
        static int16_t *prcs_insert_ptr = process_buf;
        int16_t *curr_ptr = in_temp_buf;

        for (int i = 0; i < I2S_BUF_SIZE / 2; i++)
        {
            *prcs_insert_ptr = *curr_ptr;
            prcs_insert_ptr++;
            if (prcs_insert_ptr == &process_buf[AUDIO_BUF_SIZE])
                prcs_insert_ptr = process_buf;
            curr_ptr++;
        }
    }

    /*
     * write audio to the I2S output buffer
     */
    void put_audio(void)
    {
        static int16_t *prcs_extract_ptr = process_buf;

        // wait till there is space in the transmit buffer to store the received samples
        while (i2s_tx_avail(i2s_drv) < I2S_BUF_SIZE / 2)
        {
            i2s_tx_start(i2s_drv);    // if no space available, make sure the transmitter is running
        }
        i2s_write16(i2s_drv, prcs_extract_ptr, I2S_BUF_SIZE / 2);
        prcs_extract_ptr += I2S_BUF_SIZE / 2;
        while (prcs_extract_ptr >= &process_buf[AUDIO_BUF_SIZE])
            prcs_extract_ptr -= AUDIO_BUF_SIZE;
    }

    /*
     * initialize the audio peripherals
     */
    void init_audio(void)
    {
        while (cs4270_drv == NULL)
        {
            cs4270_drv = cs4270_open(DRV_CS4270_1);
        }
        i2s_drv = i2s_open(DRV_I2S_1);
        i2s_rx_start(i2s_drv);
        i2s_tx_start(i2s_drv);
    }

Listing of main.C.

To complete the embedded project and download the software to the target:

- Copy the contents of the listing in the box above to the clipboard.
- Return to main.C and use Edit»Select All or Ctrl+A to select the existing C code, then press Delete.
- Select Edit»Paste or Ctrl+V to paste the clipboard contents into main.C.
- Save your work.
- Recompile and download the updated program by pressing the Compile and Download button in the toolbar.
- Connect an audio source to the black Line In port on the back of the Desktop NanoBoard (or the front of the NB3000).
- Switch to the Devices view by selecting View»Devices View.
- Right-click the Terminal instrument and select Instrument to launch the device's Instrument Rack. The Terminal instrument should display a list of instructions as they appear in the figure below. (These instructions are created using printf commands in the main.C source file.)
- Launch the instrument rack for the Digital IO instrument and set bit 0 of BOUT[7..0] by clicking on the right-most bit in the instrument rack. This will enable the audio pass-through, and audio should now be heard through the speakers on the Desktop NanoBoard. Use the volume knob on the front of the Desktop NanoBoard to adjust the volume.
- Set bit 1 of BOUT[7..0] from within the Digital IO instrument to enable the reverb effect. Use the slider AOUT[7..0] to hear the difference in audio output.

Instrument Rack - Soft Devices.
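As a quick sanity check on the numbers in the listing: MS_SAMPLES works out to 48000 / 1000 = 48 samples per millisecond, and process_audio_echo() places the echo tap MS_SAMPLES * (delay * 5) + 1 samples behind the insertion point, so each step of the slider adds 5 ms of delay. At the slider's maximum value of 255, that is 48 * 1275 + 1 = 61,201 samples, or roughly 1.28 seconds of echo, which still fits inside the 65,536-sample process_buf history.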
Considering your Deployment Options

Now that we've arrived at a functional design, let's examine how you might deploy your product in the field. Altium provides a range of deployment NanoBoards that you can either use entirely as an off-the-shelf solution or customize with your own peripheral boards. Alternatively, you can go for a fully custom PCB solution that makes selective use of existing NanoBoard circuit blocks and combines them together into a single design. The choice of deployment options will be influenced by a range of factors including cost, time to market, logistics, and form and fit constraints. While we can't tell you which solution will be best for your specific situation, we can present the range of options so that you can get a fair indication of their pros and cons.

Level 1: Development of pure 'Device Intelligence'

Almost all designs will begin at this level. The focus of development is on the application software and programmable hardware, using one of the Development NanoBoards such as the Desktop NanoBoard. Very little regard is given to the hardware platform to be used in the final implementation, and work can rapidly proceed on proving out and cementing the features of the design. The decision of how (or if) to deploy the newly created system is independent of this level of design. Assuming you intend to deploy your design beyond one of the Desktop NanoBoard products, you have two degrees of freedom:

- Hardware Platform - Will you use off-the-shelf (OTS) hardware, create your own, or use a mixture of the two?
- Enclosure - Will you use an OTS enclosure, create your own from scratch, or modify an existing one?

The following sections discuss how you might work within these degrees of freedom to varying levels.

Level 2: OTS Hardware Platform, OTS Enclosure

This level is the simplest deployment option, as it makes use of both an off-the-shelf hardware platform and enclosure. By using one of Altium's deployment NanoBoards, you can mix and match different daughter boards and peripheral boards to produce a customized hardware platform that is tailored to your application. In addition, enclosing the complete NanoBoard in one of Altium's supplied cases will ensure a professional appearance for your product and avoids the logistical headaches associated with manufacturing. Design compatibility ensures you can seamlessly migrate your design from the Desktop NanoBoard to a complete deployment solution.

Deploying your designs in this fashion allows you to focus primarily on the device intelligence without being bogged down by hardware implementation issues. The unique identification system that has been built into the NanoBoard enables it to probe all connected daughter and peripheral boards and quickly reconfigure the entire design. You can be up and running on your deployment platform in little more than the time it takes to rerun the FPGA build flow.

Level 3: OTS Hardware Platform, Custom Enclosure

Deploying your design using an OTS deployment NanoBoard inside an enclosure of your own design is an incremental step up from level 2 that can have a dramatic impact on the level of professionalism you are able to portray to your customers. Use one of Altium's mechanical STEP models as the basis for customization, or construct a completely new enclosure of your own design. Either way, you'll have the ability to tailor the form and fit of your end product to ensure it fits snugly into its final environment.
Altium Designer's 3D bodies allow you to quickly visualize your end product and trap any interference issues that may crop up between the ECAD and MCAD environments.

Level 4: Mixture of OTS and Custom Hardware Platform, OTS or Custom Enclosure

While Altium is continuously developing more peripheral boards, there still might be occasions when you need to include your own custom hardware as part of the design. The expandability of the NanoBoards ensures that this is a relatively simple task, and it gives you the best of both worlds. You can selectively customize the hardware platform while still leveraging the existing infrastructure that has been designed into the NanoBoard architecture. As with the previous level, the extent to which you use an existing enclosure or create/customize your own is completely up to you.

Level 5: Custom Hardware Platform, OTS or Custom Enclosure

This final level requires the greatest amount of hardware development but gives you the greatest flexibility in terms of form and fit. In particularly cost-conscious applications, it may be necessary to rationalize NanoBoard circuitry to only those subsystems that are absolutely necessary to the design. The design reuse capabilities of Altium Designer make this process a very quick and easy task. Because all of the NanoBoard circuits are included as design reuse blocks and installed as part of Altium Designer, you can link those blocks into your custom hardware design and avoid having to reinvent the wheel. Altium Designer even includes the part numbers and supplier information for all parts used in the NanoBoards. This makes the process of procuring parts an absolute breeze.

Transferring the Design to a Deployment Platform

The deployment level you choose will have some bearing on how simply you can retarget your design. The NanoBoard infrastructure includes intelligence that allows it to probe connected daughter boards and peripheral boards and automatically create a new configuration based on the connected hardware. All hardware supplied by Altium conforms to this standard, but if you are using hardware from a third party, or your own custom hardware that does not include this feature, then you may need to perform some configuration steps manually to arrive at the same destination. There is plenty of help available in Altium Designer, through the Knowledge Centre, to assist with the creation of new constraint files and configurations, so I won't repeat that content here. It is sufficient to say that once the new configuration has been defined, you can be up and running on your deployment platform in little more than the time it takes to rerun the FPGA build flow.
https://techdocs.altium.com/display/ADOH/Tutorial+-+Getting+Started+with+the+Innovation+Station
CC-MAIN-2019-13
refinedweb
9,882
51.38
Go to: Synopsis. Return value. Keywords. Related. Flags. Python examples.

Synopsis:

    timer([endTimer=boolean], [lapTime=boolean], [name=string], [startTimer=boolean])

Note: Strings representing object names and arguments must be separated by commas. This is not depicted in the synopsis.

timer is NOT undoable, NOT queryable, and NOT editable.

Allow simple timing of scripts and commands. The resolution of this timer is at the level of your OS's gettimeofday() function.

Note: This command does not handle stacked calls. For example, the code below will give an incorrect answer on the second timer -e call.

    timer -s; timer -s; timer -e; timer -e;

To do this, use named timers:

    timer -s; timer -s -name "innerTimer"; timer -e -name "innerTimer"; timer -e;

Return value: None

Python examples:

    import maya.cmds as cmds

    cmds.timer( s=True )
    # code being timed
    print "START: time this"
    for i in range (0, 50):
        print ("time this "+str(i))
    print "END: time this"
    cmds.timer( e=True )

    # Named timers can be used for nesting
    cmds.timer( s=True, name="outerLoop" )
    print "START: macro loop timing"
    for i in range(0,50):
        cmds.timer( s=True )
        for j in range(5,50):
            newObjs = cmds.sphere( spans=j )
            cmds.delete( newObjs )
        innerTime = cmds.timer( e=True )
        lapTime = cmds.timer( lap=True, name="outerLoop" )
        print "\tInner loop %d = %g" % (i, innerTime)
        print "\t SUB = %g" % lapTime
    fullTime = cmds.timer( e=True, name="outerLoop" )
    print "END: Full timing was %g" % fullTime
http://download.autodesk.com/global/docs/maya2014/en_us/CommandsPython/timer.html
CC-MAIN-2019-13
refinedweb
234
69.28
As a Backend Engineer or Data Scientist, there are times when you need to improve the speed of your program, assuming that you have already used the right data structures and algorithms. One way to do this is to take advantage of multithreading or multiprocessing.

In this post, I won't be going into detail on the inner workings of multithreading or multiprocessing. Instead, we will write a small Python script to download images from Unsplash. We will start with a version that downloads images synchronously, or one at a time. Next, we use threading to improve execution speed. I am sure you are excited to learn this...

Multithreading

In a nutshell, threading allows you to run your program concurrently. Tasks that spend much of their time waiting for external events are generally good candidates for threading. They are also called I/O bound tasks, e.g. writing to or reading from a file, network operations, or using an API to download stuff online. Let's take a look at an example that shows the benefit of using threads.

Without Threading

In this example, we want to see how long it takes to download 15 images from the Unsplash API by running our program sequentially.

    import requests
    import time

    img_urls = [
        # ... 15 Unsplash image URLs (stripped from this copy of the post) ...
    ]

    start = time.perf_counter()  # start timer
    for img_url in img_urls:
        img_name = img_url.split('/')[3]  # get image name from url
        img_bytes = requests.get(img_url).content
        with open(img_name, 'wb') as img_file:
            img_file.write(img_bytes)  # save image to disk
    finish = time.perf_counter()  # end timer
    print(f"Finished in {round(finish-start,2)} seconds")

    # results
    # Finished in 23.101926751 seconds

With Threading

Let's see how the threading support in Python can significantly improve our program's execution.

    import time
    import requests
    from concurrent.futures import ThreadPoolExecutor

    def download_images(img_url):
        img_name = img_url.split('/')[3]
        img_bytes = requests.get(img_url).content
        with open(img_name, 'wb') as img_file:
            img_file.write(img_bytes)
            print(f"{img_name} was downloaded")

    start = time.perf_counter()  # start timer
    with ThreadPoolExecutor() as executor:
        results = executor.map(download_images, img_urls)  # similar to map(func, *iterables)
    finish = time.perf_counter()  # end timer
    print(f"Finished in {round(finish-start,2)} seconds")

    # results
    # Finished in 5.544147536 seconds

To get a better understanding of how to use the threading module in Python, visit this link:

We can see that the threaded code improved speed significantly compared with the unthreaded code, i.e. from 23 seconds down to 5 seconds 💃. For this example, please note that there is an overhead in creating threads, so it makes sense to use threads for multiple API calls, not just a single call. Also, for intensive computations like data crunching or image manipulation, multiprocessing performs better than threading.

In conclusion, for I/O bound tasks, anytime our program is running synchronously it is actually not doing much on the CPU; it's probably waiting around for some input. That's actually a good sign that we can get some benefit from running our program concurrently using multithreading.

Next week, in part 2, we'll learn how to use multiprocessing for CPU-heavy tasks to speed up our programs :). Next, we learn how to connect a Django application to a dockerized PostgreSQL and pgAdmin 4 image running on your local machine 😎

Please follow me and turn on your notifications. Thank you! Happy coding! ✌
✌ Discussion (4) Hi, I get why you would compare HTTP requests, but I think the article would benefit from a final step advertising the use of an async HTTP framework such as aiohttp. A lot of people unfortunately copy-paste samples from the internet ;-) True, but this post is an introduction to the main topic. Thanks for pointing that out. I like the map syntax, I haven't used that before! Seems to make sense to me now. The standard library threading module always confused me before. I've been using background.py for a few years and really like its simplicity. Background Tasks in Python for ⚡ Data Science Waylon Walker ・ Dec 8 '19 ・ 3 min read Yes, I like the concurrent module more than the old threading module. The methods are straightforward and easy to understand. I have not tried background.py. I will have a look at it this week. Thanks for your feedback.
https://dev.to/mojemoron/a-beginners-guide-to-multithreading-and-multiprocessing-in-python-part-1-n6h
CC-MAIN-2022-21
refinedweb
698
66.23
Swing Section Index

If I want to manually size my icon for the system tray, how do I find out the correct size for the platform? The getTrayIconSize() method of SystemTray will return the appropriate Dimension for the icon. This allows you to either pick the most appropriate size from available icons or generate an icon of ...more

How do I show a popup message over a system tray icon/application? The displayMessage() method of TrayIcon allows you to show a timed message, if supported on the platform.

How do I place a tooltip over a system tray icon? The TrayIcon class offers a setToolTip() method to provide a string for the message.

When I display HTML in a JEditorPane, does it support JavaScript? No.

How do I select a row (or rows) in a JTable? In the Model-View-Controller (MVC) world of Swing, the JTable is just a display mechanism. Instead of selecting rows in the table (view), you select rows in the selection model: ListSelectionMode...more

What is a peer? AWT uses native code (code specific to a single operating system) to display its components. Each implementation of the Java Runtime Environment (JRE) must provide native implementations of Button...more

How can I highlight some text in an inactive textfield? Here is a sample for you.

import javax.swing.*;
import java.awt.*;
import javax.swing.text.*;

public class TpHighlight extends JFrame {
    String tf_str = "Hello this is a demo for High Lighting...more

SSN Validation Are there any guides to validating social security numbers? The first three digits of an SSN map to the location where the number was requested. The Social Security Administration maintains a list of valid digit mappings at

How can I provide word completion functionality in a JComboBox? (When the user enters characters, the closest matching item should be selected.)

How do I specify the popup menu to show for an application on the system tray? When you create the TrayIcon for the SystemTray, you specify the PopupMenu. TrayIcon trayIcon = new TrayIcon(anImage, "Tooltip", aPopupMenu); Just don't forget to fill it with MenuItem objects. more
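Several of these answers reference the Java 6 SystemTray/TrayIcon API; a minimal sketch of my own tying them together (the icon file name is a placeholder):

import java.awt.*;

public class TrayDemo {
    public static void main(String[] args) throws AWTException {
        if (!SystemTray.isSupported()) {
            return; // no tray on this platform
        }
        SystemTray tray = SystemTray.getSystemTray();

        // Size the icon image for the platform, as described above
        Dimension size = tray.getTrayIconSize();
        Image image = Toolkit.getDefaultToolkit().getImage("icon.png")
                .getScaledInstance(size.width, size.height, Image.SCALE_SMOOTH);

        // Popup menu shown when the tray icon is activated
        PopupMenu menu = new PopupMenu();
        menu.add(new MenuItem("Exit"));

        TrayIcon icon = new TrayIcon(image, "Tooltip", menu);
        icon.setToolTip("My tray application");
        tray.add(icon);

        // Timed popup message over the icon, if the platform supports it
        icon.displayMessage("Hello", "A popup over the tray icon",
                TrayIcon.MessageType.INFO);
    }
}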
http://www.jguru.com/forums_index.php/faq/client-side-development/swing
CC-MAIN-2018-13
refinedweb
376
56.76
08 February 2012 07:28 [Source: ICIS news] SINGAPORE (ICIS)--Finnish chemicals firm Kemira reported on Wednesday a 50.6% year-on-year increase in its net profit to €37.8m ($50.4m) in the fourth quarter of last year, despite a surge in raw materials prices. The firm’s revenues slipped by 0.6% year on year to €543.3m, while earnings before interest, taxes, depreciation and amortisation (EBITDA) fell by 2.81% to €65.9m, the company said in a statement. “Raw material prices increased rapidly in the beginning of the year,” it said. “The prices for some of our key raw materials have continued to increase during the second half of the year and are still at a very high level,” the company added. The firm’s net profit fell to €140.3m in the full year of 2011, compared with €646.9m in the same period a year earlier, the firm said. Kemira’s 2010 net profit figure included a non-recurring income of €529.2m from the divestment of decorative paints firm Tikkurila. Its overall sales in 2011 rose 2.3% year on year to €2.21bn, while EBITDA fell by 2.29% to €259.6m. “In the near term, uncertainty in [...] “In 2012, Kemira expects the revenue and operative earnings before interest and taxes (EBIT) to be slightly higher than in 2011,” Kemira said.
http://www.icis.com/Articles/2012/02/08/9530297/finlands-kemira-q4-net-profit-surges-50.6-to-37.8m.html
CC-MAIN-2014-15
refinedweb
231
69.89
> > ...I sent out a note a while ago now trying to scare up
> > some ideas on how to vet the current list of 2.6 proposals
> > and get to a final "plan". I didn't get much (any?) response :(
>
> I am, as the author of the dtml-set tag, of course willing to commit to the
> implementation of this tag for 2.6
>
> How about 'vetted' - It's set to N, when will I know if I should start
> coding?

Ivo - I don't have a problem with the spelling of this. I _do_ have a problem with the fact that it (your existing release) actually stores the variable in REQUEST. If it were to store them somewhere more appropriate in the DTML namespace stack, I'd be happy to OK it.

We'd also need someone to commit to providing the extra docs for the help system and the dtml reference section of the online Zope book.

Brian Lloyd [EMAIL PROTECTED]
V.P. Engineering 540.361.1716
Zope Corporation

_______________________________________________
Zope-Dev maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! **
https://www.mail-archive.com/[email protected]/msg10212.html
CC-MAIN-2017-04
refinedweb
184
72.56
Attendees - Present - +1.617.715.aaaa, dbs, DaveReynolds, aisaac, +1.510.435.aabb, Workshop_room, +1.510.435.aacc, +1.510.435.aadd, +1.510.435.aaee, kcoyle - Chair - Arnaud Le Hors and Harold Solbrig - Scribe - sandro, arthur

Contents - Topics - Introductions - State of the Art - Presentation from Mark Harrison (U Cambridge) - Requirements for RDF Validation - Harold Solbrig - RDF Validation in a Linked Data World - Esteban-Gutiérrez - Lightning Talks - Requirements Discussion - Guoqian Jiang presentation - Mayo Clinic - Simple Application-Specific Constraints for RDF Models - Shawn Simister - Tim Cole - Using SPARQL to validate Open Annotation RDF Graphs - Further Requirements Discussion

Introductions

State of the Art

<Ashok_Malhotra> When we started RDF, folks said it was great BECAUSE it had no schema. Are we changing our mind?
<Arnaud> Sounds like JSON :-)
<hsolbri> The schema is there whether you write it down formally or not.
<DavidBooth> There are lots of different schemas in RDF. The beauty of RDF is the ability to combine them.
<arthur> PDF version of my charts at

Presentation from Mark Harrison (U Cambridge) (slides, report summary)

Robert: GS1: we did bar codes. We work with the Auto-Id Labs (started here at MIT). GS1 digital, trying to leverage all the master data in the supply chain, business-to-consumer
Mark: (slide with iPhones, LOD for products, Pre-Sale) ... more informed choices, eg products with particular environmental impact [on slide 9] ... do we want broken hyperlink checking? ... (can we validate offline) ... what is the scope/boundary of what we validate? ... When we have these huge code lists, the scale of validation queries might be problematic ... 3000 attributes, hundreds of which are code-list-driven
<hsolbri> Focus on markup and validation tools rather than the actual validation
<DaveReynolds> +1, publishing and inspecting the contract is at least as important as enforcing the contract
hsolbri: Happy to see these use cases. I think RDF "validation" is not the best framing. I think it's MORE important to publish the characteristics of what's in a store, rather than just validating.
arthur: This sounds a lot like what we've done at IBM. Can you describe....
mark: It's about making sure you can ... ... We need to make sure the two datasets are in sync with each other. ... You need to have confidence that these are the true values asserted by the manufacturer. ... Maybe we could use digital signatures. There's liability to consider.
arthur: you're comparing published data with reference data. you don't need to compute a sig
mark: true, we could use prov as an alternative to sigs
<arthur> GS1 use cases very similar to OSLC, except for digital signatures
timcole: The issue is cardinality, not validations. Value is correct... unit transformations. 600g = 1.2lbs or whatever. Are you encompassing that in validation?
mark: Yes. ... like in eric's example of reproducedOn date -- you want to do checking like that, with units conversion ... EU legislation says vitamins are expressed in certain units. Sanity checking on values -- to make sure we're not off by orders of magnitude
timCole: Does broaden the scope.
mark: Yes.
Robert: We used to have a closed network for this. To open it to millions of producers makes this more complex.
Ashok_Malhotra: If you want to test whether this date follows this other date, there are xquery functions to handle all of that stuff. So we can just pick them up. We don't have to invent them again
mark: We should leverage what we can, yes.
... And using qudt for conversion of units, and so on.

Requirements for RDF Validation - Harold Solbrig

hsolbri: [re: ASN.1] we had "strings" which were kind of like rdf graphs. a ptext code was a sort of ontology
<guoqian> hsolbrig: from ptxt to ASN.1
hsolbri: RDF only guarantees triples, literals ... With SPARQL, you have to code EVERYTHING as optional!
<mgh> In SPARQL need to use OPTIONAL extensively for defensive coding in case value is not present
hsolbri: ... which is NP ... Side note: Dataset (identity is content), Triple store (identity separate from content)
<guoqian> hsolbri: a definition about what is an RDF store.
... We need a way for them to be published, and for them to be discovered. ... Future -- invariants will change over time.
<guoqian> hsolbri: RDF validation must provide a standard syntax and semantics for describing RDF invariants
hsolbri: Semantic Versioning. semver.org ... That was the MUST. Here's the SHOULD. ... representable in RDF, maybe also a DSL ... formally verifiable, consistent, maybe complete ... self-defining ... able to express a subset of UML 2 class and attribute assertions (and some OCL?) ... able to express XML Schema invariants ... implementable in existing tooling and infrastructure (RDF, SPARQL, REST, ...)
hsolbri: [slide 17] Example of allowed transitions -- you're allowed to add subjects, but not to add predicates. ... spectrum from read-only to write-any-triple.
<guoqian> hsolbri: LOD today OK for research but not for production systems
<guoqian> ... OK for relatively static stores but not for federation and evolution
<aisaac> Question for Harold: Just checking, when you say "All constraints of XML Schema", this includes sequences?
guoqian: You're offering another definition of "store". Is this different from the existing definition of named graphs?
hsolbri: I'd have to go back and look at that. I think Named Graphs are local to a quad store. And I'm focusing on having the identity of a store, but have the contents be constrained.
<sandro> as I understand SPARQL 1.1 terminology, a "graph store" can have multiple "states" ... so you're talking about a particular graph store to only contain certain datasets
Arnaud: people use the term "graph" sometimes to mean something mutable or not, gboxes and gsnaps.
hsolbri: "magic box" was a term we once used.
aisaac: I heard Harold say he wants to represent all that's allowed by XML Schema. Does that include Sequence Information?
hsolbri: Great question. There are situations where people take advantage of order, but this may be a drawback. so, maybe MOST of XML schema. The challenge is how to get it back out in the right order....
Arnaud: We have on the agenda a presentation from Noah Mendelsohn, to talk about XML Schema, warning us against reproducing some of their mistakes. ... Some people will say 20/80 rule, but which 80?
Arthur: Your summary slide was a bit disappointing/negative.
hsolbri: I believe fixing this is necessary to make RDF able to be a primary source for content.
arthur: I consider your second negative to be a positive. It's why we've adopted RDF. Traditional data warehouses are very expensive because they completely enforce the schema. RDF allows more graceful evolution.
evrensirin: Graceful evolution of data is an advantage of RDF. That's not about enforcement of schema, but about having the option to not have a schema. ... Clarification on post-conditions. State transitions, or states?
hsolbri: Closely related to reasoning.
If you're doing anything beyond a basic PUT, adding a triple to a store may involve doing additional inferences, eg adding a firstname may result in the presence of a fullname in a store. ... what has to be true for this set of rules to fire; what is true if they do.

RDF Validation in a Linked Data World - Esteban-Gutiérrez

[discussion of dynamics in validation not captured]

Linked Data Profiles - Paul Davidson

Paul wants a "Linked Data Profile" that describes the properties, values, etc., that should be used so that multiple councils in England can share data
<sandro> +1 Paul Davidson, make it easier to share municipal data

Forms to direct interaction with Linked Data Platform APIs - Roger Menday

Roger: described use of REST APIs at Fujitsu
Roger: participating in LDP activity ... need to describe parameters to create resources (Progenitor) ... use case: enable robots to fill in forms ... proposed a vocab (f:parameterSet ...) to be included in an LDP container

Europeana and RDF data validation (slides, slideshare, report summary)

Antoine: aggregates data from multiple sources (museums) and needs to enforce constraints ... described as a table: property, occurrence, range ... using OWL now ... EDM is implemented as XML Schema (for RDF) with Schematron rules
<dbs> EDM = Europeana Data Model
Antoine: Also using Dublin Core Description Set ... OWL = hard, SPARQL = low-level

Thoughts on Validating RDF Healthcare Data

<guoqian> -- Schema promiscuous: why RDF?
<aisaac> Bye folks. It was a great morning. Enjoy the rest of your day, and thx a lot for the slide moving!
dbooth: multiple schemas, multiple data sources ... ==> need multiple perspectives on validation of the same data ... wish list: build on SPARQL, ... use SPARQL UPDATE to build intermediate results (instead of one giant SPARQL query) ... check URI patterns ... must be incremental so you can do it continuously, e.g. like regression testing ... declarative is too awkward for complex rules ==> need operational (imperative): SPARQL UPDATE pipelines

Validation requirements and approaches - Dave Reynolds

DaveReynolds: currently working with UK gov: multiple vocabs, manual docs, each publisher validates their data ... need a shared validation approach: need to specify "shape" of data ... declarative rules are desirable ... understandable by "mortals"
<hsolbri> Interesting: does Reynolds's declarative requirement clash with Booth's procedural?
DaveReynolds cites W3C Datacube vocab
<DavidBooth> Harold, I think it depends on the complexity of the validation check. If it can be expressed in a simple declarative rule, then that is easiest. My point is that for more complex checks, operational is needed.
DaveReynolds: SPARQL used to express Datacube integrity constraints ... SPARQL queries hard to understand ... for irregular data, OWL is also too hard
<guoqian> need ability to validate against external services such as registries
DaveReynolds: need to specify controlled terms too

Requirements Discussion

Arnaud framing discussion: what do we need? What can we afford?
Harold: compare need for procedural steps versus declarative constraints ... must declarative description also be executable (for validation) e.g. by translation to SPARQL ... e.g.
in many cases, the datastore content is already valid, so the missing capability is to advertise what's in a store
David: desirable to have a high-level specification that is translatable to an executable language (SPARQL)
Arnaud: use the IRC queue system "q+" to get on queue
<DavidBooth> David: Want the best of both worlds: declarative when a constraint can be easily expressed that way, while allowing fall back to SPARQL when necessary. So to my mind the ideal would be declarative *within* the SPARQL framework.
<Zakim> ericP, you wanted to discuss XML Schema/RNG + schematron
Dave: SPARQL is too low level: need high-level description
Eric: uses multiple schema languages XSD, RelaxNG, Schematron ... we'll probably have a high-level validation language that is extensible with low-level rules in SPARQL, JS, etc
<hsolbri> UML has Class, property and OCL (schematron equivalent)
Evren: SPARQL has extension points. Concern about SPARQL UPDATE since it changes data
David: didn't imply to actually change data
Tim: OWL wasn't developed for validation, SPARQL wasn't developed for validation: why not have a language without baggage
Harold: we should be informed by UML
Ashok: should split up problem, 1) state, 2) structure, 3) constraints
Arnaud: perspectives are 1) validation, 2) description
Eric: description should be translatable to SPARQL, SPIN, whatever
<Zakim> hsolbri, you wanted to say if it isn't compatible, I think we need a good justification as to why.
Eric: cites Stefan Decker proposal to translate description into SPARQL
Harold: cites project to translate UML -> Z -> SPARQL
<Zakim> evrensirin, you wanted to talk about what we can afford with sparql translation
<guoqian> hsolbri: working on translating from UML to Z to SPARQL
Evren: translation is a good implementation strategy, but not for state transitions
<Zakim> ericP, you wanted to say that coverage of all triples may be tricky in SPARQL
David: use multiple graphs or datasets to describe pre/post conditions
<Zakim> labra, you wanted to talk about RDF profiles
Labra: describes work on RDF validation based on profiles ... like Schematron, using SPARQL instead of XPath
<Zakim> hsolbri, you wanted to say proposed requirement - invariants (and rules?) expressible in RDF
Harold: SPARQL not using RDF (unlike SPIN) - we should require an RDF representation
<guoqian> hsolbri: SPARQL should be able to be defined in RDF with metadata
Evren: SPIN is going to allow a literal string of SPARQL
Harold: don't want to parse another grammar
Evren: SPIN has both - RDF based and literal SPARQL string
<Zakim> ericP, you wanted to ask if the expressivity of SPIN in RDF is of operational value
Evren: what is the value of the RDF representation of SPARQL in SPIN? Is this just for query governance?
Harold: RDF is useful for impact analysis
<Zakim> DavidBooth, you wanted to say I think a main reason for the RDF-based SPIN syntax is the ability to change namespaces in the query
Steve: need to also see why validation fails
<guoqian> hsolbri: meta-repository may be an argument for RDF validation
<Zakim> DavidBooth, you wanted to say one thing I particularly like about SPIN CONSTRUCT rules is the ability to attach arbitrary data to a validation error
Harold: metadata merging is important so RDF is useful in that use case
David: SPIN CONSTRUCT rules allow attachment of other data
<SteveS> I'd like the validation results to not only provide a useful message that a tool could possibly recover from, but also the context such as the triples causing the problem and the rules that caused it (some guidance on how to become valid would be helpful)
Arnaud: need to discuss what is affordable ... need to prioritize what we can do in a 2-year period ... experience shows that the experience of developing standards in charter groups can be brutal [laughs]

End

<arthur> Break for lunch courtesy of W3C
<arthur> check out this w3c spec that contains Z notation

Guoqian Jiang presentation - Mayo Clinic

<hsolbri> [at Slide 8: Architecture] Clinical Element Models converted to XML Schema, instance data to XML; then Schema to OWL and instances to RDF
Guoqian: Slide 11: Check constraints and validate ... Use SPARQL
Eric: Is the SPIN generated from the Schema?
Jiang: No, by hand ... perhaps in future
Guoqian: Slide 15: Reference Model picture (there is a SPARQL error on chart 16)
Guoqian: Slide 16: Data values
Guoqian: Slide 19: RDF Rendering of Domain Template ... using SPIN in an RDF Form
Guoqian: Slide 20: Discussion Points ... RDF Validation against CIMI Models ... Challenging issues (data types, value set binding) ... XML Semantics Reuse Technology
<arthur> i don't understand XSD->OWL
<arthur> XSD = constraints, OWL = Inference
Guoqian: Slide 21: Picture showing Technologies and their Relationships; Overlay: BRIDGing Technology
Arthur: How can you translate XML Schema to OWL or UML to OWL?
Eric explains ... they are different but can be used in similar ways
Discussion on translation between UML and OWL, XML and OWL
scribe: constraints and reasoning are just different
<kcoyle> aadd is kcoyle

Q&A

Discussion of constraint checking vs. inference
Arnaud: Are you doing this mapping on Slide 10 or are you thinking of doing this? ... asks about validation at different levels
Harold: This is a vision ... MIF is an extension of UML with a higher degree of expressivity ... Effort to translate MIF to OWL
[example resulting clinical data]

Simple Application-Specific Constraints for RDF Models - Shawn Simister

ssimister: RDF Validation at Google ... we are triplifying the Web ... What approaches did we consider? ... Schematron, SchemaRama ... SPIN constraints ... nice to be able to have metadata on constraints, like for severity of violations ... OWL Integrity Constraints ... Our Solution ... path-based constraints ... What did we learn ... Most constraints are property paths. SPARQL handles the rest ... constraints describe the app, not the world it inhabits ... Constraints need to be app specific
<arnaud> how do the constraints get created? do you do it, does the developer?
ssimister: some of each. gmail team had their own internal software with their internal test cases, so it was easy to get them to generate stuff for us.
<guoqian> -- schema.org
<sandro> surely an app has one set of property paths for what's needed to use the data at all, and another that it might be able to use.
ssimister: we only talk about the required stuff. for one thing, we're trying to not discourage people from providing information we don't happen to use yet.
<sandro> It would be nice, probably, to still tell folks what data you can use if provided.
ssimister: good idea.
DBooth: Are the paths RDF property paths?
ssimister: No they are not ... very similar
Arthur: Why do you split into context and constraints when you can use a single SPARQL query?
ssimister: The design came from Schematron
<mgh> Seems like a constrained subset of the property paths that can be used in SPARQL 1.1 - not supporting *, + notation
Question about the parser
ssimister: Superset of RDF ... not public yet

Using SPARQL to validate Open Annotation RDF Graphs - Tim Cole

Tim: Context: W3C Open Annotation CG ... has 102 members ... narrow and easy use case for RDF
Tim describes the OA data model
Tim: describes the OA Ontology ... LoreStore Annotation Repository ... store, search, query, display and validate annotations ... approach Bob Morris on FilteredPush RDF Validation
Tim: rules are grouped into RuleSets. All rules in a set must be valid ... the OAD namespace has some extensions to the OA namespace

[Q&A]

Tim: I was happy that most of these topics came up in the more complex cases as well

COFFEE BREAK for 15 Minutes

Requirements List (discussion pirate pad, report summary)

The group collaborated on a PiratePad, with some extra coordination because PiratePad permits a maximum of 10 simultaneous users.

Minutes formatted by David Booth's scribe.perl version 1.138 (CVS log)
$Date: 2013-10-04 17:30:29 $
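For concreteness (an illustrative sketch of my own with a made-up ex: namespace, not part of the record): the kind of low-level SPARQL constraint the participants repeatedly called hard for "mortals" to read, here checking that every person carries exactly one age value:

PREFIX ex: <http://example.org/>

# Report every ex:Person that does not have exactly one ex:age
SELECT ?s (COUNT(?age) AS ?n)
WHERE {
  ?s a ex:Person .
  OPTIONAL { ?s ex:age ?age }
}
GROUP BY ?s
HAVING (COUNT(?age) != 1)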
http://www.w3.org/2013/09/10-rdfval-minutes
CC-MAIN-2014-10
refinedweb
2,926
55.74
Greetings, I want to prompt the user to enter today's date in the format dd/mm/yyyy. First they enter the day (dd) i.e. 1 - 31 into the integer variable dd. The validation routine is as follows:-

- Is the day value EMPTY, i.e. they hit the ENTER KEY? Then re-prompt for the day value.
- Is the day value an integer? If not, then re-prompt for the day value.
- If the day value is an integer, ensure it's between 1 - 31; if not, then re-prompt for the value, else continue.

I'm at my wits' end for such a simple problem, as I cannot for the life of me trap when the user hits return. Knowing cin.fail won't trap this, I then use cin.get(), but having to check initially when they enter the value and then again when they are re-prompted due to a fail on one of the other validation checks has become too complicated for me.

I have the following code below:-

Code:
#include <iostream>
using namespace std;

int main()
{
    int dd;
    bool validDay = false;

    // Prompt and validate day entry
    // Loop until date is valid
    cout << "\n\tPlease enter todays date:-\n\n";
    cout << "\tPlease enter day e.g. 1 - 31 :";
    cin >> dd;

    while (!validDay)
    {
        // Check if enter has been pressed
        // Do IF day IS entered IS a number
        if (!cin.fail())
        {
            // Is day value valid
            if (dd >= 1 && dd <= 31)
            {
                validDay = true;
            }
            else
            {
                cout << "\n\tInvalid day, please enter day value between 1 - 31 :";
            }
        }
        else // The user entered something other than a number, doh!
        {
            while (cin.fail())
            {
                cin.clear();
                cout << "\n\tWarn: Please enter a NUMBER ONLY between 1 - 31 :";
                cin.ignore(80, '\n');
                cin >> dd;
            }
        }
    }
}

Can anyone please assist? I think I'm making it too complicated.

I believe I will have to check if they haven't entered a value on the initial prompt, and re-prompt if they entered a non-numeric value or if they entered a number <= 0 or > 31.

Thanks Rob
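One way to tackle all three checks, offered here as an answer-style sketch rather than the thread's accepted solution: read the whole line first with getline, so a bare Enter press is easy to detect, then parse the number out of the string.

Code:
#include <iostream>
#include <sstream>
#include <string>
using namespace std;

int main()
{
    int dd = 0;
    string line;

    while (true)
    {
        cout << "\tPlease enter day e.g. 1 - 31 :";
        getline(cin, line);

        if (line.empty()) // user just pressed ENTER
        {
            cout << "\n\tNo value entered, please enter a day value.\n";
            continue;
        }

        istringstream iss(line);
        if (!(iss >> dd)) // not a number
        {
            cout << "\n\tWarn: Please enter a NUMBER ONLY between 1 - 31.\n";
            continue;
        }

        if (dd < 1 || dd > 31) // out of range
        {
            cout << "\n\tInvalid day, please enter day value between 1 - 31.\n";
            continue;
        }

        break; // value is valid
    }

    cout << "\n\tDay accepted: " << dd << "\n";
    return 0;
}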
https://cboard.cprogramming.com/cplusplus-programming/115693-newbie-validation-integer-when-enter-key-has-been-pressed.html
CC-MAIN-2017-26
refinedweb
383
69.72
Hello, In IntelliJ, when you copy a .java class file from ClassA.java to ClassB.java, the name of the class in the file ClassB.java is automatically changed to ClassB. I would expect the same behavior in PhpStorm, as it does not make sense to have 2 classes with the same name, at least not in the same namespace. But when I copy a class, either with ctrl+c or F5, it simply copies the file but does not rename the class itself. I searched the help and forum and did not find anything about this. Is there a way to force PhpStorm to rename the class in the new file please? Thanks

Hi there, There is no such action as "Copy Class" in PhpStorm -- ATM all operations performed on files do not affect class names/namespaces inside them .. so, for example, if you want to move a file from one folder to another (or rename a file) while still keeping the namespace/class name in sync .. then you should use "Refactor | Rename" or "Refactor | Move" on the actual class (invoking these actions on a file in the Project View panel will do one thing while invoking them in the actual Editor while having the caret on the class name will do a different thing).

Thanks for the answer. The thing is that I don't want to move it, nor do I want to refactor it and have all its usages updated. I just want to copy it as a new, different class, as is done in IntelliJ for Java classes. This is especially useful when mapping PDO queries to classes and you have tables that are very similar except for one or two fields. I can still copy the file then update the name manually, but it would be nice to have it done automatically. As you said "ATM" - is there any plan in the future to have this?

1) I gave you the link to a specific "Copy Class" ticket (which is not closed or declined) -- that should answer this question straight away. You can star/vote/comment to get notified on progress. Feel free to ask questions/describe your needs there -- maybe it will speed up implementation of that ticket. So far this is a rather rare use case + no votes .. therefore it's not in the "implement this in next version" list.

2) My "ATM" was referring to actions when a user manipulates files ("move"/"rename" -- no "copy" here) and explained about (more appropriate) alternatives to those actions. But yes -- there are tickets that ask to alter class names/namespaces when performing move/rename on files. ATM the "copy" action is not one of them. I'd assume that if such behaviour is implemented it will cover "copy" as well.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206320759-Copy-class-issue
CC-MAIN-2020-24
refinedweb
459
71.75
I may have a silly question. I tried to directly use the "say" function of Perl without declaring "use v5.1x". What should I do to achieve this? Why can't the "say" function be used directly? My perl is:

This is perl 5, version 18, subversion 2 (v5.18.2) built for x86_64-linux-gnu-thread-multi

Thanks.

The say function is not imported into the main namespace (Packages) by default. The shortest chain to importing it is via the feature pragma: use feature 'say';. say was implemented in Perl v5.10, but if it were imported by default, there would be the risk of backward compatibility problems and spurious warnings for perfectly fine old code. Alternatively, you can use say without importing it by giving the full package: CORE::say();.

#11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.

kennethk Thank you for your prompt response. Actually, I have been wondering why "say" was not imported by default if this is a better feature and upgrade. Similar situation for the smart comparison operators...

For backwards compatibility with programs/modules that have a sub/method called say. You do not need to enable a feature to use the smart match operator. You do for given and when, though, for the same reason as say.
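Summarizing the options from the thread in one runnable snippet (the fully qualified CORE:: call as a plain function works on sufficiently recent perls; the thread's v5.18 qualifies):

use strict;
use warnings;

# Option 1: enable the feature explicitly
use feature 'say';
say 'hello';              # works once the feature is on

# Option 2: a version declaration implies it
# use v5.10;              # (or later) also turns on say

# Option 3: skip the import entirely
CORE::say('hello');       # fully qualified, no feature needed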
http://www.perlmonks.org/?node_id=1097147
CC-MAIN-2017-47
refinedweb
225
68.47
Red Hat Bugzilla – Full Text Bug Listing

Some uses of chroot, such as building packages for different releases, clash with SELinux as they may need to use different security policies to what is available on the host system. See discussion at:

There has been some internal SELinux project discussion on the issue of allowing multiple policies to be loaded on a system (perhaps via namespaces), which is also a requirement for fully supporting the shipping of policy with RPMs (e.g. the file system labels for the RPM being built may not exist on the host system, and also use different policy).

Changing version to '9' as part of upcoming Fedora 9 GA. More information and reason for this action is here:

livecd patches and setfiles_mac are in or going into Rawhide, which will allow livecd to be built within an enforcing environment.

*** Bug 459398 *** (James Morris)

Please can the version be bumped to 11? Feedback about that?

--- Fedora Bugzappers volunteer triage team

James, can multiple policies be loaded on a system now?

no. We are adding changes to MOCK to stop it from doing SELinux activity inside the chroot, which will allow us to enforce policy on the entire mock environment and mock will not try to load policy.

Mock now works correctly with SELinux.
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=430075
CC-MAIN-2017-13
refinedweb
215
60.14
Multiple colons in namespace names? Discussion in 'XML' started by Grant Robertson, Jan 13, 2007.
http://www.thecodingforums.com/threads/multiple-colons-in-namespace-names.392721/
CC-MAIN-2016-07
refinedweb
155
79.6
Compiler Error C2440

Error Message: 'conversion' : cannot convert from 'type1' to 'type2'

The compiler cannot cast from 'type1' to 'type2'. C2440 can occur as the result of conformance and update work that was done in the Standard C++ Library. See Standard C++ Library Changes: Visual C++ .NET 2003 and Library Changes in Visual C++ Releases for more information.

Example

C2440 can be caused if you attempt to convert a pointer to member to void*. The following sample generates C2440.

The C2440 errors on lines 15 and 16 of the next sample will be qualified with the Incompatible calling conventions for UDT return value message. (A UDT is a user-defined type, such as a class, struct, or union.) These types of incompatibility errors are caused when the calling convention of a UDT specified in the return type of a forward declaration conflicts with the actual calling convention of the UDT and when a function pointer is involved. In the example, we first have forward declarations for a struct and for a function that returns the struct; the compiler assumes that the struct uses the C++ calling convention. Then we have the struct definition, which, by default, uses the C calling convention. Since the compiler does not know the calling convention of the struct until it finishes reading the entire struct, the calling convention for the struct in the return type of get_c2 is also assumed to be C++. The struct is followed by another function declaration that returns the struct, but at this point, the compiler knows that the struct's calling convention is C++. Similarly, the function pointer, which returns the struct, is defined after the struct definition, so the compiler knows that the struct uses the C++ calling convention. To resolve C2440 because of incompatible calling conventions, declare functions that return a UDT after the UDT definition.

// C2440b.cpp
struct MyStruct;
MyStruct get_c1();

struct MyStruct {
    int i;
    static MyStruct get_C2();
};

MyStruct get_C3();

typedef MyStruct (*FC)();
FC fc1 = &get_c1;            // C2440, line 15
FC fc2 = &MyStruct::get_C2;  // C2440, line 16
FC fc3 = &get_C3;

class CMyClass {
public:
    explicit CMyClass( int iBar) throw() { }
    static CMyClass get_c2();
};

int main() {
    CMyClass myclass = 2;   // C2440
    // try one of the following
    // CMyClass myclass(2);
    // CMyClass myclass = (CMyClass)2;

    int *i;
    float j;
    j = (float)i;   // C2440, cannot cast from pointer to int to float
}

C2440 can also occur if you assign zero to an interior pointer:

C2440 can also occur for an incorrect use of a user-defined conversion. For more information on user-defined conversions, see User-Defined Conversions. The following sample generates C2440.

// C2440d.cpp
// compile with: /clr
value struct MyDouble {
    double d;
    // convert MyDouble to Int32
    static explicit operator System::Int32 ( MyDouble val ) {
        return (int)val.d;
    }
};

int main() {
    MyDouble d;
    int i;
    i = d;   // C2440
    // Uncomment the following line to resolve.
    // i = static_cast<int>(d);
}

C2440 can also occur if you try to create an instance of a Visual C++ array whose type is an Array. For more information, see array (Visual C++). The following sample generates C2440.

C2440 can also occur because of changes in the attributes feature. The following sample generates C2440.

This error can also be generated as a result of compiler conformance work that was done for Visual C++ 2005: the Visual C++ compiler no longer allows the const_cast operator to down cast when compiling source code that uses /clr programming.
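The sample for the const_cast case is not reproduced above; a minimal illustration of my own (not the original MSDN sample) that triggers the error:

// Illustrative only -- not the original MSDN sample.
// compile with: /clr (the same conversion is also rejected natively)
struct Base { virtual ~Base() {} };
struct Derived : Base {};

int main() {
    const Base* b = new Derived;
    // const_cast may only add or remove const/volatile; it cannot
    // also perform the down cast, so this line emits C2440:
    Derived* d = const_cast<Derived*>(b);
    // Split the conversion into two casts instead:
    Derived* d2 = static_cast<Derived*>(const_cast<Base*>(b));
}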
To resolve this C2440, use the correct cast operator (for more information, see Casting Operators). See Breaking Changes in the Visual C++ 2005 Compiler for more information. The following sample generates C2440.

C2440 can also be generated using /clr:oldSyntax. The following sample generates C2440.
http://msdn.microsoft.com/en-us/library/sy5tsf8z(v=VS.80).aspx
CC-MAIN-2014-15
refinedweb
608
52.19
For people new to DirectX development, see MSDN. Ideally, I'd like a Win32 desktop project template that looks similar to the other C++ DirectX templates as a common starting point for some tutorials and other explanatory posts. And so, here it is! This is a Visual Studio extension for VS 2013 Pro, VS 2013 Premium, VS 2013 Ultimate, or VS 2013 Community which installs a Direct3D Win32 Game project template for Visual C++.

VS Express for Windows Desktop: I recommend taking a look at the VS 2013 Community edition if you don't have the budget for purchasing a license for the VS 2013 Pro+ editions.

DirectX 12: The details of the Direct3D 12 versions of Direct3D Win32 Game are different than the Direct3D 11 version since the APIs are quite different, but it's the same design. See this page for a more detailed overview.

Related: Direct3D Game Visual Studio templates (Redux)

Using the VSIX

To Install, launch Direct3DWin32Game.vsix on a system with VS 2013 installed. If VS 2013 was already open, exit and restart it. To Uninstall, in VS 2013 go to Tools / Extensions and Updates... then uninstall "Direct3DWin32Game". You then restart VS 2013.

Creating a new project

To create a new project, use File / New -> Project... When finished, you have a Win32 desktop app project that is ready to use for learning Direct3D 11 on Windows 7 or Windows 8.x. For those familiar with the existing "DirectX App" VS templates or XNA Game Studio, it has a similar structure with a Game class with methods like Render, Update, and Clear. Search for TODO for hints as to where to add your code.

Template overview

The project has the following properties:

- It creates a window, Direct3D 11 device, and swap chain with depth buffer. This supports both DirectX 11.0 and DirectX 11.1 runtime systems. Debug configurations enable the Direct3D debug device with some additional debugging features enabled.
- By default it supports all possible Direct3D feature levels from 9.1 to 11.1.
- The default swapchain format is BGRA 8-bit UNORM with a D24S8 depth buffer.
- The default is for Game::Clear to render a classic "Cornflower blue" screen.
- This project makes use of StepTimer, calling the Game::Update method as needed to update the game state, and defaults to variable-time steps.
- The application handles resizing--the smallest window allowed is 320 x 200, defaulting to 800 x 600 or whatever is returned by Game::GetDefaultSize. When the window is resized, Game::OnWindowSizeChanged is called which in turn calls Game::CreateResources in order to update the swap chain, re-create any window-sized dependent resources like the depth buffer, and reset the default viewport.

The project template makes some simplifying assumptions:

- If the window is minimized, Game::OnSuspending is called (same as if a power management state is encountered).
- No support for either 'exclusive' full-screen or 'fake' full-screen.
- It does not have any handling of input (keyboard/mouse), which you can add by modifying the WndProc in Main.cpp. You can easily add support for the Xbox Common Controller gamepad using DirectX Tool Kit's GamePad class.

Some additional notes:

- This makes use of Microsoft::WRL::ComPtr as a smart-pointer to manage the lifetime of the COM objects.
- It makes use of C++ Exception handling for errors, including the DX::ThrowIfFailed helper present in the other DirectX templates.
- The modules default to using the DirectX namespace to simplify use of DirectXMath or other helper libraries like DirectX Tool Kit.
Following good C++ coding practice, you should use fully qualified names in the headers (i.e. in Game.h).

- If Game::Present detects a device-removed or device-reset case, it will call Game::OnDeviceLost which releases all Direct3D objects and then calls Game::CreateDevice and Game::CreateResources again to re-create them.
- COM is initialized using a multi-threading model to simplify use of Windows Imaging Component (WIC) or XAudio2.
- The project sets _WIN32_WINNT to 0x0600 in pch.h to support Windows Vista, Windows 7, or Windows 8.x. Set this to 0x0601 to require Windows 7 or later (i.e. you don't want to deal with Windows Vista Direct3D 11 deployment), or 0x0602 to require Windows 8.0 or later (so you can rely on XAudio 2.8 or XInput 1.4).
- The project includes a complete embedded manifest.
- The project supports both Win32 (32-bit) and x64 native configurations (Debug and Release).

Adding DirectX Tool Kit

The basic project template is self-contained. If you want to make use of DirectX Tool Kit with this template, there are two ways to add it:

Use NuGet

Go to Project / Manage NuGet Packages... Search for "DirectX Tool Kit" online and select the package with Id: directxtk_desktop_2013 (use the latest one which is 2014.11.24.2 at this time; if you have an older version you can update it using the NuGet interface)

Use Project-to-project references

Follow the directions given on the CodePlex site under Adding to a VS solution with the DirectXTK_Desktop_2013.vcxproj project, and adding the appropriate Additional Include Directories property for all platforms & configurations.

Note: This applies to adding use of DirectXTex, DirectXMesh, Effects 11, and/or UVAtlas to the project as well.

Adding DirectX Tool Kit headers

Edit the pch.h file in the project and add the following lines at the end:

#include "CommonStates.h"
#include "DDSTextureLoader.h"
#include "DirectXHelpers.h"
#include "Effects.h"
#include "GamePad.h"
#include "GeometricPrimitive.h"
#include "Model.h"
#include "PrimitiveBatch.h"
#include "ScreenGrab.h"
#include "SimpleMath.h"
#include "SpriteBatch.h"
#include "SpriteFont.h"
#include "VertexTypes.h"
#include "WICTextureLoader.h"

Then add the following line near the top of Game.cpp after the existing using namespace statement to make it easier to use SimpleMath in your code.

using namespace DirectX::SimpleMath;

Windows Store 8.1 / Windows phone 8.1 / Universal apps

You can get a similar "empty" starting project creating a new DirectX App project, removing/deleting all files under the Content folder and then replacing them with these two files:

Game.h

// Game.h
#pragma once

#include "..\Common\DeviceResources.h"
#include "..\Common\StepTimer.h"

namespace DXTKApp1 // TODO: Change to match project namespace
{
    class Game
    {
    public:
        Game(const std::shared_ptr<DX::DeviceResources>& deviceResources);
        void Update(DX::StepTimer const& timer);
        void Render();

        void CreateDeviceDependentResources();
        void CreateWindowSizeDependentResources();
        void ReleaseDeviceDependentResources();

    private:
        // Cached pointer to device resources.
        std::shared_ptr<DX::DeviceResources> m_deviceResources;

        // Direct3D Objects (cached)
        D3D_FEATURE_LEVEL m_featureLevel;
        Microsoft::WRL::ComPtr<ID3D11Device2> m_d3dDevice;
        Microsoft::WRL::ComPtr<ID3D11DeviceContext2> m_d3dContext;
    };
}

Game.cpp

// Game.cpp
#include "pch.h"
#include "Game.h"

#include "..\Common\DirectXHelper.h"

using namespace DXTKApp1; // TODO: Change to match project namespace

using namespace DirectX;
using namespace Windows::Foundation;

Game::Game(const std::shared_ptr<DX::DeviceResources>& deviceResources) :
    m_deviceResources(deviceResources)
{
    CreateDeviceDependentResources();
    CreateWindowSizeDependentResources();
}

void Game::Update(DX::StepTimer const& timer)
{
    float elapsedTime = float(timer.GetElapsedSeconds());

    // TODO: Add your game logic here
    elapsedTime;
}

void Game::Render()
{
    // TODO: Add your rendering code here
}

void Game::CreateDeviceDependentResources()
{
    m_featureLevel = m_deviceResources->GetDeviceFeatureLevel();
    m_d3dDevice = m_deviceResources->GetD3DDevice();
    m_d3dContext = m_deviceResources->GetD3DDeviceContext();

    // TODO: Initialize device dependent objects here (independent of window size)
}

void Game::CreateWindowSizeDependentResources()
{
    // TODO: Initialize windows-size dependent objects here
}

void Game::ReleaseDeviceDependentResources()
{
    m_d3dDevice.Reset();
    m_d3dContext.Reset();

    // TODO: Add Direct3D resource cleanup here
}

After that, update the *Main.cpp/.h files to use the new Game class rather than Sample3DSceneRenderer, and remove SampleFpsTextRenderer usage. Remember to update the DXTKApp1 namespace in both Game.cpp/.h files to match that in the rest of the project. You can use NuGet (Id: directxtk_windowsstore_8_1 or directxtk_windowsphone_8_1) or project-to-project references (DirectXTK_Windows81.vcxproj or DirectXTK_WindowsPhone81.vcxproj) to add DirectX Tool Kit.

DirectX Tool Kit: This template is used extensively in the DirectX Tool Kit tutorial series.

GitHub: The files for the template are also hosted on GitHub.

Download: Direct3DWin32Game.vsix (VS 2013), Direct3DUWPGame (VS 2015 for both Win32 and UWP)
I totally see the value in having a VB DirectX template, but that's going to have to be integrating something like SharpDX to actually work. @Chuck – Thanks. I appreciate the sobering response. Makes me fond for the VS2005-ish days when it was very easy in VB.NET to work with DirectX. Hopefully they will get this all back on track for Windows desktop in VS2015? Even back in VS 2002 .NET, you still needed managed assemblies to use DirectX from VB. I hear great things about SharpDX as a wrapper, so check it out. I like this. It's very useful to me. However, I'm a bit confused as to 1. why no support for fullscreen? 2. How do I add support for fullscreen? Thanks! Doing fullscreen correctly is actually quite challenging, and really complicates the template. I have a task on my backlog to do a tutorial lesson on making it work fullscreen, but generally for development and education purposes it's better if it's windowed. Ah, I see. I actually figured out how to make it fullscreen, with just a couple lines of code. Thanks for the template! Getting it to go fullscreen is easy enough. Doing it robustly without getting any performance issues is the complicated part 🙂 Also, the template really has to support both DirectX 11.0 and DirectX 11.1+ methods for doing fullscreen, and potentially cope with multi-monitor as well. Fair point. But the current fullscreen will be enough for me for now. I second the request for the a fullscreen template. I've gone through this one: msdn.microsoft.com/…/ee417025(v=vs.85).aspx, but without code samples it is daunting to get it prefect. It does not work in VS Community 2015. It does work with VS 2015 Community and VS 2017 Community assuming you have the proper workloads installed. For VS 2015, that means having the Windows Tools 1.4.x optional feature installed in addition the C++ tools.
https://blogs.msdn.microsoft.com/chuckw/2015/01/06/direct3d-win32-game-visual-studio-template/
CC-MAIN-2017-34
refinedweb
1,865
58.69
Writing Conversion Methods in Rails

As developers, we're occasionally tasked with maintaining software that we weren't directly involved in crafting. We don't have extensive knowledge of the domain, yet we can read the code, (hopefully) understand what's there, and modify it. We'll even attempt to improve the application and leave it in a better state than we found it. What follows is one such tale.

I recently had to make a change on a legacy Rails application. In order to make the change, I had to find the correct instance of Site. I didn't know which attribute on Site to query for, so the first thing I did was open the database schema to see a list of all attributes on Site. Wouldn't it be nice if I didn't need to look at the database schema to find the attribute I need? What if I told you I had an easier way of finding instances of your classes?

Avdi Grimm reminded me in his book Confident Ruby (which we loved) about Ruby's conversion methods. Specifically, methods like Array('thing') # => ["thing"] and URI('') # => URI object with the URL of ''. I thought, wouldn't it be great if I could write Site('widget') and have it return the correct instance of Site? So that's just what I did.

Writing the conversion method was fairly simple:

def Site(site)
  if site.is_a?(Site)
    site
  elsif site.is_a?(String)
    Site.find_by_subdirectory!(site)
  else
    raise ArgumentError, 'bad argument (expected Site object or string)'
  end
end

The above code says that if site is already an instance of Site, then return it. If site is a string, find the instance of Site by subdirectory. If site is neither an instance of Site nor an instance of String, then raise an ArgumentError, informing the developer what the method was expecting.

Great -- now I have a conversion method in which I can write Site('widget') and get the instance of the Widget site. But where do I put this method in a Rails application? I could put it in an initializer. But it feels like it should really go with the Site class definition, in case a developer wants to extend it in the future. Where does Ruby place its conversion methods? To answer that question, I took a peek at Ruby's URI class. I noticed how the conversion method was with the class definition and at the bottom of the file, within the Kernel module. I followed this precedent and modified the Site class to the following:

class Site < ActiveRecord::Base
  # … wow. such code …
end

module Kernel
  # Returns +site+ converted to a Site object.
  def Site(site)
    if site.is_a?(Site)
      site
    elsif site.is_a?(String)
      Site.find_by_subdirectory!(site)
    else
      raise ArgumentError, 'bad argument (expected Site object or string)'
    end
  end
end

I really like that I can now retrieve a site with Site('widget') and not have to look up the correct attribute to query by. What do you think? Do you use conversion methods in your Rails applications? Let me know in the comments below.
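A quick sketch of how the finished method behaves (assuming a Site record whose subdirectory is 'widget' exists; find_by_subdirectory! raises ActiveRecord::RecordNotFound otherwise):

site = Site('widget')   # looks up Site.find_by_subdirectory!('widget')
same = Site(site)       # returns the same instance, untouched
Site(42)                # raises ArgumentError (expected Site object or string)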
https://www.viget.com/articles/writing-conversion-methods-in-rails/
CC-MAIN-2019-09
refinedweb
521
73.37
Welcome back. I'm experimenting with DataValve, mixing it with Seam-3.1.0.CR1 and PrimeFaces-3.0.RC1 in GlassFish-3.1.2.b13. I have some ideas:

1. Make doUpdate() abstract in AbstractEntityHome; then we can implement doUpdate() in JpaEntityHome.

public class JpaEntityHome extends AbstractEntityHome {
  @Override
  protected void doUpdate() {
    getEntityManager().merge(getEntity());
  }
  // …
}

2. Change EntityHome as follows:

public interface EntityHome<T, PK> {
  T getEntity();
  void setEntity(T entity);
  PK getId();
  void setId(PK id);
  // …
}

3. In AbstractQueryDataProvider, deprecate addRestrictionStr() and modify isValidTestValue() as follows:

protected boolean isValidTestValue(Object obj) {
  // …
  if (obj instanceof String) {
    String s = (String) obj;
    return s.length() != 0;
  }
  // …
}

4. How about adding a delete() method in EntityHome?

5. Update the link "Download Now": DataValve fans should download the latest source from … instead of …

Hey There, Thanks for the notice on the links, I've already fixed them. The entity home stuff is still very much in development, but since I didn't know anyone was using it, I haven't been developing it much. As for addRestrictionStr, the reason for the two is so you still have the option of running queries against null strings. I'm thinking of changing it though to use the one method, but with flags to trim and exclude zero length strings, which will be the default, but you can turn it off. I'll have to have a think about it, because some of these things get evaluated at query time (i.e. EL Expressions) when the user is not in control, so I'll have to have a look and see what is done there.

Cheers, Andy
http://www.andygibson.net/blog/quickbyte/more-time-saved-by-unit-tests/
CC-MAIN-2019-09
refinedweb
275
64.1
dm.zdoc 2.0

pydoc based documentation for Zope

dm.zdoc: Tiny wrapper around pydoc to make it usable for Zope.

Note: Python versions below 2.6 lack good support for namespace packages in pydoc. While Zope itself does not use namespace packages before version 2.12 (which uses Python 2.6), important Zope applications (such as Plone) do use namespace packages. In these cases, the documentation produced by pydoc (and by extension zdoc) is incomplete.

Usage

zdoc can either be used via the script dmzdoc, via module import, or integrated in a running Zope instance. In the first two cases it might be necessary to set the Zope environment variables INSTANCE_HOME and SOFTWARE_HOME.

Integrated in a running Zope instance

For this use, you must install the module in your Zope installation and activate its configure.zcml at Zope startup. This will give the "Zope Root Folder" the view @@zdoc which presents the documentation in the same way as the pydoc HTTP server.

ATTENTION: Exposing the documentation of a Zope instance in this way provides sensitive insights and could give hackers valuable clues for attacks. Likely, you will install this only in development instances with restricted access.

Version History

- 2.0 support for the "integrated in a running Zope instance" use case
- 1.1 works around a bug in either zope.interface or inspect.

- Downloads (All Versions): 4 downloads in the last day, 61 downloads in the last week, 258 downloads in the last month
- Author: Dieter Maurer
- Keywords: pydoc documentation Zope
- License: BSD (see "dm/zdoc/LICENSE.txt" for details)
- Categories
- Package Index Owner: dmaurer
- DOAP record: dm.zdoc-2.0.xml
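A hypothetical command-line session for the first use case (the paths are placeholders, and the assumption that dmzdoc forwards its arguments to pydoc follows from the "tiny wrapper" description above rather than from documented flags):

# paths are placeholders for your own instance layout
export INSTANCE_HOME=/var/zope/instance
export SOFTWARE_HOME=/usr/lib/zope/lib/python

# view documentation for a Zope package, pydoc-style
dmzdoc Products.PythonScripts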
https://pypi.python.org/pypi/dm.zdoc/
CC-MAIN-2015-40
refinedweb
273
59.09
A Publisher's Journey Into His Former Life

When I first heard about lambdas, I was highly skeptical, primarily because I hadn't seen or thought about these things since my computer science classes in college. If you need further evidence for my general skepticism, please see the definition of lambda expressions on Wikipedia. However, I was forced to look at them while researching the Linq project, so I decided to spend a bit more time with them and have a deeper look. And as it is with so many things, the more I surrounded myself with them, the more I kind of started to like them. Therefore, I'll spend a few seconds going over how I see them and how you can use them.

At the core, a lambda expression is simply a function. You give it inputs, it gives you output. These expressions can then be linked together to build very complex programs. For the mind-numbing explanation, please see the Wikipedia article listed above. Anyways, in .NET terminology a lambda expression is a function that fulfills the signature requirements specified by a delegate. For a little refresher, consider the following delegate that is defined in the .NET Framework 2.0.

public delegate bool Predicate<T> (T obj)

By the way, if you haven't gotten used to generics yet, I would start now as use of them is dramatically increased in the .NET Framework 3.5. Anyways, the Predicate delegate is used by standard .NET collection classes to enable searching by arbitrary criteria. For example, let's say I define the class Person as follows.

public class Person {
    public Person(string name, int age) {
        Name = name;
        Age = age;
    }
    public string Name;
    public int Age;
}

If I create a list of Person objects, I can filter that list to find all people with an age greater than 10 by creating an instance of the Predicate delegate and passing it to the FindAll method on my List class. The code would look as follows.

 1: public static void Main() {
 2:     List<Person> people = new List<Person>();
 3:     people.Add(new Person("howard", 29));
 4:     people.Add(new Person("jennifer", 30));
 5:     people.Add(new Person("hannah", 8));
 6:
 7:     List<Person> oldPeople = people.FindAll(ageGT10Filter);
 8:
 9:     foreach (Person p in oldPeople)
10:         Console.WriteLine(p.Name);
11: }
12:
13: static bool ageGT10Filter(Person p) {
14:     return p.Age > 10;
15: }

When the FindAll method is called (line 7), it iterates through my List object and calls my delegate instance (ageGT10Filter - line 13), passing it the Person object at that point in the list. If the delegate instance returns true, that Person object is added to the list returned by FindAll. C# 2.0 extended this pattern of delegate passing with the introduction of anonymous methods (therefore, when looking at lambdas, I would expect that the concept will initially seem more familiar to C# developers as anonymous methods were not supported by VB.NET in VS 2005). By using anonymous methods, the code block above would look more like the following.

 7: List<Person> oldPeople = people.FindAll(delegate(Person p)
 8:     { return p.Age > 10; }
 9: );
10:
11: foreach (Person p in oldPeople)
12:     Console.WriteLine(p.Name);
13: }

As you can see, the delegate method instance is replaced by an inline method declaration (lines 7-9). This provides 2 major benefits to the previous approach: 1) fewer methods and smaller code, and 2) the constant declaration of 10 could be replaced with a user supplied value such as the following.
List<Person> oldPeople = people.FindAll(delegate(Person p) { return p.Age > iValue; });

So now let's look at this implementation in C# 3.0 using lambda expressions.

List<Person> oldPeople = people.FindAll(p => p.Age > 10);

The first thing that I want to point out here is that the argument required by the FindAll method has not changed at all – it still requires a Predicate delegate instance. The lambda in this context is used as a shorthand way of defining the delegate instance. If we then look at the structure of the lambda expression itself, we'll see that it's really very simple.

p => p.Age > 10

- p – the inputs (delegate parameters)
- => – the lambda operator
- p.Age > 10 – the expression to be evaluated; the expression's return must match the delegate's return value type

Now, at this point, you're probably saying, "big deal – this is just a slightly shorter way to do something that I could already do in C# 2.0" – and if this was the only benefit that lambdas brought, you would be correct. However, there is something much, much cooler about lambdas in C# 3.0. This is that they can be treated as data and reinterpreted by other code (metaprogramming anyone?). Rather than delve into an esoteric description, let's just see an example. Say that I wanted to be able to develop a generic query language – one that would be based on native C# code, but could be translated to all sorts of data sources, like SQL Server (starting to sound familiar??). For example, we would want our database query to look just like our previous in-memory query.

SQLEntityProvider.FindAll<Person>(p => p.Age > 10);

For the purposes of this example, we're going to assume a TON about the mapping relationship between the object and the database. Our generic provider might look something like the following.

1: public static IEnumerable<T> FindAll<T>(
2:     Expression<Predicate<T>> filterExp) {
3:
4:     StringBuilder sb = new StringBuilder();
5:     sb.Append("select * from tbl" + typeof(T).Name);
6:     sb.Append(" where ");
7:
8:     BinaryExpression expBody = filterExp.Body as BinaryExpression;
9:
10:    if (expBody.Left.NodeType == ExpressionType.MemberAccess)
11:        sb.Append(((MemberExpression)expBody.Left).Member.Name);
12:
13:    switch (expBody.NodeType) {
14:        case ExpressionType.GreaterThan:
15:            sb.Append(" > ");
16:            break;
17:        case ExpressionType.LessThan:
18:            sb.Append(" < ");
19:            break;
20:        case ExpressionType.Equal:
21:            sb.Append(" = ");
22:            break;
23:        case ExpressionType.NotEqual:
24:            sb.Append(" <> ");
25:            break;
26:    }
27:
28:    if (expBody.Right.NodeType == ExpressionType.Constant)
29:        sb.Append(((ConstantExpression)expBody.Right).Value);
30:
31:    Console.WriteLine(sb.ToString());
32:
33:    return null;
34: }

When the program is run and we examine the console output, we see the following.

select * from tblPerson where Age > 10

Now when you step back and think about it, this is actually REALLY cool – and what enables this coolness is the fact that lambda expressions, in addition to being directly executable, can also be treated as data. This is what enabled me to look through the expression tree and convert the lambda expression elements into equivalent SQL statement elements. So how do you control whether your lambda expression is treated as code (delegate) or as data? Simple – either pass it to a parameter expecting a delegate instance (as in my first case) OR pass it to a parameter expecting an object of type Expression<TDelegate>. The Expression generic type has special meaning in C# 3.0 and parses the lambda expression into a data structure rather than generating IL executable code.
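To see that duality in miniature, here is a small sketch — it reuses the Person class defined earlier, and everything else is standard C# 3.0 / .NET 3.5:

using System;
using System.Linq.Expressions;

static class LambdaDuality {
    static void Main() {
        // As code: the lambda compiles directly to a delegate (IL)
        Predicate<Person> asCode = p => p.Age > 10;
        Console.WriteLine(asCode(new Person("howard", 29)));   // True

        // As data: the identical lambda text becomes an expression tree
        Expression<Func<Person, bool>> asData = p => p.Age > 10;
        Console.WriteLine(asData.Body);                        // (p.Age > 10)

        // The tree can still be turned back into executable code
        Func<Person, bool> compiled = asData.Compile();
        Console.WriteLine(compiled(new Person("hannah", 8)));  // False
    }
}

The only difference between the two declarations is the declared type on the left; the compiler decides whether to emit IL or build a data structure based on it.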
So now that you have a general idea of how this new language feature works, what can you do with it? As is always the case – whatever you want! Personally, I would love to see somebody write an extension to NHibernate to allow use of Linq queries in the place of HQL – so if anyone's looking down that path, please let me know! However, even if you're not running to the bleeding edge to start implementing your own lambda based solutions, I hope that this article has given you just a bit of insight into how Linq works, and how we are just scratching the surface on what we can do with this powerful new language feature!
http://blogs.msdn.com/howard_dierking/archive/2007/01/18/lambda-lambda-lambda.aspx
crawl-001
refinedweb
1,586
65.62
I'm trying to create an array of empty string vectors, but I can't manage to initialize the vectors to be able to push values into them:

vector <string> v[500]; // vector initializing code
v[0].push_back("hello"); // should work now

'v' does not name a type

As pointed out in all the comments on your question, your error occurs because you wrote your code outside of a main function. Every C++ program must have one. By the way, here are some good practices for free (also found in the comments):

- Use std::array instead of a C-array if you know the size at compile time (and I believe you do).
- Avoid using namespace std; because it's bad.
- #include <string>, #include <vector> and #include <array> if you are using them.
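For completeness, a minimal corrected version following that advice — just a sketch:

#include <array>
#include <string>
#include <vector>

int main()
{
    // an array of 500 empty vectors, size fixed at compile time
    std::array<std::vector<std::string>, 500> v;

    v[0].push_back("hello"); // works now: v[0] is a default-constructed, empty vector
    return 0;
}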
https://codedump.io/share/IJSNZik0fxKB/1/initializing-an-array-of-empty-vectors
CC-MAIN-2017-04
refinedweb
124
72.46
This is a question on my homework; can anyone help me understand what it's saying? Thanks. "Suppose that you have an ArrayList and that it contains String objects. Which declaration of the ArrayList...

Solved. Found it. Thanks!

I .add(found) onto the ArrayList; is there a way to check if an ArrayList contains elements? Since "found" only gets added when the String matches, if there is no string match then there will be nothing...

I'm working on sequential search and this is my code to search for a String. The problem I have is that once the string is found, it stops. I was wondering: what if there is more than one occurrence of...

Thanks for the hints on the value of max. Here is my new code and it is working, but I don't understand why the inner loop needs to have an index of max which = i for it to work. public static...

I'm currently doing selection sort and this is my method to sort an array of strings into alphabetical order. I interpreted the code line by line into words and tested it by hand, and it seems to make sense, but when...

I'm currently doing insertion sorts with ArrayList and I'm getting java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at replace.get(insertindex).setTitle(next);. I understand there is nothing at...

Jaguar: Quantity = 2, Total cost = 2100000.0
Neon: Quantity = 3, Total cost = 52375.32
Jigsaw: Quantity = 1, Total cost = 149.18
RAM: Quantity = 1, Total cost = 35700.0
CircularSaw: Quantity = 2,...

Is there another way to print out the results? That's the only way I could think of: as it completes the inner loop, it prints.

Oh, I don't mean for you to code it for me. I updated the code with the loop going through and keeping the quantity as above, but it is printing all the items. I.e., if there are 2 of item A, it would print item A two times...

Chris, thanks for the reply! Can you give me help on my updated code above? Thanks

takeInventory is supposed to keep count of the items that pass through and print their quantity. The result I'm getting is "guar: Quantity = 2, Total cost = 1110000.0 Neon: Quantity = 3, Total cost...

So do I change it to %5.2d? I still get the error on that.

It is not allowing me to print at all; it gives me the error at that line. I think what's causing it is that I don't have a conversion format inside the printf for "(c.get(x).getVotes() / total ))". I do have...

This is part of my code. I'm getting "IllegalFormatConversion" at the printf statement. How can I change the code so that the third thing it prints is the votes / total votes? Thanks public...

The second index should be 1 more than the first, am I correct? --- Update --- It works when I set i = x+1. Is this the correct way? Thanks

I understand the starting-at-another-index part, but let's say I put i=1 instead of 0: it might not compare itself to the thing at index 0, but at index 1 and so on it will compare itself.

This is the nested loop that I have, but it also compares the objects to themselves since x and i have the same value at some point. How can I avoid the objects inside the list being compared to themselves? Thanks ...

Would the List that I created with the objects being added be fine, or do I need to create a different array? Thanks

For the code below, I'm trying to compare all the objects with each other and print if they're the same; is there any efficient method of doing so besides me creating like 20 if-else statements and...
I forgot about calling the method; I thought it would call itself for some reason. Thank you so much!

It prints out nothing. Since I "setType("Math");", won't it print out "Math"?

public class testHomework {
    public static void main(String[] args) {
        MyMath math = new MyMath();
        System.out.println(math.toString());
    }
}
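A sketch of the resolution, for anyone finding this later — MyMath's internals aren't shown in the thread, so the body here is assumed:

class MyMath {
    private String type;

    // The setter must be called explicitly -- nothing invokes it automatically
    public void setType(String type) { this.type = type; }

    @Override
    public String toString() { return type; }
}

public class TestHomework {
    public static void main(String[] args) {
        MyMath math = new MyMath();
        math.setType("Math");                 // without this call, toString() returns null
        System.out.println(math.toString());  // prints "Math"
    }
}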
http://www.javaprogrammingforums.com/search.php?s=62cf4581d280d2dd426b330f8672a6b9&searchid=2052715
CC-MAIN-2016-07
refinedweb
715
84.07
From: "Samuele Pedroni" [email protected]

Maybe the question was unclear, but it was serious. What I was asking is whether some restricted code can do:

try:
    deliberate code to force exception
except Exception,e:
    ...

so that e is caught unproxied. Looking at zope/security/_proxy.c it seems this can be the case...

Then, to be (likely) on the safe side, all exception class definitions for possible e classes, like e.g.

class MyExc(Exception):
    ...

ought to be executed _in restricted mode_, or be "trivial/empty": something like

class MyExc(Exception):
    def __init__(self, msg):
        self.message = msg
        Exception.__init__(self, msg)
    def __str__(self):
        return self.message

is already too much rope. Although it seems not to have the "nice" two-level-of-calls behavior of Bastion instances, an unproxied instance of MyExc, if MyExc was defined outside of restricted execution, can be used to break out of restricted execution.

regards.
https://mail.python.org/archives/list/[email protected]/message/LALCVVGYHRE3KO4CMZYK3YYLVWOAXSRR/
CC-MAIN-2021-21
refinedweb
154
58.99
This is the last article in the series, and it looks at techniques for customizing the NoSpamEmailHyperlink to make it unique for any given page without downloading and editing the code directly, which would make it difficult to incorporate any future improvements to the base control. It assumes at least a basic knowledge of C# and class inheritance.

With a few simple overrides, it is possible to completely change the nature of the NoSpamEmailHyperlink so that each site implementing the control uses it in a slightly different way, making it nearly impossible for email harvesters to detect and decode the email addresses. It is even possible to use numerous derived controls on the same page (as seen in the screenshot above), but this is not recommended.

The aim of this control is to force harvesting software to handle JavaScript as effectively as any browser and thus push up the price of such software, in turn pulling down the profit margins of the email spammers. If all they have to do is detect the link array or the code key and either use it or move on, this would not cause them too many problems. If, on the other hand, the control acts slightly differently on some sites and very differently on others, the email harvesters are going to have to work a lot harder.

The NoSpamEmailHyperlink offers six properties and three methods which can be overridden to create a completely different control with the same principles. Replacing the encoding string is easy as long as you follow a couple of simple rules. Create a new control, derived from NoSpamEmailHyperlink, and override the protected .CodeKey property.

public class NoSpamEmailHyperlinkEx : NoSpamEmailHyperlink
{
    protected override string CodeKey
    {
        get
        {
            return "QpWlEmR5ToY4UkInO0PiAjS19DbFuGhHvJ2KyLgZc3XtCxVf6BNrdMeszawq78";
        }
    }
}

It is essential that you never include the same character twice. This will confuse the decoding algorithm. If it finds the duplicated character, it cannot possibly know which character it was translated from. It is also essential that you only include alphanumeric characters, unless you are also overriding the Encode/Decode functionality to handle it. Any other characters may translate a valid email address to an invalid one (for example, if the first character becomes a hyphen).

It is not essential to include every alphanumeric character in the key. Missing out one or two characters can actually make it more difficult to decode. For example, if you miss the "a" and "A" characters out of the key string, all other characters will be substituted except for the As. Once you realize that a string is encoded using substitution, the last thing you expect is for some characters not to be substituted at all. And yet, the decoding algorithm will handle the missing characters correctly.

Should the NoSpamEmailHyperlink become excessively popular, there are a number of ways in which a harvester could identify the encoded hyperlinks and discount them, or even decode them without using JavaScript. Because the NoSpamEmailHyperlink uses GetType().Name to build the array names, function names and global-level variable names, any control derived from it will automatically use different names to avoid clashes.

However, a harvester could easily look for arrays with names ending _LinkArray and discount any links with IDs found in those arrays. Without too much more effort, it could find the _SeedArray and the ky variable and attempt to decode them.
But if we change the names of those variables on just a few pages, the process of detecting them becomes a lot more difficult.

public class NoSpamEmailHyperlinkEx : NoSpamEmailHyperlink
{
    protected override string CallScriptName
    {
        get { return GetType().Name + "Elephant"; }
    }

    protected override string FuncScriptName
    {
        get { return GetType().Name + "SilverFish"; }
    }

    protected override string SeedArrayName
    {
        get { return GetType().Name + "TexasHoldem"; }
    }

    protected override string LinkArrayName
    {
        get { return GetType().Name + "Leichtenstein"; }
    }

    protected override string CodeKeyName
    {
        get { return "ck"; }
    }
}

As you can see, it is not entirely necessary for these strings to be related in any way to their function. For example, the above code changes the name of the seed array definition so that it resembles the following:

var NoSpamEmailHyperlinkExTexasHoldem = new Array("23");

You may know what this means, and the calling script will adjust itself to find the new array name, but the harvester will no longer find the array simply by hunting for _SeedArray.

Note that all of the above properties, except for the CodeKeyName, are used in the JavaScript at a global level. It is always advisable to use GetType().Name somewhere in the definition to allow for further controls deriving from yours and failing to override these properties.

For the more adventurous tinkerer, it is also possible to override the .Encode() and .GetFuncScript() methods to provide a completely new algorithm for encoding and decoding the email address. The new algorithm may be as simple or as complex as you like. Just because your favorite algorithm is simple, do not assume that this is a bad thing. As long as it is different, it is more confusing for the harvesters. Maybe you want to make a simple change, such as accelerating the rate of change in the base number (initially the seed). Simply copy the code as described in NoSpamEmailHyperlink: 3. Email Encoding and Decoding into your derived control and amend it however you please.
For example:

public class NoSpamEmailHyperlinkEx : NoSpamEmailHyperlink
{
    protected override string GetFuncScript()
    {
#if DEBUG
        // Formatted script text in debug version
        JavaScriptBuilder jsb = new JavaScriptBuilder(true);
#else
        // Compress script text in release version
        JavaScriptBuilder jsb = new JavaScriptBuilder();
#endif

        jsb.AddLine("function ", FuncScriptName, "(link, seed)");
        jsb.OpenBlock(); // function()

        jsb.AddCommentLine("This is the decoding key for all ", LinkArrayName, " objects");
        jsb.AddLine("var ", CodeKeyName, " = \"", CodeKey, "\";");
        jsb.AddLine();

        jsb.AddCommentLine("Store the innerText so that it doesn't get");
        jsb.AddCommentLine("distorted when updating the href later");
        jsb.AddLine("var storeText = link.innerText;");
        jsb.AddLine();

        jsb.AddCommentLine("Initialize variables");
        jsb.AddLine("var baseNum = parseInt(seed);");
        jsb.AddLine("var atSym = link.href.indexOf(\"@\");");
        jsb.AddLine("if (atSym == -1) atSym = 0;");
        jsb.AddLine("var dotidx = link.href.indexOf(\".\", atSym);");
        jsb.AddLine("if (dotidx == -1) dotidx = link.href.length;");
        jsb.AddLine("var scramble = link.href.substring(7, dotidx);");
        jsb.AddLine("var unscramble = \"\";");
        jsb.AddLine("var su = true;");
        jsb.AddLine();

        jsb.AddCommentLine("Go through the scrambled section of the address");
        jsb.AddLine("for (i=0; i < scramble.length; i++)");
        jsb.OpenBlock(); // for (i = 0; i < scramble.length; i++)

        jsb.AddCommentLine("Find each character in the scramble key string");
        jsb.AddLine("var ch = scramble.substring(i,i + 1);");
        jsb.AddLine("var idx = ", CodeKeyName, ".indexOf(ch);");
        jsb.AddLine();

        jsb.AddCommentLine("If it isn't there then add the character");
        jsb.AddCommentLine("directly to the unscrambled email address");
        jsb.AddLine("if (idx < 0)");
        jsb.OpenBlock(); // if (idx < 0)
        jsb.AddLine("unscramble = unscramble + ch;");
        jsb.AddLine("continue;");
        jsb.CloseBlock(); // if (idx < 0)
        jsb.AddLine();

        jsb.AddCommentLine("Decode the character");
        jsb.AddLine("idx -= (su ? -baseNum : baseNum);");
        jsb.AddLine("baseNum -= (su ? -Math.pow(i, 2) : Math.pow(i, 2));");
        jsb.AddLine("while (idx < 0) idx += ", CodeKeyName, ".length;");
        jsb.AddLine("idx %= ", CodeKeyName, ".length;");
        jsb.AddLine();

        jsb.AddCommentLine("... and add it to the unscrambled email address");
        jsb.AddLine("unscramble = unscramble + ", CodeKeyName, ".substring(idx,idx + 1);");
        jsb.AddLine("su = !su;");
        jsb.CloseBlock(); // for (i = 0; i < scramble.length; i++)
        jsb.AddLine();

        jsb.AddCommentLine("Adjust the href property of the link");
        jsb.AddLine("var emAdd = unscramble + ", "link.href.substring(dotidx, link.href.length + 1);");
        jsb.AddLine("link.href = \"mailto:\" + emAdd;");
        jsb.AddLine();

        jsb.AddCommentLine("If the scrambled email address is also in the text");
        jsb.AddCommentLine("of the hyperlink, replace it");
        jsb.AddLine("var findEm = storeText.indexOf(scramble);");
        jsb.AddLine("while (findEm > -1)");
        jsb.OpenBlock(); // while (findEm > -1)
        jsb.AddLine("storeText = storeText.substring(0, findEm) + emAdd ", "+ storeText.substring(findEm + emAdd.length, storeText.length);");
        jsb.AddLine("findEm = storeText.indexOf(scramble);");
        jsb.CloseBlock(); // while (findEm > -1)
        jsb.AddLine();

        jsb.AddLine("link.innerText = storeText;");
        jsb.CloseBlock(); // function()

        return jsb.ToString();
    }

    protected override string Encode (string Unencoded)
    {
        // Convert string to char[]
        char[] scramble = Email.ToCharArray();

        // Initialize variables
        int baseNum = ScrambleSeed;
        bool subtract = true;

        // Find the @ symbol and the following .
        // if either don't exist then we don't have a
        // valid email address and should return it unencoded
        int atSymbol = Array.IndexOf(scramble, '@');
        if (atSymbol == -1) atSymbol = 0;
        int stopAt = Array.IndexOf(scramble, '.', atSymbol);
        if (stopAt == -1) stopAt = scramble.Length;

        // Go through the section of the address to be scrambled
        for (int i=0; i < stopAt; i++)
        {
            // Find each character in the scramble key string
            char ch = scramble[i];
            int idx = CodeKey.IndexOf(ch);

            // If it isn't there then ignore the character
            if (idx < 0) continue;

            // Encode the character
            idx += (subtract ? -baseNum : baseNum);
            baseNum -= (subtract ? -(int)Math.Pow(i, 2) : (int)Math.Pow(i, 2));
            while (idx < 0) idx += CodeKey.Length;
            idx %= CodeKey.Length;
            scramble[i] = CodeKey[idx];
            subtract = !subtract;
        }

        // Return the encoded string
        return new string(scramble);
    }
}

Only the highlighted lines have been changed, but this is a massive change to the coding algorithm and an extra JavaScript command for the harvesters to understand.

The variations on this theme are limited only by your imagination. You could use multiple keys, perhaps one upper-case and one lower-case key. Perhaps you want to substitute underscores and hyphens, prefixing with a random letter to keep the address valid. You could simulate the World War II "one time pad" system, by "adding" the first letter of the email address to the first letter of the key, the second letter of the email address to the second letter of the key, and so on.

You do not have to limit yourself to substitution algorithms. You could reverse the characters in both the user and domain segments of the email address (e.g. [email protected] becomes [email protected]) or use a more complex transposition algorithm. It really makes no difference what approach you take, the more people that add their own personal touch to the NoSpamEmailHyperlink the more painful it becomes for the email harvesters. Let your imagination run.
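To make the segment-reversal suggestion concrete, here is a hedged sketch of such a derived control. It only covers the server-side Encode; a matching GetFuncScript() override (not shown) would be needed so the emitted JavaScript reverses the segment back:

public class ReversingEmailHyperlink : NoSpamEmailHyperlink
{
    protected override string Encode(string Unencoded)
    {
        // Reverse only the user segment, leaving "@domain..." intact;
        // addresses without an @ are returned unchanged
        int at = Unencoded.IndexOf('@');
        if (at <= 0)
            return Unencoded;

        char[] user = Unencoded.Substring(0, at).ToCharArray();
        System.Array.Reverse(user);
        return new string(user) + Unencoded.Substring(at);
    }
}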
http://www.codeproject.com/Articles/5200/NoSpamEmailHyperlink-6-Customization?fid=24627&df=10000&mpp=10&sort=Position&spc=None&tid=1078729
CC-MAIN-2016-18
refinedweb
1,642
50.94
Web Scraping: Introduction and Best Reverse Engineering to Collect Data

Introduction: The first step of web scraping is to send a request to the server. This is the most important step for scraping any website. People from different backgrounds, with different levels of coding skill, use different programming languages to send a request; the snippets below use Python.

An HTTP web request has the following parts:

1 – Header: Every web request has a header part. The header gives the server an identity: where the request for data is coming from and what kind of user made it — is it a desktop user, a mobile user, or a tablet user? HTTP web requests also come in different types (methods), like GET, POST, PUT, DELETE and many more, each with its own definition. An HTTP web request is the medium of communication between client and server; the client means us, and the server means the website or host.

GET — an HTTP GET request is the simplest way to get data from a website, host, or any other online resource. Below is a code snippet to send an HTTP web request in Python using the requests library.

# Example showing how to use the requests library
# Install the requests module using the command below:
# pip install requests

# Import module
import requests

# Send request (placeholder URL)
r = requests.get("https://example.com")

# Print response body
print(r.text)

2. BeautifulSoup: Now you have the webpage, but you still need to extract the data. BeautifulSoup is a very powerful Python library which helps you extract data from the page. It's easy to use and has a wide range of APIs that'll help you extract the data. We use the requests library to fetch an HTML page and then use Beautiful Soup to parse that page. In this example, we can easily fetch the page title and all links on the page. Check out the documentation for all the possible ways in which we can use BeautifulSoup.

from bs4 import BeautifulSoup
import requests

# Fetch HTML page (placeholder URL)
r = requests.get("https://example.com")
soup = BeautifulSoup(r.text, "html.parser")

# Parse HTML page
print("Webpage Title: " + soup.title.string)
print("Fetch All Links:")
print(soup.find_all('a'))

CHALLENGES IN SCRAPING:

Below are the challenges we face when scraping data at large scale.

Pattern Change: When we scrape data from more than one website, we face issues converting and generalizing the data and storing it in a database. Each website has its own HTML structure. Most websites change their UI periodically, and because of that we sometimes get incomplete data or a crashed scraper. This is the most commonly encountered problem.

Anti-Scraping Technologies: Nowadays many companies use anti-scraping scripts to protect their websites from scraping and data mining. LinkedIn is a good example of this. If you scrape data from one single IP, they catch you and ban your IP address; sometimes they block your account as well.

Honeypot traps: Some website designers put honeypot traps inside websites to detect web spiders. There may be links that a normal user can't see but a crawler can. Some honeypot links to detect crawlers will have the CSS style "display: none" or will be color-disguised to blend in with the page's background color. This detection is obviously not easy and requires a significant amount of programming work to accomplish properly.

Captchas: Captchas have been around for a long time and they serve a great purpose — keeping spam away. However, they also pose a great deal of accessibility challenge to the web crawling bots out there.
When captchas are present on a page that you need to scrape data from, basic web scraping setups will fail and cannot get past this barrier. For this, you would need a middleware that can take the captcha, solve it, and return the response.

POINTS TO TAKE CARE OF DURING DATA SCRAPING:

Below are some important points to take care of when scraping data at large scale.

Respect the robots.txt file: Robots.txt is a text file webmasters create to instruct robots (typically search engine robots) how to crawl and index pages on their website, so this file generally contains instructions for crawlers. Robots.txt should be the first thing to check when you are planning to scrape a website. Every website will have set some rules in its robots.txt file on how bots/spiders should interact with the site. Some websites block bots altogether in their robots file. If that is the case, it is best to leave the site and not attempt to crawl it. Scraping sites that block bots is illegal. Apart from just blocking, the robots file also specifies a set of rules that the site considers good behavior, such as areas that are allowed to be crawled, restricted pages, and frequency limits for crawling. You should respect and follow all the rules set by a website while attempting to scrape it. You can usually find this file at the root of the website (e.g. /robots.txt).

Do not hit the servers too fast: Web servers are not fail-proof. Any web server will slow down or crash if the load on it exceeds the limit it can handle. Sending multiple requests too frequently can result in the website's server going down or the site becoming too slow to load. While scraping, you should always hit the website with a reasonable time gap and keep the number of parallel requests under control.

User Agent Rotation: A User-Agent string in the request header helps identify which browser is being used, what version, and on which operating system. Every request made from a web browser contains a user-agent header, and using the same user-agent consistently leads to the detection of a bot. User agent rotation and spoofing is the best solution for this (see the sketch at the end of this post).

Disguise your requests by rotating IPs and proxy services: We've discussed this in the challenges section. It's always better to use rotating IPs and a proxy service so that your spider won't get blocked in the near future. Get more on How to Use HTTP Proxy with Request Module in Python.

Scrape during off-peak hours: To make sure that a website isn't slowed down due to high traffic from humans as well as bots, it is better to schedule your web-crawling tasks to run in the off-peak hours. The off-peak hours of the site can be determined by the geo-location of where the site's traffic is from. By web scraping during the off-peak hours, you can avoid any possible load you might put on the server during peak hours. This will also help in significantly improving the speed of the scraping process.

Still facing problems while scraping? Then visit our Python web scraping tutorials and download a Python script, or get insight from previously scraped sample data of our various data scraping services.
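To make the user-agent rotation advice concrete, here is a minimal sketch using the requests library — the user-agent strings and target URL are illustrative placeholders:

import random
import requests

# A small pool of example user-agent strings to rotate through
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0",
]

def fetch(url):
    # Send each request with a randomly chosen identity
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    return requests.get(url, headers=headers, timeout=10)

r = fetch("https://example.com")
print(r.status_code)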
https://www.worthwebscraping.com/web-scraping-request-basic/
CC-MAIN-2021-43
refinedweb
1,163
71.04
User Details
- User Since
- Jan 2 2013, 4:34 PM (314 w, 6 d)

Yesterday

Looks good to me with a minor comment fix. Let's wait for @efriedma to approve the new name before we commit to it, though.

Non-gmock changes look good.

I can't compile the example you gave yet because I haven't applied your patches locally, but this is the "three stack pointer" case that I have in mind:

struct Foo {
    void (*ptr)();
    int x, y, z;
};

Mon, Jan 14

I have no idea how you ended up here, but looks good, just a few NFC comments. :)

Functionality seems reasonable, I'll let @aaron.ballman finish the review.

- rebase

lgtm

Thanks for the report and patch! Looks good. Do you need someone to commit this?

Thanks for pushing the MC work to support this forward!

Thanks! I'm surprised we passed as many of the Microsoft exception tests as we did with this bug. Maybe it regressed.

Fri, Jan 4

rebase

@phil-opp, sorry for the breakage, I understand your frustration. From my perspective, the x86 interrupt calling convention was hacked into the x86 backend in a way that doesn't play nice with orthogonal features, like the copy elision I added. Your fix pushes the existing design of the interrupt convention further in the wrong direction. Yes, it fixes the problem, but if I don't push back, require a test, contact Intel, the original authors of this feature, and pressure them to do it in a more proper way, LLVM will continue to become more unmaintainable.

Thu, Jan 3

Still needs tests, right?

lgtm!

I don't think this should be llvm_unreachable, I think it should be report_fatal_error. LLVM has some support for out-of-tree backends, and they could plausibly call this when using a release build of LLVM as a library. If all the callers were in-tree, then calling this would represent an LLVM-internal logic error (forgetting to check hasRawTextSupport()), and unreachable or assert would be appropriate.

Fri, Dec 28
Thu, Dec 27

Here is the original test case from the PR:

#include <stdio.h>

int offset = 0;

static void foo2() {
    printf("foo2 is called\n");
}

void foo() {
    __asm__ volatile("movq %0,%%gs:(%1)" : : "ir"((void *)&foo2), "r"(offset));
    printf("foo is called\n");
}

If you compile with GCC, you will see that it generates the same assembly that LLVM does:

foo:
.LFB15:
    .cfi_startproc
    subq $8, %rsp
    .cfi_def_cfa_offset 16
    movl offset(%rip), %eax
#APP
# 21 "t.c" 1
    movq $foo2,%gs:(%eax)
# 0 "" 2
#NO_APP
    leaq .LC1(%rip), %rdi
    call puts@PLT
    addq $8, %rsp
    .cfi_def_cfa_offset 8
    ret
    .cfi_endproc

It looks like this was actually intentionally removed back in rL107727 to fix. This change breaks the test that was added for it, llvm/test/CodeGen/X86/2010-07-06-asm-RIP.ll. I guess, since this is GCC inline asm, the question is, how does GCC handle the 'i' constraint? I suspect that this new behavior may be closer to GCC, but it'll require more analysis.

Wed, Dec 26

We can change the code to avoid the crash, but what debug info did you want clang to emit? It seems like the best way to describe StructWithNoFields_add is as a static method of the class StructWithNoFields, since it doesn't appear to have a 'this' parameter.

lgtm, thanks! Do you need someone to commit this?

Fri, Dec 21

Want to stamp this? It's 4pm on the Friday before Christmas, what could go wrong? :)

One more thing I just realized that was probably obvious to you: Computing ghash in parallel completely eliminates the need for the /debug:ghash flag, and we can eliminate the non-ghash codepath. That's great.

Neat!
I didn't have a chance to review with a fine toothed comb, but I like the idea, and here's how I analyze it. We've already talked about how merging a type record without ghash is O(size of type), and with a content hash, you can make that O(size of hash), i.e. look at 8 or 20 bytes instead of up to 64K. But, computing the hash requires reading the type record data, so we made the compiler do it and put it in .debug$H. If one can't do that because MSVC is being used, parallelizing the hash lookup makes it so that the linker only takes O(size of all types / n cores) wall time to do it, and then it does O(# types) (not size of types) work to deduplicate them. lgtm Thu, Dec 20 - add positive test for dynamic allocas with intervening call My experience so far with inalloca has been that, no, those passes are not safe. This is only the latest in a long line of bugs in LLVM around handling dynamic allocas. I think conflating static and dynamic allocas was a mistake, and we would've been better served with a separate construct for allocating stack memory for local variables, but at this stage, we'd be better off standardizing on the terminology of "alloca" for standard locals and pushing dynamic allocas into intrinsic instructions. Wed, Dec 19 - move to IgnoreParens Looks good, thanks. lgtm, thanks! Tue, Dec 18 Go for it. lgtm Mon, Dec 17 lgtm I think it's worth making this work incrementally like this. @ruiu, the motivation is to have some abstraction over the collection of chunks before "merging" section names. So, this would allow us to enumerate all chunks contributing to the .CRT$XCU section before it is concatenated with all the other .CRT$* sections and then merged into the .rdata section by our builtin /merge: rule. We accumulate those sections in a plain vector currently, and I felt it would be useful to have some name for that grouping. Oops, too many Zacharies. I mistook you for him (only in this patch, not the other one). He has more ownership over this code, so I will let him approve it. Regarding the alignment optimization, I implemented it, and I only measured a 3% speedup when making clang.pdb (without ghash, I think, since I was lazy and didn't rebuild...). However, I put in stats to show that it saves 433MB of memory allocations, and that's a significant fraction of the overall process memory usage, so it's definitely worth it. I'll post a patch with harder numbers soon.
https://reviews.llvm.org/p/rnk/
CC-MAIN-2019-04
refinedweb
1,074
71.24
keyPressEvent(QKeyEvent* event) not triggering

- U7Development last edited by U7Development

Hi! keyPressEvent works as expected if it is inside a QDialog form class. But I wanted to create a separate custom class debug.hpp to handle it..

#ifndef DEBUG_HPP
#define DEBUG_HPP

#include <QKeyEvent>
#include <QLabel>
#include <QMessageBox>

struct debug {
private:
    static QWidget* obj;

    void keyPressEvent(QKeyEvent* event){
        const QRect r = obj->geometry();
        switch (event->key()){
            case Qt::Key_W: obj->move(r.x(), r.y() - 1); break;
            case Qt::Key_Y: obj->move(r.x(), r.y() - 24); break;
            case Qt::Key_S: obj->move(r.x(), r.y() + 1); break;
            case Qt::Key_H: obj->move(r.x(), r.y() + 24); break;
            case Qt::Key_A: obj->move(r.x() - 1, r.y()); break;
            case Qt::Key_G: obj->move(r.x() - 24, r.y()); break;
            case Qt::Key_D: obj->move(r.x() + 1, r.y()); break;
            case Qt::Key_J: obj->move(r.x() + 24, r.y()); break;
            case Qt::Key_Return:
                QMessageBox::information(0, "DEBUG", QString::number(r.x()) + " : " + QString::number(r.y()));
                break;
        }
    }

public:
    debug() = delete;

    // setup widget that needs to be arranged
    inline static void DHANDLE(QWidget* _o){
        if (!_o) return;
        obj = _o;
    }
};

QWidget* debug::obj = nullptr;

#endif // DEBUG_HPP

Basically the only callable function is DHANDLE, where I set a widget to be controlled with the A, W, S, D keys. From a form class cpp I call:

#include "debug.hpp"

void myclass::foo(){
    QPushButton* myButton = new QPushButton(this);
    debug::DHANDLE(myButton);
}

This should allow me to control myButton's position (geometry) using the WASD keys, but nothing happens.. I know DHANDLE is being called (I have qDebugged it), but keyPressEvent() is not... What could it be? Thanks.

Edit: I forgot to say I'm not using Qt Creator but VIM to write my code, so I'm working blindly; that's why I want to implement a position-debugging mechanism to know where to locate my widgets.

The class needs to derive from QWindow or QWidget for the event loop to know that calling keyPressEvent is an option, and then the instance needs to be in the keyboard focus chain.

I'll inherit from QWindow or QWidget.. what do you mean by keyboard focus chain? Thanks

- jsulm Lifetime Qt Champion last edited by

@U7Development said in keyPressEvent(QKeyEvent* event) not triggering: "what do you mean by keyboard focus chain?" The widget needs to have the focus to get key press events.

@U7Development you can (assuming you know when the widget needs to listen to the key events) force focus by using grabKeyboard() - just remember to unset it by calling releaseKeyboard() afterwards.

Alright, thanks. I will give it a try and comment later..

Update: since I would need to inherit from QWindow or QWidget, which did not make sense to me for the purpose of the debug class, I moved the function to an abstract class derived from QDialog, and I got it working!! Now I have another key-related issue, but I will open another thread for it.. Thanks so much!
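For anyone landing on this thread later, a minimal self-contained sketch of the working pattern — a QWidget subclass overriding keyPressEvent and moving a child widget — might look like this (names are illustrative):

#include <QApplication>
#include <QKeyEvent>
#include <QPushButton>
#include <QWidget>

class DebugWindow : public QWidget {
public:
    DebugWindow() : button(new QPushButton("move me", this)) {
        setFocusPolicy(Qt::StrongFocus); // make sure this widget receives key events
        resize(400, 300);
    }

protected:
    void keyPressEvent(QKeyEvent* event) override {
        const QRect r = button->geometry();
        switch (event->key()) {
        case Qt::Key_W: button->move(r.x(), r.y() - 1); break;
        case Qt::Key_S: button->move(r.x(), r.y() + 1); break;
        case Qt::Key_A: button->move(r.x() - 1, r.y()); break;
        case Qt::Key_D: button->move(r.x() + 1, r.y()); break;
        default: QWidget::keyPressEvent(event); // pass everything else on
        }
    }

private:
    QPushButton* button;
};

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);
    DebugWindow w;
    w.show();
    return app.exec();
}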
https://forum.qt.io/topic/124270/keypressevent-qkeyevent-event-not-triggering/3
CC-MAIN-2021-49
refinedweb
500
58.18
Web, etc). The ASPX application consumes this data through the Request object. During a code review exercise, look for this object's usage. These are the important entry points to the application in the case study. It is possible to grab certain key patterns in the submitted data using regular expressions from multiple files to trace and analyze patterns.

scancode.py is a source code-scanning utility. It is a simple Python script that automates the review process. This Python scanner has three functions with specific objectives:

The scanfile function scans the entire file for specific security-related regex patterns:

".*.[Rr]equest.*[^\n]\n" # Look for request object calls
".*.select .*?[^\n]\n|.*.SqlCommand.*?[^\n]\n" # Look for SQL execution points
".*.FileStream .*?[^\n]\n|.*.StreamReader.*?[^\n]\n" # Look for file system access
".*.HttpCookie.*?[^\n]\n|.*.session.*?[^\n]\n" # Look for cookie and session information
"<!--.*?#include.*?-->" # Look for dependencies in the application
".*.[Rr]esponse.*[^\n]\n" # Look for response object calls
".*.write.*[^\n]\n" # Look for information going back to browser
".*catch.*[^\n]\n" # Look for exception handling

The scan4request function scans the file for entry points to the application using the ASP.NET Request object. Essentially, it runs the pattern ".*.[Rr]equest.*[^\n]\n".

The scan4trace function helps analyze the traversal of a variable in the file. Pass the name of a variable to this function and get the list of lines where it is used. This function is the key to detecting application-level vulnerabilities.

Using the program is easy; it takes several switches to activate the previously described functions.

D:\PYTHON\scancode>scancode.py
Cannot parse the option string correctly
Usage: scancode -<flag> <file> <variable>
flag -sG : Global match
flag -sR : Entry points
flag -t : Variable tracing
Variable is only needed for -t option
Examples:
scancode.py -sG details.aspx
scancode.py -sR details.aspx
scancode.py -t details.aspx pro_id

D:\PYTHON\scancode>

The scanner script first imports Python's regex module:

import re

Importing this module makes it possible to run regular expressions against the target file:

p = re.compile(".*.[Rr]equest.*[^\n]\n")

This line defines a regular expression--in this case, a search for the Request object. With this regex, the match() method collects all possible instances of regex patterns in the file:

m = p.match(line)

Now use scancode.py to scan the details.aspx file for possible entry points in the target code. Use the -sR switch to identify entry points. Running it on the details.aspx page produces the following results:

D:\PYTHON\scancode>scancode.py -sR details.aspx
Request Object Entry:
22 : NameValueCollection nvc=Request.QueryString;

This is the entry point to the application, the place where the code stores QueryString information into the NameValue collection set. Here is the function that grabs this information from the code:

def scan4request(file):
    infile = open(file,"r")
    s = infile.readlines()
    linenum = 0
    print 'Request Object Entry:'
    for line in s:
        linenum += 1
        p = re.compile(".*.[Rr]equest.*[^\n]\n")
        m = p.match(line)
        if m:
            print linenum,":",m.group()

The code snippet shows the file being opened and the request object grabbed using a specific regex pattern. This same approach can capture all other entry points.
For example, here's a snippet to identify cookie- and session-related entry points:

# Look for cookie and session management
p = re.compile(".*.HttpCookie.*?[^\n]\n|.*.session.*?[^\n]\n")
m = p.match(line)
if m:
    print 'Session Object Entry:'
    print linenum,":",m.group()

After locating these entry points to the application, you need to trace them and search for vulnerabilities. Recall the entry point found earlier:

22 : NameValueCollection nvc=Request.QueryString;

Running the script with the -t option will help to trace the variables. (For full coverage, trace each one right through to the end, using all possible iterations):

D:\PYTHON\scancode>scancode.py -t details.aspx sta2
Tracing variable:sta2
String[] sta2=nvc.GetValues(arr1[0]);
pro_id=sta2[0];

Here's another iteration; tracing
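For illustration, a minimal version of the scan4trace function described earlier might look like the following — a sketch in the same style as the snippets above, not necessarily the utility's actual source:

def scan4trace(file, var):
    # Print every line of the file that mentions the variable
    infile = open(file,"r")
    s = infile.readlines()
    infile.close()
    print 'Tracing variable:' + var
    for line in s:
        p = re.compile(".*" + var + ".*[^\n]\n")
        m = p.match(line)
        if m:
            print m.group()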
http://www.oreillynet.com/lpt/a/6778
CC-MAIN-2014-15
refinedweb
644
52.46
1.4 Control Structures

Statements execute sequentially: The first statement in a function is executed first, followed by the second, and so on. Of course, few programs—including the one we'll need to write to solve our bookstore problem—can be written using only sequential execution. Instead, programming languages provide various control structures that allow for more complicated execution paths. This section will take a brief look at some of the control structures provided by C++. Chapter 6 covers statements in detail.

1.4.1 The while Statement

1.4.2 The for Statement

1.4.3 The if Statement

A logical extension of summing the values between 1 and 10 is to sum the values between two numbers our user supplies. We might use the numbers directly in our for loop, using the first input as the lower bound for the range and the second as the upper bound. However, if the user gives us the higher number first, that strategy would fail: Our program would exit the for loop immediately. Instead, we should adjust the range so that the larger number is the upper bound and the smaller is the lower. To do so, we need a way to see which number is larger. Like most languages, C++ provides an if statement that supports conditional execution. We can use an if to write our revised sum program:

#include <iostream>

int main()
{
    std::cout << "Enter two numbers:" << std::endl;
    int v1, v2;
    std::cin >> v1 >> v2; // read input

    // use smaller number as lower bound for summation
    // and larger number as upper bound
    int lower, upper;
    if (v1 <= v2) {
        lower = v1;
        upper = v2;
    } else {
        lower = v2;
        upper = v1;
    }

    int sum = 0;
    // sum values from lower up to and including upper
    for (int val = lower; val <= upper; ++val)
        sum += val; // sum = sum + val

    std::cout << "Sum of " << lower
              << " to " << upper
              << " inclusive is "
              << sum << std::endl;
    return 0;
}

If we compile and execute this program and give it as input the numbers 7 and 3, then the output of our program will be

Sum of 3 to 7 inclusive is 25

Most of the code in this program should already be familiar from our earlier examples. The program starts by writing a prompt to the user and defines four int variables. It then reads from the standard input into v1 and v2. The only new code is the if statement

// use smaller number as lower bound for summation
// and larger number as upper bound
int lower, upper;
if (v1 <= v2) {
    lower = v1;
    upper = v2;
} else {
    lower = v2;
    upper = v1;
}

The effect of this code is to set upper and lower appropriately. The if condition tests whether v1 is less than or equal to v2. If so, we perform the block that immediately follows the condition. This block contains two statements, each of which does an assignment. The first statement assigns v1 to lower and the second assigns v2 to upper. If the condition is false—that is, if v1 is larger than v2—then we execute the statement following the else. Again, this statement is a block consisting of two assignments. We assign v2 to lower and v1 to upper.

1.4.4 Reading an Unknown Number of Inputs

Another change we might make to our summation program on page 12 would be to allow the user to specify a set of numbers to sum. In this case we can't know how many numbers we'll be asked to add. Instead, we want to keep reading numbers until the program reaches the end of the input.
When the input is finished, the program writes the total to the standard output:

#include <iostream>

int main()
{
    int sum = 0, value;
    // read till end-of-file, calculating a running total of all values read
    while (std::cin >> value)
        sum += value; // equivalent to sum = sum + value
    std::cout << "Sum is: " << sum << std::endl;
    return 0;
}

If we give this program the input

3 4 5 6

then our output will be

Sum is: 18

As usual, we begin by including the necessary headers. The first line inside main defines two int variables, named sum and value. We'll use value to hold each number we read, which we do inside the condition in the while:

while (std::cin >> value)

What happens here is that to evaluate the condition, the input operation std::cin >> value is executed, which has the effect of reading the next number from the standard input, storing what was read in value. The input operator (Section 1.2.2, p. 8) returns its left operand. The condition tests that result, meaning it tests std::cin.

When we use an istream as a condition, the effect is to test the state of the stream. If the stream is valid—that is, if it is still possible to read another input—then the test succeeds. An istream becomes invalid when we hit end-of-file or encounter an invalid input, such as reading a value that is not an integer. An istream that is in an invalid state will cause the condition to fail.

Until we do encounter end-of-file (or some other input error), the test will succeed and we'll execute the body of the while. That body is a single statement that uses the compound assignment operator. This operator adds its right-hand operand into the left-hand operand. Once the test fails, the while terminates and we fall through and execute the statement following the while. That statement prints sum followed by endl, which prints a newline and flushes the buffer associated with cout. Finally, we execute the return, which as usual returns zero to indicate success.
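An equivalent, more explicit spelling of that loop makes the control flow obvious (a rewrite for illustration, not from the original text):

// equivalent to: while (std::cin >> value) sum += value;
while (true) {
    std::cin >> value;   // attempt the read
    if (!std::cin)       // stream invalid: end-of-file or bad input
        break;
    sum += value;
}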
http://www.informit.com/articles/article.aspx?p=384462&seqNum=4
CC-MAIN-2019-04
refinedweb
949
66.88
I thought today I would whip up a quick post regarding Jupyter Notebooks and how to download, install and use various "addons" that I like using and find more than just a little bit useful. Among other things I'll show how to use the "jupyter-themes" module to change and manipulate the basic theme and styling of the overall notebook, I'll show how to download and install the Jupyter Notebook extensions module giving access to a whole range of useful goodies you can try out, and I'll even show you how to use Jupyter widgets and how to embed URLs, PDFs, and Youtube videos directly into a notebook itself.

Below is the full list of contents I am going to cover, either all in this post or, if it starts to get a bit long, I may break it up over 2 posts to keep it a bit more manageable to read and digest.

- Jupyter-Themes Module
- Notebook Extensions
- Jupyter Widgets
- Qgrid
- Slideshow
- Embedding URLs, PDFs, and Youtube Videos

So onto our first entry – the jupyter-themes module. First of all we need to make sure we have the required module installed, and that is as easy as running the following command in a terminal window:

pip install jupyterthemes

Once we have the module correctly installed we can begin to use the relevant commands within our notebook to change theme and styling elements. Let's look at an example below. Firstly we run the command that lists out all the available themes:

!jt -l

Available Themes:
chesterish
grade3
gruvboxd
gruvboxl
monokai
oceans16
onedork
solarizedd
solarizedl

So if we wanted to implement the "chesterish" theme, for example, we would just run the code in the cell as shown in the animated screen grab below. You can see the theme changes from the default light theme to a dark theme. Great, at least we know it's working!

You may have noticed however that when the new theme is set, there are a few things that seem to disappear, things that we may find useful to keep around – for example the toolbar at the top and the name of the notebook! The module has a whole array of options which can be set which control the behaviour of all these things and more. I won't copy them all out again as they are available to look up on the official Github page for the module. This can be found here

One setup that I like to use can be created using the following command – just run it in a notebook cell, save the notebook and then refresh the browser and it should appear correctly.

!jt -t onedork -fs 95 -altout -tfs 14 -nfs 115 -cellw 90% -T -cursc r -cursw 5 -dfs 8 -N

You should now have a notebook that looks as follows, and we can see the top toolbar has reappeared, along with the notebook name in the top left and the respective "environment" that the notebook is being run in at the top right. You may notice a bunch of other toolbar icons and options that you haven't seen before, but more on them later!

One thing to note however at this point is that although we have successfully altered our overall theme, the plotting theme still needs to be changed also to match our new style. If we were to currently plot a pandas DataFrame it would appear as follows:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Fs = 8000
f = 5
sample = 8000
x = np.arange(sample)
y = np.sin(2 * np.pi * f * x / Fs)
df = pd.DataFrame({'sin': y}, index=pd.date_range(start='01/01/2000', periods=sample, freq="D"))

df['sin'].plot()
plt.show()
So let us run the relevant command to fix it and run the plot again: # import jtplot submodule from jupyterthemes from jupyterthemes import jtplot # currently installed theme will be used to # set plot style if no arguments provided jtplot.style() df['sin'].plot() plt.show() That looks much better! There are again a whole range of options that can be set using the “jtplot.style()” and passing various arguments. Details of these options can be found on the Github homepage also. We now move onto the second entry in our list, the Notebook Extensions. To ensure we have the correct and necessary packages installed, run the following line of code in a terminal: pip install jupyter_contrib_nbextensions && jupyter contrib nbextension install Once that has finished installing, if you then restart your Jupyter Notebook server, you will hopefully find that you are presented with a couple of new options that didn’t appear previously. For the purposes of this article I shall revert back to the default Notebook theme so as to be able to show screenshots that will match your screen more closely. On the main entry page that opens up when you start the Jupyter Notebook server, you should now see an option for “Nbextensions”. If you select that option you will be presented with a list of available extensions, ready and waiting to be added to your notebook. I wont go through all of them, but I will highlight a couple that I find partciluarly useful. The first one I want to mention is the “Table of Contents”. As Notebooks become large and code is added and added, it can become pretty cumbersome to have to scroll up and down searching for the specific cell that you happen to be looking for at that time. And well, the cells can often start to look petty similar…it’s not an easy task to quickly locate and identify specific parts/cells of your notebook. Well, enter the “Table of Contents” – it “does what it says in the tin” and it creates a side window that lists out a full, selectable, hyperlinked table of contents. The table is populated using entries that have been entered as markdown, using the “#” syntax to denote levels of hierarchy in the content section titles and headings. Entering a title in a markdown cell as follows, for example: # Title Top Level Will generate a “top level” title/section heading. Addoing more “#” then allows you to create section headings that would sit on the “second level” and “third level” etc. I have demonstrated an example of this below: # Title Top Level 1 ## Title Second Level 1 ### Title Third Level 1 ### Title Third Level 2 # Title Top Level 2 ## Title Second Level 2 ### Title Third Level 3 ## Title Second Level 3 ### Title Third Level 4 The above would create the following table of contents, and display in the markdown cell as shown below: You can then use the table of contents in the top left to navigate around the notebook more easily and quickly when looking for a specific cell or notebook “section”. I have found it to be a Godsend to be honest…really like this extension. The second extension I want to highlight is the “Hinterland” extension. This sets the notebook to suggest auto completes on every key stroke, rather than just when one presses “shift+tab”. Below is a screen grab of it in action. The third extension I think is worth mentioning is the “Snippets Menu”. It allows you to store customised code snippets that can be readily accessed and entered into notebook cells using a quick drop-down menu selection method. 
You can set your customised code snippets in the relevant option box as shown in the screen grabs below and as long as you have the correct box checked they will appear in your toolbar drop down as soon as the notebook is refreshed. And finally, the last extension that I want to mention is “Autopep8”. It creates a toolbar button that formats and cleans up your code, according to PEP8 guidelines with a single click – great for cleaning up those cells where the code seemed to get away from you! 😉 Not that the code ever “get’s away from you” of course! Try it out for yourself and see if you like it as much as I do. Now onto Jupyter Widgets! The first step as always is to install the necessary library: pip install ipywidgets Once that finishes, you can activate widgets for Jupyter Notebook with: jupyter nbextension enable --py widgetsnbextension To import the ipywidgets library in a notebook, run: import ipywidgets as widgets from ipywidgets import interact, interact_manual from IPython.core.display import display, HTML For this part of the article I shall be using the data found in this file: df = pd.read_excel('Financial-Sample.xlsx') df.head() How can we view all entries with more than 1500 units sold? Here’s one way: df.loc[df['Units Sold'] > 1500] But if we want to show entries with more than 20,000 COGS, we have to write another line of code: df.loc[df['COGS'] > 20000] What if we could just rapidly change these parameters — both the column and threshold — without writing more code? Try this: @interact def show_entries_more_than(column='COGS', x=20000): return df.loc[df[column] > x] Let’s apply the same method but this time create an interactive chart that allows you to specify the width and height of the chart figure, the colour of the bars within the plot and whether or not the plot has a visible grid. @widgets.interact_manual( color=['blue', 'red', 'green'], width=(5., 12.), height=(5., 12.)) def plot(width=9.,height=6., color='blue', grid=True): t = df['Gross Sales'] fig, ax = plt.subplots(1, 1, figsize=(width, height)) ax.bar(t.index, t, color=color) ax.grid(grid) I think I will leave it here for now as the post is starting to get a bit on the long side – I’ll follow up reasonably swiftly with another post covering the second half of the list of extensions/add-ons outlined at the start of the post. Until next time!
https://www.pythonforfinance.net/2019/07/07/jupyter-notebook-python-extensions-themes-and-addons/?utm_source=rss&utm_medium=rss&utm_campaign=jupyter-notebook-python-extensions-themes-and-addons
CC-MAIN-2020-10
refinedweb
1,672
65.05
I spent my last column giving an overview of the role signals play in the Linux system-call API and explaining the different ways signals could be handled by the kernel. Now that the background information is out of the way, I'll describe how to make use of signals in Linux applications. I'm going to describe only the POSIX signal API; it has all the richness you need and is portable. Legacy applications should be rewritten to use it because it's the best-defined signal API.

The data type that the signal functions rely on is called sigset_t. It represents a set of signals. If you remember fd_set, you'll notice that sigset_t is quite similar. Every signal is either contained in or absent from each sigset_t you use, which lets programs use sigset_t to easily pass a list of signals to the kernel. You should never manipulate a sigset_t object directly; always use the following functions:

int sigemptyset(sigset_t *set);
int sigfillset(sigset_t *set);
int sigaddset(sigset_t *set, int signum);
int sigdelset(sigset_t *set, int signum);
int sigismember(const sigset_t *set, int signum);

The first argument to all of these functions is a pointer to the signal set that is being manipulated. The first function, sigemptyset(), removes all signals from the referenced sigset_t, while the second function, sigfillset(), adds every signal to the sigset_t. All sigset_t variables should be initialized with one of these two functions. The next two functions, sigaddset() and sigdelset(), add or remove the signal specified by the second argument to or from the signal set pointed to by the first argument. By doing this, programs can build sets that contain exactly the signals they need. The final function, sigismember(), lets you test whether or not a given signal, which is passed as the second argument to the system call, is contained in a given sigset_t. It returns 0 if the signal is not a member of the set, and nonzero if it is a member.

Catching a Signal

Now that we've introduced sigset_t, I'll explain how to catch a signal. The struct sigaction data type tells the kernel how to deliver a signal to the program.

typedef void (*sighandler_t)(int signo);

struct sigaction {
    sighandler_t sa_handler;
    sigset_t sa_mask;
    unsigned long sa_flags;
    void (*sa_restorer)(void);
};

I started off by giving the typedef for sighandler_t, which is a pointer to a function that takes a single int argument and returns void. This is the prototype for a signal handler. It is passed a single argument, the signal that it is handling. (This is a white lie; there is a more complicated way of catching signals that passes information on why the signal was generated in a siginfo_t, but it's rarely used so I won't discuss it here.) This is exactly the same as the signal_handler_type I talked about last month.

The first field in a sigaction structure, sa_handler, points to the function that should be used to handle the signal, either a signal-handling function (from now on, called a "signal handler"), or one of the following magic values:

SIG_IGN: Specifies that the signal should be ignored; the kernel will silently dispose of the signal.
SIG_DFL: The kernel should handle the signal in whatever way it thinks best; this often means killing the process, but it may be equivalent to SIG_IGN (depending on the signal).

The next field in struct sigaction, sa_mask, specifies which signals should be blocked when the signal handler is being run. A blocked signal is not delivered until it is unblocked.
By letting the program specify an arbitrary set of signals that should be blocked when a given signal is being caught, the kernel makes it quite easy to have a signal handler catch a number of different signals and still never be reinvoked while it's already running. By default, the kernel always blocks a signal when its signal handler is being run. For example, when a SIGCHLD signal handler is running, the kernel will block other SIGCHLDs from being delivered; I'll explain why this is a good idea later, when I discuss how to write a good signal handler. The sa_mask field has no effect on this behavior, but the sa_flags field does let a program override this.

The sa_flags field is a bitmask of various flags logically ORed together, the combination of which specifies the kernel's behavior when the signal is received. The values it may contain are:

SA_NOCLDSTOP: A SIGCHLD signal is normally generated when one of a process's children has terminated or stopped. If this flag is specified, SIGCHLD is generated only when a child process has terminated; stopped children will not cause any signal.

SA_NOMASK: Remember that a program can't use the sa_mask field of sigaction to allow a signal to be sent while its signal handler is currently running? This flag gives the opposite behavior, allowing a signal handler to be interrupted by the delivery of the same signal. For several reasons, SA_NOMASK is a very bad idea, since it makes it impossible to write a properly functioning signal handler. It's included in the POSIX signal API because many old versions of Unix provided this behavior (this is one of the behaviors the term "unreliable signals" describes), and the POSIX group wanted to be able to emulate that behavior for old applications that relied on it.

SA_ONESHOT: If this flag is specified, as soon as the signal handler for this signal is run, the kernel will reset the signal handler for this signal to SIG_DFL. Like SA_NOMASK, this is a bad idea. (In fact, this is the other behavior associated with unreliable signal implementations.) It is provided for two reasons. The first is to enable emulation of older platforms. The second is that ANSI C requires this type of signal behavior, and POSIX had to live with that not-so-bright decision.

SA_RESTART: Normally slow system calls return EINTR when a signal is delivered while they are running. If SA_RESTART is specified, the system calls don't return (the kernel automatically restarts them) after a signal is delivered to the process. I talked last month about why this is handy.

The last field, sa_restorer, is actually the easiest. Just ignore it. It's not used by Linux, and no other system uses it without setting a special flag in sa_flags that indicates sa_restorer is worth paying attention to.

Lights, Camera, sigaction

The sigaction() system call is used to tell the kernel how it should deliver a particular signal to the process.

int sigaction(int signum, struct sigaction *act, struct sigaction *oact);

The first parameter, signum, is the signal whose delivery is being specified. The next parameter, act, should point to a struct sigaction, which tells the kernel what to do when signal signum is delivered. If act is NULL, the current behavior is retained. If the final parameter, oact, is not NULL, the kernel fills in the struct sigaction to which oact points with the current disposition of that signal. Listing One contains a simple program that illustrates how to use sigaction.
Listing One installs a simple signal handler for two signals, SIGINT and SIGTSTP. SIGINT is sent to applications when the user presses ^C on the terminal, and SIGTSTP is sent when the user presses ^Z. (Well, those are the standard keystrokes anyway; they can be changed.) Don't worry about how the kernel knows what processes to send the signals to. That's something of a black art that I may discuss in the future. SIGINT normally kills a process, and SIGTSTP normally suspends it. By setting up a signal handler for those signals, the program disables those default behaviors.

Listing One: Sigaction in Action

#include <signal.h>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

/* Simple signal handler */
void handler(int sig) {
    if (sig == SIGINT)
        printf("got SIGINT\n");
    else
        printf("got SIGTSTP\n");
}

int main(void) {
    struct sigaction sa;
    struct sigaction oldint;

    /* Initialize the sa structure */
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;

    /* Set up the signal handlers; remember the old handler for
       SIGINT so we can restore it later */
    sigaction(SIGINT, &sa, &oldint);
    sigaction(SIGTSTP, &sa, NULL);

    /* Sleep for five seconds, or until a signal wakes us up */
    sleep(5);

    /* Restore the behavior of SIGINT just to show it can be done. */
    sigaction(SIGINT, &oldint, NULL);

    return 0;
}

After setting up a signal handler, the program calls sleep() for five seconds. Since sleep() takes a predictable amount of time, it's considered a fast system call and will not be restarted after the signal is delivered. Notice the oldint variable. It's used to store the original sigaction information for SIGINT so it can be restored at the end of the program. While that's not terribly important for this program, it's a good illustration of the technique. To give this program a try, compile it, run it, and press ^C or ^Z. You'll get a message from the signal handler, and the program will exit gracefully.

Manipulating the Signal Mask

The only other major concept in signal programming is manipulating the process's signal mask. The signal mask specifies which signals the process has blocked, which prevents those signals from being delivered to the process. If one is sent, the kernel waits to deliver it until the process unblocks that signal. (Remember from last month that signals aren't queued, though; no matter how many times a blocked signal is sent, only one is delivered.) Blocking signals is incredibly handy, since it restricts when a signal handler can be run and makes it much easier to write a good signal handler. Changing the process's signal mask is done through sigprocmask().

int sigprocmask(int how, const sigset_t *modset, sigset_t *oldset);

There are three different things this system call can do: set the signal mask, remove signals from the current signal mask, and add signals to the current signal mask. The first parameter, how, tells sigprocmask() how to combine the current signal mask with the modset value passed as the second parameter. If the final parameter is not NULL, the sigset_t to which it points is filled in with the process's original signal mask, which makes it easy to restore later. Here are the values the how parameter can take, and how modset is combined with the signal mask in each case:

SIG_BLOCK: The signals contained in modset are added into the current signal mask. (Those signals become blocked.)
SIG_UNBLOCK: The signals contained in modset are removed from the current signal mask.
SIG_SETMASK: The current signal mask is set to exactly the signals contained in modset.

When a process has received a signal that is currently blocked and thus hasn't yet been delivered, that signal is said to be pending. A process can find out what signals are pending through the sigpending() system call.

int sigpending(sigset_t *set);

On return, the sigset_t pointed to by sigpending()'s sole parameter contains exactly the signals that are currently pending.

sigsuspend

The last major system call related to POSIX signals lets a process suspend itself until a signal (any signal) has been received. At the same time, the process can change its signal mask for the duration of that system call.

int sigsuspend(const sigset_t *mask);

When sigsuspend() is called, the process's signal mask is set to the sigset_t pointed to by mask (if mask is NULL, the process's signal mask is left alone). Once a signal has been sent to the process and handled by the process, sigsuspend() restores the process's signal mask and returns. (Note that multiple signals could arrive before sigsuspend returns.) The only gotcha here is that the signals in the sigset_t passed to sigsuspend are blocked; the application ends up waiting for signals that are not in that signal set.

On first glance, it might look like sigsuspend() is basically equivalent to this code snippet:

sigprocmask(SIG_SETMASK, &newmask, &oldmask);
sleep(1000000);    /* a close approximation of forever */
sigprocmask(SIG_SETMASK, &oldmask, NULL);

Unfortunately, it's not the same at all. For most cases, it will behave identically. However, think about what happens if the program is written as follows:

1. Set up a signal handler for SIGUSR1.
2. Block SIGUSR1.
3. Set the signal mask to unblock SIGUSR1.
4. Wait for a signal to arrive.
5. Restore the original signal mask, blocking SIGUSR1 again.

Steps 3-5 are what both sigsuspend and the code snippet I showed claim to do. Now think about what happens if you use the code snippet and you get a SIGUSR1 delivered to us between steps 3 and 4. The program ends up doing this:

3. Set the signal mask to unblock SIGUSR1. (SIGUSR1 arrives, and its handler runs.)
4. Wait for a signal to arrive, even though one has already been handled.
5. Restore the original signal mask (whenever we finally get here).

Note that SIGUSR1 was handled before you waited for the signal to arrive, so when you start waiting for the signal, you keep on waiting even though you already got one! This isn't what you intended.

This particular type of problem is called a race condition. It's essentially a race between the signal arriving and the process getting to the "wait for SIGUSR1" step. The program will work properly most of the time, but if the signal arrives at the wrong moment, it breaks. Race conditions are notoriously difficult to debug, and the only good way of fixing them is to avoid them in the first place.

The sigsuspend() system call avoids this mess entirely. By combining steps 3, 4, and 5 into a single step, it doesn't leave any place for an unwelcome signal to sneak in. The kernel guarantees that a signal won't arrive in the middle of a fast system call, so the race condition is avoided. Since fast system calls are single, indivisible units, they are said to be "atomic operations." The idea of atomic operations that can't be interrupted is important in many aspects of programming, and signals are a fairly gentle introduction to the idea.

Next month, I'll explain what the various signals are and give some pointers on how to write a good signal handler. Until then, happy hacking.

Erik Troan is a developer for Red Hat Software and co-author of Linux Application Development. He can be reached at [email protected].
http://www.linux-mag.com/id/407
crawl-002
refinedweb
2,348
60.55
Getting started

The C# code generated by Thrift uses the Thrift namespace. To be able to use the code generated by Thrift you need to reference Thrift.dll in your project to gain access to the Thrift namespace. This is required for the Thrift generated C# code to compile.

Building Thrift.dll using Visual Studio

If you haven't done so already, download a snapshot of the Thrift source code and extract it.

- Direct link to the snapshot

Open the Thrift.sln solution file under thrift/lib/csharp/src. Build the solution by pressing F7. You should now have the required Thrift.dll located under thrift/lib/csharp/src/bin/Debug/Thrift.dll.

Reference Thrift.dll in your application (in Visual Studio)

- Open up your application solution in Visual Studio.
- Under the Project menu, click Add Reference.
- Browse to and select the freshly built Thrift.dll.

You can now use the Thrift namespace in your source code:

using Thrift;
using Thrift.Protocol;
using Thrift.Server;
http://wiki.apache.org/thrift/ThriftUsageCSharp
CC-MAIN-2016-30
refinedweb
164
60.11
This dataset was produced as part of the Crop Type Detection competition at the Computer Vision for Agriculture (CV4A) Workshop at the ICLR 2020 conference. The objective of the competition was to create a machine learning model to classify fields by crop type from images collected during the growing season by the Sentinel-2 satellites.

The ground reference data were collected by the PlantVillage team, and Radiant Earth Foundation curated the training dataset after inspecting and selecting more than 4,000 fields from the original ground reference data. The dataset has been split into training and test sets (3,286 in the train and 1,402 in the test). The dataset is cataloged in four tiles. These tiles are smaller than the original Sentinel-2 tile that has been clipped and chipped to the geographical area that labels have been collected.

Each tile has:

a) 13 multi-band observations throughout the growing season. Each observation includes 12 bands from the Sentinel-2 L2A product and a cloud probability layer. The twelve bands are [B01, B02, B03, B04, B05, B06, B07, B08, B8A, B09, B11, B12]. The cloud probability layer is a product of the Sentinel-2 atmospheric correction algorithm (Sen2Cor) and provides an estimated cloud probability (0-100%) per pixel. All of the bands are mapped to a common 10 m spatial resolution grid.

b) A raster layer indicating the crop ID for the fields in the training set.

c) A raster layer indicating field IDs for the fields (both training and test sets). Fields with a crop ID of 0 are the test fields.

Resources:

- A Guide to Access the Data on Radiant MLHub, by Hamed Alemohammad
- A Guide to Load and Visualize the Data in Python, by Hamed Alemohammad
- CV4A ICLR 2020 Starter Notebooks, by Devis Peressutti
- Field-Level Crop Type Classification with k Nearest Neighbors: A Baseline for a New Kenya Smallholder Dataset, by Hannah Kerner, Catherine Nakalembe and Inbal Becker-Reshef

Citation: Radiant Earth Foundation (2020) "CV4A Competition Kenya Crop Type Dataset", Version 1.0, Radiant MLHub. [Date Accessed]

from radiant_mlhub import Dataset

ds = Dataset.fetch('ref_african_crops_kenya_02')
for c in ds.collections:
    print(c.id)

See also the Python client quick-start guide.
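As a quick sketch of going one step further with the Python client, the whole dataset can be pulled down locally; the download call and output path below are illustrative, so check the client documentation for the exact API of your installed version:

from radiant_mlhub import Dataset

ds = Dataset.fetch('ref_african_crops_kenya_02')

# download all assets for the dataset into a local directory
# (an MLHub API key is assumed to be configured in the environment)
ds.download(output_dir='./data')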
https://mlhub.earth/data/ref_african_crops_kenya_02
CC-MAIN-2022-40
refinedweb
360
52.19
I am unable to play "protected" flash video, such as Amazon Prime Instant Video. From what I've read and uncovered, this seems to be due to a lack of HAL being installed on my computer. Confirmation that it is required for protected video can be seen towards the beginning of

However, hal is not in the gentoo portage tree, and in any case has been deprecated and replaced by udev. How can I go about getting Amazon Prime Instant Video to work again? I was considering grabbing the source from but the links there won't load, and trying to install it from old ebuilds or from overlays which claim to still support it (e.g. kde-sunset) results in a compilation error:

In file included from addon-generic-backlight.c:38:0:
/usr/include/glib-2.0/glib/gmain.h:21:2: error: #error "Only <glib.h> can be included directly."

Has anyone else solved this issue?

emerge hal-flash

That's all you gotta do these days, as it is in portage. WFM

For anyone in my shoes who needs to get this installed, grawity's comments to his answer hold the key on how to do it. For an explicit step-by-step:

Step 1: Grab the code

# git clone
# git clone

Step 2: Install hal-info

# cd hal-info
# ./autogen.sh
# make && make install
# cd ..

Step 3: Fix the hal code

To do this, replace all instances of #include <glib/gmain.h> with #include <glib.h>. You can do that with a command like:

# find hal -name "*.c" -print | xargs sed -i 's/#include <glib\/gmain\.h>/#include <glib\.h>/g'

For some reason, that missed one reference (I'm not really a regexp/sed guru), so I just did a

grep -r "#include <glib/gmain.h>" *

and fixed it manually.

Step 4: Install hal

# cd hal
# ./autogen.sh --disable-policy-kit
# make && make install

Step 5: Don't forget the dbus config!

# cp hal.conf /etc/dbus-1/system.d/

That's it! Now just run it with hald (/usr/local/sbin/hald).

HAL works on top of udev; it has never been "replaced by" it completely; those features that were can be disabled in hal (such as ACL management). There shouldn't be any conflicts as long as Flash Player is the only user of HAL.
http://superuser.com/questions/415238/protected-flash-video-requires-hal-on-gentoo
CC-MAIN-2016-30
refinedweb
418
75.4
Sunsetting API v1: we're so excited to show you all of the great things about our latest API, API v4. We're confident that once you get started using it, you'll see how easy API v4 makes managing your CloudFlare settings. (For those of you who are curious where CloudFlare's API v2 and v3 went, they ran away with IPv5 and PHP 6.) If you are using API v1 and need to migrate to API v4, we've written extensive migration docs here for you to follow. They contain every API call from v1 and their equivalent in v4 side by side.

What will happen after the sunset?

After CloudFlare discontinues support for API v1 in November 2016, any calls to API v1 will return the HTTP status code 410 Gone with the message: "This API has been deprecated in favor of API v4, available at."

What can you do with API v4?

CloudFlare uses the v4 API to power our customer dashboard, so unlike API v1, it has support for every single feature on CloudFlare. The list of features API v4 has that weren't previously available in v1 is extensive. Here's a list:

- Manage your zone's Page Rules.
- Upload a new SSL certificate for a zone.
- Manage Origin Certificates for your zones.
- Manage your Railgun connections.
- Manage custom error pages for your zone.
- View analytics by status code and by data center (Enterprise only).
- Set firewall access rules at a user level, allowing access rules to be set across all zones you control.
- Added options for access rules including: IPv6 support, IPv4 and IPv6 CIDR support, ASN support, country support, and additional CAPTCHA and JavaScript Challenge options.
- Pass through 502 and 504 error pages from your origin server instead of showing a CloudFlare error page (Enterprise only).
- Create mobile redirects.
- Toggle on/off response buffering vs. streaming (Enterprise only).
- Turn on Query String Sorting in cache (Enterprise only).
- Manage Polish.
- Manage HSTS.
- Change your TLS Client Auth setting.
- Add/remove the True-Client-IP header (Enterprise only).
- Force traffic to connect over TLS 1.2 only.
- Purge by Tag (Enterprise only).
- Add zones to your account or initiate another zone activation check.
- Manage your user account.
- View your billing profile or billing history.
- View subscriptions on an app and zone level.

Besides this extensive list, API v4 has support for Multi-User for Enterprise customers and for Virtual DNS management for Virtual DNS customers.

JSON in, JSON out

As well as the added functionality, developers will like working with the v4 API because of the consistent use of JSON in both the request and response: no need to parse and transcode data in and out of JSON and form-encoded data. While API v1 used form-encoded data in the request and JSON in the response, API v4 uses JSON throughout. This reduces any extra work on the client or server side to translate the data between request and response.

A namespace for humans

You'll also find the namespacing in API v4 more friendly to interact with. For example, the Always Online setting previously looked like this:

"ob": 1

While v4 represents Always Online as:

{
  "id": "always_online",
  "value": "on",
  "editable": true,
  "modified_on": "2014-01-01T05:20:00.12345Z"
}

How can you get started using API v4?

We have plenty of resources for you to get started with. If you need to migrate from using API v1 to using API v4, we have thorough migration docs for you written here. If you want to jump right in, you can find the API v4 documentation here.
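To get a quick feel for these conventions, here is a minimal sketch in Python; the zone ID and credentials are placeholders, and the classic X-Auth-* headers are assumed (see the v4 docs for current authentication options):

import requests

API = 'https://api.cloudflare.com/client/v4'
headers = {
    'X-Auth-Email': 'user@example.com',  # placeholder account email
    'X-Auth-Key': 'YOUR_API_KEY',        # placeholder API key
    'Content-Type': 'application/json',
}
zone_id = 'YOUR_ZONE_ID'                 # placeholder zone identifier

# read the Always Online setting: JSON out
r = requests.get('%s/zones/%s/settings/always_online' % (API, zone_id),
                 headers=headers)
print(r.json())

# change it: JSON in, JSON out
r = requests.patch('%s/zones/%s/settings/always_online' % (API, zone_id),
                   headers=headers, json={'value': 'off'})
print(r.json().get('success'))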
And if you’re a Go, Python or Node.js user, we have client libraries you can use to interact with the API available on our github here: (Go client, Python client and Node.js client).
https://blog.cloudflare.com/sunsetting-api-v1-in-favor-of-cloudflares-current-client-api-api-v4/
CC-MAIN-2018-22
refinedweb
633
73.37
The simplest rate of return to calculate is the accounting rate of return (ARR). This is a very fundamental calculation to determine how much value an investment generates for the corporation and its owners, the stockholders. It requires only two pieces of information: the amount of earnings before interest and taxes (EBIT) generated by the project and the cost of the investment. Once you know those two things, the calculation goes like this:

ARR = EBIT attributed to project / Net investment

The accounting rate of return is calculated by dividing the amount of EBIT generated by the project by the net investment of the project. This calculation tells you the proportion of net earnings before taxes that you're generating for the investment cost. This calculation is usually done on a year-by-year basis.

Note that because this equation doesn't take multi-period variables into consideration, you have to calculate it anew for each period (usually a year). So, in year 1, you might calculate a -3 percent rate of return. That sounds bad, but if you're talking about the investment in developing a whole new product line, you need to consider that sales are usually slow during the first year. By year 3, you might expect a 2 percent rate of return, and so forth.

You may be asking how to determine the amount of EBIT to attribute to a given project! The answer isn't too bad. Basically, you just go through the steps of developing an income statement, but only for the new project. Find out how many sales this new line or product is generating, and then subtract the costs of operating the project. That's simple, right?

When your capital investment is just a single step in the production process, determining how much value is being added by that step takes a little more work. Basically, you have to break down the entire production process into its individual contributing steps. The total production process is 100 percent of the final product. There are a couple of ways to determine what percentage of the production process a single step constitutes. One way is to simply use a proportion of the total cost of production. Sure, this method is easy, but there's a better way: you can do something called transfer pricing, which estimates the market value of each step in the process by doing some research to find out how much it would cost to hire some other company to do that step. This method helps you in two ways:

- It helps you do your capital budgeting by determining the amount of added value for that single step and the amount of EBIT you can attribute to that step, to make sure that the investment will actually generate a positive return on investment.
- It determines the fair market value of performing that step to see whether your company is being financially efficient. If some other company can perform that step better or more cheaply, you should probably outsource that step to the other company.

If you know the lifespan of the project or machine, you can forecast the rate of return you experience each year. Whether you're successful at this forecast or not will depend entirely on how closely your forecasts match the actual rates of return, of course, but you can still do these forecasts. The total rate of return on the investment is the total EBIT generated by that investment divided by the cost of the investment.
The revenues used to calculate EBIT include all the revenues that investment generates over its entire life, plus the final revenue generated using its salvage or scrap value. The final revenue generated by any project is its scrap or salvage value.
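To make the year-by-year arithmetic concrete, here is a small worked sketch in Python; the investment cost and EBIT figures are invented for illustration and simply mirror the -3 percent and 2 percent examples above:

# ARR = EBIT attributed to the project / net investment,
# recalculated for each period (usually a year)

net_investment = 500_000.0  # hypothetical cost of the project

# hypothetical EBIT per year over the project's life; the last
# year would also include revenue from the salvage/scrap value
ebit_by_year = [-15_000.0, 2_500.0, 10_000.0, 40_000.0, 75_000.0]

for year, ebit in enumerate(ebit_by_year, start=1):
    arr = ebit / net_investment
    print(f"Year {year}: ARR = {arr:.1%}")

# Output:
# Year 1: ARR = -3.0%   (slow first-year sales)
# Year 2: ARR = 0.5%
# Year 3: ARR = 2.0%
# Year 4: ARR = 8.0%
# Year 5: ARR = 15.0%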
https://www.dummies.com/article/business-careers-money/business/accounting/general-accounting/how-to-calculate-the-accounting-rate-of-return-169219/
CC-MAIN-2022-21
refinedweb
637
58.01
How to show texts on views

Hi, I'm trying to put some texts and forms in a view on the OpenStack Horizon Dashboard. I was trying to do it using the render_to_response method from django.shortcuts in the Python code, and the view doesn't show anything. My code is this:

from django import http
from django.utils.translation import ugettext_lazy as _
from django.views.generic import TemplateView
from django.shortcuts import render_to_response

class IndexView(TemplateView):
    template_name = 'visualizations/validar/index.html'

    def index(self):
        text = "it's a little text to do a test"
        return self.render_to_response('visualizations/validar/index.html', {'text': text})

and in my template, like this:

{% block main %}
<h3> The text is : {{ text }}</h3>
{% endblock %}

It's a simple code, just to test this method. And my urls.py file is like this:

from django.conf.urls.defaults import patterns, url
from .views import IndexView

urlpatterns = patterns('',
    url(r'^$', IndexView.as_view(), name='index')
)

Can you be more specific as to what or how it doesn't work? What other steps did you follow (e.g. did you add your view to urls.py, etc.)?

When I say that it didn't work, I want to say that the view doesn't show anything. I added the urls.py in the question.
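For what it's worth, here is a likely fix, sketched under the assumption that IndexView is a plain Django TemplateView as used by Horizon: Django never calls a method named index, so the custom method above is silently ignored and the template renders with an empty context. Supplying the variable through get_context_data should make it appear:

from django.views.generic import TemplateView

class IndexView(TemplateView):
    template_name = 'visualizations/validar/index.html'

    def get_context_data(self, **kwargs):
        # extend the default context instead of bypassing TemplateView
        context = super(IndexView, self).get_context_data(**kwargs)
        context['text'] = "it's a little text to do a test"
        return context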
https://ask.openstack.org/en/question/1992/how-to-show-texts-on-views/?answer=2131
CC-MAIN-2020-29
refinedweb
211
70.5
How can I compare the content of two files in Python?
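A minimal sketch of an answer using only the standard library (the file paths are placeholders):

import filecmp
import hashlib

# direct comparison; shallow=False forces a byte-by-byte check
# rather than trusting os.stat() metadata
print(filecmp.cmp('file_a.txt', 'file_b.txt', shallow=False))

# alternative: compare digests, which also works when the two
# files live on different machines and only hashes can travel
def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

print(sha256_of('file_a.txt') == sha256_of('file_b.txt'))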
https://www.edureka.co/community/1440/how-can-i-compare-the-content-of-two-files-in-python?show=1443
CC-MAIN-2019-35
refinedweb
111
73.54
Overview

dnlib is a library that can read, write and create .NET assemblies and modules. It was written for de4dot, which must have a rock solid assembly reader and writer library since it has to deal with heavily obfuscated assemblies with invalid metadata. If the CLR can load the assembly, dnlib must be able to read it and save it.

Features

- Supports reading, writing and creating .NET assemblies/modules targeting any .NET framework (eg. desktop, Silverlight, Windows Phone, etc).
- Supports reading and writing mixed mode assemblies (eg. C++/CLI)
- Can read and write non-ECMA compatible .NET assemblies that MS' CLR can load and execute
- Very stable and can handle obfuscated assemblies that crash other similar libraries.
- High and low level access to the metadata
- Output size of non-obfuscated assemblies is usually smaller than the original assembly
- Metadata tokens and heaps can be preserved when saving an assembly
- Assembly reader has hooks for decrypting methods and strings
- Assembly writer has hooks for various writer events
- Easy to port code from Mono.Cecil to dnlib
- Add/delete Win32 resource blobs
- Saved assemblies can be strong name signed and enhanced strong name signed

Compiling

You must have Visual Studio 2008 or later. The solution file was created by Visual Studio 2010, so if you use VS2008, open the solution file and change the version number so VS2008 can read it.

Examples

All examples use C#, but since it's a .NET library, you can use any .NET language (eg. VB.NET). See the Examples project for several examples.

Opening a .NET assembly/module

First of all, the important namespaces are dnlib.DotNet and dnlib.DotNet.Emit. dnlib.DotNet.Emit is only needed if you intend to read/write method bodies. All the examples below assume you have the appropriate using statements at the top of each source file:

using dnlib.DotNet;
using dnlib.DotNet.Emit;

ModuleDefMD is the class that is created when you open a .NET module. It has several Load() methods that will create a ModuleDefMD instance. If it's not a .NET module/assembly, a BadImageFormatException will be thrown.

Read a .NET module from a file:

ModuleDefMD module = ModuleDefMD.Load(@"C:\path\to\file.exe");

Read a .NET module from a byte array:

byte[] data = System.IO.File.ReadAllBytes(@"C:\path\of\file.dll");
ModuleDefMD module = ModuleDefMD.Load(data);

You can also pass in a Stream instance, an address in memory (HINSTANCE) or even a System.Reflection.Module instance:

System.Reflection.Module reflectionModule = typeof(void).Module; // Get mscorlib.dll's module
ModuleDefMD module = ModuleDefMD.Load(reflectionModule);

To get the assembly, use its Assembly property:

AssemblyDef asm = module.Assembly;
Console.WriteLine("Assembly: {0}", asm);

Saving a .NET assembly/module

Use module.Write(). It can save the assembly to a file or a Stream.

module.Write(@"C:\saved-assembly.dll");

If it's a C++/CLI assembly, you should use NativeWrite():

module.NativeWrite(@"C:\saved-assembly.dll");

To detect it at runtime, use this code:

if (module.IsILOnly) {
    // This assembly has only IL code, and no native code (eg. it's a C# or VB assembly)
    module.Write(@"C:\saved-assembly.dll");
}
else {
    // This assembly has native code (eg. C++/CLI)
    module.NativeWrite(@"C:\saved-assembly.dll");
}

Strong name sign an assembly

Use the following code to strong name sign the assembly when saving it:

using dnlib.DotNet.Writer;
...
// Open or create an assembly
ModuleDef mod = ModuleDefMD.Load(.....);

// Create writer options
var opts = new ModuleWriterOptions(mod);

// Open or create the strong name key
var signatureKey = new StrongNameKey(@"c:\my\file.snk");

// This method will initialize the required properties
opts.InitializeStrongNameSigning(mod, signatureKey);

// Write and strong name sign the assembly
mod.Write(@"C:\out\file.dll", opts);

Enhanced strong name signing an assembly

See this MSDN article for info on enhanced strong naming.

Enhanced strong name signing without key migration:

using dnlib.DotNet.Writer;
...

// Open or create an assembly
ModuleDef mod = ModuleDefMD.Load(....);

// Open or create the signature keys
var signatureKey = new StrongNameKey(....);
var signaturePubKey = new StrongNamePublicKey(....);

// Create module writer options
var opts = new ModuleWriterOptions(mod);

// This method will initialize the required properties
opts.InitializeEnhancedStrongNameSigning(mod, signatureKey, signaturePubKey);

// Write and strong name sign the assembly
mod.Write(@"C:\out\file.dll", opts);

Enhanced strong name signing with key migration:

using dnlib.DotNet.Writer;
...

// Open or create an assembly
ModuleDef mod = ModuleDefMD.Load(....);

// Open or create the identity and signature keys
var signatureKey = new StrongNameKey(....);
var signaturePubKey = new StrongNamePublicKey(....);
var identityKey = new StrongNameKey(....);
var identityPubKey = new StrongNamePublicKey(....);

// Create module writer options
var opts = new ModuleWriterOptions(mod);

// This method will initialize the required properties and add
// the required attribute to the assembly.
opts.InitializeEnhancedStrongNameSigning(mod, signatureKey, signaturePubKey, identityKey, identityPubKey);

// Write and strong name sign the assembly
mod.Write(@"C:\out\file.dll", opts);

Type classes

The metadata has three type tables: TypeRef, TypeDef, and TypeSpec. The classes dnlib uses are called the same. These three classes all implement ITypeDefOrRef.

There are also type signature classes. The base class is TypeSig. You'll find TypeSigs in method signatures (return type and parameter types) and locals. The TypeSpec class also has a TypeSig property. All of these types implement IType.

TypeRef is a reference to a type in (usually) another assembly.
TypeDef is a type definition and it's a type defined in some module. This class does not derive from TypeRef. :)
TypeSpec can be a generic type, an array type, etc.
TypeSig is the base class of all type signatures (found in method sigs and locals). It has a Next property that points to the next TypeSig. Eg. a Byte[] would first contain a SZArraySig, and its Next property would point to the Byte signature.
CorLibTypeSig is a simple corlib type. You don't create these directly. Use eg. module.CorLibTypes.Int32 to get a System.Int32 type signature.
ValueTypeSig is used when the next class is a value type.
ClassSig is used when the next class is a reference type.
GenericInstSig is a generic instance type. It has a reference to the generic type (a TypeDef or a TypeRef) and the generic arguments.
PtrSig is a pointer sig.
ByRefSig is a by reference type.
ArraySig is a multi-dimensional array type. Most likely when you create an array, you should use SZArraySig, and not ArraySig.
SZArraySig is a single dimension, zero lower bound array. In C#, a byte[] is a SZArraySig, and not an ArraySig.
GenericVar is a generic type variable.
GenericMVar is a generic method variable.
Some examples if you're not used to the way type signatures are represented in metadata:

ModuleDef mod = ....;

// Create a byte[]
SZArraySig array1 = new SZArraySig(mod.CorLibTypes.Byte);

// Create an int[][]
SZArraySig array2 = new SZArraySig(new SZArraySig(mod.CorLibTypes.Int32));

// Create an int[,]
ArraySig array3 = new ArraySig(mod.CorLibTypes.Int32, 2);

// Create an int[*] (one-dimensional array)
ArraySig array4 = new ArraySig(mod.CorLibTypes.Int32, 1);

// Create a Stream[]. Stream is a reference class so it must be enclosed in a ClassSig.
// If it were a value type, you would use ValueTypeSig instead.
TypeRef stream = new TypeRefUser(mod, "System.IO", "Stream", mod.CorLibTypes.AssemblyRef);
SZArraySig array5 = new SZArraySig(new ClassSig(stream));

Sometimes you must convert an ITypeDefOrRef (TypeRef, TypeDef, or TypeSpec) to/from a TypeSig. There are extension methods you can use:

// array5 is defined above
ITypeDefOrRef type1 = array5.ToTypeDefOrRef();
TypeSig type2 = type1.ToTypeSig();

Naming conventions of metadata table classes

For most tables in the metadata, there's a corresponding dnlib class with the exact same or a similar name. Eg. the metadata has a TypeDef table, and dnlib has a TypeDef class. Some tables don't have a class because they're referenced by other classes, and that information is part of some other class. Eg. the TypeDef class contains all its properties and events, even though the TypeDef table has no property or event column.

For each of these table classes, there's an abstract base class, and two sub classes. These sub classes are named the same as the base class but end in either MD (for classes created from the metadata) or User (for classes created by the user). Eg. TypeDef is the base class, and it has two sub classes: TypeDefMD, which is auto-created from metadata, and TypeDefUser, which is created by the user when adding user types. Most of the XyzMD classes are internal and can't be referenced directly by the user. They're created by ModuleDefMD (which is the only public MD class). All XyzUser classes are public.

Metadata table classes

Here's a list of the most common metadata table classes:

AssemblyDef is the assembly class.
AssemblyRef is an assembly reference.
EventDef is an event definition. Owned by a TypeDef.
FieldDef is a field definition. Owned by a TypeDef.
GenericParam is a generic parameter (owned by a MethodDef or a TypeDef).
MemberRef is what you create if you need a field reference or a method reference.
MethodDef is a method definition. It usually has a CilBody with CIL instructions. Owned by a TypeDef.
MethodSpec is an instantiated generic method.
ModuleDef is the base module class. When you read an existing module, a ModuleDefMD is created.
ModuleRef is a module reference.
PropertyDef is a property definition. Owned by a TypeDef.
TypeDef is a type definition. It contains a lot of interesting stuff, including methods, fields, properties, etc.
TypeRef is a type reference. Usually to a type in another assembly.
TypeSpec is a type specification, eg. an array, generic type, etc.

Method classes

The following are the method classes: MethodDef, MemberRef (method ref) and MethodSpec. They all implement IMethod.

Field classes

The following are the field classes: FieldDef and MemberRef (field ref). They both implement IField.

Comparing types, methods, fields, etc

dnlib has a SigComparer class that can compare any type with any other type. Any method with any other method, etc. It also has several pre-created IEqualityComparer<T> classes (eg.
TypeEqualityComparer, FieldEqualityComparer, etc.) which you can use if you intend to eg. use a type as a key in a Dictionary<TKey, TValue>.

The SigComparer class can also compare types with System.Type, methods with System.Reflection.MethodBase, etc. It has many options you can set, see SigComparerOptions. The default options are usually good enough, though.

// Compare two types
TypeRef type1 = ...;
TypeDef type2 = ...;
if (new SigComparer().Equals(type1, type2))
    Console.WriteLine("They're equal");

// Use the type equality comparer
Dictionary<IType, int> dict = new Dictionary<IType, int>(TypeEqualityComparer.Instance);
TypeDef type1 = ...;
dict.Add(type1, 10);

// Compare a `TypeRef` with a `System.Type`
TypeRef type1 = ...;
if (new SigComparer().Equals(type1, typeof(int)))
    Console.WriteLine("They're equal");

It has many Equals() and GetHashCode() overloads.

.NET Resources

There are three types of .NET resources, and they all derive from the common base class Resource. ModuleDef.Resources is a list of all resources the module owns.

EmbeddedResource is a resource that has data embedded in the owner module. This is the most common type of resource and it's probably what you want.
AssemblyLinkedResource is a reference to a resource in another assembly.
LinkedResource is a reference to a resource on disk.

Win32 resources

ModuleDef.Win32Resources can be null or a Win32Resources instance. You can add/remove any Win32 resource blob. dnlib doesn't try to parse these blobs.

Parsing method bodies

This is usually only needed if you have decrypted a method body. If it's a standard method body, you can use MethodBodyReader.Create(). If it's similar to a standard method body, you can derive a class from MethodBodyReaderBase and override the necessary methods.

Resolving references

TypeRef.Resolve() and MemberRef.Resolve() both use module.Context.Resolver to resolve the type, field or method. The custom attribute parser code may also resolve type references. If you call Resolve() or read custom attributes, you should initialize module.Context to a ModuleContext. It should normally be shared between all modules you open.

AssemblyResolver asmResolver = new AssemblyResolver();
ModuleContext modCtx = new ModuleContext(asmResolver);

// All resolved assemblies will also get this same modCtx
asmResolver.DefaultModuleContext = modCtx;

// Enable the TypeDef cache for all assemblies that are loaded
// by the assembly resolver. Only enable it if all auto-loaded
// assemblies are read-only.
asmResolver.EnableTypeDefCache = true;

All assemblies that you yourself open should be added to the assembly resolver cache:

ModuleDefMD mod = ModuleDefMD.Load(...);
mod.Context = modCtx; // Use the previously created (and shared) context
mod.Context.AssemblyResolver.AddToCache(mod);

Resolving types, methods, etc from metadata tokens

ModuleDefMD has several ResolveXXX() methods, eg. ResolveTypeDef(), ResolveMethod(), etc.

Creating mscorlib type references

Every module has a CorLibTypes property. It has references to a few of the simplest types such as all integer types, floating point types, Object, String, etc. If you need a type that's not there, you must create it yourself, eg.:

TypeRef consoleRef = new TypeRefUser(mod, "System", "Console", mod.CorLibTypes.AssemblyRef);

Importing runtime types, methods, fields

To import a System.Type, System.Reflection.MethodInfo, System.Reflection.FieldInfo, etc into a module, use the Importer class.
Importer importer = new Importer(mod);
ITypeDefOrRef consoleRef = importer.Import(typeof(System.Console));
IMethod writeLine = importer.Import(typeof(System.Console).GetMethod("WriteLine"));

You can also use it to import types, methods etc from another ModuleDef.

All imported types, methods etc will be references to the original assembly. I.e., it won't add the imported TypeDef to the target module. It will just create a TypeRef to it.

Using decrypted methods

If ModuleDefMD.MethodDecrypter is initialized, ModuleDefMD will call it and check whether the method has been decrypted. If it has, it calls IMethodDecrypter.GetMethodBody() which you should implement. Return the new MethodBody. GetMethodBody() should usually call MethodBodyReader.Create() which does the actual parsing of the CIL code.

It's also possible to override ModuleDefMD.ReadUserString(). This method is called by the CIL parser when it finds a Ldstr instruction. If ModuleDefMD.StringDecrypter is not null, its ReadUserString() method is called with the string token. Return the decrypted string or null if it should be read from the #US heap.

Low level access to the metadata

The low level classes are in the dnlib.DotNet.MD namespace. Open an existing .NET module/assembly and you get a ModuleDefMD. It has several properties, eg. StringsStream is the #Strings stream. The MetaData property gives you full access to the metadata.

To get a list of all valid TypeDef rids (row IDs), use this code:

using dnlib.DotNet.MD;
// ...
ModuleDefMD mod = ModuleDefMD.Load(...);
RidList typeDefRids = mod.MetaData.GetTypeDefRidList();
for (int i = 0; i < typeDefRids.Count; i++)
    Console.WriteLine("rid: {0}", typeDefRids[i]);

You don't need to create a ModuleDefMD, though. See DotNetFile.
https://bitbucket.org/manojdjoshi/dnlib
CC-MAIN-2016-40
refinedweb
2,440
53.37
OK, I'm feeling really dumb right now on a C program I'm working on. I'm getting a segfault that I assume is due to an int array I am creating. I say "I assume" because when I comment out that line of code, the segfaults go away. The code is multi-threaded using pthreads, and I change the number of threads by a #define directive. A low number of threads never causes the segfault. I am manually assigning the stack size before creating the threads, and I am including the overhead of the array. I've narrowed down the potential area where it occurs, and I've gone through and commented out ALL array indexing operations to eliminate the possibility of bad indexing. It's to the point where I am literally just indexing through for loops.

Here's how I am allocating the thread stack size:

pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &s);

s = 7 * sizeof(int) +
    1 * sizeof(long) +
#ifndef SKIPSTEAL
    sizeof(int[NUM_THREADS - 1]) +
#endif
    sizeof(struct1 *) +
    sizeof(struct2 *) +
    100;

if (s < PTHREAD_STACK_MIN)
    s = PTHREAD_STACK_MIN;
pthread_attr_setstacksize(&attr, s);

All of the threads are successfully created before the segfault, and if I run out of memory, the program catches it and aborts. I can't get the same place to show up in GDB for the segfault. WTF am I missing here?
http://cboard.cprogramming.com/c-programming/137388-pthreads-seg-fault-woes.html
CC-MAIN-2014-23
refinedweb
252
51.78
Python code to send an email with an attachment

Hi @Vipul, try out this code. I've given the explanation through the comments. (Note: 'from' is a Python keyword, so the sender address is stored in a plain variable instead; the garbled attachment section has been reconstructed with the usual MIMEBase steps.)

import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email import encoders

sender = "sender email address"
to = "receiver email address"

# instance of MIMEMultipart
data = MIMEMultipart()

# storing the sender's email address
data['From'] = sender

# storing the receiver's email address
data['To'] = to

# storing the subject
data['Subject'] = "Subject of the Mail"

# string to store the body of the mail
body = "Body-of-the-mail"

# attach the body to the msg instance
data.attach(MIMEText(body, 'plain'))

# open the file to be sent
filename = "File-name-with-extension"
attachment = open("Path of the file", "rb")

# instance of MIMEBase, payload set to the file contents
p = MIMEBase('application', 'octet-stream')
p.set_payload(attachment.read())

# encode into base64 and add the attachment header
encoders.encode_base64(p)
p.add_header('Content-Disposition', "attachment; filename= %s" % filename)

# attach the instance 'p' to instance 'data'
data.attach(p)

# creates SMTP session
s = smtplib.SMTP('smtp.gmail.com', 587)

# start TLS for security
s.starttls()

# Authentication
s.login(sender, "Password-of-the-sender")

# Converts the Multipart msg into a string
text = data.as_string()

# sending the mail
s.sendmail(sender, to, text)

# terminating the session
s.quit()

Hope this helps!! Thanks!
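For what it's worth, on Python 3.6+ the standard library's EmailMessage API does the same job with less boilerplate. A sketch, using the same placeholder addresses and Gmail SMTP settings as above:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg['From'] = "sender email address"
msg['To'] = "receiver email address"
msg['Subject'] = "Subject of the Mail"
msg.set_content("Body-of-the-mail")

# attach the file; maintype/subtype are generic binary here
with open("Path of the file", "rb") as f:
    msg.add_attachment(f.read(),
                       maintype='application',
                       subtype='octet-stream',
                       filename="File-name-with-extension")

with smtplib.SMTP('smtp.gmail.com', 587) as s:
    s.starttls()
    s.login("sender email address", "Password-of-the-sender")
    s.send_message(msg)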
https://www.edureka.co/community/48984/python-code-to-send-an-email-with-an-attachment
CC-MAIN-2021-43
refinedweb
273
60.51
SYNOPSIS

#include <stdio.h>

size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);
size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);

DESCRIPTION

The function fread() reads nmemb elements of data, each size bytes long, from the stream pointed to by stream, storing them at the location given by ptr.

The function fwrite() writes nmemb elements of data, each size bytes long, to the stream pointed to by stream, obtaining them from the location given by ptr.

For nonlocking counterparts, see unlocked_stdio(3).

CONFORMING TO

C89, POSIX.1-2001.

SEE ALSO

read(2), write(2), feof(3), ferror(3), unlocked_stdio(3)
http://www.linuxguruz.com/man-pages/fread/
CC-MAIN-2018-26
refinedweb
113
68.1
Scriptom is an optional Groovy module originally developed by Guillaume Laforge. It combines the elegant "syntactical sugar" of Groovy with the power of the Jacob library (Java COM Bridge). Scriptom lets you use ActiveX or COM Windows components from Groovy. The result is something that looks eerily similar to VBScript - only groovier. You can use Scriptom to automate Word or Excel documents, control Internet Explorer, make your PC talk using the Microsoft Speech API, monitor processes with WMI (Windows Management Instrumentation), or browse the Windows Registry using WShell - and much more. Scriptom also provides an easy way to talk to custom VB6 or Microsoft .NET libraries.

Of course, Scriptom can be used only on Microsoft Windows. Scriptom is included as an option in the Windows Installer, and Scriptom can be downloaded from this page (see below). The Scriptom codebase is stable and feature-complete. The Jacob project - Scriptom's foundation - was started in 1999 and is being used in countless production Java applications worldwide. Scriptom is now in wide use as well, and has proven to be stable and mature. Scriptom gives you all the COM-scripting power of Jacob, only it is a lot easier. See Getting Started with COM for some tips to get you started.

The following are required to run Scriptom: Microsoft Windows, Java 1.5 or higher, and Groovy 1.5 or better.

Maven-generated documentation is available; it contains useful information from the build, including JavaDoc for all projects.

Scriptom is part of the Windows Installer. If you are running Groovy scripts on Windows, you are all set. If you are running Groovy outside the Windows Installer, and you aren't using Maven or Grape, you probably just need the pre-packaged JAR files and associated binaries. You've come to the right place. Note that as of release 1.6.0, the distribution archive is formatted differently. The new format matches the folder structure needed by the Windows Installer. Source code, build scripts, and documentation are no longer included in the distribution.

You can add Scriptom to a Maven project. Because there is JNI involved, it isn't quite as simple as adding a dependency, but it is doable. Need to come up with an example of creating an assembly with GMaven.

Add the Scriptom dependency to your Maven project:

<dependency>
  <groupId>org.codehaus.groovy.modules.scriptom</groupId>
  <artifactId>scriptom</artifactId>
  <version>1.6.0</version>
</dependency>

Add the Jacob JAR dependency. This JAR must be loaded only once, so if you are working in a server application like Tomcat or GlassFish, you need to ensure that it is loaded by a single shared classloader:

<dependency>
  <groupId>net.sf.jacob-project</groupId>
  <artifactId>jacob</artifactId>
  <version>1.14.3</version>
  <type>jar</type>
</dependency>

Jacob requires a DLL. It's a JNI project, after all. But Jacob actually supports two versions of the DLL it needs, one for 32-bit x86 systems, and one for the AMD x64 architecture (works with 64-bit Intel chips too). The easiest way to get this to work is to put both DLLs somewhere on the system path. Jacob will automatically pick the one it needs.
Need to detail the other ways to specify the Jacob DLL.

<dependency>
  <groupId>net.sf.jacob-project</groupId>
  <artifactId>jacob</artifactId>
  <version>1.14.3</version>
  <type>dll</type>
  <classifier>x64</classifier>
</dependency>
<dependency>
  <groupId>net.sf.jacob-project</groupId>
  <artifactId>jacob</artifactId>
  <version>1.14.3</version>
  <type>dll</type>
  <classifier>x86</classifier>
</dependency>

Need to detail the TLB projects, constants, etc.

Download the project archive and extract the files. Install the jar file and DLL file(s) into your project, and optionally install an update from Microsoft:

- Add the Scriptom jar file (scriptom-1.5.4bX-XX.jar) to your Java classpath. It contains both Scriptom and Jacob class files, so you must not include jacob.jar.
- Copy both Scriptom-1.5.4bX-XX.dll files to somewhere on your java.library.path (usually somewhere on the system 'PATH').
- To avoid the dreaded java.lang.UnsatisfiedLinkError, download and install one of the following updates from Microsoft: Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) or Microsoft Visual C++ 2005 SP1 Redistributable Package (x64). Scriptom doesn't support the IA-64 (Itanium) architecture at this time, mainly due to lack of interest. If you are wondering about the different processor architectures, check out the x86-64 wiki. It is usually necessary to install these updates on Windows Server 2003 and Windows 2000, and we've found that it may also be necessary for Windows XP and even Vista.

Scriptom 1.5 is not supported for Groovy 1.0 and earlier (Scriptom 1.2 is still available).

The project source is managed by Subversion. The projects are already set up to work with Eclipse, but it isn't hard to get them working with other IDEs, and you can get by with Notepad. The project trunk is located at odehaus.org/groovy/modules/scriptom/trunk.

This is a Maven build. Because the tests are dependent on Windows technologies, there are some rather strange software requirements. To build, use the following command line:

C:\work\groovy\modules\scriptom\trunk>mvn clean install

The latest build requires Groovy 1.5 or better. Scriptom includes source files and compiled DLLs for Windows operating systems. The build process requires Java 1.5 or higher (Java 1.6 recommended) and ANT 1.6.5 or better.

Scriptom 1.5 is a substantial upgrade to previous versions of Scriptom, and is not backward compatible. We hope you will agree that it is worth a little code rework to get all these great new features! Scriptom 1.2 is the version that is documented in Groovy in Action. Since Scriptom 1.5 is not backward compatible with previous versions of Scriptom, a little rework is needed to get your scripts running again.

The following example makes the computer talk using the Microsoft Speech API; we're also going to have it speak asynchronously and wait until it is done.

import org.codehaus.groovy.scriptom.*
import static org.codehaus.groovy.scriptom.tlb.sapi.SpeechVoiceSpeakFlags.*
import static org.codehaus.groovy.scriptom.tlb.sapi.SpeechRunState.*

//Definitive proof that you CAN talk and chew gum at the same time.
Scriptom.inApartment
{
  def voice = new ActiveXObject('SAPI.SpVoice')

  //This runs synchronously.
  voice.speak "Hello, GROOVY world!"

  //This runs asynchronously.
  voice.speak "GROOVY and SCRIPT um make com automation simple, fun, and groovy, man!", SVSFlagsAsync

  while(voice.Status.RunningState != SRSEDone)
  {
    println 'Chew gum...'
    sleep 1000
  }
}

println 'Speaker is done.'

If you have scripted COM before, you are probably used to using "magic numbers" throughout your code in place of COM constants. In this code sample, we're using fully-qualified constants instead.

Some additional examples are included with Scriptom, such as consuming Visual Basic 6 (VB6) and Visual Basic .NET COM-enabled DLLs, and there are articles about COM scripting in general. All known (unresolved) issues and feature requests are listed in the Scriptom Jira database. Changes to each build are summarized in the Change Log. Recent builds of Scriptom can be found here. Older versions are archived.

Scriptom uses late binding. Visual Basic and VBA use early binding by default, but they also support late binding (with CreateObject). If you are translating working Visual Basic code and your Scriptom code fails at the point where you've got your call

ActiveXObject oSDO = new ActiveXObject("SageDataObject140.SDOEngine")

(or whatever your object is) with an error message

Caught: org.codehaus.groovy.scriptom.ActiveXObject$CreationException: Could not create ActiveX object: 'SageDataObject140.SDOEngine'; Can't get object clsid from progid

then the chances are the progid "Sage...." that you are using is the wrong one. So you need to go and look in the registry to find what it might be. One way to do this is with Microsoft's new PowerShell cmd utility, which you can download from the MS web site. Install this and run the command line

dir REGISTRY::HKEY_CLASSES_ROOT\CLSID -include PROGID -recurse | foreach {$_.GetValue("")}

It will produce a ream of different progids which you can sort and search for one that looks a likely candidate. In my case it was SDOEngine.14.

Alternatively, if you have used the Scriptom utility ExtractTlbInfo.groovy to generate name maps for the COM object, you could read the source code for the generated class (in my case SageDataObject140.java) and you might find some code and/or comment like this:

/**
 * A {@code Map} of CoClass names to prog-ids for this type library.<p>
 *
 * Note that some objects that support events do not publish a prog-id.
 * This is a known limitation of this library that we hope to resolve in
 * a future release.<p>
 *
 * Supported prog-ids:
 * <ul>
 * <li><b>SDOEngine</b> = SDOEngine.14</li>
 * </ul>
 */
public final static Map progIds;
static {
    TreeMap v = new TreeMap();
    v.put("SDOEngine", "SDOEngine.14");
    progIds = Collections.synchronizedMap(Collections.unmodifiableMap(v));
}

Additionally, you can use the Object Browser included with Visual Basic (I recommend the one with VBA) to figure out possible progids and then search the registry. This is a good way to associate interfaces (and their methods) with a particular progid. If there are VBScript or JScript examples available, they will include the correct progids, since these languages are always late-bound.

If all else fails, hit the Groovy mailing lists. There are lots of people out there with COM experience who can point you in the right direction.
http://docs.codehaus.org/exportword?pageId=24576222
The program that executes may need command line arguments. I am fine with writing a wrapper to do this, but does that create an insecurity? E.g., the command is "/usr/local/bin/myapp --mode=search", and I write a wrapper script around it.

Yes, use a public/private key pair. In the authorized_keys file, if you do your research, you will note that you can restrict a key to a specific command and even allow it only from certain IP addresses.

What you want is to use ChrootDirectory in your sshd_config file. Here's a nice tutorial on the process.

There are three options:

1. Set the application (or a suitable wrapper for it) as the shell for the account. This way the account does not have a real shell. When the user passes a command, it will be passed as a command-line argument to the "shell", preceded by the -c flag. Do anything you want with it.

2. Use public key authentication and define what each user can run using the command option in .ssh/authorized_keys. This allows running different commands for different keys and allows revoking the permissions of individual users. It does, however, need each user to send you their public key (private keys should never leave the machine on which they are generated, so users should generate key pairs and send the public parts to you). The configured command gets the command passed on the ssh command line in the SSH_ORIGINAL_COMMAND environment variable, so you can use it if you want. In this case the account does need a valid shell, because ssh uses it to run the configured command.

3. Does it really have to be ssh? You mention it does not have to be authenticated, in which case perhaps directly starting it from inetd could do, or telnet if you need a terminal rather than just stdin/stdout. Note that option 1 will work with telnet too.

A wrapper script is safe if you take due care writing it. If it does not accept any input, it should be OK. If you do want to use the input (command-line arguments in option 1, the SSH_ORIGINAL_COMMAND environment variable in option 2), you have to carefully sanitize it and properly quote it. Of course, depending on how much you trust the application itself, you still may want to put it in a chroot or a separate namespace.
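To illustrate option 2 concretely, here is a minimal sketch; the key material, paths, and the whitelist of allowed modes are all hypothetical:

    # ~/.ssh/authorized_keys (all on one line; key body shortened)
    command="/usr/local/bin/myapp-wrapper",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... user@client

    #!/bin/sh
    # /usr/local/bin/myapp-wrapper: only ever runs myapp, with a whitelisted mode
    case "$SSH_ORIGINAL_COMMAND" in
      search|index)
        exec /usr/local/bin/myapp --mode="$SSH_ORIGINAL_COMMAND"
        ;;
      *)
        echo "permission denied" >&2
        exit 1
        ;;
    esac

Note the whitelist: matching the whole input against fixed words, rather than interpolating it into a command line, sidesteps quoting problems entirely.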
http://superuser.com/questions/748206/linux-create-a-public-ssh-account-that-executes-a-specific-binary-and-then-cl
Hi there, hope somebody can help me. Here is the issue: I have VB code that checks the date in an *.mdb file, comparing the system date with an expiration date contained in the same *.mdb. The current date, taken from the system, is also written into this *.mdb. Everything works fine when the system date format and the *.mdb date format are the same; it stops working when the two formats are different (e.g. US format and European format). The software then expires, since it finds a different date than the one set. Please, has any of you ever encountered a similar situation? Have you got any idea how to solve this issue? The ideal solution would be putting the two dates in the same format. Many thanks, and have a great Sunday.

Where is the date data coming from that it would have two different formats? Is it being entered manually? Linked to another db or source? Imported? Which format is the system date, US or European? Why can't the dates be modified to agree in format before they get to Access? If this is older data, you could do an update query to change the format of the dates to be consistent with the system date. If this is new data that is being imported, then you need to run a check on the date formats and update prior to importation if possible, or import to a temp table, run an update query and then an append query. Just some thoughts, without adequate information to give you a definitive answer. Alan

Hello, many thanks for your prompt answer, and apologies for not being clear enough. The date data come from two places: (1) the *.mdb, where they are written manually once the mdb is issued, and (2) the code, which gets the current date from the system and writes it into the mdb database to be compared with the expiration date. The format is European. Indeed, the aim of the thread was to know whether it is possible to make the two dates agree before entering Access. I suppose the most logical solution is, as you kindly suggest, running a check on the date formats and updating prior to importation, or importing to a temp table, running an update query and then an append query. My questions now are: how to realize the check on date formats, and how to run the update query, in order to make the two dates uniform? Many thanks.

Let's take these one step at a time. If the dates are being manually entered into the database, ensure that all entries are made in the European format using an input form. Under no circumstances should you allow the user to enter data directly into tables. One way to ensure proper format is to set the field's Format property to the format desired. Additionally, I like to use Allen Browne's calendar function to ensure consistency; here is a link to that. For the older items in your table, create a query that has the record id and the date fields. In a new field, create an expression such as

    =Day([DateField]) & "/" & Month([DateField]) & "/" & Year([DateField])

This will put all the dates in the European format (if I got the sequence correct). Save the query as a select query, then change it to an update query. Look at this video on how to create an update query. This final video on data manipulation may help also. Good luck, and post back with any specific questions or issues you may encounter.
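For what it's worth, here is a sketch of both halves of that advice; the table and field names (tblLicense, ExpireDate, ExpireText) are hypothetical. First, an update query that rewrites stored text dates into the European day/month/year order:

    UPDATE tblLicense
    SET ExpireText = Day([ExpireDate]) & "/" & Month([ExpireDate]) & "/" & Year([ExpireDate]);

And in the VB code itself, if the field is a true Date/Time field, comparing Date values rather than formatted strings makes the regional display format irrelevant:

    ' Hypothetical VBA check: Date-to-Date comparison, no string formats involved
    If rs!ExpireDate < Date Then
        MsgBox "License expired."
    End If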
http://www.excelforum.com/access-tables-and-databases/766199-date-format.html
I am kind of new to anything Python and was using the Basic Printing tutorial script from ESRI, but when I try to run it, it works for a while and then my entire ArcCatalog crashes and asks if I would like to send a report. It is just the following code:

    import arcpy
    import os
    import uuid

    # Input WebMap json
    Web_Map_as_JSON = arcpy.GetParameterAsText(0)

    # The template location in the server data store
    # (the path was cut off in the original post)
    templateMxd = r"c:\test"

    # Convert the WebMap to a map document and get the "Webmap" data frame.
    # These lines were garbled in the original post; reconstructed here from
    # Esri's standard web-map printing sample, on which this script is based.
    result = arcpy.mapping.ConvertWebMapToMapDocument(Web_Map_as_JSON, templateMxd)
    mxd = result.mapDocument
    df = arcpy.mapping.ListDataFrames(mxd, 'Webmap')[0]

    # Remove the service layers.
    # This will just leave the vector layers from the template.
    for lyr in arcpy.mapping.ListLayers(mxd, data_frame=df):
        if lyr.isServiceLayer:
            arcpy.mapping.RemoveLayer(df, lyr)

    arcpy.env.scratchWorkspace = 'c:/Temp'

    # Use the uuid module to generate a GUID as part of the output name
    # This will ensure a unique output name
    output = 'WebMap_{}.pdf'.format(str(uuid.uuid1()))
    Output_File = os.path.join(arcpy.env.scratchFolder, output)

    # Export the WebMap
    arcpy.mapping.ExportToPDF(mxd, Output_File)

    # Set the output parameter to be the output file of the server job
    arcpy.SetParameterAsText(1, Output_File)

    # Clean up - delete the map document reference
    filePath = mxd.filePath
    del mxd, result
    os.remove(filePath)

What did you provide to GetParameterAsText(0)? For code formatting, see "Code formatting... the basics++".

I set it up in ArcCatalog as shown above. Well, I just changed it to an mxd with just a point layer in it as a test and it ran fine, so I will have to see what is in the original mxd that is causing the crash. Thanks for the response, Dan; if I figure out what is going on with the original mxd, I will respond here.
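One way to narrow this down further (a sketch; the paths are hypothetical) is to take the web-map conversion out of the equation and export the template directly. If this alone crashes, the problem is in the template's contents rather than in the script:

    import arcpy

    # Hypothetical paths for a standalone test of the template mxd
    mxd = arcpy.mapping.MapDocument(r"c:\test\template.mxd")
    arcpy.mapping.ExportToPDF(mxd, r"c:\Temp\template_test.pdf")
    del mxd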
https://community.esri.com/t5/python-questions/export-mxd-to-pdf-script-crashing-arccatalog/m-p/557781
Introduction

One of the most crucial data structures to learn while preparing for interviews is the linked list. In a coding interview, having a thorough understanding of linked lists can be a major benefit. The java.util.LinkedList.push() method is used to push an element onto the top (beginning) of the stack represented by a linked list. This is similar to LinkedList's addFirst() method: it simply puts the element at the first position (top) of the linked list.

push() Method Syntax:

    LinkedListObject.push(Object x)

Parameters: the method takes one parameter, an element of object type, representing the element to be added. The type of the element should be the same as that of the stack, represented by a LinkedList.

Return Value: it has a void return type. This method pushes the element passed as a parameter to the top of the stack represented by a linked list. We will just call the push(Object x) method.

Input: empty list; elements to be pushed: Coding, is, Fun
Output: [Fun, is, Coding]

Explanation:
- First, Coding was added to the top of the empty list.
- Then, is was added to the top of the list.
- Finally, Fun was added to the top of the list.

Code Implementation

    import java.util.LinkedList;

    public class PrepBytes {
        public static void main(String[] args) {
            LinkedList<String> llst = new LinkedList<>();
            llst.push("Coding");
            llst.push("is");
            llst.push("Fun");
            System.out.println(llst);
        }
    }

Output

    [Fun, is, Coding]

Time Complexity: O(1), as the new element is added at the top of the list, i.e., no list traversal is needed.

So, in this article, we have tried to explain the most efficient way to use the LinkedList push() method in Java. The Java Collection Framework is very important when it comes to coding interviews. If you want to solve more questions on linked lists, curated by our expert mentors at PrepBytes, you can follow this link: Linked List.
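As a quick addendum: since push(e) behaves exactly like addFirst(e), the two are interchangeable, and pop() undoes either. A small sketch:

    import java.util.LinkedList;

    public class PushVsAddFirst {
        public static void main(String[] args) {
            LinkedList<String> a = new LinkedList<>();
            LinkedList<String> b = new LinkedList<>();

            a.push("x");      // stack-style: adds at the head
            b.addFirst("x");  // list-style: identical effect

            System.out.println(a.equals(b)); // true
            System.out.println(a.pop());     // "x" - removes from the head
        }
    }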
https://www.prepbytes.com/blog/java/linkedlist-push-method-in-java/
Class::MethodMaker::Engine - The parameter passing, method installation & non-data-structure methods of Class::MethodMaker.

This class is for internal implementation only. It is not a public API. The non-data-structure methods do form part of the public API, but are not called directly: rather, they are called through the use/import interface, as for data-structure methods.

import

This performs argument parsing ready for calling create_methods. In particular, this is the point at which v1 & v2 calls are distinguished. This is implicitly called as part of a use statement:

    use Class::MethodMaker
      [ scalar => [qw/ foo bar baz /],
        new    => [qw/ new /],
      ];

is equivalent to

    Class::MethodMaker->import([scalar => [qw/ foo bar baz /],
                                new    => [qw/ new /],
                               ]);

See perldoc -f use for details of this equivalence. The methods created are installed into the class calling the import - or more accurately, the first class up the calling stack that is not Class::MethodMaker or a subclass thereof.

    Class::MethodMaker->import([scalar => [+{ -type    => 'File::Stat',
                                              -forward => [qw/ mode size /],
                                              '*_foo'  => '*_fig',
                                              '*_gop'  => undef,
                                              '*_bar'  => '*_bar',
                                              '*_hal'  => '*_sal',
                                            },
                                            qw/ -static bob /,
                                           ]
                               ]);

parse_options

Parse the arguments given to import and call create_methods appropriately. See the main text for options syntax.

    Class::MethodMaker->parse_options('TargetClass',
                                      [scalar => [{ -type    => 'File::stat',
                                                    -forward => [qw/ mode size /],
                                                    '*_foo'  => '*_fig',
                                                    '*_gop'  => undef,
                                                    '*_bar'  => '*_bar',
                                                    '*_hal'  => '*_sal',
                                                  },
                                                  qw( -static bob ),
                                                 ]]);

    Class::MethodMaker->parse_options('TargetClass2',
                                      [scalar => ['baz',
                                                  { -type    => 'File::stat',
                                                    -forward => [qw/ mode size /],
                                                    '*_foo'  => '*_fog',
                                                    '*_bar'  => '*_bar',
                                                    '*_hal'  => '*_sal',
                                                  },
                                                  qw( -static bob ),
                                                 ]],
                                      +{ -type => 'Math::BigInt', },
                                      +{ '*_foo' => '*_fig',
                                         '*_gop' => undef, },
                                     );

Its arguments are: the class into which to install components; the arguments to parse, as a single arrayref; a hashref of options to apply to all components created by this call (subject to overriding by explicit option calls); and a hashref of renames to apply to all components created by this call (subject to overriding by explicit rename calls).

create_methods

Add methods to a class. Methods for multiple components may be added this way, but create_methods handles only one set of options. parse_options is responsible for sorting which options to apply to which components, and calling create_methods appropriately.

    Class::MethodMaker->create_methods($target_class,
                                       scalar => bob,
                                       +{ static  => 1,
                                          type    => 'File::Stat',
                                          forward => [qw/ mode size /],
                                        },
                                       +{ '*_foo' => '*_fig',
                                          '*_gop' => undef,
                                          '*_bar' => '*_bar',
                                          '*_hal' => '*_sal',
                                        }
                                      );

Its arguments are:

- The class to add methods to.
- The basic data structure to use for the component, e.g., scalar.
- The component name. The name must be a valid identifier, i.e., a continuous non-empty string of word (\w) characters, of which the first may not be a digit.
- A hashref of options. Some options (static, type, default, default_ctor) are handled by the auto-extender. These will be invoked if the name is present as a key and the value is true. Any other options are passed through to the method in question. The options should be named as-is; no leading hyphen should be applied (i.e., use {static => 1}, not {-static => 1}).
- A list of custom renames. It is a hashref from method name to rename. The method name is the generic name (i.e., featuring a * to replace with the component name). The rename is the value to rename with; it may itself contain a * to replace with the component name. If the rename is undef, the method is not installed.
For methods that would not be installed by default, use a rename value that is the same as the method name. So, if a type would normally install methods '*_foo', '*_gop', '*_tom' and optionally installs (but not by default) '*_bar', '*_wiz', '*_hal', using a renames value of

    { '*_foo' => '*_fig',
      '*_gop' => undef,
      '*_bar' => '*_bar',
      '*_hal' => '*_sal',
    }

with a component name of xx, then *_foo is installed as xx_fig, *_bar is installed as xx_bar, *_wiz is not installed, *_hal is installed as xx_sal, *_gop is not installed, and *_tom is installed as xx_tom. The value may actually be an arrayref, in which case the function may be called by any of the multiple names specified.

install_methods

    Class::MethodMaker->install_methods
      ($classname, { incr => sub { $i++ },
                     decr => sub { $i-- },
                   }
      );

Its arguments are the class into which the methods are to be installed, and the methods to install, as a hashref: keys are the method names; values are the methods themselves, as code refs.

new

    use Class::MethodMaker
      [ new => 'new' ];

Creates a basic constructor. Takes a single string or a reference to an array of strings as its argument. For each string, creates a simple method that creates and returns an object of the appropriate class. The generated method may be called as a class method, as usual, or as an instance method, in which case a new object of the same class as the instance will be created.

The constructor will accept as arguments a list of pairs, from component name to initial value. For each pair, the named component is initialized by calling the method of the same name with the given value. E.g.,

    package MyClass;
    use Class::MethodMaker
      [ new    => [qw/ -hash new /],
        scalar => [qw/ b c /],
      ];

    sub d {
      my $self = shift;
      $self->{d} = $_[0] if @_;
      return $self->{d};
    }

    package main;
    # The statement below implicitly calls
    #   $m->b(1); $m->c(2); $m->d(3)
    # on the newly constructed m.
    my $m = MyClass->new(b => 1, c => 2, d => 3);

Note that this can also call user-supplied methods that have the name of the component. Instead of a list of pairs, a single hashref may also be passed, which will be expanded appropriately. So the above is equivalent to:

    my $m = MyClass->new({ b => 1, c => 2, d => 3 });

Advanced users: Class::MethodMaker method renaming is taken into account, so even if the * method is renamed or removed, this will still work.

The -init option causes the new method to call an initializer method. The method is called init (original, eh?) by default, but the option may be given an alternative value. The init method is passed any arguments that were passed to the constructor, but the method is invoked on the newly constructed instance.

    use Class::MethodMaker
      [ new => [qw/ -init new1 /, { -init => 'bob' } => 'new2' ]];

Constructing with new1 involves an implicit call to init, whilst constructing with new2 involves an implicit call to bob (instead of init). It is the responsibility of the user to ensure that an init method (or whatever name) is defined.

There is also an option that creates a basic constructor which only ever returns a single instance of the class: i.e., after the first call, repeated calls to this constructor return the same instance. Note that the instance is instantiated at the time of the first call, not before.

abstract

    use Class::MethodMaker
      [ abstract => [ qw/ foo bar baz / ] ];

This creates a number of methods that will die if called. This is intended to support the use of abstract methods, which must be overridden in a useful subclass.

copy

    use Class::MethodMaker
      [ copy => [qw/ shallow -deep deep /] ];

This creates methods that produce a copy of self.
The copy is by default a shallow copy; any references will be shared by the instance upon which the method is called and the returned newborn. One option is taken, -deep, which causes the method to create deep copies instead (i.e., references are copied recursively).

Implementation note: deep copies are performed using the Storable module if available, else Data::Dumper. The Storable module is liable to be much quicker. However, this implementation note is not an API specification: the implementation details are open to change in a future version as faster/better ways of performing a deep copy become available. Note that deep copying does not currently support the copying of coderefs, ties or XS-based objects.
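A short usage sketch (the class and its data are hypothetical):

    package MyClass;
    use Class::MethodMaker
      [ new  => [qw/ -hash new /],
        copy => [qw/ shallow -deep deep /],
      ];

    package main;
    my $orig = MyClass->new;
    my $s = $orig->shallow;   # references still shared with $orig
    my $d = $orig->deep;      # references copied recursively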
http://search.cpan.org/dist/Class-MethodMaker/lib/Class/MethodMaker/Engine.pm
Hi Ben, the file structure is what it is because you are now within a framework, or rather a level of abstraction (or two) above Vue. And as such, Quasar has its own "predetermination" or "opinionation". Basically, the idea with Quasar is to offer a low barrier to entry. This is what is mentioned about the structure of the files in the docs: "If you are a beginner, all you'll need to care about is /quasar.conf.js (Quasar App Config file), /src/router, /src/layouts, /src/pages and optionally /src/assets." In other words, you just have to start designing your app and not overthink what is happening in the background (at least not at first). Quasar will take care of what you would otherwise need to think about. Scott

Thanks Scott; your point is well taken and I'm trying to go along with it. So perhaps you can answer this question for me. In a regular Vue app, we have main.js (as you see in the screenshot below); before we instantiate the Vue object, we have to check Firebase's state (to log the user in asynchronously) BEFORE we can instantiate the Vue object. But in Quasar-built apps we get another file, called app.js, in the Quasar folder, which does what main.js does, except that it is auto-generated and should not be edited. So my question (which I can't find in the docs): where do I add code at the entry point, like we did in main.js? Thanks!

@Ben-Hayat Ok, so I misread... I followed the herring instead of the bigger fish. I think what you are looking for is a Quasar plugin. Examples of plugins I use:

    import AnimatedVue from 'animated-vue'

    export default ({ Vue }) => {
      Vue.use(AnimatedVue)
    }

    import axios from 'axios'

    export default ({ Vue }) => {
      Vue.prototype.$axios = axios
    }

    // batman's utility belt for javascript
    import _ from 'lodash'

    export default ({ Vue }) => {
      Vue.prototype._ = _
    }

    import moment from 'moment'

    // leave the export, even if you don't use it
    export default ({ Vue }) => {
      Vue.prototype.$moment = moment
    }

    // form validation
    // See: this project is dead
    // now using dodoslug, a fork of node-slug
    // See:
    import slug from 'dodoslug'

    slug.defaults.modes['mymode'] = {
      replacement: '-',               // replace spaces with replacement
      symbols: true,                  // replace unicode symbols or not
      remove: /[._]/g,                // (optional) regex to remove characters
      lower: true,                    // result in lower case
      charmap: slug.charmap,          // replace special characters
      multicharmap: slug.multicharmap // replace multi-characters
    }
    slug.defaults.mode = 'mymode'

    export default ({ Vue }) => {
      Vue.prototype.$slug = slug
    }

"A common use case for Quasar applications is to run code before the root Vue instance is instantiated." That's exactly what I was looking for. Thank you, Sir!

Glad to have helped.

Yep. quasar.conf.js is a perfect example of that layer of abstraction above Vue I was talking about. I'm thinking about how getting started as a beginner can be made clearer in the docs. I mean, quasar init my-project is pretty simple, but where to go from there isn't really laid out as a clear guide for a beginner. I also think overall concepts about the abstraction, which most experienced Quasar devs probably take for granted, are not really explained clearly either and need better explaining. Let me give it some more thought over the weekend. Scott

@s-molinari Scott, more docs to make things more transparent are always welcome. Basically, a workflow of things after the Quasar CLI has created the project, and what those extra files are for.
Basically, an explanation of anything that Quasar adds on top of the Vue CLI that new users are not familiar with. By disclosing that information and explaining the differences (a table showing Vue vs. Quasar would be nice), we could immediately see, for example, that Vue uses main.js, which is editable, while Quasar uses app.js and requires plugins instead. Then: what plugins are, and how they are written and used. This way, new users can see things more clearly than they do now. Hope this helps.

@Ben-Hayat Forgot to give you the plugin registration (for the plugins above) in quasar.conf.js:

    module.exports = function (ctx) {
      return {
        // app plugins (/src/plugins)
        plugins: [
          'animated-vue',
          'axios',
          'download',
          'lodash',
          'moment',
          'slug'
        ],

@hawkeye64 Thanks Bud!

BTW, I'm working on a blog series to help newcomers dive into Quasar faster. The topic of this thread will also be covered in that series. The series is currently planned to be released around the time 1.0 is released, and will be released in, most likely, weekly installments. Scott
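To tie this back to the original question about Firebase at the entry point: a plugin file along these lines would be the Quasar counterpart of that main.js startup code. Treat it as a sketch only; the firebase imports, API calls, and file name are assumptions for illustration, following the same shape as the plugin examples above:

    // src/plugins/firebase.js (hypothetical)
    import firebase from 'firebase/app'
    import 'firebase/auth'

    export default ({ Vue }) => {
      firebase.initializeApp({ /* your config here */ })
      // make the auth handle available app-wide, like the $axios example above
      Vue.prototype.$auth = firebase.auth()
    }

You would then register 'firebase' in the plugins array of quasar.conf.js, exactly as shown above for the other plugins.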
https://forum.quasar-framework.org/topic/2743/suggestion-feedback-from-a-new-user-s-perspective/?lang=en-US&page=1
#include <lib_ca.h>

The CAMorph class contains the data for each morph. It has to be retrieved from the CAPoseMorphTag. Its members cover naming, node access, mode changes, and storing/applying morph data:

- Retrieves the name of the morph.
- Sets the name of the morph.
- Copies morph data from src.
- Retrieves the morph node for the object specified by bl.
- Retrieves the index of the specified morph node. Each morph node can be accessed through its index.
- Retrieves the index of the morph node for the object specified by bl.
- Retrieves the morph node specified by index.
- Retrieves the first node of the morph.
- Changes the morph's mode. For example, point data could be stored as rotational or correctional, and in a delta form (only differences from the base). It cannot be edited in this form, so the data mode must be changed to relative (CAMORPH_MODE::REL) or absolute (CAMORPH_MODE::ABS) before editing, and then restored to automatic (CAMORPH_MODE::AUTO) when finished. The flags must be passed as CAMORPH_MODE_FLAGS::EXPAND to expand the data from the delta form, and then as CAMORPH_MODE_FLAGS::COLLAPSE when finished. For example, VAMP expands all data types to relative data, does some changes, and finally restores all types to the collapsed (delta) form and to the user's mode (AUTO); a sketch of such calls appears at the end of this section.
- Stores the current object's state into the morph. The corresponding flags have to be set for the data; this should normally be CAMORPH_DATA_FLAGS::ASTAG if it is to be used by the user.
- Applies the morph to the object. The data to be applied is set with the flags.
- Retrieves the target of the morph.
- Sets the target of the morph.
- Sets the strength of the morph.
- Retrieves the strength of the morph.
- Retrieves whether the morph is applied at PostDeform.
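The two code lines referred to in the mode-changing discussion did not survive extraction. As a hedged C++ sketch only (SetMode is the SDK's mode-changing member; `morph`, `doc` and `tag` are assumed to be the CAMorph, the active BaseDocument and the CAPoseMorphTag at hand, and the exact flag spellings may vary by SDK version):

    // Expand all data types into editable, relative form (sketch).
    morph->SetMode(doc, tag, CAMORPH_MODE_FLAGS::ALL | CAMORPH_MODE_FLAGS::EXPAND, CAMORPH_MODE::REL);

    // ... edit the morph's data here ...

    // Collapse back to delta form and restore the user's mode (sketch).
    morph->SetMode(doc, tag, CAMORPH_MODE_FLAGS::ALL | CAMORPH_MODE_FLAGS::COLLAPSE, CAMORPH_MODE::AUTO);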
https://developers.maxon.net/docs/Cinema4DCPPSDK/html/class_c_a_morph.html
Debugging

Launching ImageJ in debug mode

To debug problems with ImageJ, it is often helpful to launch it in debug mode. See the Troubleshooting page for instructions.

Debugging plugins in an IDE (Netbeans, IntelliJ, Eclipse, etc)

To debug a plugin in an IDE, you most likely need to provide a main() method. To make things easier, we provide helper methods in fiji-lib in the class fiji.Debug to run plugins, and to load images and run filter plugins:

    import fiji.Debug;

    ...

    public static void main(String[] args) {
        Debug.runFilter("/home/clown/my-profile.jpg", "Gaussian Blur", "radius=2");
    }

You need to replace the first argument by a valid path to a sample image and the second argument by the name of your plugin (typically the class name with underscores replaced by spaces).

If your plugin is not a filter plugin, i.e. if it does not require an image to run, simply use the Debug.run(plugin, parameters) method.

Attaching to ImageJ instances

Sometimes, we need to debug things directly in ImageJ, for example because there might be issues with the plugin discovery (ImageJ wants to find the plugins in <ImageJ>/plugins/, and often we want to bundle them as .jar files, both of which are incompatible with Eclipse debugging). JDWP (Java Debug Wire Protocol) to the rescue!

After starting the Java Virtual Machine in a special mode, debuggers (such as Eclipse's built-in one) can attach to it. To start ImageJ in said mode, you need to pass the --debugger=<port> option:

    ImageJ.app/ImageJ-linux64 --debugger=8000

In Eclipse (or whatever JDWP-enabled debugger), select the correct project so that the source code can be found, mark the break-points where you want execution to halt (e.g. to inspect variables' values), and after clicking on Run>Debug Configurations... right-click on the Remote Java Application item in the left-hand side list and select New. Now you only need to make sure that the port matches the value that you specified (in the example above, 8000, Eclipse's default port number).

If you require more control over the ImageJ side -- such as picking a semi-random port if port 8000 is already in use -- you can also use the -agentlib:jdwp=... Java option directly (--debugger=<port> is just a shortcut for convenience):

    ImageJ.app/ImageJ-linux64 -agentlib:jdwp=server=y,suspend=y,transport=dt_socket,address=localhost:8000 --

(the -- marker separates the Java options -- if any -- from the ImageJ options). Once started that way, ImageJ will wait for the debugger to be attached, after printing a message such as:

    Listening for transport dt_socket at address: 46317

Note: calling imagej -agentlib:jdwp=help -- will print nice usage information with documentation of other JDWP options.

Attach ImageJ to a waiting Eclipse

Instead of making ImageJ the debugging server, when debugging startup events and headless operations it is easier to make ImageJ the client and Eclipse (or equivalent) the server. In this case you start the debugging session first, e.g. in Eclipse debug configurations you specify "Standard (Socket Listen)" as the connection type.
Then, simply start ImageJ with server=n instead of server=y, so that it connects out to the listening debugger:

    ImageJ.app/ImageJ-linux64 -agentlib:jdwp=server=n,suspend=y,transport=dt_socket,address=localhost:8000 --

Monitoring system calls

Linux

On Linux, you should call ImageJ using the strace command:

    strace -Ffo syscall.log ./imagej <args>

MacOSX

Use the dtruss wrapper around dtrace to monitor system calls:

    dtruss ./imagej <args>

Windows

To monitor all kinds of aspects of processes on Windows, use Sysinternal's Process Monitor.

Debugging shared (dynamic) library issues

Linux

Set the LD_DEBUG environment variable before launching ImageJ:

    LD_DEBUG=1 ./imagej <args>

MacOSX

Set the DYLD_PRINT_APIS environment variable before launching ImageJ:

    DYLD_PRINT_APIS=1 ./imagej <args>

Windows

Often, dynamic library issues are connected to a dependent .dll file missing. Download depends.exe and load the .dll file you suspect is missing a dependency.

Debugging JVM hangs

When the Java VM hangs, the reason might be a deadlock. Try taking a stack trace. If you have trouble, you can try one of the following advanced techniques:

- You can use the jstack command (you don't need to run ImageJ from the command line in this case). This requires that you first find the PID (process ID) of ImageJ. You can do so by running jps from the command line to print a list of running Java processes. If you're not sure which PID is ImageJ's, you can close ImageJ, run jps, open ImageJ and run jps again; whichever PID is present in the second run but not the first is ImageJ's. Then, to acquire a stack trace, just run:

    jstack <ImageJ's PID>

- For GUI-based debugging, you can also attach to the ImageJ PID using the jvisualvm program that you can find in java/<platform>/<jdk>/bin/. Here you can simply press a big Thread Dump button to view the stack trace. MacOSX users, please note that Apple decided that the VisualVM tool should no longer be shipped with the Java Development Kit; you will have to download it from here.

Regardless of which method you use to acquire the stack trace, to debug you will want to acquire multiple stack traces over time and compare. If all the stack traces are in the same method execution, then that's the source of the deadlock (or slowdown).

Debugging memory leaks

Sometimes, memory is not released properly, leading to OutOfMemoryExceptions. One way to find out what is happening is to use jvisualvm (see "Debugging JVM hangs" above) to connect to the ImageJ process, click on Heap Dump in the Monitor tab, in said tab select the sub-tab Classes, and sort by size. Double-clicking on the top user should get you to a detailed list of Instances, where you can expand the tree of references to find out what is still holding a reference.

Debugging hard JVM crashes

When you have found an issue that crashes the JVM, and you can repeat that crash reliably, there are a number of options to find out what is going on.

Using gdb

Typically when you debug a program that crashes, you start it in a debugger, to inspect the stack trace and the variables at the time of the crash. However, there are substantial problems with gdb when starting the Java VM; either gdb gets confused by segmentation faults (used by the JVM to handle NullPointerExceptions in an efficient manner), or it gets confused by the threading system -- unless you compile gdb yourself. But there is a very easy method to use gdb to inspect serious errors such as segmentation faults or trap signals nevertheless:

    ./imagej -XX:OnError="gdb - %p" --

Using lldb

On newer OS X versions, gdb has been replaced with lldb.
For those familiar with gdb already, there is an "LLDB to GDB Command Map" cheat sheet which may be useful.

Using the hs_err_pid<pid>.log files

The Java virtual machine (JVM) frequently leaves files of the format hs_err_pid<number>.log in the current working directory after a crash. Such a file starts with a preamble similar to this:

    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    # SIGSEGV (0xb) at pc=0x00007f3dc887dd8b, pid=12116, tid=139899447723792
    #
    # JRE version: 6.0_20-b02
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (16.3-b01 mixed mode linux-amd64)
    #
    # Problematic frame:
    # C [libc.so.6+0x86d8b] memcpy+0x15b
    #
    # If you would like to submit a bug report, please visit:
    #
    # The crash happened outside the Java Virtual Machine in native code.
    # See problematic frame for where to report the bug.
    #

followed by thread dumps and other useful information, including the command-line arguments passed to the JVM. The most important part is the line after the line "# Problematic frame:", because it usually gives you an idea in which component the crash was triggered.

Out of memory error

If the specific exception you're receiving (or you suspect) is an OutOfMemoryError, there are JVM flags that can be enabled when running ImageJ to help pinpoint the problem:

    ./imagej -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/desired/path/

The first option, -XX:+HeapDumpOnOutOfMemoryError, tells the JVM to create a heap dump (.hprof file) if an OutOfMemoryError is thrown. This is basically a snapshot of the JVM state when memory ran out. The second option, -XX:HeapDumpPath=/desired/path/, is not required, but is convenient for controlling where the resulting .hprof file is written. Note that these heap dumps are named by PID, and thus are not easily distinguishable by humans.

After acquiring a heap dump, you can analyze it yourself, e.g. with a memory analyzer, or post to the imagej-devel mailing list with a brief explanation of your problem.

Debugging Java code with jdb

How to attach the Java debugger jdb to a running ImageJ process

This requires two separate processes, ImageJ itself and the debugger. You can do this either in one shell, backgrounding the first process, or in two shells; the latter is recommended. In the two shells do the following:

Shell 1: start ImageJ with special parameters to open a port (8000 in this case) to which jdb can connect afterwards:

    ./imagej -Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y --

(Tested with Java 1.5.0, ymmv.)

Shell 2: tell jdb to attach to that port:

    jdb -attach 8000

This is an ultra quick start to jdb, the default Java debugger

Hopefully you are a little familiar with gdb, since jdb resembles it lightly. Notable differences:

- a breakpoint is set with "stop in <class>.<method>" or "stop at <class>:<line>". Just remember that the class must be fully specified, i.e. <package>.<subpackages...>.<classname>
- no tab completion
- no readline (cursor up/down)
- no shortcuts; you have to write "run", not "r", to run the program
- no listing files before the class was loaded
- much easier method to specify the location of the source: "use <dir>"
- "until" is "step", "step" is "stepi"

Okay, so here you go, a little demonstration (if you attach jdb to a running ImageJ process, you have to use the line from the previous section instead):

    $ jdb -classpath ij.jar ij.ImageJ
    > stop in ij.ImageJ.main
    Deferring breakpoint ij.ImageJ.main.
    It will be set after the class is loaded.
    > run
    run ij.ImageJ
    Set uncaught java.lang.Throwable
    Set deferred uncaught java.lang.Throwable
    >
    VM Started: Set deferred breakpoint ij.ImageJ.main

    Breakpoint hit: "thread=main", ij.ImageJ.main(), line=466 bci=0
    main[1] use .
    main[1] list
    462        //prefs.put(IJ_HEIGHT, Integer.toString(size.height));
    463    }
    464
    465    public static void main(String args[]) {
    466 =>     if (System.getProperty("java.version").substring(0,3).compareTo("1.4")<0) {
    467            javax.swing.JOptionPane.showMessageDialog(null,"ImageJ "+VERSION+" requires Java 1.4.1 or later.");
    468            System.exit(0);
    469        }
    470        boolean noGUI = false;
    471        arguments = args;
    main[1] print args[0]
    java.lang.IndexOutOfBoundsException: Invalid array range: 0 to 0
     args[0] = null
    main[1] print args.length
     args.length = 0
    main[1] step
    >
    Step completed: "thread=main", ij.ImageJ.main(), line=470 bci=28
    470        boolean noGUI = false;
    main[1] step
    >
    Step completed: "thread=main", ij.ImageJ.main(), line=471 bci=30
    471        arguments = args;
    main[1] set noGUI = true
     noGUI = true = true
    main[1] cont
    >
    The application exited

Inspecting serialized objects

If you have a file with a serialized object, you can use this Beanshell in the Script Editor to open a tree view of the object (double-click to open/close the branches of the view):

    import fiji.debugging.Object_Inspector;
    import ij.io.OpenDialog;
    import java.io.FileInputStream;
    import java.io.ObjectInputStream;

    dialog = new OpenDialog("Classifier", null);
    if (dialog.getDirectory() != null) {
        path = dialog.getDirectory() + "/" + dialog.getFileName();
        in = new FileInputStream(path);
        in = new ObjectInputStream(in);
        object = in.readObject();
        in.close();
        Object_Inspector.openFrame("classifier", object);
    }

Debugging Swing (Event Dispatch Thread) issues

Swing does not allow us to call all the methods on all UI objects from wherever we want. Some things, such as setVisible(true) or pack(), need to be called on the Event Dispatch Thread (AKA EDT). See Sun's detailed explanation as to why this is the case. There are a couple of ways to test for such EDT violations; see this blog post by Alexander Potochkin (current versions of debug.jar can be found here, and a short example appears near the end of this page).

Debugging Java3D issues

When Java3D does not work, the first order of business is to use Plugins ▶ Utilities ▶ Debugging ▶ Test Java3D. If this shows a rotating cube, but the 3D Viewer does not work, please click on Help ▶ Java3D Properties... in the 3D Viewer's menu bar.

Command line debugging

If this information is not enough to solve the trouble, or if Test Java3D did not work, then you need to call ImageJ from the command line to find out more. From the command line, you have several options to show more or less information about Java3D.

Windows & Linux

Please find the ImageJ-<platform> executable in the ImageJ.app/ directory (on 32-bit Windows, that would be ImageJ-win32.exe). Make a copy in the same directory and rename that to debug (on Windows: debug.exe). Simply double-click that. On Windows, you will see a console window popping up; to copy information for pasting somewhere else, please right-click the upper-left window icon, select Properties..., and activate the Quick Edit mode. Then mark the text in question by dragging the mouse with the left mouse button pressed, and copy it to the clipboard by right-clicking. On Linux, the output will be written to the file .xsession-errors in the home directory.

MacOSX

On MacOSX, you need to remember that any application is just a directory with a special layout.
So you can call ImageJ like this from the Terminal (which you will find in the Finder by clicking on Go>Utilities). Example command line:

    cd /Applications/ImageJ.app/Contents/MacOS/
    cp ImageJ-macosx debug
    ./debug

Show Java3D debug messages

    ./imagej -Dj3d.debug=true --

(Of course, you need to substitute the ./imagej executable name with the appropriate name for your platform.) Note: do not forget the trailing --; without it, ImageJ mistakes the first option for an ImageJ option rather than a Java one. Note, too: on Windows, you must not forget to pass the --console option (this can be anywhere on the command line).

Windows-specific stuff

On Windows, you can choose between OpenGL and Direct3D by passing -Dj3d.rend=ogl or -Dj3d.rend=d3d, respectively. Further, some setups require enough RAM to be reserved, so you might need to pass an option like --mem=1200m (make sure that you have enough RAM free before starting ImageJ that way, though!). If it turns out that memory was the issue, you can make the setting permanent via ImageJ's Edit ▶ Options ▶ Memory & Threads... menu entry.

More Java 3D properties

You can control quite a few things in Java 3D through setting Java properties. Remember, you can set properties using a command line like this:

    ./imagej -D<property-name>=<property-value> --

where you substitute <property-name> and <property-value> appropriately. You can have more than one such option, but make sure that they appear before the -- on the command line, otherwise ImageJ will mistake them for ImageJ options. A list of Java 3D properties was salvaged from the now-defunct j3d.org website.

Interactive debugging using a shared Terminal session

For users running Linux and MacOSX computers (or, on Windows, Cygwin with an openssh server), one can use an SSH tunnel for a debugging session shared between a user and a developer. All that is needed is a shared account on a public SSH server. The user should execute this command:

    ssh -R 2222:127.0.0.1:22 -t $ACCOUNT@$SSHSERVER screen

Once connected, the command

    ssh -p 2222 <user>@127.0.0.1

will open a connection back to the local machine. The developer should then execute this command:

    ssh -t $ACCOUNT@$SSHSERVER 'screen -x'

Since this provides a shared GNU screen session, both the user and the developer can execute commands and see the output. It is even quite common to use the terminal window as sort of a private chat room by typing out what you have to say, ending the line with a Ctrl+C (lest it get executed as a command).

After the debugging party is over, the user can log out securely by hitting Ctrl+D to log out from the local machine (since the user typed in their password in the GNU screen session themselves, there is no way for the developer to log back in without the user's explicit consent). Another Ctrl+D will terminate the GNU screen session, and yet another Ctrl+D will log out from the shared account on the SSH server.
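Returning to the Event Dispatch Thread section above: for reference, the standard fix for such violations is to hand the offending calls to the EDT. A minimal sketch, assuming frame is your window:

    javax.swing.SwingUtilities.invokeLater(new Runnable() {
        public void run() {
            frame.pack();           // both calls now run on the EDT
            frame.setVisible(true);
        }
    });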
https://imagej.net/Debugging