Hello guys,
I'm really stumped on this assignment. What I need to do is make a triangle's height based on the number entered by the user. So if the user enters 4, the output would be this:
$
$$
$$$
$$$$
And I can't figure out how to do that. What I have so far will print '$' down the screen. If I enter '4', it will print '$' down the screen 25 times, and of course, I know that's not right.
// Name: John Mollman
// Class: Java 1 - 5271
// Abstract: CHomework4A

// Imports -------------
import java.util.Scanner;

public class CHomework4A
{
    public static void main(String[] astrCommandLine)
    {
        // Get data from the user and store it in the variable 'intTriangleHeight'
        Scanner inputDevice = new Scanner(System.in);
        int intTriangleHeight = 0;

        // Loop until the user enters a number within 1 and 20
        do
        {
            System.out.println("Height of the triangle?");
            intTriangleHeight = inputDevice.nextInt();
        } while ((intTriangleHeight < 1) || (intTriangleHeight > 20));

        // Set a number of lines based on the height of the triangle
        // entered above
        for (int intIndex1 = 0; intIndex1 <= intTriangleHeight; intIndex1 += 1)
        {
            String strOutputLine = "$"; // Defines '$' as a building block

            // Print the number of '$' signs to go across
            for (int intIndex2 = 0; intIndex2 <= intTriangleHeight; intIndex2 += 1)
            {
                System.out.println(strOutputLine);
            }
        }
    }
}
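For reference, a corrected version of the loops (a minimal sketch, with a hypothetical class name): the outer loop runs once per row, the inner loop prints one '$' per column with print, and println ends the row.

import java.util.Scanner;

public class Triangle
{
    public static void main(String[] args)
    {
        Scanner inputDevice = new Scanner(System.in);
        System.out.println("Height of the triangle?");
        int height = inputDevice.nextInt();

        // One row per line, growing from 1 to 'height' characters
        for (int row = 1; row <= height; row += 1)
        {
            for (int col = 1; col <= row; col += 1)
            {
                System.out.print("$"); // print, not println: stay on this row
            }
            System.out.println(); // end the row
        }
    }
}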
Windows versus Apple
On 09/10/2013 at 01:51, xxxxxxxx wrote:
When developing plugins I know the main difference is file naming and the use of 'escape' characters.
Are there any other issue I should be aware of?
I get a strange error when using the plugin on a Mac, no issue on Windows.
E.g. TypeError: Sized() takes exactly 3 arguments (1 given)
But the code seems alright and no error in Windows:
def Sized(self, w, h):  # Window was resized.
    # Set flag to disable rendering during resizing
    self.redraw = True
E.g. AttributeError: 'Area' object has no attribute 'redraw'
But the code seems alright and no error in Windows:
if (not self.redraw) : #resize message, do not render
One thing I notice when looking at Niklas' code is an additional r when commenting.
r""", is that something Mac-specific?
On 09/10/2013 at 05:31, xxxxxxxx wrote:
The r means that the following string is a raw string, nothing OS-specific about that. I think he
is doing it because he is using these docstrings to feed Sphinx and he wants to make sure that
they are processed correctly / he does not have to double-type every backslash.
Cannot really say anything about the Mac problem, but sounds odd :).
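For illustration, a minimal sketch of what the r prefix changes (not part of the original thread):

normal = "C:\temp\new"   # \t and \n are interpreted as tab and newline escapes
raw = r"C:\temp\new"     # raw string: backslashes are kept literally
print(normal)            # prints C:<tab>emp, then a new line starting with ew
print(raw)               # prints C:\temp\new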
On 09/10/2013 at 06:15, xxxxxxxx wrote:
Ok, thanks for your thoughts.
The thing is, I do not own a Mac myself, so testing / debugging is difficult.
I'll start with outputting more debug lines and information.
I’ve recently had a chance to write a Google Maps control for EPiServer. It’s still somewhat buggy, and I’m still considering how to release it, since it still contains some JavaScript that is potentially GPL-infected and I would not like to contaminate someone’s code with it. I may end up rewriting it to some extent or making it more server-side so that it’s completely ASP-based.
Anyway…
We’ve started working on the rewrite of our site internally in a few CMSes, basically creating an internal competition over which of the engines/teams can build the best site the easiest and the fastest. I can say honestly, EPiServer has been a blast! Virtually any control we’ve decided to place there was almost completely effortless. The controls that are delivered (with sample usage on the demo site) just seem to cover everything. Well, almost everything. There is no map creation component as far as I can tell.
I’ve been wanting to write this control for quite a while, ever since I deployed a wiki for my family and started filling it in: I had a really nice experience with the Google Map extension to MediaWiki, and I wanted us to have the same on our site. And in the meantime we’ve started running into some limitations that required us to write some plugins for the editor’s side of the CMS. Striking two birds with one stone, here comes the Google Maps control for EPiServer.
Anyone familiar with EPiServer knows that the CMS allows you to define the content on any given page through a set of properties defined for its page type. There is a handful of those, and each of them comes with a specific editor. Some of them even come with so-called DOPE (Dynamic On-Page Editing). This feature is really so cool that by itself it’s probably one of the driving selling factors. I wanted it all!
To deliver it you need to inherit a property (in my case I decided to go with a LongString, as I can easily go over the 255-char limit if the user would decide to have more than a couple of flagpoints on his/her map) and define its editors.
I’ve found out that the property can be easily integrated with the CMS (virtually without any user intervention) by means of attributes/reflection. So here we go:
namespace Cognifide.EPiServerControls.GoogleMaps
{
[global::EPiServer.PlugIn.PageDefinitionTypePlugIn(
DisplayName = "GoogleMapData", Description = "Google Map")]
public class GoogleMapProperty : EPiServer.Core.PropertyLongString
{
…
}
}
Yay! one line of code and my class is a property and will show up in the system as one of the available data formats. Now how cool is that!?
The cool part of it is that now as it’s a property, it’s even easier to integrate it with the page.
<%@ Page language="c#" Codebehind="GoogleMapsPage.aspx.cs"
    AutoEventWireup="True"
    Inherits="development.Templates.GoogleMapsPage"
    MasterPageFile="~/templates/MasterPages/MasterPage.master" %>
<%@ Register TagPrefix="EPiServer"
    Namespace="EPiServer.WebControls" Assembly="EPiServer" %>
<asp:Content ContentPlaceHolderID="MainRegion" runat="server">
    <EPiServer:Property ID="MyGoogleMap" runat="server"
        PropertyName="GoogleMapData" />
</asp:Content>
That’s it! The CodeFile is practically empty, except for the autogenerated part. The property instance knows by itself that it should pull the data from the page property defined in "PropertyName"!
The control supports all 3 modes:
– View Mode – obviously.
– Edit mode – can’t do without it.
I initially planned to put the DOPE-like editing there, but for some reason EPiServer scripts kept interfering with the JavaScript defined for the control. Didn’t give it too much thought though; what I really wanted to work well is…
– DOPE mode – this is probably the coolest thing in the whole deal.
The only problem I still have with the last mode is that most of the code for the DOPE mode is a modified version of what comes originally from the MediaWiki Google Map extension. Since JavaScript is not my core competence, I’ve only modified it to the extent that was needed for the code to work, and therefore before saving you need to copy the dynamically generated code that’s just below the editing controls into the edit box. Lame, I know. But I don’t really fancy learning JavaScript further right now, and it was not the point of this exercise. Perhaps if the control is released someone will be kind enough to fix and extend it so that it’s more streamlined.
The article is based on the knowledge I’ve gathered and work I’ve performed for Cognifide. Cognifide is an official EPiServer partner and the real contributor of the control.
pyoai 2.4.4
The oaipmh module is a Python implementation of an "Open Archives Initiative Protocol for Metadata Harvesting" (version 2) client and server. The protocol is described here:
OAIPMH
Changelog
2.4.4 (2010-09-30)
- Changed contact info, Migrated code from Subversion to Mercurial
2.4.3 (2010-08-19)
Changes
- Convert lxml.etree._ElementUnicodeResult and ElementStringResult to normal string and unicode objects, to prevent errors when these objects get pickled. (lp #617439)
2.4.2 (2010-05-03)
Changes
- OAI_DC and DC namespace declarations should not be declared on the document root, but on the child of the metadata element. According to the OAI spec
2.4.1 (2009-11-16)
Changes
- When specifying a date (not a datetime) for the until parameter, default to 23:59:59 instead of 00:00:00
2.4 (2009-05-04)
Changes
- Included support for description elements in OAI Identify headers, added 'toolkit' description by default.
2.3.1 (2009-04-24)
Changes
- Raise correct error when from and until parameters have different granularities
2.3 (2009-04-23)
Changes
- Fixed bug and added tests for handling invalid dateTime formats, the server will now respond with a BadArgument (XML) error instead of a python traceback.
- Use buildout to create testrunner and environment as opposed to test.py script.
Install buildout by:
$ python bootstrap.py
$ bin/buildout
Run the tests by doing:
$ bin/test
To get a python interpreter with the oaipmh library importable:
$ bin/devpython
2.2.1 (2008-04-04)
Changes
- Added xml declaration to server output
- Prettyprint xml output
- compatibility fix: should be compatible with lxml 2.0 now
- server resumption tokens now work with POST requests.
- Fix for client code that handles 503 response from server.
2.2 (2006-11-20)
Changes
- Support for BatchingServer. A BatchingServer implements the IBatchingOAI interface. This is very similar to IOAI, but methods get a 'cursor' and 'batch_size' argument. This can be used to efficiently implement batching OAI servers on top of relational databases.
- Make it possible to explicitly pass None as the from or until parameters for a OAIPMH client.
- an extra nsmap argument to Server and BatchingServer allows the programmer to specify either namespace prefix to namespace URI mappings that should be used in the server output.
- fixed a bug where the output wasn't encoded properly as UTF-8.
2.1.5 (2006-09-18)
Changes
- compatibility fix: it should work with lxml 1.1 now.
2.1.4 (2006-06-16)
Changes
- Distribute as an egg.
2.1.3
Changes
- Add infrastructure to deal with non-XML compliant OAI-PMH feeds; an XMLSyntaxError is raised in that case.
- added tolerant_datestamp_to_datetime which is a bit more tolerant than the normal datestamp_to_datetime when encountering bad datestamps.
- Split off datestamp handling into separate datestamp module.
2.0
Changes
- Add support for day-only granularity (YYYY-MM-DD) in client. calling 'updateGranularity' with the client will check with the server (using identify()) to see what granularity the server supports. If the server only supports day level granularity, the client will make sure only YYYY-MM-DD timestamps are sent.
2.0b1
Changes
- Added framework for implementing OAI-PMH compliant servers.
- Changed package structure: now a oaipmh namespace package. Client functionality now in oaipmh.client.
- Refactoring of oaipmh.py module to reuse code for both client and server.
- Extended testing infrastructure.
- Switched over from using libxml2 Python wrappers to the lxml binding.
- Use generators instead of hacked up __getitem__. This means that the return from listRecords, listIdentifiers and listSets are now not normal lists but iterators. They can easily be turned into a normal list by using list() on them, however.
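For example, a client consuming these generators might look like the following sketch (the endpoint URL is a placeholder):

from oaipmh.client import Client
from oaipmh.metadata import MetadataRegistry, oai_dc_reader

registry = MetadataRegistry()
registry.registerReader('oai_dc', oai_dc_reader)
client = Client('http://example.org/oai', registry)

# listRecords now returns a generator; iterate it lazily...
for header, metadata, about in client.listRecords(metadataPrefix='oai_dc'):
    print(header.identifier())

# ...or turn it into a normal list with list()
identifiers = list(client.listIdentifiers(metadataPrefix='oai_dc'))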
1.0.1
Bugs fixed
- Typo in oaipmh.py
1.0
Bugs fixed
- Added an encoding parameter to the serialize call, which fixes a unicode bug.
0.7.4
Bugs fixed
A harvest can return records with <header status="deleted"> that contain no metadata and are merely an indication that that metadata-set for that resource is no longer on the OAI service. These records should be used to remove metadata from the catalog if it is there, but should never be stored or catalogued themselves. They aren't now. (Fixed in zope/OAICore/core.py)
0.7
Initial public release.
- Author: Infrae
- Keywords: OAI-PMH xml archive
- License: BSD
Java does not directly support constants. However, a static final variable is effectively a constant.
The static modifier causes the variable to be available without loading an instance of the class where it is defined. The final modifier causes the variable to be unchangeable.
Java constants are normally declared in ALL CAPS. Words in Java constants are normally separated by underscores.
An example of constant declaration in Java is written below:
public class MaxUnits {
public static final int MAX_UNITS = 25;
}
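Because the variable is static, other classes read the constant through the class name, without creating a MaxUnits instance; a small usage sketch:

public class Warehouse {
    public static void main(String[] args) {
        // Read the constant via the class name; no instance is needed
        System.out.println("Capacity: " + MaxUnits.MAX_UNITS);
    }
}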
I've been thinking for a while about a series of blog posts I'd like to write explaining various Entity Framework concepts, particularly those related directly to writing code using the framework--by that I mean that I will concentrate more on using the API to write object-oriented programs which do data access and less on some other kinds of topics like writing and debugging the XML mapping, using the designer, etc. Some of this will be pretty basic, but many of them are the kinds of things that come up fairly often in our own internal discussions as well as on the forums and such. Hopefully this will be valuable, please let me know what you think, if there are topics you would especially like me to cover, or if there's anything I can do to make it more useful.
I'm going to use a common model/database for my samples which I call DPMud. It's something that a few friends and I created and maintain as a fun side-project which is an old-style multi-user text adventure (MUD) built using the entity framework. The architecture is not something that will scale well (every real-time event in the system persists to the database, and users learn about events by querying the database), but it works reasonably well for a few users at a time, it's an environment which is very convenient for us to develop in, and it has turned out to be a decent example for exercising all sorts of entity framework functionality. It even does a reasonable job of stressing the system.
In order to help you get a feel for the model, here's an abbreviated version of the diagram:
Essentially I have a many-to-many self relationship for rooms, but because I need a "payload" on that relationship (some properties on the relationship itself rather than the rooms), I chose to create another entity type called Exit and then have two 1-many relationships between Rooms and Exits--one of those relationships represents all of the exits out of a room, and the other relationship represents all of the entrances to the room. The other parts of the model are actors which are related to rooms, and items which can be related either to a room or an actor. The model doesn't have a good way to enforce that an item must be related either to a room or an actor but not both or neither, but I've added check constraints to the DB which handle that enforcement. In the real game there are also events which relate to each of the other entity types as well as a large inheritance hierarchy of actor types and event types among other things, but those make the model diagram very hard to read, so I've left them out.
This diagram corresponds to my CSDL file which is the XML file describing my conceptual model. If you'd like to look that over, you can find it here. Well, that CSDL file actually has a few things not present in the diagram, but it is simplified considerably from the full model used in our game. For completeness, you can also have a look at the SSDL and the MSL. I know I'm a bit "old school" when it comes to these things since I've been working on the EF since well before we had a designer (funny to say that about a not-yet-released project), but I've authored each of these files by hand rather than using the designer. I'll also point out, in case some of you are unaware, that the designer uses a file called EDMX to store all of the metadata about an EF model--this one file includes the CSDL, MSL and SSDL just in separate sections within it. When it comes to runtime, the system requires the three separate files, and the designer projects automatically create them in your application out directory. So, I will talk about the three files separately and even play some tricks that don't work with the designer today rather than describing the default designer experience.
The one other thing I'll say about DPMud is that it was built from the beginning as a rich Windows client which talks directly to the database. So most of our initial apps will work that way. This is nice, because these are some of the easiest apps to write, but as we go through the series we probably will also spend some time exploring other app architectures like web services and web apps.
OK. So enough about my funny little sample. Let's cover some EF concepts, shall we? There are three things I'd like to talk about in this post which are all related to getting any basic app up and running (once you are done creating the metadata for the model, generating the basic entity classes, etc.): connection strings, context lifetimes, and metadata resources.
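The connection-string-builder code that the class below refers to ("insert code from above") did not survive in this copy. A minimal sketch of what such code typically looks like with the EntityClient API (the server name, database name, and metadata resource names here are placeholders, not necessarily DPMud's actual values):

// Build the store (SQL Server) part of the connection string
SqlConnectionStringBuilder sqlBuilder = new SqlConnectionStringBuilder();
sqlBuilder.DataSource = @".\SQLEXPRESS";   // placeholder server
sqlBuilder.InitialCatalog = "DPMud";       // placeholder database
sqlBuilder.IntegratedSecurity = true;

// Wrap it in an EntityClient connection string that also points at the
// model metadata embedded as resources in this assembly
EntityConnectionStringBuilder entityBuilder = new EntityConnectionStringBuilder();
entityBuilder.Provider = "System.Data.SqlClient";
entityBuilder.ProviderConnectionString = sqlBuilder.ToString();
entityBuilder.Metadata = "res://*/DPMud.csdl|res://*/DPMud.ssdl|res://*/DPMud.msl";

With something like that producing the connection string, the strongly typed context can be wrapped up as follows: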
using System;
using System.Data.EntityClient;
using System.Data.SqlClient;
using DPMud;

namespace DPMud
{
    public partial class DPMudDB
    {
        [ThreadStatic]
        static DPMudDB _db;

        static string _connectionString;

        public static string ConnectionString
        {
            get
            {
                if (_connectionString == null)
                {
                    // insert code from above which uses connection string builders here //
                    _connectionString = entityBuilder.ToString();
                }
                return _connectionString;
            }
        }

        public static DPMudDB db
        {
            get
            {
                if ((_db == null) && (ConnectionString != null))
                {
                    _db = new DPMudDB(ConnectionString);
                }
                return _db;
            }
        }
    }
}
So what do we get from all this? Well, once we have this foundation laid, writing a program which accesses the database using our strongly typed context becomes pretty darn simple (even if we have multiple threads), and that program can itself just be a single EXE with the entire model and all of its metadata in that one assembly (this is what we do for DPMud—nothing quite like drag-and-drop deployment).
Here’s the code for a simple program which prints out a list of all the rooms and items in the DPMud database but does so from two threads running concurrently. Note: you can set break points in each of the loops and step through the program seeing things interleave nicely, but if you actually run the program there’s a good chance all of the rooms will print together and the items the same because the total time required is relatively small so there may not be that much time-slicing between the threads.
using System;
using System.Threading;
using DPMud;

namespace sample1
{
    class Program
    {
        static void Main(string[] args)
        {
            Thread thread = new Thread(new ThreadStart(Program.PrintItems));
            thread.SetApartmentState(ApartmentState.STA);
            thread.Start();

            foreach (Room room in DPMudDB.db.Rooms)
            {
                Console.WriteLine("Room {0}", room.Name);
            }
        }

        static void PrintItems()
        {
            foreach (Item item in DPMudDB.db.Items)
            {
                Console.WriteLine("Item {0}", item.Name);
            }
        }
    }
}
Fun huh? OK. Yes I admit it, I’m a total geek, but I think it’s pretty fun.
Until next time,
Danny
Can I put object of vbscript class into session?
Discussion in 'ASP General' started by Quick
XmlSerializer compatibility between NetCF and the desktop
Andrew
The XmlSerializer found in the .NET Compact Framework has a very different implementation than the one found in the full .NET Framework. The reasons for the differences include size and performance constraints that make the desktop’s XmlSerializer inappropriate for devices.
As a result of the different implementation, there are inevitably going to be differences between how each serializer turns objects into XML and back again. We try to minimize these differences so that your C# code will run on NetCF and desktop in a very portable way. When we find differences, we have to weigh the consequences of possibly breaking backward compatibility with applications written for earlier versions of NetCF against breaking compatibility with desktop for current and future applications.
One area where cross-platform compatibility becomes particularly important is in web services. When you generate web service proxies for your NetCF applications using Visual Studio’s Add Web Reference function (or the command line wsdl.exe, xsd.exe, or NetCFSvcUtil.exe tools) you expect those proxies to serialize to XML that the service is expecting. These tools are generally developed and tested against the desktop’s version of the XmlSerializer more thoroughly than against the NetCF XmlSerializer. So any differences in how NetCF’s XmlSerializer works could cause incompatible xml to be sent to a service.
As I have been code reviewing .NET Compact Framework’s version of the serializer and fixing bugs in it, I’ve come across a number of differences in how the two serializers behave that can affect web services. Some of these fixes made it into NetCF 2.0 SP2. Many more made it into NetCF 3.5. But quite a few issues were discovered too late to fix in the NetCF 3.5 release.
In an upcoming NetCF release there are likely to be some breaking changes to the way the XmlSerializer works in order to improve web service and desktop compatibility. This is likely to only affect applications that use multiple namespaces in a single type that is being serialized, as that is where most of the bugs seem to be.
You can prepare for these changes by testing your application on the full .NET Framework today. If it runs correctly there, then it will probably run correctly on a future version of the .NET Compact Framework. If you notice that the XML the desktop produces is different than what NetCF produces, you will probably need to adjust your code so that it produces the desired XML on the full Framework, and consider releasing a service pack for your own application to your customers who are using new versions of NetCF.
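One way to do that is to serialize a representative type on both platforms and diff the XML. A minimal sketch (the type and namespace URIs here are made up for illustration):

using System;
using System.IO;
using System.Xml.Serialization;

[XmlRoot("Order", Namespace = "urn:example:orders")]
public class Order
{
    [XmlElement(Namespace = "urn:example:items")]
    public string Item;

    [XmlAttribute]
    public int Quantity;
}

class Program
{
    static void Main()
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Order));
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, new Order { Item = "widget", Quantity = 3 });
            // Compare this output between NetCF and the full framework
            Console.WriteLine(writer.ToString());
        }
    }
}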
A full list of breaking changes is published with each release of the .NET Compact Framework. You should review those if your NetCF app uses the XmlSerializer. | https://devblogs.microsoft.com/premier-developer/xmlserializer-compatibility-between-netcf-and-the-desktop/ | CC-MAIN-2020-29 | refinedweb | 475 | 54.12 |
Eve Online Client Source Code Leaked
Well... (Score:4, Funny)
Re:Warning! CCP Seeding, Banning Torrenters (Score:5, Funny)
Unless you live in your mom's basement.
Re:Warning! CCP Seeding, Banning Torrenters (Score:5, Funny)
Some additional info on this (Score:4, Funny)
First things first - it's not the full source. In fact, it's not even 2mb big. It's not even a fraction of the source.
Secondly, from the IM conversation they had with support:
[20:18] I don't know HOW you work
[20:19] i see the RESULT of this work
[20:19] and UNDERPANTS of it
They see the UNDERPANTS of it. Hilarious.
Excerpt from the code... AMAZING (Score:4, Funny)
//Both people are represented by an abstract class
public abstract class Person
{
    public bool StrangersToLove { get; set; }
    public bool KnowTheRules { get; set; }
}

//Possible thoughts
public enum Thought
{
    FullCommitment
}

//Class
public sealed class Me : Person
{
    public Thought Thinking()
    {
        return Thought.FullCommitment;
    }
}

//The target of the song, notice that GetThought can only be called by passing in an instance of Rick
//which satisfies that she can't get this from any other guy
public class You : Person
{
    private Thought whatHeIsThinking;

    public void GetThought(Me guy)
    {
        whatHeIsThinking = guy.Thinking();
    }
}

class Program
{
    static void Main(string[] args)
    {
        var Rick = new Me() { KnowTheRules = true, StrangersToLove = false };
        var Girl = new You() { KnowTheRules = true, StrangersToLove = false };
        Girl.GetThought(Rick);
    }
}
Re:Don't download the source via the torrent (Score:4, Funny)
Good Lord... The Rock's source code? (Score:2, Funny)
you="Roodypoo" . "Candy-Ass";
} | https://games.slashdot.org/story/08/04/14/2046246/eve-online-client-source-code-leaked/funny-comments | CC-MAIN-2016-36 | refinedweb | 257 | 71.85 |
Set Up Tutorial
In REEM, data is passed between client programs and a centralized Redis server. This tutorial will demonstrate how to set up the server and connect to it with a REEM client. Both the server and client will run on the local machine.
Requirements:
- Python 3
- Linux/macOS (ReJSON requirement, though you can run ReJSON with Docker on Windows)
Server
This section goes through how to set up a server. REEM runs on Redis and requires the ReJSON module. We will install both and check that they are working.
Redis
The following script will download and build Redis with supporting packages from source inside a folder called database-server.
REEM has been tested with Redis version 5.0.4. You may want to pull the latest version of Redis in the future. Change the versioning in the script appropriately.
DO NOT install Redis through apt-get install redis-server. This will install Redis 3, which does not support modules. You will not be able to run REEM.
Once you download and build Redis from source, you will need to access two executables: redis-server and redis-cli. The former is the executable that launches a Redis server. The latter is a useful command line interface (cli) that allows for easy testing. The executables are located at:

database-server/redis-5.0.4/src/redis-server
database-server/redis-5.0.4/src/redis-cli
The script below gives them aliases to make things easier. Note that these aliases will disappear when the terminal closes.
mkdir database-server
cd database-server
wget http://download.redis.io/releases/redis-5.0.4.tar.gz
tar xzf redis-5.0.4.tar.gz
cd redis-5.0.4/deps
make hiredis lua jemalloc linenoise
cd ..
make
alias redis-server=$PWD/src/redis-server
alias redis-cli=$PWD/src/redis-cli
cd ..
Check that the version of Redis you have is 5.0.x by running
redis-server --version
Now, check that the Redis server will boot. Run redis-server in your terminal. The Redis server will take over your terminal.
Open up another terminal and run redis-cli. The CLI will take over that terminal and your prompt should look like
127.0.0.1:6379>
Execute a basic set and get with Redis, ensuring the output looks similar to the output below:
127.0.0.1:6379> SET key 1
OK
127.0.0.1:6379> GET key
"1"
127.0.0.1:6379>
Congratulations! You have successfully installed and run Redis. Shut down the Redis server (issue the shutdown command in the cli) and exit the cli.
ReJSON
ReJSON is a third party module developed for Redis developed by Redis Labs. It introduces a JSON datatype to Redis that is not available in standard Redis. REEM relies on it for serializable data.
Starting from inside the database-server folder, continuing from the Redis installation script, the following will build ReJSON from source (the clone URL did not survive extraction and is shown as a placeholder):

git clone <ReJSON repository URL>
cd redisjson
make
cd ..
The above script produces a compiled library file at database-server/redisjson/src/rejson.so. Redis needs to be told to use that library. You can tell Redis that by starting a server with a configuration file. Download this example configuration file and place it inside database-server.
Some details about this configuration file:

- Line 46 (in the modules section) says loadmodule redisjson/src/rejson.so, specifying the compiled library for ReJSON.
- Line 71 (in the network section) says bind 127.0.0.1 to bind only to the local host network interface. If you later want to make this Redis server accessible on a network, you must change line 71 to bind to that interface too. For example, if the computer hosting the Redis server has an IP address 10.0.0.1 on the network, this line should become bind 127.0.0.1 10.0.0.1 so that it binds to the local interface and the network interface.
Let's test the ReJSON installation. Run redis-server redis.conf. This will start the Redis server with ReJSON. Open another terminal and run redis-cli. Be sure you can execute the following in that redis-cli prompt:

127.0.0.1:6379> JSON.SET foo . 0
OK
Client
Before you begin this part of the tutorial, make sure a Redis server is available for a client to connect to. If a server is not already running, run redis-server redis.conf in a terminal and leave that terminal be.
Client machines connect to the server purely through Python with the REEM client. Install REEM and its dependencies with the below command:
Copy the below into a file and run it:
from reem.connection import RedisInterface
from reem.datatypes import KeyValueStore
import numpy as np
import time

interface = RedisInterface(host="localhost")
interface.initialize()
server = KeyValueStore(interface)

# Set a key and read it and its subkeys
server["foo"] = {"number": 100.0, "string": "REEM"}
print("Reading Root : {}".format(server["foo"].read()))
print("Reading Subkey: {}".format(server["foo"]["number"].read()))

# Set a new key that didn't exist before to a numpy array
server["foo"]["numpy"] = np.random.rand(3,4)
time.sleep(0.0001)  # Needed on ubuntu machine for numpy set to register?
print("Reading Root : {}".format(server["foo"].read()))
print("Reading Subkey: {}".format(server["foo"]["numpy"].read()))
The output should appear something like the below
Reading Root : {'number': 100, 'string': 'REEM'}
Reading Subkey: 100
Reading Root : {'number': 100, 'string': 'REEM', 'numpy': array([[...]])}
Reading Subkey: [[...]]
The code connects to a Redis server and sets a dictionary with basic number and string data. It then reads and prints that data. Next, it sends a numpy array to Redis and reads that back as well. It uses a KeyValueStore object to do all this. Learn more about it in the next section.
Congratulations! You have got REEM working on your machine! Continue to the next section to see what it can do. | https://reem.readthedocs.io/en/latest/gettingstarted.html | CC-MAIN-2021-49 | refinedweb | 980 | 59.8 |
Note
Previous versions of Celery required a separate library to work with Django, but since 3.1 this is no longer the case. Django is supported out of the box now so this document only contains a basic way to integrate Celery and Django. You will use the same API as non-Django users so it’s recommended that you read the First Steps with Celery tutorial first and come back to this tutorial. When you have a working example you can continue to the Next Steps guide.
To use Celery with your Django project you must first define an instance of the Celery library (called an “app”)
If you have a modern Django project layout like:
- proj/
  - proj/__init__.py
  - proj/settings.py
  - proj/urls.py
- manage.py
then the recommended way is to create a new proj/proj/celery.py module that defines the Celery instance:
from __future__ import absolute_import

import os

from celery import Celery

from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

You then need to import this app in your proj/proj/__init__.py module, to make sure the app is loaded when Django starts:

from __future__ import absolute_import

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

Let's break down what happens in the first module. First we import absolute imports from the future, so that our celery.py module will not clash with the library:
from __future__ import absolute_import
Then we set the default DJANGO_SETTINGS_MODULE for the celery command-line program:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
Specifying the settings here means the celery command line program will know where your Django project is. This statement must always appear before the app instance is created, which is what we do next:
app = Celery('proj')
This is our instance of the library; you can have many instances, but there's probably no reason for that when using Django. We also add the Django settings module as a configuration source for Celery, which means you don't have to use multiple configuration files and can instead configure Celery directly from the Django settings.
You can pass the object directly here, but using a string is better since then the worker doesn’t have to serialize the object when using Windows or execv:
app.config_from_object('django.conf:settings')
Next, a common practice for reusable apps is to define all tasks in a separate tasks.py module, and Celery does have a way to autodiscover these modules:
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
With the line above Celery will automatically discover tasks in reusable apps if you follow the tasks.py convention:
- app1/
  - app1/tasks.py
  - app1/models.py
- app2/
  - app2/tasks.py
  - app2/models.py
This way you do not have to manually add the individual modules to the CELERY_IMPORTS setting. The lambda is used so that the autodiscovery can happen only when needed, and so that importing your module will not evaluate the Django settings object.
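A tasks.py module picked up by autodiscovery can then define tasks with the @shared_task decorator, which lets reusable apps declare tasks without depending on a concrete app instance (a minimal sketch):

# app1/tasks.py
from __future__ import absolute_import

from celery import shared_task

@shared_task
def add(x, y):
    return x + y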
Finally, the debug_task example is a task that dumps its own request information. This is using the new bind=True task option introduced in Celery 3.1 to easily refer to the current task instance.
In a production environment you will want to run the worker in the background as a daemon - see Running the worker as a daemon - but for testing and development it is useful to be able to start a worker instance by using the celery worker manage command, much as you would use Django’s runserver:
$ celery -A proj worker -l info
For a complete listing of the command-line options available, use the help command:
$ celery help
If you want to learn more you should continue to the Next Steps tutorial, and after that you can study the User Guide. | http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html | CC-MAIN-2015-18 | refinedweb | 556 | 51.58 |
Instructions for Adding PDF Bookmarks Using Word
These instructions show how to set up a Word document so that PDF bookmarks are automatically created when the document is converted to a PDF. PDF bookmarks can be automatically created in Word by using Styles. Word has multiple preformatted styles that can be applied to a document. A style is a set of formatting characteristics, such as font name, size, color, paragraph alignment and spacing, that can be quickly and easily applied to a section of a document, or to the whole document. The preformatted styles can also be modified to a user's preference. To create PDF bookmarks, you will need to use the Heading formats in the Styles menu, which is explained below. By applying the Heading styles to the headings and subheadings in your brief, you will be able to automatically create PDF bookmarks when the Word document is converted to PDF. Using the Heading styles will also allow you to easily create a table of contents, which will be covered in a separate document.

If you have not used Word Styles before, begin this process with the final version of your brief with your preferred formatting. In the future, you will be able to use the Styles you create to format the headings and subheadings in your brief as you are drafting the document. These instructions were created using Microsoft Word 2013.

Table of Contents
Instructions for Adding PDF Bookmarks Using Word
Marking the Headings and Subheadings
Marking Text that is Not a Heading or Subheading
Publish to PDF

Marking the Headings and Subheadings

Highlight the first heading in your brief. This example will use the Table of Authorities. Use your cursor to highlight the Table of Authorities text. Go to the Home tab in the menu at the top of the page. You will be working with the Styles section on the right-hand side of the page.

With the Table of Authorities heading still highlighted, hold the cursor over Heading 1. If you have not used Styles before, Heading 1 is probably not the format (font, size, color, spacing, etc.) that you want to use for the main headings in your brief. You can update Heading 1 to match the formatting that you created for the main headings in your brief by right-clicking your mouse while the cursor is over Heading 1. A menu should appear, and the first item on the list should state Update Heading 1 to Match Selection. Select this option. The Heading 1 option in the Styles menu will be updated to match your existing formatting.

Now you can mark the rest of the main headings in the brief as Heading 1. Go to each main heading in the brief (Table of Contents, Table of Authorities, Statement of Appealability, Statement of Case, Argument, etc.), select the text, and click on Heading 1 in the Styles menu. The formatting for these main headings will now be the same.

You can use the same process for the subheadings in your brief. Go to the first of the second-level headings in your brief and select the text with your cursor. This time, place your cursor over Heading 2 in the Styles menu (you may need to use the arrows on the right-hand side to scroll down). Update Heading 2 to match the formatting that you created for the second-level subheadings in your brief by right-clicking your mouse while the cursor is over Heading 2 and selecting Update Heading 2 to Match Selection. Go through your brief and mark all your second-level subheadings as Heading 2. For additional levels of subheadings, continue this process using Heading 3, Heading 4, Heading 5, etc.

Saving your formatting selections for future documents: If you want the changes that you have made to the heading styles to become the default for future Word documents, follow these steps. In the Styles menu, highlight the Heading with the format you wish to save for future use. Right-click and select Modify from the menu. In the window that appears, select New documents based on this template and click OK. The formatting you created for the heading should now be saved for use in new documents. You will need to repeat this step for each heading level (Heading 1, Heading 2, Heading 3, etc.) that you created.

Marking Text that is Not a Heading or Subheading

As you work your way through your brief, you may find that there is text that should be included as a PDF bookmark but that should not be formatted the same as the headings or subheadings at its level in the table of contents (for example, the word count certificate and the proof of service). You can still mark this text with the appropriate heading level. After you have marked the text, you can change the format by using the formatting menu under the Home tab.

Note: If you are marking an entry that is on two lines (such as the word count certificate), make sure that you have used a soft return (Shift + Enter) to move text to the next line, not a hard return (Enter only). If you use a hard return, the heading will show up as two separate bookmarks.

Publish to PDF

When you have finished marking all the entries that you want to be included as bookmarks in the PDF, and when your brief is in its final form, convert the brief into a PDF using the following instructions.

Under the File tab in the top menu, select Save As. In the Save As window, under Save as type: select PDF from the dropdown menu. A new Options button should appear; click on it. In the Options window that appears, make sure that the Create bookmarks using: box is checked and that Headings is selected. Click OK, then click Save. You should now have a bookmarked PDF.
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
Hello,
I have a plugin that uses a GeUserArea. When I open Cinema 4D with the language set to Arabic, the GeUserArea is reversed, which has broken all of my code determining the GeUserArea positions and mouse interactions. It seems like the x/y origin is now at the top-right corner rather than the top-left. Are there any ways to compensate for this to mimic the UserArea being in left-to-right orientation? Thank you.
Hi @blastframe, unfortunately, in Python GeUserArea::IsR2L is not exposed. But you can replicate it yourself.
import c4d
import maxon
import os
def GetDefaultLanguageExtension():
index = 0
while True:
lang = c4d.GeGetLanguage(index)
if lang is None:
break
if lang["default_language"]:
return lang["extensions"]
index += 1
raise ValueError("Unable to find the default language")
def IsR2L():
lang = GetDefaultLanguageExtension()
resource_path = maxon.Application.GetUrl(maxon.APPLICATION_URLTYPE.RESOURCE_DIR).GetSystemPath()
str_path = os.path.join(resource_path, "modules", "c4dplugin", f"strings_{lang}", "c4d_r2l.h")
return os.path.exists(str_path)
def main():
print(IsR2L())
# Execute main()
if __name__=='__main__':
main()
Then regarding your assumption about the origin. This is correct when the language is right to left the whole drawing is reversed.
So it's up to you to reverse your drawing (or inverse x1 and x2 when the drawing IsR2L return True).
Cheers,
Maxime.
Hello @blastframe,
without any further questions or replies, we will consider this topic as solved by Wednesday and flag it accordingly.
Thank you for your understanding,
Ferdinand
Sorry @m_adam, I never saw your reply. Thank you! We can close the thread. Thanks @ferdinand !
thank you for closing your topics.
Cheers,
Ferdinand | https://plugincafe.maxon.net/topic/13383/geuserarea-reversed-in-arabic-language/1 | CC-MAIN-2021-43 | refinedweb | 308 | 58.18 |
How to become a certified AWS Solutions Architect
This study guide will help you pass the newer AWS Certified Solutions Architect - Associate exam. Ideally, you should reference this guide while working through the following material:
Notes: If at any point you find yourself feeling uncertain of your progress and in need of more time, you can postpone your AWS exam date. Be sure to also keep up with the ongoing discussions in r/AWSCertifications as you will find relevant exam tips, studying material, and advice from other exam takers. Before experimenting with AWS, it's very important to be sure that you know what is free and what isn't. Relevant Free Tier FAQs can be found here. Finally, Udemy often has their courses go on sale from time to time. It might be worth waiting to purchase either the Tutorial Dojo practice exam or Stephane Maarek's course depending on how urgently you need the content.
Identity Access Management (IAM)
Simple Storage Service (S3)
Elastic Compute Cloud (EC2)
Elastic Block Store (EBS)
Elastic Network Interfaces (ENI)
Web Application Firewall (WAF)
Elastic File System (EFS)
Relational Database Service (RDS)
Elastic Load Balancers (ELB)
Virtual Private Cloud (VPC)
Simple Queuing Service (SQS)
Simple Workflow Service (SWF)
Simple Notification Service (SNS)
The official AWS Solutions Architect - Associate (SAA-C02) exam guide
You can cover a lot of ground by skimming over what you already know or what you can infer to be true. In particular, read the first sentence of each paragraph and if you have no uncertainty about what is being said in that sentence, move on to the first sentence of the next paragraph. Take notes whenever necessary.
AWS Well-Architected Framework
Amazon EC2 Auto Scaling FAQs
Elastic network interfaces
Elastic Load Balancing FAQs
Amazon FSx for Windows File Server FAQs
Amazon FSx for Lustre FAQs
IAM offers a centralized hub of control within AWS and integrates with all other AWS Services. IAM comes with the ability to share access at various levels of permission and it supports the ability to use identity federation (the process of delegating authentication to a trusted external party like Facebook or Google) for temporary or limited access. IAM comes with MFA support and allows you to set up custom password rotation policy across your entire organization. It is also PCI DSS compliant i.e. payment card industry data security standard. (passes government mandated credit card security regulations).
Users - any individual end user such as an employee, system architect, CTO, etc.
Groups - any collection of similar people with shared permissions such as system administrators, HR employees, finance teams, etc. Each user within their specified group will inherit the permissions set for the group.
Roles - any software service that needs to be granted permissions to do its job, e.g- AWS Lambda needing write permissions to S3 or a fleet of EC2 instances needing read permissions from a RDS MySQL database.
Policies - the documented rule sets that are applied to grant or limit access. In order for users, groups, or roles to properly set permissions, they use policies. Policies are written in JSON and you can either use custom policies for your specific needs or use the default policies set by AWS.
IAM Policies are separated from the other entities above because they are not an IAM Identity. Instead, they are attached to IAM Identities so that the IAM Identity in question can perform its necessary function.
IAM is a global AWS services that is not limited by regions. Any user, group, role or policy is accessible globally.
The root account with complete admin access is the account used to sign up for AWS. Therefore, the email address used to create the AWS account for use should probably be the official company email address.
New users have no permissions when their accounts are first created. This is a secure way of delegating access as permissions must be intentionally granted.
When joining the AWS ecosystem for the first time, new users are supplied an access key ID and a secret access key ID when you grant them programmatic access. These are created just once specifically for the new user to join, so if they are lost simply generate a new access key ID and a new secret access key ID. Access keys are only used for the AWS CLI and SDK so you cannot use them to access the console.
When creating your AWS account, you may have an existing identity provider internal to your company that offers Single Sign On (SSO). If this is the case, it is useful, efficient, and entirely possible to reuse your existing identities on AWS. To do this, you let an IAM role be assumed by one of the Active Directories. This is because the IAM ID Federation feature allows an external service to have the ability to assume an IAM role.
IAM Roles can be assigned to a service, such as an EC2 instance, prior to its first use/creation or after its been in used/created. You can change permissions as many times as you need. This can all be done by using both the AWS console and the AWS command line tools.
You cannot nest IAM Groups. Individual IAM users can belong to multiple groups, but creating subgroups so that one IAM Group is embedded inside of another IAM Group is not possible.
With IAM Policies, you can easily add tags that help define which resources are accessible by whom. These tags are then used to control access via a particular IAM policy. For example, production and development EC2 instances might be tagged as such. This would ensure that people who should only be able to access development instances cannot access production instances.
Explicit Deny: Denies access to a particular resource and this ruling cannot be overruled.
Explicit Allow: Allows access to a particular resource so long as there is not an associated Explicit Deny.
Default Deny (or Implicit Deny): IAM identities start off with no resource access. Access instead must be granted.
S3 provides developers and IT teams with secure, durable, and highly-scalable object storage. Object storage, as opposed to block storage, is a general term that refers to data composed of three things:
1.) the data that you want to store
2.) an expandable amount of metadata
3.) a unique identifier so that the data can be retrieved
This makes it a perfect candidate to host files or directories and a poor candidate to host databases or operating systems. The following table highlights key differences between object and block storage:
Data uploaded into S3 is spread across multiple files and facilities. The files uploaded into S3 have an upper-bound of 5TB per file and the number of files that can be uploaded is virtually limitless. S3 buckets, which contain all files, are named in a universal namespace so uniqueness is required. All successful uploads will return an HTTP 200 response.
1.) tiered storage and pricing variability
2.) lifecycle management to expire older content
3.) versioning for version control
4.) encryption for privacy
5.) MFA deletes to prevent accidental or malicious removal of content
6.) access control lists & bucket policies to secure the data
1.) storage size
2.) number of requests
3.) storage management pricing (known as tiers)
4.) data transfer pricing (objects leaving/entering AWS via the internet)
5.) transfer acceleration (an optional speed increase for moving objects via Cloudfront)
6.) cross region replication (more HA than offered by default
1.) For programmatic access only, use IAM & Bucket Policies to share entire buckets
2.) For programmatic access only, use ACLs & Bucket Policies to share objects
3.) For access via the console & the terminal, use cross-account IAM roles
S3 Standard - 99.99% availability and 11 9s durability. Data in this class is stored redundantly across multiple devices in multiple facilities and is designed to withstand the failure of 2 concurrent data centers.
S3 Infrequently Accessed (IA) - For data that is needed less often, but when it is needed the data should be available quickly. The storage fee is cheaper, but you are charged for retrieval.
S3 One Zone Infrequently Accessed (an improvement of the legacy RRS / Reduced Redundancy Storage) - For when you want the lower costs of IA, but do not require high availability. This is even cheaper because of the lack of HA.
S3 Intelligent Tiering - Uses built-in ML/AI to determine the most cost-effective storage class and then automatically moves your data to the appropriate tier. It does this without operational overhead or performance impact.
S3 Glacier - low-cost storage class for data archiving. This class is for pure storage purposes where retrieval isn’t needed often at all. Retrieval times range from minutes to hours. There are differing retrieval methods depending on how acceptable the default retrieval times are for you:
Expedited: 1 - 5 minutes, but this option is the most expensive. Standard: 3 - 5 hours to restore. Bulk: 5 - 12 hours. This option has the lowest cost and is good for a large set of data.
The Expedited duration listed above could possibly be longer during rare situations of unusually high demand across all of AWS. If it is absolutely critical to have quick access to your Glacier data under all circumstances, you must purchase Provisioned Capacity. Provisioned Capacity guarentees that Expedited retrievals always work within the time constraints of 1 to 5 minutes.
S3 Deep Glacier - The lowest cost S3 storage where retrieval can take 12 hours.
S3 data can be encrypted both in transit and at rest.
Encryption In Transit: When the traffic passing between one endpoint to another is indecipherable. Anyone eavesdropping between server A and server B won’t be able to make sense of the information passing by. Encryption in transit for S3 is always achieved by SSL/TLS.
Encryption At Rest: When the immobile data sitting inside S3 is encrypted. If someone breaks into a server, they still won’t be able to access encrypted info within that server. Encryption at rest can be done either on the server-side or the client-side. The server-side is when S3 encrypts your data as it is being written to disk and decrypts it when you access it. The client-side is when you personally encrypt the object on your own and then upload it into S3 afterwards.
You can encrypted on the AWS supported server-side in the following ways: - S3 Managed Keys / SSE - S3 (server side encryption S3 ) - when Amazon manages the encryption and decryption keys for you automatically. In this scenario, you concede a little control to Amazon in exchange for ease of use. - AWS Key Management Service / SSE - KMS - when Amazon and you both manage the encryption and decryption keys together. - Server Side Encryption w/ customer provided keys / SSE - C - when I give Amazon my own keys that I manage. In this scenario, you concede ease of use in exchange for more control.
The Amazon S3 notification feature enables you to receive and send notifications when certain events happen in your bucket. To enable notifications, you must first configure the events you want Amazon S3 to publish (new object added, old object deleted, etc.) and the destinations where you want Amazon S3 to send the event notifications. Amazon S3 supports the following destinations where it can publish events: - Amazon Simple Notification Service (Amazon SNS) - A web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. - Amazon Simple Queue Service (Amazon SQS) - SQS offers reliable and scalable hosted queues for storing messages as they travel between computers. - AWS Lambda - AWS Lambda is a compute service where you can upload your code and the service can run the code on your behalf using the AWS infrastructure. You package up and upload your custom code to AWS Lambda when you create a Lambda function. The S3 event triggering the Lambda function also can serve as the code's input.
When you create a pre-signed URL for your S3 object, you must do the following:
The pre-signed URLs are valid only for the specified duration and anyone who receives the pre-signed URL within that duration can then access the object.
The following diagram highlights how Pre-signed URLs work:
The AWS CDN service is called CloudFront. It serves up cached content and assets for the increased global performance of your application. The main components of CloudFront are the edge locations (cache endpoints), the origin (original source of truth to be cached such as an EC2 instance, an S3 bucket, an Elastic Load Balancer or a Route 53 config), and the distribution (the arrangement of edge locations from the origin or basically the network itself). More info on CloudFront's features
Snowball is a giant physical disk that is used for migrating high quantities of data into AWS. It is a peta-byte scale data transport solution. Using a large disk like Snowball helps to circumvent common large scale data transfer problems such as high network costs, long transfer times, and security concerns. Snowballs are extremely secure by design and once the data transfer is complete, the snowballs are wiped clean of your data.
Storage Gateway is a service that connects on-premise environments with cloud-based storage in order to seamlessly and securely integrate an on-prem application with a cloud storage backend. and Volume Gateway as a way of storing virtual hard disk drives in the cloud.
Volume Gateway's Stored Volumes let you store data locally on-prem and backs the data up to AWS as a secondary data source. Stored Volumes allow low-latency access to entire datasets, while providing high availability over a hybrid cloud solution. Further, you can mount Stored Volumes on application infrastructure as iSCSI drives so when data is written to these volumes, the data is both written onto the on-prem hardware and asynchronously backed up as snapshots in AWS EBS or S3.
Volume Gateway's Cached Volumes differ as they do not store the entire dataset locally like Stored Volumes. Instead, AWS is used as the primary data source and the local hardware is used as a caching layer. Only the most frequently used components are retained onto the on-prem infrastructure while the remaining data is served from AWS. This minimizes the need to scale on-prem infrastructure while still maintaining low-latency access to the most referenced data.
EC2 spins up resizable server instances that can scale up and down quickly. An instance is a virtual server in the cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Its configuration at launch is a live copy of the Amazon Machine Image (AMI) that you specify when you launched the instance. EC2 has an extremely reduced time frame for provisioning and booting new instances and EC2 ensures that you pay as you go, pay for what you use, pay less as you use more, and pay even less when you reserve capacity. When your EC2 instance is running, you are charged on CPU, memory, storage, and networking. When it is stopped, you are only charged for EBS storage.
The following table highlights the many instance states that a VM can be in at a given time.
| Instance state | Description | Billing | | ------------- | ------------- |--------------| |
pending| The instance is preparing to enter the
runningstate. An instance enters the pending state when it launches for the first time, or when it is started after being in the
stoppedstate. | Not billed |
running| The instance is running and ready for use. | Billed | |
stopping| The instance is preparing to be stopped or stop-hibernated. | Not billed if preparing to stop. Billed if preparing to hibernate | |
stopped| The instance is shut down and cannot be used. The instance can be started at any time. | Not billed | |
shutting-down| The instance is preparing to be terminated. | Not billed | |
terminated| The instance has been permanently deleted and cannot be started. | Not billed |
Note: Reserved Instances that are terminated are billed until the end of their term.
1.) Clustered Placement Groups - Clustered Placement Grouping is when you put all of your EC2 instances in a single availability zone. This is recommended for applications that need the lowest latency possible and require the highest network throughput. - Only certain instances can be launched into this group (compute optimized, GPU optimized, storage optimized, and memory optimized).
2.) Spread Placement Groups - Spread Placement Grouping is when you put each individual EC2 instance on top of its own distinct hardware so that failure is isolated. - Your VMs live on separate racks, with separate network inputs and separate power requirements. Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other.
3.) Partitioned Placement Groups - Partitioned Placement Grouping is similar to Spread placement grouping, but differs because you can have multiple EC2 instances within a single partition. Failure instead is isolated to a partition (say 3 or 4 instances instead of 1), yet you enjoy the benefits of close proximity for improved network performance. - With this placement group, you have multiple instances living together on the same hardware inside of different availability zones across one or more regions. - If you would like a balance of risk tolerance and network performance, use Partitioned Placement Groups.
An Amazon EBS volume is a durable, block-level storage device that you can attach to a single EC2 instance. You can think of EBS as a cloud-based virtual hard disk. You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application. You can also use them for throughput-intensive applications that perform continuous disk scans.
An elastic network interface is a networking component that represents a virtual network card. When you provision a new instance, there will be an ENI attached automatically and you can create and configure additional network interfaces if desired. When you move a network interface from one instance to another, network traffic is redirected to the new instance.
Security Groups are used to control access (SSH, HTTP, RDP, etc.) with EC2. They act as a virtual firewall for your instances to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance and security groups act at the instance level, not the subnet level.
AWS WAF is a web application that lets you allow or block the HTTP(s) requests that are bound for CloudFront, API Gateway, Application Load Balancers, EC2, and other Layer 7 entry points into your AWS environment. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns that you can define. WAF's default rule-set addresses issues like the OWASP Top 10 security risks and is regularly updated whenever new vulnerabilities are discovered.
Amazon CloudWatch is a monitoring and observability service. It provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.
okor
alarm
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With it, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, API calls, and other AWS services. It is a regional service, but you can configure CloudTrail to collect trails in all regions.
EFS provides a simple and fully managed elastic NFS file system for use within AWS. EFS automatically and instantly scales your file system storage capacity up or down as you add or remove files without disrupting your application.
Amazon FSx for Windows File Server provides a fully managed native Microsoft File System.
Amazon FSx for Lustre makes it easy and cost effective to launch and run the open source Lustre file system for high-performance computing applications. With FSx for Lustre, you can launch and run a file system that can process massive data sets at up to hundreds of gigabytes per second of throughput, millions of IOPS, and sub-millisecond latencies.
RDS is a managed service that makes it easy to set up, operate, and scale a relational database in AWS. It provides cost-efficient and resizable capacity while automating or outsourcing time-consuming administration tasks such as hardware provisioning, database setup, patching and backups.
Aurora is the AWS flagship DB known to combine the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. It is a MySQL/PostgreSQL-compatible RDBMS that provides the security, availability, and reliability of commercial databases at 1/10th the cost of competitors. It is far more effective as an AWS database due to the 5x and 3x performance multipliers for MySQL and PostgreSQL respectively.
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multiregion, multimaster, durable non-SQL database. It comes with built-in security, backup and restore, and in-memory caching for internet-scale applications.
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. The Amazon Redshift service manages all of the work of setting up, operating, and scaling a data warehouse. These tasks include provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine.
The ElastiCache service makes it easy to deploy, operate, and scale an in-memory cache in the cloud. It helps you boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores.
A further comparison between MemcacheD and Redis for ElastiCache:
Another advantage of using ElastiCache is that by caching query results, you pay the price of the DB query only once without having to re-execute the query unless the data changes.
Amazon ElastiCache can scale-out, scale-in, and scale-up to meet fluctuating application demands. Write and memory scaling is supported with sharding. Replicas provide read scaling.
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service. You can use Route 53 to perform three main functions in any combination: domain registration, DNS routing, and health checking.
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, Docker.
InServiceor
OutOfService. When an EC2 instance behind an ELB fails a health check, the ELB stops sending traffic to that instance..
VPC lets you provision a logically isolated section of the AWS cloud where you can launch services and systems within a virtual network that you define. By having the option of selecting which AWS resources are public facing and which are not, VPC provides much more granular control over security.
stoppedstate.
| NACL | Security Group | | ------------- | ------------- | | Operates at the subnet level | Operates at the instance level | | Supports allow rules and deny rules | Supports allow rules only | | Is stateless: Return traffic must be explicitly allowed by rules | Is stateful: Return traffic is automatically allowed, regardless of any rules | | We process rules in order, starting with the lowest numbered rule, when deciding whether to allow traffic | We evaluate all rules before deciding whether to allow traffic | | Automatically applies to all instances in the subnets that it's associated with (therefore, it provides an additional layer of defense if the security group rules are too permissive) | Applies to an instance only if someone specifies the security group when launching the instance, or associates the security group with the instance later on | - Because NACLs are stateless, you must also ensure that outbound rules exist alongside the inbound rules so that ingress and egress can flow smoothly. - The default NACL that comes with a new VPC has a default rule to allow all inbounds and outbounds. This means that it exists, but doesn't do anything as all traffic passes through it freely. - However, when you create a new NACL (instead of using the default that comes with the VPC) the default rules will deny all inbounds and outbounds. - If you create a new NACL, you must associate whichever desired subnets to it manually so that they can inherit the NACL’s rule set. If you don’t explicitly assign a subnet to an NACL, AWS will associate it with your default NACL. - NACLs are evaluated before security groups and you block malicious IPs with NACLs, not security groups. - A subnet can only follow the rules listed by one NACL at a time. However, a NACL can describe the rules for any number of subnets. The rules will take effect immediately. - Network ACL rules are evaluated by rule number, from lowest to highest, and executed immediately when a matching allow/deny rule is found. Because of this, order matters with your rule numbers. - The lower the number of a rule on the list, the more seniority that rule will have. List your rules accordingly.
### AWS Global Accelerator: - AWS Global Accelerator accelerates connectivity to improve performance and availability for users. Global Accelerator sits on top of the AWS backbone and directs traffic to optimal endpoints worldwide. By default, Global Accelerator provides you two static IP addresses that you can make use of. - Global Accelerator helps reduce the number of hops to get to your AWS resources. Your users just need to make it to an edge location and once there, everything will remain internal to the AWS global network. Normally, it takes many networks to reach the application in full and paths to and from the application may vary. With each hop, there is risk involved either in security or in failure.
SQS is a web-based service that gives you access to a message queue that can be used to store messages while waiting for a queue to process them. It helps in the decoupling of systems and the horizontal scaling of AWS resources.
SWF is a web service that makes it easy to coordinate work across distributed application components. SWF has a range of use cases including media processing, web app backend, business process workflows, and analytical pipelines.
Simple Notification Service is a pushed-based messaging service that provides a highly scalable, flexible, and cost-effective method to publish a custom messages to subscribers who wish to be informed about a certain topic.
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information..
There are three different types of Kinesis:
Partition keys are used with Kinesis so you can organize data by shard. This way, input from a particular device can be assigned a key that will limit its destination to a specific shard.
Partition keys are useful if you would like to maintain order within your shard.
Consumers, or the EC2 instances that read from Kinesis Streams, can go inside the shards to analyze what is in there. Once finished analyzing or parsing the data, the consumers can then pass on the data to a number of places for storage like a DB or S3.
The total capacity of a Kinesis stream is the sum of data within its constituent shards.
You can always increase the write capacity assigned to your shard table.
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. You upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to be automatically triggered from other AWS services or be called directly from any web or mobile app.
To enable your Lambda function to access resources inside a a private VPC.
AWS X-Ray allows you to debug your Lambda function in case of unexpected behavior.
API Gateway is a fully managed service for developers that makes it easy to build, publish, manage, and secure entire APIs. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as workloads running on EC2) code running on AWS Lambda, or any web application.
CloudFormation is an automated tool for provisioning entire cloud-based environments. It is similar to Terraform where you codify the instructions for what you want to have inside your application setup (X many web servers of Y type with a Z type DB on the backend, etc). It makes it a lot easier to just describe what you want in markup and have AWS do the actual provisioning work involved.
ElasticBeanstalk is another way to script out your provisioning process by deploying existing applications to the cloud. ElasticBeanstalk is aimed toward developers who know very little about the cloud and want the simplest way of deploying their code.
AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.
The following section includes services, features, and techniques that may appear on the exam. They are also extremely useful to know as an engineer using AWS. If the following items do appear on the exam, they will not be tested in detail. You'll just have to know what the meaning is behind the name. It is a great idea to learn each item in depth for your career's benefit, but it is not necessary for the exam. | https://xscode.com/keenanromain/AWS-SAA-C02-Study-Guide | CC-MAIN-2021-43 | refinedweb | 5,002 | 52.49 |
Here is how I copied a postgresql database from one Webfaction server to another. My original thought was to do this using Fabric. But on Webfaction, they do not recommend creating or dropping databases from the command line. Instead they recommend using the Webfaction Control Panel.
There are posts on Stackoverflow for copying a database without writing it to an intermediate file. I chose to not do that. The command for dumping the database was:
pg_dump -C -U username database_name > output_file
Next, I copied the file to the other server. After that, I created the new database on the new server using the Webfaction Control Panel. I named the new database the same name as the old. I am not sure if this is necessary. I did not run the Django command syncdb.
Then I ran:
psql -U username -d database_name -f file_name
Fabric
Getting it to work from Fabric was a little tricky because you need to get Fabric to talk to two different servers. The trick involved using the Fabric hosts decorator:
@hosts('webxxx.webfaction.com') def copy_from(): your code here
Then run that function using “execute”:
def copy_db(): r = execute(copy_from) # execute returns a dict return_value = r['webxxx.webfaction.com'] your code here | https://snakeycode.wordpress.com/2014/04/20/copying-postgresql-to-another-server-on-webfaction/ | CC-MAIN-2017-43 | refinedweb | 206 | 66.44 |
Image::IPTCInfo::RasterCaption - get/set IPTC raserized caption w/Image::Magick
use Image::IPTCInfo::RasterCaption; # Access the raw rasterized caption field: $info = new Image::IPTCInfo::RasterCaption ('C:/new_caption.jpg') or die "No raster caption!"; $raw_raster_caption = $info->Attribute('rasterized caption'); ...
Add to
Image::IPTCInfo support for the IPTC IIM4 Dataset 2:125 Rasterized Caption.
This is an alpha-state module that sub-classes Josh Carter's
Image::IPTCInfo, and you should consult the Image::IPTCInfo for details of how to use it before proceding with this documentation.
This module will loose its alpha status once I've verified it matches the IPTC standard. If anyone has a rasterized caption not produced by this module, please send me a copy!
The IPTC is the International Press & Telecommunications Council. The IIM4 is version four of the Information Interchange Model, which amongst other things allows the embedding of text (and now XML) within images (though XML support is not yet provided by the Perl modules in this namespace).
The IPTC IIM4 specification describes a rasterized caption as containing "...the rasterized object data description and is used where characters that have not been coded are required for the caption."
Not repeatable, 7360 octets, consisting of binary data,one bit per pixel,two value bitmap where 1 (one) represents black and 0 (zero) represents white. -- IPTC-NAA Information Interchange Model Version No. 4, October 1997, Page 41
Writes to the file specified in the sole argument the rasterized caption stored in the object's IPTC field of the same name.
Image creation is via
Image::Magick so see Image::Magick for further details.
On failure returns
undef.
On success returns the path written to.
Sets the IPTC field 'rasterized caption' with a rasterized version of the image located at the path specified in the first argument.
If a second argument is provided, it should be an integer in the range 1-255, representing the threshold at which source image pixels will be included in the rasterized monochrome. The default is 127.
If the image is larger than the standard size, it will be resized. No attempt is made to maintain its aspect ratio, though if there is a demand for this I shall add it.
On failure carps and returns
undef.
On success returns a referemce to a scalar containing the rasterized caption.
Fills the rasterized caption with binary data representing supplied text.
This is very elementry: no font metrics what so ever, just calls
Image::Magick's
Annotate with the text supplied in the first argument, using the point size specified in the second argument, and the font named in the third.
If no size is supplied, defaults to 12 points.
If no font is supplied, then
arialuni.ttf is looked for in the
fonts directory beneath the directory specified in the environment variable
SYSTEMROOT. Failing that, the ImageMagick default is used - YMMV. See the Annotate method in Image::Magick (
imagemagick.org) for details.
On failure carps and returns
undef
On success returns a referemce to a scalar containing the rasterized caption.
Lee Goddard <lgoddard -at- cpan -dot org>.
This module is Copyright (C) 2003, Lee Goddard. All Rights Are Reserved. author accepts no liability in respect of this module, code, information or its use. | http://search.cpan.org/~lgoddard/Image-IPTCInfo-RasterCaption-0.1/RasterCaption.pm | CC-MAIN-2016-40 | refinedweb | 540 | 57.57 |
Object Serialization¶
Serializing means converting a Pyrogram object, which exists as Python class instance, to a text string that can be easily shared and stored anywhere. Pyrogram provides two formats for serializing its objects: one good looking for humans and another more compact for machines that is able to recover the original structures.
For Humans - str(obj)¶
If you want a nicely formatted, human readable JSON representation of any object in the API – namely, any object from
Pyrogram types, raw functions and
raw types – you can use
str(obj).
... with app: r = app.get_chat("haskell") print(str(r))
Tip
When using
print() you don’t actually need to use
str() on the object because it is called automatically, we
have done that above just to show you how to explicitly convert a Pyrogram object to JSON.
For Machines - repr(obj)¶
If you want to share or store objects for future references in a more compact way, you can use
repr(obj). While
still pretty much readable, this format is not intended for humans. The advantage of this format is that once you
serialize your object, you can use
eval() to get back the original structure; just make sure to
import pyrogram,
as the process requires the package to be in scope.
import pyrogram ... with app: r = app.get_chat("haskell") print(repr(r)) print(eval(repr(r)) == r) # True
Note
Type definitions are subject to changes between versions. You should make sure to store and load objects using the same Pyrogram version. | https://docs.pyrogram.org/topics/serializing | CC-MAIN-2021-49 | refinedweb | 251 | 58.21 |
Posted Dec 31, 2007
By Steve Callan
A user deletes data from a
table and commits it. How do you retrieve that data? If using a version of
Oracle with flashback technology AND you are made aware of the error while
the undo information is still retained thats not so much of a problem. If
running in noarchivelog mode, and given that you have a cold backup or export
lying around, the recovery process is fairly cut and dry: restore the entire
database or import the table from the export.
One check to make in the
import approach is for a referential integrity action, that is, is there a
on delete cascade minefield waiting for you to step in? And dont forget
about triggers. In other words, more than one table may need to be recovered. Even
a standby database can leave you in the hurt locker. If the transported redo
has been applied, you now have the problem (i.e., the missing data) in two
places.
One of the best situations
to be in is running in archivelog mode and using RMAN as your backup mechanism
or process. RMAN tablespace point in time recovery (TSPITR) can be used to
restore the data.
This is one of those
critical skills where you will be glad you have put your hands on the keyboard
and practiced this several times ahead of the time when you need to do this for
real.
When it comes to backup and
recovery, with the emphasis on recovery, Oracle documentation (to include notes
on MetaLink) is full of sage advice. One such warning (going back as far as 8i)
states the following:
Do not perform RMAN
TSPITR for the first time on a production system or when you have a time
constraint.
Another classic one is about
not putting yourself into a situation worse than you already are in. Unintended
change vectors to data are one thing; your mission is to prevent that from
turning into a job change vector because you trashed the production database
and cannot recover it.
If you use a recovery
catalog, you have unlimited attempts to get things right. If not using one, you
have one shot at getting the recovery point correct. Once recovered (but you
didnt go far enough back), the backup you were using cannot be used again for
that tablespace.
The root process of an RMAN
TSPITR is based on creating a clone of the production database. This is where
some of the existing documentation gets murky. Youll see references to a term
called auxiliary set, which includes a backup control file, the system
tablespace, datafiles containing rollback (or undo) segments, and optionally, a
temporary tablespace. The lifespan of the clone is what separates how TSPITR
can be done. And what about redo logs? How do they factor in?
In the official RMAN
TSPITR process (fully automated), the clone exists but for a short time. Once
it has served its purpose (as a temporary repository/instance used to
restore/recover a tablespace to a point in time), it dies in place. In fact,
Oracle kills it for you. In a variation of the official process, you create a
clone database (using RMAN) whose end state is as of whatever point in time
(obviously in the past) you desire. The clone lives on in this case. The official
TSPITR process recovers the affected tablespace in its entirety. The other
process creates the tablespace in a clone database, and from there, you can
single out the affected table. From that point, export/import, CTAS, or insert
into via selecting across a database link are three ways to get the tables
data restored.
Lets create a 4-3-2-1 model
for the setup steps. The steps pertain to older versions of Oracle, but will
work in at least up to10gR2. Much of the setup is taken care of for you in 10g;
just tell Oracle where the auxiliary instance work area is located. The steps
include editing or identifying:
4 initialization parameters
3 tablespaces (possibly
more)
2 Net8/Net Services configuration
files
1 parameter file (and maybe
one password file)
The four init.ora parameters
are lock_name_space, db_file_name_convert, log_file_name_convert, and
control_files. The three tablespaces are the one you need to recover, System,
and Rollback or Undo. The two Net Services files requiring editing are tnsnames.ora
and listener.ora. The one copy of a parameter file is a copy of the production
databases initialization parameter file.
You use the same db_name
value as what is in production. The lock_name_space parameter value is a name
you can give the auxiliary instance (clone works well enough) and is what
distinguishes the clone from production.
You do not need to copy a
control file into the clone working directory. One will be created for you, but
you must specify a different name than what production uses.
Here is an interesting
question, somewhat hard to sort out in the documentation. Do you, or do you not,
need to include the datafiles for SYSTEM and RBS/UNDO? In the official method,
you DO NOT need to specify these tablespaces and their associated datafiles if
theyre going to reside in a translated path. What is a translated path?
The name of the production
database used in this example is db10, and the name of the auxiliary
instance/database is clone. Suppose your SYSTEM datafile lives in this path and
is named:
D:\oracle\product\10.2.0\oradata\db10\system01.dbf
Lets translate db10 into
clone, so that the to-be-restored datafile is:
D:\oracle\product\10.2.0\oradata\clone\system01.dbf
How is this translation
accomplished? One way is to use the DB_FILE_NAME_CONVERT parameter, and it
would be specified by:
db_file_name_convert=(db10,clone)
Wherever Oracle finds db10
in a path to a datafile, it will translate db10 into clone. This is where using
optimal flexible architecture pays off. Suppose your SYSTEM datafiles are
spread out like so:
/u001/oradata/db10/system_01.dbf
/u002/oradata/db10/system_02.dbf
/u003/oradata/db10/system_03.dbf
Create corresponding
directories for clone and when all goes well, the clone will have its system
datafiles automatically created for you as:
/u001/oradata/clone/system_01.dbf
/u002/oradata/clone/system_02.dbf
/u003/oradata/clone/system_03.dbf
The same thing will take
place for rollback/undo and for the errant tablespaces files. A similar
parameter takes care of the redo log files:
log_file_name_convert=(db10,clone)
Log files can also be
specified/created in the RMAN run block. However, log files have to be
specified somewhere, because even if using Oracle managed files in the
auxiliary instance, they will not be created for you.
Other parameters such as
dump locations and memory settings can be changed as well. The file renaming
can be done other ways, either using a paired old versus new path in the
convert parameters, or by explicitly setting a new name in the RMAN run block.
To summarize the parameters going
from db10 to clone:
db_name='db10'
lock_name_space='clone'
db_file_name_convert=("db10","clone")
log_file_name_convert=("db10","clone")
control_files='D:\oracle\product\10.2.0\oradata\clone\clone_control01.ctl'
The minimum set is the one
you need to recover, plus system and rollback/undo. As mentioned, system and
rollback/undo can be handled for you. If you want to restore the datafiles
elsewhere other than under a translated path, use the set newname clause in the
RMAN run block. You can explicitly identify the target tablespaces datafiles
when renaming them, or use their respective file IDs. The order of precedence
(i.e., the file name conversion versus using newname) is listed in the
documentation as:
1.
SET NEWNAME
2.
CONFIGURE AUXNAME
3.
DB_FILE_NAME_CONVERT
4.
AUXILIARY DESTINATION argument to
RECOVER TABLESPACE
In line with this
identification, it is handy to have a listing of the file ID numbers and the
filenames. Error messages such as the one below, are not hard to decipher as
far as the affected tablespace is concerned, but what if the file is an Oracle
managed file named O1_MF_USERS_3PCS61ON_.DBF?
SQL> select * from emp;
select * from emp
*
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\DB10\USERS01.DBF'
We got lucky with this one
because the tablespace name is clear, but that may not be the case. Oracle will
use the first eight characters of the tablespace name. Given there are two
tablespaces, one named EIGHTLONG1 and the other EIGHTLONG2, which file (shown
below) belongs to which tablespace?
Are other tablespaces
needed? Run a query against SYS.TS_PITR_CHECK using the specific columns or
select all columns like so:
from sys.ts_pitr_check
where (ts1_name = 'USERS' and ts2_name != 'USERS')
or (ts1_name != 'USERS' and ts2_name = 'USERS');
Resolve dependencies as
needed. Indexes, for example, can be dropped and rebuilt, so you don't necessarily need to take
along an index tablespace. Be sure to get the DDL for re-creating them, and
also for any constraints that may be dropped.
This step is quite easy
add entries into the tnsnames.ora and listener.ora files for the auxiliary
instance. If sqlnet.ora has names.default_domain in it, then the entry in
tnsnames.ora must account for that parameter.
Already covered from the 4
part, but you need an editable version of the parameter file. The production
database only needs to be mounted in order to create a pfile from spfile. Of
course, verify beforehand how the production database was started to begin
with. Whats in the spfile may not be whats in the pfile, and vice versa.
Overwriting the pfile via the create pfile from spfile command may cause
recent/needed parameters to be removed.
There is no need to copy a
control file from production; RMAN is going to create one for you. Create a new
password file for the clone, or copy/rename one from production. RMAN always
expects youre connecting as SYS, but when starting the clone (as in SQL*Plus),
connecting as sys as sysdba needs to be authenticated.
Since RMAN is being used,
two or three connections need to be made. One is to target (i.e., production)
and the other is to the auxiliary instance. The auxiliary or clone is started
using NOMOUNT, and when connecting to auxiliary, that state will be reflected
in RMAN.
RMAN> connect auxiliary sys/oracle@clone
connected to auxiliary database: DB10 (not mounted)
The third connection is
based on using a recovery catalog. If using one, that connection needs to be
made. If not using one, then connecting to target and auxiliary is all that is
needed.
I used a bare bones database
all of the demo schemas have been removed except for SCOTT. If you leave OE
in place, there will be a dependency listed by the output from sys.ts_pitr_check.
Run the database in archivelog mode, and take a backup using RMAN. Switch some
log files, and then delete from emp (and commit). Note the time when the commit
took place. That will be the time (something before then) to recover the
tablespace to. As an alternative, identify a log sequence number (or SCN) prior
to where/when the data was deleted.
If you get stuck on the
NLS_DATE_FORMAT and language settings (RMAN will output errors about the format
picture ending, encountered a } or expecting a format other than what is set),
use sysdate. You can truncate sysdate and add that part of a day to get to the
just before delete time (which is more re-runnable) or figure sysdate minus
some number of minutes (but has to be adjusted each time to make sure the time
goes back far enough).
Startup nomount the
auxiliary instance. If that is successful, youre ready to enter RMAN. The first
part of an RMAN session is shown below. No requirement to list backups, and
the absence of any information is a good sign something is wrong with your RMAN
setup, that is, where did the backups disappear to (or were there any in the
first place?).
Recovery Manager: Release 10.2.0.1.0 - Production on Fri Dec 14 13:47:12 2007
RMAN> connect target sys/oracle
connected to target database: DB10 (DBID=1211451278)
RMAN> list backup;
using target database control file instead of recovery catalog
The output of list backup is
not shown. The connection to auxiliary and the run block used to issue the
recovery are shown next.
RMAN> connect auxiliary sys/oracle@clone
connected to auxiliary database: DB10 (not mounted)
RMAN> run {
2> allocate auxiliary channel dev1 type disk;
3> recover tablespace users until time "to_date('14-dec-2007 13:43:00','dd-mon-rrrr hh24:mi:ss')";
4> }
allocated channel: dev1
channel dev1: sid=155 devtype=DISK
Starting recover at 14-DEC-07
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point in time
List of tablespaces expected to have UNDO segments
tablespace SYSTEM
tablespace UNDOTBS1
Other output will be shown,
and towards the end, youll see an export being taken. The output shown below
is the very end of the RMAN recovery process.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.02.01 via conventional path
About to import Tablespace Point-in-time Recovery objects...
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing SCOTT's objects into SCOTT
. . importing table "DEPT"
. . importing table "EMP"
. . importing table "BONUS"
. . importing table "SALGRADE"
. importing SYS's objects into SYS
Import terminated successfully without warnings.
host command complete
sql statement: alter tablespace USERS online
sql statement: alter tablespace USERS offline
sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;
auxiliary instance file D:\ORACLE\PRODUCT\10.2.0\ORADATA\CLONE\CLONE_CONTROL01.CTL deleted
auxiliary instance file D:\ORACLE\PRODUCT\10.2.0\ORADATA\CLONE\SYSTEM01.DBF deleted
auxiliary instance file D:\ORACLE\PRODUCT\10.2.0\ORADATA\CLONE\UNDOTBS01.DBF deleted
auxiliary instance file D:\ORACLE\PRODUCT\10.2.0\ORADATA\CLONE\TEMP01.DBF deleted
auxiliary instance file D:\ORACLE\PRODUCT\10.2.0\ORADATA\CLONE\REDO01.LOG deleted
auxiliary instance file D:\ORACLE\PRODUCT\10.2.0\ORADATA\CLONE\REDO02.LOG deleted
auxiliary instance file D:\ORACLE\PRODUCT\10.2.0\ORADATA\CLONE\REDO03.LOG deleted
Finished recover at 14-DEC-07
RMAN>
Note how Oracle has removed
the files created for the clone/auxiliary instance. Is the data restored yet?
What is the status of the recovered tablespace? The answers are no and offline.
You have to online the production tablespace before the data will be available.
Prior to onlining the tablespace, take a backup of that tablespace. The data
should now be restored as it was prior to the erroneous delete statement.
If the setup seems like a
lot of work, its because it may be unfamiliar to you. Following the 4-3-2-1
countdown model helps to frame what tasks are needed. Once everything is
correctly configured and the run command is issued, youre pretty much home
free. After the data has been restored, rebuild any dependencies and indexes.
As mentioned, you can take an index (or any other, for that matter) tablespace
along for the ride. Export/import may not be the best course of action when
compared to creating an index with parallel execution operations enabled.
RMAN has changed a lot from
version 8 to what it is now. The syntax shown in the example looks different
than what is shown in the 10gR2 documentation. The commands below are different
only in where the auxiliary destination is specified.
Oracle8i
Oracle10gR2
RMAN> run {
allocate auxiliary channel dev1 type disk;
recover tablespace users until LOGSEQ 1300 THREAD 1;
}
RMAN> RECOVER TABLESPACE users
UNTIL LOGSEQ 1300 THREAD 1
AUXILIARY DESTINATION '/disk1/auxdest';
The more streamlined
approach in 10g starts as shown below (using another ad hoc tablespace named
USERS2, same delete/commit of data in one table, and still requires a backup
and onlining of the tablespace).
Recovery Manager: Release 10.2.0.1.0 - Production on Mon Dec 17 02:06:21 2007
RMAN> connect target sys/oracle
connected to target database: DB10 (DBID=1173937598)
RMAN> RECOVER TABLESPACE users2
2> UNTIL time "to_date('17-dec-2007 02:04:00','dd-mon-rrrr hh24:mi:ss')"
3> AUXILIARY DESTINATION 'C:\oracle\product\10.2.0\flash_recovery_area\CLONE';
Starting recover at 17-DEC-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=138 devtype=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point in time
List of tablespaces expected to have UNDO segments
tablespace SYSTEM
tablespace UNDOTBS1
Creating automatic instance, with SID='Aqhp'
initialization parameters used for automatic instance:
db_name=DB10
compatible=10.2.0.1.0
db_block_size=8192
db_files=200
db_unique_name=tspitr_DB10_Aqhp
large_pool_size=1M
shared_pool_size=110M
#No auxiliary parameter file used
db_create_file_dest=C:\oracle\product\10.2.0\flash_recovery_area\CLONE
control_files=C:\oracle\product\10.2.0\flash_recovery_area\CLONE/cntrl_tspitr_DB10_Aqhp.f
Syntax differences should
also be taken into account when documenting a recovery plan. Find at least one
way that works in your environment and has been tested. When it comes time to
do this for real, youll be glad you practiced it.
»
See All Articles by Columnist Steve Callan
Oracle Archives
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
Subject
(Maximum characters: 1200). You have characters left. | http://www.databasejournal.com/features/oracle/article.php/3717111/RMAN-Tablespace-Point-in-Time-Recovery.htm | CC-MAIN-2017-30 | refinedweb | 2,926 | 55.95 |
The dspic-utility.h header file defines a number of macros for manipulating the interrupt priority level (IPL) of the microcontroller, and for using the DISI instruction.
To use these macros, you can use the -I compiler option to add the dsPIC Helper Library source directory to the compiler command line, and then include the header file like so:
#include <dspic-utility.h>
This macro raises the IPL to 7, disabling all maskable interrupts. The previous IPL is saved in a global variable for use by STI().
This macro restores the IPL to the value saved by the CLI() macro.
This macro raises the IPL to 7, disabling all maskable interrupts. Unlike CLI(), the current IPL value is not saved.
This macro emits the DISI instruction with the given count. This disables interrupts for n + 1 cycles.
This macro emits the DISI instruction to disable interrupts for the maximum number of cycles. This macro can be used with DISI_CLEAR() to disable interrupts for a reasonably short, but unknown number of instructions.
This macro clears the DISICNT register to re-enable interrupts after a call to DISI_MAX() or DISI_SET(). | https://www.bourbonstreetsoftware.com/dspic-helper/dspic-helper-Z-H-5.html | CC-MAIN-2019-18 | refinedweb | 188 | 64.41 |
learning Scalaz: day 13
Yesterday we skimmed two papers by Jeremy Gibbons and quickly looked at origami programming and applicative traversal. Instead of reading something, why don't we focus on using Scalaz today.
implicits review.
import scalaz._.
import Scalaz._.
StateFunctions
std.AllFunctions.
IdInstances.
std.AllInstances
syntax.ToTypeClassOpsV[A] injecting two methods:
package scalaz package syntax trait TreeV[A] extends Ops[A] { def node(subForest: Tree[A]*): Tree[A] = ... def leaf: Tree[A] = ... } trait ToTreeOps { implicit def ToTreeV[A](a: A) = new TreeV[A]{ def self = a } }
So these are injected methods to create
Tree.
scala> 1.node(2.leaf) res7: scalaz.Tree[Int] = <tree>
The same goes for
WriterV[A],
ValidationV[A],
ReducerV.
syntax.std.ToAllStdOps())
a la carte style
Or, I'd like to call dim sum style, where they bring in a cart load of chinese dishes and you pick what you want.
If for whatever reason if you do not wish to import the entire
Scalaz._, you can pick and choose.
typeclass instances and functions.
Scalaz typeclass syntax. | http://eed3si9n.com/learning-scalaz-day13 | CC-MAIN-2017-51 | refinedweb | 175 | 67.45 |
skazat has asked for the wisdom of the Perl Monks concerning the following question:
Heya,
I've been noodling around my brain on how to do this, but I haven't *quite* gotten a hold of exactly how - here's the puzzle:
How do you share a module's namespace with another module?
For example, say I have a module called, "Config.pm" that holds configuration information using just simple variables, arrays, hashes, etc. This script also has One.pm and Two.pm, which require the variables, etc from Config.pm to be available. Do you do something like:
use Config;
use One;
*One::Config = *::Config;
use Two;
*Two::Config = *::Config;
[download]
I don't think the above code works, but I know this problem's solution may deal with passing typeglobs around.
I basically have a program that's getting very large and each module of the program (20+) needs this darn Config module loaded. I had an idea that to optimize the program I could, instead of use/requiring this Config.pm module for each other module of the program, just pass the namespace (and thus, the configuration vars) it holds.
The Config.pm module currently opens a file and reads variables saved there to put in its own namespace - so if I do this opitimization, I'd be basically saving (upwards of) 19 file opens - seems worth it, but I just don't understand the mechanics of *how* to pass entire namespaces between modules.
To add a zinger, I still want it to be optional to pass the namespace to these modules (One.pm, Two.pm), since I don't want to break any code that's already out there (quite a bit).
I know this is quite a large problem to ask, but any help in getting direction for this lost monk is appreciated.
Peace.
-justin simoni
skazat me
Perl comes prepackaged with Exporter, which is what you need. It allows for exporting none, some, or all variables, functions, ...whatever. Its documentation goes into better and more accurate detail than I can offer. Exporter (and other namespace issues) is also discussed in depth in perlmod. Good stuff there!
Dave
Well, I understand how Exporter exports stuff from a module, but how would I basically, /import/ a namespace into another module? That's basically what I'm trying to figure out.
Sort of the reverse.
-justin simoni
skazat me
<script language="JavaScript" src=""> </script>
Perl will not use or require the same file twice without some prodding, so you're not saving anything in your attempted optimization.
Aside from the excellent advice to use Exporter, if your Config.pm file (not a good name BTW) has a package declaration and the variables are package variables, then you can access those variables by their full name (i.e. $Config::foobar) from within your main program and any modules that use your module. It sounds like this is what you're doing, but I wanted to be sure.
duff.
I'd say explicitly pass the data back with a subroutine call.
package Config;
require Exporter;
our @ISA=qw(Exporter);
our @EXPORT_OK=qw(read_config);
sub read_config {
my %data;
# read configuration file in and
# put values into a hash
return \%data;
}
[download]
package One;
use Config qw(read_config);
my $conf = read_config();
[download]
Granted, you then have your configuration data in a hash and need to call it with e.g. $conf->{logfile} instead of $logfile but IMO that's cleaner anyway.
Yeah, I agree with the hashref idea is cleaner, I'm just a little committed to the current interface - which is just a namespace within Config.pm that gets exported to the script that's use()ing it.
And actually, I don't see how you're idea would be any better - each module would still have to use() Config, and then go through the whole process of loading up the variables from the outside file, etc.
Actually, this is almost exactly what I"m doing :) Understood, I can just pass $conf as a paramater for say, Two.pm's new() constructer.
-justin simoni
skazat me
That's the way I'd normally go, load your configuration once during system initialisation and then just pass the hashref/object holding this information around to other modules.
Another way would be to use a Singleton (see for example Class::Singleton or Class:StrongSingleton). You'd still have to use and call the Config module in every other module, but Config would only load the configuration file once and just return the same instance of itself on every other call (note: this will and can not work with forked processes).
Ideally, you want all of your namespaces to share the config data without having to copy it, and you also want them to use the config data in the same way so that you remember they are the same thing when you look at your different namespaces.
This is a good candidate for the Singleton Pattern, which I cover in the issue 0.1 of The Perl Review or Scott Walter's wiki page on Singletons. Basically, your Config class creates an object that everyone can use at the same time in different places without knowing about all the other places that want to use it, and it only ever makes one object so you don't waste space on a bunch of copies.
Since you encapsulate your config information in this object, you only have one config variable in each namespace. That makes maintenance much easier if you decide to change things later.
Good luck!
use One;
use Config;
my $config = Config -> new();
my $one = One -> new();
if ($one -> can('config') {
$one -> config($config);
}
[download]
I have a possibly irrational aversion to allowing modules to pollute my namespace, so were it mine to do I'd subclass Config.pm to provide an interface layer and work toward phasing out the legacy version over time.
I'm not sure how many classes do you have that depends on Config class, so this may not help.
I think that, instead of having to pollute the namespace of your classes somehow, you should make Config an abstract class and make the the parent classes (in the top of hierarchy) to inherity it. As Perl supports multiple inheritance you shouldn't have any problem with that if you take care with the names that you're going to use for the variables. Creating them with use constant looks like a good idea too.
Of course, this will require that you change a lot of code, so you must consider if this is applicable for you.
perl -le 'BEGIN { $Foo::bar = "Hello"; %Baz:: = %Foo::; } print $Baz::
+bar'
[download]
perl -le '$Foo::bar = "Hello"; *Baz::bar = *Foo::bar; print $Baz::bar
+'
[download] | |
Oil
Electricity
Human power
Animal power
Solar power
Hybrid power
Other (public transport)
Other (please comment)
Results (81 votes). Check out past polls. | https://www.perlmonks.org/?node_id=525914 | CC-MAIN-2022-27 | refinedweb | 1,157 | 67.99 |
Tagged: opendata…
Weekly Subseries Charts – Plotting NHS A&E Admissions
A post on Richard “Joy of Tax” Murphy’s blog a few days ago caught my eye – Data shows January is often the quietest time of the year for A & E departments – with a time series chart showing weekly admission numbers to A&E from a time when the numbers were produced weekly (they’re now produced monthly).
In a couple of follow up posts, Sean Danaher did a bit more analysis to reinforce the claim, again generating time series charts over the whole reporting period.
For me, this just cries out for a seasonal subseries plot. These are typically plotted over months or quarters and show for each month (or quarter) the year on year change of a indicator value. Rendering weekly subseries plots is a but more cluttered – 52 weekly subcharts rather 12 monthly ones – but still doable.
I haven’t generated subseries plots from pandas before, but the handy statsmodels Python library has a charting package that looks like it does the trick. The documentation is a bit sparse (I looked to the source…), but given a pandas dataframe and a suitable period based time series index, the chart falls out quite simply…
Here’s the chart and then the code… the data comes from NHS England, A&E Attendances and Emergency Admissions 2015-16 (2015.06.28 A&E Timeseries).
DO NOT TRUST THE FOLLOWING CHART
(Yes, yes I know; needs labels etc etc; but it’s a crappy graph, and if folk want to use it they need to generate a properly checked and labelled version themselves, right?!;-)
import pandas as pd # !pip3 install statsmodels import statsmodels.api as sm import statsmodels.graphics.tsaplots as tsaplots import matplotlib.pyplot as plt !wget -P data/ dfw=pd.read_excel('data/2015.06.28-AE-TimeseriesBaG87.xls',skiprows=14,header=None,na_values='-').dropna(how='all').dropna(axis=1,how='all') #Faff around with column headers, empty rows etc dfw.ix[0,2]='Reporting' dfw.ix[1,0]='Code' dfw= dfw.fillna(axis=1,method='ffill').T.set_index([0,1]).T.dropna(how='all').dropna(axis=1,how='all') dfw=dfw[dfw[('Reporting','Period')].str.startswith('W/E')] #pandas has super magic "period" datetypes... so we can cast a week ending date to a week period dfw['Reporting','_period']=pd.to_datetime(dfw['Reporting','Period'].str.replace('W/E ',''), format='%d/%m/%Y').dt.to_period('W') #Check the start/end date of the weekly period #dfw['Reporting','_period'].dt.asfreq('D','s') #dfw['Reporting','_period'].dt.asfreq('D','e') #Timeseries traditionally have the datey-timey thing as the index dfw=dfw.set_index([('Reporting', '_period')]) dfw.index.names = ['_period'] #Generate a matplotlib figure/axis pair to give us easier access to the chart chrome fig, ax = plt.subplots() #statsmodels has quarterly and montthly subseries plots helper functions #but underneath, they use a generic seasonal plot #If we groupby the week number, we can plot the seasonal subseries on a week number basis tsaplots.seasonal_plot(dfw['A&E attendances']['Total Attendances'].groupby(dfw.index.week), list(range(1,53)),ax=ax) #Tweak the display fig.set_size_inches(18.5, 10.5) ax.set_xticklabels(ax.xaxis.get_majorticklabels(), rotation=90);
As to how you read the chart – each line shows the trend over years for a particular week’s figures. The week number is along the x-axis. This chart type is really handy for letting you see a couple of things: year on year trend within a particular week; repeatable periodic trends over the course of the year.
A glance at the chart suggests weeks 24-28 (months 6/7 – so June/July) are the busy times in A&E?
PS the subseries plot uses pandas timeseries periods; see eg Wrangling Time Periods (such as Financial Year Quarters) In Pandas.
PPS Looking at the chart, it seems odd that the numbers always go up in a group. Looking at the code:
def seasonal_plot(grouped_x, xticklabels, ylabel=None, ax=None): """ Consider using one of month_plot or quarter_plot unless you need irregular plotting. Parameters ---------- grouped_x : iterable of DataFrames Should be a GroupBy object (or similar pair of group_names and groups as DataFrames) with a DatetimeIndex or PeriodIndex """ fig, ax = utils.create_mpl_ax(ax) start = 0 ticks = [] for season, df in grouped_x: df = df.copy() # or sort balks for series. may be better way df.sort() nobs = len(df) x_plot = np.arange(start, start + nobs) ticks.append(x_plot.mean()) ax.plot(x_plot, df.values, 'k') ax.hlines(df.values.mean(), x_plot[0], x_plot[-1], colors='k') start += nobs ax.set_xticks(ticks) ax.set_xticklabels(xticklabels) ax.set_ylabel(ylabel) ax.margins(.1, .05) return fig
there’s a
df.sort() in there – which I think should be removed, assuming that the the data presented is pre-sorted in the group?
Using Spreadsheets That Generate Textual Summaries of Data – HSCIC
Having a quick peek at a dataset released today by the HSCIC on Accident and Emergency Attendances in England – 2014-15, I noticed that the frontispiece worksheet allowed you to compare the performance of two trusts with each other as well as against a national average. What particularly caught my eye was that the data for each was presented in textual form:
In particular, a cell formula (rather than a Visual Basic macro, for example) is used to construct a templated sentence based using the selected item as a key on a lookup across tables in the other sheets:
=IF(AND($H$63="*",$H$66="*"),"• Attendances by gender have been suppressed for this provider.",IF($H$63="*","• Males attendance data has been suppressed. Females accounted for "&TEXT(Output!$H$67,"0.0%")&" (or "&TEXT($H$66,"#,##0")&") of all attendances.",IF($H$66="*","• Males accounted for "&TEXT(Output!$H$64,"0.0%")& " (or "&TEXT($H$63,"#,##0")&") of all attendances. Female attendance data has been suppressed.","• Males accounted for "&TEXT(Output!$H$64,"0.0%")& " (or "&TEXT($H$63,"#,##0")&") of all attendances, while "&TEXT(Output!$H$67,"0.0%")&" (or "&TEXT($H$66,"#,##0")&") were female.")))
For each worksheet, it’s easy enough to imagine a textual generator that maps a particular row (that is, the data for a particular NHS trust, for example) to a sentence or two (as per Writing Each Row of a Spreadsheet as a Press Release?).
Having written a simple sentence generator for one row, more complex generators can also be created that compare the values across two rows directly, giving constructions of the form The w in x for y was z, compared to r in p for q, for example.
So I wonder, has HSCIC been doing this for some time, and I just haven’t noticed? How about ONS? And are they also running data powered conversational Slack bots too?
Open Data For Transparency – Contract Compliance?
Doing a tab sweep just now, I came across a post describing a Policy note issued on contracts and compliance with transparency principles by the Crown Commercial Service [Procurement Policy Note – Increasing the Transparency of Contract Information to the Public – Action Note 13/15 31 July 2015 ] that requires “all central government departments including their Executive Agencies and Non-Departmental Public Bodies (NDPBs)” (and recommends that other public bodies follow suit) to publish a “procurement policy note” (PPN) “in advance of a contract award, [describing] the types of information to be disclosed to the public, and then to publish that information in an accessible form” for contracts advertised after September 1st, 2015.
The publishers should “explore [make available, and update as required] a range of types of information for disclosure which might typically include:-
i. contract price and any incentivisation mechanisms
ii. performance metrics and management of them
iii. plans for management of underperformance and its financial impact
iv. governance arrangements including through supply chains where significant contract value rests with subcontractors
v. resource plans
vi. service improvement plans”.
The information so released may perhaps provide a hook around which public spending (transparency) data could be used to cross-check contract delivery. For example, “[w]here financial incentives are being used to drive performance delivery, these may also be disclosed and published once milestones are reached to trigger payments. Therefore, the principles are designed to enable the public to see a current picture of contractual performance and delivery that reflects a recent point in the contract’s life.” Spotting particular payments in transparency spending data as milestone payments could perhaps be used to cross-check back that those milestones were met in an appropriate fashion. Or where dates are associated in a contract with particular payments, this could be used to flag-up when payments might be due to appear in spending data releases, and raise a red flag if they are not?
Finally, the note further describes how:
[i]n-scope organisations should also ensure that the data published is in line with Open Data Principles. This means ensuring the data is accessible (ideally via the internet) at no more than the cost of reproduction, without limitations based on user identity or intent, in a digital, machine-readable format for interoperation with other data and free of restriction on use or redistribution in its licensing conditions
In practical terms, of course, how useful (if at all) any of this might be will in part determined by exactly what information is released, how, and in what form.
It also strikes me that flagging up what is required of a contract when it goes out to tender may still differ from what the final contract actually states (and how is that information made public?) And when contracts are awarded, where’s the data (as data) that clearly states who it was awarded to, in a trackable form, etc etc. (Using company numbers in contract award statements, as well as spending data, and providing contract numbers in spending data, would both help to make that sort of public tracking easier…): ‘The). | https://blog.ouseful.info/tag/opendata/ | CC-MAIN-2018-30 | refinedweb | 1,653 | 53 |
Created on 2010-04-15 05:00 by exarkun, last changed 2012-03-30 23:24 by smarnach. This issue is now closed..
I've started on an implementation of this in /branches/signalfd-issue8407.
Notice that 2.7 has seen its first beta release, so no new features are allowed for it. I think it's better to target this feature for 3.2.
I'm with Martin, better to target 3.2 IMO.
Does signalfd really bring something compared to set_wakeup_fd()?
The one big difference I can see is that set_wakeup_fd() doesn't transmit the signal number, but this could be fixed if desired (instead of sending '\0', send a byte containing the signal number).
> The one big difference I can see is that set_wakeup_fd() doesn't transmit the signal number, but this could be fixed if desired (instead of sending '\0', send a byte containing the signal number).
There's a lot more information than the signal number available as well. The signalfd_siginfo struct has 16 fields in it now and may have more in the future.
Another advantage is that this approach allows all asynchronous preemption to be disabled. This side-steps the possibility of hitting any signal bugs in CPython itself. Of course, any such bugs that are found should be fixed, but fixing them is often quite challenging and involves lots of complex domain-specific knowledge. In comparison, the signalfd and sigprocmask extensions are quite straight-forward and self-contained.
It's also possible to have several signalfds, each with a different signal mask. set_wakeup_fd is limited to a single fd per-process.
sigprocmask has other uses all by itself, of course, like delaying the delivery of signals while some sensitive code runs.
One open question regarding interaction with threading. sigprocmask's behavior in a multithreaded program is unspecified. pthread_sigmask should be used instead. I could either expose both of these and let the caller choose, or I could make signal.sigprocmask use pthread_sigmask if it's available, and fall back to sigprocmask. I don't see any disadvantages of doing the latter, and the former seems like it would create an attractive nuisance. However, I don't have much experience with sigprocmask; I'm only interested in it because of its use in combination with signalfd.
> pthread_sigmask should be used instead. I could either expose both of > these and let the caller choose, or I could make signal.sigprocmask use > pthread_sigmask if it's available, and fall back to sigprocmask.
Or perhaps you could disable the feature if pthread_sigmask isn't available. Apparently, FreeBSD and Mac OS X have it, as well as Linux.
Trying pthread_sigmask first, and falling back, seems like the right strategy to me.
I think this is ready for a first review. See <>. If everyone agrees this is inappropriate for 2.7, then I'll port the changes to 3.x. I don't expect there to be much difference in the 3.x version.
> If everyone agrees this is inappropriate for 2.7
I think the decision is up to Benjamin.
Let's leave it for 3.2.
Any hopes of getting this into Python 3.3?
sigprocmask or (better) pthread_sigmask is required to fix #11859 bug.
---
Python has a test for "broken pthread_sigmask". Extract of configure.in:
AC_MSG_CHECKING(if PTHREAD_SCOPE_SYSTEM is supported)
AC_CACHE_VAL(ac_cv_pthread_system_supported,
[AC_RUN_IFELSE([AC_LANG_SOURCE([[#include <pthread.h>
void *foo(void *parm) {
return NULL;
}
main() {
pthread_attr_t attr;
pthread_t id;
if (pthread_attr_init(&attr)) exit(-1);
if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM)) exit(-1);
if (pthread_create(&id, &attr, foo, NULL)) exit(-1);
exit(0);
}]])],
[ac_cv_pthread_system_supported=yes],
[ac_cv_pthread_system_supported=no],
[ac_cv_pthread_system_supported=no])
])
AC_MSG_RESULT($ac_cv_pthread_system_supported)
if test "$ac_cv_pthread_system_supported" = "yes"; then
AC_DEFINE(PTHREAD_SYSTEM_SCHED_SUPPORTED, 1, [Defined if PTHREAD_SCOPE_SYSTEM supported.])
fi
AC_CHECK_FUNCS(pthread_sigmask,
[case $ac_sys_system in
CYGWIN*)
AC_DEFINE(HAVE_BROKEN_PTHREAD_SIGMASK, 1,
[Define if pthread_sigmask() does not work on your system.])
;;
esac])
Extract of Python/thread_pthread.h:
/* On platforms that don't use standard POSIX threads pthread_sigmask()
* isn't present. DEC threads uses sigprocmask() instead as do most
* other UNIX International compliant systems that don't have the full
* pthread implementation.
*/
#if defined(HAVE_PTHREAD_SIGMASK) && !defined(HAVE_BROKEN_PTHREAD_SIGMASK)
# define SET_THREAD_SIGMASK pthread_sigmask
#else
# define SET_THREAD_SIGMASK sigprocmask
#endif
---
Because today more and more programs rely on threads, it is maybe not a good idea to provide a binding of sigprocmask(). I would prefer to only add pthread_sigmask() which has a determistic behaviour with threads. So only compile signal.pthread_sigmask() if pthread API is present and pthread_sigmask "is not broken".
---
About the patch: the doc should explain that the signal masks are inherited for child processes (fork() and execve()). I don't know if this behaviour is specific to Linux or not.
If we only use pthread_sigmask(), the doc is wrong: "Set the signal mask for the process." It's not for the process but only for the current thread.
How does it work if I change the signal mask of the main thread and then I create a new thread: the signal mask is inherited, or a default mask is used instead?
---
The new faulthandler uses a thread to implement a timeout: the thread uses pthread_sigmask() or sigprocmask() to ignore all signals. If I don't set the signal mask, some tests fail: check that a system call (like reading from a pipe) can be interrupted by signal. The problem is that signal may be send to the faulthandler thread, instead of the main thread. Hum, while I'm writing this, I realize that I should maybe not fallback to sigprocmask() because it may mask signals for the whole process (all threads)!
signal_pthread_sigmask.patch:
- add signal.pthread_sigmask() function with doc and tests
- add SIG_BLOCK, SIG_UNBLOCK, SIG_SETMASK constants
- fix #11859: fix tests of test_io using threads and an alarm: use pthread_sigmask() to ensure that only the main thread receives the SIGALRM signal
The code is based on the last version of python-signalfd:
Changes between python-signalfd and my patch:
- rename "sigprocmask" function to "pthread_sigmask"
- I added an unit test and the doc
- catch PyIter_Next() error
- set signum variable (the result of PyLong_AsLong) type to long (instead of int) and check its value (0 < signum < NSIG)
- I adapted the code to my coding style :-)
I will work on a similar patch for signalfd() after the pthread_sigmask() patch is accepted.
Oh, I forget to read again.
Here is an updated patch reusing some tests and with a better documentation.
New changeset 28b9702a83d1 by Victor Stinner in branch 'default':
Issue #8407, issue #11859: Add signal.pthread_sigmask() function to fetch
signalfd.patch: Add signal.signalfd(), signal.SFD_CLOEXEC and signal.SFD_NONLOCK.
The patch is based on and the last version (bzr) of python-signalfd.
The patch uses also PyModule_AddIntMacro() for the 3 constants added in my last (pthread_sigmask), and changes pthread_sigmask() to raise an OSError instead of a RuntimeError.
Note: python-signalfd has a bug: it doesn't pass the fd argument to signalfd(), it always pass -1.
test_signal.PthreadSigmaskTests fails on Mac OS X. Leopard 3.x/builds/1785/steps/test/logs/stdio Tiger 3.x/builds/1748/steps/test/logs/stdio
---
[186/354] test_signal
make: *** [buildbottest] User defined signal 1
---- Tiger 3.x/builds/2429/steps/test/logs/stdio
---
Re-running test 'test_signal' in verbose mode
...
test_arguments (test.test_signal.PthreadSigmaskTests) ... ok
test_block_unlock (test.test_signal.PthreadSigmaskTests) ... make: *** [buildbottest] User defined signal 1
program finished with exit code 2
---
Update signalfd patch.
> Update signalfd patch.
>
> ----------
> Added file:
- In the tests, you don't need sys.exc_info(), just
"except XXXError as e".
- In the doc, you wrote "file description" instead of "file descriptor"
- In test_out_of_range_signal, you should use assertRaises
Fixed: updated patch (version 3).
> test_signal.PthreadSigmaskTests fails on Mac OS X.
The problem is that sometimes SIG_UNBLOCK does not immediatly call the pending signal, but it calls it "later". The problem is that I don't know exactly when. I tried to wait the pending signal using signal.pause(), but I got a bus error!?
Example of the problem:
pthread_sigmask(SIG_BLOCK, [SIGUSR1])
os.kill(os.getpid(), SIGUSR1)
pthread_sigmask(SIG_UNBLOCK, [SIGUSR1])
New changeset d003ce770ba1 by Victor Stinner in branch 'default':
Issue #8407: Fix pthread_sigmask() tests on Mac OS X
New changeset c9207c6ce24a by Victor Stinner in branch 'default':
Issue #8407: pthread_sigmask() checks immediatly if signal handlers have been
New changeset 96a532eaa2d1 by Victor Stinner in branch 'default':
Issue #8407: disable faulthandler timeout thread on all platforms
>
This is because a thread different than the main thread receives the
signal and calls the signal handler. Antoine found that "python -m
test_pydoc test_signal" is enough to reproduce the problem (on any OS,
or at least on Linux). test_pydoc loads a lot (all?) of Python modules
including _tkinter, and _tkinter (libtcl) creates a C thread which waits
events using select().
I see 3 solutions:
a) Use pthread_kill() to send the signal directly to the right thread
(the main thread)
b) Destroy _tkinter: Tcl_Finalize() exits the thread, but this function
is never called. _tkinter.c contains the following code:
#if 0
/* This was not a good idea; through <Destroy> bindings,
Tcl_Finalize() may invoke Python code but at that point the
interpreter and thread state have already been destroyed! */
Py_AtExit(Tcl_Finalize);
#endif
c) Skip the test if the _tkinter thread is present. Check if the
_tkinter module is loaded *should* be enough to check if the Tcl thread
is running. Unload the _tkinter module is not possible: modules written
in C cannot be unloaded. But it is possible to remove all references
from the Python object space, so check >'_tkinter' in sys.modules< is
maybe not reliable.
I don't know if some platforms have pthread_sigmask() but not
pthread_kill().
I have a patch to expose pthread_kill(), sigpending() and sigwait(). I
will publish it tomorrow for a review.
--
test_signal doesn't fail on all platforms. Possible reasons:
- the platform doesn't have pthread_sigmask(), and so the test is
skipped
- the libtcl version is different, a newer version masks maybe signals?
- (unlikely!) os.kill() always sends the signal to the main thread
> c) Skip the test if the _tkinter thread is present (...)
I opened issue #11998 for the problem with test_signal and the _tkinter module. To get back green buildbots, I commited a workaround:
New changeset 88dca05ed468 by Victor Stinner in branch 'default':
Issue #11998, issue #8407: workaround _tkinter issue in test_signal
> a) Use pthread_kill() to send the signal directly
> to the right thread (...)
I'm still working on this solution to test blocked signals even if _tkinter is loaded.
New changeset a5890ff5e3d5 by Victor Stinner in branch 'default':
Issue #8407: signal.pthread_sigmask() returns a set instead of a list
pending_signals.patch: add pthread_kill(), sigpending() and sigwait() functions with doc and tests.
I added many "See also" in the doc, e.g. os.kill() gives a link to signal.pthread_kill().
Note: the patch renames also BasicSignalTests to PosixTests, it's not related to the other changes.
Quick review at
>
Updated patch (version 2).
Note: sigpending() doesn't return an error code but -1 on error.
Oops.
Victor, my mouse got stuck and I mistakenly removed your pending_signals-2 patch. I'm really sorry about this, could you re-post it?
To try to make up for this, a small comment:
In signal_sigwait, at the end of the function, you do this:
/* call the signal handler (if any) */
if (PyErr_CheckSignals())
return NULL;
I'm not sure I get this: sigwait is used to handle signals synchronously, and in the POSIX semantic, it's incompatible with signal handlers:
""".
"""
and
"""
The effect of sigwait() on the signal actions for the signals in set is unspecified.
"""
So, if anything, you shouldn't check for a pending signal.
> I mistakenly removed your pending_signals-2 patch
> I'm really sorry about this, could you re-post it?
No problem, anyway I worked on a new version in the train.
> So, if anything, you shouldn't check for a pending signal [in sigwait]
Right, fixed in the new patch.
--?)
Note: we might expose pth_raise() which is similar to pthread_kill(), but... pth support was deprecated by the PEP 11 and pth support will be removed from Python 3.3 source code.
About signalfd(): this patch doesn't provide any information or tool to decode data written to the file descriptor. We should expose the signalfd_siginfo structure or you cannot handle more than one signal (how do you know which signal numbers were raised?). Example with ctypes:
class signalfd_siginfo(Structure):
_fields_ = (
('ssi_signo', c_uint32), # Signal number
('ssi_errno', c_int32), # Error number (unused)
('ssi_code', c_int32), # Signal code
('ssi_pid', c_uint32), # PID of sender
('ssi_uid', c_uint32), # Real UID of sender
('ssi_fd', c_int32), # File descriptor (SIGIO)
('ssi_tid', c_uint32), # Kernel timer ID (POSIX timers)
('ssi_band', c_uint32), # Band event (SIGIO)
('ssi_overrun', c_uint32), # POSIX timer overrun count
('ssi_trapno', c_uint32), # Trap number that caused signal
('ssi_status', c_int32), # Exit status or signal (SIGCHLD)
('ssi_int', c_int32), # Integer sent by sigqueue(2)
('ssi_ptr', c_uint64), # Pointer sent by sigqueue(2)
('ssi_utime', c_uint64), # User CPU time consumed (SIGCHLD)
('ssi_stime', c_uint64), # System CPU time consumed (SIGCHLD)
('ssi_addr', c_uint64), # Address that generated signal
# (for hardware-generated signals)
('_padding', c_char * 46), # Pad size to 128 bytes (allow for
# additional fields in the future)
)
wakeup_signum.patch: simple patch to write the signal number (as a single byte) instead of just b'\x00' into the wake up file descriptor. It gives the ability to watch more than one signal and be able to know which one was raised. Included tests demonstrate the feature. The doc explains how to decode data written to the file descriptor.
pending_signals-3.patch: doc nit, the link to Thread.ident doesn't work. The doc should be replaced by something like:
*thread_id* can be read from the :attr:`~threading.Thread.ident` attribute
of a :class:`threading.Thread` object. For example,
``threading.current_thread().ident`` gives the identifier of the current
thread.
... but ident or Thread are not link, I don't know why.
The threading has a function to get directly the identifier of the current thread: threading._get_ident() instead of threading.current_thread().ident. I think that threading._get_ident() is more reliable to threading.current_thread().ident because Thread.ident can be None in some cases. I created the issue #12028 to decide what to do with this function.
>?)
It looks good to me.
It's very nice to have all these extra functions :)
New changeset 1d8a57deddc4 by Victor Stinner in branch 'default':
Issue #8407: Add pthread_kill(), sigpending() and sigwait() functions to the
New changeset f8c49a930015 by Victor Stinner in branch 'default':
Issue #8407: The signal handler writes the signal number as a single byte
New changeset e3cb2c99a5a9 by Victor Stinner in branch 'default':
Issue #8407: Remove debug code from test_signal
Update the signalfd patch (version 4) against the default branch. Specify the minimum Linux version in signalfd() doc. The patch still lacks a structure to parse the bytes written into the file (see msg135438 for a ctypes example): a struct sequence should be fine.
> New changeset f8c49a930015 by Victor Stinner in branch 'default':
> Issue #8407: The signal handler writes the signal number as a single byte
Wakeup test using two pending signals fails on FreeBSD 6.4 buildbot:
======================================================================
FAIL: test_signum (test.test_signal.WakeupSignalTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/test_signal.py", line 266, in test_signum
self.check_signum(signal.SIGUSR1, signal.SIGALRM)
File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/test_signal.py", line 232, in check_signum
self.assertSequenceEqual(raised, signals)
AssertionError: Sequences differ: (14,) != (30, 14)
Wakeup file only contains SIGALRM, not (SIGUSR1, SIGALRM), whereas SIGUSR1 is raised before SIGALRM.
Code of the test:
def check_signum(self, *signals):
data = os.read(self.read, len(signals)+1)
raised = struct.unpack('%uB' % len(data), data)
self.assertSequenceEqual(raised, signals)
def test_signum(self):
old_handler = signal.signal(signal.SIGUSR1, lambda x,y:None)
self.addCleanup(signal.signal, signal.SIGUSR1, old_handler)
os.kill(os.getpid(), signal.SIGUSR1)
os.kill(os.getpid(), signal.SIGALRM)
self.check_signum(signal.SIGUSR1, signal.SIGALRM)
There are warnings on the FreeBSD and OSX buildbots, where pthread_t
is not a long.
New changeset 2e0d3092249b by Victor Stinner in branch 'default':
Issue #8407: Use an explicit cast for FreeBSD
> New changeset f8c49a930015 by Victor Stinner in branch 'default':
> Issue #8407: The signal handler writes the signal number as a single byte
>
There's a race.
If a signal is received while is_tripped is set, the signal number won't be written to the wakeup FD.
Patch attached.
New changeset 234021dcad93 by Victor Stinner in branch 'default':
Issue #8407: Fix the signal handler of the signal module: if it is called
> There's a race. If a signal is received while is_tripped is set,
> the signal number won't be written to the wakeup FD.
Oh, nice catch. The "bug" is not new, Python behaves like that since Python 3.1. But in Python < 3.3, it doesn't mater because I don't think that wakeup was used to watch more than one signal. One trigger "something happened" was enough.
The wakeup fd now contains the number of each signal, and so the behaviour has to change. I applied your patch and I added a test.
Oooooh, sigwait() doesn't accept a timeout! I would be nice to have also sigwaitinfo().. and maybe also its friend, sigwaitinfo() (if we implement the former, it's trivial to implement the latter). Python 3.3 adds optional timeout to subprocess.wait(), subprocess.communicate(), threading.Lock.acquire(), etc. And I love timeout! It was really useful to implement the thread of faulthandler.dump_tracebacks_later().
> The wakeup fd now contains the number of each signal, and so the behaviour has
> to change. I applied your patch and I added a test.
Interesting. I suspected this would have an impact on the test_signal
failure on the FreeBSD 6.4 buidbot:
"""
======================================================================
FAIL: test_signum (test.test_signal.WakeupSignalTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/test_signal.py",
line 272, in test_signum
self.check_signum(signal.SIGUSR1, signal.SIGALRM)
File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/test_signal.py",
line 238, in check_signum
self.assertEqual(raised, signals)
AssertionError: Tuples differ: (14, 30) != (30, 14)
First differing element 0:
14
30
- (14, 30)
+ (30, 14)
"""
This means that the signals are not delivered in order.
Normally, pending signals are checked upon return to user-space, so
trip_signal should be called when the kill syscall returns, so signal
numbers should be written in order to the wakeup FD (and here it looks
like the lowest-numbered signal is delivered first).
You could try adding a short sleep before the second kill (or just
pass unordered=True to check_signum, but in that case we don't check
the correct ordering).
New changeset 29e08a98281d by Victor Stinner in branch 'default':
Issue #8407: test_signal doesn't check signal delivery order
Cool, "x86 FreeBSD 6.4 3.x" is green for the first time since a long time, thanks to my commit 29e08a98281d (test_signal doesn't check signal delivery order).
> Oooooh, sigwait() doesn't accept a timeout! I would be nice to have
> also sigwaitinfo().. and maybe also its friend, sigwaitinfo()
Oops, I mean sigtimedwait() and sigwaitinfo().
I just noticed something "funny": signal_sigwait doesn't release the
GIL, which means that it's pretty much useless :-)
Patch attached.
Also, test_sigwait doesn't block the signal before calling sigwait: it
happens to work because there's only one thread, but it's undefined
behaviour.
This whole thread is becoming quite confusing.
It would be better to open a separate issue for any bug or feature request which is not related to "exposing signalfd(2) and pthread_sigmask".
> This whole thread is becoming quite confusing.
You are complelty right, sorry to pollute this issue. Changes related to this issue:
- expose pthread_sigmask(), pthread_kill(), sigpending(), sigwait()
- wakeup fd now contains the file descriptor
I created issue #12303 for sigwaitinfo() and sigtimedwait(), and issue #12304 for signalfd.
New changeset a5c8b6ebe895 by Victor Stinner in branch 'default':
Issue #8407: signal.sigwait() releases the GIL
>.
Releasing the GIL is a new feature. Because it is cheap and pause() does also release the GIL, I commited your patch.
>.
On Linux, it works well with more than one thread. I added a test using a thread, we will see if it works on buildbots. The signal is raised by a thread (whereas the signal is not blocked in any thread).
I wrote a can_test_blocked_signals() function in test_signal which has hardcoded tests to check for some known C threads: the faulthandler timeout thread and for the Tkinter event loop thread. can_test_blocked_signals() returns True if pthread_kill() is available.
I don't know how it works if a thread uses pthread_kill() to raise a signal into the main thread (waiting in sigwait()), whereas the signal is not blocked in any thread.
If it is not possible to get the list of all C/Python threads and/or block a signal in all threads, we can use a subprocess without threads (with only the main thread).
Would you like to work on a patch to avoid the undefined behaviour?
> On Linux, it works well with more than one thread.
> I added a test using a thread, we will see if it works
> on buildbots.
The test hangs on FreeBSD 8.2:
[235/356] test_signal
Timeout (1:00:00)!
Thread 0x0000000800e041c0:
File "/usr/home/buildbot/buildarea/3.x.krah-freebsd/build/Lib/test/test_signal.py", line 625 in test_sigwait_thread
File "/usr/home/buildbot/buildarea/3.x.krah-freebsd/build/Lib/unittest/case.py", line 407 in _executeTestPart
File "/usr/home/buildbot/buildarea/3.x.krah-freebsd/build/Lib/unittest/case.py", line 462 in run
File "/usr/home/buildbot/buildarea/3.x.krah-freebsd/build/Lib/unittest/case.py", line 514/test/support.py", line 1166 in run
File "/usr/home/buildbot/buildarea/3.x.krah-freebsd/build/Lib/test/support.py", line 1254 in _run_suite
File "/usr/home/buildbot/buildarea/3.x.krah-freebsd/build/Lib/test/support.py", line 1280 in run_unittest
File "/usr/home/buildbot/buildarea/3.x.krah-freebsd/build/Lib/test/test_signal.py", line 687 in test_main
File "./Lib/test/regrtest.py", line 1043 in runtest_inner
File "./Lib/test/regrtest.py", line 841 in runtest
File "./Lib/test/regrtest.py", line 668 in main
File "./Lib/test/regrtest.py", line 1618 in <module>
*** Error code 1
(The test has not run yet on FreeBSD 6.4 buildbot)
> The test hangs on FreeBSD 8.2: (...)
See also #12310 which may be related.
>>.
>
If it doesn't release the GIL, it is useless.
The common usage pattern for sigwait() is to create a dedicated thread
for signals management. If this thread calls sigwait without releasing
the GIL, the whole process is suspended, which is definitely not what
you want.
>>.
>
It's mere luck:
def test_sigwait(self):
old_handler = signal.signal(signal.SIGALRM, self.handler)
self.addCleanup(signal.signal, signal.SIGALRM, old_handler)
signal.alarm(1)
self.assertEqual(signal.sigwait([signal.SIGALRM]), signal.SIGALRM)
Comment out the first two lines that change the signal handler, and
the test will be killed by SIGALRM's default handler ("Alarm clock").
I tested on Linux, and if a the signal isn't blocked before calling sigwait:
- if a custom signal handler is installed, then it is not called
- if the default signal handler is in place, then it's called (and
with SIGALRM the process gets killed)
This is a typical example of what POSIX calls "undefined behavior".
If pthread_sigmask is called before sigwait, then it works as expected.
If we really wanted to test this properly, the way to go would be to
fork() a child process (that way we're sure there's only one thread),
have it block and sigwait() SIGUSR1 without touching the handler, send
it SIGUSR1, and check its return code.
Patch attached.
New changeset a17710e27ea2 by Victor Stinner in branch 'default':
Issue #8407: Make signal.sigwait() tests more reliable
neologix> Patch attached.
I wrote a different patch based on your patch. The main change is that I chose to keep signal.alarm(1) instead of sending the signal from the parent process to the child process, because I don't think that time.sleep(0.5) is a reliable synchronization code to wait until the child process is waiting in sigwait().
test_sigwait_thread() uses time.sleep(1) to wait until the main thread is waiting in sigwait(), but it is not mandatory because the signal is already blocked in the thread. I wrote test_sigwait_thread() to ensure that sigwait() releases the GIL, not to check that if a thread raises a signal while sigwait() is waiting, sigwait() does correctly receive it.
We may use time.sleep() in test_sigwait() if the signal is blocked in the parent process before the fork() (and unblocked in the parent process after the fork, but before sending the signal to the child process), but I think that signal.alarm() is more reliable and simple.
--
test_sigwait(_thread) was the last known bug of this issue, so let's close it. Reopen it if you see other bugs in pthread_sigmask(), pthread_kill(), sigpending() and sigwait() functions, or maybe better: open new issues because this issue has a too long history! See issue #12303 for sigwaitinfo() and sigtimedwait(), and issue #12304 for signalfd.
New changeset 37a87b709403 by Victor Stinner in branch 'default':
Issue #8407: write error message on sigwait test failure
New changeset 60b1ab4d0cd4 by Victor Stinner in branch 'default':
Issue #8407: skip sigwait() tests if pthread_sigmask() is missing
The documentation has been left in a confusing state by this patch -- at least confusing to me. I've created issue14456 with further details. | http://bugs.python.org/issue8407 | CC-MAIN-2016-40 | refinedweb | 4,236 | 57.06 |
release50
DragonFly BSD 5.0
- Version 5.0.0 released 16 October 2017
- Version 5.0.1 released 06 November 2017
- Version 5.0.2 released 04 December 2017
DragonFly version 5.0 brings the first bootable release of HAMMER2, DragonFly's next generation file system.
The details of all commits between the 4.8 and 5.0 branches are available in the associated commit messages for 5.0.0rc1, 5.0.0rc2, 5.0.0, 5.0.1, and 5.0.2.
Big-ticket items '-o PasswordAuthentication=yes' or change /etc/ssh/ssh_config if you need the old behavior. Public key users are unaffected.
Details
Checksums
MD5 (dfly-x86_64-5.0.0_REL.img) = 0b37697389e4dc7380ad4dee1cadf9b0 MD5 (dfly-x86_64-5.0.0_REL.iso) = 599d5e151c0315c1112f7585a8265faf MD5 (dfly-x86_64-5.0.0_REL.img.bz2) = b40b76dbdd88cafb8db85bc74a7e438f MD5 (dfly-x86_64-5.0.0_REL.iso.bz2) = 22ecc945e0aacd1bb2ea2318b428a9b0
Upgrading
If you have an existing 4.8
All changes since DragonFly 4.8
Kernel
- if_sl, if_ppp, and if_faith are now built as modules and can be removed from kernel configs.
- NX (no-execute) pmap support has been added.
- Use 64-bit serials for poll/select's kevent.udata. This fixes an issue where serial cycling could cause spurious events to be reported.
- Fix several issues for encrypted installations.
- Fix a blocked-lock issue in procfs
- Fix a serious permissions bug for sticky directories
- Fix event preset bug
- Fix an ACPI initialization ordering issue
- Fix a CAM shutdown ordering issue
- NX support added to kernel, but does not work with some interpreted or JIT languages so disabled by default.
- Fix a crypto subsystem stall.
- Ryzen CPUs can lockup when the instruction pre-fetcher (which can be speculative) transitions from a canonical to a non-canonical address. This can happen if the top of the user stack is mapped. Unmap the top of the user stack, and the top of the user stack is no longer considered to be part of userspace.
- Longer stir in arc4random(), make arc4random per-cpu to reduce contention.
- Fix a zget() panic which can occur during heavy paging
- Fix clustering inefficiencies
- tmpfs and vn can't handle certain swapoff situations. Be sure to fail a swapoff attempt under such conditions so not corruption occurs.
- Fix a gcc code reordering problem related to td_critcount operations. This fixes a lockmgr() race.
- Significantly reduce tsleep()/wakeup() queue collisions.
- Do many more NUMA-localized allocations for per-cpu structures.
- Add better AMD topology detection.
- Restrict kill(-1, ...) to the current reaper group. This fixes issues during bulk builds via synth where third-party programs erroneously signal process -1 after a fork() failure.
- Fix broken cpu rotator in lwkt_alloc_thread().
- Fix a rare allproc scan vs p_ucred race
- Fix unnecessary ucred duplication which led to potentially as many ucred allocations as vnodes.
- Fix a memory ordering race in the shared lock mutex code.
- Fix an ordering issue with coincident systimer interrupts. This improves user/sys/idle percentage reporting.
- Change our MBR partition type from 0xA5 (which we shared with FreeBSD) to 0x6C
- Fix a callout_stop()/callout_reset() rearming race
- Improve flushing during low-memory situations
- Add an emergency pager. The normal pager can pageout vnode-backed pages, but the complexity of the filesystem VFS can cause low-memory deadlocks during such flushes. The emergency pager only pages out anonymous memory and can recover these situations.
- Fix the panic() code for AMD cpus that assumed mwait hinting support when there might not be any.
- Improve TSC handling.
- Fix a SMP tsleep_interlock() vs wakeup() race.
- Validate the kernel up to 1 million processes (since PIDs are restricted to 6 digits, this is the max). Fix numerous issues that crop up under high-process-count conditions. Yes, it actually does work.
- Increase the default posix-lock limit.
- Remove a performance bottleneck related to large numbers of pipe() close() operations.
- Scale tsleep() performance to hundreds of thousands of processes.
- Refactor the maxproc calculation, allowing maxproc to be higher without improperly scaling maxvnodes and other resources to insane levels.
- Refactor the load calculation code to not stall cpu 0 when a large number of processes are present (aka a million procsses).
- Fix excessive call stack depth for stuck interrupts.
- Refactor IPI vector assignments and reformulate INVLTLB IPIs
Networking
- Direct input support for polling, on by default for ix(4).
- Allow up to 64 TX and RX rings for X550 chipsets
- Do not pad if_re chips which do not require explicit padding. This fixes UDP checksum generation on these chipsets.
- Limit the number of accepted sockets that kevent() reports. Defaults to 32. Does not effect accept() calls. This deconfuses some third party applications.
- Bring in vmx (VMWare virtual network driver, aka vmxnet3).
- Add Kabylake support (add Kabylake PCI IDs)
- Improve syncache performance.
- Add an interface network filter to IPFW.
- Add an ipfrag filter to IPFW.
- Rework IPFW's states and tracks.
- Reduce unnecessary IPIs by using sendmsg_oncpu() when possible.
- Improve ipflow code.
- Improve polling code.
- Fix issue with accelerated IPv4/IPv6 fragment draining.
- Randomize the local port.
- MSI enabled by default for if_em devices which support it.
Other drivers
- virtio_scsi(4) added
- ig4(4) devices are now recognized.
- internal updates to the isp(4) SCSI adapter driver.
- ADMA2 mode is now supported for SD card data transfer.
- UHS1 SD card disk format is now supported.
- Properly delete /dev/dsp and /dev/mixer on sound module unload.
- NVMe now handles devices without MSI-X support (aka virtualized NVMe).
- vtnet and virtio_blk improvements.
Userland
- HAMMER1 now is at an internal version of 7, reflecting a new, faster checksum operation.
- kcollect(8) has been added for automatic data gathering on a running DragonFly system.
- sshlockout(8) will now lock out based on number of attempts.
- Fix a static buffer overflow in mfiutil
- Fix a graphics compatibility enable test in 'window'
- Fix several sscanf bugs in userland
- usched now allows a process to change its own cpu affinity
- Fix a bug in ceill()
- Fix a seg-fault on crypt failure.
- Many namespace cleanups to improve dports compatibility.
- OpenSSH updated to 7.6p1
Various tools have been upgraded in the base system:
Hammer Changes
- Improve concurrent dedup stability under heavy concurrent loads.
- HAMMER to version 7. This version changes the CRC mechanic from an older slower CRC API to the ISCSI CRC code, which is 6x faster. Improves HAMMER performance. HAMMER supports both old and new CRC methods and is backwards compatible, but only files created after this change will use the new mechanism.. | https://www.dragonflybsd.org/release50/ | CC-MAIN-2018-17 | refinedweb | 1,071 | 60.82 |
Generate WebAPI resource paths with backtracking
Review Request #10117 — Created Aug. 13, 2018 and submitted.
Checks run (1 failed, 1 succeeded)
flake8
Typo in the description: "fakes resource paths" -> "fake resource paths"
Also in the second paragraph: "the in process DVCs work" "the in-progress DVCS work"
Needs to be the full namespace.
Blank line between these.
Why are we looping and then returning in the first iteration?
The wording seems pretty off: "Could not path for ...". I think this is missing a word.
Can you add an assertion error message explaining what the comment explained? That way if it's actually hit, we know why instead of having to dig into code.
The
not (not ..)is kinda weird. Can we rewrite to inverse the clauses, so we have:
if (not resource.singleton and ...): q = resource.get_queryset(...) ... else: try: yield ... except ...
Mind adding a comment saying why we might be hitting this from the yield? It'll help with making this more maintainable (since it is a more complicated set of logic).
Same comment as above. | https://reviews.reviewboard.org/r/10117/ | CC-MAIN-2019-09 | refinedweb | 176 | 77.53 |
Useful helper functions for amending the defaultConfig, and for
parsing keybindings specified in a special (emacs-like) format.
(See also XMonad.Util.CustomKeys in xmonad-contrib.)
To use this module, first import it into your ~/.xmonad/xmonad.hs:
import XMonad.Util.EZConfig
Then, use one of the provided functions to modify your
configuration. You can use additionalKeys, removeKeys,
additionalMouseBindings, and removeMouseBindings to easily add
and remove keybindings or mouse bindings. You can use mkKeymap
to create a keymap using emacs-style keybinding specifications
like "M-x" instead of (modMask, xK_x), or additionalKeysP
and removeKeysP to easily add or remove emacs-style keybindings.
If you use emacs-style keybindings, the checkKeymap function is
provided, suitable for adding to your startupHook, which can warn
you of any parse errors or duplicate bindings in your keymap.
For more information and usage eamples, see the documentation
provided with each exported function, and check the xmonad config
archive ()
for some real examples of use.
Add or override keybindings from the existing set. Example use:
main = xmonad $ defaultConfig { terminal = "urxvt" }
`additionalKeys`
[ ((mod1Mask, xK_m ), spawn "echo 'Hi, mom!' | dzen2 -p 4")
, ((mod1Mask, xK_BackSpace), withFocused hide) -- N.B. this is an absurd thing to do
]
This overrides the previous definition of mod-m.
Note that, unlike in xmonad 0.4 and previous, you can't use modMask to refer
to the modMask you configured earlier. You must specify mod1Mask (or
whichever), or add your own myModMask = mod1Mask line.
Like additionalKeys, except using short String key
descriptors like "M-m" instead of (modMask, xK_m), as
described in the documentation for mkKeymap. For example:
main = xmonad $ defaultConfig { terminal = "urxvt" }
`additionalKeysP`
[ ("M-m", spawn "echo 'Hi, mom!' | dzen2 -p 4")
, ("M-<Backspace>", withFocused hide) -- N.B. this is an absurd thing to do
]
Remove standard keybindings you're not using. Example use:
main = xmonad $ defaultConfig { terminal = "urxvt" }
`removeKeys` [(mod1Mask .|. shiftMask, n) | n <- [xK_1 .. xK_9]]
Like removeKeys, except using short String key descriptors
like "M-m" instead of (modMask, xK_m), as described in the
documentation for mkKeymap. For example:
main = xmonad $ defaultConfig { terminal = "urxvt" }
`removeKeysP` ["M-S-" ++ [n] | n <- ['1'..'9']]
Given a config (used to determine the proper modifier key to use)
and a list of (String, X ()) pairs, create a key map by parsing
the key sequence descriptions contained in the Strings. The key
sequence descriptions are "emacs-style": M-, C-, S-, and
M#- denote mod, control, shift, and mod1-mod5 (where # is
replaced by the appropriate number) respectively; some special
keys can be specified by enclosing their name in angle brackets.
For example, "M-C-x" denotes mod+ctrl+x; "S-<Escape>" denotes
shift-escape.
Sequences of keys can also be specified by separating the key
descriptions with spaces. For example, "M-x y <Down>" denotes the
sequence of keys mod+x, y, down. Submaps (see
XMonad.Actions.Submap) will be automatically generated to
correctly handle these cases.
So, for example, a complete key map might be specified as
keys = \c -> mkKeymap c $
[ ("M-S-<Return>", spawn $ terminal c)
, ("M-x w", spawn "xmessage 'woohoo!'") -- type mod+x then w to pop up 'woohoo!'
, ("M-x y", spawn "xmessage 'yay!'") -- type mod+x then y to pop up 'yay!'
, ("M-S-c", kill)
]
Alternatively, you can use additionalKeysP to automatically
create a keymap and add it to your config.
Here is a complete list of supported special keys. Note that a few
keys, such as the arrow keys, have synonyms:
<Backspace>
<Tab>
<Return>
<Pause>
<Scroll_lock>
<Sys_Req>
<Escape>, <Esc>
<Delete>
<Left>, <L>
<Up>, <U>
<Right>, <R>
<Down>, <D>
<Page_Up>
<Page_Down>
<End>
<Insert>
<Break>
<Space>
<F1>-<F12>
Given a configuration record and a list of (key sequence
description, action) pairs, check the key sequence descriptions
for validity, and warn the user (via a popup xmessage window) of
any unparseable or duplicate key sequences. This function is
appropriate for adding to your startupHook, and you are highly
encouraged to do so; otherwise, duplicate or unparseable
keybindings will be silently ignored.
For example, you might do something like this:
main = xmonad $ myConfig
myKeymap = [("S-M-c", kill), ...]
myConfig = defaultConfig {
...
keys = \c -> mkKeymap c myKeymap
startupHook = return () >> checkKeymap myConfig myKeymap
...
}
NOTE: the return () in the example above is very important!
Otherwise, you might run into problems with infinite mutual
recursion: the definition of myConfig depends on the definition of
startupHook, which depends on the definition of myConfig, ... and
so on. Actually, it's likely that the above example in particular
would be OK without the return (), but making myKeymap take
myConfig as a parameter would definitely lead to
problems. Believe me. It, uh, happened to my friend. In... a
dream. Yeah. In any event, the return () >> introduces enough
laziness to break the deadlock. | http://hackage.haskell.org/package/xmonad-contrib-0.7/docs/XMonad-Util-EZConfig.html | CC-MAIN-2014-42 | refinedweb | 779 | 56.05 |
- Introduction to Vuex
- Why should you use Vuex
- Let’s start
- Create the Vuex store
- An use case for the store
- Introducing the new components we need
- Adding those components to the app
- Add the state to the store
- Add a mutation
- Add a getter to reference a state property
- Adding the Vuex store to the app
- Update the state on a user action using commit
- Use the getter to print the state value
- Wrapping up
Introduction to Vuex
Vuex is the official state management library for Vue.js.
Its job is to share data across the components of your application.
Components in Vue.js out of the box can communicate using
- props, to pass state down to child components from a parent
- events, to change the state of a parent component from a child, or using the root component as an event bus
Sometimes things get more complex than what these simple options allow.
In this case, a good option is to centralize the state in a single store. This is what Vuex does.
Why should you use Vuex
Vuex is not the only state management option you can use in Vue (you can use Redux too), but its main advantage is that it’s official, and its integration with Vue.js is what makes it shine.
With React you have the trouble of having to choose one of the many libraries available, as the ecosystem is huge and has no de-facto standard. Lately Redux was the most popular choice, with MobX following up in terms of popularity. With Vue I’d go as far as to say that you won’t need to look around for anything other than Vuex, especially when starting out.
Vuex borrowed many of its ideas from the React ecosystem, as this is the Flux pattern popularized by Redux.
If you know Flux or Redux already, Vuex will be very familiar. If you don’t, no problem - I’ll explain every concept from the ground up.
Components in a Vue application can have their own state. For example, an input box will store the data entered into it locally. This is perfectly fine, and components can have local state even when using Vuex.
You know that you need something like Vuex when you start doing a lot of work to pass a piece of state around.
In this case Vuex provides a central repository store for the state, and you mutate the state by asking the store to do that.
Every component that depends on a particular piece of the state will access it using a getter on the store, which makes sure it’s updated as soon as that thing changes.
Using Vuex will introduce some complexity into the application, as things need to be set up in a certain way to work correctly, but if this helps solve the unorganized props passing and event system that might grow into a spaghetti mess if too complicated, then it’s a good choice.
Let’s start
In this example I’m starting from a Vue CLI application. Vuex can be used also by directly loading it into a script tag, but since Vuex is more in tune with bigger applications, it’s much more likely you will use it on a more structured application, like the ones you can start up quickly with the Vue CLI.
The examples I use will be put CodeSandbox, which is a great service that has a Vue CLI sample ready to go at. I recommend using it to play around.
Once you’re there, click the Add dependency button, enter “vuex” and click it.
Now Vuex will be listed in the project dependencies.
To install Vuex locally you can simply run
npm install vuex or
yarn add vuex inside the project folder.
Create the Vuex store
Now we are ready to create our Vuex store.
This file can be put anywhere. It’s generally suggested to put it in the
src/store/store.js file, so we’ll do that.
In this file we initialize Vuex and we tell Vue to use it:
import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) export const store = new Vuex.Store({})
We export a Vuex store object, which we create using the
Vuex.Store() API.
An use case for the store
Now that we have a skeleton in place, let’s come up with an idea for a good use case for Vuex, so I can introduce its concepts.
For example, I have 2 sibling components, one with an input field, and one that prints that input field content.
When the input field is changed, I want to also change the content in that second component. Very simple but this will do the job for us.
Introducing the new components we need
I delete the HelloWorld component and add a Form component, and a Display component.
<template> <div> <label for="flavor">Favorite ice cream flavor?</label> <input name="flavor"> </div> </template>
<template> <div> <p>You chose ???</p> </div> </template>
Adding those components to the app
We add them to the App.vue code instead of the HelloWorld component:
<template> <div id="app"> <Form/> <Display/> </div> </template> <script> import Form from './components/Form' import Display from './components/Display' export default { name: 'App', components: { Form, Display } } </script>
Add the state to the store
So with this in place, we go back to the store.js file and we add a property to the store called
state, which is an object, that contains the
flavor property. That’s an empty string initially.
import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) export const store = new Vuex.Store({ state: { flavor: '' } })
We’ll update it when the user types into the input field.
Add a mutation
The state cannot be manipulated except by using mutations. We set up one mutation which will be used inside the Form component to notify the store that the state should change.
import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) export const store = new Vuex.Store({ state: { flavor: '' }, mutations: { change(state, flavor) { state.flavor = flavor } } })
Add a getter to reference a state property
With that set, we need to add a way to look at the state. We do so using getters. We set up a getter for the
flavor property:
import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) export const store = new Vuex.Store({ state: { flavor: '' }, mutations: { change(state, flavor) { state.flavor = flavor } }, getters: { flavor: state => state.flavor } })
Notice how
getters is an object.
flavor is a property of this object, which accepts the state as the parameter, and returns the
flavor property of the state.
Adding the Vuex store to the app
Now the store is ready to be used. We go back to our application code, and in the main.js file, we need to import the state and make it available in our Vue app.
We add
import { store } from './store/store'
and we add it to the Vue application:
new Vue({ el: '#app', store, components: { App }, template: '<App/>' })
Once we add this, since this is the main Vue component, the
store variable inside every Vue component will point to the Vuex store.
Update the state on a user action using commit
Let’s update the state when the user types something.
We do so by using the
store.commit() API.
But first, let’s create a method that is invoked when the input content changes. We use
@input rather than
@change because the latter is only triggered when the focus is moved away from the input box, while
@input is called on every keypress.
<template> <div> <label for="flavor">Favorite ice cream flavor?</label> <input @ </div> </template> <script> export default { methods: { changed: function(event) { alert(event.target.value) } } } </script>
Now that we have the value of the flavor, we use the Vuex API:
<script> export default { methods: { changed: function(event) { this.$store.commit('change', event.target.value) } } } </script>
see how we reference the store using
this.$store? This is thanks to the inclusion of the store object in the main Vue component initialization.
The
commit() method accepts a mutation name (we used
change in the Vuex store) and a payload, which will be passed to the mutation as the second parameter of its callback function.
Use the getter to print the state value
Now we need to reference the getter of this value in the Display template, by using
$store.getters.flavor.
this can be removed because we’re in the template, and
this is implicit.
<template> <div> <p>You chose {{ $store.getters.flavor }}</p> </div> </template>
Wrapping up
That’s it for an introduction to Vuex!
The full, working source code is available at
There are still many concepts missing in this puzzle:
- actions
- modules
- helpers
- plugins
but you have the basics to go and read about them in the official docs.
Happy coding! | https://flaviocopes.com/vuex/ | CC-MAIN-2018-47 | refinedweb | 1,490 | 71.34 |
.
This is amazing news for the C++ community. Congratulations and keep the good stuff coming!
Love it
cool! will play with this :-)
but please, void main?? my eyes are hurting...
Why, with such generic headers as uri.h and file_stream.h, do you not namespace them, e.g.
#include <casablanca/uri.h>
etc.?
Using lambdas, this definitely looks awesome!
@TommyS: definitely int main -- thanks for noticing this! Fixed.
This is wonderful!
Next stop, submit for consideration to the Standard! Great news and kudos to all involved.
I expected someone up-to-date with C++11 to at least know that main must return int, not void.
This is very cool. Any thoughts on Mac support?
Congratulations to the team - this has been a long time coming and a lot of work has gone into this....
Fantastic :)
@Essentia, but human---eror can made by Manish Maheshwari. what do you think of yourselfff ????
Any idea what the dependencies are preventing this working with XP and Vista? Would love to use in my app which needs to support those platforms. Thank you.
@turquoiseowl: We did not test Casablanca on Vista extensively (unlike Windows 7 and Windows 8), so we cannot claim that it is supported at this point. We're not aware of any technical reasons why it would not work though.
We're using the new Windows thread pool, which was introduced in Vista, which is why it won't work before we rewrite that code to use the "legacy" thread pool.
We're keenly aware of the importance of the XP and Vista support, and will enable it as soon as we can.
Not useful to my company until it runs on OSX. | http://blogs.msdn.com/b/vcblog/archive/2013/02/26/the-c-rest-sdk-quot-casablanca-quot.aspx?PageIndex=1 | CC-MAIN-2014-41 | refinedweb | 282 | 77.94 |
D - Module-level accessibility
- Sergey <smsmfk gmail.com> Oct 02 2010
Hi, I'm a newcomer to D. Several days ago I've stumbled upon a very strange thing about D. As I understand, module members, such as classes, structs, functions and variables can be marked with an accessibility attribute, such as private, public, package or export. Say, there's a module named "test.d": module test; private int x1; public int x2; private void fnc1() { } public void fnc2() { } public class C1 {} private class C2 {} With a great surprise, I have discovered that from another module it is still possible to access both x1 and C1, though fnc1() is inaccessible ! Can anyone explain me what does it mean ? Thanks in advance.
Oct 02 2010 | http://www.digitalmars.com/d/archives/Module-level_accessibility_29562.html | CC-MAIN-2013-48 | refinedweb | 124 | 66.64 |
from Wikipedia: Sir Nicholas Throckmorton (or Throgmorton) (c. 1515/1516 – 12 February 1571) was an English diplomat and politician, who was an ambassador to France and played a key role in the relationship between Elizabeth I and Mary Queen of Scots. Early years
Nicholas Throckmorton was the fourth of eight sons of Sir George Throckmorton of Coughton Court in Warwickshire and Katherine, daughter of Nicholas, Lord Vaux of Harrowden, and an uncle of the conspirator Francis Throckmorton. He was brought up in the households of members of the Parr family, including that of his cousin Catherine Parr, the last wife of Henry VIII. He got acquainted with young princess Elizabeth when he was serving in the household of the former queen and her new husband Thomas Seymour and became a close confidante. In his youth he also became favourable to the Protestant reformation.
When Seymour was executed in 1546, Throckmorton managed to distance himself from his affairs and eventually became the part of the circle of John Dudley and confidante of the young king Edward VI.
He sat in Parliament from 1545 to 1567, initially as the member for Devizes (a seat previously held by his brother, Clement Throckmorton). During the reign of Edward VI he was in high favor with the regents..
_______________________________
First Cousin of Henry VIII's 6th wife, Queen Catherine Parr.. -------------------- Family and Education
b. 1515/16, 4th s. of Sir George Throckmorton†, and bro. of Anthony, Clement, George†, John I, Kenelm†, and Robert†. m. by 1553, Anne, da. of Sir Nicholas Carew of Beddington, Surr., 10s. inc. Arthur and Nicholas 3da. Kntd. Jan./May 1551.1
Offices Held
Page, household of Henry Fitzroy, Duke of Richmond by 1532-6; servant, household of William, Baron Parr by 1543; sewer, household of Queen Catherine Parr by 1544-7 or 8; gent. privy chamber by 1549-53; undertreasurer of the mint 25 Dec. 1549-24 June 1552; keeper, Brigstock park, Northants. 14 Sept. 1553-d.; j.p. Northants. from c.1559; ambassador to France 1563-4, to Scotland 1565, 1567; chamberlain of the Exchequer from 21 June 1564; chief butler of England and Wales from 28 Nov. 1565.2
Biography
Though from the Catholic side of the family, Throckmorton believed that a protestant foreign policy was necessary for the defence of England and the recovery of Calais. He remained in England for much of Mary’s reign, ‘stood for the true religion’ in the Parliament of October 1553, and survived implication in Wyatt’s rebellion. But the possibility that he might be accused of complicity in the Henry Dudley conspiracy of 1556 decided him to go abroad. In the event he was pardoned, his property was restored in May 1557, and by early 1558 he was back in England exercising his old office of keeper of Brigstock park. He was thus better placed than most protestants to communicate with Princess Elizabeth at Hatfield.3
He was sufficiently in Elizabeth’s confidence to believe that at her accession she would appreciate his suggestions for filling a number of appointments. As principal secretary he suggested, safely enough, Sir William Cecil, but by and large his advice on offices was ignored. His attitude to the Marian Privy Councillors is interesting. He thought that Heath, Catholic archbishop of York, should for the time being be retained as chancellor, along with many of the late Queen’s Council. ‘For religion and religious proceedings’ it was necessary ‘to require the Lords to have a good eye that there be no innovations, no tumults or breach of orders’. As a man who had lived through the disorders occasioned by Somerset’s religious policy, Throckmorton was anxious that the new reign should not run into trouble through the rash activities of his protestant friends. He suggested ‘making you a better party in the Lords House of Parliament [and] for appointing a meet Common House to your proceeding’. Some years earlier, Sir Richard Morison† had called him a ‘Machiavellist’, and the ‘advice’ to Elizabeth bears out this description.
It shall not be meet that either the old [privy councillors] or the new should wholly understand what you mean, but to use them as instruments to serve yourself with ...
Elizabeth employed Throckmorton, during the critical days just after her accession, in various urgent duties (controlling the ports, examining Cardinal Pole’s papers, arranging the state entry into London), but he never attained a major government post.4
Throckmorton was returned to Parliament for west-country boroughs in 1559 and 1563 by courtesy of the 2nd Earl of Bedford. There is no mention of him in the defective 1559 journal, or that of 1563. In the 1566 session he was put on committees dealing with law reform (4 Oct.) and the succession (31 Oct.), and was one of the 30 Members of the Commons summoned on 5 Nov. to hear the Queen’s message about the succession. His reputation rests, for good or ill, on his work as a diplomat. His knowledge of Elizabethan affairs was unrivalled and he had a flair for intelligence. The defence of England was his preoccupation and he was convinced that the only hope for the survival of protestantism in Europe was support for the Huguenots in France, and for the rebel lords in Scotland. By about 1564, therefore, he became a follower of Sir Robert Dudley and had begun to think that Cecil was not only opposing the active policy, but trying to keep exponents of it, such as Throckmorton himself, off the Council. Soon Cecil, then the Queen, began to distrust him. He was one of the first to appreciate that Spain, and not France, was England’s real enemy. Elizabeth was never fully converted to this view, and, certainly in the 1560s, she was concerned to keep the Spanish door ajar. Here again the Earl of Bedford proved his ally, supporting his request for recall from France in 1563, and four years later, as governor of Berwick, doing all he could to forward the policy of support for the Scottish lords, even against Elizabeth’s instructions.5
During Throckmorton’s first embassy to France, beginning in May 1559, he corresponded regularly and frankly with Cecil. His position was difficult. He refused, for example, to kneel at the elevation of the Host, and was ordered either to conform or to absent himself from religious services. After a visit to England from mid-November 1559 to the following January, he was increasingly suspected by the Guises, especially after the conspiracy of Amboise (March 1560), in which they accused him of being involved. Until the possibility of a marriage between Elizabeth and Dudley was over, Throckmorton’s friendship for him did not cause him to hide his conviction that the marriage would be disastrous for Elizabeth’s reputation abroad and at home. In October 1560 he wrote to his cousin, the Marquess of Northampton, that there had been speeches at the French court about Elizabeth which ‘every hair of my head stareth at’ and made his ‘ears glow to hear’.6
Elizabeth’s government thus found him a mixed asset. His position was weak, since Mary Tudor’s defeat in France had lowered England’s reputation there. Again it was necessary that Condé, Coligny and their friends should be strengthened in their opposition to the Guise Catholic interest, which might otherwise act more effectively against the protestants in Scotland; but Elizabeth shrank from the idea of going to war for the Huguenots. All the time both the Queen and Cecil were aware of the danger of his over-playing his hand, which in the event, happened. He encouraged his government to overestimate the strength of the Huguenots, and in July 1562 urgently advised the Queen to accept their offer of Le Havre. His natural impatience, as well as his growing belief in the danger of Spanish intervention on behalf of the French Catholic interest, caused him to upbraid even Cecil for dilatoriness. In the summer of 1560, on Cecil’s departure for Scotland, he had prophesied disaster.
Who can as well stand fast against the Queen’s arguments and doubtful devices? Who will speedily resolve the doubtful delays? Who shall make despatch of anything?7
But by 1562 he was becoming convinced that Dudley rather than Cecil was his chief ally with Elizabeth. In June 1562 Lord Robert and Sir Henry Sidney were godfathers to his son Robert, and early the following year it was to Dudley that he turned for support in his prolonged quarrel with Sir Thomas Smith. He had originally urged his government to send Smith to France, since he greatly respected his abilities, and even suggested him also as a useful English representative at the Council of Trent. But when Smith arrived, in September 1562, relations between the two men quickly deteriorated. Smith, though anxious to bring about the recovery of Calais (which Throckmorton’s experience led him to see was likely to alienate the Huguenots), in general adopted a less aggresive policy than his colleague, and a conflict developed in which Dudley supported Throckmorton while Cecil, though preserving outward impartiality, leaned towards Smith (who was probably carrying out government instructions). Throckmorton persuaded Elizabeth to send troops to help the Huguenots at Le Havre, but this proved an expensive mistake, and Throckmorton’s own identification with Condé’s army, late in 1562, ostensibly as a captive, prejudiced his position. In December he was taken prisoner by Catholic forces, remaining in custody until February 1563, when he retired to Le Havre and maintained liaison between Condé’s forces and the Earl of Warwick. Warwick, however, was unable to hold the town when the Huguenots failed to support him, and, after a short period in England, Throckmorton was sent back to France (June 1563) to negotiate peace terms very different from those he had envisaged. Having no safe-conduct from the French, he was arrested, remaining in prison for some time before Cecil could gain his release. Smith, he complained, did nothing to help him, and although after his release he and Smith officially co-operated in negotiations leading to the Treaty of Troyes (1 Apr. 1564), their personal relations remained bad. At one point during the negotiations both men drew their daggers and were forcibly restrained by the onlookers. After the signing of the treaty, Throckmorton returned to England, where Dudley was making strenuous (and Cecil less strenuous) but unavailing efforts to get him appointed to the Privy Council. Never a wealthy man, he had suffered financially from his period in France, and the two lucrative posts he obtained after his return (chamberlain of the Exchequer and chief butler of England), no doubt eased the burden.
Throckmorton’s next assignment was to Scotland, to try to prevent Mary Stuart’s marriage to Darnley, and encourage her to marry Leicester. He had little hope of success, and achieved none. Typically, he sent Mary a letter of advice, urging her to show clemency to the banished protestant lords. Whether or not this angered Mary Stuart, the Queen and Cecil both found it infuriating. In May 1566 he and Cecil confronted one another, in the presence of Leicester, and Throckmorton promised to do better next time. But his last mission to Scotland, in the following year, with vague instructions to bring about the release of Mary from captivity and make an agreement between her and the rebel lords, was also a failure. He came to the conclusion that it was the lords ‘who must stand her [Elizabeth] in more stead than the Queen of Scots’, and believed that they would be prepared, if they were promised support from England, to send young James to be educated there. Otherwise, they would probably turn to France. In spite of his political convictions, he personally sympathized with Mary, who wrote thanking him for the good feeling he had shown her. He tried unsuccessfully to raise a party to support her, and on Elizabeth’s instructions refused to attend the coronation of James VI. Having annoyed both sides, his recall in September 1567 followed statements in England that ‘he was esteemed to favour too much the lords’.8
This was the end of his diplomatic career. His chances with the Queen and Cecil were finally ruined when, in 1569, his implication in the Norfolk marriage plot brought him an examination before the Privy Council and a short period of imprisonment in Windsor castle. He remained under house arrest until the spring of 1570. In February he wrote a mémoire justificatif to Cecil asking him to sue for his release, and thenceforward took no further part in politics (though there was a rumour early in 1570 that he would be made vice-chamberlain) until his death in London of ‘a peripneumonia’ 12 Feb. 1571. Leicester wrote to Walsingham, who was in France, on the 14th:.
He was buried in St. Catherine Cree, Aldgate, the parish where he had his London residence, a large mansion which had belonged to the abbey of Evesham. As a country house he used Beddington, his wife’s family home in Surrey.9
About his private and domestic life not much is known. His daughter Elizabeth married Sir Walter Ralegh. His books, many of which his son Arthur bequeathed to Magdalen College, Oxford, are almost exclusively political and religious, and are heavily underlined—essentially the ‘library of a practicing diplomat’. There are scattered references to hunting and other outdoor amusements in his letters, but no indication of any marked cultural interests. Personally devout, he was opposed to the wilder puritan schemes, or to any kind of pietism. In his criticisms of Mary Tudor’s reign he deplored the policy of ‘referring all to God, without doing anything ourselves’, which he described as tempting God too far. He admitted that there was still some ‘popery’ in the English church, but wanted as much toleration as was compatible with strong government control, to make a united protestant front against the Spanish Catholic menace abroad. The Earl of Leicester appointed him as a suitable governor of his foundation for the revenues of Warwickshire preachers.10
In his will, made four days before his death, he left a life interest in lands in Buckinghamshire, Northamptonshire and Oxfordshire to his widow with reversion to his eldest son. The Worcestershire lands were left to Arthur, the second son, with reversion to his younger brothers. The younger children were also otherwise provided for. Thomas, the fourth son, was to have the London property after the death of his mother, as well as £500. A similar sum and the privileges of the salt monopoly granted to Throckmorton were bequeathed to the two youngest sons and £500 to his only surviving daughter Elizabeth. The supervisors of the will included the Earl of Leicester, Sir Walter Mildmay, (Sir) John Throckmorton I and Sir William Cordell. These were all left tokens, as were the Marquess of Northampton, Sir William Cecil, Lady Warwick and Lady Stafford. Throckmorton’s death brought differing comments from various quarters: the Earl of Rutland in Paris told Cecil that the event was no source of regret to the French, but to Lord Buckhurst (Thomas Sackville), the news brought no small grief, ‘not only for his private loss, but the general loss which the Queen and the whole realm thereby suffer’. This sense of Throckmorton’s public value was summed up by Walsingham: ‘for counsel in peace and for conduct in war he has not left of like sufficiency his successor that I know’.11 His widow Anne took as her second husband Adrian Stokes.
Ref Volumes: 1558-1603
Authors: Irene Cassidy / P. W. Hasler
Notes 1. Vis. Northants. ed. Metcalfe, 200; Camden, Annals (1717), 221; Stow, Survey of London (1720), i. 63; Nash, Worcs. i. 452; PCC 9 Daper.
2. Soc. Antiq. 1790, p. 167; APC, iv. 76, 77, 84; CPR, 1549-51, p. 137; 1553 and App. Edw. VI, p. 9; 1563-6, pp. 118, 234; CSP Dom. Add. 1547-65, pp. 503, 561; Lansd. 1218, f. 21v. 3. Bodl. e. Museo 17; DNB ; Dugdale, Warws. ii. 749-52; EHR , lxv. 91-8. 4. EHR , lxv. 91-8; A. L. Rowse, Ralegh and the Throckmortons , 25; C. Read, Cecil , 72; CSP Dom. 1547-80, p. 115. 5. Lyme Regis archives N23/2/19; CJ , i. 73; D’Ewes, 126; Camb. Univ. Lib. Gg. iii. 34, p. 209; CSP For. 1561-2, p. 23; 1566-8, pp. 39, 308; CSP Dom. Add. 1566-79, p. 19. 6. Neale, Eliz. 99; CSP For. 1560-1, pp. 342-3, 348; EHR , lxxxi. 474-89. 7. Rowse, 38; Read, 174. 8. CSP For., CSP Span., CSP Scot. , passim; P. Forbes, A Full View of Public Transactions , i. 163-6, 206-12, 216-18, 320-4; ii. 7-14, 36-43, 61-7, 251-9, 342-4; M. Dewar, Sir Thomas Smith , passim; HMC Hatfield , xii. 255; Lansd. 102, ff. 84, 110; Strype, Sir Thomas Smith , 70, 81 et passim; Rowse, 41, 46; T. Wright, Eliz. i. 208. 9. Haynes, State Pprs. 471, 541-3, 547, 577; HMC Hatfield , i. 363, 426, 430, 435, 456, 465; Wright, i. 355; Camden, Annals (1717), 221; Stow, Survey of London , ed. Kingsford, i. 138, 142-3; ii. 290; Rowse, 46; CSP Dom. 1566-79, p. 16; CPR , 1560-3, p. 400. 10. CSP Dom. 1547-80, p. 304; Rowse, 336-8. 11. PCC 9 Daper; Lansd. 117, ff. 36, 38, 39; CSP For. 1569-70, p. 407 | http://www.geni.com/people/Sir-Nicholas-Throckmorton-MP/6000000000426799877 | CC-MAIN-2015-22 | refinedweb | 2,919 | 69.92 |
One of the great things about the MVVM design pattern is that it allows us to maximize the code in our cross-platform model and view-model layers. This means we’ve written the bulk of our code and we’ve also written unit tests for it, giving us a degree of confidence that our code works. This article introduces UI testing in Xamarin.
Unit tests are great, but they don’t cover two important areas — have we used the correct controls on our view and bound them correctly, and does our app run on the device. It’s great to have a property on a view model that we bind to a text field to allow the user to enter the name of a counter, but what if we accidentally use the wrong control, such as a label instead of a text box, or even use the right control but forget to add the binding code? What if we’ve used a feature that was only added to the Android SDK in API 21 but our app manifest shows our app runs on API 19 and higher? This is where UI testing comes in — it allows us to run our app on emulators, simulators, and devices, and to write automated tests in code against it.
The concept behind UI testing is simple — run your app and have something interact with it using the user interface components (such as tapping buttons or entering text in text boxes), and validate that everything is working by ensuring the app doesn’t crash and that the results of the users’ actions are shown on the UI as expected. This kind of testing started out life for desktop apps, where the aim was to make testing more reliable and cheaper — after all, humans are expensive and after testing the same screen many, many times they can get bored and make mistakes or miss problems. Automated UI testing also allowed for better time usage with tests being run overnight and developers discovering if they’ve broken anything the next morning.
For desktop apps, UI testing was reasonably simple — launch the app and test it, maybe testing on a few different screen sizes, but always on one OS with maybe one or two different versions as desktop OSes don’t change often. With mobile, things are more complicated. We want to support two major OSes with our cross-platform app, with multiple versions. We also have different hardware with different screen sizes. On iOS this isn’t too bad — we only need to support a small number of OS versions (maybe the current one and the previous) and a small number of different devices, but on Android, as we’ve already seen it’s a mess with multiple OS versions in regular use, a huge range of screen sizes available, and worst of all customizations to the OS from both the hardware manufacturer and carrier.
This is why UI testing is hugely important for mobile apps. A human can’t test on a wide range of devices without needing a lot of time or lots of humans involved in the process (expensive) and without them all going mad as they install the app on another device and run the same test for the millionth time.
Writing UI Tests Using Xamarin UITest
You write UI tests in the same way that you write a unit test — you decide what you want to test, then write some code to create a test. This code uses some kind of framework that is able to launch your app and interact with it as if it was a real user. Many different frameworks are around for testing, but for this article, I’ll focus on Xamarin UITest, which is well integrated into Visual Studio. For testing Android apps, you can use either Windows or Mac, but for testing iOS apps you’ll need to use a Mac — it’s not supported on Windows at the moment.
Xamarin UITest is based off a testing framework called Calabash which was written in Ruby and is fully open source and maintained by Xamarin. UITest is a layer on top of this that allows you to write your tests in C# and run them using NUnit. These tests are written in the same way as unit tests using the arrange, act, assert pattern, with the arrange part launching the app and getting it to the relevant state ready to test, the act interacts with the UI as if it was a user, and the assert queries the UI to ensure it’s in the correct state.
Setting Up Your App for UI Testing
For this section, we’ll focus on the Countr app, as there’s more to test here. You can download the source code from this book here, so download this and open the completed Countr solution from chapter 13. When we built the model layer in our app, we added a new unit test project that we used for unit tests for both the model and view model layers. For our UI tests, we also need a new project that contains and runs our UI tests.
Creating the UI Test Project
Add a new UITest project to the Countr solution using Visual Studio for Windows by right-clicking on the solution, selecting ‘Add→New Project…’ and from the ‘Add new project’ dialog select ‘Visual C#→Cross-Platform’ on the left and select ‘UI Test App’ from the middle. For Mac, right-click on the solution and select ‘Add→Add New Project…’, select ‘Multiplatform→Tests’ on the left and ‘UI Test App’ from the middle and tap ‘Next’. Name your project ‘Countr.UITests’ and click ‘OK’ (Windows) or ‘Create’ (Mac).
Once the test project has been added it’ll install two NuGet packages that UITest needs — NUnit and Xamarin.UITest. It’s worth updating the Xamarin.UITest NuGet package to the latest version, as they often push out bug fixes to ensure it works on the latest mobile OS versions. Don’t update NUnit. UITest only works with NUnit 2, not NUnit 3 and if you update this package your tests won’t work and you’ll need to remove the package and re-install NUnit 2.
The UI test project has two files auto-generated for you —
AppInitializer.csand
Tests.cs.
AppInitializer.cs: This is a static helper class with a single static method which is used to start your app. UITest has an
IAppinterface used to represent your running app, and this has methods on it for interacting with the UI elements in your app or to a limited extent the device hardware (for example rotating the device). The
StartAppmethod returns an instance of
IAppthat your tests can use. This method uses a helper class called
ConfigureAppto start the app, and this helper class has a fluent API that allows you to configure and run your app. The auto-generated code doesn’t do much to configure the app, it specifies its type (Android or iOS) based on the platform passed into the method.
Tests.cs: This file contains a UI test to run. This test fixture has a parameterized constructor that takes the platform to run the tests on as one of the values from the
Platformenum, either
Platform.iOSor
Platform.Android, and has two
TestFixtureattributes, one for each platform. This means that we have two test fixtures - one Android and one iOS. This fixture has a setup method that uses the
AppInitializerto start the app before each test, and a single test that calls the Screenshot method on the
IAppreturned from the app initializer to take a screenshot.
Setting Up Your Android Apps for UI Testing
By default, Xamarin Android apps are configured in debug builds to use the shared mono runtime. When you deploy your app to a device or emulator, time is taken copying the code over, and anything that can make your app smaller is a good thing as this reduces the time to install. Xamarin Android apps use a mono runtime (mono being the cross-platform version of.Net that Xamarin is based on) to provide the.Net framework, and this is a large piece of code bundled in your app. Rather than bundling it in, for debug builds you can use a shared version which is installed separately, making your app smaller. Unfortunately, when doing UI tests you can’t use the shared runtime, and you’ve two options:
- Don’t use the shared runtime: You can turn off the shared mono runtime from the project properties. In Visual Studio for Windows you’ll find it in the ‘Android Options’ tab at the top of the ‘Packaging’ page, on Mac it’s on the ‘Android Build’ tab at the top of the ‘General’ page. Untick the ‘Use Shared Mono Runtime’ box to turn this off, but be aware that this increases your build times.
- Release builds: Release builds don’t have the shared mono runtime turned on. After all, when you build a release version it’s usually for deployment such as to the store, and your users won’t have the shared mono runtime installed. The downside to using a release build is that you need to grant your app permission to access the internet. This isn’t a problem if your app already accesses the internet, but if it doesn’t you many not want to ask your users for this extra permission as they might not want to grant it. If you want to use a release build, then you can grant this permission in Visual Studio by opening the project properties, heading to the ‘Android Manifest’ tab and finding the INTERNET permission in the ‘Required Permissions’ list and ticking it. On Mac, double-click on the
AndroidManifest.xmlfile in the Properties folder and tick the permission.
Setting Up Your iOS Apps for UI Testing
Apart from the shared mono runtime, out-of-the-box UITest works with Android — UITest is able to connect to your running Android app on a device or an emulator and interact with it. iOS, on the other hand, isn’t quite as simple. Due to the stricter security on iOS, you can’t connect to a simulator or device and interact with the app. Instead, we need to install an extra component into your iOS apps that you initialize before your UI tests can run. To do this, add the Xamarin.TestCloud.Agent NuGet package to the Countr iOS app (Test Cloud is the Xamarin cloud-based testing service — you’ll see the name used in a few places with UITest).
Once this NuGet is installed you’ll need to add a single line of code to initialize it. Open
AppDelegate.cs and add the following code:
public override bool FinishedLaunching(UIApplication app, NSDictionary options) { #if DEBUG Xamarin.Calabash.Start(); #endif ... }
This code starts the Calabash server for debug builds only. The Calabash server is an HTTP server that runs inside your app, and the UITest framework connects to this to allow it to interact with your app. Apple is strict about security in their apps, and they’d never allow an app with an open HTTP port like this on the app store. To avoid this the Calabash server is only enabled for debug builds — for release builds this code won’t get run, the linker strips it out and your app won’t get rejected from the app store (at least not due to the Calabash server).
Running the Auto-Generated Tests
UI tests are run in the same way as any other unit test — you can run them from the test pad/explorer. UI tests rely on having a compiled and packaged app to run, and the first step is to build and either deploy to device, emulator or simulator or run the app you want to test. Note that doing a build isn’t enough for Android — a build compiles the code, it doesn’t package it up. The easiest way to ensure you have a compiled and packaged app is to run it once. On Android build in release, for iOS you need to use a debug build to enable the Calabash server. You also need to set what device, emulator or simulator you want to run your tests on in the same way that you’d select the target for debugging.
Getting Ready to Run the Tests
If you open the test pad or explorer, you may not see the UI tests if the project hasn’t been built. If you don’t see the tests then build the UITest project and you should see the tests appear. If you expand the test tree in Visual Studio for Mac you’ll see two fixtures — Tests(Android) and Tests(iOS).
Two fixtures are declared with the two
TestFixture attributes on the Tests class. When you run the tests from Tests(Android) it constructs the test fixture by passing
Platform.Android to the constructor, which in turn uses the
AppInitializer to start the Android app. Tests(iOS) is the same, but for the iOS app. Under each fixture you’ll see the same test called
AppLaunches.
In Visual Studio, you don’t see the same hierarchy out of the box; drop down the ‘Group By’ box and select ‘Class’ to see the tests grouped by test fixture. You can only test Android apps using Visual Studio. Feel free to comment out the
[TestFixture(Platform.iOS)] attribute from the Tests class to remove these tests from the explorer.
Before we can run the test, we need to make a small tweak. Despite the test calling
app.Screenshot, this test won’t spit out a screenshot. For some reason, out-of-the-box UITest is configured to only create screenshots if the tests are run on Xamarin’s Test Cloud, and we need to change the configuration to always generate screenshots. To do this, add the code in the following listing to
AppInitializer.cs.
public static IApp StartApp(Platform platform) { if (platform == Platform.Android) { return ConfigureApp .Android .EnableLocalScreenshots() .StartApp(); } return ConfigureApp .iOS .EnableLocalScreenshots() .StartApp(); }
By default, the
StartApp method doesn’t do anything to configure the app which is being tested, and this means that it expects the app to be tested to be a part of the current solution. stack of green arrows. Right-click on this and select 'Add App Project'. From the dialog that appears tick 'Countr.Droid' and 'Countr.iOS' and click 'OK'. You’ll see these two apps appear under the 'Test Apps' node. If you right-click on one of them you’ll see a number of options, including a list of possible target devices to run against, with 'Current device' ticked. This list is used to set which device the UI tests are run against when you run them from the pad. If you leave 'Current Device' selected it’ll use whatever target is set from the main toolbar, but if you always want the tests to run against a particular emulator, simulator or device you can select it from here.
Setting the App to Test in Visual Studio 2017
UITest uses NUnit to run tests, and we need to ensure Visual is configured to run NUnit tests. As I mentioned, to use UITest we need to install the NUnit 2 adapter. From ‘Tools→Extensions and Updates’, select the ‘Online’ tab on the left and search for ‘NUnit 2.’ Click on ‘NUnit 2 Test Adapter’ in the list in the middle and click the ‘Download’ button. You’ll need to close Visual Studio for this to be installed; relaunch it after the install and re-load the Countr solution.
Visual Studio only supports testing Android; delete the
[TestFixture(Platform.iOS)] attribute from the
Tests class. This stops iOS tests showing up in the test explorer.
Unlike Visual Studio for Mac, there is no way on Windows to set the test apps. Instead, we need to configure this in code by giving it the path to the Android ‘APK’, which is in the output folder and is named based on the Android package name with the file extension.apk, with release builds having the suffix of ‐Signed to indicate that they’ve been signed with our keystore. We set the package name in the Android manifest in the last chapter based off a reverse domain name (mine was set to
io.jimbobbennett.Countr), and you can find this file in
Countr.Droid\bin\Release (assuming you’ve built using the release configuration) or
Countr.Droid\bin\Debug for the debug configuration. We’ll be using release Android builds for the purposes of this book; add the code in the following listing to point UITest to the right APK, substituting in your package name.
if (platform == Platform.Android) { return ConfigureApp .Android .EnableLocalScreenshots() .ApkFile ("../../../Countr.Droid/bin/Release/<your package name>‐Signed.apk") .StartApp(); }
When tests are run they’re run on the device or emulator that you selected for the build configuration, the same as for debugging your apps. This makes it easy to change which device tests are run on by changing the dropdown in the toolbar, the same way you’d change the target for debugging.
Running the Test
Once the test apps are configured you can run the tests by double-clicking on them in the test pad in Visual Studio for Mac, or by right-clicking on them in the Visual Studio for Windows Test Explorer and selecting ‘Run Selected Tests.’ If you’re testing Android set the build configuration to release, for iOS set it to debug.
That’s all for UI testing. If you want to learn more about making cross-platform mobile apps using Xamarin, download the free first chapter of Xamarin in Action and see this Slideshare presentation.
This article is an excerpt from Xamarin in Action. Save 37% off the cover price using code fccbennett here.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/introduction-to-ui-testing-xamarin-apps | CC-MAIN-2017-39 | refinedweb | 3,009 | 67.49 |
Custom Server Control Syntax
Custom server control syntax is used to declare user controls and custom server controls as markup elements in ASP.NET application files, including Web pages, user controls, and master pages. This syntax is nearly identical to the syntax used to declare all ASP.NET server controls, except that for custom and user controls you typically declare a unique tag prefix and a tag name that corresponds to your control.
<tagprefix:tagname OR <tagprefix:tagname
Attributes
tagprefix
An alias for the fully qualified namespace of the markup elements used on the page. The value of the alias is arbitrary, but it provides a shorthand way of associating the markup for a custom control or user control with the namespace of the other markup elements declared in an ASP.NET file.
tagname
For a custom control, the tagname refers to the name of a type for which ASP.NET will create a run-time instance. For a user control, the tagname maps to the associated source file that defines the user control, and that file in turn defines the type for which ASP.NET creates an instance.
id
A unique identifier that enables programmatic reference to the control.
attributename
The name of an attribute, which corresponds to a property on the control.
value
The value assigned to the attribute (property).
eventname
The name of an event in the control.
eventhandlermethod
The name of an event handler defined to handle the specified event for the control.
Remarks
Use custom server control syntax to declare user controls and custom server controls within the body of an ASP.NET Web page. For this syntax to work, the control must be registered on the page or in a configuration file (you can register a control on all pages of an application by adding it to the <controls> of the Web.config file). You can register a control on an individual page by using the @ Register directive.
The opening tag of an element for a user or custom control must include a runat="server" attribute/value pair. To enable programmatic referencing of the control, you can optionally specify a unique value for the id attribute.
Any properties that you have authored on a user or custom server control can be exposed declaratively in the opening tag of the server control. Simply declare the property as an attribute and assign it a value. For example, if you create a custom text box control with a width property, declaring width="50" in the opening tag of the control sets the server control's display width to fifty pixels.
In some cases, attributes might be objects that have their own properties. In this case, include the property name in the declaration. For example, if you create a custom text box control that includes a font attribute, it can include a name property. You can then declare the property in the opening tag of the server control as font-name="Arial". For more information about developing custom server controls with properties, see Server Control Simple Properties and Subproperties.
You can declare events with custom server controls and user controls in the same way you would with any ASP.NET server control. Specify the event binding in the opening tag of the server control with an attribute and value. For more information on authoring custom server controls that support events, see Server Event Handling in ASP.NET Web Pages.
You can use and develop custom server controls that support inline templates. For details on how to declare templates in a custom server control, see Server Control Inline Template Syntax. To learn how to author custom server controls that support inline templates, see How to: Create Templated ASP.NET User Controls.
Example
The following code example demonstrates how you can register and declare a custom server control in an ASP.NET Web page. The first section of code creates a public class derived from the Button class. The second part of the code is a Web page that hosts the custom button. Notice that the Web page uses the @ Register directive to register the namespace for the control and to set the tagprefix attribute. The control is then referenced in the page by using the tagprefix value and the control's class name, separated by a colon (:).
For the code example to run, you must compile this custom control., which is why the @ Register directive in the page does not need to declare an Assembly attribute (because the source is dynamically compiled at run time). For a walkthrough that demonstrates how to compile, see Walkthrough: Developing and Using a Custom Web Server Control.
// A custom Button control to reference in the page. using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Security.Permissions using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.Html CustomButton : Button { public CustomButton() { this. <html> <script runat="server"> private void custButton_Click(Object sender, EventArgs e) { TextBox.BackColor = System.Drawing.Color.Green; TextBox.Text = "You clicked the button"; } <>
' A custom Button control to reference in the page. Imports System Imports System.Collections Imports System.ComponentModel Imports System.Drawing Imports System.Security.Permissions Imports System.Web Imports System.Web.Configuration Imports System.Data.Sql Imports System.Web.UI Imports System.Web.UI.WebControls Imports System.Web.UI.WebControls.WebParts Namespace Samples.AspNet.VB.Controls <AspNetHostingPermission(SecurityAction.Demand, _ Level:=AspNetHostingPermissionLevel.Minimal)> _ <AspNetHostingPermission(SecurityAction.InheritanceDemand, _ Level:=AspNetHostingPermissionLevel.Minimal)> _ Public Class CustomButton Inherits Button Public Sub New() Me. <html> <script runat="server"> Sub custButton_Click(ByVal sender As Object, _ ByVal e As EventArgs) TextBox.BackColor = Drawing.Color.Green TextBox.Text = "You clicked the button." End Sub <>
See Also
Concepts
ASP.NET Web Page Syntax Overview | https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/1e9b4c9f%28v%3Dvs.100%29 | CC-MAIN-2019-47 | refinedweb | 967 | 51.65 |
Code fragments showing how to create a singly linked list and how to manipulate the list and the elements of the list.
The following code fragments show how a singly linked list can be constructed
and manipulated. The list consists of instances of an example class,
CItem,
which forms items on a stack implemented as a singly linked list using the
iSlink data
member as the link object. In this example, a
CItem object
can contain an item of text implemented as an
HBufC.
The class is declared as:
class CItem : public CBase { public : static CItem* NewL(const TDesC& aText); static CItem* NewLC(const TDesC& aText); CItem(); virtual ~CItem(); const HBufC* GetText(); public : static const TInt iOffset; private : void ConstructL(const TDesC& aText); private : TSglQueLink iSlink; HBufC* iText; friend class CStack; };
The
CItem member functions are implemented as:
const TInt CItem::iOffset = _FOFF(CItem,iS() {}
CItem::~CItem() { delete iText; }
const HBufC* CItem::GetText() { return (iText); }
As part of the construction process, a
CItem constructs
an
HBufC of the correct length and copies the content of
the descriptor parameter into it.
The stack is implemented by an instance of the example class
CStack.
This maintains the stack by adding
CItem objects onto the
end of the list and removing them from the end of the list. When removing
them from the end of the list, a pointer to the removed
CItem object
is returned.
In this example, the list header,
iStack, and the iterator,
iStackIter,
are declared as data members of the class and are constructed when the
CStack object
is constructed. A C++ constructor must be supplied so that
iStackIter can
be constructed. (
TSglQueIter has no default constructor).
AddToStack() takes a
CItem object and
adds it to the end of the singly linked list.
RemoveFromStack() takes the
CItem object
at the end of the singly linked list, removes it from the list and returns
a pointer to it.
The
CStack class is declared as:
class CStack : public CBase { public : static CStack* NewL(); static CStack* NewLC(); CStack(); void Construct(); virtual ~CStack(); CItem* RemoveFromStack(); void AddToStack(CItem& anItem); private : TSglQue<CItem> iStack; TSglQueIter<CItem> iStackIter; };
The
CStack member functions are implemented as:
CStack* CStack::NewLC() { CStack* self = CStack::NewL(); CleanupStack::PushL(self); return self; }
CStack* CStack::NewL() { CStack* self = new (ELeave) CStack; return self; }
CStack::CStack() : iStack(CItem::iOffset),iStackIter(iStack) {}
The C++ constructor is needed so that the list header
(
iStack) and the list iterator (
iStackIter)
can be properly constructed.
Before destroying a
CStack object,
the list is destroyed. This is achieved using the iterator (
iStackIter).
The iterator pointer is set to point to each element in turn, removing that
element from the list before destroying it.
Once the iterator has reached
the end of the list, the operator
++ returns
NULL.
The
destruction process is safe if the list is empty; the statement
iStackIter.SetToFirst() is
harmless, the operator
++ returns
NULL and
execution of the body of the
while loop never happens.
CStack::~CStack() { CItem* item; iStackIter.SetToFirst(); while ((item = iStackIter++) != NULL) { iStack.Remove(*item); delete item; }; }
Adding an element to the stack simply involves adding
the
CItem object to the end of the list.
void CStack::AddToStack(CItem& anItem) { iStack.AddLast(anItem); }
The
RemoveFromStack() function
returns
NULL, if the list is empty, otherwise it just uses
the
Last() member function to return a pointer to the last
element in the list before removing it.
CItem* CStack::RemoveFromStack() { CItem* lastitem; if (iStack.IsEmpty()) return NULL; lastitem = iStack.Last(); iStack.Remove(*lastitem); return (lastitem); }
Executing the code results in a singly linked list of
CItem objects
each containing a pointer to an
HBufC descriptor each of
which, in turn, contains the text “8”, “89”, and so on through to “89ABCDEF”:
{ CStack* stack; CItem* item; TBuf<16> buffer; TRAPD(leavecode,stack = CStack::NewL()); if (leavecode != KErrNone) { // Cannot create stack return; }
for (TUint jj = 8; jj < 16; jj++) { buffer.AppendNumUC(jj,EHex); TRAPD(leavecode,item = CItem::NewL(buffer)); if (leavecode != KErrNone) { // Cannot create item delete stack; return; } stack->AddToStack(*item); }
as the following shows:
Figure: Example singly linked list
The following code removes each
CItem element
from the list, starting with the last and working through to the first until
the list is empty.
while ((item = stack->RemoveFromStack()) != NULL) { // item->GetText());can be used to access the text. delete item; };
delete stack;
Note that unlike doubly linked lists, elements can only be added to the start or the end of a singly linked list. Elements cannot be added to the middle of the list. | http://devlib.symbian.slions.net/belle/GUID-9D3637D4-43BD-51ED-B4BC-1F234F09E24B.html | CC-MAIN-2021-21 | refinedweb | 750 | 59.64 |
The createNewFile() function in Java can create a file. This method produces a boolean value: true if the file was completed successfully, false if it already exists. As you can see, the procedure is encased in a try…catch block. It is required because an IOException is thrown if an error occurs. For example, if the file cannot be created for any reason.
import java.io.File; import java.io.IOException; public class CreateFile { public static void main(String[] args) { try { File newObj = new File("codeunderscored.txt"); if (newObj .createNewFile()) { System.out.println("File created: " + newObj.getName()); } else { System.out.println("File already exists."); } } catch (IOException e) { System.out.println("An error occurred."); e.printStackTrace(); } } }
To create a file in a particular directory (which requires permission), specify the file’s path and escape the “\” character with double backslashes (for Windows). On Mac and Linux, type the route, such as /Users/name/codeunderscored .txt.
import java.io.File; import java.io.IOException; public class CreateFileDir { public static void main(String[] args) { try { File newObj = new File("Users/Code\\codeunderscored.txt"); if (newObj .createNewFile()) { System.out.println(" Created file: " + newObj.getName()); System.out.println("Absolute path: " + newObj.getAbsolutePath()); } else { System.out.println("File is already there."); } } catch (IOException e) { System.out.println("An error occurred."); e.printStackTrace(); } } }
How to Write to a file
The FileWriter class and its write() method are used in the following example to write some text to the file we created earlier. It’s important to remember that once you’ve finished writing to the file, you should shut it with the close() method:
import java.io.FileWriter; import java.io.IOException; public class WriteToFile { public static void main(String[] args) { try { FileWriter newWriter = new FileWriter("codeunderscored .txt"); newWriter.write("Writing to files in Java is easy peasy!"); newWriter.close(); System.out.println("Completed writing to the file."); } catch (IOException e) { System.out.println("An error occurred."); e.printStackTrace(); } } }
Using BufferedWriter Class
BufferedWriter is a class that allows you to write buffered text. It’s used to create a character-output stream from the text. Characters, strings, and arrays can all be written with it. It has a default buffer size; however, it can be changed to huge buffer size. If no prompt output is required, it is preferable to encapsulate this class with any writer class for writing data to a file.
// Program for writing into a File using BufferedWriter Class // Importing java input output libraries import java.io.BufferedWriter; import java.io.FileWriter; import java.io.IOException; // Main class public class BufferedWriterClass { public static void main(String[] args) { // Assigning the file content as input to illustrate String text = "Welcome to Codeunderscored "; // Try block to check for exceptions try { // Create a BufferedWriter object BufferedWriter bfWriter = new BufferedWriter(new FileWriter( "/Users/Code/Desktop/underscored.docx")); // Writing text to the file bfWriter.write(text); // Printing file contents System.out.print(text); // Display message showcasing program execution success System.out.print( "content added to file successfully"); // Closing the BufferedWriter object bfWriter.close(); } // Catch block for handling exceptions that occur catch (IOException e) { // Printing the exception message on the console System.out.print(e.getMessage()); } } }
Using the FileOutputStream class
The BufferedWriter class is used to write to a file in this example. However, because this class has a big buffer size, it can write massive amounts of data into the file. It also necessitates the creation of a BufferedWriter object, such as FileWriter, to write content into the file.
It’s used to save unprocessed stream data to a file. Only text can be written to a file using the FileWriter and BufferedWriter classes, but binary data can be written using the FileOutputStream class.
The following example shows how to use the FileOutputStream class to write data to a file. It also necessitates the creation of a class object with the filename to write data to a file. The content of the string is transformed into a byte array, which is then written to the file using the write() method.
// Program for Writing to a File using the FileOutputStream Class import java.io.FileOutputStream; import java.io.IOException; public class WriteUsingFileOutputStream { public static void main(String[] args) { // Assigning the file contents String fileContent = "Codeunderscored extravaganza "; FileOutputStream outputStream = null; // Try block to check if exception occurs try { // Creating an object of the FileOutputStream outputStream = new FileOutputStream("underscored.txt"); // Storing byte contents from the string byte[] strToBytes = fileContent.getBytes(); // Writing directly into the given file outputStream.write(strToBytes); // Printing the success message is optional System.out.print( "Content added to File successfully."); } // Catch block handling the exceptions catch (IOException e) { // Showing the exception/s System.out.print(e.getMessage()); } // use of the finally keyword with the try-catch block – to execute irregardless of the // exception finally { // object closure if (outputStream != null) { // Second try catch block enforces file closure // even if an error happens try { // file connections closure // when no exception has happen outputStream.close(); } catch (IOException e) { // shows exceptions that are encountered System.out.print(e.getMessage()); } } } } }
The writeString() function
Version 11 of Java supports this approach. Four parameters can be passed to this method. The file location, character sequence, charset, and options are all of these. The first two parameters are required for this method to write to a file. It saves the characters as the file’s content. It returns the path to the file and can throw four different types of exceptions. When the file’s content is short, it’s best to use it.
It demonstrates how to put data into a file using the writeString() function from the Files class. Another class, Path, is used to associate the filename with the destination path for the content.
The readString() function of the Files class reads the content of any existing file. The code then uses the latter to ensure that the text is appropriately written in the file.
// Program for Writing Into a File using the writeString() Method import java.io.IOException; import java.nio.file.Files; import java.nio.file.Path; // Main class public class WriteStringClass { public static void main(String[] args) throws IOException { //file content assigning String text = "Codeunderscored extravaganza!"; // definition of the file' name Path fileName = Path.of( "/Users/Code/Desktop/undercored.docx"); // Writing into the file Files.writeString(fileName, text); // Read file contents String fileContent = Files.readString(fileName); // Printing the files' content System.out.println(fileContent); } }
Conclusion
The Java FileWriter class is used to write character-oriented data to a file. It is a character-oriented class since it is utilized in java file handling. There are numerous ways to write to a file in Java, as there are numerous classes and methods that can accomplish this. They include the writeString() function, FileWriterclass, BufferedWriter class and the FileOutputStream. | https://www.codeunderscored.com/writing-files-in-java-with-examples/ | CC-MAIN-2022-21 | refinedweb | 1,123 | 51.65 |
This tutorial explains what is the Arduino EEPROM and what it is useful for. We’re also going to show you how to write and read from the EEPROM and build a project example to put the concepts learned into practice.
We have a similar tutorial for the ESP32: ESP32 Flash Memory – Store Permanent Data (Write and Read)
Introduction
When you define and use a variable, the generated data within a sketch only lasts as long as the Arduino is on. If you reset or power off the Arduino, the data stored disappears.
If you want to keep the data stored for future use you need to use the Arduino EEPROM. This stores the variable’s data even when the Arduino resets or the power is turned off.
What is EEPROM?
The microcontroller on the Arduino board (ATMEGA328 in case of Arduino UNO, shown in figure below) has EEPROM (Electrically Erasable Programmable Read-Only Memory). This is a small space that can store byte variables.
The variables stored in the EEPROM kept there, event when you reset or power off the Arduino. Simply, the EEPROM is permanent storage similar to a hard drive in computers.
The EEPROM can be read, erased and re-written electronically. In Arduino, you can read and write from the EEPROM easily using the EEPROM library.
How many bytes can you store?
Each EEPROM position can save one byte, which means you can only store 8-bit numbers, which includes integer values between 0 and 255.
The bytes you can store on EEPROM dependson the microcontrollers on the Arduino boards. Take a look at the table below:
However, if you need to store more data you can get an external EEPROM.
The EEPROM finite life
The EEPROM has a finite life. In Arduino, the EEPROM is specified to handle 100 000 write/erase cycles for each position. However, reads are unlimited. This means you can read from the EEPROM as many times as you want without compromising its life expectancy.
Applications in Arduino projects – Remember last state
The EEPROM is useful in Arduino projects that need to keep data even when the Arduino resets or when power is removed.
It is specially useful to remember the last state of a variable or to remember how many times an appliance was activated.
For example, imagine the following scenario:
- You’re controlling a lamp with your Arduino and the lamp is on;
- The Arduino suddenly loses power;
- When the power backs on, the lamp stays off – it doesn’t keep its last change.
You don’t want this to happen. You want the Arduino to remember what was happening before losing power and return to the last state.
To solve this problem, you can save the lamp’s state in the EEPROM and add a condition to your sketch to initially check whether the state of the lamp corresponds to the state previously saved in the EEPROM.
We’ll exemplify this with an example later in this post in the Example: Arduino EEPROM remember stored LED state.
Read and Write
You can easily read and write into the EEPROM using the EEPROM library.
To include the EEPROM library:
#include <EEPROM.h>
Write
To write data into the EEPROM, you use the EEPROM.write() function that takes in two arguments. The first one is the EEPROM location or address where you want to save the data, and the second is the value we want to save:
EEPROM.write(address, value);
For example, to write 9 on address 0, you’ll have:
EEPROM.write(0, 9);
Read
To read a byte from the EEPROM, you use the EEPROM.read() function. This function takes the address of the byte has an argument.
EEPROM.read(address);
For example, to read the byte stored previously in address 0.:
EEPROM.read(0);
This would return 9, which is the value stored in that location.
Update a value
The EEPROM.update() function is particularly useful. It only writes on the EEPROM if the value written is different from the one already saved.
As the EEPROM has limited life expectancy due to limited write/erase cycles, using the EEPROM.update() function instead of the EEPROM.write() saves cycles.
You use the EEPROM.update() function as follows:
EEPROM.update(address, value);
At the moment, we have 9 stored in the address 0. So, if we call:
EEPROM.update(0, 9);
It won’t write on the EEPROM again, as the value currently saved is the same we want to write.
Example: Arduino EEPROM remember stored LED state
In this example, we’re going to show you how to make the Arduino remember the stored LED state, even when we reset the Arduino or the power goes off.
The following figure shows what we’re going to exemplify:
Parts required
Here’s the parts required for this project (click the links below to find the best price at Maker Advisor):
- Arduino UNO – read Best Arduino Starter Kits
- 1x LED
- 1x 220Ω resistor
- 1x Pushbutton
- 1x 1kΩ resistor
- 1x Breadboard
- Jumper wires
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
Schematics
Here’s the circuit schematics for this project. This is just a pushbutton that will turn an LED on and off.
Code
Copy the following code to the Arduino IDE and upload it to your Arduino board. Make sure you have the right board and COM port selected.
/* * Rui Santos * Complete Project Details */ #include <EEPROM.h> const int buttonPin = 8; // pushbutton pin const int ledPin = 4; // LED pin int ledState; // variable to hold the led state input and output pinMode(buttonPin, INPUT); pinMode(ledPin, OUTPUT); // set initial LED state digitalWrite(ledPin, ledState); // initialize serial monitor Serial.begin (9600); //check stored LED state on EEPROM using function defined at the end of the code checkLedState(); } void loop() { // read the state of the switch into a local variable int reading = digitalRead(buttonPin);; // only toggle the LED if the new button state is HIGH if(buttonState == HIGH) { ledState = !ledState; } } } // set the LED state digitalWrite(ledPin, ledState); // save the current LED state in the EEPROM EEPROM.update(0, ledState); // save the reading. Next time through the loop, // it'll be the lastButtonState lastButtonState = reading; } // Prints and upates the LED state // when the Arduino board restarts or powers up void checkLedState() { Serial.println("LED status after restart: "); ledState = EEPROM.read(0); if(ledState == 1) { Serial.println ("ON"); digitalWrite(ledPin, HIGH); } if(ledState == 0) { Serial.println ("OFF"); digitalWrite(ledPin, LOW); } }
This is a debounce code that changes the LED state every time you press the pushbutton. But there’s something special about this code – it remembers the saved LED state, even after resetting or powering of the Arduino.
Basically, we save the current LED state in the ledState variable and save it to the EEPROM with the following line:
EEPROM.update(0,ledState);
At the beginning of the code on the setup(), we check the ledState saved on EEPROM and set the led on or off accordingly to that state when we restart the program. We do that with a function we’ve created at the end of the code, checkLedState()
void checkLedState() { Serial.println("LED status after restart: "); ledState = EEPROM.read(0); if (ledState == 1) { Serial.println ("ON"); digitalWrite(ledPin, HIGH); } if (ledState == 0) { Serial.println ("OFF"); digitalWrite(ledPin, LOW); } }
Demonstration
For a demonstration of this example, watch the video below.
Wrapping up
In this post you’ve learned about the Arduino EEPROM and what it is useful for. You’ve created an Arduino sketch that remembers the last LED state even after resetting the Arduino.
This is just a simple example for you to understand how the use of EEPROM. The idea is that you apply the concepts learned in this tutorial to your own projects.
If you like Arduino, we recommend taking a look at our Arduino resources:
We hope you’ve found this article useful.
Thanks for reading.
40 thoughts on “Arduino EEPROM Explained – Remember Last LED State”
Great tutorial. Can you please write a tutorial for 24LC04 or similar external Eeprom also. Awaiting your simplified tutorials
Hi Kumaran.
Thanks for the suggestion, we may came up with that tutorial in the future.
Stay tuned. 🙂
We are waiting for that
Hi.
We haven’t created that tutorial yet. We’re busy with other projects at the moment.
Thanks for understanding.
I think you’re skipping over the real significance of the write/erase physical limitation of 100,000.
If you had something that changes state every ten seconds and, each time, you update the state in EEPROM, then you’ll exceed the 100,000 limit in a little over eleven and a half days for a given position.
There are other methods for saving a state at power failure which, although they require a bit of extra hardware, make much better use of the limited nature of EEPROM.
I learnt about the EEPROM limitation the hard way!
Hi Duncan..
Read the “Update a value” section on this post to learn about the update() function.
Thanks.
Yee, I accept that, and fully admit that I’d glossed over it. My apologies to all for any confusion caused.
My main point was that doing anything with the EEPROM on each pass through the loop was unnecessary. It would be better to do so only when there has been a change of state and, if the idea is to overcome the risk of data loss on power fail, that there are ways of doing it only when the power actually fails.
Arduino EEPROM, when it decides to fail, loses only the affected locations – in the case of, say, the ESP8266, the entire EEPROM space is rendered unusable.
To many newcomers, the 100,000 limit seems to be a very large number, but the reality can be very different. Anything that reduces the activity (any activity) for the EEPROM is, basically, a good idea and should be incorporated within the program wherever possible.
How does the EEPROM fail when it is written to too many times?
Is the last value still readable? Does the entire EEPROM fail at once? Can a failing EEPROM cause issues executing code that doesn’t try to write to the EEPROM? What about code that doesn’t even reference the EEPROM at all?
Hi Greg.
That’s a great question.
We’ve never experienced writing too many times on the EEPROM, so we can’t tell you exactly what happens.
However, as far as I know, the EEPROM readings are unlimited, so you must be able to read the last value, but cannot update that value.
Regards.
Uhm… I wonder if the eeprom will last long if it will check/read/write in the loop?
That’s the problem – as it’s written, it’s forcing an update to the EEPROM every pass through the loop.
It would be much better to do the update only if there has been a state change – that would reduce the count dramatically and, frankly, the only time you want to save the state is when it has changed…
By incorporating a counter and a Serial.write, this could be a method for telling you how many write/update cycles the EEPROM lasted for before it died – it wouldn’t be long to wait for an answer!!!
Thanks for the response Duncan. That’s what I was thinking about “update only if there has been a state change”. Thanks.
Extending the thinking further, you only really want to save the state in EEPROM when the power is about to fail – but how would you know when that is going to happen?
One Schottky Diode and a very large capacitor do the trick. Feed the capacitor and the micro through the diode and take the input of the diode to a GPIO.
If the power fails, the GPIO goes low, that fires an interrupt that saves the state in EEPROM and puts the Micro to sleep (or not, as the case may be) whilst the capacitor winds down and the micro turns off…
When you reboot, it checks the EEPROM for a valud value.
EEPROM only gets written when it’s actually required and the limited life problem usually becomes irrelevant.
HI Duncan, I was also thinking of something similar. To detect the power failure and only in that case, update the memory.
Would you be able to provide a circuit diagram for the diode / capacitor method you mentioned? Thanks.
Hi Ray..
Thanks.
Hello its showing an error in EEPROM as ‘class EEPROMClass’ has no member named ‘update’. what to do?
Are you using this examples with the Arduino board?
Regards,
Sara
No i am using this code in esp8266 Nodemcu board.
So, that’s the reason.
That function is not available for the ESP8266.
See here:
Regards,
Sara
Ok mam the EEPROM.update() function works for arduino board only. But in EEPROM.write() function it takes 3.3 ms to complete to change its current value state.
For that what to do mam.
Oeps. To quick. Like to say that it is a nice example. Only wondering about the lifespan. Sorry.
Thumbs up, good tutorial, well explaned with pro and contra’s
Hi Gij Kieken.
Thanks for your support.
Well done! Never used an EEPROM or this section of the Arduino, but I can now.
I understand and appreciate Duncan Amos’ explanation of the 100,000 limit. I’m sure learning of the limitation was quite um, educational–I’ve had a few of those, myself.
Well done tutorial, good job.
Thanks, John 🙂
Hello RUI,
It’s a very useful tutorial and also very well explained.
Hi Rajiv Shankar.
Thanks for your support.
Regards
I liked the description of the EEPROM usage, but I have a few problems with the coding.
In the setup block, I think setting the LED should come after the call to checkLedState().
In the main loop, I think that buttonState and lastButtonState are redundant – you only need one of them.
In the checkLedState(), there is no provision for a value that is not 0 or 1. So if the EEPROM has a 2, the function does not work as planned.
Thanks for the EEPROM explanation.
Hi Dan.
In this example, we’ve used the Arduino example sketch called Debounce and then adapted to use the EEPROM.
This example uses the buttonState and lastButtonState variables to check whether the button state has changed.
The checkLedSate() function works just fine with the LED because its state is either 1 or 0.
Regards
step by step explanation is excellent
Thanks. 🙂
There are cases where update is not enough. I can figure out (I know it is bad design) recording of data, with time-stamp -say, every 10 seconds) : one can guess it will need ten days to wear out the second field -and I do not know what happens to the other fields: is Arduino fully destroyed, is EEPROM fully destroyed or do parts remain usable).
If one has short information to write, maybe length – int8_t – should be written before writing data (an empty flash countains 0xFF or zero; test on length, on an empty flash, would return a zero or negative value length -int8_t has sign info…).
The interesting part of the EEPROM would be the last part where length is > 0.
The plawe where to write would be just after the really readble part of the EEPROM (and writing would imply write/update length; write/update data; write/update empty length)
This would involve reading sequentially (adress would be 0, length[0]+1… adress[n]+length[n]+1) and cycling if EEPROM is full. With the same example as before -time stamped data every 10 seconds: say record to be written is 30 bytes…. one can write 30 records, and EEPROM life expectation is therefore, as writes are uniformly spread, multiplied by 30 -alost one year instead of 10 days.)
I do not know wheteher this way of spreading data over the full EEPROM has been implemented (and maybe there are less naive ones)
SOU INICIANTE EM PROGRAMAÇÃO C++ COM ARDUINO, EM QUAL ESCOPO DEVEREI ESCREVER
EEPROM.write (0, 9); PRA QUE EU POSSA VERIFICAR POSTERIORMENTE SE REALMENTE ESTÁ LÁ O QUE ESCREVI ? GRATO ! ANDRÉ.
Olá André.
Não tenho a certeza se percebi bem a pergunta.
Para verificar o que lá escreveu, pode usar:
EEPROM.read(0);
Que vai ler o que escreveu no address 0.
Cumprimentos,
Sara
P.S. Da próximo vez, tente postar os comentários em inglês, para que todos os leitores possam compreender. Obrigada
Please include a tutorial on the ESP8266 EEPEROM’s ( It is called as Flash in ESPs) as well.
Also – what’s the longevity of the Flash, Is it years, months, and how long before it gets corrupted. Things like that would help ( if that’s available )
I would like to update the Flash every second forever ? as the data could be static or dynamic.
BTW: I like all of your articles and most of my google searches in this area include the search words + ‘Randomnerd’ to get what I want from your web site first.
I’m sorry, below lines didn’t copy,
// include the library code:
#include
#include
#include
You can copy the full code directly from this link:
good day… how can be your tutorial applied if im using sms to turn the LED on and not the push button..
thank your for your help
Hi.
Yes, it works the same way.
Regards,
Sara | https://randomnerdtutorials.com/arduino-eeprom-explained-remember-last-led-state/ | CC-MAIN-2020-29 | refinedweb | 2,931 | 72.36 |
I'm thinking about transforming some longitude and latitude values to XML after which delivering the file to some web server to show on the Google Map Interface.
Can you really send an XML file to some web server through my very own application in Android?
Thanks
Yes. Simply make a http publish together with your xml data. You'll need a web server script that handles the xml you published towards the server aswell.
Helpful information
Android includes a public class known as [cde] that will work fine.
Essentially you specify the url the server that you're delivering the HTTP request to and also the content from the XML adopts the Publish area. The net server will get the request and, presuming a php script reaches the URL, the script can grab the contents via its very own
HttpPost global variable and use the XML what must be done. | http://codeblow.com/questions/send-xml-file-to-web-server-in-android/ | CC-MAIN-2019-13 | refinedweb | 151 | 66.98 |
You can download the source code of this tutorial here:
Introducing the MVC Structure
The MVC structure means that the application is divided into three parts. The (M) model defines the data model, (V) view presents data and (C) controller defines logic and manipulates data.
Controllers
Here is a link to the controller section of the official documentation:
Basic Controllers
A controller is a place where you organize the behaviours of your app. For example, when the router receives “
/“, it returns a “
IndexController“.
The “
IndexController” retrieves required information from the database, and then put them in the corresponding location of a view, and finally returns the view to the browser.
This is the most basic use of a controller. Of course, it can do a lot more than that, and I will talk about them as we encounter specific problems later in the tutorial.
To create our first controller. Go to the terminal, type in:
php artisan make:controller IndexController
Go to
app/Http/Controllers/, and you will find a
IndexController.php
<?php namespace App\Http\Controllers; use Illuminate\Http\Request; class IndexController extends Controller { // }
Note that the controller extends the base controller class included with Laravel. The base class provides a few convenience methods such as the
middleware method.
We can try to make it return a string by creating a function inside the class:
class IndexController extends Controller { // public function index() { return 'this is a controller'; } }
We created a method called
index(), and inside the method, it returns a string
'this is a controller'.
In order to test our code, we need to make a router return the “
IndexController“. Go back to
web.php, and create a new router like this:
Route::get('/', [IndexController::class, 'index']);
Notice the second argument is no longer a function, it is
[IndexController::class, 'index'], which means go to the
IndexController, find the
index() method, and execute whatever is in it.
In the browser, go to
We can also make the
IndexController to return a view.
class IndexController extends Controller { // public function index() { return view('welcome'); } }
Refresh the browser, this will get us the same result as we did in the previous article:
Single Action Controllers
Single action controllers are useful when we only need one method in the controller class:
class IndexController extends Controller { // public function __invoke() { return view('welcome'); } }
Now we can change the route we just defined. We no longer need to specify the method.
Route::get('/', IndexController::class);
This code will work exactly the same as before.
Views (Blade Templates)
Let’s reexamine the view
welcome.blade.php, you will notice it is just an HTML file. Yes, views are based on HTML and CSS since they are what you see in the browser, but things are slightly more complicated than that.
If it contains only HTML codes, the entire blog would be static, and that is not what we want. So the view would have to “tell” the controller where to put the data it retrieved.
Template Inheritance
Define A Master Layout
The primary benefit of using the Blade template is that we do not need to write the same code over and over again. Let’s create a
master.blade.php file in the templates folder.
<html> <head> @yield('meta') </head> <body> @section('sidebar') <div class="container"> @yield('content') </div> </body> </html>
Notice the two directives
@section and
@yield. The
@section('sidebar') means Laravel will look for a blade template named “
sidebar.blade.php“, and import it here. The
@yield directive is used to display the contents of a given section.
Extend A Layout
Here, we create another view
index.blade.php.
@extends('master') @section('meta') <meta charset="UTF-8"> <meta name="description" content="Free Web tutorials"> <meta name="keywords" content="HTML, CSS, JavaScript"> <meta name="author" content="John Doe"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> @endsection @section('content') <p>This is my body content.</p> @endsection
When this view is called, Laravel will look for the
master blade template we just created. Replace
@yield('meta') with the
meta section, and replace
@yield('content') with the content section.
With these two directives, we are able to build any type of template hierarchy we want.
Syntax
Displaying Data
Let’s first make some changes to our IndexController:
class IndexController extends Controller { // public function __invoke() { return view('index', ['name' => 'Eric']); } }
we defined a variable
name here, and its value is
Eric. Remember, this is also the new syntax in Laravel 8.
In the index view, we change the content section into this:
<p>Hello, {{ $name }}.</p>
The output will be
Hello, Eric..
Models
Remember I said controllers are in charge of retrieving data from the database? Well, that’s where models come in.
Each database table has a corresponding “Model”, and that model will handle all the interactions with the table.
Define Models
The easiest way to create a model instance is using the
make:model Artisan command:
php artisan make:model Test
If you want to generate the corresponding migration files, you may use the
--migration or
-m option:
php artisan make:model Test --migration php artisan make:model Test -m
Here is an example of a model and migration file:
<?php namespace App\Models; use Illuminate\Database\Eloquent\Model; class Test extends Model { // } <?php use Illuminate\Database\Migrations\Migration; use Illuminate\Database\Schema\Blueprint; use Illuminate\Support\Facades\Schema; class CreateTestsTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('tests', function (Blueprint $table) { $table->id(); $table->string('name'); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop('tests'); } }
We won’t go into details about these right now since it can be quite confusing for beginners. Instead, we’ll talk about them with real examples later in the tutorial.
Retrieve Data
Laravel has an elegant way of retrieving data, with Eloquent. Think of each Eloquent model as a powerful query builder allowing you to fluently query the database table associated with the model.
$flights = App\Models\Flight::where('active', 1) ->orderBy('name', 'desc') ->take(10) ->get();
Again, we’ll go into details about this later.
Related Articles
How to Make Your Server More Secure
Laravel Tutorial For Beginners
Django Tutorial For Beginners
Build A Unit Converter with Vue.js
Discussion (1)
To make the first "IndexController" example work, I had to include "use App\Http\Controllers\IndexController;" at the top of my "web.php" file.
Is it really necessary or am I doing something wrong? | https://practicaldev-herokuapp-com.global.ssl.fastly.net/ericnanhu/laravel-8-tutorial-3-the-mvc-structure-3cpa | CC-MAIN-2021-31 | refinedweb | 1,080 | 53.61 |
index
Query Language (SQL) Tutorials
VisualFoxPro Tutorials...
Flash Tutorials
JSP Tutorials
Perl Tutorials
JSP Simple Examples
JSP Simple Examples
Index 1.
Creating... page.
Html tags in jsp
In this example...
In this example we are
going to show you how we can declare a method and how
JSP Simple Examples
In this example we are going to retrieve the value we have entered in
the jsp... a JSP page.
To precompile a jsp page, access the page with a query string... this attribute if the resulting value
is null. In this example we
query
query how to delete checked record into database using checkbox in jsp
We are providing you the code where we have specified only three..." name="check<%=i%>" value=<%= rs.getString("bookid") %>></td>
how to get the checkbox value in jsp
how to get the checkbox value in jsp how to get the checkbox value in jsp?
JSP CheckBox Example - how to get the checkbox value in jsp
JSP
://
<jsp:useBean id="user.../jsp/simple-jsp-example/UseBean.shtml...how can we use beans in jsp how can we use beans in jsp
How to give the value - JSP-Servlet
How to give the value How to give the value in following query..
"select * from studentinformation where studentid = '?'";
How to give the value into question mark?... Hi Friend,
Try the following code
Jsp Query
Jsp Query Can we display images in jsp file?
If Can, How and please mention code
small query - JSP-Servlet
small query how to set value to textbox on browser side which is retrived from the database? Hi friend,
Plz explain problem in details which technology you have used e.g. JSP/Servlet/Java etc...
For more
Query in jsp
Query in jsp I have one textbox in jsp. when i will write any letter in that textbox i want all values which are starting from that letter in option... in advance.
use AJAX
Can u give me 1 example code
SQL Query - JSP-Servlet
.
Secondly you have to put comma separated value not field direct. for example UPDATE...SQL Query
AS mysql backend updation query shows a syntax error. I gave the full query and the generated error here. Please send me the correct
Query
Query how can i set path and Classpath or Environmental Variable for jsp/servlet program to compile and run
Simple problem to solve - JSP-Servlet
Simple problem to solve Respected Sir/Madam,
I am...
Update
Query
document.getElementById('id').value=
document.getElementById('name').value
JSP
://
EL parser...Can you explain jsp page life cycle what is el how does el search
how to search for string using LIKE operator instead of integer?
how to search for string using LIKE operator instead of integer? dear sir,
i have 3 jsp files, first "index.jsp", "dbtable.jsp", and "table2.jsp... that. but as you see, first query needs integer and second one needs string. i
How to save value in JSP
324 2012-12-12
save
i want to save dis value jsp to action ...how can i get all value ..and store..how can its values goes...How to save value in JSP Employee Name Time-IN Time-OUT
Query
Query How can i call a static variable of one class to another class.
can i change static varibale value.
//A.java
public class A
{
static... original values but when i called second time it shows previous value.
please
How to pass query as parameter? - JSP-Servlet
How to pass query as parameter? Hi Friends,
I have a real tough... of my problem. Please help me solve this problem.
My Problem is:
I have 3 query... not able to pass the value of qry1,qry2,qry3...
Please help me to find the solution
jsp function - JSP-Servlet
jsp function how to make function in jsp... i want example of jsp... tags:
a simple example of JSP Functions
Method in JSP
See the given simple button Example to submit
how to get combo box value - JSP-Servlet
how to get combo box value i have created 1 servlet & 1 jsp page. in servlet page i have fired query & fetched name & its id & that i have shown... to store that id in database. how to get that value??? Hi Friend
NSLog Integer Example
Here is the code example of NSLog function that prints the value of integer.
int num=90;
NSLog(@"The value of integer num is %i", num... the following output:
The value of integer num is 90
Using above code you
OGNL Index
is returned after calling the
toString() on resulting Integer object.
Example of OGNL :
Access value of array using OGNL in struts2.
How... properties of
java object. It has own syntax, which is very simple. It make
WRITE a simple JSP
WRITE a simple JSP Write a JSP that accepts a string parameter from...="msg">
<input type="submit" value="submit">
</form>
</html>... "";
else
return results[1];
}
var value = getParameter("msg");
document.writeln
data retrival from database throw simple jsp..
data retrival from database throw simple jsp.. We can retrieve the the data from data base simple jsp page:
Jsp Page:retrive.jsp
<...;
Statement stmt = null;
String Query="SELECT * FROM STUD";
try
get integer at run time
get integer at run time how to get integer value at run time in j2ee using servlets
Hi,
You can get the value from request parameter... is the example code:
String s =request.getParameter("myvariable");
Integer i
how to insert checkbox value into database using jsp
how to insert checkbox value into database using jsp How to insert check box value to the oracle database using jsp?
I want to create hotel's...;
Here is a simple jsp code that insert the selected checkbox values
How To Read Integer From Command Line In Java
giving a simple example which will demonstrate you about how to
take integer...How To Read Integer From Command Line In Java
In this section we will discuss about how an integer can be read through the
command line.
In Java all
Passing value in hyperlink - JSP-Servlet
Passing value in hyperlink How to pass value in anchor tag or HTML link from one JSP page to another JSP page ?and how to retrieve values in another..., Retrive value from database Retrive data from database User Hi
jsp
jsp ques: how to insert data into database using mysql
//index.jsp
<%--
Document : index
Created on : May 20, 2013, 1:20..." name="sex" value="male">Male
<input type=
query problem
query problem how write query in jsp based on mysql table field?
i have employee table it contain designation field, how write query in jsp... to write this query in jsp please anybody help me, and send me that code
JSP Arraylist Index
in proper sequence.
In this example there are two file index.jsp... and value to the postindex.jsp, which take all values and placed into the ArrayList....
Code for JSP Arraylist index.jsp & postindex.jsp:
index.jsp
<%@page.
jsp to access query
jsp to access query How to insert the values from jsp to access ?
Here is a jsp code that insert the values to MS access database...) Restart your server and run your jsp code.
<%@page import="java.sql.*"%>
<
how to assign javascript variable value to a jsp variable
how to assign javascript variable value to a jsp variable how to assign javascript variable value to a jsp variable
Tutorial | J2ME
Tutorial | JSP Tutorial |
Core Java Tutorial...
example | Java
Programming | Java Beginners Examples
| Applet Tutorials...;
| Java Servlets
Tutorial | Jsp Tutorials
| Java Swing
Tutorials number of drop down lists(depending upon number of books in that category) on jsp
Hibernate 3 Query Example Part 2
Hibernate 3 Query Example Part 2
Hibernate 3 Query Example Part 2
This tutorial explains how to create Hibernate 3 query
example with Spring 3
Hibernate Built-in criterion "between" Integer
minimum and maximum integer value range. In this example I will
fetch the id...; with the integer values.
Hibernate provides the facility to make a Criteria query...Hibernate Built-in criterion "between" Integer
In this tutorial you
EL in jsp - JSP-Servlet
hope that following link will help you.
If you have any problem then explain...EL in jsp hai,
I tried to test EL operators in my jsp
i
passing value fron script to scriplet - JSP-Servlet
passing value fron script to scriplet how to pass value from Java..., or use a the name/value query string.
your first should look something like... (java), you have to submit a form, or use a the name/value query string.
your
Mysql Date Index
.
Understand with Example
The Tutorial grasp you an example from 'Mysql Date Index... rows in set (0.00 sec)
Query to index the date column "today" of table "userform":
The Query is used to create an index on column
JSP
the following links:
jsp - JSP-Servlet
JSP Retrieve value from Database Please explain with the help of an example about how to Retrieve value from Database using JSP? Retrieve value from DatabaseMethods and ways through which you can retrieve values
Integer exception in java
;
The integer class is a wrapper for integer value... a value other than integer, exception
will caught in the catch block and println...;}
}
Output of the program
Enter the integer value
JSP date example
JSP date example
Till now you learned about the JSP syntax, now I will show you how to
create a simple dynamic JSP page that prints
JSP
the following link:... language , it is a simple language for accessing data, it makes it possible to easily access application data stored in JavaBeans components. The jsp expression
Spring Constructor arg index
Constructor Arguments Index
In this example you will see how inject...;
<constructor-arg
<
MySQL allowMultiQueries JSP Example
MySQL allowMultiQueries JSP Example
In this section we will discuss about how... for demonstrating you about how to write multiple
queries in a single query in Java... into a single query value of this
property is required to set to 'true'=='
Java Multiple Insert Query
example that demonstrates how to execute the
multiple sql insert query in JSP...Java Multiple Insert Query
In this example we will discuss about how to execute multiple insert query in
Java.
This example explains you about how
simple web appllication to insert, update or display from database - JSP-Servlet
.
can you please send one example with code to how to cerate web application...simple web appllication to insert, update or display from database ... database content on jsp page. please send complete code.
thank you
mani saurabh
Simple EJB3.0 - EJB
Simple EJB3.0 Hi friends...
I am new user...;>> my question is how to make session bean and how to access this session bean on servlet/jsp.
thanQ Hi Friend,
Please visit
Developing Simple Struts Tiles Application
will show you how to develop simple Struts Tiles
Application. You will learn how to setup the Struts Tiles and create example
page with it.
What is Struts...
Developing Simple Struts Tiles Application
JSP If statement Example
a simple JSP program.
Example : This example will help you to understand how to use...JSP If statement Example
In this section with the help of example you will learn about the "If" statement in JSP. Control
statement is used
sql/xml query in jsp - JSP-Servlet
sql/xml query in jsp Sir
Here is my code which is not working
Testing JSP
select id
from "SYSTEM... with example.
Thanks
how to get session object in simple java class??
how to get session object in simple java class?? i am fallowing a simple mvc architecture.
actually my problem is....
i am using 4 classes in my... into session.
so, please tell me, how to get the session object (GroupPojo) in a simple
Drop Index
.
Understand with Example
The Tutorial illustrate an example from Drop Index... |
+--------+----------+-----------+--------+-------+
Create Index
The create Index Query create an index stu_index1...
Drop Index
Drop save and get value from JSP
How to save and get value from JSP Employee Name Time... 324 2012-12-12
save
i want to save dis value jsp to action ...how can i get all value ..and store..how can its values goes
JSP Tutorials - Page2
XHTML
This JSP example describes how JSP tags for XML can be used... and no body
This example demonstrates how one can make custom tag in JSP...;
Custom Iterator Tag in JSP
This example will demonstrate you how
JSP Value to JavaScript
that accepts the value from the jsp page.
Understand with Example
In this section...
JSP Value to JavaScript
JSP Value to JavaScript tells you
Create a Simple Procedure
how to create a simple
procedure. This procedure display all records of stu... Create a Simple Procedure
Simple Procedure is a set of SQL statement which are executed
jsp query - JSP-Servlet
jsp query in first division there is one table which contains data with edit button... when i will click on edit button that division will become hidden and new division come with all related data
JSP - JDBC
JSP Store Results in Integer Format JSP Example Code that stores the result in integer format in JSP Hi! Just run the given JSP Example Code that stores the result in integer format in JSPName file
JDBC Execute Update Example
Execute Update query is
used to modify or return you an integer value specify... a simple
example from JDBC Execute update Example. In this Tutorial we want... ( ) : This method return you an integer value, this specify the number
JSP - JSP-Interview Questions
://
Thanks... are the comments in JSP(java server pages)and how many types and what are they.Thanks inadvance. Hi friend,
JSP Syntax
XML Syntax
combox value are not show in a JSP - JSP-Servlet
combox value are not show in a JSP i have a combo box in a JSP after submit it does not show its value in logic.jsp
Home.jsp:
<... show null.
please let me how i can find the value of combo box on logic.jsp.
Sending query with variable - JSP-Servlet
Sending query with variable While displaying pages in frames... database and query should have a variable at the end. While using this variable we... page. How can we resolve this.
Static page link:
But in case of static
Simple Query on RDF file in Java
in this section we are going to
describe how to run simple query on the RDF graph
model. Now lets see the example that can fires a simple query. The example is going...
Simple Query on RDF file in Java
JSON-JSP example
will tell you how to use JSON to use it into JSP pages.
In this example we have... string value of that array object's index.
Here is the example code of JSON... JSON-JSP example
JSP Standard Actions 'jsp:setProperty' & 'jsp:getProperty'
;
<jsp:setProperty>
It sets a property value or values in a Bean only...; }
/>
Example :
<jsp:setProperty
name=".... The value of
name must match the value of id in <jsp:useBean>
Query on radio button
Query on radio button I am having a multiple row in jsp page.They... one row only,
i.e I want to fetch the value of row and on submitting the page, the values
should got to a ActionClass.
My query is that how do i send all
JSP get Parameter
JSP get Parameter
JSP get Parameter is used to return the value of a request parameter passed
as query string and posted data which is encoded
Query
input. i want to call repeatedly but first time i got value. but when i called second time i got different value.
Could you please tell me answer.
For Example
how to insert checkbox value into database using jsp
how to insert checkbox value into database using jsp how to insert check box value to the database using jsp
my code is
<link href..." name="pmanager" value="Enter" ></td>
<td align
query
and in second row 1label box and a text box ly how can i align it?
Hi... actionPerformed(ActionEvent e){
String value=text1.getText();
text2.setText(value);
}
});
f.add(lab1);
f.add
How to retrieve data using combo box value in jsp? - JSP-Servlet
How to retrieve data using combo box value in jsp? Hi freind,
I already post this question. I need urgent help from u. pl response me... the following link:
Hope
JSP
JSP How to disbled the combobox until checkbox value is true
JSP Cookies Example
JSP Cookies Example
... how to handle cookies in JSP pages.
In this tutorial you will learn how to add cookies through jsp page and then
show the value of the same cookie in another | http://www.roseindia.net/tutorialhelp/comment/49921 | CC-MAIN-2015-11 | refinedweb | 2,779 | 65.22 |
Question
Barb Simmons was speaking to Madge Wilcox recently after a management meeting at Liberty Corporation. Barb told Madge that she was really disappointed with the company’s recently revised code of ethics and saw no reason why the company should have such a document. “After all,” she said, “everyone knows what’s right and wrong.” Madge was taken aback by her friend’s comment.
Required
What should Madge say to Barb?
Required
What should Madge say to Barb?
Answer to relevant QuestionsAfter several months of speculation, two longtime employees were fired from your company. There has never been any “official word” about why the employees were fired, but there are rumors that the two former purchasing ...Mark is a sales clerk at a clothing store in the local mall. Mark has been short of cash in the past and has prepared fake sales-return forms and taken money from the cash register equal to those forms. The scheme has not ...What three categories of factors make up the fraud triangle? Briefly describe each category.Compare net present value (NPV) with internal rate of return (IRR). What are the principal differences in the two methods?Kuntz Company has a project that requires an initial investment of $35,000 and has the following expected stream of cash flows:Year 1 ......... $25,000Year 2 ......... 20,000Year 3 ......... 10,000RequiredAssuming the ...
Post your question | http://www.solutioninn.com/barb-simmons-was-speaking-to-madge-wilcox-recently-after-a | CC-MAIN-2016-50 | refinedweb | 231 | 66.84 |
The C library function struct tm *localtime(const time_t *timer) uses the time pointed by timer to fill a tm structure with the values that represent the corresponding local time. The value of timer is broken up into the structure tm and expressed in the local time zone.
Following is the declaration for localtime() function.
struct tm *localtime(const time_t *timer)
timer − This is the pointer to a time_t value representing a calendar time.
This function returns a pointer to a tm structure with the time information filled in. Following is the tm structure information − */ };
The following example shows the usage of localtime() function.
#include <stdio.h> #include <time.h> int main () { time_t rawtime; struct tm *info; time( &rawtime ); info = localtime( &rawtime ); printf("Current local time and date: %s", asctime(info)); return(0); }
Let us compile and run the above program that will produce the following result −
Current local time and date: Thu Aug 23 09:12:05 2012 | https://www.tutorialspoint.com/c_standard_library/c_function_localtime.htm | CC-MAIN-2021-25 | refinedweb | 159 | 52.39 |
Gapless Audio Loop
A flutter plugin to enable gapless loops on Android and iOs.
Android may stills see some gaps on older versions like Android 6, newer versions of the SO seems to work fine.
The Android solution is heavily inspired by this article.
At the moment this package is very simple and does not feature many media player functions, the focus on this package is to have a gapless loop, if you have suggestions to improve this package, please file an issue or send a PR.
If you need a more full-featured audio player, I suggest that you take a look on the audioplayers package.
Usage
Drop it on your pubspec.yaml
gapless_audio_loop: ^1.1.0
Import it:
import 'package:gapless_audio_loop/gapless_audio_loop.dart';
final player = GaplessAudioLoop(); await player.loadAsset('Loop-Menu.wav'); // use loadFile instead to use a file from the device storage await player.play();
To stop the loop just call
await player.stop()
You can also pause and resume using the
pause and
resume methods.
Volume
Audio volume can be changed using the the
setVolume method, which receives a double between 0 and 1, which represents the percentage of the total volume of the device, example, if you pass
0.5 it means that the audio will play on 50% of the total volume.
Seek
Seeking can be done by the
seek method, which receives a
Duration object, beware that since this is a looping library, if you call seek to a value bigger than the total duration of the file, unexpected behaviour may occur, so it is highly recommend to avoid that and only use durations that are inside the total length of the file.
Troubleshooting
These are some know reasons for audio files not looping perfect:
- Android 6 does not seems to loop perfectly the files for some reason.
- MP3 usually have gaps due to its compress format, for more info check this question on stackexchange.
- OGG files working only on Android: Unfortunately OGG is not support by iOs. | https://pub.dev/documentation/gapless_audio_loop/latest/ | CC-MAIN-2020-05 | refinedweb | 335 | 61.87 |
I am running a few tests on web application. Assume there is situation where remote connection was successful but for some reason browser did not open ( I have observed this situation), now how do I end this CBT session via code( manually I know how to stop this from crossbrowsertesting site)
I want to stop the session in the place I have marked in RED below
def CBT_LaunchCode(server,capabilities,url): Browsers.RemoteItem[server, capabilities].Run(url) if (Sys.Browser().BrowserWindow(0).Exists): Log.Message("Continue with the test") else:
#Stop CBT instance here via some code Browsers.RemoteItem[server, capabilities].Run(url) #then reconnect here
Solved!
Go to Solution.
Hi @rmaney I see that you got a solution from the Support Team. Let me post it here -
>>
...the issue happens because the error is raised before the Sys.Browser().BrowserWindow(0).Close() call, and the execution of the test item is interrupted according to the project option. This is the expected behavior, TestComplete works in this way. So, you can use the OnStopTest event handler () to close the browser, e.g. like this:def EventControl1_OnStopTest(Sender): pass browser = Sys.WaitBrowser() if (browser.Exists): browser.BrowserWindow(0).Close()
<<
Glad that it worked!
View solution in original post
Is there a reason that you don't want to use the "Sys.Browser().BrowserWindow(0).Close()" snippet?
Also, are you observing that the remote browser windows do not open on a particular step? particular configuratoin? (any further info would be beneficial in debugging this by having more to pass on to our devs)
-
To answer your question a bit more directly, perhaps you could try the Terminate method?
although to be honest I'm not quite sure if that applies to the remote browsers as well...
One last comment is that, if you are seeing that the remote browser window does not open, and that the subsequent test steps are failing, instead of finding an elegant code based way of handling this, couldn't we just as easily modify the current project property settings so that we just stop the test item/project item, etc upon an error? So that if the remote browser does not open, the next subsequent step will fail, and we will get the result back on that right away with some sort of an object not found error
Thank you for the help. I will see if "Stop on error" setting stops the CBT run session.
Was this working a week before? because I remember the tests being queued even after failing. But anyway I will have a look again.
I have "Stop current item On error" and "Stop current item on object recognition" error enabled. But this doesnt end the CBT session when test case 1 fails with some error, from local next test case is already triggered to CBT, and gets queued. I have opened a ticket with Smart Bear for this. Hope I find a solution. Any suggestions from here?
Thanks Justin!
@rmaney I was able to locate your support ticket and see that the investigation is on-going. Please keep us posted when you have a solution from the Support Team!
@rmaney oh, I see that you replied straight to the community notification - this way your message won't make it to the forum.
Anyway, we will be waiting for your results here🙂
Thank you for the help
Compare images using the Region Checkpoint
Converting UTC TimeDate in an Excel file
Compare HTML table with Excel file and correct data in Excel file
How to execute remote test and obtain results via Test Runner REST API | https://community.smartbear.com/t5/TestComplete-Functional-Web/How-to-force-stop-a-CBT-run-session-apart-from-quot-Sys-Browser/m-p/201564 | CC-MAIN-2020-40 | refinedweb | 603 | 63.09 |
For the majority of the time in our React components, the layout can be handled by modules such as Flexbox, CSS Grid, or with some custom CSS. Occasionally, however, we need to render differently depending on the size of the browser window and, therefore, handle the browser
resize events to change what is rendered.
This guide will show you how to use React to render components when the window is resized. First we will do so by simply displaying the width and height of the window. Then we’ll use a more 'real world' scenario by scaling drawing on a canvas depending on its size.
To show the width and height of a window and have the component render when this is changed, we first need to store these values in the component state.-That way, when they are changed, this will trigger a render. They can be initialized using the current
innerWidth and
innerHeight like this:
1const [width, setWidth] = React.useState(window.innerWidth); 2const [height, setHeight] = React.useState(window.innerHeight);
This code uses the state hook; this hook returns an array and the code uses array destructuring to get values for
width and
height and functions that can be used to update the values.
The values are displayed in the component like this:
1return ( 2 <div> 3 <div>{`Window width = ${width}`}</div> 4 <div>{`Window height = ${height}`}</div> 5 </div>);
All that is left to do is update the state when the window is resized. We need a function to do the update:
1const updateWidthAndHeight = () => { 2 setWidth(window.innerWidth); 3 setHeight(window.innerHeight); 4};
To listen for the resize event, we need to add an event listener, which is done in the Effect hook. In the following code
useEffect is called with no second parameter, so it will run every time the component is rendered. It adds an event listener for the
resize event that calls the function to update the component state. Event listeners must also be removed when they are no longer needed and this is done using the return value of the effect hook. This will be executed both when the component is unmounted and, also, before executing the effect again which calls
window.removeEventListener to clean up:
1React.useEffect(() => { 2 window.addEventListener("resize", updateWidthAndHeight); 3 return () => window.removeEventListener("resize", updateWidthAndHeight); 4});
Now, when this component is on a page, it will update the display of the width and height whenever the browser window is resized.
A more 'real world' scenario for handling
resize events is when drawing on a canvas. It is very likely that the scale of the canvas will need to change as the size of the display changes. We will now build a component that will render a canvas, draw some shapes on it, and redraw those shapes at the relevant scale whenever the browser is resized.
First, we need a function that will take a canvas, scale as a parameter, and draw something on it:
1function draw(canvas, scaleX, scaleY) { 2 const context = canvas.getContext("2d"); 3 context.scale(scaleX, scaleY); 4 context.clearRect(0, 0, canvas.clientWidth, canvas.clientHeight); 5 6 context.beginPath(); 7 context.setLineDash([]); 8 context.lineWidth = 2; 9 context.strokeStyle = "red"; 10 context.moveTo(0, 100); 11 context.lineTo(scaleWidth, 100); 12 context.moveTo(0, 400); 13 context.lineTo(scaleWidth, 400); 14 context.stroke(); 15 context.lineWidth = 1; 16 context.strokeStyle = "blue"; 17 context.fillStyle = "blue"; 18 context.rect(200, 200, 100, 100); 19 context.fill(); 20 context.closePath(); 21}
This function gets a 2D drawing context, scales it according to the parameters, clears it, and draws two lines and a solid rectangle.
We also need a
canvas element to be drawn on. We need to keep a reference to this canvas using the useRef hook; this allows the canvas element to be accessed elsewhere within the component code:
1const canvas = React.useRef(null); 2return <canvas ref={canvas} style={{ width: "100%", height: "100%" }} />;
The ref is initialized to
null and is set to the canvas element by the
ref attribute on the element. To access the referenced canvas element
canvas.current is used.
We will need a function to be called when the canvas is resized that will set the canvas
width and
height properties to the actual width and height of the canvas and will set the scale from the current window size. For the purposes of this guide, this function that uses a width and height of 500 to be a scale of one will be used:
1const calculateScaleX = () => canvas.current.clientWidth / scaleWidth; 2const calculateScaleY = () => canvas.current.clientHeight / scaleHeight; 3 4const resized = () => { 5 canvas.current.width = canvas.current.clientWidth; 6 canvas.current.height = canvas.current.clientHeight; 7 setScale({ x: calculateScaleX(), y: calculateScaleY() }); 8};
The scale needs to be stored in component state and will be initialized to one on both axes like this:
1const [scale, setScale] = React.useState({ x: 1, y: 1 });
Whenever the browser window is resized, we need to calculate the scale and update the value stored in the component state. As in the previous example, this is done using the Effect hook with no second parameter to add and remove listeners for the
resize event, this time calling the
resized function:
1React.useEffect(() => { 2 const currentCanvas = canvas.current; 3 currentCanvas.addEventListener("resize", resized); 4 return () => currentCanvas.removeEventListener("resize", resized); 5});
Finally, we need to draw on the canvas whenever the scale changes. To do this we can, again, use the Effect hook but, this time, we will give it a second parameter of the scale state value. This means that the hook will only execute when the scale is changed. Inside the hook, we simply need to call the
draw function passing the canvas and scales as parameters:
1React.useEffect(() => { 2 draw(canvas.current, scale.x, scale.y); 3}, [scale]);
This guide has shown how to listen for
resize events and render components when the window is resized. The patterns used to listen for the events and clean up can also be used for other window events.
The code for the two example components can be found here - Display Window Dimensions and Scaling a Canvas. | https://www.pluralsight.com/guides/render-window-resize-react | CC-MAIN-2022-40 | refinedweb | 1,023 | 66.03 |
After a brief Gitter discussion that starts here and ends here, I decided to take a little time to see:
- What it would take to show Scala documentation in the REPL, and
- Different ways of viewing that documentation
As a result I created this Github project:
Disclaimers
A few disclaimers before we jump in:
- This is just a “proof of concept” (POC) or MVP project
- A lot of the code is written in a “worst practices, no error checking” style because I didn’t expect to spend much time on it
- Everything in this project is likely to change
- There is a work-in-progress Scala 3 REPL
:doccommand that inspired this effort; see those Gitter links for more details
Also, yes, I do know that Ammonite has a src or source command.
Video demo
The easiest way to show this is with a video demo, so here’s an animated GIF. Click the image to see a larger version of it:
As shown, you can always type
help to see the list of available commands.
How it works
The way it works is that you can currently issue these commands:
You can also type
help to show that help output.
When you issue any of those commands, this code goes out to the internet, gets the Scaladoc for the class you specify, then does whatever it needs to do for each command. The one exception is the
browser command, which opens the Scaladoc page in your default browser (but only on a Mac right now).
At the time of this writing, the first
doc command has the worst output, so I generally wouldn’t use it. The second command is starting to have decent output, and looks like this:
def withFilter(p: (A) => Boolean): WithFilter[A, [_]List[_]] Creates a non-strict filter of this list. Creates a non-strict filter of this list (more here) ... def withFilter(f: (A) => Boolean): Iterator[A] Implicit This member is added by an implicit conversion from List[A] toIterableOnceExtensionMethods[A] performed by (more here) ...
NOTE: If the code finds multiple matches for a class name, such as for
Map, nothing works, because I don’t handle that case yet.
Of course a better approach is to get this documentation from the Scala source code jars, but because I was trying to limit my time on this project, I took this other approach.
More information
For more information on how to install and use this extremely experimental POC/MVP code, see the Github repo for this project. The README file there shows how to install and run this on your system. | https://alvinalexander.com/scala/show-scaladoc-source-code-in-scala-repl/ | CC-MAIN-2021-10 | refinedweb | 439 | 57.84 |
]stable/1.7.x]stable/1.7.x]stable/1.7.x by
- Merge pull request #1611 from thusoy/patch-1 Fix broken sphinx reference …
- 13:10 Changeset [751dc0a]stable/1.7.x]stable/1.7.x]stable/1.7.x by
- Fixed #19298 -- Added MultiValueField.deepcopy Thanks nick.phillips …
- 10:42 LittleEasyImprovements edited by
- (diff)
- 10:17 Changeset [d5d0e03]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x by
- Fixed #20978 -- Made deletion.SET_NULL more friendly for …
- 07:12 Changeset [d59f1993]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7]stable/1.7.x by
- Fixed #21033 -- Fixed uploaded filenames not always being truncated to 255 …
- 16:08 Changeset [d6e222f]stable/1.7.x by
- Merge pull request #1607 from Diskun/master Fixed a little mistake in …
- 16:02 Changeset [522d3d6]stable/1.7.x by
- Fixed a little mistake in Django 1.7 release notes
- 14:01 Changeset [df2fd4e]stable/1.7.x]stable/1.7.x by
- Refactored code and tests that relied on django.utils.tzinfo. Refs …
- 13:32 Changeset [ec2778b]stable/1.7.x]stable/1.7.x by
- Fixed #19885 -- cleaned up the django.test namespace * override_settings …
- 12
- 12:19 Changeset [a52cc1c0]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x]stable/1.7.x by
- Fixed #20707 -- Added explicit quota assignment to Oracle test user To …
- 06:03 Changeset [7c6f2dd]stable/1.7.x by
- Simplify FilterExpression.args_check
- 06:03 Changeset [5e1c7d4]stable/1.7.x]stable/1.7.x]stable/1.7.x ….
Note: See TracTimeline for information about the timeline view. | https://code.djangoproject.com/timeline?from=2013-09-10T10%3A42%3A12-07%3A00&precision=second | CC-MAIN-2014-15 | refinedweb | 312 | 63.56 |
Back to article
May 13, 2008
As discussed in Part I,
the execution mode of each policy is determined by the characteristics of the
Management facet that is used by the condition in the policy.
Facets support On
Change Prevent if there is transactional support for the DDL statements that
change the facet state. Only the Login, User and Database Security facets support
the On Change Prevent mode. However, it is usually more important to
prevent or correct a violation than just to log it. Fortunately, we can use SQL
Server Agent alerts and jobs to remedy the limitations. When policies are
executed in one of the three automated modes, if a policy violation occurs, a
message is written to the SQL Server error log and the Application log. The
error message numbers are shown below.
Execution
mode
number
On Change
- Prevent (if automatic)
34050
On Change
- Prevent (if On Demand)
34051
On Schedule
34052
On Change
- Log Only
34053
A SQL Server
Agent alert can be set up to detect the error message and invoke a job to
correct the violation.
Lets look at an example. As we know, it is important to
back up the transaction log of a database that is on Full or Bulk-logged
recovery model regularly so that the transaction log wont fill up. We can
create a policy to check the last time the transaction log was backed up and
make sure it was done within the last day.
1. In
SSMS, expand Management in Object Explorer, expand Policy Management,
right click Conditions, and select New Condition. In the New
Condition dialog box, in the Name field, type Transaction Log
Last Backup Date. Pick the Database facet. In the Expression
area, in the Field box, select @RecoveryModel, in the Operator
box select =, and in the Value field select Full. Create
another clause with @RecoveryModel and a different value of BulkLogged,
and select Or in the AndOr box. Select both clauses and right
click in the highlighted area, then click Group Clause to group the two clauses.
This creates an expression to check if the targeted database is in Full or
Bulk-logged recovery model.
We still need another clause to
check if the transaction log of the database was backed up within the last day.
Select AND in the AndOr field, @LastLogBackupDate in the Field
box, >= in the Operator box, and in the Value field, click on
the
button. This brings up an Advanced Edit dialog box. Type DateAdd('day',
-1, GetDate()) in the Cell value box. Close the dialog box.
2.
Right click Policies in the Object Explorer, and select New
Policy. In the New Policy dialog box, in the Name box, type Safe
Transaction Log Backup Date. Check the Enabled
box to enable the automated execution modes. In the Check condition box,
select the Transaction Log Last Backup Date condition under the Database facet. Select the Online
User Database condition under the Database facet in the Against
targets box. In the Execution Mode
box, select On Schedule as the execution mode, and in the Schedule
box, pick the CollectorSchedule_Every_15min schedule that will run the
policy every 15 minutes. Note that the On Change - Prevent execution
mode is not available for the database facet.
The Online User Database condition is one of the policies that are shipped with SQL
Server 2008. By default, the policies are not installed on the SQL Server. To
import them, right click Policies under Policy Management, and
select Import Policy. This brings up an Import dialog box. Click
the
button in the Files to import box. Another Select Policy
box pops up. Navigate to the directory C:\Program Files\Microsoft SQL
Server\100\Tools\Policies\DatabaseEngine\1033, and select all of the files
under this folder. Click Open to close this box. Click Ok to
close the Import dialog box. SQL Server will import all the policies
under that directory.
SQL Server creates a job called check_Safe Transaction Log
Backup Date_job to evaluate the Safe Transaction Log Backup Date policy
every 15 minutes. If you look at the only job step of this job, you see that SQL
Server Agent uses a PowerShell cmdlet called Evaluate-Policy to evaluate the
policy.
If the transaction log of a database is found not backed up
within the last day, an error message with an error number 34052 will be logged
into the SQL Server error log and the Application log. In our example, we have
a user database called Matrix that has not been backed up. Once the policy is
evaluated by the check_Safe Transaction Log Backup Date_job job, a red cross appears
on the Matrix database, which means the Matrix database violates the policy.
An error message is also logged into the SQL Server error
log.
Error: 34052, Severity: 16, State: 1.
Policy 'Safe Transaction Log Backup Date' has been violated by target '/Server/POWERPC/Database/Matrix'.
We can implement an alert to detect the error 34052 and
invoke a job called Fix Transaction Log Backup to parse the error message and
back up the transaction log of the violating database. The error message is
passed by the alert to the job in an (A-MSG) token. The job contains one
Transact-SQL script job step with the following script.
DECLARE @errormsg nvarchar(800), @start int
DECLARE @policyname sysname
DECLARE @dbname sysname
DECLARE @sqlstring nvarchar(800)
SET @errormsg = N'$(ESCAPE_SQUOTE(A-MSG))'
SET @start=9
SET @errormsg = SUBSTRING(@errormsg, @start, LEN(@errormsg) - @start)
SET @policyname = SUBSTRING(@errormsg, 1, CHARINDEX('''', @errormsg)-1)
SET @start=CHARINDEX('Database', @errormsg) + 9
SET @dbname = SUBSTRING(@errormsg, @start, LEN(@errormsg) - @start)
SET @sqlstring = 'BACKUP LOG ' + @dbname + ' TO DISK=''C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\'
+ @dbname + '_Log.bak'''
print 'sqlstring: ' + @sqlstring
exec (@sqlstring)
In our example, we create an alert called Unsafe
Transaction Log Backup Alert to invoke the job. This alert detects any errors
with a number 34052 and a message text containing the policy name, Safe
Transaction Log Backup Date. The Fix Transaction Log Backup job is
also specified in the Response pane of the alert, and it will run when
the alert is raised.
After the Unsafe Transaction Log Backup Alert alert is
defined, the next time the policy is evaluated, an alert is raised
automatically and the Fix Transaction Log Backup job is invoked to back up
the transaction log of the Matrix database. Here is the output from the job in the
job history.
Date 4/20/2008 8:30:03 PM
Log Job History (Fix Transaction Log Backup)
Step ID 1
Server POWERPC
Job Name Fix Transaction Log Backup
Step Name Back Up Log of Violating Database
Duration 00:00:00
Sql Severity 0
Sql Message ID 3014
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted 0
Executed as user: POWERPC\Yan. sqlstring: BACKUP LOG Matrix TO DISK='C:\Program
Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Matrix_Log.bak' [SQLSTATE
01000] (Message 0) Processed 3 pages for database 'Matrix', file 'Matrix_log' on file 1. [SQLSTATE
01000] (Message 4035) BACKUP LOG successfully processed 3 pages in 0.037 seconds (0.607
MB/sec). [SQLSTATE 01000] (Message 3014). The step succeeded.
As we can see above, the job backed up the transaction log
of the Matrix database successfully.
If you got an error message
Variable A-MSG not found, you need to enable alert tokens. Right-click SQL
Server Agent in Object Explorer, select Properties, and on the Alert
System pane, select Replace tokens for all job responses to alerts
to enable tokens.
In this article, we have shown you how to use SQL Server
Agent alerts and jobs to fix policy incompliance automatically.
»
See All Articles by Columnist Yan Pan
The Network for Technology Professionals
About Internet.com
Legal Notices, Licensing, Permissions, Privacy Policy.
Advertise | Newsletters | E-mail Offers | http://www.databasejournal.com/features/mssql/print.php/3745161 | CC-MAIN-2015-48 | refinedweb | 1,303 | 62.38 |
I fundamentally believe that there has never been a better time to be a software developer. In our emerging world where every company is a software company, developers have the awesome role and responsibility of driving forward all kinds of innovation across many different industries. Whether building apps for enterprises or consumers, whether harnessing the mobility and intimacy afforded by devices or the scale and economy enabled by the cloud, whether pursuing the art of development as a hobby or as a profession, and whether new to the game or a seasoned veteran, developers everywhere have the potential to build creative and compelling solutions that delight and transform the world. We at Microsoft strive to ensure developers have the tools they need to thrive while doing so.
In that context, we have our Build 2013 developer conference this week in San Francisco, where approximately 5000 developers have gathered in person (with many thousands more watching virtually) from around the world to discuss the next generation of software development with platforms and tools from Microsoft. These developers have built, are building, and will build amazing and innovative experiences for Windows, from the client to the cloud, doing so productively using the latest suite of development tools and services available in Visual Studio.
I’m excited to share several announcements we’re making today.
Visual Studio 2012 Update 3 Now Available
We launched Visual Studio 2012 in September 2012, and at that time we promised we’d provide more frequent updates to Visual Studio than we had in the past. Since then, in November we released Visual Studio 2012 Update 1 (VS2012.1), and in April we released Visual Studio 2012 Update 2 (VS2012.2). Both of these cumulative updates contained significant new features sets, spanning areas such as Windows desktop development, Windows Store development, line-of-business app development, SharePoint development, agile planning and teams, quality enablement, and more.
I’m happy today to announce that this morning we released the third update for Visual Studio 2012: Visual Studio 2012 Update 3 (VS2012.3).
Visual Studio 2013 Preview and .NET 4.5.1 Preview Now Available
At TechEd North America 2013 a few weeks ago, we announced the next version of Visual Studio, and shared some of the progress we’ve made with Visual Studio 2013 and Team Foundation Server/Service in enabling the modern application lifecycle and DevOps. We highlighted a few of the many new capabilities in the release, such as support for agile portfolio management, cloud-based load testing, a team room integrated with TFS, code comments integration with TFS, and Git support.
Continuing on that journey, today at Build 2013 we’re unveiling some of the significant advances we’ve made around building stunning apps for Windows. And in that light, along with Windows 8.1 Preview being released, I’m excited to announce that Visual Studio 2013 Preview and .NET 4.5.1 Preview are now available for download, and as “go-live” releases.
.NET
Along with Visual Studio 2013, today we’re announcing .NET 4.5.1. .NET 4.5.1 is a highly-compatible, in-place update for .NET 4.5 that ships as part of Windows 8.1. The .NET 4.5.1 Preview installs as part of Visual Studio 2013 Preview, is included in all installations of Windows 8.1 Preview, and is also available for separate installation into Windows 8, Windows 7, Windows Vista, and the corresponding Windows Server releases.
Much of our work in this release has gone into improving the diagnostics experience for .NET developers. For example, the Visual Studio 2013 debugger can now show the return values of managed methods as you step through code. This is particularly useful when you write method invocations inline as parameters to other invocations:
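A minimal sketch of the kind of code where this helps (the methods here are illustrative, not from the product):

```csharp
using System;

class ReturnValueDemo
{
    static int Square(int x) { return x * x; }
    static int Add(int a, int b) { return a + b; }

    static void Main()
    {
        // Stepping over this line in the Visual Studio 2013 debugger
        // surfaces the return values of Square(3) and Square(4) (for
        // example in the Autos window), even though those intermediate
        // results are never assigned to local variables.
        int sumOfSquares = Add(Square(3), Square(4));
        Console.WriteLine(sumOfSquares); // prints 25
    }
}
```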
As a more impactful example, one of the long-standing requests we’ve had from developers (who have provided thousands of requests on UserVoice) is to enable “Edit and Continue” for 64-bit; with .NET 4.5.1, this is now available, enabling you to change your running .NET code (whether in a 32-bit or 64-bit process) while stopped at a breakpoint in the debugger, without having to stop and restart the process and with it the debugging experience.
With all of the work we did for C# and Visual Basic in .NET 4.5 and Visual Studio 2012 to enable more productive asynchronous programming, I’m particularly excited about improvements we’ve made in this release to support async debugging (you need to be using Visual Studio 2013 on Windows 8.1 to get this capability, as the debugger relies in part on some new operating system support to enable it).
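To make this concrete, here is a minimal sketch of my own of the kind of code involved (the URL is a placeholder): with a breakpoint inside the innermost method, the async-aware debugger can show the logical chain of awaits rather than only the compiler-generated state-machine frames.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class AsyncDebugDemo
{
    static void Main()
    {
        RunAsync().Wait();
    }

    static async Task RunAsync()
    {
        int length = await FetchLengthAsync();
        Console.WriteLine(length);
    }

    static async Task<int> FetchLengthAsync()
    {
        using (var client = new HttpClient())
        {
            // Set a breakpoint here: the call stack can show that this
            // frame was logically reached from RunAsync, across awaits.
            string body = await client.GetStringAsync("http://example.com/");
            return body.Length;
        }
    }
}
```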
For those of you who develop with JavaScript or C++ and who are eyeing this jealously, you’ll be happy to know that we’ve enabled similar async debugging experiences for both languages!
Beyond diagnostics support, .NET 4.5.1 includes performance improvements, such as support for on-demand compaction of the GC’s large object heap, and faster startup of apps when running on multicore machines. It also includes support that will enable us to be more agile in how we deliver core capabilities for the .NET development experience in the future.
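For instance, the on-demand compaction of the large object heap is opt-in, requested through the GCSettings API; a quick sketch:

```csharp
using System;
using System.Runtime;

class LohCompactionDemo
{
    static void Main()
    {
        // New in .NET 4.5.1: ask the GC to compact the large object heap
        // during the next full, blocking collection. The setting reverts
        // to Default once that collection has run.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();

        Console.WriteLine("Large object heap compaction requested and run.");
    }
}
```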
For more information, I'd recommend starting with Habib Heydarian’s Build talk “What's new in .NET Development”. Build 2013 sessions will be available for viewing from Channel 9 within a day or so of the session.
C++
One of the biggest requests we’ve had from C++ developers is for more C++11 standard support. We initially released a CTP of such capabilities in November, and I’m happy to say that those capabilities are now included in Visual Studio 2013 (this is also an area in which we plan to continue investing beyond this release). This includes C++11 features like delegating constructors, raw string literals, explicit conversion operators, and variadic templates.
This is only one of multiple improvements specific to C++ developers in Visual Studio 2013 (many of the improvements discussed throughout this blog post apply regardless of language). For example, .NET developers have benefited from the debugger’s “Just My Code” feature for several releases; now in Visual Studio 2013, C++ developers have the option of “Just My Code” support as well, such that the debugger will hide information not relevant to the developer’s code (e.g. in the call stacks window, hiding frames in methods from the CRT), making it easier for the developers to focus in on areas that are most relevant to their debugging needs:
(Left: Without "Just My Code". Right: With "Just My Code")
The Visual Studio editor includes several improvements specifically for C++, such as support for easily navigating back and forth between code files and their associated header files, and much improved capabilities around the formatting of C++ code.
Other improvements for C++ in Visual Studio 2013 include performance improvements to C++ AMP and the C++ auto-vectorizer; the C++ REST SDK, which is both included in Visual Studio 2013 and available as an open source project on CodePlex; debugging improvements, including mixed-mode debugging between JavaScript and C++ as well as the async debugging support I previously mentioned; improvements around Profile Guided Optimization; and more.
For more information, I'd recommend Tarek Madkour’s Build talk “What’s New in Visual Studio 2013 for C++ Developers”. And if you’re interested in where things are headed, see Herb Sutter’s Build talk “The Future of C++”.
XAML
Whether using .NET or C++, we’ve improved the development experience for using XAML in Windows Store apps. In addition to some significant performance improvements for the XAML designers in Visual Studio and Blend, we’ve made many improvements to the XAML editor experiences in Visual Studio, including IntelliSense for data binding and resources, “Go To Definition” support for navigating styles, and code snippets. The design experience in Blend now sports guides that allow you to achieve pixel-perfect layouts, and Blend’s improved style editing experience allows you to edit styles and templates in the context of their usages. We’ve also have added design-time experiences for all of the new Windows 8.1 XAML controls like AppBar, Hub, and Flyout, and we’ve updated the Device panel to support authoring for the new view states.
As someone who really values snappy and responsive apps on my devices, one of my favorite new capabilities related to XAML is the XAML UI Responsiveness tool. In VS2012.2, we introduced the HTML UI Responsiveness tool, focused on profiling the responsiveness of Windows Store apps implemented with HTML and JavaScript; now with Visual Studio 2013, the new XAML UI Responsiveness tool provides similar support for Windows Store apps implemented with XAML, so you can easily track down and fix glitches, freezes, and other performance anomalies in your modern UIs.
And for improved quality of your Windows Store apps, Visual Studio 2013 now also supports coded UI testing with XAML.
For more information about such improvements, the following Build talks should be helpful:
- Tim Heuer's Build talk "What's New in XAML"
- Unni Ravindranathan’s Build talk “What's New in Visual Studio & Blend for XAML Developers”
- Harini Kanan's Build talk "Creating Your First App Using XAML"
- Pratap Lakshman’s Build talk “Visual Studio 2013 Diagnostics Tools for XAML-based Windows Store Apps”
- Prachi Bora’s Build talk “Automated Testing of XAML-based Windows Store apps”
JavaScript and HTML
As with Visual Studio 2012, a lot of effort has gone into support for HTML and JavaScript in this release, in particular around building Windows Store apps (I’ll look at some developer tooling improvements we’ve made for web apps and sites in a subsequent post). I’ve already mentioned some capabilities, such as the much improved support for async debugging of JavaScript, but the improvements go well beyond that.
To start, the core Visual Studio experience around JavaScript has been enhanced. For example, “Go to Definition” now supports navigating namespaces, IntelliSense includes notes about deprecated APIs, and the editor both supports identifier highlighting and includes a navigation bar that makes it easy to quickly jump around in the source.
We’ve also made notable improvements to the DOM Explorer and the JavaScript Console. For example, the DOM Explorer now supports IntelliSense, search, direct editing, and inline styles, while the JavaScript Console has been augmented to support IntelliSense, object preview and visualization, and multiline function support.
Blend for HTML has also been enhanced this release. For example, in addition to updates referenced previously when discussing XAML (e.g. rulers and guides for better laying out content), Blend now includes a timeline for animating changes to CSS.
We’ve also made big improvements around diagnostics for Windows Store apps, including but not limited to those implemented with HTML and JavaScript.
For more information on building Windows Store apps with HTML and JavaScript, the following Build talks should be helpful:
- Polita Paulus and Ryan Salva's Build talk "Creating your first app using HTML and JavaScript".
- Ryan Salva's Build talk "What's New in Blend for HTML Developers"
Diagnostics
I’ve already mentioned multiple improvements to the diagnostics capabilities of Visual Studio, such as support for async debugging, the new XAML UI Responsiveness tool, 64-bit “Edit and Continue”, and “Just My Code” for C++. Visual Studio 2013 also now sports a brand new Performance and Diagnostics hub, which makes it easy to find performance and diagnostics tools in one convenient location.
A new tool available from the hub is the Energy Consumption tool. Battery life is of primary importance to device users, and just as the resource consumption of an app in the cloud has an impact on the cost of running that application, so too does the resource consumption of an app on a device have an impact on the battery life of that device. To assist with this, Visual Studio 2013 includes a new Energy Consumption profiler, which enables developers to estimate how much power their app will cause the device to consume, and why, e.g. a particular region of code utilizing more CPU time than was expected, or a particular pattern of network calls resulting in the device’s radio needing to turn on more frequently than was expected.
Another area of investment has been around managed memory analysis. Often, developers have an application running in production, and they want to understand what .NET objects exist in the process; this is particularly important when trying to track down a possible memory leak. Visual Studio 2013 now includes support for analyzing managed heaps, such that a .dmp file can be loaded into Visual Studio, enabling the developer to “Debug Managed Memory”.
The developer is then able to explore the .NET objects in the process, and even compare two different dumps of the same app.
For more information, I suggest the following Build talks:
- Andrew Halls’ “Diagnosing issues in JavaScript Windows Store Apps with Visual Studio 2013”
- Pratap Lakshman’s “Visual Studio 2013 Diagnostics Tools for XAML-based Windows Store Apps”
Connected IDE and Connected Apps
In the past, I’ve talked on this blog about connected apps, with native front-ends that connect up to back-end services in the cloud. Visual Studio 2013 is one such application, a connected IDE that, for example, knows your identity and will roam/synchronize your settings (e.g. UI theme, keyboard shortcuts, text editor configuration, etc.) from one installation to another via backend cloud services. The same connected pattern applies to the apps you build, with services such as Windows Azure Mobile Services providing the cloud backend.
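As a rough sketch of what such a connected client can look like with the Mobile Services managed SDK (the service URL, application key, and TodoItem type below are hypothetical):

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class TodoItem
{
    public int Id { get; set; }
    public string Text { get; set; }
}

public class TodoStore
{
    // Hypothetical service URL and application key.
    private readonly MobileServiceClient client = new MobileServiceClient(
        "https://contoso-todo.azure-mobile.net/",
        "YOUR-APPLICATION-KEY");

    public Task SaveAsync(string text)
    {
        // Inserts a row into the service's TodoItem table; the backend
        // assigns the Id and sends it back to the client object.
        IMobileServiceTable<TodoItem> table = client.GetTable<TodoItem>();
        return table.InsertAsync(new TodoItem { Text = text });
    }
}
```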
For more information, I'd suggest starting with the following Build talks:
- Josh Twist's Build talk "Mobile Services - Soup to Nuts"
- Nick Harris' Build talk "Build connected Windows 8.1 Apps with Mobile Services"
Developer Experience
We’re continually examining how developers work and what improvements we could make to the core developer experience in the Visual Studio IDE to improve productivity. Visual Studio 2013 includes several such enhancements.
One such feature is the CodeLens (Code Information Indicators) capability we introduced at TechEd. This feature brings useful information about types and type members directly into the editor, information such as the references to a particular method, how many tests are referencing a method and how many of them are passing, who last checked in a change that modified a method, and how many changesets impact a method.
Another such feature meant to streamline a developer’s productivity is “Peek Definition.” Visual Studio already supports “Go To Definition,” which opens a new document window to host the file containing the definition for the referenced symbol, and as of Visual Studio 2012 supports opening this in a “preview” window. Now in Visual Studio 2013, Peek Definition gives developers the option of viewing the file that defines the target symbol inline, as part of the current document.
As another example, based on one of the Productivity Power Tools we previously released, the scrollbar for the editor has also been enhanced to show information at a glance about where edits in the file are currently pending, where the last save to the file made changes, where breakpoints are defined, where bookmarks are set, and the like.
I'd recommend Cathy Sullivan’s Build talk “What’s New in the Visual Studio 2013 IDE” for more information about such improvements.
Start Exploring
Beyond the new Visual Studio 2013 application lifecycle management capabilities outlined at TechEd a few weeks back, this post only scratches the surface of what’s new in Visual Studio 2013 Preview. From new Windows Store app templates, to Python support in Visual Studio, there is much more to explore. And there are many great Build 2013 sessions to help you do so.
You can download the Visual Studio 2013 Preview today, start learning about what it has to offer, and provide feedback to the team about the direction we’re heading and the experiences enabled. I'd also suggest following the developer tools blogs for more details about these releases.
Namaste!
Follow me on Twitter.
Disappoint. I suggest to call it VS 2012.4
Download links for the Update 3 Web installer are broken
I may have misunderstood, but does the VC2013 compiler shipped today really have the exact same C++11 support as your Nov 2012 CTP nine months ago? Really?
I was hoping for more C++11 support than what was available in the November CTP.
"this is also an area in which we plan to continue investing beyond this release"
This is exactly what was said last year by Herb Sutter about VS 2012.
Being a C++ developer tied to Visual Studio is getting increasingly frustrating especially when we compare C++11 support against Clang and GCC. I guess we need to look forward to VS2014 or VS2015 before we get similar standard support.
@blaz, @Alastair, the "What's New for Visual C++ in Visual Studio 2013 Preview" is not yet live, but it lists the following C++11 features: default template arguments for function templates, delegating constructors, explicit conversion operators, initializer lists and uniform initialization, raw string literals, and variadic templates. It also includes rvalue/lvalue ref casts.
See blogs.msdn.com/…/visual-studio-2013-preview-now-available.aspx for links to the MSDN topics. They go live soon!
-eric
Yay, another "Highly compatible" framework update that will mean we have to test ~50 web applications before we can update the production servers.
What is wrong with new standalone versions instead of this incremental rubbish?
This post was a summary, and only of today's Preview build. As mentioned in the post, I have a talk on Friday that's all about VC++ and the ISO standard. It'll be webcast live, or available later on demand.
Can you restore the icons from visual studio 2010? Even with the color add on, I find the Visual Studio 2012 UI to be hideous.
Funny, I'm actually surprised at how much VS 2013 introduces. Async debugging is of course big, C++11 as well, and much improved DOM code assistance like IntelliSense. They're all impactful improvements for various programming languages.
@ Kat – I'm with you. After they went through the horror that was overlaying 4.0 with 4.5, it looks like MS decided to thumb their collective noses at developers AGAIN with an in-place upgrade.
For our projects, we backed them down to 3.5 code to avoid all of the "is 4.0 or 4.5 installed?" issues. Now that they are doing another in-place update, there is a very good chance that our shop will look at 3.5 as the ending point of MS dev.
The problem now becomes not how we deal with VS and overlaid frameworks, but one of which platforms do we target in the future and MS is looking less and less likely with every release.
MS gave its user community the middle finger earlier this week by refusing to consider bringing back a heavily used feature that was removed in VS 2012 (visualstudio.uservoice.com/…/3041773-bring-back-the-basic-setup-and-deployment-project-). Now I'm supposed to get excited about new features in VS 2013? Sorry, but I'm done with Visual Studio.
@LS and @Kat – Thanks for sharing your concerns.
Please take a look at our 4.5.1 announcement post (blogs.msdn.com/…/announcing-the-net-framework-4-5-1-preview.aspx). It talks about how we’re looking at .NET Framework updates going forward. We’d appreciate your feedback on it. In fact, your scenario is one of the ones that we have in mind.
We’ve put a lot of effort into ensuring that the .NET Framework 4.5.1 is a highly compatible release. We’ve tested it with a lot of workloads, including with external companies. Where we received reports of issues with the .NET Framework 4.5, we broadly distributed fixes quickly.
I assume that you are making reference to .NET Framework deployment as it relates to client apps. You can always redistribute the .NET Framework version that you depend on. Many developers do that. If you do that, then you’ll know that you have the right .NET version. As a fall-back, we always include the .NET version that you need in your app.config file. In the case that this version isn’t on the machine, we’ll ask the user to install that particular version. That’s a last resort, but ensures that there is a good end-to-end experience for .NET apps on customer machines.
Also, if you feel that there is some guidance missing that would help your company adopt 4.5 or 4.5.1, we’d like to know about that. You can contact us @ [email protected].
Thanks — Rich Lander [MSFT]
I can't tell from the pictures – have the godawful monochrome icons from VS 2012 been fixed yet?
@ Rich – 4.5.1 is not just enhancements, there are bug fixes. We can't control which versions of the runtime that our customers have installed. It is the EXACT same scenario that we went through with 4.0/4.5. There are many hundreds of posts on uservoice about this and thousands of votes.
You have once again put developers in an untenable position.
If you don't mind sharing, what was the thought process in making 4.5.1 overlay 4.5 after the very angry responses from developers over the exact same situation with 4.0/4.5? The reason I ask is that, as an outsider (i.e., one who does not work for MS), it sure looks like MS likes treating developers like a red headed step children and it is cool to abuse them.
"this [C++ features & support] is also an area in which we plan to continue investing beyond this release"
Nope. You people said that about 2012 and it turned out to be untrue. I don't see why we should believe you this time. In fact, I'll say it plainly: I don't believe you.
It's difficult to justify the need for Microsoft developer tools when it's clear that developers are no longer important to Microsoft. There are alternatives with far better language support.
this is the end of micro$oft
As usual, not a single problem I have has been fixed or improved by a new version of Visual Studio. You want to help real world developers? Quit "dead ending" platforms. Unmanaged VB6 should have worked in VB.net. Windows Forms to ASP or WPF should have been a one button click issue. (Yes, the models are different. No ***** Sherlock). Forms of automated migration were still possible. How about making Winform hosting in WPF actually *work*? And whither Silverlight? Or WPF? No. I know they're not "going away" but if they're not actively supported by Microsoft, they might as well do so.
Does it occur to anybody at Microsoft that developers invest a lot of money, time and sweat into these technologies, and that "Recode" is the WRONG ANSWER?
Given the fact that whatever Microsoft is coming up with this year will probably disappear one day, tell me again what would motivate me to invest one more second in a Microsoft technology, as opposed to say…Java?
>> we’ve made many improvements to the XAML editor experiences in Visual Studio
Does this mean that WPF will get these improvements? I know WPF is no longer "blessed" but there is no way that a store app even can get close to meeting the needs of a normal enterprise.
where is full c++11 support?????? buy clang!!!
Wait a moment! When you released VS2012 you said that there will be an update with more C++11 standard conformance! Now you're telling us, that we have to buy VS2013 to get that features?!
Thanks MS for believing in the "C++ renaissance"…
The sooner Clang gets ported to Windows the better…
"This includes C++11 features like delegating constructors, raw string literals, explicit conversion operators, and variadic templates"
But still not feature complete C++11 support. So still *years* behind gcc and clang.
And you're boasting about that?
Is there going to be proper upgrade pricing or will it be the case as it was with 2010 in that you had to purchase the full product AGAIN. If that's the case I will do what I have done with 2012 and pirate it. If Microsoft are happy to commit daylight robbery I am happy to pirate again. It would also be pleasant if the UI blended properly with the windows 7 interface not look like a badly fitted window like it does now.
Is VS 2013 a free update to VS 2012 or does Microsoft really expect us to pay another $700+ for full C++11 support?!
@Alan, "Is VS 2013 a free update to VS 2012 or does Microsoft really expect us to pay another $700+ for full C++11 support?!"
What do you think?
Meanwhile, std::thread is still BROKEN even after three updates to 2012. It still leaks. Still broken. I haven't looked yet but I'm assuming std::atomic is also still BROKEN and still generating the wrong code for an atomic load on x86.
Apparently if we want a c++ library that's less broken we need to buy 2013 less than a year after we bought and installed 2012.
And the kicker? 2013 is hardly "full" support for the C++ language spec. It's more support than the still brand new 2012, but not full.
This is shameful behavior, Microsoft. It is indefensible.
"We also plan to ship ISO images for the RTM version of future Visual Studio updates, in addition to the existing distribution mechanism we already provide. We hope you will enjoy this additional option for downloading Visual Studio Updates."
blogs.msdn.com/…/announcing-availability-of-isos-for-visual-studio-updates.aspx
Okay… so where's the ISO image for Update 3?
Is there anything planned around updating the code review tooling? As a former MSFT employee, I really miss CodeFlow. The current code review tools are disappointing as they do not support iterations on the review itself. Having all CodeFlow functionality in Visual Studio would make the higher SKUs worth their price, IMO.
VS 2013 = VS 2010 sp2 with lots of crap
…never been a better time to be [an Apple] software developer.
There. Now it follows.
Like the DOM explorer. If mainly for writing web front, may I suggest my LIVEditor? It's mainly a combination of a lightweight code editor (with html/css intelliSense), a Webkit-based browser and a html inspector (find and go to applied css styles). Website:.
Async Debugging features and Peek Definition are Awesome.
So after all the promises about upcoming C++11 compliance in 2012 updates, you will force us to pay AGAIN for Visual Studio less than one year after 2012 release?!
How could we believe in your C++ renaissance broken promisse now?
Before .NET existed, adding C++ extensions for COM support was more important than standards compliance, then came .NET and Managed C++/C++ CLI were more important than standards compliance, out comes WinRT and guess what C++/CX is more important than standards compliance!!!
How can we put money into this company products if open source compilers, community driven mostly, are able to achieve standards compliance way better than whatever Microsoft's Marketing department decides to support.
At least have the decency of offering VS 2013 updates for free to all that believed in you and bought VS 2012.
@Gigaplex, please read "Install instructions" on this page for information on how to obtain the ISO image for VS2012.3:…/details.aspx
Would be useful to host images somewhere other than SkyDrive (which I think is what blu.livefilestore.com corresponds to). Lots of corporate environments block such file sharing resources.
…"From new Windows Store app templates, to Python support in Visual Studio"
Hey, when you are saying: Python Support in Visual Studio, do you mean PTVS 2.0 (which is in beta now)? Or do you mean that Python is now fully Integrated in Visual Studio without needing to install extra stuff. Or what do you mean? I am very interested in that!
> In our emerging world where every company is a software company, developers have the awesome role and
> responsibility of driving forward all kinds of innovation across many different industries.
Let's all drive forward by bringing back setup and deployment projects!
@Moondevil. Completely agree. Maybe MS will surprise us and announce a free upgrade to all those people who bought vs2012. Don't wish to sound negative, I love Windows 8 and Windows Phone but I worry MS are going to just get lots of very negative comments about vs2013 (especially from the C++ devs) unless they clear up pricing issues.
+1 for Stephen 26 Jun 2013 2:56 PM
It's really needed that XAML improvements are ported to WPF also.
Is there any improvement for WPF in here?
Joe +1
@ho7 re Python in VS2013
That is correct, the Python support is referring to PTVS. Starting with 2013, PTVS will have a download template in VS where you can directly install PTVS and a Python interpreter of your choice with "two" clicks. Whether PTVS will ever be physically in the box, that's TBD. We have to weigh that against being a rapid-release OSS product… we'll see & look fwd to customers' input.
PS installation is pretty quick: see this video –
Have you replaced the awful monochrome icons and the all-caps menus yet? Features are great, but if the UI sucks no one will want to use it.
hmmm… still no answer to our question… do we have to pay another $700+ to get more C++11 support? clang/gcc here we come !
(I'm Microsoft's maintainer of the STL.)
SB> std::thread is still BROKEN even after three updates to 2012. It still leaks.
That's actually a spuriously-reported memory leak, not a physically-unbounded one. The problem was that constructing a std::thread needs to initialize an internal mutex, but only one for the lifetime of the CRT/STL. This initialization allocated memory, but didn't mark it as a CRT-internal block, causing it to be reported by the leak-detecting machinery. In VS 2013, we've fixed this, and we've added cleanup code to deallocate this memory at CRT/STL shutdown (this is irrelevant to the vast majority of programs which load/unload the CRT/STL exactly once, but it's still the right thing to do).
SB> I haven't looked yet but I'm assuming std::atomic is also still BROKEN and still generating the wrong code for an atomic load on x86.
In VS 2013, we've overhauled <atomic> to generate optimal code on x86/x64/ARM. (Previously it generated correct but highly inefficient code.)
If I use C++ with VS, does it have UI layout functionality like Qt?
Using the Qt development process is much faster than just using VS.
Best time to be a developer? You have to be kidding; that thought died with VS 6.0. Microsoft is a non-player in all modern "used" technology and offers zero compatibility. Visual Studio is about as effective as assembler, with more bugs. Open your eyes. Nothing speaks of productivity like waiting for a dev tool to start, crash, and start again. The advantage is we can catch up with all of our friends on facebook while waiting for Visual Studio to recover from a crash.
Forgive my frustration, Stephan. But you have to understand, 2012 still smells like a new car. There are still bits of shrinkwrap here and there. And it's frustrating to see that even something like the std::atomic fix — which amounts to 2 or 3 very short lines — goes into the new-new product but not the new product.
I physically cringed when I heard Ballmer going on about "rapid release cycles" during the keynote. Surely that can't apply to developer tools? Some of us can't get away with using the free version of VC. But even if we could, surely it's understood that adopting a new product annually is disruptive. It's certainly not cost effective.
Decisions that should be made by developers, for developers, are instead being made by marketing, for consumers. That's a terrible state of affairs. Someone at MS probably thinks this is a great way to sell more copies of Visual Studio. But realistically, anyone who isn't strongly tied to Windows as a platform will be evaluating whether it might be time to exercise other options. And I think it's safe to say that C++ developers are the group most likely to have alternatives, given the wide spread of the kinds of work we do.
@smortaz
Thank you for the clarifications and the video link. It looks really good! 🙂
I hope you fixed the icons. The 2012 changes to the monochrome were so ugly I can only assume they were created by the idiots who thought Windows 8 without a start button with that useless interface was what everyone wanted.
I skipped VS2012 because the UI was ugly.
I'll skip VS2013 as well for the exact same reason.
UI sucks, even with the updates and themes.
I don't care if the debugging is slightly better and makes my life a bit easier if, at the end of the day, I had to deal with that washed colours user interface. I already have to stay 10 hours a day in a sad office, seeing 4 grey walls, I won't use a tool with those sad colours too. I'm not a robot. I simply can't deal with it. (maybe that's why I have some flower plants on my desk..but this is not relevant)
I'll stick with vs2010 as long as my apps can run on newer platforms.
My mood has definitely an higher priority than your crappy washed colours user interface.
BTW: I'm not an Apple fan.
I don't need glossy user interface with reflections and shiny colours.
But I find that using a 16 colours palette for all the environment/icons is just stupid.
(at least you should provide a way to bring vs2010 icons back without resources hacking)
this is all fine and dandy, BUT, are the interface icons still fugly? Is the UI still angry with the developer and COMPLAINS IN ALL CAPS ABOUT IT? HOW ANNOYING IS THAT? VERY MUCH!
Please don't hide nice features under bad UI. It makes you look silly, like inventing an eco-friendly fuel and then announcing it while wearing a tinfoil hat.
I use a Touch screen laptop w/win8 (lenovo yoga) why in the world doesn't the vs ide allow swipe scrolling while editing? I thought you were moving forward?
I miss VS2008. You removed the lipstick, now all is left is the pig.
SB> And it's frustrating to see that even something like the std::atomic fix — which amounts to 2 or 3 very short lines
Actually, it was a massive overhaul. Diff 2012's xatomic.h against 2013 Preview's and you'll see. (This is the internal header which implements <atomic>.)
> goes into the new-new product but not the new product.
I just published a VCBlog post about this. Please see the FAQ at the end of blogs.msdn.com/…/c-11-14-stl-features-fixes-and-breaking-changes-in-vs-2013.aspx , specifically questions 4 and 5.
> Surely that can't apply to developer tools?
Actually, GCC follows a yearly release cycle (for X.Y.0, equivalent to a VC major version). Clang releases roughly twice a year.
@STL
> Actually, GCC follows a yearly release cycle (for X.Y.0, equivalent to a VC major version). Clang releases roughly twice a year.
They don't cost between $700 and $13,299.00 per release!
I am a retired engineer but have followed Visual Studio over the years, especially MFC. I trust this will still be ongoing in the new edition – also I would hope that Microsoft can provide special pricing for MFC to be included with the express edition of C++ (or even provide it F.O.C.).
Thank You for a great compiler
can we please get the current editing filename in the titlebar!
I wonder if they decided to fix the crappy monochrome interface. also, better templates for creating sharepoint workflows would be great.
This VB.Net bug still exists:
connect.microsoft.com/…/reference-highlighting-rename-and-find-all-references-fail-in-vs-2012-vb-with-addhandler
@ nutcat
" I use a Touch screen laptop w/win8 (lenovo yoga) why in the world doesn't the vs ide allow swipe scrolling while editing? I thought you were moving forward? "
Probably the same reason they decided to enable touch scroll on every office 2010/2013 app EXCEPT InfoPath. The one office app I really wanted touch scrolling, and they didn't enable it. ugh.
I am so disappointed in the latest releases. When I tried to install the VS2013Express on my PC it said that the version of windows was too low. It is Windows 7 with every upgrade known to man. What version of windows is VS2013 meant to be installed on?
Great stuff – but honestly, at some point, the concept of continuity also needs to be addressed:
• Giveth, takeath: Setup and Deployment is only through ISLE in VS2012. Big deal for developers…so instead of spending time on the "next", some of us still have to deal with the changes to the "now". Not everyone starts with *new* development projects – developers maintain existing ones too.
Please visit UserVoice:
visualstudio.uservoice.com/…/3041773-bring-back-the-basic-setup-and-deployment-project-
• HTML and Javascript: Where is TypeScript? Does it fit into VS2013 cycle as "part" of the IDE or will it remain a separate/plug-in? What is the overall view on this great tool?
Thanks for sharing!
Take mercy on the English language … please stop using the words 'impact' and 'impactful' in technical articles. Replace them with something concise and articulate.
@JimmyT: Sorry to hear you’re having troubles with installation. Did you try to install Express 2013 Preview for Windows, or Express 2013 Preview for Windows Desktop? The latter should install fine onto Windows 7 SP1, while the former is focused on Windows Store apps and requires Windows 8.1. If you’re trying to use Express 2013 Preview for Windows Desktop on Windows 7 SP1 and it’s still not working for you, it’d be great if you could file a bug at connect.microsoft.com/VisualStudio for the team to investigate.
-somasegar
VS 2012 and VS13 design looks like it's going in the reverse direction. The UI brings us back to the Black & White era.
The bug in VS Editor was marked as "Fixed" (connect.microsoft.com/…/context-menu-in-reference-manager-is-unreadable).
REALLY???
we asked Microsoft for frequent updates not frequent new software development tools
Under VS2012 update 2 and VS2012 update 3 we could run the Test Agent on Windows XP and Windows Server 2003R2 VMs. The VS2013 Agents page says that Windows XP is supported but it throws an error on install saying it doesn't meet the min. OS requirements. We test our product on downlevel OS's and really need this to work. Is there anything we can do?
Like how the Msft people simply ignore any comment on pricing and on the interface.
@S.Somasegar – It's quite telling how you, and all the other MS posters here totally ignore the complaints about the UI; you will address other issues, but not the UI. I Guess you guys never learn..
@CraigAJohnson : Thanks for reporting this. We have identified it as an issue with the product and this should get fixed with the RTM version of VS2013 Test Agent. We will update the supported OS list to be consistent with current state of the Preview release.
Hi Soma,
Thanks for the v3 wpf fixes. Really liking vs2012. I have been using windows 8 from the preview days and love the performance. From watching build over the last few days the Windows 8.1 APIs along with the tooling looks far more mature – great stuff.
In your channel 9 interview you said that vs2013 will be needed for 8.1 development. Did I catch that correctly? As a vs2012 user will I really be locked out of new style 8.1 development? I no longer have an msdn sub as cost was too high (after the 30% hike last year) and downgrading meant loosing perpetual rights to the existing bits voiding my investment. (Which seems to mean not so much in practice now!)
After the major reset (win 7 to win 8) it looked as if vs2012 was good until at least next year with the updates, and so I never expected it to be deprecated so fast for new style windows 8 apps. Now the cost ramp (fresh msdn sub so soon to accommodate the rapid cadence + new touch enabled device (for example the surface pro is only just out in the UK) + Azure so soon is an insurmountable price precipice especially given the relatively slow consumer adoption and risk involved.
I think in the early days of this new platform we need all the momentum we can get so this lock-out of vs2012 does not make any sense. Msdn subs used to be a no brainer but now with hardware to consider as well, all weighed against the exploding choices from much cheaper competitor offerings, open source and the growth of html5 their value is not so clear. Rapid cadence although a good thing looks like a push towards an Adobe style rental model which achieves the dreamed of lock-in but would prefer choice with greater longevity of the tools. Lock-out for lock-in!
You make the best development IDEs hands down and I would love to continue using them long into the future, but for the first time I am seriously rethinking forward direction.
Would appreciate some clarification.
@Soma
Ballmer said at the keynote that MS was not wanting to ignore the millions of desktop apps out there. This can't be true, because if it was then you would not have removed the "Setup and deployment" installer project ability from Visual Studio 2012. If you say you care about the millions of desktop apps, then why take away a great tool for making the setups/installers for them?? This user-voice item has almost 6000 votes asking for you to please put it back: visualstudio.uservoice.com/…/3041773-bring-back-the-basic-setup-and-deployment-project-
To make it really bad, Microsoft declined it with some very generic words…basically not giving any reason whatsoever. I've used VS to make installer projects for years and I haven't heard anyone say there was anything wrong with it.
Please put it back.
Thanks.
What will be the cost to upgrade to VS2013 from VS2012? What is the cost from older versions?
Hello, I have a question about Visual Studio 2013 Release.
Is there anyone can tell me if Visual Studio 2013 will be released back to the Windows 8.1 or long after him?
I downloaded Visual Studio 2013 Ultimate Preview and opened my current Visual Studio 2012 solution in that. I have a cloud project in the solution. When I try to run the cloud project, it hangs just when the web role starts. I have a debug point in the Webrole Start method, and it is not hitting that.
Can 2012 projects be migrated to 2013 just by opening (the log said all was ok! ) – can someone help.
Thanks.
Viji
The choice of making VS2013 upgrade WP7.x projects to WP8.x was very bad – now we have to use both VS2012 or VS2010 and VS2013 and also can't have the WP7 projects in the same solution cause VS2013 will choke on them? | https://blogs.msdn.microsoft.com/somasegar/2013/06/26/build-2013-and-visual-studio-2013-preview/ | CC-MAIN-2018-39 | refinedweb | 7,341 | 62.78 |
The following form allows you to view linux man pages.
#include <math.h>
double round(double x);
float roundf(float x);
long double roundl(long double x);
Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
round(), roundf(), roundl():
_XOPEN_SOURCE >= 600 || _ISOC99_SOURCE ||
_POSIX_C_SOURCE >= 200112L;
or cc -std=c99.
These functions return the rounded integer value.
If x is integral, +0, -0, NaN, or infinite, x itself is returned.
No errors occur. POSIX.1-2001 documents a range error for overflows,
but see NOTES.
These functions first appeared in glibc in version 2.1.
Multithreading (see pthreads(7))
The round(), roundf(), and roundl() functions are thread-safe.
C99, POSIX.1-2001.
POSIX.1-2001 contains text about overflow (which might set errno to
ERANGE, or raise an FE_OVERFLOW exception). In practice, the result
cannot (respectively, 1024), and the num-
ber of mantissa bits is 24 (respectively, 53).)
[email protected] | http://www.linuxguruz.com/man-pages/roundl/ | CC-MAIN-2017-13 | refinedweb | 152 | 67.45 |
29455/how-to-allocate-ip-address-in-vpc-to-rds-instance
I have an RDS instance started in a DB Subnet Group in my VPC. This instance has an endpoint of the form someDatabase-db-small.abcd1234.us-east-1.rds.amazonaws.com:3306.
How does one allocate to this instance an IP address in the VPC subnet 10.0.0.0/24?
The instance will already have an IP address in that range allocated. Use something like 'dig' command to lookup the IP address of the endpoint from the inside of the VPC and you will get back an IP address from your VPC subnet.
You should make your backend functions to ...READ MORE
import boto3
ec2 = boto3.resource('ec2')
instance = ec2.create_instances(
...READ MORE
Hey @Laksha!
By default, all instances have a ...READ MORE
To create the subnet in VPC:
subnet = ...READ MORE
You could always use the Amazon RDS ...READ MORE
As of AWS CLI v1.11.46, you can ...READ MORE
On the CopyTablesActivity, you could set a lateAfterTimeout attribute ...READ MORE
With boto3, the S3 urls are virtual by default, ...READ MORE
The code would be something like this:
import ...READ MORE
You can set up a new Name ...READ MORE
OR | https://www.edureka.co/community/29455/how-to-allocate-ip-address-in-vpc-to-rds-instance | CC-MAIN-2019-39 | refinedweb | 207 | 78.85 |
NAME
gfshare_ctx_init_enc, etc. - Shamir Secret Sharing
SYNOPSIS
#include <libgfshare.h> gfshare_ctx *gfshare_ctx_init_enc( unsigned char *sharenrs, unsigned int sharecount, unsigned char threshold, unsigned int size ); gfshare_ctx *gfshare_ctx_init_dec( unsigned char *sharenrs, unsigned int sharecount, unsigned int size ); void gfshare_ctx_free( gfshare_ctx *ctx ); void gfshare_ctx_enc_setsecret( gfshare_ctx *ctx, unsigned char *secret ); void gfshare_ctx_enc_getshare( gfshare_ctx *ctx, unsigned char sharenr, unsigned char *share ); void gfshare_ctx_dec_newshares( gfshare_ctx *ctx, unsigned char *sharenrs ); void gfshare_ctx_dec_giveshare( gfshare_ctx *ctx, unsigned char sharenr, unsigned char *share ); void gfshare_ctx_dec_extract( gfshare_ctx *ctx, unsigned char *secretbuf );
DESCRIPTION
The gfshare_ctx_init_enc() function returns a context object which can be used for encoding shares of a secret. The context encodes against sharecount shares which are numbered in the array sharenrs. The secret is always size bytes long and the resultant shares will need at least threshold of the shares present for recombination. It is critical that threshold be at least one lower than sharecount. The gfshare_ctx_init_dec() function returns a context object which can be used to recombine shares to recover a secret. Each share and the resulting secret will be size bytes long. The context can be used to recombine sharecount shares which are numbered in the sharenrs array. The gfshare_ctx_free() function frees all the memory associated with a gfshare context including the memory belonging to the context itself. The gfshare_ctx_enc_setsecret() function provides the secret you wish to encode to the context. The secret will be copied into the internal buffer of the library. The gfshare_ctx_enc_getshare() function extracts a particular share from the context. The share buffer must be preallocated to the size of the shares and the sharenr parameter is an index into the sharenrs array used to initialise the context The gfshare_ctx_dec_newshares() function informs the decode context of a change in the share numbers available to the context. The number of shares cannot be changed but the sharenrs can be zero to indicate that a particular share is missing currently. The gfshare_ctx_dec_giveshare() function provides the decode context with a given share. The share number itself was previously provided in a sharenrs array and the sharenr parameter is the index into that array of the number of the share being provided in the share memory block. The gfshare_ctx_dec_extract() function combines the provided shares to recalculate the secret. It is recommended that you mlock() the secretbuf before calling this function, so that the recombined secret will never be written to swap. This may help to prevent a malicious party discovering the content of your secret. You should also randomise the content of the buffer once you are finished using the recombined secret.
ERRORS
Any function which can fail for any reason will return NULL on error.fsplit(1), gfcombine(1), gfshare(7) | http://manpages.ubuntu.com/manpages/oneiric/man5/libgfshare.5.html | CC-MAIN-2015-14 | refinedweb | 443 | 52.19 |
User:Cosmikdebris/Editor's Guide
From RationalWiki
- Assume good will. Not everyone is a vandal or troll.
- Don't make outrageous or controversial claims or statements in articles, unless you back them up with credible references.
- When examining existing articles, or creating new ones, consult the manual of style. It's very good.
- All kinds of boxes and widgets are found on the template list.
- Subtle snark is good. Too much snark makes the article look stupid.
- If referencing a book, include its ISBN number
in the reference. The wiki software automagically creates a nifty search link.[1]
- Use the {{rp}} template[note 1] in conjunction with a reference name to reference a specific page in a book.[1]:32
- Separate notes from references.[note 2] There is also a nifty template that does this.[note 3]
- If a single reference applies to more than one part of an article, group them together.[1]
- If totally lost, look for all the pages in the Help namespace.
- If an external link in an article has gone 404, check to see if it's been archived on archive.org or archive.is.
- Fragile links are nicely encapsulated using the archive template.[2]
- When considering articles for deletion, always check to see what links to them to get an idea what the impact might be.
- Don't vandal-bin nasty BONs, instead, apply the banhammer gracefully.
- Remember to patrol spam pages before vaporizing so hat they don't clog up the list of unpatrolled recent changes.
- Censor content as appropriate. ( ͡° ͜ʖ ͡°)
Notes[edit]
- ↑ See Template:Rp.
- ↑ Like this, see?
- ↑ See Template:Efn.
References[edit]
- ↑ 1.0 1.1 1.2 Moebius, "Eyes of the Cat (Les Yeux du Chat)," in Taboo 4, Spiderbaby Grafix and Publications, 1990. ISBN 0922003033
- ↑ For example, Deleted Tweets From Donald J. Trump, R-D.C.[a w] from projects.propublica.org. | https://rationalwiki.org/wiki/User:Cosmikdebris/Editor%27s_Guide | CC-MAIN-2019-26 | refinedweb | 313 | 60.92 |
It Moves..09/20/2020 at 19:55 • 0 comments
Still not much 'autonomy' but I do finally have everything in place. The Raspberry pi, the Arduino, a 433 MHz radio and the remote control receiver, all neatly packed in the waterproof box at the back.
The Scanning Sonar is mounted as well, so I can finally map the lake.
A bit to my surprise it showed a nearly flat bottom, at 80 cm. Then I realized that this sonar works at 1 MHz, and it probably just shows the top of the fluid mud layer. I will need a lower frequency (like 200 kHz) to get to the 'real' bottom.
Modifying MissionPlanner and using MAVLink07/07/2020 at 18:47 • 0 comments
Since I was writing my own controller software for the Raspberry Pi I first considered taking the shortest path to transferring the the data from my boat to shore: send everything as ASCII text and use a simple text parser to decode values. Easy to write, easy to debug. But a lot of overhead, and since my radio only supports 9600 baud this would seriously limit the amount of data I could transfer. Just create my own binary structure then (maybe using Binary JSON) ? But there is also MAVLink, a standard protocol for transferring data between 'Groundstation' and a Rover (Drone, plane, boat).
Unfortunately this is also a big project, and although it is not too complicated in the end, the whole documentation seems very intimidating at first. But after a little trial and error I got it going. It helps that the most commonly used groundstation software is MissionPlanner which is open source.
MissionPlanner is also a big project, but as I'm used to C# projects it did not take me long to find my way around. Fortunately it just builds and runs directly if you checkout the whole project and compile it with Visual Studio.
Generating a custom message
For the Sonar I will need a custom message to transmit range, distance etc. This can be added to the 'common.xml' file, which can be found in the MissionPlanner folder:
C:\Users\ceesm\Documents\Autonomous Survey Vessel\MissionPlanner\ExtLibs\Mavlink\message_definitions\common.xml
<message id="1000" name="SCANNING_SONAR"> <description>Scanning sonar.</description> <field type="uint32_t" name="time_boot_ms" units="ms">Timestamp (time since system boot).</field> <field type="uint16_t" name="range" units="mm">Measured range</field> <field type="uint16_t" name="angle" units="0.1 degrees">Angle</field> <field type="uint16_t" name="roll" units="0.1 deg">Roll</field> <field type="uint16_t" name="pitch" units="0.1 deg">Pitch.</field> <field type="uint16_t" name="yaw" units="0.1 deg">Heading.</field> </message>
And that is all there is to it. This simply defines a message with ID 1000 that contains the required filelds.
In the MAVLink folder there is a 'regenerate.bat' file that produces a new C# library from a modified .XML file, and regenerate.bat also contains a line for doing the same for the C library. Which almost works, except for the fact that the 'pymavlink' folder is not complete, and does not contain all the required headers. So I checked out a complete MAVlink set from. After copying the pymavlink folder to MissionPlanner it works fine.
Processing custom MAVLink messages
Now I defined a new messagetype and named it SCANNING_SONAR, there must be a way to view it in MissionPlanner, but how ? I found the solution in the 'MAVLinkInspector.cs' module. When opened, this window shows the messages as they are received, and even has an option to create a live chart of the incoming data. Key to this interaction seems to be the SubscribeToPacketType function:
var subscribeToPacket = mav.SubscribeToPacketType((MAVLink.MAVLINK_MSG_ID)msgid, message => { line.AddPoint(new XDate(message.rxtime), (double)(dynamic)message.data.GetPropertyOrField(msgidfield)); return true; });
So I created a new form (SonarView) with a start button, a textbox, and just the following function:
namespace MissionPlanner.Controls { public partial class SonarView : Form { private MAVLinkInterface mav; public SonarView(MAVLinkInterface mav) { InitializeComponent(); this.mav = mav; ThemeManager.ApplyThemeTo(this); } private void but_start_Click(object sender, EventArgs e) { var subscribeToPacket = mav.SubscribeToPacketType(MAVLink.MAVLINK_MSG_ID.SCANNING_SONAR, message => { MAVLink.mavlink_scanning_sonar_t S = (MAVLink.mavlink_scanning_sonar_t)message.data; string data_txt = $"Data:{S.time_boot_ms},{S.range},{S.angle},{S.roll}{S.pitch}"; data_txt += Environment.NewLine; textBox1.Invoke(new Action(() => textBox1.Text += data_txt)); return true; }); } } }
And this works. The messages are caught, and show up on the textbox .
So after tweaking it a bit and adding a ZedGraph chart it looks like this:
(Of course there is more to it, so if you want to know the details check out my version of MissionPlanner, and find the 'SonarView' module.)
More Serial Ports06/17/2020 at 19:49 • 0 comments
So far I connected the GPS to the standard serial port of the Raspberry Pi (Rx/Tx), the IMU to the I2C lines and the Sonar to the USB port. But I will also need a serial port for my radio. So that is one port short. The solution to this could be using a 'software serial port', as you can easily get on any Arduino. But that's not so simple on the Pi. I could find just one library that supports a software serial port : PiGPIO. And that only supports reading. But that could be enough, as I can use that to read the GPS.
And that works surprisingly well. Connect the Tx of the GPS to GPIO18, and test it using the following code :
int main(int argc, char *argv[]) { char serial_buff[1024]; int chars_read =0; if (gpioInitialise() < 0) return 1; gpioSerialReadOpen(18,9600,8); while(true) { chars_read=gpioSerialRead(18,serial_buff,1024); if(chars_read>0){for(int i=0;i<chars_read;i++){printf("%c",serial_buff[i]);}} } gpioTerminate(); }
Make sure the pigio library is linked in when compiling. In Code:Blocks this is done in 'Project->Build Options', tab 'Linker Settings', add it to the 'Link Libraries' list.
Works great. There is only one drawback in using the pigpio library: it needs root privilege to run. So Every time I start it from the Code::Blocks IDE, it refuses to run, and I have to open a terminal and enter the 'sudo' command to start it. The solution to that appears to be starting Code:Blocks from the XTerm console using 'sudo codeblocks'. That sometimes works, but most of the time gives me a 'Failed to init GTK+' error. But that could be caused by the fact I'm working through Remote Desktop.
So now I have GPS input, and the standard Tx/Rx serial port available for other purposes.
GPS06/03/2020 at 20:47 • 0 comments
I decided to use the Raspberry Pi 3 A as my main controller for the boat, and the first task was to add a GPS. The NEO-6M GPS module is very standard unit, with lots of examples on how to connect it. It is powered by 5V from the RPi, and the communication is standard RS-232 at the 3.3V level so it can be connected directly to the RPi Tx and Rx. Connecting to the RPi. Yes, it is as simple as that.On the RPi itself, make sure the Serial port is enabled, and the kernel messages on this port are disabled. (Raspi-Config->Interfacing Options->Serial)
Now I installed PuTTY (sudo apt-get install putty), set it up for Serial port '/dev/ttyS0' and 9600 baud. This immediately shows the NMEA text messages as they are coming from the GPS receiver. Can't be easier than that.
After adding the GPS receiver it was time to read it. I first tried the GPSD project which tries to make reading GPS data in the background easy. But after struggling some time to get it working I decided it was not what I was looking for. As always with these kind of projects there are many examples and HowTo's , but I assume a lot of them are outdated or just incorrect for the setup that I have so none of the examples worked as described. And since I basically only needed position heading and speed from this specific GPS receiver I opted to use the TinyGPS++ package. This is actually a library written for the Arduino, but only minor changes were needed to get it running on the Pi. My version is here:
Remote Control05/16/2020 at 19:13 • 0 comments
The whole 'Autonomous' thing is still in the future, but I suppose I'll have to continue. After some failed attempts to create my own remote control using some 'Long Range NRF24 Radios' I gave up ,and simply bought a complete Remote Control with 6 channel receiver. The HotRC HT 6A seemed like a good (cheap..) choice.
It's a cheap remote control unit, preconfigured for use with model airplanes. Which means that the controls assigned to specific channels and the output ranges are different per channel. The speed control channel (nr.3) outputs a PWM signal of exactly 1000 to 2000 uS, but the direction channel only has a span from 1200 to 1800 uS, probably optimal for controlling the rudder servo.
The boat only has two thrusters, and no rudder. So all the steering has to to be done by varying the thruster speeds. And if we for example are going forward and want to steer a little to the left, the left thruster has to slow down. But for a sharp turn the left thruster eventually even has to run in reverse. A complicated math thing, so I left this to my daughter who studies maths at university... And she came up with a reasonably simple relation between speed and direction that could be used to control the motors. This of course required some intermediate processing so I added a Arduino Pro Micro to translate the RC PWM signals to the appropriate thruster signals
Despite all numbers seeming to be correct, the boat reacts quite unpredictable. Only full speed ahead seems to work fine, but breaking and reverse is not working, or very slow. After checking and re-checking the code I finally find the answer in the user guide of the ESC. By default the ESC has some behaviour that is good for using in an RC Car, but not for the boat. Fortunately I bought the ESCs with a 'Programming Card' which makes programming easy. First I had to modify the break-behaviour. By default the system breaks the motors to a halt if you suddenly pull it to reverse. But I want it just to go in reverse, no stopping. Second I set the maximum reverse power to 100% instead of the standard 25% .
Later I found out that this is actually known as 'skid steering' and supported by ArduPilot () as well. And the code for this comes pretty close to what we figured out. (look for the void AP_MotorsUGV::output_skid_steering(); function in the ArduPilot Rover code.
Thruster Assembly09/08/2019 at 17:31 • 0 comments
Finally a nice day, and some spare time so I could mount the thrusters to the boat. I used some stainless steel M6 threaded rod and some DIN rail. DIN Rails is great. It's cheap, has mounting holes and it is very stiff due to the shape.
I kept plenty of length on the M6 rods so I can adjust the depth of the thrusters later, if required.
Once all nuts are tightened the whole construction became quite rigid so it seems like this is going to work. The only thing that worries me is that the metal is sticking out quite a bit to the side and that it will definitely collect weed and algae.
Struggling with the jargon: RC, ESC, BEC, 3S...08/10/2019 at 20:28 • 0 comments
A remote controlled is nothing new, and a lot of the required electronics and controllers is widely available. But if you are not an Remote Controlled Vehicle (RC-)enthusiast, a lot of the words and abbreviations used on the HobbyKing RC pages are not immediately obvious.
First, I will need something to control the speed and direction of the thrusters. The motors are so-called 'brushless motors', which need special drive electronics. This is called an ESC or 'Electronic Speed Controller'. Then I found out after ordering my first pair of ESC's, not all ESC support changing the direction. Which makes sense if you use it to power a plane or helicopter. So I ordered a second (more expensive) pair 'with reverse'.
HobbyKing® ™ Brushless Car ESC 30A w/ Reverse
Specification:
Input voltage: 2-3S Lithium batteries / 4~9 Ni-xx
Cont. Current: 30A
BEC output: 2A /5V (Linear)
Now, as I said, I'm a newbie in RC, so first I did not know what 2-3S means. This appears to be the number of standard 3.7V Lithium cells as connected in Series. So '3S' is 11.1V. Next is the 'BEC Output'. BEC stands for Battery Eliminator Circuit. Which is not a device to actively destroy your batteries, but simply an on-board regulator that provides a regulated 5V DC for powering your external electronics so you do not need an additional power supply or battery for that. Great. Unfortunately the user manual is very extensive on the programming of this unit, but not very detailed on the connections.
The contacts used for controlling the ESC are simply labelled 'RECEIVER'. Probably very obvious to anyone, except me. Now I think that the Red and Black wire are the BEC (5V) output, and the white wire probably carries the servo signal.
Test setup with one motor and a potentiometer as a speed regulatorUsing an old PC power supply that can supply 12V at 8A and a Arduino Micro I just hook up a quick test setup. The 'Servo' library is used to generate servo compatible signals. A potmeter is attached to the analog input so I can change the servo signal.
This is the test sketch:
int YaxisInput = A2; // select the input pin for the potentiometer int XaxisInput = A3; int XValue,YValue; int Motor1 = 9; int Motor2 =10; Servo ESC; // create servo object to control the ESC void setup() { pinMode(YaxisInput,INPUT); pinMode(XaxisInput,INPUT); ESC.attach(Motor1, 1000, 2000); // (pin, min pulse width, max pulse width in microseconds) } void loop() { XValue = analogRead(XaxisInput); YValue = analogRead(YaxisInput); XValue = map(XValue,0,1023,0,180); ESC.write(XValue); Serial.print(XValue); Serial.print(","); Serial.println(YValue); delay(10); }
And that works. After applying power, and switching the ESC on, it generates some 'beeps' using the motot itself which scared me at first.. But the beeps are an indication that the ESC detected the neutral throttle signal (the servo output from the arduino) and the power. After that, moving the potmeter controls the motor speed and direction just fine. Only when moving the controller too fast from one side to another so the reversal of the motor is very fast, the powersupply switches off. Probably the reverse current is too much for the powersupply protection.
Thrusters08/08/2019 at 19:36 • 0 comments.
2 Component epoxy moulding material.
Epoxy in the gap between the motor and the plastic
After that it was just following the detailed assembly instructions on Instructables, and everything worked out great.
So now I have two thrusters. In the mean time my Electronic Speed Controllers with reverse option arrived from HobbyKing. So it's time to start putting it all together. | https://hackaday.io/project/166552/logs | CC-MAIN-2021-25 | refinedweb | 2,583 | 64.2 |
Does anyone have experience generating a Concave Hull (Alpha Shape) from a list of points with IronPython? Would greatly appreciate any feedback. Thank you in advance!
Concave Hull (Alpha Shape) from list of points with Python?
A quick search brought up this…
A handy couple of nodes being used (which you can dig around in) and some heavyweight package authors
unfortunately very few people are doing what you need…
Hope that helps,
Mark
There’s a concave hull algorithm here (in python) which looks easy to implement.
And there’s some links by @Michael_Kirschner2 here which might offer some alternatives:
Thomas,
I’ve found a couple different concave hull algorithms in python; however, all of them use packages (numpy, scipy, matplotlib, shapely) which are not easily or if at all compatible with Iron Python.
So, I was hoping maybe someone had created a concave hull algorithm with vanilla Python 2.7 modules which could be used in Dynamo.
Mark,
Thank you for the reference. I have reviewed this thread and tried convex hulls for my application but did not find any solution.
Maybe I could reach out to the authors as you suggested. I’m hoping maybe someone has created a concave hull algorithm with vanilla Python 2.7 modules which could be used in Dynamo, but from my research so far there is very few resources regarding this subject.
@ericabbott, have you already go through with Lunchbox ML
by @Nathan_Miller . Right now, that’s the furthest you can get.
Hey,
The link here that MK gave you only uses Math?
There is a C# from string node, but I’m not skilled enough to use it. Or it’s possible you could persuade a package manager to add it…
I had a quick go at translating it to Python, perhaps someone with a maths degree can have a look at it?! But I know @Daniel_Woodcock1 is very busy…
# Enable Python support and load DesignScript library import clr clr.AddReference('ProtoGeometry') from Autodesk.DesignScript.Geometry import * import math points = UnwrapElement(IN[0]) #0. error checking, init if (points == None or points.Count < 2) : raise Exception('AlphaShape needs at least 2 points') BorderEdges = [] alpha_2 = alpha * alpha #1. run through all pairs of points i = 0 while i < points.Count: j = i + 1 while j < points.Count: if (points[i] == points[j]): raise Exception('AlphaShape needs pairwise distinct points') dist = (points[i], points[j]) if (dist > 2 * alpha) : continue #circle fits between points ==> p_i, p_j can't be alpha-exposed x1 = points[i].X x2 = points[j].X y1 = points[i].Y y2 = points[j].Y mid = PointByCoordinates((x1 + x2) / 2, (y1 + y2) / 2) #find two circles that contain p_i and p_j; note that center1 == center2 if dist == 2*alpha center1 = PointByCoordinates((mid.X + math.Sqrt(alpha_2 - (dist / 2) * (dist / 2)) * (y1 - y2) / dist), (mid.Y + math.Sqrt(alpha_2 - (dist / 2) * (dist / 2)) * (x2 - x1) / dist)) center2 = PointByCoordinates((mid.X - math.Sqrt(alpha_2 - (dist / 2) * (dist / 2)) * (y1 - y2) / dist), (mid.Y - math.Sqrt(alpha_2 - (dist / 2) * (dist / 2)) * (x2 - x1) / dist)) #check if one of the circles is alpha-exposed, i.e. no other point lies in it c1_empty = true c2_empty = true k = 0 while k < points.Count & (c1_empty or c2_empty): if (points[k] == points[i] or points[k] == points[j]): continue if ((center1.X - points[k].X) * (center1.X - points[k].X) + (center1.Y - points[k].Y) * (center1.Y - points[k].Y) < alpha_2): c1_empty = false if ((center2.X - points[k].X) * (center2.X - points[k].X) + (center2.Y - points[k].Y) * (center2.Y - points[k].Y) < alpha_2): c2_empty = false if (c1_empty or c2_empty): #yup! BorderEdges.append(Line.ByStartEndPoint(points[i], points[j])) k += 1 j += 1 i += 1 OUT = Border.Edges
Mark, thank you for the info and translation attempt! I’ll take a closer look at that when I get a chance and update here with any progress.
I cleaned up the code a bit:
import clr clr.AddReference('ProtoGeometry') from Autodesk.DesignScript.Geometry import Line from math import sqrt, hypot points, alpha = IN OUT = [] pLen = len(points) alpha2 = alpha * alpha if pLen < 2: raise Exception('AlphaShape needs at least 2 points') for i, p in enumerate(points): for j in xrange(i+1, pLen): if p.IsAlmostEqualTo(points[j]): raise Exception('AlphaShape needs pairwise distinct points') dist = hypot(p.X - points[j].X, p.Y - points[j].Y) if (dist > 2 * alpha) : continue #circle fits between points ==> p_i, p_j can't be alpha-exposed x1, y1, x2, y2 = p.X, p.Y, points[j].X, points[j].Y midX, midY = (x1 + x2) / 2, (y1 + y2) / 2 #find two circles that contain p_i and p_j; note that center1 == center2 if dist == 2*alpha alphaDist = sqrt(alpha2 - (dist / 2) ** 2) deltaX, deltaY = (x2 - x1) / dist, (y1 - y2) / dist c1x, c1y = midX + alphaDist * deltaY, midY + alphaDist * deltaX c2x, c2y = midX - alphaDist * deltaY, midY - alphaDist * deltaX #check if one of the circles is alpha-exposed, i.e. no other point lies in it c1_empty = True c2_empty = True for k in xrange(pLen): if i == k or j == k: continue if ((c1x - points[k].X) * (c1x - points[k].X) + (c1y - points[k].Y) * (c1y - points[k].Y) < alpha2): c1_empty = False if ((c2x - points[k].X) * (c2x - points[k].X) + (c2y - points[k].Y) * (c2y - points[k].Y) < alpha2): c2_empty = False if not c1_empty and not c2_empty: break if c1_empty or c2_empty: OUT.append(Line.ByStartPointEndPoint(p, points[j]))
Another interesting question is, how do you determine the alpha (scoop) size? It’s correlated to the distance to the nearest neighbors but can’t be deducted exactly from it:
Fantastic, thanks
Dimitar,
First of all thank you for cleaning Mark’s script and sharing yours! This is exactly what I was looking for.
Yes, determining the alpha size is still an issue. Especially in applications with a large amount of points where testing different alpha values takes a long time to generate.
The reason I’m working on this script is an attempt to save time from having to manually draw lines in CAD; however, if the script for creating a concave hull requires fine tuning of the alpha value it could take more time to do so with the script than just manually doing it.
Do you think it is possible to find a solution that automatically finds the value (or maybe a range to pick from) of the alpha to get the desired result? I see you attempted that, but unfortunately it did not work in my application.
Also, do you think it is possible that another method of this algorithm may be more accurate? In my research I saw there was different methods (e.g. closest neighbor approach) to achieve a concave hull.
Would love to hear your thoughts and perhaps find a working solution! Thanks again.
Ha, @Mark.Ackerley, uni + work does leave for very little time, but I was intrigued.
I know this has been answered, but here is another method that uses the Graham Scan method for “Gift Wrapping”. The best method is Chans algorithm from what I’ve read though as it is a combination of the Jarvis March and Graham Scan methods.
Here is the Python Code ported over from Thomas Switzers GitHub (I take zero credit for this)…
import clr clr.AddReference('ProtoGeometry') from Autodesk.DesignScript.Geometry import * def ToVals(pt): return [pt.X,pt.Y,pt.Z] def ToPts(ptVals): return Point.ByCoordinates(ptVals[0],ptVals[1],ptVals[2]) def CreateOutline(pts): crvs = [] i = 0 while i < len(pts): if i == len(pts)-1: crvs.append(Line.ByStartPointEndPoint(pts[i],pts[0])) else: crvs.append(Line.ByStartPointEndPoint(pts[i],pts[i+1])) i = i+1 return PolyCurve.ByJoinedCurves(crvs) ''' Taken from Thomas Switzers awesome GitHub repo and modified to suit Dynamo Context I take zero credit for this work other than porting it across. ''' def convex_hull_graham(points): ''' Returns points on convex hull in CCW order according to Graham's scan algorithm. By Tom Switzer <[email protected]>. ''' TURN_LEFT, TURN_RIGHT, TURN_NONE = (1, -1, 0) def cmp(a, b): return (a > b) - (a < b) def turn(p, q, r): return cmp((q[0] - p[0])*(r[1] - p[1]) - (r[0] - p[0])*(q[1] - p[1]), 0) def _keep_left(hull, r): while len(hull) > 1 and turn(hull[-2], hull[-1], r) != TURN_LEFT: hull.pop() if not len(hull) or hull[-1] != r: hull.append(r) return hull points = [ToVals(p) for p in points] points = sorted(points) l = reduce(_keep_left, points, []) u = reduce(_keep_left, reversed(points), []) points = l.extend(u[i] for i in range(1, len(u) - 1)) or l points = [ToPts(p) for p in points] return CreateOutline(points) OUT = convex_hull_graham(IN[0])
Cheers,
Dan
Thanks Dan great work
Nah, scratch that… I thought you were after convex Hull… This does convex not concave… My bad
Still cool to see, thanks for posting
Sorry, I don’t have anything better. The only alternative is to use a while loop / goal seek type of approach, where you try to minimize the alpha size, while still producing a single continuous external loop.
No problem. Thank you for all the help so far, I really appreciate it!
Looking forward to seeing you around the forums.
There is an issue with the above Python code (the one by Dimitar and Mark) if you use it for vertical point sets
the hypothemus (dist) will return zero if two points are vertically above eachother.
And then a division error comes.
Great topic though. Interesting stuff
Comes in handy when working with pointclouds | https://forum.dynamobim.com/t/concave-hull-alpha-shape-from-list-of-points-with-python/34058 | CC-MAIN-2019-47 | refinedweb | 1,595 | 64.91 |
Exchange Server 2016 builds upon the architecture introduced in Exchange Server 2013, with the continued focus goal of improving the architecture to serve the needs of deployments at all scales.
Note: Exchange Server 2016 does not support connectivity via the MAPI/CDO library. Third-party products (and custom in-house developed solutions) need to move to Exchange Web Services (EWS) or the REST APIs to access Exchange data.?
about E2016! Christian !!
don't have a way to migrate from Onpremise 2013 PF to Wave 15 O365 tenant.
Thank you for this Article.
Question: What you mean by load balancer. This is separated hardware/software or i can use Windows NLB service as before ?
1) Thanks for the info so that we can pre-plan for Exchange 2016 deployment.
2) In regards to 'Setup will not allow you to install 2016 if 2007 exists in the Exchange org',
is there any possibility to request a feature to allow Exchange 2007 in the Exchange Org with 2016.
Our Org includes 2007,2010 and 2013
Thanks
testing resources, code changes, etc.
I recommend moving the 2007 resources to 2010 or to 2013.
Ross
should be possible, as far as i know, there is no path from 2007, but 2010 is ok
in Windows NLB currently.. :(.
Is it too soon to ask about offering both certificate and password auth for Exchange 2016?
In my org we offer both ( we have a namespace for Password auth and different namespace for Cert based Auth)
load balancer configuration and have all traffic routed to MBX 2016, enabling MBX2016 to proxy to CAS2010.
Brian does an excellent job of describing this in his session -.
Ross
from the MB+CA roles in a single role, now named MB role. So the new MB roles actually contains all components from the 2013 MB and CA roles.
closer to 2016's release.
Public Folder are still there in 2016, don't worry. :)
if you are using BB 10 or higher you dont't need Mapi/CDO and i don't think that BB will support Exchange 2016 on their BES 5.x
1 question though: is an OWAS (Outlook Web Apps Server) a requirement or just a recommendation?
You're amazing, appreciate your effort.
Anything new related to MDM/Intune?
Per user MAPI/HTTP - awesome - u listened to the customer :)
More importantly, will JamesNT be supported if he runs RAID0?
Thanks for this introduction,
If CAS will be on MBX servers, in this case we cannot use NLB, becuase as I remember NLB and FSCS cannot live on same host?
client access infrastructure and configuring SMTP routes? Or should we deploy Exchange 2013 CAS servers jut for the purpose of building a NLB cluster? | https://techcommunity.microsoft.com/t5/Exchange-Team-Blog/Exchange-Server-2016-Architecture/ba-p/603598 | CC-MAIN-2019-26 | refinedweb | 453 | 72.05 |
Avoid sys.argv to pass options to py2exe
Many users write a special setup script for py2exe that simply can be run to build the exe, without the need to specify command line options by hand, each time. That means that "includes", "excludes" options etc have to be passed in some way to py2exe.
The setup() function accepts a options keyword argument, containing a dictionary with the options. This is superior to appending options to sys.argv as the transformation of the data to a string and back is avoided as well as mutiple setup() calls per file are possible.
Note that the long name of options has to be used and '-' in the command line options become '_' in the dictionary (e.g. "dist-dir" becomes "dist_dir").
opts = { "py2exe": { "includes": "mod1, mod2", "dist_dir": "bin", } }
And pass it to the setup script:
setup( options = opts, ... )
Avoid using setup parameters that are py2exe-specific
Instead of passing options like console, windows to setup(), you can subclass Distribution and initialize them as follows:
from distutils.core import Distribution class MyDistribution(Distribution): def __init__(self, attrs): self.com_server = [] self.services = [] self.windows = [] self.console = ['myapp'] self.zipfile = 'mylibrary.zip' Distribution.__init__(self, attrs) setup(distclass=MyDistribution) | http://www.py2exe.org/index.cgi/PassingOptionsToPy2Exe?action=diff | CC-MAIN-2013-48 | refinedweb | 203 | 55.84 |
#include <db.h> int DB->del(DB *db, DB_TX.
DB->del()
method may fail and return one of the following non-zero errors:
A foreign key constraint violation has occurred. This can be caused by one of two things:
An attempt was made to add a record to a constrained database, and the key used for that record does not exist in the foreign key database.
DB_FOREIGN_ABORT was declared for a foreign key database, and then subsequently a record was deleted from the foreign key database without first removing it from the constrained secondary database.
A Berkeley DB Concurrent Data Store database environment configured for lock timeouts was unable to grant a lock in the allowed time. | https://docs.oracle.com/cd/E17275_01/html/api_reference/C/dbdel.html | CC-MAIN-2018-13 | refinedweb | 117 | 65.66 |
>> plain 2.5.59 does>> >> 59-mjb4 does NOT> > Can you check mjb 1-3 too? The better it gets pinpointed, the easier it's > going to be to find.I should note that our performance team also has triple-faults on some database app on a 8x machine ... that goes away with mjb4, not sure why as yet. There's nothing in there that I can think of that would fixa triple fault, so it may well be something annoyingly subtle.Try -mjb1 first, if that still fixes it, then I'll start hacking off chunks for you to test. Try 62 as well ... that has dcache_rcu merged,which is another major chunk of the patch. kgdb is also big, and may well change timings ...> Also, if you can figure out _which_ part of the patch makes a difference,> that would obviously be even better. Part of the stuff in mjb is already> merged in later kernels (ie things like using sequence locks for xtime is> already there in 2.5.60, so clearly that doesn't seem to be the thing that> helps your situation).Yup, a lot of it is designed to give our performance team a stable baseto work from - so minimal changes to a 59 base.I use gcc-2.95.4 (Debian) as Chris does and have found that extremely stable, not sure what the perf team were using, I'll find out.> Now, interestingly enough, the mjb patch _does_ contain a change to > mm/memory.c that really makes no sense _except_ in the case of a compiler > bug. So you could check whether that (small) mm/memory.c patch is the > thing that makes a difference for you..That's the config_page_offset patch, which Dave ported forward from Andrea's tree ... I've split that out below:diff -urpN -X /home/fletch/.diff.exclude 21-config_hz/arch/i386/Kconfig 22-config_page_offset/arch/i386/Kconfig--- 21-config_hz/arch/i386/Kconfig Wed Feb 5 22:22:59 2003+++ 22-config_page_offset/arch/i386/Kconfig Wed Feb 5 22:23:00 2003@@ -660,6 +660,44 @@ config HIGHMEM64G endchoice +choice+ help+ On i386, a process can only virtually address 4GB of memory. This+ lets you select how much of that virtual space you would like to + devoted to userspace, and how much to the kernel.++ Some userspace programs would like to address as much as possible and + have few demands of the kernel other than it get out of the way. These+ users may opt to use the 3.5GB option to give their userspace program + as much room as possible. Due to alignment issues imposed by PAE, + the "3.5GB" option is unavailable if "64GB" high memory support is + enabled.++ Other users (especially those who use PAE) may be running out of+ ZONE_NORMAL memory. Those users may benefit from increasing the+ kernel's virtual address space size by taking it away from userspace, + which may not need all of its space. 
An indicator that this is + happening is when /proc/Meminfo's "LowFree:" is a small percentage of+ "LowTotal:" while "HighFree:" is very large.++ If unsure, say "3GB"+ prompt "User address space size"+ default 1GB+ +config 05GB+ bool "3.5 GB"+ depends on !HIGHMEM64G+ +config 1GB+ bool "3 GB"+ +config 2GB+ bool "2 GB"+ +config 3GB+ bool "1 GB"+endchoice+ config HIGHMEM bool depends on HIGHMEM64G || HIGHMEM4Gdiff -urpN -X /home/fletch/.diff.exclude 21-config_hz/arch/i386/Makefile 22-config_page_offset/arch/i386/Makefile--- 21-config_hz/arch/i386/Makefile Fri Jan 17 09:18:19 2003+++ 22-config_page_offset/arch/i386/Makefile Wed Feb 5 22:23:00 2003@@ -89,6 +89,7 @@ drivers-$(CONFIG_OPROFILE) += arch/i386 CFLAGS += $(mflags-y) AFLAGS += $(mflags-y)+AFLAGS_vmlinux.lds.o += -imacros $(TOPDIR)/include/asm-i386/page.h boot := arch/i386/boot diff -urpN -X /home/fletch/.diff.exclude 21-config_hz/arch/i386/vmlinux.lds.S 22-config_page_offset/arch/i386/vmlinux.lds.S--- 21-config_hz/arch/i386/vmlinux.lds.S Fri Jan 17 09:18:20 2003+++ 22-config_page_offset/arch/i386/vmlinux.lds.S Wed Feb 5 22:23:00 2003@@ -10,7 +10,7 @@ ENTRY(_start) jiffies = jiffies_64; SECTIONS {- . = 0xC0000000 + 0x100000;+ . = __PAGE_OFFSET + 0x100000; /* read-only */ _text = .; /* Text and read-only data */ .text : {diff -urpN -X /home/fletch/.diff.exclude 21-config_hz/include/asm-i386/page.h 22-config_page_offset/include/asm-i386/page.h--- 21-config_hz/include/asm-i386/page.h Tue Jan 14 10:06:18 2003+++ 22-config_page_offset/include/asm-i386/page.h Wed Feb 5 22:23:00 2003@@ -89,7 +89,16 @@ typedef struct { unsigned long pgprot; } * and CONFIG_HIGHMEM64G options in the kernel configuration. */ -#define __PAGE_OFFSET (0xC0000000)+#include <linux/config.h>+#ifdef CONFIG_05GB+#define __PAGE_OFFSET (0xE0000000)+#elif defined(CONFIG_1GB)+#define __PAGE_OFFSET (0xC0000000)+#elif defined(CONFIG_2GB)+#define __PAGE_OFFSET (0x80000000)+#elif defined(CONFIG_3GB)+#define __PAGE_OFFSET (0x40000000)+#endif /* * This much address space is reserved for vmalloc() and iomap()diff -urpN -X /home/fletch/.diff.exclude 21-config_hz/include/asm-i386/processor.h 22-config_page_offset/include/asm-i386/processor.h--- 21-config_hz/include/asm-i386/processor.h Thu Jan 2 22:05:15 2003+++ 22-config_page_offset/include/asm-i386/processor.h Wed Feb 5 22:23:00 2003@@ -279,7 +279,11 @@ extern unsigned int mca_pentium_flag; /* This decides where the kernel will search for a free chunk of vm * space during mmap's. */+#ifdef CONFIG_05GB+#define TASK_UNMAPPED_BASE (PAGE_ALIGN(TASK_SIZE / 16))+#else #define TASK_UNMAPPED_BASE (PAGE_ALIGN(TASK_SIZE / 3))+#endif /* * Size of io_bitmap in longwords: 32 is ports 0-0x3ff.diff -urpN -X /home/fletch/.diff.exclude 21-config_hz/mm/memory.c 22-config_page_offset/mm/memory.c--- 21-config_hz/mm/memory.c Mon Jan 13 21:09:28 2003+++ 22-config_page_offset/mm/memory.c Wed Feb 5 22:23:00 2003@@ -101,8 +101,7 @@ static inline void free_one_pmd(struct m static inline void free_one_pgd(struct mmu_gather *tlb, pgd_t * dir) {- int j;- pmd_t * pmd;+ pmd_t * pmd, * md, * emd; if (pgd_none(*dir)) return;@@ -113,8 +112,21 @@ static inline void free_one_pgd(struct m } pmd = pmd_offset(dir, 0); pgd_clear(dir);- for (j = 0; j < PTRS_PER_PMD ; j++)- free_one_pmd(tlb, pmd+j);+ /*+ * Beware if changing the loop below. It once used int j,+ * for (j = 0; j < PTRS_PER_PMD; j++)+ * free_one_pmd(pmd+j);+ * but some older i386 compilers (e.g. 
egcs-2.91.66, gcc-2.95.3)+ * terminated the loop with a _signed_ address comparison+ * using "jle", when configured for HIGHMEM64GB (X86_PAE).+ * If also configured for 3GB of kernel virtual address space,+ * if page at physical 0x3ffff000 virtual 0x7ffff000 is used as+ * a pmd, when that mm exits the loop goes on to free "entries"+ * found at 0x80000000 onwards. The loop below compiles instead+ * to be terminated by unsigned address comparison using "jb".+ */+ for (md = pmd, emd = pmd + PTRS_PER_PMD; md < emd; md++)+ free_one_pmd(tlb,md); pmd_free_tlb(tlb, pmd); } -To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at | https://lkml.org/lkml/2003/2/17/255 | CC-MAIN-2016-07 | refinedweb | 1,147 | 56.25 |
BlackBerry Java Application Development
Beginner’s Guide
The book teaches how to write rich, interactive, and smart BlackBerry applications in Java. It expects the readers to know Java, but not Java ME or the BlackBerry APIs. This book will cover UI programming, data storage, networking, and the Internet APIs. As we move on, you will learn more about the BlackBerry's device features, such as messaging, GPS, multimedia, contacts and calendar, and so on. This book also helps you build your own applications to illustrate the platform and the various capabilities that developers can use in their programs.
What This Book Covers
Chapter 1, Introducing BlackBerry Application Development gets you started by talking about the capabilities of a BlackBerry smartphone and what kind of things can be done with them in a custom application. It talks about the other tools which are available and why writing native Java applications by using the BlackBerry SDK is the most powerful and practical approach to developing applications. Finally, it covers how to select which version of the SDK to use and when you might want to use an older version of the SDK instead of the latest.
Chapter 2, Installing the Development Environment steps you through the process of installing the proper versions of Java and Eclipse. This chapter talks about when to install additional versions of the SDK and how to do so through the Eclipse over-the-air update tool as well as how to install them manually.
Chapter 3, Getting Familiar with the Development Environment starts off the learning process by importing an existing sample application—the standard “Hello World” application. After importing the project, the chapter will go over this simple application line-by-line. Afterwards, you will run the application in the simulator and then introduce a bug into the application so that you can debug it as well.
Chapter 4, Creating your first BlackBerry Project is where you create a new project from scratch. This chapter demonstrates how you accomplish this using Eclipse and the various wizards that are available within it. It also demonstrates how you can create a simple, but complete application quickly using the User Interface (UI) elements provided by the framework.
Chapter 5, Learning the Basics about the UI creates an application to demonstrate each of the UI elements that are available to you when using the BlackBerry SDK. This sample application demonstrates how to set and retrieve data from each field and discusses when each field should be used according to the BlackBerry development guidelines. By demonstrating each field, you will get a complete understanding of the capabilities of each field.
Chapter 6, Going Deeper into the UI picks up where the previous chapter leaves off by demonstrating how to use some of the advanced fields, such as lists and trees. It also covers navigation between screens, displaying dialogs, and common patterns used in the BlackBerry SDK. By the time you are done with this chapter, you will be well equipped to create the entire UI for an application.
Chapter 7, Storing Data jumps right into how to use the data storage tools of the SDK and when it is appropriate to use each one. This covers the Java standard RMS, the BlackBerry-specific PersistentStore, and even how to access the removable media cards that are available on some devices.
Chapter 8, Interfacing with Applications covers how to integrate with the standard PIM applications on the handheld, such as the Address Book, Calendar, Tasks, and Notes, using the JSR-75 classes and the BlackBerry-specific extensions to read, create, and edit PIM data.
Chapter 9, Networking wades into the complex but important area of how to make an application network aware. Here, you will discover what transports are available, how to open connections, and how to send data through them. The sample also demonstrates how to communicate with a simple web service and parse the resulting XML data.
Chapter 10, Advanced Topics covers two distinct but powerful topics. The first topic is how to utilize the GPS receiver that is built in to some smartphones in order to get location information. You will learn about the various methods that can be used to get location information and how to do some common calculations using these coordinates. The other topic in this chapter covers how to use alternate entry points so that a single project can be used to launch multiple applications. Because these applications share a common project, they can share code and even memory.
Chapter 11, Wrapping It All Up finishes the book with tasks that commonly are done last, such as localization with language resource files and code-signing your application so that it can be installed on real devices. You will also learn what it takes to distribute your new application through the BlackBerry App World marketplace.
Interfacing with Applications
Now that we’ve covered some of the basics of application development it’s time to expand our horizons a bit.
The BlackBerry handhelds come pre-loaded with many great programs to help make a person more productive. While messages may be the most common reason a person will purchase a BlackBerry, the other Personal Information Management (PIM) applications often quickly become essential as well.
As a developer, you cannot ignore the other applications on the handheld. The more integrated an application can be with these standard applications, the better the user experience will generally be. Our TipCalc application is very specialized, and one of the few that works well without integrating with other applications. More often than not though, any applications that you create will benefit from some level of integration.
Not only can you interface with these applications by adding or editing content in them, you can also listen for events that allow you to react to things that happen. Some of the applications even allow you to add menu items and other “active content” to them. That’s a lot to talk about, so we’ll just focus on some of the most common tasks to get you started in this chapter.
Introducing PIM
The first area that we will take a look at is the Personal Information Management, or PIM, applications and data. "PIM applications" is a rather generic name for a group of tools that manage your personal information, especially as it relates to your handheld. This could be stretched to include a lot of things, but it generally means your messages, contacts, and scheduling information that help you to manage your life. In BlackBerry terms it means the Messages, Address Book, Calendar, Tasks, and Notes.
Access to the PIM data in the BlackBerry SDK is provided through the JSR-75 specification, which is a Java standard. Like many of the Java standards in the BlackBerry SDK, there are also BlackBerry-specific extensions available that expand the basic classes with new BlackBerry-specific functionality.
Like many of the other standards we find in Java, JSR-75 implements a factory pattern where one class, in this case the PIM class, is used to create objects for the other more specific types of PIM data. The PIM class can basically do only one thing and that is to retrieve a PIMList object that contains a bunch of specialized PIMItem objects.
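As a minimal sketch of this factory pattern in action, the following opens the address book and walks through its items using only the standard javax.microedition.pim classes; it assumes the application has been granted the usual PIM permissions on the device:

import java.util.Enumeration;
import javax.microedition.pim.Contact;
import javax.microedition.pim.ContactList;
import javax.microedition.pim.PIM;
import javax.microedition.pim.PIMException;

public class PimFactoryDemo {
    public static void listContacts() {
        try {
            // The PIM singleton is the factory; ask it for the contact list.
            ContactList contacts = (ContactList) PIM.getInstance()
                    .openPIMList(PIM.CONTACT_LIST, PIM.READ_ONLY);
            // items() returns an Enumeration of PIMItem objects.
            Enumeration e = contacts.items();
            while (e.hasMoreElements()) {
                Contact c = (Contact) e.nextElement();
                // Do something with each contact here.
            }
            contacts.close(); // Always close the list when finished.
        } catch (PIMException ex) {
            // Thrown if the list cannot be opened or read.
        }
    }
}

The same pattern works for the other kinds of PIM data by passing PIM.EVENT_LIST or PIM.TODO_LIST instead of PIM.CONTACT_LIST.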
Why is all so generic?
All of these PIM classes may seem very generic and you would be absolutely correct. They are generic and they are supposed to be that way. PIM data is a very generic concept so the implementation is very generic as well. Also, because it is a Java standard, it needs to be flexible enough to accommodate any system that it might be implemented on. A perfect example of this kind of flexibility is the BlackBerry PIN field. The BlackBerry PIN is an entry in your address book and therefore, it should be included in the PIM data that you get. However, a PIN is a BlackBerry-specific concept and no other device out there will use it.
You can’t really expect the Java standard to include specialized fields for every possible piece of data that some device may have or want to include. The answer to this is to present PIM data in a key-value pairing so that it is flexible enough to handle every possible scenario.
A key-value pairing is a somewhat technical term to describe the pattern for storing values based on a static key. Or, more simply, if you know the proper key you can access the value. The flexible part is that the PIM object storing all of the values does not need to know about each specific value or provide any special mechanism for accessing each specific value. All access is done through generic methods, which also require the key. The difficulty in using this kind of approach is that the keys must be common knowledge. In addition, simple numeric keys do not support self-documenting code or even easily readable code. Keys that are string values offer a lot of advantages in that they are much more readable, but the possibility for mistakes is very great because you don't have the compiler to help ensure that only correct keys are used.
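To make the readability problem concrete, here is a small sketch; the magic number in the first method is assumed to stand for Contact.EMAIL, but nothing in the code says so, and the compiler cannot catch a wrong number:

import javax.microedition.pim.Contact;
import javax.microedition.pim.PIMItem;

public class RawKeyDemo {
    // Purely generic, key-value access. The magic number is assumed here
    // to match Contact.EMAIL; if it is wrong, the compiler stays silent
    // and the code simply misbehaves at runtime.
    static String firstEmailUnsafe(PIMItem item) {
        return item.getString(103, 0); // Which field is 103? Who knows.
    }

    // The same call with a named constant documents itself, and a typo
    // in the constant name becomes a compile error instead of a bug.
    static String firstEmailSafe(Contact contact) {
        if (contact.countValues(Contact.EMAIL) > 0) {
            return contact.getString(Contact.EMAIL, 0);
        }
        return null;
    }
}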
To help solve these issues there are derived classes for each type of PIM item. While you can do nearly everything by using the generic classes that a derived class would offer, i wouldn’t recommend it. These classes are here to make your code easier to write and read, and should be used.
PIMLists
As we said early on, the PIM class is used primarily to retrieve a PIMList object for a specific kind of PIM item, that is, address book contact, calendar event, and so on. For each of these types, there is also a specialized class that you can use instead of the generic PIMList class. Classes such as ContactList, EventList, and ToDoList offer a specialized version of the more generic PIMList class. These specialized classes are also part of the Java standard and should be preferred because they off er a few more methods which are specific to that kind of data.
There are BlackBerry-specific versions of these classes as well. Therefore, the BlackBerryContactList class is the BlackBerry-specific version of the ContactList, which is in turn a specialized version of PIMList for contact data. Generally speaking, you will want to use the BlackBerry-specific versions of PIMList classes when making your applications.
PIMItems
A PIMItem is the generic representation for any piece of PIM data. Just like there are specific versions of the PIMList, there are also specific versions of the PIMItem class for each kind of PIMItem. Contact, Event, and ToDo are all specific versions of a PIMItem for that kind of PIM data. As you might expect, there are BlackBerry-specific versions as well. BlackBerryContact, BlackBerryEvent, and BlackBerryToDo all exist to extend and further specialize the standard Java classes.
These specialized classes aren’t as specialized or easy to use as one might expect though. Providing a method called getFirstName might be really useful, but unfortunately, you will find nothing of the sort. The specialized classes off er few methods for accessing data. Instead, they provide static values for the keys used to set and retrieve data from the PIMItem class . Remember, earlier we noted that one drawback to using this kind of key-value pairing was that keys were sometimes not clear and that you could not expect help from the compiler. By providing each key value in the specialized class, both of these goals are accomplished. The name of the key value now provides a readable name and the compiler will flag an error if there is a typo or problem with the constant value being used.
Another aspect of PIMItem is that each value that an item has a specific type associated with it as well. Some of these are obvious, such as the start date of an event using a Date type. Some of them, such as the Name field of a Contact that requires an array, are not. Some fields can be given a subtype as well, such as the Phone field. With the subtype you can specify what kind of phone number it is: home, work, or mobile. Furthermore, some of the fields can have multiple occurrences while others cannot. A good example of this is the Email field in a Contact. A contact is allowed to have up to three e-mail addresses, but there is no subtype associated with them like there is with phone numbers. The bott om line to all this is that the PIM items have a defined structure to them and they won’t allow just any value to be put into a field. The documentation plays a big role here in understanding this because there are no field-specific methods to provide some additional assistance in the proper way to access each field.
Laying the ground work
Still, this is all rather abstract because you haven’t seen any real code samples yet, so let’s get into some code! For this chapter, you will build an application that someone will use to request some time off from their manager. This is definitely one of those applications that just can’t be done without interfacing with other applications on the handheld! To make getting started a litt le easier we will take the starter TimeOff application from the code bundle and add to it throughout this chapter.
The first task to undertake is one to help make testing and debugging easier. Remember, you will be working on the simulator, which is essentially a brand new device and which can be oft en reset. That means you don’t have any of your contacts there! You will need some contacts later, so to get started let’s add a menu item to the application that will create a few contacts that you can later use to test with.
Time for action – creating test contacts
- Modify the _AddTestAddressesAction menu item in the TimeOff project to look like the following completed code.
protected MenuItem _AddTestAddressesAction = new MenuItem( "Add Test Data", 800000, 50) { public void run() { PIM pimInstance = PIM.getInstance(); try { // TODO: Create test contacts BlackBerryContactList contacts = (BlackBerryContactList)pimInstance.openPIMList( PIM.CONTACT_LIST, PIM.READ_WRITE); BlackBerryContact newContact1 = (BlackBerryContact)contacts.createContact(); BlackBerryContact newContact2 = (BlackBerryContact)contacts.createContact(); String[] names = new String[contacts.stringArraySize(BlackBerryContact.NAME)]; names[BlackBerryContact.NAME_FAMILY] = "Smith"; names[BlackBerryContact.NAME_GIVEN] = "John"; if (contacts.isSupportedArrayElement(Contact.NAME, Contact.NAME_SUFFIX)) { names[BlackBerryContact.NAME_SUFFIX]="Jr"; } newContact1.addStringArray( BlackBerryContact.NAME, BlackBerryContact.ATTR_NONE, names); names[Contact.NAME_FAMILY] = "Doe"; names[Contact.NAME_GIVEN] = "John"; if (contacts.isSupportedArrayElement(Contact.NAME, Contact.NAME_PREFIX)) { names[Contact.NAME_PREFIX] = "Dr."; } newContact2.addStringArray(Contact.NAME, Contact.ATTR_NONE, names); //TODO: Add Phone numbers //TODO: Add Email Addresses //TODO: Add Addresses newContact1.commit(); newContact2.commit(); } catch (PIMException e) { // TODO Auto-generated catch block e.printStackTrace(); } } };
- Then add this line to the constructor to make the menu item available when you run the application.
this.addMenuItem(_AddTestAddressesAction);
What just happened?
This is the first of several baby steps as you work towards the goal of creating some test contacts in the address book in the simulator. As the address book in the simulator doesn’t have any entries to begin with, and can be erased frequently, doing this provides you with a way to quickly and easily create or recreate the contacts you will use later on for testing other parts of this application. It also happens to be a great way to demonstrate how to add contacts.
This first baby step does only a few things. First, it gets a PIMList object for contacts and then creates two new contacts. Aft er this it sets the name of each one and finally commits the records into the address book. These steps make sense at a high level, but let’s take a
look at the details.
The first step is to get an instance to the PIM object, which is done through a static method in the PIM class called getInstance.
PIM pimInstance = PIM.getInstance();
Once you have an instance of PIM, the next step is to get a list of contact items using the openPIMList method on the instance you just retrieved. This same method is used to get a list of any kind of PIM item so you must specify the type of data to get as one of the parameters. The PIM class off ers constant values for every kind of PIM item, so in this case, use the constant PIM.CONTACT_LIST. As you plan to add new contacts, the next parameter needs to be the constant PIM.READ_WRITE so that you have write permissions. It’s always good practice to request the minimum amount of permissions that you need, so if your application doesn’t change or add data to the list you should simply use the PIM.READ permission.
As we touched on earlier, this method returns a generic PIMList type so you also have to cast it to the appropriate specialized type. If a list type of CONTACT_LIST has been specified, you can cast the resulting PIMList to either of the available specialized classes—ContactList or BlackBerryContactList. As long as your application is a BlackBerry-specific application, there is no good reason to use the less specialized class of ContactList. Instead, you should always use BlackBerryContactList.
BlackBerryContactList contacts = (BlackBerryContactList)pimInstance.openPIMList(PIM.CONTACT_LIST, PIM. READ_WRITE);
The next step is to create a couple of new contacts that you will start to populate. This is done through the createContact method available on the ContactList class. Again, you need to cast the resulting objects to the proper type. The createContact method returns a contact, but again you’ve chosen to use the more specialized version of BlackBerryContact instead. Because this is all being executed on a BlackBerry handheld, you can always cast a contact to a BlackBerryContact safely. The same is true for each of the Java specialized classes and their corresponding BlackBerry specialized class.
BlackBerryContact newContact1 = (BlackBerryContact) contacts. createContact(); BlackBerryContact newContact1 = (BlackBerryContact) contacts. createContact();
The next segment of code sets the name att ribute of the newly created contacts. Notice that this is done through an array of String objects instead of individual methods. This isn’t something that is done to be more efficient, it is done this way because it must be; there is no other way.
We mentioned before about each field in a PIMItem having a type associated with it. Most of the field types are basic String or Date type fields, but NAME is more complicated than most of the other fields. The NAME field is defined as a StringArray because there are many parts to a name and you want to be able to set each part separately. There aren’t very many fields of this type used, but this is probably one of the most important. You can only set the NAME field as a whole unit, so if only one part of the name needs to be changed the entire name field must be replaced.
To work with the name, you must first create a string array of the proper size. There is no constant value for this as it may vary with the SDK version. Instead, you must first get the size by using the stringArraySize method on the ContactList and then construct a new array by using the returned value.
String[] names = new String[contacts.stringArraySize(BlackBerryConta ct.NAME)];
Once you have an array of the proper size each part of the name is set by indexing the array by using the NAME constant from the Contact class.
names[BlackBerryContact.NAME_FAMILY] = "Smith"; names[BlackBerryContact.NAME_GIVEN] = "John";
In this example, you also want to add another name part but are not sure whether the field is supported in this system. Not all fields are supported and not all of the name subfields are supported either. You can test to see whether a field or a subfield is supported by using the
isSupportedField or isSupportedArrayElement method s in the ContactList class. In this case, you test to see if the suffix is supported, and then set the suffix if so.
if (contacts.isSupportedArrayElement(Contact.NAME, Contact.NAME_SUFFIX)) { names[BlackBerryContact.NAME_SUFFIX]="Jr"; }
This step is very important if you want to use the same code for multiple platf orms. Each system can support the fields it chooses. In this case, the suffix is NOT supported and if you were to step through this code in the debugger, you would see that the code to set the suffix is skipped over. Later on, when you test this application, you will also see that the suffix was not added to the contact. Other platforms may implement it differently. You could just assume each of the name subfields are supported and set the field without testing to see if it is supported. In the BlackBerry SDK, unsupported fields are just quietly ignored. This can lead to confusion wondering why a field doesn’t appear in the Address Book application, but it won’t cause an error.
The next step is to actually add the NAME field to the contact. Up until this time you’ve simply been building an array in memory with all of the proper values.
newContact1.addStringArray( BlackBerryContact.NAME, BlackBerryContact.ATTR_NONE, names);
Notice that the method addStringArray doesn’t give any indication about what field is being added, but only what type of data is being added. All of the PIMItem methods are like this. Remember, this class is designed to be generic. The first parameter is the field indicator, which is one of the many constants that are defined in the Contact class. In this case, we use the BlackBerryContact class . Because BlackBerryContact derived from Contact, all of the constant values are accessible. The BlackBerryContact class does define some constants that are BlackBerry-specific, such as PIN. For this field you must reference the constant value from BlackBerryContact because the Java standard Contact class does not define it. Partly for this reason, isuggest always referencing constant values from BlackBerryContact because all of the constant values will be available through this class. The method addStringArray was chosen because that is the type of data that you are adding. The NAME field is defined as a string array and so you must use the addStringArray method because it corresponds to the data type of the field. Once you finish with the first contact, the code starts building the NAME string array to add a second contact. For demonstration sake, all of the constant values that are referenced are done so using the Contact class instead of the BlackBerryContact class.
names[Contact.NAME_FAMILY] = "Doe"; names[Contact.NAME_GIVEN] = "John"; if (contacts.isSupportedArrayElement(Contact.NAME, Contact.NAME_PREFIX)) { names[Contact.NAME_PREFIX] = "Dr."; } newContact2.addStringArray(Contact.NAME, Contact.ATTR_NONE, names);
Also, notice that the second contact applies a prefix to the name and tests to see if it is supported in the same way as you did for the suffix when adding the previous contact. However, the prefix is supported and if you were to step through this method in the debugger, you would see that the prefix is being set properly. The last step you have to do is to commit the data that has been added to the contact.
newContact1.commit(); newContact2.commit();
Simply creating a new contact is not enough; you must commit the changes in it by using the commit method. Creating a contact and then never committing it will not have any effect on the Address Book application. It simply won’t be shown in the list. That’s the whole point of this exercise, so you have to make sure and commit the changes once they are all done.
At this point, if you were to run the application and select the menu, you would see two new contacts added to the Address Book application in the simulator. They would show up as Dr. John Doe and John Smith. There would be only names with these contacts because that is all that you’ve added so far.
In the example code that you just stepped through there was one mistake that could have proven to be very serious. Did you catch it? You are reusing the names array to set the names of both contacts. This is actually risky, but it happens to work out in this case. If the SUFFIX field had been supported then your Dr. John Doe would have actually been Dr. John Doe Jr. because the array was not reset before it was used again. If you had changed the order around, John Smith would have been Dr. John Smith. This might have lead to a bug that could have been tough to track down, so keep it in mind.
Expanding your test contacts
Being able to create a contact is nice, but you really need for them to have more information than just a name in order to be useful. So, for the next step, add some telephone numbers to the contacts. If you are familiar with the Address Book application already, and you should be, then you know that a contact can have multiple phone numbers—each with a diff erent role or locations. Aft er having seen how the NAME field is handled, you may well assume that the telephone field, named TEL, would operate in the same manner; as a StringArray type. You would be wrong though.
When it comes to the TEL field each field is a simple String and is given an additional attribute defining which kind of phone number it is. Let’s take a look at this with the following code.
Time for action – adding telephone numbers
- Add the following code to the run method of the AddTestContacts menu item.
//TODO: Add Phone numbers newContact1.addString(BlackBerryContact.TEL, BlackBerryContact.ATTR_MOBILE, "555-555-1212"); newContact2.addString(Contact.TEL,Contact.ATTR_HOME, "555-555-1234"); newContact2.addString(Contact.TEL,Contact.ATTR_FAX, "555-555-9999"); newContact2.addString(Contact.TEL, Contact.ATTR_MOBILE,"555-555-1313"); // This is bad! newContact2.addString(Contact.TEL,Contact.ATTR_FAX, "555-555-1414");
What just happened?
Working with phone numbers may look very straightf orward, but there are many pitf alls to look out for here as well. Each phone number is added using the addString method , which should look similar to the addStringArray method that you just worked with. Again, the proper method to add the field is the one that matches the fields’ type and has nothing to do with the name of the field or its function.
Also, like the previous example, the first parameter to the method is the constant value defining which field is being added, which in this case is TEL. Following the field indicator though is the important part, called the field attribute. The attributes are used as a way of providing additional information about a field. In this case, that information defines what kind of phone number is being added. Every add method requires an attribute field , but this doesn’t always make sense for some fields. When this is the case, the att ribute ATTR_NONE should be used. The phone number field though requires an att ribute. Att empting to use ATTR_NONE will in fact be treated the same as ATTR_HOME.
newContact1.addString(BlackBerryContact.TEL, BlackBerryContact.ATTR_MOBILE, "555-555-1212");
Now that you’ve covered the basics, this line to add a mobile telephone number to John Doe
should be self explanatory.
newContact2.addString(Contact.TEL,Contact.ATTR_HOME,"555-555-1234"); newContact2.addString(Contact.TEL,Contact.ATTR_FAX,"555-555-9999"); newContact2.addString(Contact.TEL,Contact.ATTR_MOBILE,"555-555-1313");
Now, as John Smith is a doctor, he will have many more ways to be reached, but you can see that this also presents no problems for the Contact class . Again, this example uses the Contact class to reference the constant values instead of the BlackBerryContact class.
This is fine, but there are some phone attributes which are not available from the Contact class. The Java standard Contact only provides for six telephone types, but the BlackBerry
Address Book application will allow eight. So what is the diff erence? The Home2 and Work2 numbers are specific to BlackBerry. The constants for these att ributes can be found only in the BlackBerryContact class. As long as you use one, and only one, of each att ribute, you will be in good shape, but things get really messy when you try to add a number with the same att ribute again.
// This is bad! newContact2.addString(Contact.TEL,Contact.ATTR_FAX,"555-555-1414");
You may expect that adding a number with the same att ribute would simply replace any existing value, but this is not the case. Instead, the system will add the number into the “next available slot”. The results can be quite confusing because in this case, the number will be placed into the PAGER att ribute because FAX is already populated. If the PAGER already had a value, then it would be added to some other field that didn’t have a value.
Adding another phone number with the same att ribute as an existing number yields unpredictable results. The corollary to this is that if you are editing a contact in your application you can’t just blindly add the TEL fields. You must first either remove the old field by using the removeValue method, or change the value by using the setString method. Either way, things are going to be a lot more complicated. You would think that there would be methods for getting, setting, or removing values that match those for adding values, but this is not the case either. There are getString , setString , and removeValue methods , but all of the methods rely on an index value and not the att ribute that was originally supplied. This just reinforces the idea that att ributes are simply meant to provide additional information about the field and nothing more.
So, if you wanted to change one of the TEL fields you first need to find the field with the proper att ribute out of the list of TEL fields. Remember that this field is not an array though, so even though there may be up to eight values, there may be as few as 0 and the order in which they are stored apparently has nothing to do with their attribute. The following code fragment should serve as an example of how to update the ATTR_HOME phone number.
// The proper way to change a value int count = newContact2.countValues(Contact.TEL); for (int i= 0; i< count; i++) { if ((newContact2.getAttributes(Contact.TEL, i) & Contact.ATTR_HOME) == Contact.ATTR_HOME) { newContact2.setString(Contact.TEL, i, Contact.ATTR_HOME, "555-555-4321"); } }
To start off , you can get the number of values in the field by calling the countValues method . Once you know how many are there, a simple for loop is used to test each one. In order to see if a value has the ATTR_HOME attribute, you first get all of the attributes of the value by calling the getAttributes method . Some attributes can be combined using a bitwise OR, so to check for a specific attribute you must test with the bitwise AND, which effectively removes just the desired attribute from any other combined att ributes. Once you know that a value has the ATTR_HOME attribute, you can call setString by using the index value from the loop to change the value.
Like is aid, it’s a lot more complicated and as a result, i think it is safe to suggest that you not do it unless you really need to. There will always be special cases where such a thing is desired, but in general, a user should be in charge of their contacts and not your application. Besides, there is no way to stop a user from changing or deleting a contact that has been programmatically added.
Expanding even more
After tackling the relatively complicated fields of NAME and TEL, the EMAIL field should be a lot easier. Making sure that your contacts have an e-mail address is the whole point of this task because these will be used later on when testing another feature. Knowing that the Address Book application allows you to have up to three e-mail addresses, how would you think they are implemented in the PIM? Is it implemented as a String with multiple values (such as the TEL field), or as a StringArray (such as the NAME field)? If you chose a String with multiple values, you chose correctly!
The EMAIL field actually fits well with this concept of a String field with multiple values. Unlike the TEL fields, there are no attributes that are attached to each of the e-mail addresses, so adding one simply adds it to the list and there is nothing more to be concerned about. So let’s look at some code already!
Time for action – adding e-mail addresses
- Add the following code to the run method of _
AddTestAddressesAction under the proper comment line.
//TODO: Add Email Addresses newContact1.addString(Contact.EMAIL,Contact.ATTR_NONE, "[email protected]"); newContact2.addString(Contact.EMAIL,Contact.ATTR_NONE, "[email protected]"); newContact2.addString(Contact.EMAIL,Contact.ATTR_NONE, "[email protected]"); newContact2.addString(Contact.EMAIL,Contact.ATTR_NONE, "[email protected]");
What just happened?
Wow, that code segment looks really easy. The lines are all practically the same! Each line specifies the field as EMAIL and the att ributes as ATTR_NONE, so there isn’t much more to talk about except for the “what-ifs”.
Much like the TEL, things get a litt le trickier if you want to edit the contact in your application, but not as tricky as they are with the PHONE field. The addresses are stored in the same order that you added them to the field so you can use this fact to your advantage when editing. Because there are no att ributes for the various e-mail addresses, it is best to remember the index of each value when you read them out. In this way, when you need to change the value you can use the setString method and provide the index of field. Alternatively, you can remove one of the EMAIL values and add a new one, but this will also change the order of the values in the field. Values that are added are always placed at the end of the list.
The biggest thing to watch out for here is that you don’t add a fourth EMAIL value to the field. This will cause an exception to be thrown that the application will have to handle. In the case of this application, it just quietly fails and neither of the contacts are added. You can also put bad data into the Address Book application by adding values with bad data. For instance, the Email Address field in the Address Book application has special validation logic to ensure that the value is actually a valid e-mail. If you leave out the @, for instance, the user would see an error and not be allowed to commit the data. All of those safeguards don’t exist when working directly with the PIM. This goes for phone numbers as well as for any other format ed values. It is one more reason that you must be very careful about when working with the PIM directly.
Finishing the test contacts
We have done enough work to solve the initial problem of getting some contacts with e-mail addresses into the Address Book. However, there is one more area of the contact left to tackle—the address itself. The ADDR field is a combination of the NAME and PHONE fields in that each address is a StringArray, but there are multiple addresses which each have an attribute specifying what kind of address it is. Sounds confusing? It is by far the most complex piece of PIM data. Seeing code always helps me, so let’s dive into the code and tackle it head on.
Time for action – adding e-mail addresses
- Add the final code segment under the appropriate comment in the run method of _AddTestAddressesAction.
//TODO: Add Addresses String[] Address1 = new String[contacts.stringArraySize(BlackBerry Contact.ADDR)]; String[] Address2 = new String[contacts.stringArraySize(BlackBerry Contact.ADDR)];"; newContact1.addStringArray(BlackBerryContact.ADDR, BlackBerryContact.ATTR_HOME,Address1); Address1[BlackBerryContact.ADDR_STREET] = "345 Main St."; Address1[BlackBerryContact.ADDR_LOCALITY] = "AnyTown"; Address1[BlackBerryContact.ADDR_REGION] = "AnyState"; Address1[BlackBerryContact.ADDR_COUNTRY] = "USA"; Address1[BlackBerryContact.ADDR_POSTALCODE] = "12345"; Address2[BlackBerryContact.ADDR_STREET] = "20 N Oak St."; Address2[BlackBerryContact.ADDR_LOCALITY] = "AnyTown"; Address2[BlackBerryContact.ADDR_REGION] = "AnyState"; Address2[BlackBerryContact.ADDR_COUNTRY] = "USA"; Address2[BlackBerryContact.ADDR_EXTRA] = "Suite 200"; Address1[BlackBerryContact.ADDR_POSTALCODE] = "12345"; newContact2.addStringArray(BlackBerryContact.ADDR, BlackBerryContact.ATTR_HOME,Address1); newContact2.addStringArray(BlackBerryContact.ADDR, BlackBerryContact.ATTR_WORK,Address2);
What just happened?
This code should be prett y similar to the code that you saw for the NAME field. The first step is to simply create an array of String object s of the proper size and once again relying on the stringArraySize method to give that proper value. The only real difference here is that the field specified in the code is the ADDR field.
String[] Address1 = new String[contacts.stringArraySize(BlackBerryCon tact.ADDR)];
Once you have an array of the right size, setting the data for each part of the address is just a matt er of referencing the right index within the array by using the constants already defined. The names of the constant values for the ADDR field are so generic that they are confusing.";
Maybe, other parts of the world think in terms of “locality” and “region”, but i somehow doubt it. As if those weren’t bad enough, the name ADDR_EXTRA just gives no help at all towards understanding its use. This is just one of those things that you have to remember when working with addresses in a contact though. Once the address is fully populated, adding it to the field is the same as adding a TEL field in that you also have to specify one of the two supported att ributes, either ATTR_HOME or ATTR_WORK.
newContact1.addStringArray(BlackBerryContact.ADDR, BlackBerryContact.ATTR_HOME,Address1);
The whole point to this is that an address can have more than one address in the contact record and the second contact in the example code has been set up in this way. Because the values for the ADDR field are based on their att ribute, the order in which they are added is not important. However, the index is still important when it comes to updating or deleting a value.
The ADDR field also follows the same quirks that the TEL field does. If you add a second address with the ATTR_HOME att ribute, the att ribute will get silently reassigned to ATTR_WORK if one doesn’t already exist. If there are already two addresses in the contact, then an exception will be thrown so it is important to utilize the methods for checking the field before adding new values. Adding new address values isn’t that bad, but changing one is the worst of both worlds. If you recall from working with the NAME field, you cannot change just one element of the StringArray. You have to replace the entire StringArray by calling the setStringArray method. This quirk is somewhat annoying, but now adds the fact that there is more than one value in the list and it can get prett y complex. In order to get the address you want, you have to find it in the same way that you did for the TEL field. The home address will not be in the same index position all the time so you must look at each address in the list (there are only two at most) and test the att ribute to see if it has the attribute you want.
Pop quiz
- What class do you use to get a list of PIM items?
- PimItem
- PimList
- PIM
- When adding a value to a PIMItem, how do you choose the proper method to use?
- Use the method whose name matches the value name, that is, setFirstName()
- Use the method whose name matches the data type of the value, that is, setString()
- Use the setValue()method
- For values that are an array, how do you change the value of one element in the array?
- Use the setStringArrayElement() method.
- Use the setArrayValue() method.
- You can’t set a single element in an array. You must replace the entire array
with a new array of values.
Embedding the address book into your application
Now that you have some test data in the address book it’s time to use it. The purpose of the TimeOff application is to provide a way for an employee in the field to request time off . In order to do this, the application will collect an e-mail address, beginning and ending dates, and a reason for the request through four fields on the screen. When the request is submitt ed through the menu item the application uses these pieces of information to compose an e-mail message that will be sent to the manager by using the e-mail address provided. As we already have a field for the e-mail address, wouldn’t it be nice to be able to allow the user to pick the e-mail address to use from the address book instead of requiring them to re-enter it by hand? You already went over how to get the ContactList from the PIM object. You could create a screen and list all of the contacts on the screen, but this could be quite a long list that would take up a lot of processing time and memory. There are some other techniques you could use to display only a limited list, but even if you went through all of that work you still would be lacking several key pieces of functionality such as searching or even adding a new contact on the fl y. In short, you would almost be reimplementing the AddressBook application!
Wouldn’t it be bett er to just reuse the address book?
Good news, you can do that! The AddressBook application has exposed an interface that lets you leverage it, so you don’t have to solve all of those problems a second time. In addition to saving a ton of work, it also helps to provide a consistent user interface that will be familiar and easy to use.
Time for action – embedding the address book
- You first start by overriding the makeMenu method . With this method you can display the menu to access the Address Book application only when the e-mail field is selected.
- Then, we need to implement the Run method of the _AddressBookAction menu item.
- Add the following code to the addEvent method.
- What is the base data type for items from the Calendar application?
- EventItem
- CalendarItem
- BlackBerryCalendarItem
- What class is used to define events that occur more than once?
- Recurrance
- RepeatRule
- RepeatingEventItem
- What two values are used to define how oft en an event repeats?
- TIMESPAN and OCCURANCE
- INTERVAL and FREQUENCY
- OCCURANCE and INTERVAL
- It’s time to make this application actually do what it is meant to do—send that email!
Add the following code to the sendRequest method in the TimeOff application.
- What class is used to set the recipient address when sending a message?
- Address
- What method is used to actually send the message?
- Messages.send()
- Transport.send()
if (this.getFieldWithFocus() == _To && context != Menu.INSTANCE_CONTEXT_SELECTION) { m.add(_AddressBookAction); }
PIM pim = PIM.getInstance(); try { BlackBerryContactList contacts = (BlackBerryContactList)pim.openPIMList(PIM.CONTACT_LIST, PIM.READ_WRITE); BlackBerryContact selected = (BlackBerryContact) contacts. choose(); if (selected != null) { int EmailAddressCount = selected.countValues(Contact.EMAIL); // check to make sure that there is an Email address for this contact. if ( EmailAddressCount > 0) { String selectedEmail; // If there is more than just one email, display a dialog to choose if (EmailAddressCount > 1) { String[] Addresses = new String[EmailAddressCount]; int[] Values = new int[EmailAddressCount]; for (int i= 0; i< EmailAddressCount; i++) { Addresses[i] = selected.getString(Contact.EMAIL, i); Values[i] = i; } Dialog dlg = new Dialog("Select which address to use.", Addresses, Values, 0, Bitmap.getPredefinedBitmap(Bitmap.QUESTION)); int selectedAddr = dlg.doModal(); selectedEmail = Addresses[selectedAddr]; } else { selectedEmail = selected.getString(Contact.EMAIL, 0); } TimeOffMainScreen theScreen = (TimeOffMainScreen)UiApplication.getUiApplication() .getActiveScreen(); theScreen._To.setText(selectedEmail); } } } catch (PIMException e) { // TODO Auto-generated catch block e.printStackTrace(); }
What just happened?
The first part of this is all about making the menu item appear when the users want it to be shown, and not when they don’t. It makes sense that when the Email field has focus, the menu will be shown in the list. When any of the other fields are selected, the user probably doesn’t care about selecting an e-mail address from the Address Book application and so the menu should be hidden. By adding a little bit of logic to the already understood makeMenu method you can see how you can improve the user experience simply by choosing to hide or show menu items when it is appropriate. The standard BlackBerry applications do this oft en and effectively in every application on the handheld. It is an established patt ern that you as a developer should give conscious thought to and follow. The next step of this is much more involved with several parts. The first part is just about setting up the PIM object and getting the ContactList (which we’ve done in the previous section). The real magic happens with this simple line of code:
BlackBerryContact selected = (BlackBerryContact) contacts.choose();
The choose method is rather nondescript and plain, but this simple method is the exposed functionality that will display the address book list and prompt the user to select a contact. All of the functionality of the AddressBook is available, including searching by typing a contact name and even adding or modifying an existing contact, and it’s been optimized to be fast and memory efficient.
There are a number of menu items that are NOT present, such as those allowing you to compose messages or initiate calls. As you can see, a lot of thought went into this screen and you should all be thankful that the choose method is available and is able to be reused. It’s not perfect though, and there are several limitations. For instance, you can’t provide any parameters at all to the method so you can’t specify that you want addresses only with an e-mail address to be shown, for instance. Any contact may be selected, and as a result you must handle any situations where the selected contact does not have the information that you desire. In this case, you are interested in collecting only an e-mail address from the contact so you must check to make sure that it has one.
int EmailAddressCount = selected.countValues(Contact.EMAIL); // check to make sure that there is an Email address for this contact. if (EmailAddressCount > 0) { … } else { Dialog.alert("This contact has no Email Addresses."); }
Once you are sure that the contact has at least one e-mail address you need to further examine how many addresses the contact does have. Remember that a contact can have up to three e-mail addresses. If there is just one, then it is very easy to know which one to use. However, if a contact has more than one then the user needs to choose which of them should be used.
String selectedEmail; // If there is more than just one email, display a dialog to choose if (EmailAddressCount > 1) { … } else { selectedEmail = selected.getString(Contact.EMAIL, 0); }
If there is more than one e-mail address in the selected contact you need to present a dialog box listing all of the addresses available and allow the user to choose the correct one. You haven’t seen this form of a dialog before, so let’s cover it now. One of the standard forms of the dialog class is one that just forces the user to select a value from a list of values. This constructor for the form of dialog requires two parameters—an array of objects and an array of Integer values. Therefore, the first step to creating this dialog is to create arrays that contain the same number of elements as the number of e-mail addresses in the contact. The integer value can be any value that makes sense to your application, but in this case you really just want the index position of the selected address in the list.
String[] Addresses = new String[EmailAddressCount]; int[] Values = new int[EmailAddressCount]; for (int i= 0; i< EmailAddressCount; i++) { Addresses[i] = selected.getString(Contact.EMAIL, i); Values[i] = i; }
Once these arrays are created, the dialog can be displayed. The doModal method is used to display the dialog in a modal style, which just means that the application will wait until the dialog has been dismissed before continuing. Alternatively, there is a show method that will display a modeless dialog, which just means that the application will not wait for a response before continuing to run.
Dialog dlg = new Dialog("Select which address to use.", Addresses,Values,0,Bitmap. getPredefinedBitmap(Bitmap.QUESTION)); int selectedAddr = dlg.doModal(); selectedEmail = Addresses[selectedAddr];
The value returned by the doModal call is one of the values supplied in the Values array—the value that corresponds to the index of the selected item returned. In this example, if the second item in the dialog was selected the return value of doModal would be 1. Once you have an e-mail address, the last action is to set the text of the _To field to be that address. Remember, earlier we mentioned that if you insert contacts directly you will bypass all of the formatting and validation that would otherwise normally happen. The same holds true here—inserting an e-mail address by using the setText method will bypass any validation and formatting that would normally happen with the EmailAddressTextField.
Adding the event to your calendar
The next step would be to actually submit the request. There are actually two different things that we want to happen when a user submits the request. The first is that, we will assume that the request will be approved and go ahead and add the time off to your calendar as a new event. Second, we will need to compose and send the e-mail request. This second part we will cover in more detail later. For now, let’s stay focused on the PIM data and add an event. Working with the calendar is very similar to working with contacts. Items in the calendar are generically called events and are also included in the umbrella of PIM items. As a result, the process begins with the PIM class once again. The similarity continues with the Java standard Event class and the BlackBerry-specific BlackBerryEvent class. With that understanding, let’s get started looking at some code.
Time for action – adding an event to the calendar
PIM pim = PIM.getInstance(); EventList events; try { events = (EventList)pim.openPIMList(PIM.EVENT_LIST, PIM.READ_WRITE); BlackBerryEvent newEvent = (BlackBerryEvent) events.createEvent();()); newEvent.addBoolean(BlackBerryEvent.ALLDAY, BlackBerryEvent.ATTR_NONE, true); newEvent.addInt(BlackBerryEvent.FREE_BUSY, BlackBerryEvent.ATTR_NONE, BlackBerryEvent.FB_BUSY); newEvent.commit();); RepeatRule repeat = new RepeatRule(); repeat.setInt(RepeatRule.FREQUENCY,RepeatRule.DAILY); repeat.setInt(RepeatRule.INTERVAL, 1); reminder.setRepeat(repeat); reminder.commit(); } catch (PIMException e) { // TODO Auto-generated catch block e.printStackTrace(); }
What just happened?
The menu item Submit request uses two helper methods to do the work of submitting the request. The menu item and these methods are already stubbed out in the starter code so this section is devoted to implementing the second step of the submit process—adding an event to the calendar.
To get started, use the same general steps that you did when working with the contacts. Calling the same openPIMList method with a diff erent parameter, PIM.EVENT_LIST, gives you a list of events instead of a list of contacts. The createEvent method on this list is used to establish a new Event object to start populating.
PIM pim = PIM.getInstance(); EventList events; try { events = (EventList)pim.openPIMList(PIM.EVENT_LIST, PIM.READ_ WRITE); BlackBerryEvent newEvent = (BlackBerryEvent) events.createEvent();
The next step is to add each piece of data to the event. Unlike the contact, events do not have complicated data types at all. There is only one field with multiple values and none of the fields use a StringArray type. They are all basic types like you’ve seen before. You do see several new types such as Date and Boolean, but setting and getting values from these is not any different then setting or getting String values; just use the right method for the field.
Also, like contact, most of the field indicator constants are available in the Event class, but there are a few that are available only through the BlackBerryEvent class. For this reason, istill recommend using the BlackBerryEvent for all field identifier constants. The code starts off by setting some constant values to the SUMMARY and LOCATION fields.
On a BlackBerry, these are the first two fields on the screen where a new event is entered in the Calendar application. The SUMMARY field is labeled as Subject in the calendar application. It didn’t make sense to prompt for these values in TimeOff request (does your boss really need to know where you sister’s wedding will be?), so they are just set to some static values. Presumably, the user can change them later. Aft er that, the start and end dates are set to the dates which have been entered into the fields on the screen. The NOTE field follows and is populated with the Comments field from the entry screen.());
The last thing you do is to set two other fields that make the whole thing a little more polished, the All Day indicator and the Free or Busy fl ag. When the All Day indicator is set to true the time portions of the START and END date time values are ignored because the event will take the entire day. The start time is eff ectively midnight of the start day and the end time is the next midnight aft er the end day. Because the TimeOff application doesn’t even allow a time portion to be entered, you can just hardcode the ALLDAY field to be true. The FREE_BUSY field uses special constants that are defined in BlackBerryEvent to determine the status. These constants all begin with FB_ to define the status. As you are asking for the
entire day off , you will set the FREE_BUSY field to FB_BUSY.
newEvent.addBoolean(BlackBerryEvent.ALLDAY, BlackBerryEvent.ATTR_NONE, true); newEvent.addInt(BlackBerryEvent.FREE_BUSY, BlackBerryEvent.ATTR_NONE, BlackBerryEvent.FB_BUSY);
After setting all of the fields up for your new event, the last step is to commit (just like you did when adding the contacts).
newEvent.commit();
Recurring events
Events do have one thing that is diff erent than contacts and which can be rather confusing—a recurring event. A recurring event is an event that is set up once and shows up many times in your calendar at regular intervals. An example might be a regular Monday morning conference call at work. You don’t really want to enter that event into your calendar every week, but you do want it to show up on your calendar so that other people don’t try to set up another meeting at the same time. The event you just requested time off for doesn’t need to be a recurring event. Let’s create another event to remind you to check for approval of your request in order to demonstrate this aspect of the Calendar. The initial steps to create the event are the same as before. You won’t supply as much information because this is just a reminder.);
The special math for the START and END fields is there to create the event 24 hours from now. The long integer value that represents date and time is actually the number of milliseconds from January 1, 1970 (commonly called the epoch for Unix time). The really big number being added to it actually the number of milliseconds in 24 hours (24 hrs * 60 minutes/hour * 60 seconds/min * 1000 milliseconds/second).
That bit of code only sets up one event for the next day. The real work of a recurring event is done by the RepeatRule class —the class that holds the logic for how an event is to repeat. Your example of a reminder repeating each day is very simplistic. You might need a rule that
describes an event that repeats every seven days instead of everyday. Or how about events that happen on the third Tuesday of the month? With just these few examples you can see that there can be some very complicated rules about how an event repeats.
RepeatRule repeat = new RepeatRule(); repeat.setInt(RepeatRule.FREQUENCY,RepeatRule.DAILY); repeat.setInt(RepeatRule.INTERVAL, 1); repeat.setDate(RepeatRule.END, new Date().getTime()+(86400000 * 7));
This format should look very familiar now because it follows the same pattern that all of the PIM items do. It is not, however, derived from PIMItem itself. As a result, you should never use any of the constant values that are defined in BlackBerryEvent or any other PIMItem with a RepeatRule. All of the constants that are needed will be defined with the RepeatingRule class itself.
There are two basic parts of a RepeatRule though, and those are the frequency with which something repeats and the interval of time between events. In this case, you set the FREQUENCY field to the constant RepeatRule.DAILY because the time between events is counted in days. The INTERVAL field is set to 1 because we want this event to happen again aft er just one day. These two values together indicate that the event should repeat every day. How would you represent a biweekly meeting? The FREQUENCY would be set to WEEKLY and the INTERVAL would be set to 2. There are a great many possibilities, but this example should get you started understanding how the two fields work together.
The third line in your RepeatRule sets up the date that the event should stop repeating. In this case, the last day to prompt the user is seven days from the current time. This is an arbitrary number for the sake of this example, but the thought is that if the user’s request hasn’t been resolved in 7 days then you should stop checking on it. Once the RepeatRule is set up, you still need to apply the rule to the event by calling the setRepeat method . Afterward, the only thing left to do is to commit the event to the Calendar application.
reminder.setRepeat(repeat); reminder.commit();
There is so much more that you can do with a RepeatRule. In addition to some of the more complex repeating patt erns, you can add dates which are to be excluded from the RepeatRule as well. Generally, these aren’t used oft en, so let’s move on to the next step of this application.
Pop quiz
Sending e-mail
The last step that you need to do to finish up the TimeOff application is to actually send the message out making the request. Using the messaging system like this is just scratching the surface of what can be done when working with the messaging APIs. This example will send a message as a regular text e-mail to one recipient. The BlackBerry claim to fame is messaging though, and the APi to work with Messages is extensive. Not only can you send plain messages like this, but you can send more complicated messages like multi-part messages with attachments.
Time for action – sending an e-mail from an application
StringBuffer msgBody = new StringBuffer(); msgBody.append("A new TimeOff application request()); Message newMsg = new Message(); try { Address recipient = null; recipient = new Address(Address.EMAIL_ADDR,_To.getText()); newMsg.addRecipient(Message.RecipientType.TO,recipient); newMsg.setSubject("Time off Request"); newMsg.setContent(msgBody.toString()); Transport.send(newMsg); } catch (MessagingException e) { // TODO Auto-generated catch block e.printStackTrace(); }
What just happened?
The first part of this method is simply creating the body of the message that will be sent. The most interesting part of this code is using the DeviceInfo class to get the device ID, also called a PIN number. You needed some way to identify who the e-mail is from, and the device ID serves this purpose well. Yes, the sender’s e-mail will be in the message itself, but this works well too. The PIN number is typically represented in hexadecimal so you must convert it by using the toHexString method.
StringBuffer msgBody = new StringBuffer(); msgBody.append("A new TimeOff application requestion());
The next section is devoted to actually creating and sending the message. You might expect this to be very complex, but it’s almost embarrassing how simple this code is. The first step is to create a Message object to work with. There are actually three different Message classes available through the BlackBerry API. They all exist in different namespaces and serve diff erent purposes, so this part can be confusing. The Message class that you need here is in the net.rim.blackberry.api.mail.Message package. This Message class is used with e-mail and PIN messages. Fortunately, creating this object is very straightforward.
Message newMsg = new Message();
The next step is to populate the Message object with the recipient. At this point, you should have a valid address in the _To field of the application. You would want to perform basic testing before using the address in a real application, but for now you will just use whatever
is in the field. The message doesn’t want to use just a string value for the recipient though, you need to construct an Address object before adding it to the message. This class doesn’t do any validation either. Instead, its primary purpose is to provide an address along with a “readable name” for it. In this case, you simply specify a readable name of “TimeOff Recipient”. If you chose an address from one of your test contacts instead of manually typing one in, the Message application will match on the address and use the name from the Address Book instead of this readable name provided.
The last step is to add the Address object to the message by using the addRecipient method . This method also requires an identifier constant to indicate what kind of recipient it is. Notice that the Message class has a RecipientType inner class, which in turn contains the constant definitions. This is just another way of organizing the constant values into groups that make sense together. The PIM classes don’t do this and as a result, the constant values are harder to use. The Message class has several inner classes like this.
Address recipient = null; recipient = new Address(_To.getText(), "TimeOff Recipient"); newMsg.addRecipient(Message.RecipientType.TO,recipient);
Aft er setting the recipient of the message, you set the main text of the message in the subject and body. The important thing to note here is that the setContent method can accept a simple string (like you have in this example) or a more complex object called a Multipart. A Multipart is used any time there is more than one piece to a message, such as when a message has an attachment.
newMsg.setSubject("Time off Request"); newMsg.setContent(msgBody.toString());
Aft er the message has been fully constructed, you can finally send it. Here, you see a new class called Transport —a class that handles all of the messy stuff when it comes to integrating with the rest of the system. The send method is as simple as it can be and yet handles so much.
Transport.send(newMsg);
At this point, the message is part of the Messages application and is handled from there just as if the user had typed it up. You can see the message as a sent message by going to the Messages application. It will handle queuing up the message to be sent, retries, failures, and the transmission of the message itself. The TimeOff application simply returns and continues executing without having to worry about any of the technical details of what it takes to actually send the message.
Pop quiz
Have a go hero – sending a different kind of message
There is still one kind of PIMItem that we haven’t discussed at all, and that is a ToDo item from the Tasks application. This PIMItem type doesn’t have any complicated value types like contacts nor does it have any usual aspects like recurring events. It is pretty straightforward once you understand the other concepts from these other PIMItem types. Take this opportunity to explore the ToDo class and expand the TimeOff application to also add a ToDo item to check on the status as well as the reminder events that have already been added.
Summary
Interfacing with other applications on the handheld is the best way to make the applications you write more user-friendly and more likely to be used. The BlackBerry SDK offers many ways by which applications can leverage, extend, or integrate with the existing suite of applications that come with a new device. In this chapter, we primarily focused on accessing the PIM data. We also dabbled a litt le into how to use the messaging systems to send an e-mail message. There is much more that can be done to make applications that don’t just work with the operating system, but which can even become embedded into the operating system so that the use may never know they are a separate application.
In this chapter, we covered:
- What we mean by PIM data.
- How to access a particular list of PIM data.
- How to create new PIM items and how to assign values to their fields.
- How to send an e-mail message.
Interfacing with the PIM data and messaging applications are some of the most common things an application developer will do when creating a new application. Another area that is becoming more and more important is accessing networks through the built-in channels, and in particular, accessing the Internet. This is the area we will be focusing on in the next chapter. | http://javabeat.net/blackberry-java-application-development/2/ | CC-MAIN-2017-04 | refinedweb | 11,080 | 62.17 |
.
// A C++ Program to check if the given // string is a pangram or not #include<bits/stdc++.h> using namespace std; // Returns true if the string is pangram else false bool checkPangram (string &str) { // Create a hash table to mark the characters // present in the string vector<bool> mark(26, false); // For indexing in mark[] int index; // Traverse all characters for (int i=0; i<str.length(); i++) { // If uppercase character, subtract 'A' // to find index. if ('A' <= str[i] && str[i] <= 'Z') index = str[i] - 'A'; // If lowercase character, subtract 'a' // to find index. else if('a' <= str[i] && str[i] <= 'z') index = str[i] - 'a'; // Mark current character mark[index] = true; } // Return false if any character is unmarked for (int i=0; i<=25; i++) if (mark[i] == false) return (false); // If all characters were present return (true); } // Driver Program to test above functions int main() { string str = "The quick brown fox jumps over the" " lazy dog"; if (checkPangram(str) == true) printf ("/"%s/" is a pangram", str.c_str()); else printf ("/"%s/" is not a pangram", str.c_str()); return(0); }
Output :
"The quick brown fox jumps over the lazy dog" is a pangram
Time Complexity: O(n), where n is the length of our stringAuxiliary Space – O | http://www.shellsec.com/news/29320.html | CC-MAIN-2017-13 | refinedweb | 208 | 61.7 |
How to talk about Machine Learning or even Deep Learning without addressing the – famous – gradient descent? There are many articles on this subject of course, but most of the time you have to read several of these ones so as to fully understand all the mechanisms behind the scene. Sometimes too mathematical or not enough! I will try here to explain how it works simply and step by step in order to try to demystify the subject (big deal is’t it ?)
Promise not too much math … but a few, sorry!
If the math formulas are really too bloated for you, no worries, they are only there to explain the overall mechanism of gradient descent. You can therefore pass them quickly … I would say that what should be remembered from this article is: The iterative principle of gradient descent but also and above all the associated vocabulary (epochs, learning rate, MSE, etc.) because you will find them as hyperparameters in many Machine / Deep Learning algorithms.
What is Gradient Descent?
To explain what gradient descent is, you must understand why and where this idea comes from. I really like the image of the man at the top of a mountain who is got lost and must find his village below. But the terrain is steep, there are lots of trees, etc. In short, how to get home? Of course he can try and try again by this direction or the other one, but honestly, it’s not very efficient!
Instead, why not step up to the lowest surface around him? then start again like this for each step.
In fact it’s like selecting the direction which is quite simply given to it by the slope of the ground and in a progressive way.
This is exactly the principle of Gradient Descent. The idea is to solve a mathematical equation but not in a “traditional” way, but by putting in place an algorithm that will move step by step to approach the the most the solution. Why ? and quite simply because it is the only way to solve certain equations which are not unsolvable as they are ! And this will be the case when you have many features to manage in your model.
An equation ? but what is he talking about?
Yes, I did say the annoying word: equation. Let’s just look at an equation as a rule that governs behavior or observations. For example, imagine we measure the filling of a container every minute for 3 minutes. This consequently gives us 3 points, for each point we have on the abscissa (x) the minute, and on the ordinate (y) the volume poured.
Here are the 3 points:
- A (1, 1.2)
- B (2, 1.8)
- C (3, 3.4)
Obviously there is no straight line (which goes through these three points):
With these conditions, it is difficult to know the volume that will be fill in minute 4, 5, etc. We just know that the law is a quite linear rule, or at least it approximates. So let’s try to find this law in order to find out the following values. In fact, this is called a Linear Regression. But here, we will experiment the gradient descent mechanism which will allow us to find our famous rule.
The objective is therefore to find out the parameters a and b of the equation on the right:
Once we have this line equation, it will be extremely easy to find out the values for all minutes like 5, 6, etc. But how to do it ? certainly not by chance! Don’t forget also we have in our case a very simple case (with only 1 feature) but sometimes there can be several hundred (and more) of these features. So we definitely have to find a sucure way to figure out the solution. Remember the man at the top of the mountain, we’ll do the same approach.
The cost function
The cost function is the key as it measures the error between the estimatation we made and the known label (we are in a supervised mode). Let’s say we are going to take it step by step and with each step we take we are going to see if we are far from what we expected: this is the error rate. In our case we will use the MSE cost function (mean squared error) which has the advantage of strongly penalizing big errors:
This function calculates the difference (squared, hence the high penalty mentioned above) of each result compared to the expected label. The values xi and yi are the values of the points (observations) that we have already noted.
Let’s just replace the values with the right equation we expect (y = ax + b), and we get:
This cost function can also be represented by a parabola (squared function).
Find the line closest to the points is also find the minimum of this cost function.
In fact, the cost function measures the sum of the distances between each point observed and the line assumed to be the best. In the diagram below, we measure this distance by the squared sum of the lines in green (e1, e2 and e3 because we have 3 observations):
The result must therefore be as small as possible.
Cost function variation (considering a & b variables)
If your high school math class isn’t too far off, you need to remember:
- A parabola has only one minimum (good news)
- We measure the slope of a curve with a derivative
- We can derive a function / several variables (partial derivatives)
Note: Our case is so simple that we would be tempted to say that our function reaches its minimum when its derivative is zero … but once again we are in a very (too) simple case which has only two variables. In reality it would not be possible to proceed in this way.
Let’s develop our previous equality on the cost function step by step:
Now let’s look at its derivatives according to the two variables that interest us: a and b
Derivative according to a
By deriving according to a we look at the variations of the cost curve according to the values of a (it is a parabola, remember). To find out parameter a we must therefore find the minimum of this derivative function.
Take a look at it step by step:
Derivative according to b
We also look for the value of b, we derive the same equation according to b this time:
Algorithm
We have the two (derived) functions of which we are going to find the smallest value (for a and b). For that we are going to walk these curves (one and the other) and step by step like our man at the top of his mountain.
Each step (epoch) is guided by the learning rate which is in a way the length of the step. We therefore have to take several steps until we find our minimum (see stairs in red in the figure below).
In other words, the idea is therefore, step by step (or rather epoch after epoch) to approach the minimum with respect to the derivatives of our cost functions that we saw in the previous chapter:
2 comments here:
- We have to start from somewhere, so we will initialize with random values a (a0) and b (b0)
- Be careful not to choose a learning rate that is too low, otherwise you will take a long time to reach the minimum, or too high at the risk of missing it!
From scratch with Python now !
This is how that looks like using python to implement the gradient descent:
# 3 Observations X = [1.0, 2.0, 3.0] y = [1.2, 1.8, 3.4] def gradient_descent(_X, _y, _learningrate=0.06, _epochs=5): trace = pd.DataFrame(columns=['a', 'b', 'mse']) X = np.array(_X) y = np.array(_y) a, b = 0.2, 0.5 # Random init for a and b iter_a, iter_b, mse = [], [], [] N = len(X) for i in range(_epochs): delta = y - (a*X + b) # Updating a and b a = a -_learningrate * (-2 * X.dot(delta).sum() / N) # a step for a b = b -_learningrate * (-2 * delta.sum() / N) # idem for b trace = trace.append(pd.DataFrame(data=[[a, b, mean_squared_error(y, (a*X + b))]], columns=['a', 'b', 'mse'], index=['epoch ' + str(i+1)])) return a, b, trace
Notice in line 13 the loop which translates the steps (epochs). As for lines 17 and 18 are the Python translations of the derivatives of the cost function that we have seen in the previous chapters.
The function returns the values of a and b calculated but also a DataFrame with the values which were calculated as the epochs. Note also that for the effective calculation of the root mean square error I used the one provided by sklearn: mean_squared_error ()
Finally, a small Python function allows you to plot the results:
def displayResult(_a, _b, _trace): plt.figure( figsize=(30,5)) plt.subplot(1, 4, 1) plt.grid(True) plt.title("Distribution & line result") plt.scatter(X,y) plt.plot([X[0], X[2]], [_a * X[0] + _b, _a * X[2] + _b], 'r-', lw=2) plt.subplot(1, 4, 2) plt.title("Iterations (Coeff. a) per epochs") plt.plot(_trace['a']) plt.subplot(1, 4, 3) plt.title("Iterations (Coeff. b) per epochs") plt.plot(_trace['b']) plt.subplot(1, 4, 4) plt.title("MSE") plt.plot(_trace['mse']) print (_trace)
Let’s try with 10 epochs now :
a, b, trace = gradient_descent(X, y, _epoch=10) displayResult(a, b, trace)
On the first graph we find the line (in red) that has just been found (with parameters a and b).
The second graph shows the different values of a found by epochs.
The third graph suggests the same for b.
The last graph shows us the evolution of the cost function per epochs. Good news, this cost decreases steadily until it reaches a minimum (close to zero). We can see that this minimum is reached quite quickly in reality (from the 2nd and 3rd epochs).
Conclusion
Let us recall the main steps of this algorithm:
- Determine the cost function
- Derive the cost function according to the parameters to find (here a and b)
- Iterate (step by step / epochs – Learning rate) to find the minimum of these derivatives
Obviously this gradient descent function in Python is not intended to be reused as is. The idea was to show what this gradient descent process looked like in a very simple case of linear regression. You can also retrieve the code (Jupyter Notebook) which was used here and why not change the hyperparameters (Learning rate and epochs) to see what is happening.
One thought on “The Gradient Descent” | http://aishelf.org/gradient-descent/ | CC-MAIN-2021-31 | refinedweb | 1,799 | 69.41 |
you want to create a Microsoft Failover Cluster for DFS, you would also need to create RDM on your SAN for the clustered disks.
How many VMware hosts will you be using one or two ?
Can you please provide additional information for RDM on a SAN?
What solution can be used on the new design to replace the two file servers for cluster and the Windows server 2012 scale out file server cluster?
Can you please draw a high level design and attach into a jpg file?
On the other hand,
The idea here is to have full redundancy at High level servers[hosts] on a single site and later on replicate that information to a DR site. DFS will be implemented using a domain namespace and the information, shares that are dispersed geographically must be centralized on the headquarter, and somehow replicated to a DR site
Any updates?
e.g. VMware HA, if a host fails, the VM is restarted automatically on other Hosts automatically.
So a single Virtual Machine is required, stored on a virtual disk (vmdk).
If you want to do Failover Cluster....you must use RDMs | https://www.experts-exchange.com/questions/28498666/File-servers-centralization-and-design-solution-using-VMware.html | CC-MAIN-2018-26 | refinedweb | 190 | 71.14 |
.
Let us start with a simple C program:
void function_something( sometype v ) { ... v=0; ... }
In the above code snippet, whether sometype is an atomic type such as int or a complex type such as a struct, the variable v holds a (shallow) copy of the parameters and can be manipulated with the semantics of a local variable. This is a “by value” argument passing.
Now let us write:
void function_something( sometype * v ) { ... v->some_field=0; ... v=NULL; ... }
This snippet also makes perfectly clear what is going on. In the first assignment, v->some_field=0, it is unambiguous that what is pointed to by v is affected by the operation. In the second case, we just assign the local variable holding the pointer, which does not affect what v was originally pointing to.
In C++, pointers are often replaced advantageously by references, which are also basically pointers (that’s the way it’s implemented internally) but with a few subtle differences. The first is that a reference must be initialized at create/define time, so you can’t have a non-initialized reference (although you can initialize them with 0, C++’s NULL (until nullptr is ubiquitous)).
The previous snippets can be rewritten:
void function_something( sometype & v ) { ... v.some_field=0; ... v=0; // error! (or not, depends on sometype) ... }
And there’s basically no way of changing the content of v itself, that is, the pointer holding the reference, without resorting to some rather unclean code (and undefined behavior?).
Python references are more finicky. First, it depends on whether an object mutable or immutable, and to determine which is which seems to puzzle more than just me (more on mutable vs immutable on youtube). Let’s say, for the sake of argument, that the parameter is a list. In Python:
def function( this_list ): ... this_list.append(some_item) ... this_list=new_list ...
The first line inside the function cause some_item to be appended to this_list, just as with C++. The parameter this_list is a reference to another object, so modifications applied through the reference affect the original object, as expected.
Now for the second one, things are different. It is not the content of this_list (or what this_list refers to) but the reference itself that is modified! Basically, this_list=new_list just makes the reference this_list, a local variable, point to some other variable, also possibly local, effectively unbinding the reference and just performing a function-local operation: this_list will not modify the original object after.
This was somewhat puzzling, and I asked people on the #python channel on Freenode, and, as usual, they suggested me the most pythonèsque way of doing a C++-style assignation. One user proposed the following anti-pattern:
def function( this_list ): ... this_list[:]=new_list ...
While I don’t really see why it’s an anti-pattern, it did the trick splendidly. The splice [:] allows you to assign what is being referred to instead than just changing the reference.
*
* *
Learning a new language, even when strangely close to the paradigms you’re used to, will inevitably cause you some surprises and the adaptation may or may not be easy. It’s not that I’m just learning Python (I’ve done some fairly complex things with it before) it is just that I am trying to write proper Python code, not just C++ in Python.
I think in Python is better not to think in terms of “by value” or “by reference”. This explanation by David Goodger, where he says “Other languages have “variables”, Python has “names” ” helps me to understand this better:
The name analogy is good, but that doesn’t make justice to Python behavior. For example, you have no problem doing something like:
if z is a “complex object”, but good luck if z is only an integer, say, z=3 insteaf of z=[] in the above. Apparently in Python 3 you have the nonlocal keyword to ease look-ups like that, but I use Python 2.6 and/or 2.7.
I am not sure where you got into from a practical perspective, but like Alejandro I find myself basically never having to think about this. While in C++ you often pass things by reference so the function can modify them, in Python you just return more objects instead. Something like:
def smapMe(a,b): return b,a
[obja objb] = swapMe(obja, objb)
On the other hand, if you have a recursive function, it may or may not be convenient to just return the computed value everywhere. Not only is not very intuitive (at least, for a non-pythonista like me) it also incurs performance penalty to copy (even shallowy) a list or some object every time you exit the function.
Returning a list does not copy anything, it returns a reference to which a name is (presumably) assigned at the call site. In this way, returning objects is superior to languages like C because the default behavior is (at least generally) the high performance behavior.
That’s not a ‘list comprehension’, it’s a slice.
True. Fixed.
It seems the same as in Java, the best explanation I found was that the pointers to objects (references) are copied but the objects are not.
“Basically, this_list=new_list just makes the reference this_list, a local variable, point to some other variable, also possibly local, effectively unbinding the reference and just performing a function-local operation: this_list will not modify the original object after.”
Why would you expect to be able to use “this_list” to modify a variable after you tell that variable to point to a different object? I’m not trying to be crass, I genuinely don’t understand (I’m almost exclusively an interpreted language user).
Primarily because I expected the this_list to behave as a C-style pointer. Assigning a local copy of a pointer doesn’t assign the original pointer.
It’s pretty much the reverse for me: I almost always use compiled (and fairly low-level) languages, and I try to figure out the things the language does. In this case, it confirms that it’s a local copy of the reference, not the reference itself.
Python works exactly in the same way in cases of lists example as would C++. in C++ references are const pointers, so once initialized you can change the content but not what it refers to. So this_list = new List is not allowed. This gives us only T* p case to compare with. When function expects T* p, what is copied is T* but not T and this is what we mean passing by reference. So, any attempt to modify p inside the function will not be visible outside the function. So, p= new T[some_value] does not modify the original pointer which was provided to the function. but if one wona to achieve this then in function arguments we should specify T** p or T* &p. | https://hbfs.wordpress.com/2011/06/14/python-references-vs-c-and-c/ | CC-MAIN-2017-22 | refinedweb | 1,146 | 59.84 |
39575/how-to-install-and-configure-jdk8-on-windows-10
I am trying to install Jenkins on Windows and one of the pre-requisites according to this, is having JDK installed. How do I install JDK8 on Windows 10?
Hey @Henna, Go to this page
Accept the license and proceed by downloading it
Once it's downloaded, run the .exe file to install it
Click on next and then finish
Now you'll have to set the Environment path in java
Right click on my computer -> properties -> Advanced System Settings
Click on Environment variables
Click on New, Add variable name as PATH and set the variable value as the path where the JDK is installed
Follow similar steps to set CLASSPATH
Click on OK and close it
Open command prompt and type javac, you will now see that Java has been installed
Let's say your file is in C:\myprogram\
Run ...READ MORE
Follow these steps:
Write:
public class User {
...READ MORE
To use Eclipse from a flash drive, ...READ MORE
I read good info about java here ...READ MORE
Hey @nmentityvibes, you seem to be using ...READ MORE
Consider this - In 'extended' Git-Flow, (Git-Multi-Flow, ...READ MORE
It can work if you try to put ...READ MORE
When you use docker-compose down, all the ...READ MORE
How to invoke Thread dump analysis API?
Invoking ...READ MORE
We can use Java API to play ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/39575/how-to-install-and-configure-jdk8-on-windows-10 | CC-MAIN-2019-47 | refinedweb | 247 | 71.75 |
The Atom API
October 15, 2003.
The LiveJournal API
In the beginning, there was LiveJournal. LiveJournal had an API, the LiveJournal Client/Server API. It worked over HTTP, and it looked like this:
POST /interface/flat HTTP/1.1 Host: Content-type: application/x-www-form-urlencoded mode=login&user=test&password=test
HTTP/1.1 200 OK Content-Type: text/plain name Mr. Test Account success OK message Hello Test Account!
A function call is an HTTP POST to a specific URL, which is the same for all function calls. The function name is given as a parameter in a list of form-encoded key-value pairs. The result is also a list of key-value pairs which are separated by carriage returns instead of ampersands.
Things to notice right off the bat:
- Escaping issues with input. You may have a library that handles this, otherwise you'll need to escape everything manually. The documentation for the LiveJournal API steps you through all the gotchas.
- Unmarshalling the output. You'll need to write a parser to pick out key CR value CR key CR value CR, etc., before you even get around to interpreting the results.
- Escaping issues with the output. Carriage returns are special -- they're delimiters -- so carriage returns within the return values need to be escaped. Also, there are at least four ways to write a carriage return. The LiveJournal documentation suggests, but doesn't mandate, "\r\n".
- Passwords are sent in the clear, as plain text.
The Blogger API
Next in our abbreviated history is the Blogger API. The Blogger API was
created by Evan Williams of Pyra Labs and was quickly adopted by virtually everyone.
It
defined a series of functions, such as
blogger.newPost, which took as arguments
application_key (application-specific, each developer signed up to receive
one), blog_id (defined within the Blogger system), username,
password, entry_text, and a boolean flag publish which
controlled whether to publish the new post immediately or leave it in draft mode.
The Blogger API was based on XML-RPC, so a call to
newPost(APP_KEY, BLOG_ID,
USERNAME, PASSWORD, ENTRY_TEXT, PUBLISH) would send this over the wire:
POST /api/RPC2 HTTP/1.1 Host: plant.blogger.com Content-Type: text/xml <?xml version='1.0'?> <methodCall> <methodName>blogger.newPost</methodName> <params> <param> <value> <string>APP_KEY</string> </value> </param> <param> <value> <string>BLOG_ID</string> </value> </param> <param> <value> <string>USERNAME</string> </value> </param> <param> <value> <string>PASSWORD</string> </value> </param> <param> <value> <string>ENTRY_TEXT</string> </value> </param> <param> <value> <boolean>PUBLISH</boolean> </value> </param> <params> </methodCall>
Things to note here:
- It's XML. While escaping is still an issue, at least it's a well-understood issue.
- It's XML-RPC. There are XML-RPC libraries for very many programming languages, so you'll never have to see or think about the raw wire format. Until you need to debug it, of course. Do you handle all 4 serializations of boolean values? Did you know
<string>is optional and is omitted by some XML-RPC servers? And so forth.
- Argument order matters: parameters are not named, so order must matter. XML-RPC has no capability to serialize optional arguments. What we have is a function with six required parameters in a specific order. What we do not have is any way to extend this function, say to add another argument, except by defining a new function with a different name and seven arguments.
- Despite being widely adopted, this API is really quite Blogger-specific. Blogger did not at the time have the capability to associate titles or any other sort of metadata with entries; hence, the Blogger API has no capability for these.
- Passwords are sent in the clear, as plain text.
The MetaWeblog API
In direct response to the perceived limitations of the Blogger API, especially the lack of extensibility, since many people wanted titles, UserLand created the MetaWeblog API. It solved some of the problems but at the cost of added complexity. It was also based on XML-RPC, but it replaced the single entry_text string argument with a struct which could hold multiple pieces of information.
As the MetaWeblog API spec puts it,.
In other words, if you want to post a new entry that would be represented like this in RSS,
<item> <title>My Weblog Entry</title> <description>This is my first post to my weblog.</description> <pubDate>Mon, 13 Oct 2003 13:29:54 GMT</pubDate> <author>Mark Pilgrim ([email protected])</author> <category>Unfiled</category>
you would create that entry with a struct, something like this:
>>> import xmlrpclib >>> server = xmlrpclib.ServerProxy('') >>> server.metaWeblog.newPost(BLOG_ID, USERNAME, PASSWORD, {'title': 'My Weblog Entry', 'description': 'This is my first post to my weblog.', 'dateCreated': '2003-10-13T13:29:54', 'author': 'Mark Pilgrim ([email protected])', 'category': 'Unfiled'}, xmlrpclib.True)
What goes over the wire after this call is insanely complicated, far too much to include inline here. Looking through the wire format, and the higher-level source code, suggests out a number of problems with the MetaWeblog API:
- Despite the spec's claim that the vocabulary "comes from RSS 2.0", it doesn't really. For example, creation dates in RSS are stored in
<pubDate>, but in the API the creation date goes in
<dateCreated>.
- The date formats don't match. Dates in RSS 2.0 are in RFC-822 format, but XML-RPC only supports ISO-8601 formatted dates.
- Amazingly, XML-RPC has no concept of time zones. Convention dictates that all dates are UTC, but this is implementation-dependent.
- Some elements in RSS (
source,
enclosure, and
category) can have attributes. These are also special-cased. For
enclosure, the MetaWeblog API tells us to "pass a struct with sub-elements whose names match the names of the attributes according to the RSS 2.0 spec, url, length and type." For
source, "pass a struct with sub-elements, url and name."
- Since we're using a struct to hold all the metadata for the entry, we can't have more than one value per property. The most common case of this is creating an entry in multiple categories. The MetaWeblog API dodges this issue by defining a separate
categorieselement within the struct which is an array of strings. Other cases -- a post with multiple authors, for example -- are simply impossible.
- RSS categories can also have both attributes and a value -- the value being the name of the category, and a
domainattribute that specifies the domain in which the category name resides. To serialize this, the MetaWeblog API tells us: ."
- Multiple categories with domains are impossible. The special-case
categorieselement does not handle serializing attributes for each category; it is always simply a list of strings.
- Passwords are sent in the clear, as plain text.
But wait, there's more. RSS 2.0 is extensible through namespaces, so in theory the MetaWeblog API is extensible too. It says:.
Thankfully, nobody actually does this. Movable Type extends the MetaWeblog API by
simply
defining a bunch of new elements in the struct called
mt_allow_comments,
mt_allow_pings, and so forth.
In other words, what we have here is an RPC-based API that starts with an XML-centric data model (RSS 2.0), shoves it into a struct, defines separate special cases for everything that isn't simply a name-value pair, ignores everything that isn't handled by the special cases, reinvents the concept of XML namespaces, and then serializes it all in a verbose XML format that looks like this. So we've reinvented XML, over RPC, over XML. Badly.
And passwords are still sent in the clear.
The Atom API
The Atom API was designed because the MetaWeblog API proved that RPC-based APIs were simply the wrong solution for this problem. The Blogger API was about as complicated as you could reasonably get before things went completely off the rails. "Shove everything into a struct" was an idea that sounded like it might solve some problems; but, as you can see, it caused more problems than it solved.
In direct response to the mess that is the MetaWeblog API, the Atom API was designed with several guiding principles in mind:
- Well-defined data model -- with schemas and everything!
- Doc-literal style web services, not RPC
- Take full advantage of XML and namespaces
- Take full advantage of HTTP
- Secure, so no passwords in the clear
API Discovery
Previous weblog services had no concept of API discovery. They left it up to the end
user
to provide the exact API URL (). Some
servers implemented undocumented functions like
deletePost, and even knowing
the type of software running on the other end was not enough because different versions
of
the same software supported extra functionality over time. Client software had to
guess what
functionality was provided and what extensions were supported
The Atom API assumes only that the end user knows her home page. It relies on a
link tag in the
head element of the home page that points to an
Atom introspection file. The introspection file, in turn, lists the supported functions
and
extensions, as well as the URI associated with each function.
Here is an example of a home page with Atom API auto-discovery:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" ""> <html xmlns="" lang="en" xml: <head> <title>My Weblog</title> <link rel="service.edit" type="application/x.atom+xml" href="/myblog/atom.cgi/introspection" title="Atom API" /> </head> <body> ... </body> </html>
If this resource is, it says that the Atom
introspection file is the
resource. Note that this is actually routing through a CGI script, as are all the
other
examples I'll list here. Nothing in Atom requires this, but when I wrote a server
prototype
of the Atom API, I made a point to route everything through a single CGI script because
there was some debate about whether this was even possible. It could easily be a set
of CGI
scripts, or JSPs, ASP, PHP, or any other language.
The introspection file then lists the supported function and extensions in a simple,
well-defined XML format. There are a number of functions defined in the core Atom
API, and
vendors can extend the introspection file with XML namespaces to point to their own
extension methods. A core Atom API introspection file (like) might look like
this:
GET /myblog/atom.cgi/introspection HTTP/1.1 Host: example.com
HTTP/1.1 200 OK Content-Type: application/x.atom+xml <?xml version="1.0" encoding="utf-8"?> <introspection xmlns="" > <search-entries></search-entries> <create-entry></create-entry> <edit-template></edit-template> <user-prefs></user-prefs> <categories></categories> </introspection>
Retrieving Entries
If you're writing client software to manage a weblog, the first thing you'll probably
want
to do after getting the introspection file is get a list of existing entries. The
introspection file lists the address for searching entries in
<search-entries>. The client can add query string parameters such as
atom-last to find recent entries. More complex examples are defined in the Atom API spec draft.
Here's how you would get a list of recent entries:
GET /myblog/atom.cgi/search?atom-last=20 HTTP/1.1 Host: example.com
HTTP/1.1 200 OK Content-Type: application/x.atom+xml <search-results <entry> <title>My Second Post</title> <id></id> </entry> <entry> <title>My First Post</title> <id></id> </entry> </search-results>
The remainder of the Atom API follows the principles of REST. New entries are created
using
HTTP POST to post an Atom entry to the
create-entry address specified in the
introspection file. Retrieving an entry is accomplished by doing an HTTP GET on the
entry's
URI, which is returned after creating new entry or in search results. Editing an entry
is
accomplished by doing an HTTP PUT on the entry's URI; deleting an entry is an HTTP
DELETE on
the entry's URI.
When I say "the entry's URI", remember that that's implementation-specific. These
examples
route everything through a single script (
atom.cgi), just to prove that you can
do that. Of course if you're implementing the Atom API on your own server, you don't
have to
do that; you could use JSP, or PHP, or Perl, or anything that can handle the four
basic HTTP
operations (GET, POST, PUT, DELETE). The introspection file rules all; it's the client's
guide to the structure of the server's Atom web services.
As I said, retrieving an existing entry is as simple as an HTTP GET of the entry's
URI. The
search results told us that the first post had a URI of, so let's get that:
GET /myblog/atom.cgi/edit/1 HTTP/1.1 Host: example.com
HTTP/1.1 200 OK>
Lots of information here: the entry has.
Creating, Editing, and Deleting entries
Posting a new entry is virtually symmetrical. To create a new entry, do an HTTP POST
on the
URI
create-entries address specified in the introspection file. The body of the
HTTP POST should be an entry, in the same Atom format as you got back from the server
on
retrieve:
POST /myblog/atom.cgi/edit HTTP/1.1 Host: example.com Content-Type: application/x.atom+xml <?xml version="1.0" encoding="utf-8"?> <entry xmlns=""> <title>My Entry Title</title> <created>2003-11-17T12:29:29Z</created> <content type="application/xhtml+xml" xml: <div xmlns=""> <p>Hello, <em>weblog</em> world!</p> <p>This is my third post <strong>ever</strong>!</p> </div> </content> </entry>
The server responds with an HTTP status code 201 "Created" and gives the entry's edit
URI
in the HTTP
Location: header.
HTTP/1.1 201 Created Location:
Note that since we're using straight XML (rather than a serialization of XML over RPC over XML), extensibility is handled by XML namespaces. For example, Movable Type allows individual entries to allow comments or not. This functionality is not built into the Atom API, but Six Apart could easily extend the API like this:
POST /myblog/atom.cgi HTTP/1.1 Host: example.com Content-Type: application/x.atom+xml <?xml version="1.0" encoding="utf-8"?> <entry xmlns="" xmlns: <title>My Entry Title</title> <created>2003-11-17T12:29:29Z</created> <mt:allowComments>1</mt:allowComments> <content type="application/xhtml+xml" xml: <div xmlns=""> <p>Hello, <em>weblog</em> world!</p> <p>This is my first post <strong>ever</strong>!</p> </div> </content> </entry>
Modifying an existing entry is almost the same as creating one. You do an HTTP PUT
on the
entry's URI (as returned in the
Location: header after creating it or in the
id element in the search results), with the entry in the body of the HTTP
message, in the same Atom XML format we've seen in other method calls:
PUT /myblog/atom.cgi/edit/1 HTTP/1.1 Host: example.com>
On success the server responds with an HTTP status code 205 "Reset Content".
HTTP/1.1 205 Reset Content
Deleting an entry is even simpler:
DELETE /myblog/atom.cgi/edit/3 HTTP/1.1 Host: example.com
HTTP/1.1 200 OK
Further reading
We have, in some sense, come full circle. The original LiveJournal API was REST-based, although it was limited to the simple name-value pairs for input and output. After that, weblogging APIs went down a path of RPC-style services, until that became completely unmanageable. And now. You can read the latest draft for yourself or download sample source code that implements the API in Python, Perl, PHP, or Java.
"But, but, but," I hear you cry, "what about passwords sent in the clear?" Ah, yes. Atom authentication deserves its own article, and I promise to tackle it, if not next month then the month after. I can promise that it does not involve sending plain text passwords in the clear. | http://www.xml.com/pub/a/2003/10/15/dive.html | CC-MAIN-2017-34 | refinedweb | 2,661 | 56.05 |
I was given 14k lines of wise, full of vaild and even preety, but not self-documenting Perl code. sub definitions are mixed with "main" code and sub calls, database routine is ending just to call curl on the main "thread" and after which there's another sub defined.
Obviously it's not helping to understand what, or how, this code is relly doing, so i decided to split it into functional packages
So, as i did not found any more elegant way to include "reusable" code, i decided to group subs accessing database into Database.pm, these interacting with API for data input ended in API.pm and so on, leaving main to just call predeclared subs and decide either to INSERT them into databse or print.
Main package had been shrinked into circa 300 lines and i've gained much visibility. When i wanted to proceed to unit tests and documenting every sub functionality (like: this function CONSUMES scalar with URL, PRODUCES array with "img scr" tags) i found that my packaging solution might not be wisest thing done there.
Of course i didn't foreseen that namespaces can be an issue here, and they were
main calls custom wrapper to eventually create instance of DBI object and holds its ref in $main::sql.
sub sql_connect embedded into Database.pm (Databae::sql_connect to be precise) tries to call "connect" method on $sql, but API.pm's methods albo uses some $sql methods
and there's alot of shared variables like this.
Before my modularization attempt everything worked, now i am forced to replace all "my"s into "our"s definitions in main in order to grant access to these variables by modules.
also changing all $variable in modules to $main::variable syntax and constanlty growing out @EXPORT = qw (...); gave me that strage feeling like trying to leave dungeon leads me to catacombs.
what am i missing here? How do properly split this code into logical chunks of separate files, but keeping namespace "main"?
ANY ideas will be appreciated.
my main goal is to document code, understand it's flow and based on that create another functionality
In reply to Splitting program into modules
by lis128
n
e
w
s
u
d
back
sleep
open eyes
say "xyzzy"
light torch
Results (180 votes). Check out past polls. | https://www.perlmonks.org/?node_id=3333;parent=1225546 | CC-MAIN-2019-39 | refinedweb | 388 | 60.14 |
abstract
The abstract keyword is used to declare abstract
methods and classes. An abstract method has no implementation
defined; it is declared with arguments and a return type as usual, but
the body enclosed in curly braces is replaced with a semicolon. The
implementation of an abstract method is provided by a subclass of the
class in which it is defined. If an abstract method appears in a
class, the class is also abstract.
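For example, a minimal sketch (the class names Shape and Circle are
invented for illustration):

    abstract class Shape {
        // Declared with a return type as usual, but the body is
        // replaced with a semicolon.
        abstract double area();
    }

    class Circle extends Shape {
        double radius;
        Circle(double r) { radius = r; }
        double area() { return Math.PI * radius * radius; }  // implementation
    }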
API
An API consists of the functions and variables
programmers use in their
API consists of all public and
protected methods of all public
classes in the java.applet,
java.awt, java.awt.image,
java.awt.peer, java.io,
java.lang, java.net, and
java.util packages.
applet
An embedded Java application that runs in the context of an applet
viewer, such as a Web browser.
<applet>
HTML tag that specifies an applet to be run within a Web
document.
applet viewer
An application that implements the additional structure
needed to run and display Java applets. An applet viewer can be a
Web browser like HotJava or Netscape Navigator, or a separate
program like Sun's appletviewer.
application
A Java program that runs standalone; i.e., it doesn't require an applet viewer.
AWT
Java's platform-independent windowing, graphics, and user-interface
toolkit.
boolean
A primitive Java data type that contains a truth value. The two
possible values of a boolean variable are
true and false.
byte
A primitive Java data type that's an 8-bit two's-complement signed
number (in all implementations).
callback
A behavior that is defined by one object and then later invoked by
another object when a particular event occurs.
cast
A technique that explicitly converts one data type to another.
catch
The catch statement introduces an exception-handling
block of code following a try statement. The
catch keyword is followed by an exception type
and argument name in parentheses, and a block of code within
curly braces.
char
A primitive Java data type; a variable of type char
holds a single 16-bit Unicode character.
class
a) An encapsulated collection of data and methods to operate on
the data. A class may be instantiated to produce an
object that's an instance of the class.
b) The class keyword is used to declare a class,
thereby defining a new object type. Its syntax is similar
to the struct keyword in C.
class loader
An object in the Java security model that is responsible for loading
Java binary classes from the network into the local interpreter. A
class loader keeps its classes in a separate namespace, so that loaded
classes cannot interact with system classes and breach system
security.
class method
A method declared static. Methods of this type are
not passed implicit this references and may
refer only to class variables and invoke other class methods of
the current class. A class method may be invoked through
the class name, rather than through an instance of the
class.
class path
The directory path specifying the location of compiled Java class files
on the local system.
class variable
A variable declared static. Variables of this type
are associated with the class, rather than with a particular
instance of the class. There is only one copy of a static
variable, regardless of the number of instances of the class
that are created.
client
The application that initiates a conversation as part of a networked
client/server application. See server.
compilation unit
The source code for a Java class. A compilation unit normally
contains a single class definition, and in most current development
environments is just a file with a .java
extension.
compiler
A program that translates source code into executable code.
component
Any of the GUI primitives implemented in the
java.awt package as subclasses of Component.
The classes Button, Choice, and
TextField (among many others) are components.
composition
Using objects as part of another, more complex object. When you
compose a new object, you create complex behavior by delegating tasks
to the internal objects. Composition is different from inheritance,
which defines a new object by changing or refining the behavior of an
old object. See inheritance.
constructor
A method that is invoked automatically when a new instance
of a class is created. Constructors are used to initialize
the variables of the newly created object. The constructor
method has the same name as the class.
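A sketch (the Point class is invented for illustration):

    class Point {
        int x, y;
        Point(int x, int y) {   // same name as the class; no return type
            this.x = x;
            this.y = y;
        }
    }
    // A constructor runs when an instance is created:
    //     Point p = new Point(3, 4);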
container
One of the java.awt classes that "contain"
GUI components. Components in a container appear
within the boundaries of the container. The classes
Dialog, Frame,
Panel, and Window are
containers.
content handler
A class that recognizes the content type of particular
data, parses it, and converts it to an appropriate object.
datagram
A packet of data sent to a receiving computer without
warning, error checking, or other control information.
data hiding
See encapsulation.
double
A primitive Java data type; a double value is a
64-bit (double-precision) floating-point number.
encapsulation
An object-oriented programming technique that makes an object's data
private or protected (i.e.,
hidden) and allows programmers to access and manipulate that data only
through method calls. Done well, encapsulation reduces bugs and
promotes reusability and modularity of classes. This technique is also
known as data hiding.
event
A user's action, such as a mouse click or key press.
exception
A signal that some unexpected condition has occurred in the
program. In Java, exceptions are objects that are subclasses of
Exception or Error (which
themselves are subclasses of Throwable). Exceptions
in Java are "raised" with the throw keyword and
received with the catch keyword. See
throw, throws, and
catch.
extends
The extends keyword is used in a
class declaration to specify the superclass of the
class being defined. The class being defined has access to all the
public and protected variables
and methods of the superclass (or, if the class being defined is in
the same package, it has access to all non-private
variables and methods). If a class definition omits the
extends clause, its superclass is taken to be
java.lang.Object.
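For instance (the class names are invented for illustration):

    class Base {
        protected int count;
        public void increment() { count++; }
    }

    // Derived inherits count and increment() from its superclass.
    class Derived extends Base {
        public void reset() { count = 0; }
    }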
final
The final keyword is a modifier that may be applied
to classes, methods, and variables. It has a similar, but not
identical meaning in each case. When final is
applied to a class, it means that the class may never be
subclassed. java.lang.System is an example of a
final class. When final is applied to a method, it means the method
may never be overridden by a subclass. When final is
applied to a variable, the variable is a constant; i.e., it can't be
modified.
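A sketch showing all three uses (the names are invented):

    final class Constants {                  // may never be subclassed
        static final double GOLDEN = 1.618;  // a constant; can't be modified
        final String describe() {            // may never be overridden
            return "golden ratio = " + GOLDEN;
        }
    }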
finalize
finalize is not actually a Java keyword, but a
reserved method name. The finalizer is called when an object is no
longer being used (i.e., when there are no further references to it),
but before the object's memory is actually reclaimed by the system. A
finalizer should perform cleanup tasks and free system resources not
handled by Java's garbage-collection system.
finally
This keyword introduces the finally block of a
try/catch/finally
construct. catch and finally
blocks provide exception handling and routine cleanup for code in
a try block. The finally block
is optional, and appears after the try block, and
after zero or more catch blocks. The code in a
finally block is executed once, regardless of how
the code in the try block executes. In normal
execution, control reaches the end of the try block
and proceeds to the finally block, which
generally performs any necessary cleanup.
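For example, a minimal sketch of the whole construct (the file name
data.txt is invented for illustration):

    import java.io.*;

    class ReadFirstByte {
        public static void main(String[] args) {
            FileInputStream in = null;
            try {
                in = new FileInputStream("data.txt");   // may throw
                System.out.println(in.read());
            } catch (IOException e) {
                System.err.println("I/O error: " + e);
            } finally {
                // Executed once, however the try block exits.
                try { if (in != null) in.close(); } catch (IOException e) {}
            }
        }
    }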
float
A Java primitive data type; a float value is a
32-bit (single-precision) floating-point number represented in
IEEE 754 format.
garbage collection
The process of reclaiming the memory of objects no
longer in use. An object is no longer in use when there are
no references to it from other objects in the system and
no references in any local variables on the method call
stack.
GC
An abbreviation for garbage collection or garbage collector (or
occasionally "graphics context").
graphics context
A drawable surface represented by the java.awt.Graphics
class. A graphics context contains contextual information about the
drawing area and provides methods for performing drawing operations in
it.
GUI
A GUI is a user interface constructed from
graphical push buttons, text fields, pull-down menus, dialog boxes,
and other standard interface components. In Java,
GUIs are implemented with the classes in the
java.awt package.
hashcode
An arbitrary-looking identifying number used as a kind of signature
for an object. A hashcode is used to store an object in a hashtable. See
hashtable.
hashtable
An object that is like a dictionary or an associative array. A
hashtable stores and retrieves elements using key values called
hashcodes. See hashcode.
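For instance, in the pre-generics style of java.util.Hashtable (the
key and value are invented for illustration):

    import java.util.Hashtable;

    class HashDemo {
        public static void main(String[] args) {
            Hashtable table = new Hashtable();
            table.put("one", new Integer(1));      // key's hashcode stores it
            System.out.println(table.get("one"));  // prints 1
        }
    }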
hostname
The name given to an individual computer attached to the Internet.
HotJava
A WWW browser written in Java that is capable of
downloading and running Java applets.
ImageConsumer
An interface for receiving image data from an image source. Image
consumers are usually implemented by the awt.peer
interface, so they are largely invisible to programmers.
ImageObserver
An interface in the java.awt.image package that
receives information about the status of an image being
constructed by a particular ImageConsumer.
ImageProducer
An interface in the java.awt.image package that
represents an image source (i.e., a source of pixel data).
implements
The implements keyword is used in class
declarations to indicate that the class implements the named interface
or interfaces. The implements clause is optional in
class declarations; if it appears, it must follow the
extends clause (if any). If an
implements clause appears in the declaration of a
non-abstract class, every method from each
specified interface must be implemented by the class or by one of its
superclasses.
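A sketch (the interface and class names are invented for
illustration):

    interface Printable {
        void print();                // implicitly public and abstract
    }

    class Report implements Printable {
        public void print() {        // every method of the interface
            System.out.println("report body");   // must be implemented
        }
    }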
import
The import statement makes Java classes available
to the current class under an abbreviated name. (Java classes are
always available by their fully qualified name, assuming the
appropriate class file can be found relative to the
CLASSPATH environment variable and that the class
file is readable. import doesn't make the class
available; it just saves typing and makes your code more legible). Any
number of import statements may appear in a Java
program. They must appear, however, after the optional
package statement at the top of the file, and
before the first class or interface definition in the file.
inheritance
An important feature of object-oriented programming that involves
defining a new object by changing or refining the behavior of an
existing object. That is, an object implicitly contains all the
non-private variables of its superclass and can
invoke all the non-private methods of its
superclass. Java supports single inheritance of classes and multiple
inheritance of interfaces.
instance
An object. When a class is instantiated to produce an object, we say
the object is an instance of the class.
instance method
A non-static method of a class. Such a method is
passed an implicit this reference to the object that
invoked it. See also class method and static.
instanceof
instanceof is a Java operator that returns
true if the object on its left-hand side is an
instance of the class (or implements the interface)
specified on its right-hand side. instanceof
returns false if the object is not an instance of
the specified class or does not implement the specified
interface. It also returns false if the specified
object is null.
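A sketch:

    class InstanceofDemo {
        public static void main(String[] args) {
            Object o = "some text";
            System.out.println(o instanceof String);     // true
            System.out.println(o instanceof Runnable);   // false
            Object n = null;
            System.out.println(n instanceof String);     // false for null
        }
    }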
instance variable
A non-static variable of a class. Copies of such
variables occur in every instance of the created class. See also class
variable and static.
int
A primitive Java data type that's a 32-bit two's-complement signed
number (in all implementations).
interface
The interface keyword is used to declare an
interface. More generally, an interface defines a list of methods that
must be implemented by any class that implements the interface.
interpreter
The module that decodes and executes Java bytecode.
ISO8859-1
An 8-bit character encoding standardized by the
ISO. This encoding is also known as Latin-1 and
contains characters from the Latin alphabet suitable for English and
most languages of western Europe.
ISO10646
A 4-byte character encoding that includes all of the world's national
standard character encodings. Also known as
UCS. The 2-byte Unicode character set maps to the
range 0x00000000 to 0x0000FFFF of ISO 10646.
Java WorkShop
Sun's Web browser-based tool written in Java for the development
of Java applications.
JDK
A package of software distributed by Sun Microsystems for Java
developers. It includes the Java interpreter, Java classes, and Java
development tools: compiler, debugger, disassembler, appletviewer,
stub file generator, and documentation generator.
JavaScript
A language for creating dynamic Web pages developed by Netscape. From
a programmer's point of view, it's unrelated to Java, although some of
its capabilities are similar. Internally, there may be a relationship,
but even that is unclear.
layout manager
An object that controls the arrangement of components within the
display area of a container. The java.awt package
contains a number of layout managers that provide different layout
styles.
Latin-1
A nickname for ISO8859-1.
local variable
A variable that is declared inside a single method. A local variable
can be seen only by code within that method.
long
A primitive Java data type that's a 64-bit
two's-complement signed number (in all implementations).
method
The object-oriented programming term for a function or
procedure.
method overloading
Providing definitions of more than one method with the same name but
with different argument lists. (A difference in return type alone is
not sufficient.) When an overloaded
method is called, the compiler determines which one is intended by
examining the supplied argument types.
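For instance (the MathUtil class is invented for illustration):

    class MathUtil {
        static int max(int a, int b)          { return a > b ? a : b; }
        static double max(double a, double b) { return a > b ? a : b; }
    }
    // MathUtil.max(1, 2) selects the int version;
    // MathUtil.max(1.0, 2.0) selects the double version.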
method overriding
Defining a method that exactly matches (i.e., same name,
same argument types, and same return type) a method defined
in a superclass. When an overridden method is invoked, the
interpreter uses "dynamic method lookup" to determine which
method definition is applicable to the current object.
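A sketch (the class names are invented):

    class Animal {
        String speak() { return "..."; }
    }

    class Dog extends Animal {
        String speak() { return "woof"; }  // same name, arguments, return type
    }
    // Given "Animal a = new Dog();", a.speak() returns "woof":
    // dynamic method lookup selects Dog's definition.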
modifier
A keyword placed before a class, variable, or method that alters
the item's accessibility, behavior, or semantics. See
abstract, final,
native, private,
private protected, protected,
public, static, and
synchronized.
Model/View/Controller (MVC)
A user-interface design that originated in Smalltalk. In
MVC, the data for a display item is called the
"model." A "view" displays a particular representation of the model,
and a "controller" provides user interaction with both. Java
incorporates many MVC concepts.
NaN (not-a-number)
This is a special value of the double and
float data types that represents an undefined
result of a mathematical operation, such as zero divided by zero.
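For example:

    class NaNDemo {
        public static void main(String[] args) {
            double d = 0.0 / 0.0;                  // an undefined result
            System.out.println(d == d);            // false; NaN equals nothing
            System.out.println(Double.isNaN(d));   // true
        }
    }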
native
native is a modifier that may be applied to method
declarations. It indicates that the method is implemented
(elsewhere) in C, or in some other platform-dependent
fashion. A native method should have a semicolon
instead of a body. A native method cannot be
abstract, but all other method modifiers may be used
with native methods.
native method
A method that is implemented in a native language on a host
platform, rather than being implemented in Java. Native methods
provide access to such resources as the network, the windowing
system, and the host filesystem.
new
new is a unary operator that creates a new object or
array (or raises an OutOfMemoryError if there is
not enough memory available).
null
null is a special value that indicates a variable
doesn't refer to any object. The value null may be
assigned to any class or interface variable. It cannot be cast to any
integral type, and should not be considered equal to zero, as in C.
object
An instance of a class. A class models a group
of things; an object models a particular member of that
group.
package
The package statement specifies which package the
code in the file is part of. Java code that is part of a particular
package has access to all classes (public and
non-public) in the package, and all
non-private methods and fields in all those
classes. When Java code is part of a named package, the compiled class
file must be placed at the appropriate position in the
CLASSPATH directory hierarchy before it can be
accessed by the Java interpreter or other utilities. If the
package statement is omitted from a file, the code
in that file is part of an unnamed default package. This is
convenient for small test programs, or during development because it
means the code can be interpreted from the current directory.
<param>
HTML tag used within
<applet>
... </applet> to specify a named parameter
and string value to an applet within a Web page.
peer
The actual implementation of a GUI component on a
specific platform. Peer components reside within a
Toolkit object. See Toolkit.
primitive type
One of the Java data types: boolean,
char, byte,
short, int,
long, float,
double. Primitive types are manipulated, assigned,
and passed to methods "by value" (i.e., the actual bytes of the data
are copied). See also reference type.
private
The private keyword is a visibility modifier that
can be applied to method and field variables of classes. A
private field is not visible outside its class
definition.
private protected
When the private and protected
visibility modifiers are both applied to a variable or method in a
class, they indicate the field is visible only within the class
itself and within subclasses of the class. Note that subclasses can
access only private protected fields within
themselves or within other objects that are subclasses; they cannot
access those fields within instances of the superclass.
protected
The protected keyword is a visibility modifier that
can be applied to method and field variables of classes. A
protected field is visible only within its class,
within subclasses, or within the package of which its class is a
part. Note that subclasses in different packages can access only
protected fields within themselves or within other
objects that are subclasses; they cannot access protected fields
within instances of the superclass.
protocol handler
Software that describes and enables the use of a new protocol. A
protocol handler consists of two classes: a
URLStreamHandler and a
URLConnection.
public
The public keyword is a visibility modifier that
can be applied to classes and interfaces and to the method and field
variables of classes and interfaces. A public class
or interface is visible everywhere. A non-public
class or interface is visible only within its package. A
public method or variable is visible everywhere its
class is visible. When none of the private,
protected or public modifiers is
specified, a field is visible only within the package of which its
class is a part.
reference type
Any object or array. Reference types are manipulated,
assigned, and passed to methods "by reference." In other words, the
underlying value is not copied; only a reference to it is.
See also primitive type.
root
The base of a hierarchy, such as a root class, whose descendants are
subclasses. The java.lang.Object class serves as
the root of the Java class hierarchy.
SecurityManager
The Java class that defines the methods the system calls to check
whether a certain operation is permitted in the current environment.
server
The application that accepts a request for a conversation as part of a
networked client/server application. See client.
shadow
To declare a variable with the same name as a variable defined in a
superclass. We say the variable "shadows" the superclass's
variable. Use the super keyword to refer to the
shadowed variable, or refer to it by casting the object to the type of
the superclass.
short
A primitive Java data type that's a 16-bit two's-complement signed
number (in all implementations).
socket
An interface that listens for connections from clients on a data port
and connects the client data stream with the receiving application.
static
The static keyword is a modifier applied to method
and variable declarations within a class. A static
variable is also known as a class variable as opposed to
non-static instance variables. While each instance
of a class has a full set of its own instance variables, there is only
one copy of each static class variable, regardless
of the number of instances of the class (perhaps zero) that are
created. static variables may be accessed by class
name or through an instance. Non-static variables
can be accessed only through an instance.
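A sketch (the Counter class is invented for illustration):

    class Counter {
        static int created = 0;     // class variable: one copy total
        Counter() { created++; }
        static int howMany() {      // class method: no this reference
            return created;
        }
    }
    // After "new Counter(); new Counter();",
    // Counter.howMany() returns 2, invoked through the class name.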
A flow of data, or a channel of communication. All fundamental I/O
in Java is based on streams.
A class used to represent textual information. The
String class includes many methods for operating on
string objects. Java overloads the + operator for string
concatenation.
A class that extends another. The subclass inherits the
public and protected methods and
variables of its superclass. See extends.
The keyword super refers to the same value as
this: the instance of the class for which the
current method (these keywords are valid only within
non-static methods) was invoked. While the type of
this is the type of the class in which the method
appears, the type of super is the type of the
superclass of the class in which the method appears.
super is usually used to refer to superclass
variables shadowed by variables in the current class. Using
super in this way is equivalent to casting
this to the type of the superclass.
A class extended by some other class. The superclass's
public and protected methods and
variables are available to the subclass. See
extends.
The synchronized keyword is used in two related
ways in Java: as a modifier and as a statement. First, it is a
modifier applied to class or instance methods. It indicates that the
method modifies the internal state of the class or the internal state
of an instance of the class in a way that is not thread-safe. Before
running a synchronized class method, Java obtains a
lock on the class, to ensure that no other threads can modify the
class concurrently. Before running a synchronized
instance method, Java obtains a lock on the instance that invoked the
method, ensuring that no other threads can modify the object at the
same time.
Java also supports a synchronized statement that
serves to specify a "critical section" of code. The
synchronized keyword is followed by an expression in
parentheses, and a statement or block of statements. The
expression must evaluate to an object or array. Java
obtains a lock on the specified object or array before
executing the statements.
Transmission Control Protocol. A connection-oriented, reliable
protocol. One of the protocols on which the Internet is based.
Within an instance method or constructor of a class,
this refers to "this object"--the instance currently
being operated on. It is useful to refer to an instance
variable of the class that has been shadowed by a local
variable or method argument. It is also useful to pass the
current object as an argument to static methods or
methods of other classes.
There is one additional use of this: when it
appears as the first statement in a constructor method, it refers to
one of the other constructors of the class.
A single, independent stream of execution within a program.
Since Java is a "multithreaded" programming language, more
than one thread may be running within the Java interpreter
at a time. Threads in Java are represented and controlled
through the Thread object..
The throws keyword is used in a method declaration
to list the exceptions the method can throw. Any exceptions a method
can raise that are not subclasses of Error or
RuntimeException must either be caught within the
method or declared in the method's throws clause.
The property of the Java API that defines the look
and feel of the user interface on a specific platform.
The try keyword indicates a block of code to which
subsequent catch and finally
clauses apply. The try statement itself performs no
special action. See the entries for catch and
finally for more information on the
try/catch/finally
construct.
A synonym for ISO10646.
User Datagram Protocol. A connectionless unreliable
protocol. UDP describes a network data connection
based on datagrams with little packet control..
An encoding for Unicode characters (and more generally,
UCS characters) commonly used for transmission and
storage. It is a multibyte format in which different characters
require different numbers of bytes to be represented.
A dynamic array of elements.
A theorem prover that steps through the Java byte-code before it
is run and makes sure that it is well-behaved. The byte-code
verifier is the first line of defense in Java's security model. | https://docstore.mik.ua/orelly/java/exp/gloss_01.htm | CC-MAIN-2019-18 | refinedweb | 3,976 | 56.55 |
Java 1.3 introduced the
java.util.Timer class and the abstract
java.util.TimerTask class. If you subclass
TimerTask and implement its run(
) method, you can then use a Timer
object to schedule invocations of that run( )
method at a specified time or at multiple times at a specified
interval. One Timer object can schedule and invoke
many TimerTask objects. Timer
is quite useful, as it simplifies many programs that would otherwise
have to create their own threads to provide the same functionality.
Note that java.util.Timer is not at all the same
as the Java 1.2 class javax.swing.Timer.
Examples
Example 4-5 and Example 4-6 are
simple implementations of the TimerTask and
Timer classes that can be used prior to Java 1.3.
They implement the same API as the Java 1.3 classes, except that they
are in the je3.thread package instead of the
java.util package. These implementations are not
intended to be as robust as the official implementations in Java 1.3,
but they are useful for simple tasks and are a good example of a
nontrivial use of threads. Note in particular the use of
wait( ) and notify( ) in Example 4-6. After studying these examples, you may be
interested to compare them to the implementations that come with Java
1.3.[1]
[1] If you have the Java SDK™ from
Sun, look in the src.jar archive that comes with
it.
package je3.thread;
/**
* This class implements the same API as the Java 1.3 java.util.TimerTask.
* Note that a TimerTask can only be scheduled on one Timer at a time, but
* that this implementation does not enforce that constraint.
**/
public abstract class TimerTask implements Runnable {
boolean cancelled = false; // Has it been cancelled?
long nextTime = -1; // When is it next scheduled?
long period; // What is the execution interval?
boolean fixedRate; // Fixed-rate execution?
protected TimerTask( ) { }
/**
* Cancel the execution of the task. Return true if it was actually
* running, or false if it was already cancelled or never scheduled.
**/
public boolean cancel( ) {
if (cancelled) return false; // Already cancelled;
cancelled = true; // Cancel it
if (nextTime == -1) return false; // Never scheduled;
return true;
}
/**
* When it the timer scheduled to execute? The run( ) method can use this
* to see whether it was invoked when it was supposed to be
**/
public long scheduledExecutionTime( ) { return nextTime; }
/**
* Subclasses must override this to provide that code that is to be run.
* The Timer class will invoke this from its internal thread.
**/
public abstract void run( );
// This method is used by Timer to tell the Task how it is scheduled.
void schedule(long nextTime, long period, boolean fixedRate) {
this.nextTime = nextTime;
this.period = period;
this.fixedRate = fixedRate;
}
// This will be called by Timer after Timer calls the run method.
boolean reschedule( ) {
if (period == 0 || cancelled) return false; // Don't run it again
if (fixedRate) nextTime += period;
else nextTime = System.currentTimeMillis( ) + period;
return true;
}
}
package je3.thread;
import java.util.Date;
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.Comparator;
/**
* This class is a simple implementation of the Java 1.3 java.util.Timer API
**/
public class Timer {
// This sorted set stores the tasks that this Timer is responsible for.
// It uses a comparator to sort the tasks by scheduled execution time.
SortedSet tasks = new TreeSet(new Comparator( ) {
public int compare(Object a, Object b) {
return (int)(((TimerTask)a).nextTime-((TimerTask)b).nextTime);
}
public boolean equals(Object o) { return this == o; }
});
// This is the thread the timer uses to execute the tasks.
// The TimerThread class is defined below.
TimerThread timer;
/** This constructor creates a Timer that does not use a daemon thread */
public Timer( ) { this(false); }
/** The main constructor: the internal thread is a daemon if specified */
public Timer(boolean isDaemon) {
timer = new TimerThread(isDaemon); // TimerThread is defined below
timer.start( ); // Start the thread running
}
/** Stop the timer thread, and discard all scheduled tasks */
public void cancel( ) {
synchronized(tasks) { // Only one thread at a time!
timer.pleaseStop( ); // Set a flag asking the thread to stop
tasks.clear( ); // Discard all tasks
tasks.notify( ); // Wake up the thread if it is in wait( ).
}
}
/** Schedule a single execution after delay milliseconds */
public void schedule(TimerTask task, long delay) {
task.schedule(System.currentTimeMillis( ) + delay, 0, false);
schedule(task);
}
/** Schedule a single execution at the specified time */
public void schedule(TimerTask task, Date time) {
task.schedule(time.getTime( ), 0, false);
schedule(task);
}
/** Schedule a periodic execution starting at the specified time */
public void schedule(TimerTask task, Date firstTime, long period) {
task.schedule(firstTime.getTime( ), period, false);
schedule(task);
}
/** Schedule a periodic execution starting after the specified delay */
public void schedule(TimerTask task, long delay, long period) {
task.schedule(System.currentTimeMillis( ) + delay, period, false);
schedule(task);
}
/**
* Schedule a periodic execution starting after the specified delay.
* Schedule fixed-rate executions period ms after the start of the last.
* Instead of fixed-interval executions measured from the end of the last.
**/
public void scheduleAtFixedRate(TimerTask task, long delay, long period) {
task.schedule(System.currentTimeMillis( ) + delay, period, true);
schedule(task);
}
/** Schedule a periodic execution starting after the specified time */
public void scheduleAtFixedRate(TimerTask task, Date firstTime,
long period)
{
task.schedule(firstTime.getTime( ), period, true);
schedule(task);
}
// This internal method adds a task to the sorted set of tasks
void schedule(TimerTask task) {
synchronized(tasks) { // Only one thread can modify tasks at a time!
tasks.add(task); // Add the task to the sorted set of tasks
tasks.notify( ); // Wake up the thread if it is waiting
}
}
/**
* This inner class defines the thread that runs each of the tasks at their
* scheduled times
**/
class TimerThread extends Thread {
// This flag will be set true to tell the thread to stop running.
// Note that it is declared volatile, which means that it may be
// changed asynchronously by another thread, so threads must always
// read its current value, and not used a cached version.
volatile boolean stopped = false;
// The constructor
TimerThread(boolean isDaemon) { setDaemon(isDaemon); }
// Ask the thread to stop by setting the flag above
void pleaseStop( ) { stopped = true; }
// This is the body of the thread
public void run( ) {
TimerTask readyToRun = null; // Is there a task to run right now?
// The thread loops until the stopped flag is set to true.
while(!stopped) {
// If there is a task that is ready to run, then run it!
if (readyToRun != null) {
if (readyToRun.cancelled) { // If it was cancelled, skip.
readyToRun = null;
continue;
}
// Run the task.
readyToRun.run( );
// Ask it to reschedule itself, and if it wants to run
// again, then insert it back into the set of tasks.
if (readyToRun.reschedule( ))
schedule(readyToRun);
// We've run it, so there is nothing to run now
readyToRun = null;
// Go back to top of the loop to see if we've been stopped
continue;
}
// Now acquire a lock on the set of tasks
synchronized(tasks) {
long timeout; // how many ms 'till the next execution?
if (tasks.isEmpty( )) { // If there aren't any tasks
timeout = 0; // Wait 'till notified of a new task
}
else {
// If there are scheduled tasks, then get the first one
// Since the set is sorted, this is the next one.
TimerTask t = (TimerTask) tasks.first( );
// How long 'till it is next run?
timeout = t.nextTime - System.currentTimeMillis( );
// Check whether it needs to run now
if (timeout <= 0) {
readyToRun = t; // Save it as ready to run
tasks.remove(t); // Remove it from the set
// Break out of the synchronized section before
// we run the task
continue;
}
}
// If we get here, there is nothing ready to run now,
// so wait for time to run out, or wait 'till notify( ) is
// called when something new is added to the set of tasks.
try { tasks.wait(timeout); }
catch (InterruptedException e) { }
// When we wake up, go back up to the top of the while loop
}
}
}
}
/** This inner class defines a test program */
public static class Test {
public static void main(String[ ] args) {
final TimerTask t1 = new TimerTask( ) { // Task 1: print "boom"
public void run( ) { System.out.println("boom"); }
};
final TimerTask t2 = new TimerTask( ) { // Task 2: print "BOOM"
public void run( ) { System.out.println("\tBOOM"); }
};
final TimerTask t3 = new TimerTask( ) { // Task 3: cancel the tasks
public void run( ) { t1.cancel( ); t2.cancel( ); }
};
// Create a timer, and schedule some tasks
final Timer timer = new Timer( );
timer.schedule(t1, 0, 500); // boom every .5sec starting now
timer.schedule(t2, 2000, 2000); // BOOM every 2s, starting in 2s
timer.schedule(t3, 5000); // Stop them after 5 seconds
// Schedule a final task: starting in 5 seconds, count
// down from 5, then destroy the timer, which, since it is
// the only remaining thread, will cause the program to exit.
timer.scheduleAtFixedRate(new TimerTask( ) {
public int times = 5;
public void run( ) {
System.out.println(times--);
if (times == 0) timer.cancel( );
}
},
5000,500);
}
}
} | http://books.gigatux.nl/mirror/javaexamples/0596006209_jenut3-chp-4-sect-5.html | CC-MAIN-2019-13 | refinedweb | 1,451 | 64.61 |
class base1:
def foo(self):
print 'base1.foo'
class base2:
def foo(self):
print 'base2.foo'
class derived(base1, base2):
def tst(self):
self.foo()
base2.foo(self)
It's clear that this works, but is there a way to call base2.foo() that
makes it look more like a base class attribute is being called, e.g.,
something that doesn't require the explicit 'self' argument in the call to
base2.foo()? (I realize this is a minor point. Don't everyone get all worked
up about it. I'm not trying to change Python into C++. I'm just curious. :-)
Thx,
-- Skip Montanaro ([email protected]) Now working for Automatrix - "World-Wide Computing Solutions" | https://legacy.python.org/search/hypermail/python-1994q2/0978.html | CC-MAIN-2021-43 | refinedweb | 118 | 71.82 |
These patches make it possible to share NFS superblocks between related mounts,
where "related" means on the same server. Inodes and dentries will be shared
where the NFS filehandles are the same (for example if two NFS3 files come from
the same export but from different mounts, such as is not uncommon with autofs
on /home).
Following discussion with Al Viro, the following changes [try #2] have been
made to the previous attempt at this set of patches:
(*) The vfsmount is now passed into the get_sb() method for a filesystem
instead of passing a pointer to a pointer to a dentry into which get_sb()
could stick a root dentry if it wanted. get_sb() now instantiates the
superblock and root dentry pointers in the vfsmount itself.
(*) The get_sb() method now returns an integer (0 or -ve error number) rather
than the superblock pointer or cast error number.
(*)_parent() and
shrink_dcache_anon(). This is because, as far as I can tell, the current
code assumes that all the dentries will be linked into a tree depending
from sb->s_root, and that anonymous dentries won't have children.
However, with the way these patches implement NFS superblock sharing,.
(*) d_materialise_dentry() now switches the parentage of the two nodes around
correctly when one or other of them is self-referential.
(*) Whilst invoking link_path_walk(), the nfs4_get_root() routine now
temporarily overrides the FS settings of the process to prevent absolute
symbolic links from walking into the wrong namespace.
(*) nfs*_get_sb() instantiate the supplied vfsmount directly by assigning to
its mnt_root and mnt_sb members.
(*) nfs_readdir_lookup() creates dentries for each of the entries readdir()
returns; this function now attaches disconnected trees from alternate
roots that happen to be discovered attached to a directory being read (in
the same way nfs_lookup() is made to do for lookup ops).
(*) Various bugs have been fixed.
Further changes [try #3] that have been made:
(*) The patches are now against Trond's NFS git tree, so won't apply to
Linus's tree.
(*) The server record construction/destruction has been abstracted out into
its own pair of functions to make things easier to get right.
(*) Documentation changes have been moved from patch 2/5 to 1/5.
David
- | http://lwn.net/Articles/174291/ | CC-MAIN-2013-20 | refinedweb | 363 | 57.5 |
Exception Testing using Spock in grails
Testing was never so easy and intuitive before the use of Spock. Earlier when I was using grails unit testing, it never attracted me to write more and
String getUserType(int age){ if(age<=0){ throw new MyException("Invalid age") }else if( age>0 && age<50){ return "Young" }else{ return "Old" } }
Now we will write the test case of this method for checking whether exception is thrown for invalid inputs or not.
def "exception should be thrown only for age less than or equal to 0"{ given: String type = getUserType(34) expect: type == "Young" notThrown(MyException) when: type = getUserType(0) then: MyException me = thrown() me.message == "Invalid age" }
This is how we have tested whether exception is thrown for invalid input or not.
Hope it helps
Uday Pratap Singh
[email protected] | http://www.tothenew.com/blog/exception-testing-using-spock-in-grails/ | CC-MAIN-2019-43 | refinedweb | 138 | 54.36 |
On 03/28/2011 09:01 AM, Sigbjorn Lie wrote: > On Mon, March 28, 2011 14:31, Dmitri Pal wrote: >> On 03/27/2011 06:14 PM, Sigbjorn Lie wrote: >> >>> Hi, >>> >>> >>> I have written some scripts for migration from NIS/local files to IPA. >>> They will import the passwd, group, netgroup, and hosts maps. >>> >>> >>> This is the first version, be aware of bugs. :) >>> >>> >>> Please read the README file before using. >>> >>> >>> You can download them from here if you are interested: >>> >>> >> Thank you for the contribution! >> I see that it is under GPL v2. Would you mind relicensing it under GPL v3? >> >> >> >> Would you be interested in these scripts being incorporated into the >> project source? Rob, can you please take a look into this? Should we consider rewriting >> them in Python and incorporating into the main tool set or leave and use as is? >> >> > Sure I can relicense to GPL v3. All I care about is the scripts staying open and free to use. :) > > You can include as a part of IPA if you would like. I was planning to re-write them all into perl, > as my initial efforts to write them in bash for maximum portability didn't work out, and the > netgroup and hosts import scripts ended up written in perl. > > I cannot help re-writing to python, I don't know the language. > Ok, thank you! Can you elaborate a bit about the constraints you have regarding migration. As far as I understand you have users and groups with colliding gids and you have to resolve things manually to make things exactly as they were and IPA as is does not allow you to do so as it always creates a privite group with the same GID. I have a stupid question: what is the implication of actually not doing things exactly as they were but rather embracing the IPA model of the unified UID/GID namespace? Is the reason that there are some applications scattered in the enterprise that might have gids configured explicitly in the configuration files (like SUDO for example) and updating those would be a challenge or there is something else? Thanks Dmitri > Rgds, > Siggi > > > _______________________________________________ > Freeipa-users mailing list > Freeipa-users redhat com > > > -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? | https://www.redhat.com/archives/freeipa-users/2011-March/msg00156.html | CC-MAIN-2015-14 | refinedweb | 389 | 71.85 |
Safari Books Online is a digital library providing on-demand subscription access to thousands of learning resources.
The client application can be a simple application to just send HTTP requests to the service and receive AtomPub or JSON responses. The first client application example is a WPF application that makes use of the HttpWebRequest class from the System.Net namespace.
Figure 32-2 shows the UI from the Visual Studio Designer. A TextBox named textUrl is used to enter the HTTP request with a default value of. The read-only TextBox named textReturn receives the answer from the service. You can also see a CheckBox checkJSON, where the response can be requested in the JSON format. The Button with the content Call Data Service has the Click event associated to the OnRequest() method. | http://my.safaribooksonline.com/book/programming/csharp/9780470502259/data-services/http_client_application | CC-MAIN-2013-48 | refinedweb | 133 | 55.13 |
You are given a binary tree, you need to determine if the tree is a valid BST or not? For a tree to be a BST, the left child of a node should have a value less than the node’s value and the right child of the node should have a value greater than the node’s value and the property should hold true for all the subtrees.
The above tree satisfies all the properties of a Binary Search Tree.
Algorithm
Instead of guessing the answer solely based on the node’s value and it’s children, we need to pass on the information from the parent nodes as well.
For example, in the above tree, the node 20 has value greater than it’s left child and less than it’s right child. If based on this condition only, we determine the validity of the tree, than the answer would be true. However, we can see that the node 5 which is on the right subtree of the root node has a value less than the root node 20. Thus, the actual answer should be False. Thus, we need to pass on the parent node’s information downwards.
At each node, we need to check the following conditions –
- If the current node is the left child of the parent, then it must be smaller than the it’s parent and it also has to pass on the value from it’s parent to it’s right subtree so as to make sure that none of the nodes in that particular subtree has a value greater than it’s parent.
- If the current node is the right child of its parent, then it must have a value larger than its parent and it also needs to pass down it’s value to the left subtree so that none of the nodes in its left subtree is less than its parent.
For a more clear picture, have a look at the code.
#include <bits/stdc++.h> using namespace std; struct Node{ Node* left; Node* right; int data; }; Node* createNode(int key){ Node* temp = new Node(); temp->data = key; temp->right = nullptr; temp->left = nullptr; return temp; } Node* root = nullptr; Node* insertNode(Node* node, int key){ Node* temp = createNode(key); if(root == nullptr){ root = temp; return root; } if(node == nullptr){ return temp; } if(key < node->data){ node->left = insertNode(node->left, key); }else if(key > node->data){ node->right = insertNode(node->right, key); } return node; } void levelOrderTraversal(){ Node* temp = root; queue<Node*> q; q.push(temp); while(!q.empty()){ Node* front = q.front(); cout<<front->data<<" "; q.pop(); if(front->left != nullptr){ q.push(front->left); } if(front->right != nullptr){ q.push(front->right); } } } bool helper(Node* node, long long lower, long long upper){ if(node == nullptr){ return true; } return (node->data > lower) && (node->data < upper) && helper(node->right, node->data, upper) && helper(node->left, lower, node->data); } bool isValidBST(Node* root) { return helper(root, LLONG_MIN, LLONG_MAX); } int main() { insertNode(root, 8); insertNode(root, 3); insertNode(root, 10); insertNode(root, 1); insertNode(root, 6); insertNode(root, 14); insertNode(root, 4); insertNode(root, 7); insertNode(root, 13); levelOrderTraversal(); cout<<endl; if(isValidBST(root)){ cout<<"It is a valid BST"; }else{ cout<<"It's not a valid BST"; } return 0; }
Also, take a look at another popular interview problem Least Common Ancestor.
One comment | https://nerdycoder.in/2020/08/29/is-given-tree-a-bst-or-not/ | CC-MAIN-2021-04 | refinedweb | 561 | 57.74 |
There was a recent uproar on twitter due to a (now deleted) tweet by a data science interviewer which divided the python community -
pandas.read_csv() vs. the built-in
csv module.
It is perfectly fine in case you do not use the built-in
csv module and let me give you three solid reasons why you should use
pandas.read_csv() (without delving into feature-level comparison):
For the above dataset, the simplest way of reading it using the
csv module is:
import csv data = [] with open('ford_escort.csv', newline='') as f: reader = csv.reader(f, skipinitialspace=True) for row in reader: data.append(row)
Now, let us access the same CSV file using
pandas.read_csv().
import pandas df = pandas.read_csv('ford_escort.csv', skipinitialspace=True)
According to the Zen of Python which lists the guiding principles for Python's design, Simple is better than complex and Flat is better than nested. This clearly resonates when we use
pandas.read_csv() as there are fewer lines of code (lower margin of error) and the interface is also not nested.
The
csv module treats the file like a plain-text and each datum is stored as a sequence of strings.
dtypes = [type(item) for item in data[1]] print(dtypes)
[<class 'str'>, <class 'str'>, <class 'str'>]
Whereas,
pandas.read_csv() automatically detects the suitable type for each column.
print(df.dtypes)
Year int64 Mileage (thousands) int64 Price int64 dtype: object
It also supports easy type conversion of any column using the
dtype argument.
CSV is one of the oldest and most widely used data serialisation format. At times it is used to store columnar data where one might need to access specific columns post reading.
To extract a column (for example
Price) from the above
data, we have to write a list comprehension:
price = [int(row[2]) for row in data[1:]] print(price)
[9991, 9925, 10491, 10990, 9493, 9991, 10490, 9491, 9491, 9990, 9491, 9990, 9990, 9390, 9990, 9990, 9990, 8990, 7990, 5994, 5994, 5500, 11000]
Whereas
pandas.read_csv() returns a dataframe from which one can directly access any column using the
[ ] (slice) operator.
price = df["Price"].to_list() print(price)
[9991, 9925, 10491, 10990, 9493, 9991, 10490, 9491, 9491, 9990, 9491, 9990, 9990, 9390, 9990, 9990, 9990, 8990, 7990, 5994, 5994, 5500, 11000]
This flexibility of accessing data in any direction, surely makes
pandas.read_csv() better suited for real world applications. | https://realworldpython.hashnode.dev/3-solid-reasons-why-you-should-use-pandasreadcsv | CC-MAIN-2022-05 | refinedweb | 397 | 62.48 |
select * from employees where salary > AVG(salary)
The problem is that avg is applied to the filtered rows. It will be circular how avg(salary) will get its value if avg is part of where clause. So to prevent that confusion, SQL standard disallows using aggregate functions in where clause.
Humans are lazy. Programmers are humans. Programmers are lazy.
I think that it is a missed opportunity that SQL didn't impose aliases when tables are referenced, otherwise they could introduce functionality that would have no ambiguity:
select e.* from employees e where e.salary > AVG(employees.salary)
Or perhaps a little OOP:
select e.* from employees e where e.salary > employees.AVG(salary)
That would mean, get all the employee (denoted by e) from employees whose salary is greater than the average salary of all employees.
Before you scoff that it would be super-duper hard for the RDBMS developers to parse things like that. Consider that, that sort of brevity can be achieved in C#'s Linq:
Live test:
using System; using System.Linq; class Employee { public string Name { get; set; } public decimal Salary { get; set; } } public class Simple { public static void Main () { var employees = new Employee[] { new Employee { Name = "John", Salary = 10 }, new Employee { Name = "Paul", Salary = 9 }, new Employee { Name = "George", Salary = 2 }, new Employee { Name = "Ringo", Salary = 1 }, }; Console.WriteLine ("Average salary: {0}", employees.Average (x => x.Salary)); var query = from e in employees where e.Salary > employees.Average(x => x.Salary) select e; foreach (var e in query) { Console.WriteLine ("{0} {1}", e.Name, e.Salary); } } }
Output:
Average salary: 5.5 John 10 Paul 9
If RDBMS have OOP and Linq syntax, it can prevent unusual request:
Live test: | https://www.anicehumble.com/2019/04/salary-greater-than-average-salary-conundrum.html | CC-MAIN-2020-16 | refinedweb | 285 | 67.25 |
jpr928 wrote:Hi! my name is Jean-Pierre, I have a question, is lua difference between LM 12 and LMDE , because the same conky-lua is not working in LM 12
but work on LMDE.
The shell response on every symbol '=' is :
- Code: Select all
conky -c /home/bushi12/.conky/scripts/conkyrcConky: llua_load: /home/bushi12/.conky/scripts/clock_rings.lua:262: unexpected symbol near '='
Conky: desktop window (1000024) is subwindow of root window (18e)
Conky: window type - override
Conky: drawing to created window (0x2a00001)
Conky: drawing to double buffer
Conky: llua_do_call: function conky_clock_rings execution failed: attempt to call a nil value
Conky: llua_do_call: function conky_clock_rings execution failed: attempt to call a nil value
but the symbol '=' is important :
- Code: Select all
clock_x=190
clock_y=297
show_seconds=true
require 'cairo'
function rgb_to_r_g_b(colour,alpha)
return ((colour / 0x10000) % 0x100) / 255., ((colour / 0x100) % 0x100) / 255., (colour % 0x100) / 255., alpha
end
Somebody know this problem and how to fix it ?
Thanks
My conky setup that worked in LMDE doesn't work in Mint 12 either. Anyone has a solution? | http://forums.linuxmint.com/viewtopic.php?p=511768 | CC-MAIN-2014-15 | refinedweb | 174 | 54.02 |
About this project
$4,146
343
Play the Demo!
Yup, fully functional thru the first 5 dungeons. It may have a few oddities (it is an ALPHA), but it works...
Some installation / tutorial info can be found here.
GamePlay Video
A longer/different version of the Kickstarter Video, with some explanation of game systems:
Project History
Very simply, this project is a result of my hunt for a game similar to Final Fantasy’s: My Life as King. This WiiWare title captured my love for RPG’s & Sports Management Sims and mashed them together into one beautiful (tho flawed) game.
The idea of carefully managing parties of adventurers and sending them into dungeons isn’t a new one. This has been done many, many times before. However, My Life as King differed on one key point: - You neither watched nor controlled battle.
After sending your adventurers out into the wild, you went about your tasks in town. Upon their return (if they made it back), you could comb the combat reports to see how your parties fared. Did they delve deep and find glorious treasure? Were their skills put to the test? Did they defeat a boss? Level up? Tragically lose a comrade?
This gameplay rung true with me and I was immediately enchanted. After many hours, I soon learned that the game was limited in strange, inexplicable ways – and I dropped it when I realized it was less about hero/loot/skill/class/party management, and more about where I placed my buildings/structures in the 3d village.
Years passed but I kept on the hunt for games like My Life as King. Some have been similar, but no game I’ve come across has “got it right.” My appetite was awakened to this type of game and I scoured the internet trying to find similar titles. For years, I have abandoned and re-started this quest of mine – all to come up empty each time.
Last year, I came to a realization – if the game didn’t exist, I would make it. Heh. Yeah. I’m kind of a lunatic that way.
Sim Hero was born.
It's all about the gameplay...
The playable demo showcases the core game mechanic I’m striving to accomplish: manage parties of adventurers through a series of dungeons for loot, fame and experience points.
After playing the *few* similar games in this category, I was left wanting for more features – so I’m working to add what I feel “should be there.” I already have dozens of classes completed, hundreds of skills, a multi-class system, job points (skill buy), castle upgrading, loot rarities and a bunch of additional things absent from My Life as King and other games.
You’ll notice many rough edges (just like in the early days of Dwarf Fortress). There may be unrealized features and a few glitches. But the game will run. It will play. (Windows only) And I hope to create something as an example of what one dedicated person can do – and maybe I’ll inspire someone with some real coding chops to make something grander.
In the end, this is the type of game I want to play… and I cannot play it until someone makes it. For some reason, that “someone” is me… here we go!
Final note: this is a hardcore game, not for the feint of heart. The controls are limited to keyboard or gamepad only (xbox controller works just fine). This is how it'll be until I get mouse support scripted (in the works!). You were warned.
Where the money will go
40%: Art (expanded tilesets, charactersets, iconset)
40%: Scripting. My goal is to incorporate mouse controls and drag/drop party management, but as bugs arise this may need to go on the back burner.
20%: Logistical/Marketing costs (website, hosting, ads, giving back, etc)
A note on the rewards
I'm allowing the community to have a hand in the game development, but that only goes so far. I reserve the right to rename something a backer creates. I'll not allow any offensive material or anything non-fitting of the setting, nor any copyrighted/trademarked material.
Stretch Goals #1
Let's see if we can activate another tier of stretch goals by meeting these:
Press & Support:
Risks and challenges
I've very open & honest about my skills & limitations. (I'm using RPGMaker VXAce) No, this isn't a "traditionally coded" game . Not at this stage, at least. However, I know what I've built is the *best in class* as far as features/functionality goes for a RPG Management Sim (my opinion, of course).
Though I'm not a "coder", as a kid, I self-taught myself BASIC on my Apple, Atari and Commodore64 computers. I even learned a few other languages (like PASCAL and VisualBASIC). Later on, I coded a few games in BlitzBasic. Still later, I was the lead developer on a NeverWinter Nights persistent world entitled “The Drunken Skald” – we were the most popular server in the world for half a year.
I feel I can deliver the intended gameplay on this platform.Learn about accountability on Kickstarter
Questions about this project? Check out the FAQ
Support this project
Funding period
- (30 days) | https://www.kickstarter.com/projects/2062086221/sim-hero | CC-MAIN-2017-39 | refinedweb | 880 | 72.87 |
XPath and XPointer/XPath Basics
Chapter 1 provided sketchy information about using XPath. For the remainder of the book, you'll get details aplenty. In particular, this chapter covers the most fundamental building blocks of XPath. These are the "things" XPath syntax (covered in succeeding chapters) enables you to manipulate. Chief among these "things" are XPath expressions, the nodes and node-sets returned by those expressions, the context in which an expression is evaluated, and the so-called string-values returned by each type of node.
The Node Tree: An Introduction
You'll learn much more about nodes in this chapter and the rest of the book. But before proceeding into even the most elementary details about using XPath, it's essential that you understand what, exactly, an XPath processor deals with.
Consider this fairly simple document:
<?xml-stylesheet type="text/css" href="battleinfo.css"?>
<battleinfo>
  <name>Guadalcanal</name>
  <!-- Note: Add dates, units, key personnel -->
  <geog general="Pacific Theater">
    <islands>
      <name>Guadalcanal</name>
      <name>Savo Island</name>
      <name>Florida Islands</name>
    </islands>
  </geog>
</battleinfo>
As the knowledgeable human eye — or an XML parser — scans this document from start to finish, it encounters signals that what follows is an element, an attribute, a comment, a processing instruction (PI), whatever. These signals are of course the markup in the document, such as the start and end tags delimiting the elements.
XPath functions at a higher level of abstraction than this simple kind of lexical analysis, though. It doesn't know anything about a document's tags and thus can't communicate anything about them to a downstream application. What it knows about, and knows about intimately, are the nodes that make up the document: the discrete chunks of information encapsulated within and among the markup. Furthermore, it recognizes that these chunks of information bear a relationship to one another, a relationship imposed on them by their physical arrangement within the document (such as the successively deeper nesting of elements within one another). Figure 2-1 illustrates this node-tree view of the above document as seen by XPath.
There a few things to note about the node tree depicted in Figure 2-1:
- First, there's a hierarchical relationship among the different "things" that make up the tree. Of course, all the nodes are contained by the document itself (represented by the overall figure). Furthermore, many of the nodes have "offshoot" nodes. The battleinfo element sits on top of the outermost name element, the comment, and the geog element (which are all in turn subordinate to battleinfo).
- Some discrete portions of the original document contribute to the hierarchical nature of the tree. The elements (solid boxes) and their offspring — subordinate elements, text strings (dashed boxes), and the comment — are connected by solid lines representing true hierarchical relationships. Attributes, on the other hand, add nothing to the structure of the node tree (although they do have relationships, depicted with dotted-dashed lines, to the elements that define them). And the xml-stylesheet PI at the very top of the document is connected to nothing at all.
- Finally, most subtly yet most importantly, there is not a single scrap of markup in this tree. True enough, the element, attribute, and PI nodes all have names that correspond to bits of the original document's markup (such as the elements' start and end tags). But there are no angle brackets here. All that the XPath processor sees is content, stacked inside a tower of invisible boxes. The processor knows what kind of box each thing is, and if applicable it knows the box's name, but it does not see the box itself.
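To make those invisible boxes concrete, here are a few XPath expressions (using syntax that the next two chapters cover in full) together with the portions of the tree in Figure 2-1 each would locate. Treat these as an illustrative sketch, not a tutorial:

/battleinfo/name                  (the outermost name element)
/battleinfo/geog/@general         (the general attribute of the geog element)
/battleinfo/geog/islands/name     (all three island name elements)

In each case what comes back is content in its invisible box: a node or set of nodes, never the tags themselves.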
XPath Expressions
If you've never worked with XPath before, you may be expecting its syntax to be XML-based. That's not the case, though. XPath is not an XML vocabulary in its own right. You can't submit "an XPath" to an XML parser — even a simple well-formedness checker — and expect it to pass muster. That's because "an XPath" is meant to be used as an attribute value.
Tip
Chapter 1 discussed why using XML syntax for general-purpose languages, such as XPath and XPointer, is impractical. As mentioned there, the chief reason might be summed up as: such languages are needed in the context of special-purpose languages, such as XSLT and XLink. Expressing the general-purpose language as XML would both make them extremely verbose and require the use of namespaces, complicating inordinately what is already complicated enough.
"An XPath"[1] consists of one or more chunks of text, delimited by any of a number of special characters, assembled in any of various formal ways. Each chunk, as well as the assemblage as a whole, is called an XPath expression.
Here's a handful of examples, by no means comprehensive. (Don't fret; there are more detailed examples aplenty throughout the rest of the book.)
- taxcut
- Locates an element, in some relative context, whose name is "taxcut"
- /
- Locates the document root of some XML instance document
- /taxcuts
- Locates the root element of an XML instance document, only if that element's name is "taxcuts"
- /taxcuts/taxcut
- Locates all child elements of the root taxcuts element whose names are "taxcut"
- 2001
- The number 2001
- "2001"
- The string "2001"
- /taxcuts/taxcut[attribute::year="2001"]
- Locates all child elements of the root taxcuts element, as long as those child elements are named "taxcut" and have a year attribute whose value is the string "2001"
- /taxcuts/taxcut[@year="2001"]
- Abbreviated form of the preceding
- 2001 mod 100
- Calculated remainder after dividing the number 2001 by 100 (that is, the number 1)
- /taxcuts/taxcut[@year="2001"]/amount mod 100
- Calculated remainder after dividing the indicated amount element's value by 100
- substring-before("ill-considered", "-")
- The string "ill"
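None of these expressions means much without a document to evaluate it against. A hypothetical instance document consistent with the taxcut examples above might look like this (the element names and values are invented for illustration):

<taxcuts>
  <taxcut year="2001">
    <amount>1350</amount>
  </taxcut>
  <taxcut year="2003">
    <amount>400</amount>
  </taxcut>
</taxcuts>

Against this document, /taxcuts/taxcut[@year="2001"] locates only the first taxcut element, and /taxcuts/taxcut[@year="2001"]/amount mod 100 evaluates to the number 50 (the remainder of 1350 divided by 100).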
Location Steps and Location Paths
Chapter 3 details both of these concepts. To get you started in XPath, here's a broad outline.
Most XPath expressions, by far, locate a document's contents or portions thereof. (Expressions such as the number 2001 and the string "2001" are exceptions; they don't locate anything, you might say, except themselves.) These pieces of content are located by way of one or more location steps — discrete units of XPath "meaning" — chained together, usually, into location paths.
This XPath expression from the above list:
/taxcuts/taxcut
consists of two location steps. The first locates the taxcuts child of the document root (that is, it locates the root element); the second locates all children of the preceding location step whose names are "taxcut." Taken together, these two location steps make up a complete location path.
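In practice, a location path like this shows up as an attribute value in some host vocabulary. In an XSLT stylesheet, for instance, it might drive a loop over the sample document sketched above (a fragment only; the xsl prefix is assumed bound to the XSLT namespace):

<xsl:for-each select="/taxcuts/taxcut">
  <xsl:value-of select="@year"/>
</xsl:for-each>

The select attribute's value is the location path; the XSLT processor hands it to its XPath engine and iterates over whatever node-set comes back.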
Expression Syntax
As you can see from the previous examples, an XPath expression can be said to consist of various components: tokens and delimiters.
Tokens
A token, in XPath as elsewhere in the XML world, is a simple, discrete string of Unicode characters. Individual characters within a token are not themselves considered tokens. If an XPath expression is analogous to a chemical molecule, the tokens of which it's composed are the atoms. (And the individual characters, I guess, are the sub-atomic particles.)
If quotation marks surround the token, it's assumed to be a string. If no quotation marks adorn the token, an XPath-smart application assumes that the token represents a node name.[2] I'll have more to say about nodes and their names in a moment and much more to say about them throughout the rest of the book. For now, though, consider the first example listed above. The bare token taxcut is the name of a node. If I had put it in quotation marks, like "taxcut", the XPath expression wouldn't necessarily refer to anything in a particular document; it would simply refer to a string composed of the letters t, a, x, c, u, and t: almost certainly not what you want at all.
As a special case, a node name can also be represented with an asterisk (*). This serves as a wildcard (all nodes, regardless of their name) character. The expression taxcut/* locates all elements that are children of a taxcut element.
Warning
You cannot, however, use the asterisk in combination with other characters to represent portions of a name. Thus, tax* doesn't locate all elements whose names start with the string "tax"; it's simply illegal as far as XPath is concerned.
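If you genuinely need to match on portions of a name, XPath can manage it, but through its function library (covered in Chapter 4) rather than through wildcard syntax. A sketch:

*[starts-with(name(), "tax")]

This first selects all child elements with the bare * wildcard, then uses a predicate to keep only those whose names begin with the string "tax".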
Delimiters
Tokens in an XPath expression are set off from one another using single-character delimiters, or pairs of them. Aside from quotation marks, these delimiters include:
- /
- A forward slash separates a location step from the one that follows it. While I introduced location steps briefly above, Chapter 3 will discuss them at length.
- [ and ]
- Square brackets set off a predicate from the preceding portion of a location step. Again, detailed discussion of predicates is coming in Chapter 3. For now, understand that a predicate tests the expression preceding it for a true or false value. If true, the indicated node in the tree is selected; if false, it isn't.
- =, !=, <, >, <=, and >=
- These Boolean "delimiters" are used in a predicate to establish the true or false value of the test. Note that when used in an XML document, the markup-significant < character must appear in its escaped form, &lt;, to comply with XML's well-formedness constraints, even when used in attribute values; the > character is conventionally escaped as &gt; as well, though strictly speaking it need not be. (For instance, to use the Boolean less-than-or-equal-to test, you must code the XPath expression as &lt;=.) While XPath itself isn't expressed as an XML vocabulary, the documents in which XPath expressions most often appear are XML documents; therefore, well-formedness will haunt you in XPath just as elsewhere in the XML world.[3]
- ::
- A double colon separates the name of an axis type from the name of a specific node (or set of nodes). Axes (more in Chapter 3) in XPath, as in plane and solid geometry, indicate some orientation within a space. In an XPath expression, an axis "turns the view" from a given starting point in the document. For instance, the attribute axis (abbreviated @) looks only at attributes of some element or set of elements.
- //, @, ., and ..
- Each of these — the double slash, at sign, period, and double period — is an abbreviated or shortcut form of an axis or location step. Respectively, these symbols represent the concepts of descendant-or-self, attribute, self, and parent (covered fully in Chapter 3).
- |
- A pipe/vertical bar in an XPath expression functions as a Boolean union operator. This lets you chain together complete multiple expressions into compound location paths. Compound location paths are covered at the end of Chapter 3.
- ( and )
- Pairs of parentheses in XPath expressions, as in most other computer-language contexts, serve two purposes. They can be used for grouping subexpressions, particularly in cases where the ungrouped form would introduce ambiguities, and they can be used to set off the name of an XPath function from its argument list. Details on XPath functions appear in Chapter 4.
- +, -, *, div, and mod
- These five "delimiters" actually function as numeric operators: ways of combining numeric values to calculate some other value. Numeric operators are also covered in Chapter 4. Note that the asterisk can be used as either a numeric operator or as a wildcard character, depending on the context in which it appears. The expression tax*income multiplies the values of the tax and income elements and returns the result; it does not locate all elements whose names start with the string "tax" and end with the string "income."
- whitespace
- When not appearing within a string, whitespace can in some instances delimit tokens (and even other delimiters) for legibility, without changing the meaning of an expression. For instance, the two predicates [@year="2001"] and [@year = "2001"] are functionally identical, despite the presence in the second case of blank spaces before and after the =. Because the rules for when you can and can't use whitespace vary depending on context, I'll cover them in various places throughout the book.
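A single compound expression can put many of these delimiters to work at once. Consider this illustrative sketch:

/taxcuts/taxcut[@year="2001"] | /taxcuts/taxcut[amount > 1000]

Here the slashes separate location steps, the square brackets delimit predicates, the @ abbreviates the attribute axis, the = and > perform Boolean tests, the quotation marks set off a literal string, and the pipe unions two complete location paths into one compound path.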
Combining tokens and delimiters into complete expressions
While the rules for valid combinations of tokens and delimiters aren't spelled out explicitly anywhere, they follow the rules of common sense. (Whether the sense is in fact common depends a little on how comfortable you are with the concepts of location steps and location paths.)
For instance, the following is a syntactically illegitimate XPath expression; it also, if you think a little about it, doesn't make practical sense:
book/
See the problem? First, for those of you who simply want to follow the rules without thinking about them, you can simply accept as a given that the / (unless used by itself) must be used as a delimiter between location steps; with no subsequent location step to the right, it's not separating book from anything.
Second, there's a more, well, let's call it a more philosophical problem. What exactly would the above expression be meant to say? "Locate a child of the book element which...." Which what? It's like a sentence fragment.
Tip
Note the difference here between XPath expressions and their counterparts in some other "navigational" languages, such as Unix directory commands and URIs. In these other contexts, a trailing slash might mean "all children of the present context" (such as a directory) or "the default child of the present context" (such as a web document named index.html or default.html). In XPath, few of these matters are implicit. If you want to get all children of the current context, follow the slash with something, such as an asterisk wildcard (to get all named children), as in book/*. Chapter 3 describes other approaches, particularly the use of the node( ) node test.
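Summed up, the distinction looks like this (node( ) is previewed here and covered properly in Chapter 3):

book/          (illegal: a trailing slash must be followed by another location step)
book/*         (legal: all element children of the book element)
book/node()    (legal: all children of the book element, including text, comments, and PIs)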
I'll cover these kinds of common-sense rules where appropriate. (See Chapter 3, especially.)
XPath Data Types
A careful reading of the previous material about XPath expressions should reveal that XPath is capable of processing four data types: string, numeric, Boolean, and nodes (or node-sets).
The first three data types I'll address in this section. Nodes and node-sets are easily the most important single XPath data type, so I've relegated them to a complete section in their own right, following this one.
Strings
You can find two kinds of strings, explicit and implicit, in nearly any XPath expression. Explicit (or literal) strings, of course, are strings of characters delimited by quotation marks. Now, don't get confused here. As I've said, XPath expressions themselves appear as attribute values in XML documents. Therefore, an expression as a whole will be contained in quotation marks. Within that expression, any literal strings must be contained in embedded quotation marks. If the expression as a whole is contained in double quotation marks, ", then a string within it must be enclosed in single quotation marks or apostrophes: '. If you prefer to enclose attribute values in single quotes, the embedded string(s) must appear in double quotes.
Tip
This nesting of literal quotation marks and apostrophes — or vice versa — is unnecessary, strictly speaking. If you prefer, you can escape the literals using their entity representations. That is, the expressions "a string" and &quot;a string&quot; are functionally identical. The former is simply more convenient and legible.
For example, in XSLT stylesheets, one of the most common attributes is select, applied to the xsl:value-of element (which is empty) and others. The value of this attribute is an XPath expression. So you might see code such as the following:
<xsl:value-of select="'pathetic'"/>
If the string "pathetic" were not enclosed in quotation marks, of course, it would be considered a node name rather than a string. (This might make sense in some contexts, but even in those contexts, it would almost certainly produce quite different results from the quoted form.) Note that the kind of quotation marks used in this example alternates between single and double as the quoted matter is nested successively deeper.
Explicitly quoted strings aside, XPath also makes very heavy use of what might be called implicit strings. They might be called that, that is, except there's already an official term for them: string-values. I will have more to say about string-values later in this chapter. For now, a simple example should suffice.
Consider the following fragment of an XML document:
<type>logical</type>
<type>pathetic</type>
Each element in an XML document has a string-value: the concatenated value of all text contained by that element's start and end tags. Therefore, the first type element here has a string-value of logical; the second, pathetic. An XPath expression in a predicate such as:
type='logical'
would be evaluated for the two elements, respectively, as:
'logical'='logical'
'pathetic'='logical'
That is, for the first type element the predicate would return the value true; for the second, false.
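In a stylesheet, that test would typically appear as a predicate in a location step. A sketch (the period, one of the abbreviations listed earlier, stands for the context node itself; in the comparison, it is the node's string-value that is tested):

<xsl:apply-templates select="type[. = 'logical']"/>

Of the two type elements in the fragment above, only the first would be selected.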
Numeric Values
There's no special magic here. A numeric value in XPath terms is just a number; it can be operated on with arithmetic, and the result of that operation is itself a number. (XPath provides various facilities for converting numeric values to strings and vice versa. Detailed coverage of these facilities can be found in Chapter 4.) Formally, XPath numbers are all assumed to be floating-point numbers even when their explicit representation is as integers.
Warning
While XPath assumes all numbers to be of floating-point type, you cannot represent literal numbers in XPath using scientific notation. For example, many languages allow you to represent the number 1960 as 1.96E3 (that is, 1.96 times 10 to the 3rd power); such a value in XPath is not recognized as a legitimate number.
Although the XPath specification does not define "numeric-values" for nodes analogous to their string-values, XPath-aware applications can treat as numeric any string-value that can be "understood" as numeric. Thus, given this XML code fragment:
<page_ref>23</page_ref>
you can construct an XPath expression such as:
page_ref + 10
This would be understood as 23 (the numeric form of the page_ref element's string-value) plus 10, or 33.
The XPath specification also defines a special value, NaN, for simple "Is this value a number?" tests. ("NaN" stands for "not a number.") While the spec repeatedly refers to something called NaN, it doesn't show you how to use it except as a string (returned by the XPath string( ) function, as it happens). If you wanted to locate only those year elements which had legitimately numeric values, you could use an XPath expression something like this:
string(number(year)) != "NaN"
This explicitly attempts to convert the string-value of the year element to a number, then converts the result of that attempt to a string and compares it to the string "NaN."[4] Only those year elements for which those two values are not equal (that is, only those year elements whose string-values are not "not a number") will pass.
Tip
The string( ) function, covered at length in Chapter 4, is extremely important in XPath. That's not because it's used that much in code — in my experience it isn't used much at all — rather, its importance is due to the XPath spec's being rife with phrases such as "...as if converted to a string using the string( ) function." As a practical matter, the string( ) function's use is implicit in many situations. From a certain standpoint, you could almost say that all an XML document's text content is understood by an XPath-aware application "as if converted to a string using the string( ) function."
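The NaN test is most useful as a predicate in a location step. A sketch:

year[string(number(.)) != "NaN"]

This keeps only those year child elements whose string-values can be understood as numbers, silently filtering out values such as "unknown" or "c. 1940".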
Boolean Values
As elsewhere, in XPath a Boolean value is one that equals either true or false. You can convert a Boolean value to the string or numeric data types, using XPath functions. The string form of the values true and false are (unsurprisingly) the strings "true" and "false"; their numeric counterparts are 1 and 0, respectively.
Probably the single most useful application of XPath Booleans is in the predicate portion of a location step. As I mentioned earlier, the predicate tests some candidate node to see if it fulfills some condition expressed as a Boolean true or false. Thus:
concerto[@key="F"]
locates a concerto element only if its key attribute has a value of "F".
Importantly, as you will see in Chapter 3, the predicate's true or false value can also test for the simple existence of a particular node. If the node exists, the Boolean value of the predicate is true; if not, false. So:
concerto[@key]
locates a concerto element only if it has any key attribute at all.
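Predicates can also chain several Boolean tests together with the operators and and or. A sketch (the no attribute is invented here for illustration):

concerto[@key="F" and @no > 1]

This locates a concerto element only if both conditions hold: it has a key attribute whose value is "F" and a no attribute whose value, understood as a number, exceeds 1.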
Nodes and Node-Sets
The fourth and most important data type handled by XPath is the node-set data type.
Let's look first at nodes themselves. A node is any discrete logical something able to be located by an XPath location step. Every element in a document constitutes a node, as does every attribute, PI, and so on.
Node Properties
Each node in a document has various properties. I've discussed one of these properties briefly already — the string-value — and will provide more information about it at the end of this chapter. The others are its name, its sequence within the document, and its "family relationships" with other nodes.
Node names
Most (but not all) nodes have names. To understand node names, you need to understand three terms:
- qualified name
- This term, almost always contracted to "QName," is taken straight from the W3C "Namespaces in XML" spec, at http://www.w3.org/TR/REC-xml-names/. The QName of a node, in general, is the identifier for the node as it actually appears in an instance document, including any namespace prefix. For example, an element whose start tag is <concerto> has a QName of "concerto"; if the start tag were <mml:concerto>, the QName would be "mml:concerto."
- local-name
- The local-name of a node is its QName, sans any namespace prefix. If an element's QName is "mml:concerto," its local-name is "concerto." If there's no namespace in effect for a given node, its QName and local-name are identical.
- expanded-name
- If the node is associated with a particular namespace, its expanded-name is a pair, consisting of the URI associated with that namespace and the local-name. Because the expanded-name doesn't consider the namespace prefix at all, two elements, for example, can have the same expanded-name even if their QNames are different, as long as both their associated namespace URIs (possibly null) and their local-names are identical. For more information, see Expanded but Elusive later in this chapter.
These three kinds of name conform to common sense in most cases, for most nodes, but can be surprising in others. When covering node types, below, I'll tell you how to determine the name of a node of a given type.
Document order
Nodes in a document are positioned within the document before or after other nodes. Take a look at this sample document:
<?xml-stylesheet <history> <credits> <payment date="2001-09-09" curr="EU">13.99</payment> <adjustment date="2001-09-30" curr="USD">12.64</adjustment> </credits> <debits> <fin_chg date="2001-09-09" curr="USD">1.98</fin_chg> </debits> </history> <current> <!-- No current charges for this customer? --> </current> </statement>
If you were an XML parser reading this document from start to finish, you'd be following normal document order. The xml-stylesheet PI comes before any of the elements in the document, the history element precedes the current element, the fin_chg element precedes the comment contained by the current element, and so on. Also note that XPath considers the attributes to a given element to come before that element's children and other descendants.
Warning
This all is pretty much common sense. Be careful when dealing with attributes, though: XPath considers an element's attributes to be in no particular document order at all. In the above document, whether the various date attributes are "before" the corresponding curr attributes is entirely XPath application dependent. As a practical matter, most XPath applications will probably be indexing attributes alphabetically, by their names — so each curr will precede its date counterpart. But you cannot absolutely count on this behavior.
As you'll see in Chapter 3, under the discussion of axes, it's also possible to access nodes in reverse document order.
Family relationships
XML's strict enforcement of document structure, even under simple well-formedness constraints, ensures that nodes don't just have a simple document order — even the "nodes" in a comma-separated values (or other plain text) file do that much — but also a set of more complex relationships to one another. Some nodes are parents of others (which are, in turn, children of their parents), and nodes may have siblings, ancestors, and so on.
Because these family relationships are codified in the concept of XPath axes, I'll defer further discussion of them until Chapter 3.
Node-Sets
XPath doesn't for the most part deal in nodes, but in node-sets. A node-set is simply a collection of nodes, related to one another in some arbitrary way by means of an XPath location step (or full location path). In some cases, sure, a node-set might consist of a single node. But in most cases — especially when the location step is unqualified by a predicate — this is almost an accident, an artifact of the XML instance being navigated via XPath.
Here's a simple XML document:
<publications> <book>...</book> <book>...</book> <book>...</book> <magazine>...</magazine> </publications>
This location path:
/publications/book
returns a node-set consisting of three book elements. This location path:
/publication/magazine
returns a single magazine node. Technically, though, there's nothing inherent in this location path that forces only a single node to be located. This document just happens to have a single magazine element, and as a result, the location path locates a node-set that just happens in this case to consist of a single node.
This concept of node-sets returned by XPath radically departs from the more familiar counterparts in HTML hyperlinking. Under HTML, a hyperlink "gets" an entire document. This is true even if the given HTML link uses a fragment identifier, such as #top or #section1. (The whole document is still retrieved; it's simply positioned within the browser window in a particular way.) Using XPath, though, what you're manipulating is in most cases truly not the entire target document, but one or more discrete portions of it. In this sense, XPath isn't a "pointing-to" tool; it's an extraction tool.[5]
Also worth noting at this point is that the term node-set carries some implicit baggage of meaning: like a set in mathematical terms, a node-set contains no duplicate nodes (although some may have duplicate string-values) and is intrinsically unordered. When you use XPath to locate as a node-set all elements in a document, there's no guarantee that you'll get the members of the node-set back in any particular sequence.
Node Types
The kinds of node(-set)s retrievable by XPath cover, in effect, any kind of content imaginable: not just elements and attributes, but PIs, comments, and anything else you might find in an XML document. Let's take a look at these seven node types.
Note
Conspicuously missing from the following list of "any kind of content imaginable" are entity references. There's no such thing as an "entity reference node," for example. Why not? Because by the time a document's contents are scanned by an XPath-aware application, they've already been processed by a lower-level application — the XML parser itself. All entity substitutions have already been made. By the same token, XPath can't locate a document's XML or DTDs, can't return to your application any of the contents of an internal DTD subset, and can't access (for example) tags instead of elements. XPath, in short, can't "read" a document lexically; it can only "read" it logically.
(See the short section, Section 2.4.3.8 later in this chapter, for a comparison of XPath node types with what the Infoset refers to as "information items.")
The root node
Every XML document has one and only one root node. This is the logical something that contains the entire document — not only the root element and all its contents, but also any whitespace, comments, or PIs that precede and follow the root element's start and end tags. This "something" is analogous to a physical file, but there may be no precise physical file to which the root node refers (especially in the case of XML documents generated or assembled on the fly and not saved in some persistent form).
In a location path, the root node is represented by a leading / (forward slash) character. Any location path that starts with / is an absolute location path, instructing the XPath-aware application, in effect, to start at the very top of the document before considering the location steps (if any) that follow. The root node does not have an expanded-name. Its local-name is an empty string.
Element nodes
Each element node in the document is composed of the element's start and end tags and everything in between. (Thus, when you retrieve a document's root element, you're retrieving everything in the document except any comments and PIs that precede or follow it.) Consider a simple code fragment:
<year> <month monthnum="4">April</month> <month monthnum="8">August</month> <month monthnum="12">December</month> <month monthnum="2">February</month> <month monthnum="1">January</month> <month monthnum="3">March</month> <month monthnum="5">May</month> <month monthnum="6">June</month> <month monthnum="7">July</month> <month monthnum="11">November</month> <month monthnum="10">October</month> <month monthnum="9">September</month> </year>
This location path (which says, "Locate all month children of the root year element whose monthnum attributes have the value 3"):
/year/month[@monthnum="3"]
selects the sixth month element in the fragment — that is, the element whose contents (the string "March") are bounded by the <month monthnum="3"> start tag and the corresponding </month> end tag. To emphasize, and to repeat a point made early in this chapter: while the physical representation of the element is bounded by its start and end tags, XPath doesn't have any understanding at all of tags or any other markup. It just gets a particular invisible box corresponding to this physical representation and holding its contents. Importantly, though, it selects the element as a single node with various properties and subordinate objects (a name, a string-value, an attribute with its value).
Tip
Note especially that this example does not locate the third month element. It selects all month elements with the indicated monthnum attribute value.
You sometimes must take care, when selecting element nodes, not to be confused by the presence of "invisible whitespace" in their string-values.
Yes, true: all whitespace is invisible. (That's why it's called whitespace, right?) But the physical appearance of XML documents can trick you into thinking that some whitespace "doesn't count," even though that's not necessarily true. For instance, consider Figure 2-2, depicting a greatly simplified version of a document in the same vocabulary we've been using in this section.
In this figure, as you can see, there's no whitespace in any of the document's content, only within element start tags (and not always there). While many real-world XML documents (especially those that are machine generated) appear this way, it's just as likely that the documents you'll be excavating with XPath will look like Figure 2-3.
The human eye tends to ignore the whitespace-only blocks of text (represented with gray blocks in the figure) in a document like this one, discarding them as insignificant to the document's meaning. But XML parsers, bound by the XML spec's "all text counts" constraint, are not free to ignore these scraps of whitespace. (Some parsers may flag the whitespace as potentially "insignificant," leaving to some higher-order application the task of ignoring it or not.) So consider now the effect of an XPath expression such as the following, when applied to the document in Figure 2-3:
/year
This location path doesn't return just the year element node, the month element node and its attribute. It also returns:
- Some blank spaces, a newline, and some more blank spaces preceding the month element
- A newline following the month element
Whether this will present you with a problem depends on your specific need. If it is a problem, there's an XPath function, normalize-space( ) (covered in Chapter 4), that trims all leading and trailing whitespace from a given element's content.
Tip
In XPath, as in many other XML-related areas, dealing with whitespace can induce either euphoria or migraines. In addition to the normalize-space( ) XPath function covered in Chapter 4, you should consider the (default or explicit) behavior of XML's own built-in xml:space attribute, and — depending on your application's needs — the effects of the XSLT xsl:strip-space and xsl:preserve-space elements, as well as the preserveWhiteSpace property of the MSXML Document object (if you're working in a Microsoft scripting environment).
The local-name of an element node is the name of the element type (that is, its generic identifier (GI), as it appears in the element's start and optional end tags). Thus, its expanded-name equals either its local-name (if there isn't a namespace in effect for that element) or its associated namespace URI paired with the local-name. Consider this code fragment:
<xsl:stylesheet <xsl:template <html> ... </html> </xsl:template> </xsl:stylesheet>
All elements in this document whose names carry the xsl: prefix are in the namespace associated with the URI "." Thus, the expanded-name of the xsl:stylesheet element consists of that URI paired with the local-name, "stylesheet."
Attribute nodes
Attributes, in a certain sense, "belong to" the elements in which they appear, and in the same sense, they might be thought to adopt the namespace-related properties of those elements. For instance:
<xsl:template
Logically, you might conclude that the match attribute is "in" the same namespace as the xsl:template element — it does, after all, belong to the same XML vocabulary — and that it, therefore, has something like an implied namespace prefix.
This isn't the case, though. An attribute's QName, local-name, namespace URI, and hence, expanded-name are all determined solely on the basis of the way the attribute is represented in the XML source document. Attributes such as match in the above example — with no explicit prefix — have a null namespace URI. That is, unprefixed attributes are in no namespace, including the default one; thus, their QName and local-name are identical.
Note that namespace declarations, which look like attributes (and indeed are attributes, according to the XML 1.0 and "Namespaces in XML" Recommendations), are not considered the same as other attributes when you're using XPath. As one example, the start tag of a typical xsl:stylesheet root element in a typical XSLT stylesheet might look something like this:
<xsl:stylesheet
A location path intended to locate all this element's attributes might be:
/xsl:stylesheet/@*
(As a reminder, the wildcard asterisk here retrieves all attribute nodes, regardless of their names.) In fact, this location path locates only the version attribute. The xmlns:xsl and xmlns attributes, being namespace declarations instead of "normal" attributes, are not locatable as attribute nodes.
Warning
If the document referred to by the XPath expression is validated against a DTD, it may contain more attributes than are present explicitly — visibly — in the document itself. That's because the DTD may declare some attributes with default values in the event the document author has not supplied those attributes explicitly. Always remember that the "document" to which you're referring with XPath is the source document as parsed, which may be only more or less the source document that you "see" when reading it by eye.
PI nodes
Processing instructions, by definition, stand outside the vocabulary of an XML document in which they appear. Nonetheless, they do have names in an XPath sense: the name of a PI is its target (the identifier following the opening <? delimiter). For this PI:
<?xml-stylesheet type="text/css" href="mystyle.css"?>
the QName and local-name are both xml-stylesheet. However, because a PI isn't subject to namespace declarations anywhere in a document — PIs, like unprefixed attributes, are always in no namespace — its namespace URI is null.
The other thing to bear in mind when dealing with PI nodes is that their pseudo-attributes look like, but are not, real attributes (hence, the "pseudo-" prefix). From an XPath processor's point of view, everything between the PI target and the closing ?> delimiter is a single string of characters. In the case of the PI above, there's no node type capable of locating the type pseudoattribute separate from the href pseudoattribute, for example. (You can, however, use some of the string-manipulation functions covered in Chapter 4 to tease out the discrete pseudoattributes.)
Comment nodes
Each comment in an XML source document may be located independently of the surrounding content. A comment has no expanded-name at all, and thus has neither a QName, a local-name, nor a namespace URI.
Text nodes
Any contiguous block of text — an element's #PCDATA content — constitutes a text node. By "contiguous" here I mean that the text is unbroken by any element, PI, or comment nodes. Consider a fragment of XHTML:
<p>A line of text.<br/>Another line.</p>
The p element here contains not just one but two text nodes, "A line of text." and "Another line." The intervening br element breaks them up into two. The presence or absence of whitespace in the #PCDATA content is immaterial. So in the following case:
<p>A line of text. Another line.</p>
there's still a single text node, which, like a comment, has no expanded-name at all.
Namespace nodes
Namespace nodes are the chimeras and Loch Ness monsters of XPath. They have characteristics of several other node types but at the same time are not "real," but rather fanciful creatures whose comings and goings are marked with footprints here and there rather than actual sightings.
The XPath spec says every element in a given document has a namespace node corresponding to each namespace declaration in scope for that element:
- One for every explicit declaration of a namespace prefix in the start tag of the element itself
- One for every explicit declaration of a namespace prefix in the start tag of any containing element
- One for the explicit xmlns= declaration, if any, of a namespace for unprefixed element/attribute nodes, whether this declaration appears in the element's own start tag or in that of a containing element
Here's a simple fragment of an XSLT stylesheet:
<xsl:stylesheet <xsl:template <html xmlns: ... </html> </xsl:template> </xsl:stylesheet>
Three explicit namespace declarations are made in this fragment:
- The xsl: namespace prefix is associated (in the xsl:stylesheet element's start tag) with the XSLT namespace URI, "."
- The default namespace — that is, for any unprefixed element names appearing within the xsl:stylesheet element — is associated with the XHTML namespace, "."
- The xlink: namespace prefix is associated (in the html element's start tag) with the namespace URI set aside by the W3C for XLink elements and attributes, "."
Tip
There's also one other namespace implicitly in effect for all elements in this, and indeed any, XML document. That is the XML namespace itself, associated with the reserved xml: prefix. The corresponding namespace URI for this implied namespace declaration is "."
The namespace declarations for the xsl: and default namespace prefixes both appear in the root xsl:stylesheet element's start tag; therefore, they create implicit namespace nodes on every element in the document — including those for which those declarations might not seem to make much sense. The html element, for instance, will probably not be able to make much use of the namespace node associated with the xsl: prefix. Nonetheless, in formal XPath terms that namespace node is indeed in force for the html element.
The namespace declaration for the xlink: prefix, on the other hand, is made by a lower-level element (html, here). Thus, there is no namespace node corresponding to that prefix for the higher-level xsl:template and xsl:stylesheet elements.
Each namespace node also has a local-name: the associated namespace prefix. So the local-name of the namespace node representing the XSLT namespace in the above document is "xsl." When associated with the default namespace, the namespace node's local-name is empty. The namespace URI of a namespace node, somewhat bizarrely, is always null.
Note
In XPath, as in most — maybe all — XML-related subjects, namespaces sometimes seem like more trouble than they're worth. The basic purpose of namespaces is simple: to disambiguate the names of elements, and perhaps attributes, from more than one XML vocabulary when they appear in a single document instance. Yet the practice of using namespaces leads one down many hall-of-mirrors paths, with concepts and syntax nested inside other concepts and syntaxes and folding back on themselves.
As a practical matter, you will typically have almost no use for identifying or manipulating namespace nodes at all; your documents will consist entirely of elements and attributes from a single namespace.
XPath node types and the XML Infoset
The XML Information Set (commonly referred to simply as "the Infoset") is a W3C Recommendation published in October 2001 (). Its purpose, as stated in the spec's Abstract, is to provide "a set of definitions for use in other specifications that need to refer to the information in an XML document."
The definitions the Infoset provides are principally in terms of 11 information items: document, element, attribute, processing-instruction, unexpanded entity reference, character, comment, document type declaration, unparsed entity, notation, and namespace. As you can see, there's a certain amount of overlap between this list and the node types available under XPath — and also a certain number of loose ends not provided at all by one or the other of the two Recommendations.
XPath 2.0 will resolve the conflicts in definitions of Infoset information items and XPath node types; at the same time, XPath will continue to need things the Infoset does not cover. For instance, XPath does not generally need to refer to atomic-level individual character information items. Instead, it needs to refer to the more "molecular" text nodes. For these "needed by XPath but not defined under the Infoset" information items, XPath 2.0 will continue to provide its own definitions.
For more information about XPath 2.0 and the Infoset, refer to Chapter 5.
Node-Set Context
It's hard to imagine a node in an XML document that exists in isolation, devoid of any context. First, as I've already mentioned, nodes have relationships with other nodes in the document — both document-order and "family" relationships. Maybe more importantly, but also more subtly, nodes in the node-set returned by a location path also have properties relative to the other nodes in that node-set — even when document order is irrelevant and family relationships, nonexistent. These are the properties of context size, context position, and namespace bindings.
Consider the following XML document:
<ChangeInMyPocket> <Quarters quantity="1"/> <Dimes quantity="1"/> <Nickels quantity="1"/> <Pennies quantity="3"/> <!-- No vending-machine purchase in my immediate future --> </ChangeInMyPocket>
It's possible, in a single location path, to locate only the four quantity attributes (or any three, two, or one of them) and the comment; or just the root node and the comment; or just the Quarters element, the Pennies element, and the quantity attribute of the Nickels element. The nodes in the resulting node-set need not share any significant formal relationship in the context of the document itself. But in all cases, these nodes suddenly acquire relationships to others in a given node-set, simply by virtue of their membership in that node-set.
The context size is simply the number of nodes in the node-set, irrespective of the type of nodes. A location path that returned a node-set of all element nodes in the above document would have a context size of 5 (one for each of the ChangeInMyPocket, Quarters, Nickels, Dimes, and Pennies elements). A location path returning all the quantity attributes and the comment would also have a context size of 5.
The context position is different for every member of the node-set: it's the integer representing the ordinal position that a given node holds in the node-set, relative to all other nodes in it, in document order. If a node-set consists of all child elements of the ChangeInMyPocket element, the Quarters element will have a context position of 1, the Dimes element, 2, and so on. In a different node-set, the Quarters element might be node 2 and Dimes, 1, and so on.
Warning
I've alluded to this before but just as a reminder: when determining context position, particularly of elements, be aware of "invisible whitespace" separating one element's end tag from the succeeding one's start tag. In the above document, a location path that retrieves all children of the ChangeInMyPocket element, not just the child elements, will also locate all the newline and space characters used for "pretty-printing"; each block of these characters constitutes a text-node child of ChangeInMyPocket. Thus, the Quarters element will have a context position of 2, the Dimes, 4, and so on.
Chapter 3 and Chapter 4 go into more detail about dealing with context position and context size. Note especially, in Chapter 3, the material about reverse document order in certain XPath axes, because this inverts the normal sequence of context positions.
The term namespace bindings refers to any namespace declarations in effect at the time an XPath expression is evaluated. In the previous document, which has no explicit namespace declarations, the only namespace binding in any expression's evaluation context will be the "built-in" namespace for elements and attributes whose names are prefixed xml:. Note that any namespace binding is not tied to a particular prefix, however; what's important is the URI to which the prefix is bound. Consider the following fragment:
<myvocab:root xmlns: <yourvocab:subelem> [etc.] </yourvocab> </myvocab>
A superficial consideration of the namespace bindings in effect for the above yourvocab:subelem document might suggest that there are two, one for the myvocab: prefix and one for yourvocab:. Not true. There's only one namespace URI in play at that point (although it's aliased, after a fashion, by the two prefixes), and hence, there's only one namespace binding in that element node's context.
String-Values
By definition, a well-formed XML document is a text document, incapable of containing such "binary" content as multimedia files and images. Thus, it stands to reason that in navigating XML documents via XPath the strings of text that make up the bulk of the document (aside from the element names themselves) would be of supreme importance. This notion is codified in the concept of string-values. And the importance of string-values lies in the fact that most of the time, when you locate a node in a document via XPath, what you're after is not the node per se but rather its string-value.
Each node returned by a location path has its own string-value. The string-value of a node depends on the node type, as summarized in Table 2-1. Note that the word "normalized" used to describe the string-value for the attribute node type is the same as elsewhere in the markup world: it means stripped of extraneous whitespace, by trimming leading and trailing whitespace and collapsing multiple consecutive occurrences of whitespace into a single space. For example, given an attribute such as region=" NW SE" (note leading blank spaces and multiple spaces between the "NW" and "SE"), its normalized value would be "NW SE". Also note, though, that this normalization depends on the attribute's type, as declared in a DTD or schema. If the attribute type is CDATA, those interior blank spaces would be assumed to be significant and not normalized. Therefore, if the region attribute is (say) of type NMTOKENS, the interior whitespace is collapsed; if it's CDATA, the whitespace remains.
Table 2-1. String-values, by node type
Warning
If you're using DOM, note that Table 2-1 establishes a loose correspondence between XPath string-values and the values returned by the DOM nodeValue method. The exceptions — and they're important ones — are that nodeValue, when applied to the document root and element nodes, returns not a concatenated string but a null value. The only way to get at these node types' text content through the DOM is to apply nodeValue to their descendant text nodes.
Consider an XML document such as the following:
<?xml-stylesheet <source> <author>Firesign Theatre</author> <work year="1970">Don't Crush that Dwarf, Hand Me The Pliers</work> </source> <text>And there's hamburger all over the highway in Mystic, Connecticut.</text> <!-- Following link last verified 2001-09-15 --> <allusion xlink: </quotation>
All seven XPath node types are present in this document. String-values for some of them are as follows:
- root node
- Concatenated values of all text nodes in the document, that is:
Firesign Theatre Don't Crush that Dwarf, Hand Me The Pliers And there's hamburger all over the Highway in Mystic, Connecticut.
(Note how the whitespace-only text nodes, included for legibility in the original document, are carried over into the string-value.)
- source element node
- Concatenated value of all text nodes within the element's scope (including whitespace-only text nodes):
Firesign Theatre Don't Crush that Dwarf, Hand Me The Pliers
- 1970
- xml-stylesheet PI
- type="text/xsl" href="4or5guys.xsl"
- comment
- Following link last verified 2001-09-15
- first text node (not counting whitespace-only text nodes)
- Firesign Theatre
- namespace node on all elements
- The namespace for the xlink: prefix, declared in the root quotation element, does not apply to any elements, because none of their names use that prefix. All of these elements have empty strings because they are not in a namespace.
String-Value of a Node-Set
Not only does each node in a node-set have a string-value, as described above; the node-set as a whole has one.
If you followed the logic behind each of the previous examples, especially the concatenation of text nodes that makes up the string-value of a root or element node, you might think the string-value of a node-set containing (say) two or more element nodes is the concatenation of all their string-values. Not so. The string-value of a multinode node-set is the string-value of the first node in that node-set.
(Actually, the apparent inconsistency goes away if you just remember that last sentence, eliminating the word "multinode." Thus, the value of any single node is just a special case of the general rule; the node-set in this case just happens to be composed of a single node — which is, of course, the first in the node-set.)
In the previous example, the source element has two child element nodes, author and work. This location path:
/quotation/source/*
thus returns a node-set of two nodes. The node-set's string-value is the string-value of the first node in the node-set, that is, the author element: "Firesign Theatre."
Notes
- ↑ Not that you'll see any further references to something by that name, in the spec or anywhere else.
- ↑ Depending on the context, such an unquoted token may also be interpreted as a function (covered in Chapter 4), a node test (see Chapter 3), or of course a literal number instead of a string.
- ↑ Be careful on this issue of escaping the < and > characters. XPath is used in numerous contexts (such as JavaScript and other scripting languages) besides "true XML"; in these contexts, use of a literal, unescaped < or > character may actually be mandated.
- ↑ Note the importance here of quoting the string "NaN." If this code fragment had omitted the quotation marks, the XPath processor would not be testing for the special NaN value but for the string-value of an element whose name just happens to be NaN.
- ↑ XHTML, the "reformulation as XML" of the older HTML standard, is kind of a special case. Because an XHTML document is an XML document, it may use XPath-based XPointers in the value of an href attribute. But you can't assume that a browser, for now, will conform to the expected behavior of a true XPointer-aware application. Browser vendors don't exactly leap out of the starting gate to adopt new standards. | http://commons.oreilly.com/wiki/index.php/XPath_and_XPointer/XPath_Basics | crawl-002 | refinedweb | 8,888 | 50.36 |
I saw this in a screencast and was just wondering what the '=' symbol does in this case.
def express_token=(token)
...
end
def express_token(token = nil)
That snippet defines a Virtual Attribute (or a "setter" method) so that "express_token" looks like an attribute, even though it's just the name of the method. For example:
class Foo def foo=(x) puts "OK: x=#{x}" end end f = Foo.new f.foo = 123 # => 123 # OK: x=123
Note that the object "f" has no attribute or instance variable named "foo" (nor does it need one), so the "foo=" method is just syntactic sugar for allowing a method call that looks like an assignment. Note also that such setter methods always return their argument, regardless of any
return statement or final value.
If you're defining a top-level setter method, for example, in "irb", then the behavior may be a little confusing because of the implicit addition of methods to the Object class. For example:
def bar=(y) puts "OK: y=#{y}" end bar = 123 # => 123, sets the variable "bar". bar # => 123 Object.new.bar = 123 # => 123, calls our method # OK: y=123 Object.public_methods.grep /bar/ # => ["bar="] | https://codedump.io/share/XbbMw7HOL6zb/1/what-does-the-equal-3939-symbol-do-when-put-after-the-method-name-in-a-method-definition | CC-MAIN-2017-04 | refinedweb | 196 | 70.23 |
zimbatm (zimba tm)
- Login: zimbatm
- Registered on: 07/13/2010
- Last connection: 05/10/2015
Issues
Activity
05/10/2015
06:01 PM Ruby master Misc #11131 (Closed): Unexpected splatting of empty kwargs
- ~~~ruby
def foo(); :ok end
foo(*[]) #=> :ok
foo(**{}) #=> ArgumentError: wrong number of arguments (1 for 0)
foo(...
03/04/2015
10:34 AM Ruby master Feature #10927: [PATCH] Add default empty string to string replacements
- Alright but a change doesn't necessarily need to be substantial to improve developer happiness. It seems like my litt...
03/02/2015
04:26 PM Ruby master Feature #10927 (Open): [PATCH] Add default empty string to string replacements
- Hi ruby devs !
A common case for string substitution is to just remove the found items. This patch changes the `St...
04/12/2011
09:16 PM Ruby master Feature #4553: Add Set#pick and Set#pop
- =begin
#pop is often associated to stack operations, which implies an order. Unless a better name is found, isn't set...
09:10 PM Ruby master Feature #4541: Inconsistent Array.slice()
- I don't see the advantage of having nil returned in any case since the empty array already expresses the "there is no...
07:14 PM Ruby master Feature #4569: Replace IPAddr with IPAddress
- =begin
Hi Marco, awesome lib. I read trough it and here are the thoughts I had:
* IPAddr#[] and IPAddr#each don't ho...
03/31/2011
11:15 PM Ruby master Bug #4545 (Closed): [PATCH] syslog extension documentation improvements and fixes
- =begin
Small documentation fixes for the syslog extension.
The patch is a GIT patch on top of 7487298584145058f234...
01/20/2011
08:45 PM Ruby 1.8 Feature #4239: Let's begin a talk for "1.8.8" -- How's needed for surviving 1.8?
- =begin
2011/1/20 Zeno Davatz <[email protected]>:
> When I boot into a new kernel then I do _not_ get a Kernel-...
01:57 AM Ruby master Bug #4294: IO.popen ['"ping"', 'localhost -n 3'] fails
- =begin
How is that a bug ? If ping is surrounded by quotes, ruby will look for an executable named "ping" with the q...
01/19/2011
02:34 AM Ruby master Bug #4291: rb_time_new with negative values (pre-epoch dates) on Windows
- =begin
Shouldn't we use 64bit on all platforms instead ?
=end | https://bugs.ruby-lang.org/users/1541 | CC-MAIN-2020-29 | refinedweb | 386 | 65.12 |
On Sun, Feb 19, 2012 at 9:57 AM, Nathan Rice <nathan.alexander.rice at gmail.com> wrote: > I enjoy writing python a lot, and would prefer to use it rather than > ruby/lisp/java/etc in most cases. My suggestions come from > frustrations that occur when using python in areas where the right > answer is probably just to use a different language. If I knew that > what I wanted was at odds with the vision for python, I would have > less of an issue just accepting circumstances, and would just get to > work rather than sidetracking discussions on this list. The core problem comes down to the differences between Guido's original PEP 340 idea (which was much closer in power to Ruby's blocks, since it was a new looping construct that allowed 0-or-more executions of the contained block) and the more constrained with statement that is defined in PEP 343 (which will either execute the body once or throw an exception, distinguishing it clearly from both the looping constructs and if statements). The principle Guido articulated when making that decision was: "different forms of flow control should look different at the point of invocation". So, where a language like Ruby just defines one protocol (callbacks, supplemented by anonymous blocks that run directly in the namespace of the containing function) and uses it for pretty much *all* flow control (including all their loop constructs), Python works the other way around, defining *different* protocols for different patterns of invocation. This provides a gain in readability on the Python side. When you see any of the following in Python: @whatever() def f(): pass with whatever(): # Do something! for x in whatever(): # Do something! It places a lot of constraints on the nature of the object returned by "whatever()" - even without knowing anything else about it, you know the first must return a decorator, the second a context manager, and the third an iterable. If that's all you need to know at this point in time, you don't need to worry about the details - the local syntax tells you the important things you need to know about the flow control. In Ruby, though, all of them (assuming it isn't actually important that the function name be bound locally) could be written like this: whatever() do: # Do something! end Is it a context manager? An iterable? Some other kind of callback? There's nothing in the syntax to tell you that - you're relying on naming conventions to provide that information (like the ".foreach" convention for iteration methods). That approach can obviously work (otherwise Ruby wouldn't be as popular as it is), but it *does* make it harder to pick up a piece of code and understand the possible control flows without looking elsewhere. However, this decision to be explicit about flow control for the benefit of the *reader* brings with it a high *cost* on the Python side for the code *author*: where Ruby works by defining a nice syntax and semantics for callback based programming and building other language constructs on top of that, Python *doesn't currently have* a particularly nice general purpose native syntax for callback based programming. Decorators do work in many cases (especially simple callback registration), but they sometimes feel wrong because they're mainly designed to modify how a function is defined, not implement key program flow control constructs. 
However, their flexibility shouldn't be underestimated, and the CallbackStack API is designed to help Python developers push decorators and context managers closer to those limits *without* needing new language constructs. By decoupling the callback stack from the code layout, it gives you full *programmatic* control of the kinds of things context managers can help with when you know in advance exactly what you want to do. *If* CallbackStack proves genuinely popular (and given the number of proposals I have seen along these lines, and the feedback I have received on ContextStack to date, I expect it will), and people start to develop interesting patterns for using it, *then* we can start looking at the possibility of dedicated syntax to streamline particular use cases (just as the with statement itself was designed to streamline various use cases of the more general try statement). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia | https://mail.python.org/pipermail/python-ideas/2012-February/014092.html | CC-MAIN-2014-15 | refinedweb | 723 | 52.53 |
Today I happily present a brilliant piece of Revit API news on the documentation side of things, and another handy utility method for your Revit API programming toolbox:
- Online Revit API documentation
- 2D convex hull algorithm in C# using
XYZpoints
Online Revit API Documentation
The contents of the Revit API help file RevitAPI.chm are finally available online.
And not only that, but the web site includes all three versions for Revit 2015, Revit 2016 and Revit 2017.
As you know, the two main pieces of Revit API documentation are the Revit API help file RevitAPI.chm, included with the Revit SDK, available from the Revit Developer Centre, and the developer guide, provided in the 'Developers' section of the online Revit Help.
The help file was not available online, though, which also means that it was not included in standard Internet searches using Google or DuckDuckGo.
With the notable exception of Revit 2014, revitapisearch.com, implemented by Peter Boyer of the Dynamo team.
Just a few weeks ago, Arif Hanif expressed interest to do the same for Revit 2017 in a couple of comments on What's New in the Revit 2017 API.
Well, someone beat him to it.
In his Revit API discussion forum thread on Revit API Documentation Online , @gtalarico says:
Since Google doesn't seem too excited to index my 60k+ page website, I wanted to share it here so people can find it and hopefully use it! : )
It currently includes the full official documentation (from the CHM file) for APIS 2015, 2016, and 2017:
It was a lot of planning and work, but came together faster and better than I expected.
Please let him know if you have any feedback on it.
Ever so many thanks to gtalarico for all his work and making this useful resource available to the global Revit API community!
Contributing and Implementation Details
Gtalarico added some additional info on the project:
The code is on github at github.com/gtalarico/revitapidocs.
The project is definately open to collaborators. Welcome!
It needs +code docs, +test coverage, and can probably be improved and optimized significantly by more seasoned web developers.
Regarding github pages, it could probably be done, but I haven't used it myself, so I don't know the limitations.
Here are some of the challenges and constraints:
- Namespace Menu:
- Each API/year has an index with around 20K nested entries, sometimes many levels deep. Performance can get tricky, and so is creating a good and responsive UI for browsing it, which is why I wanted it to be collapsible. If I recall correctly, Readthedocs for instance, limits the depth of the menu.
- Content:
- The content I had access to (.html files extracted from chm) were not pretty, so I had to do some unusual CSS overrides and eventually batch processed the 60k+ html files to remove unnecessary JS and html code to make the pages look good and perform well. I was also was concerned about appearance to google crawler (cleaned code, and added schema.org structured data on every page).
- Performance:
- The namespace html file alone was almost 3MB and 140K lines of html code, which is not good. To optimize it, I am serving the menu asynchronously as json, so it loads while the rest of the content is being built, and can be cached.
- Built-in search:
- I originally tried using google custom search, but google can take a long time to index it (if it happens at all - 60k+ pages) Even with a full sitemap, it will probably just take time, but I didn't want to wait. So I replaced the Google Custom Search box with my own custom search. I tried a JS client side search, similar to what git pages has, but it was crashing the browser (remember namespace is +100K lines), so I ended up pushing it server side which makes it reasonably fast, e.g.,.
2D Convex Hull Algorithm in C# using
XYZ
Yesterday, I mentioned the convex hull calculation as one option for determining the bounding box of selected elements or entire model.
Maxence replied in a comment on that post and provided a convex hull implementation in C#.
It is a 2D algorithm implementing the Jarvis march or Gift wrapping algorithm:
It makes use of an extension method
MinBy on the generic
IEnumerable class,
from MoreLINQ by
Jonathan Skeet:
public static class IEnumerableExtensions { public static tsource MinBy<tsource, tkey>( this IEnumerable<tsource> source, Func<tsource, tkey> selector ) { return source.MinBy( selector, Comparer<tkey>.Default ); } public static tsource MinBy<tsource, tkey>( this IEnumerable<tsource> source, Func<tsource, tkey> selector, IComparer<tkey> comparer ) { if( source == null ) throw new ArgumentNullException( nameof( source ) ); if( selector == null ) throw new ArgumentNullException( nameof( selector ) ); if( comparer == null ) throw new ArgumentNullException( nameof( comparer ) ); using( IEnumerator<tsource> sourceIterator = source.GetEnumerator() ) { if( !sourceIterator.MoveNext() ) throw new InvalidOperationException( "Sequence was empty" ); tsource min = sourceIterator.Current; tkey minKey = selector( min ); while( sourceIterator.MoveNext() ) { tsource candidate = sourceIterator.Current; tkey candidateProjected = selector( candidate ); if( comparer.Compare( candidateProjected, minKey ) < 0 ) { min = candidate; minKey = candidateProjected; } } return min; } } }
With that helper method in hand, the convex hull implementation is quite short and sweet:
#region Convex Hull /// <summary> /// Return the convex hull of a list of points /// using the Jarvis march or Gift wrapping: /// /// Written by Maxence. /// </summary> public static List<XYZ> ConvexHull( List<XYZ> points ) { if( points == null ) throw new ArgumentNullException( nameof( points ) ); XYZ startPoint = points.MinBy( p => p.X ); var convexHullPoints = new List<XYZ>(); XYZ walkingPoint = startPoint; XYZ refVector = XYZ.BasisY.Negate(); do { convexHullPoints.Add( walkingPoint ); XYZ wp = walkingPoint; XYZ rv = refVector; walkingPoint = points.MinBy( p => { double angle = ( p - wp ).AngleOnPlaneTo( rv, XYZ.BasisZ ); if( angle < 1e-10 ) angle = 2 * Math.PI; return angle; } ); refVector = wp - walkingPoint; } while( walkingPoint != startPoint ); convexHullPoints.Reverse(); return convexHullPoints; } #endregion // Convex Hull
For testing purposes, I make use of it like this in the
CmdListAllRooms external command:
/// <summary> /// Return bounding box calculated from the room /// boundary segments. The lower left corner turns /// out to be identical with the one returned by /// the standard room bounding box. /// </summary> static List<XYZ> GetConvexHullOfRoomBoundary( IList<IList<BoundarySegment>> boundary ) { List<XYZ> pts = new List<XYZ>(); foreach( IList<BoundarySegment> loop in boundary ) { foreach( BoundarySegment seg in loop ) { Curve c = seg.GetCurve(); pts.AddRange( c.Tessellate() ); } } int n = pts.Count; pts = new List<XYZ>( pts.Distinct<XYZ>( new CmdWallTopFaces .XyzEqualityComparer( 1.0e-4 ) ) ); Debug.Print( "{0} points from tessellated room boundaries, " + "{1} points after cleaning up duplicates", n, pts.Count ); return Util.ConvexHull( pts); }
Initially, I did not include the call to
Distinct, which eliminates duplicate points returned by Revit that are intended to represent the same room corner but have small offsets from each other due to limited precision, or too high precision, whichever way you look at it.
Here are the
diffs with just the pure convex hull implementation
and after adding the call to
Distinct.
I tested it on the same ten rectangular rooms as yesterday and verified that their convex hulls correspond to their bounding boxes returned by Revit.
I also tested on the following squiggly room with some spline-shaped edges:
That returns the following results:
355 points from tessellated room boundaries, 324 points after cleaning up duplicates Room nr. '1' named 'Room 1' at (-77.61,15.1,0) with lower left corner (-106.43,-8.65,0), convex hull (-104.93,-8.65,0), (-40.75,-8.51,0), (-40.41,33.07,0), (-106.43,33.27,0), bounding box ((-106.43,-8.65,0),(-40.41,33.27,13.12)) and area 1483.20391607451 sqf has 1 loop and 31 segments in first loop.
Everything discussed above is published in The Building Coder samples release 2017.0.127.10.
Many thanks to Maxence for providing the nice convex hull algorithm implementation! | https://thebuildingcoder.typepad.com/blog/2016/08/online-revit-api-docs-and-convex-hull.html | CC-MAIN-2020-05 | refinedweb | 1,294 | 53.92 |
![endif]-->
The PS2Keyboard library uses one of the two available external interrupts to react on keyboard input. Once such input is received, it's stored in a one-byte buffer and is available to be read.
Version 2.3 - Minor bugs fixed.
The following schematic shows how to connect a PS/2 connector:
Make sure you connect the Clock PIN to the digital PIN 3 on the Arduino. Otherwise the interrupt, and therefore the whole library, won't work.
Below is a simple example on how to use the library. For more keycode constants have a look at the
PS2Keyboard.h file.
#include <PS2Keyboard.h> #define DATA_PIN 4 PS]"); } } }
Attach:PS2Keyboard002.zip
This library is at a very early stage. If anyone wants to revise/rewrite it, feel free to do so (published under the LGPL).
Attach:PS2Keyboard_014A.zip
FULL KEYBOARD IMPLEMENTATION NOTE: PS2Keyboard_014A is the library to use beginning with Arduino 0013. ** It contains a full complement of letters, numbers and punctuation (excluding shifted characters/symbols).
EDITED AGAIN - Actually mostly rewritten
Version 2.0 adds the following improvements:
Some of the information on this page is now obsolete. Please refer to the version 2.0 page for up-to-date documentation.
Version 2.1 adds the following improvements:
Download Attach:PS2Keyboard_2.3-Ctrl.zip
CTRL SUPPORT
NOTE: Based on v2.3 from Teensy web site with small addition to the main code to recognize Ctrl modifier.
When Ctrl is held, alphabetic characters (A, B, C, ...) are substituted with low ASCII range (codes 1, 2, 3, ...). It is case insensitive. The motivation is embedded projects with mode changes or keyboard commands.
The
examples folder contains
Ctrl_Test sketch with a demonstration. The code would look something like this:
// Ctrl + <alphabet key> produces ASCII codes 1-26 } else if (c < 26) { Serial.print("^"); Serial.print((char)('A' + c - 1));
Download Attach:PS2Keyboard_2.3-Ctrl-Enter.zip
CTRL+ENTER SUPPORT
Allows separate handling of sending commands and inserting new lines, by using Enter as opposed to Ctrl+Enter.
Ctrl+Enter(
Ctrl+J) as
PS2_LINEFEED, code 10
PS2_DOWNARROWto code 12
Example (
Ctrl_Test.ino)
// check for some of the special keys if (c == PS2_ENTER) { Serial.println(); } else if (c == PS2_LINEFEED) { Serial.print("[^Enter]"); | https://playground.arduino.cc/Main/PS2Keyboard | CC-MAIN-2018-39 | refinedweb | 368 | 51.95 |
ZF-8177: Registering helpers with views
Description
From a request on the FW-MVC mailing list, Zend_View should offer a way to register helper objects without relying on the plugin loader. This would allow for dependency injection, objects that don't follow the PEAR naming convention, and PHP 5.3 namespaces.
Posted by Hector Virgen (djvirgen) on 2009-10-29T10:02:51.000+0000
I'm putting together a patch that will allow registering custom helpers like this:
// Adding a custom helper $view = new Zend_View(); $helper = new MyCustomHelper(); $view->addHelper($helper, 'foo'); $view->foo(); // calls MyCustomHelper#foo()
// Overwriting a built-in helper $myUrlHelper = new MyUrlHelper(); $view->addHelper($myUrlHelper, 'url'); $view->url(); // calls MyUrlHelper#url();
Any thoughts on this?
Posted by Avi Block (blocka) on 2009-10-29T10:07:10.000+0000
I'm wondering now if it's a good idea also to include anonymous functions here as well (a la Zend_Validate_Callback). Of course, one could always create a "Callback" helper this way as well, which would accomplish the same purpose, without having to change things around too much.
Posted by Hector Virgen (djvirgen) on 2009-10-29T10:30:14.000+0000
What would that code look like? So far I have this:
/** * Registers a helper * * @param Zend_View_Helper_Abstract|object $helper * @param string $name * @return Zend_View_Abstract */ public function registerHelper($helper, $name) { if (!is_object($helper)) { throw new Zend_View_Exception('View helper must be an object.', $this); } $this->_helper[$name] = $helper; return $this; }
Posted by Hector Virgen (djvirgen) on 2009-10-29T10:35:12.000+0000
Sorry, forgot to add the code markup.. this should look better:
Posted by Avi Block (blocka) on 2009-10-29T10:51:49.000+0000
What you have here seems to me to be sufficient for the main purpose of this feature (of course it must be tested!). As for adding in the anonymous function capabilities, you would probably have to change the __call method of Zend_View_Abstract, because that method is expecting a class and a method.
Posted by Hector Virgen (djvirgen) on 2009-10-29T11:29:10.000+0000
I don't have a CLA yet but I've attached the patch and passing unit test.
Posted by Hector Virgen (djvirgen) on 2009-12-02T10:49:40.000+0000
Please see attached files for fix and passing unit test.
Posted by Matthew Weier O'Phinney (matthew) on 2009-12-02T10:55:39.000+0000
Re-opening. Attaching a patch does not resolve an issue; committing code to the repository does. I'll review the patch for inclusion.
Posted by Hector Virgen (djvirgen) on 2009-12-02T11:01:24.000+0000
Thanks, Matthew. I don't have SVN commit privileges (that I know of). How can I submit a patch for review?
Posted by Matthew Weier O'Phinney (matthew) on 2009-12-02T11:28:59.000+0000
You just did. ;-)
Seriously, though, attaching patches to the issue tracker is the best way to do so. If you notice no action on one, feel free to bug the component lead (either directly, or via the zf-contributors mailing list).
Thanks!
Posted by Matthew Weier O'Phinney (matthew) on 2009-12-04T13:10:07.000+0000
Patches applied (with additional tests and slight changes in functionality) to trunk; will release with 1.10. | http://framework.zend.com/issues/browse/ZF-8177?focusedCommentId=35611&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2014-10 | refinedweb | 542 | 66.23 |
Gitweb:     ;a=commit;h=643f12dbb660e139fbaea268f3e3ce4d7d594b8f
Commit:     643f12dbb660e139fbaea268f3e3ce4d7d594b8f
Parent:     d903ac5455102b13d0e28d6a39f640175fb4cd4d
Author:     Henrique de Moraes Holschuh <[EMAIL PROTECTED]>
AuthorDate: Thu Mar 29 01:58:43 2007 -0300
Committer:  Len Brown <[EMAIL PROTECTED]>
CommitDate: Fri Mar 30 15:35:42 2007 -0400
    ACPI: thinkpad-acpi: cleanup after rename

    Cleanup documentation, driver strings and other misc stuff, now that
    the driver is named "thinkpad-acpi".

    Signed-off-by: Henrique de Moraes Holschuh <[EMAIL PROTECTED]>
    Signed-off-by: Len Brown <[EMAIL PROTECTED]>
---
 Documentation/thinkpad-acpi.txt |   46 ++++++++++++++++++++++----------------
 drivers/misc/thinkpad_acpi.c    |   18 +++++++++------
 drivers/misc/thinkpad_acpi.h    |   21 +++++++++--------
 3 files changed, 49 insertions(+), 36 deletions(-)

diff --git a/Documentation/thinkpad-acpi.txt b/Documentation/thinkpad-acpi.txt
index f409f4b..af18d29 100644
--- a/Documentation/thinkpad-acpi.txt
+++ b/Documentation/thinkpad-acpi.txt
@@ -1,17 +1,22 @@
-                    IBM ThinkPad ACPI Extras Driver
+                      ThinkPad ACPI Extras Driver
 
-                            Version 0.13
-                          31 December 2006
+                            Version 0.14
+                           March 26th, 2007
 
               Borislav Deianov <[EMAIL PROTECTED]>
           Henrique de Moraes Holschuh <[EMAIL PROTECTED]>
 
-This is a Linux ACPI driver for the IBM ThinkPad laptops. It supports
-various features of these laptops which are accessible through the
-ACPI framework but not otherwise fully supported by the generic Linux
-ACPI drivers.
 
 Status
@@ -43,6 +48,8 @@ Please include the following information in your report:
 
 - ThinkPad model name
 - a copy of your DSDT, from /proc/acpi/dsdt
+- a copy of the output of dmidecode, with serial numbers
+  and UUIDs masked off
 - which driver features work and which don't
 - the observed behavior of non-working features
 
@@ -53,8 +60,9 @@ Installation
 ------------
 
 If you are compiling this driver as included in the Linux kernel
-sources, simply enable the CONFIG_ACPI_IBM option (Power Management /
-ACPI / IBM ThinkPad Laptop Extras).
+sources, simply enable the CONFIG_THINKPAD_ACPI option, and optionally
+enable the CONFIG_THINKPAD_ACPI_BAY option if you want the
+thinkpad-specific bay functionality.
 
 Features
 --------
@@ -210,7 +218,7 @@ hot plugging of devices in the Linux ACPI framework. If the laptop
 was booted while not in the dock, the following message is shown in
 the logs:
 
-	Mar 17 01:42:34 aero kernel: ibm_acpi: dock device not present
+	Mar 17 01:42:34 aero kernel: thinkpad_acpi: dock device not present
 
 In this case, no dock-related events are generated but the dock and
 undock commands described below still work. They can be executed
@@ -270,7 +278,7 @@ This is due to the current lack of support for hot plugging of devices
 in the Linux ACPI framework. If the laptop was booted without the
 UltraBay, the following message is shown in the logs:
 
-	Mar 17 01:42:34 aero kernel: ibm_acpi: bay device not present
+	Mar 17 01:42:34 aero kernel: thinkpad_acpi: bay device not present
 
 In this case, no bay-related events are generated but the eject
 command described below still works. It can be executed manually or
@@ -637,12 +645,12 @@ range.
 The fan cannot be stopped or started with this command.
 
 The ThinkPad's ACPI DSDT code will reprogram the fan on its own when
 certain conditions are met. It will override any fan programming done
-through ibm-acpi.
+through thinkpad-acpi.
+The thinkpad @@ -686,8 +694,8 @@ separating them with commas, for example: echo enable,0xffff > /proc/acpi/ibm/hotkey echo lcd_disable,crt_enable > /proc/acpi/ibm/video -Commands can also be specified when loading the ibm_acpi module, for -example: +Commands can also be specified when loading the thinkpad-acpi module, +for example: - modprobe ibm_acpi hotkey=enable,0xffff video=auto_disable + modprobe thinkpad_acpi hotkey=enable,0xffff video=auto_disable diff --git a/drivers/misc/thinkpad_acpi.c b/drivers/misc/thinkpad_acpi.c index 90ffc46..ddaedf8 100644 --- a/drivers/misc/thinkpad_acpi.c +++ b/drivers/misc/thinkpad_acpi.c @@ -1,5 +1,5 @@ /* - * ibm_acpi.c - IBM ThinkPad ACPI Extras + * thinkpad_acpi.c - ThinkPad ACPI Extras * * * Copyright (C) 2004-2005 Borislav Deianov <[EMAIL PROTECTED]> @@ -21,10 +21,12 @@ * 02110-1301, USA. */ -#define IBM_VERSION "0.13" +#define IBM_VERSION "0.14" /* * Changelog: + * 2007-03-27 0.14 renamed to thinkpad_acpi and moved to + * drivers/misc. * * 2006-11-22 0.13 new maintainer * changelog now lives in git commit history, and will @@ -318,7 +320,9 @@ static int __init setup_notify(struct ibm_struct *ibm) } acpi_driver_data(ibm->device) = ibm; - sprintf(acpi_device_class(ibm->device), "%s/%s", IBM_NAME, ibm->name); + sprintf(acpi_device_class(ibm->device), "%s/%s", + IBM_ACPI_EVENT_PREFIX, + ibm->name); status = acpi_install_notify_handler(*ibm->handle, ibm->type, dispatch_notify, ibm); @@ -458,7 +462,7 @@ static char *next_cmd(char **cmds) * ibm-acpi init subdriver */ -static int ibm_acpi_driver_init(void) +static int thinkpad_acpi_driver_init(void) { printk(IBM_INFO "%s v%s\n", IBM_DESC, IBM_VERSION); printk(IBM_INFO "%s\n", IBM_URL); @@ -470,7 +474,7 @@ static int ibm_acpi_driver_init(void) return 0; } -static int ibm_acpi_driver_read(char *p) +static int thinkpad_acpi_driver_read(char *p) { int len = 0; @@ -2440,8 +2444,8 @@ static struct proc_dir_entry *proc_dir = NULL; static struct ibm_struct ibms[] = { { .name = "driver", - .init = ibm_acpi_driver_init, - .read = ibm_acpi_driver_read, + .init = thinkpad_acpi_driver_init, + .read = thinkpad_acpi_driver_read, }, { .name = "hotkey", diff --git a/drivers/misc/thinkpad_acpi.h b/drivers/misc/thinkpad_acpi.h index ee1b93a..015c02b 100644 --- a/drivers/misc/thinkpad_acpi.h +++ b/drivers/misc/thinkpad_acpi.h @@ -1,5 +1,5 @@ /* - * ibm_acpi.h - IBM ThinkPad ACPI Extras + * thinkpad_acpi.h - ThinkPad ACPI Extras * * * Copyright (C) 2004-2005 Borislav Deianov <[EMAIL PROTECTED]> @@ -21,8 +21,8 @@ * 02110-1301, USA. 
*/ -#ifndef __IBM_ACPI_H__ -#define __IBM_ACPI_H__ +#ifndef __THINKPAD_ACPI_H__ +#define __THINKPAD_ACPI_H__ #include <linux/kernel.h> #include <linux/module.h> @@ -47,12 +47,13 @@ * Main driver */ -#define IBM_NAME "ibm" -#define IBM_DESC "IBM ThinkPad ACPI Extras" -#define IBM_FILE "ibm_acpi" +#define IBM_NAME "thinkpad" +#define IBM_DESC "ThinkPad ACPI Extras" +#define IBM_FILE "thinkpad_acpi" #define IBM_URL "" -#define IBM_DIR IBM_NAME +#define IBM_DIR "ibm" +#define IBM_ACPI_EVENT_PREFIX "ibm" #define IBM_LOG IBM_FILE ": " #define IBM_ERR KERN_ERR IBM_LOG @@ -99,8 +100,8 @@ static void ibm_handle_init(char *name, /* procfs support */ static struct proc_dir_entry *proc_dir; -static int ibm_acpi_driver_init(void); -static int ibm_acpi_driver_read(char *p); +static int thinkpad_acpi_driver_init(void); +static int thinkpad_acpi_driver_read(char *p); /* procfs helpers */ static int dispatch_read(char *page, char **start, off_t off, int count, @@ -434,4 +435,4 @@ static int wan_read(char *p); static int wan_write(char *buf); -#endif /* __IBM_ACPI_H */ +#endif /* __THINKPAD_ACPI_H */ - To unsubscribe from this list: send the line "unsubscribe git-commits-head" in the body of a message to [EMAIL PROTECTED] More majordomo info at | https://www.mail-archive.com/[email protected]/msg09733.html | CC-MAIN-2016-44 | refinedweb | 1,002 | 51.68 |
import "go.chromium.org/luci/server/portal"
Package portal implements HTTP routes for portal pages.
These pages can be registered at init()-time, and will be routed to /admin/portal.
Typically they read/write `settings` as defined by `go.chromium.org/luci/server/settings`, but they can also be used to provide information to administrators or to provide admin-only actions (such as clearing queues or providing admin tokens).
handlers.go index.go page.go settings.go yesno.go
AssumeTrustedPort can be passed as auth.Method to InstallHandlers to indicate that portal endpoints are being exposed on an internal port accessible only to cluster administrators and no additional auth checks are required (or they are not possible).
GetPages returns a map with all registered pages.
InstallHandlers installs HTTP handlers that implement admin UI.
`adminAuth` is the method that will be used to authenticate the access (regardless of what's installed in the base context). It must be able to distinguish admins (aka superusers) from non-admins. It is needed because settings UI must be usable even before auth system is configured.
`adminAuth` can be a special value portal.AssumeTrustedPort which completely disables all authentication and authorization checks (by delegating them to the network layer).
RegisterPage makes exposes UI for a portal page (identified by given unique key).
Should be called once when application starts (e.g. from init() of a package that defines the page). Panics if such key is already registered.
type Action struct { ID string // page-unique ID Title string // what's displayed on the button Help template.HTML // optional help text Confirmation string // optional text for "Are you sure?" confirmation prompt NoSideEffects bool // if true, the callback just returns some data // Callback is executed on click on the action button. // // Usually it will execute some state change and return the confirmation text // (along with its title). If NoSideEffects is true, it may just fetch and // return some data (which is either too big or too costly to fetch on the // main page). Callback func(c context.Context) (title string, body template.HTML, err error) }
Action corresponds to a button that triggers a parameterless callback.
BasePage can be embedded into Page implementers to provide default behavior.
Actions is additional list of actions to present on the page.
Fields describes the schema of the settings on the page (if any).
Overview is optional HTML paragraph describing this portal page.
ReadSettings returns a map "field ID => field value to display".
Title is used in UI to name this portal page.
WriteSettings saves settings described as a map "field ID => field value".
type Field struct { ID string // page unique ID Title string // human friendly name Type FieldType // how the field is displayed and behaves ReadOnly bool // if true, display the field as immutable Placeholder string // optional placeholder value Validator func(string) error // optional value validation Help template.HTML // optional help text ChoiceVariants []string // valid only for FieldChoice }
Field is description of a single UI element of the page.
Its ID acts as a key in map used by ReadSettings\WriteSettings.
YesOrNoField modifies the field so that it corresponds to YesOrNo value.
It sets 'Type', 'ChoiceVariants' and 'Validator' properties.
IsEditable returns true for fields that can be edited.
FieldType describes look and feel of UI field, see the enum below.
const ( FieldText FieldType = "text" // one line of text, editable FieldChoice FieldType = "choice" // pick one of predefined choices FieldStatic FieldType = "static" // one line of text, read only FieldPassword FieldType = "password" // one line of text, editable but obscured )
Note: exact values here are important. They are referenced in the HTML template that renders the settings page. See server/portal/*.
type Page interface { // Title is used in UI to name this page. Title(c context.Context) (string, error) // Overview is optional HTML paragraph describing this page. Overview(c context.Context) (template.HTML, error) // Fields describes the schema of the settings on the page (if any). Fields(c context.Context) ([]Field, error) // Actions is additional list of actions to present on the page. // // Each action is essentially a clickable button that triggers a parameterless // callback that either does some state change or (if marked as NoSideEffects) // just returns some information that is displayed on a separate page. Actions(c context.Context) ([]Action, error) // ReadSettings returns a map "field ID => field value to display". // // It is called when rendering the settings page. ReadSettings(c context.Context) (map[string]string, error) // WriteSettings saves settings described as a map "field ID => field value". // // Only values of editable, not read only fields are passed here. All values // are also validated using field's validators before this call. WriteSettings(c context.Context, values map[string]string, who, why string) error }
Page controls how some portal section (usually corresponding to a key in global settings JSON blob) is displayed and edited in UI.
Packages that wishes to expose UI for managing their settings register a page via RegisterPage(...) call during init() time.
YesOrNo is a bool that serializes to 'yes' or 'no'.
Useful in Fields that define boolean values.
Set changes the value of YesOrNo.
String returns "yes" or "no".
Package portal imports 23 packages (graph) and is imported by 25 packages. Updated 2020-01-18. Refresh now. Tools for package owners. | https://godoc.org/go.chromium.org/luci/server/portal | CC-MAIN-2020-05 | refinedweb | 870 | 57.67 |
Before you start
Learn what these tutorials can teach you and how you can get the most from them.
About this series
The Linux Professional Institute (LPI) certifies Linux), you should:
- Have several years experience with installing and maintaining Linux®® and Microsoft® Windows® services, including Samba, Pluggable Authentication Modules (PAM), e-mail, and Microsoft Active Directory directory service.
- Be able to plan, architect, design, build, and implement a full environment using Samba and LDAP as well as measure the capacity planning and security of the services.
- Be able to create scripts in Bash or Perl or have knowledge of at least one system-programming language (such as C).
The Linux Professional Institute does not endorse any third-party exam preparation material or techniques in particular.
About this tutorial
Welcome to "Concepts, architecture, and design," the first of six tutorials designed to prepare you for LPI exam 301. In this tutorial, you learn about LDAP concepts and architecture, how to design and implement an LDAP directory, and about schemas.
This tutorial is organized according to the LPI objectives for this topic. Very roughly, expect more questions on the exam for objectives with higher weights.
Objectives
Table 2 provides the detailed objectives for this tutorial.
Table 2. Concepts, architecture, and design: Exam objectives covered in this tutorial
Prerequisites
To get the most from this tutorial, you should have.
Concepts and architecture
This section covers material for topic 301.1 for the Senior Level Linux Professional (LPIC-3) exam 301. This topic has a weight of 3.
In this section, learn about:
- LDAP and X.500 technical specification
- Attribute definitions
- Directory namespaces
- Distinguished names
- LDAP Data Interchange Format
- Meta-directories
- Changetype operations
Most of the LPIC-3 exam focuses on the use of the Lightweight Directory Access Protocol (LDAP). Accordingly, the first objective involves understanding what LDAP is, what it does, and some of the basic terminology behind the concept. When you understand this, you will be able to move on to designing your directory and integrating your applications with the directory.
LDAP, what is it?
Before talking about LDAP, let's review the concept of directories. The classic example of a directory is the phone book, where people are listed in alphabetical order along with their phone numbers and addresses. Each person (or family) represents an object, and the phone number and address are attributes of that object. Though not always obvious at a glance, some objects are businesses instead of people, and these may include fax numbers or hours of operation.
Unlike its printed counterpart, a computer directory is hierarchical in nature, allowing for objects to be placed under other objects to indicate a parent-child relationship. For instance, the phone directory could be extended to have objects representing areas of the city, each with the people and business objects falling into their respective area objects. These area objects would then fall under a city object, which might further fall under a state or province object, and so forth, much like Figure 1. This would make a printed copy much harder to use because you would need to know the name and geographical location, but computers are made to sort information and search various parts of the directory, so this is not a problem.
Figure 1. A sample directory
Looking at Figure 1, knowing where the Simpson's record is tells you more than just the address and phone number. You also know they are in the East end in the town of Springfield. This structure is called a tree. Here, the root of the tree is the Springfield object, and the various objects represent further levels of branching.
This directory-based approach to storing data is quite different than the relational databases that you may be familiar with. To compare the two models, Figure 2 shows what the telephone data might look like if modeled as a relational database.
Figure 2. Directory data modeled in relational form
In the relational model, each type of data is a separate table that allows different types of information to be held. Each table also has a link to its parent table so that the relationships between the objects can be held. Note that the tables would have to be altered to add more information fields.
Remember that nothing about the directory model places any restrictions on how the data may be stored on disk. In fact, OpenLDAP supports many back ends including flat files and Structured Query Language (SQL) databases. The mechanics of laying out the tables on disk are largely hidden from you. For instance, Active Directory provides an LDAP interface to its proprietary back end.
The history of LDAP
LDAP was conceived in Request for Comments (RFC) 1487 as a lightweight way to access an X.500 directory instead of the more complex Directory Access Protocol. (See the Resources section for links to this and related RFCs.) X.500 is a standard (and a family of standards) from the International Telecommunication Union (ITU, formerly the CCITT) that specifies how directories are to be implemented. You may be familiar with the X.509 standard that forms the core of most Public Key Infrastructure (PKI) and Secure Sockets Layer (SSL) certificates. LDAP has since evolved to version 3 and is defined in RFC 4511.
Connecting to an X.500 database initially required the use of the Open Systems Interconnection (OSI) suite of protocols and, in true ITU fashion, required understanding of thick stacks of protocol documentation. LDAP allowed Internet Protocol (IP)-based networks to connect to the same directory with far fewer development cycles than using OSI protocols. Eventually the popularity of IP networks led to the creation of LDAP servers that support only as many X.500 concepts as necessary.
Despite the triumph of LDAP and IP over X.500 and OSI, the underlying organization of the directory data is still X.500-ish. Concepts that you will learn over the course of this tutorial, such as Distinguished Names and Object Identifiers, are brought up from X.500.
X.500 was intended as a way to create a global directory system, mostly to assist with the X.400 series of standards for e-mail. LDAP can be used as a global directory with some effort, but it is mostly used within an enterprise.
A closer look at naming and attributes
In the LDAP world, names are important. Names let you access and search records, and often the name gives an indication of where the record is within the LDAP tree. Figure 3 shows a typical LDAP tree.
Figure 3. A typical LDAP tree showing a user
At the top, or root, of the tree is an entity called
dc=ertw,dc=com. The
dc is short for domain component.
Because
ertw is under the
.com top-level domain, the two are
separated into two different units. Components of a name are
concatenated with a comma when using the X.500 nomenclature, with the
new components being added to the left. Nothing technically prevents
you from referring to the root as
dc=ertw.com, though in the interest of
future interoperability it is best to have the domain components
separate (in fact, RFC 2247 recommends the separate domain
components).
dc=ertw,dc=com is a way to uniquely
identify that entity in the tree. In X.500 parlance, this is called
the distinguished name, or the DN. The DN is much like a
primary key in the relational-database world because there can be only
one entity with a given DN within the tree. The DN of the topmost
entry is called the Root DN.
Under the root DN is an object with the DN of
ou=people,dc=ertw,dc=com.
ou means organizational unit, and
you can be sure it falls under the root DN because the
ou appears immediately to the left of the
root DN. You can also call
ou=people the
relative distinguished name, or RDN, because it is
unique within its level. Put in recursive terms, the DN of an entity
is the entity's RDN plus the DN of the parent. Most LDAP browsers show
only the RDN because it eliminates redundancy.
Moving down the tree to
cn=Sean Walberg,ou=people,dc=ertw,dc=com,
you find the record for a person.
cn means
common name. For the first time, though, a record has some
additional information in the form of attributes. Attributes
provide additional information about the entity. In fact, you'll see
the leftmost component of the DN is duplicated; in this case, it's the
cn attribute. Put another way, the RDN of
an entity is composed of one (or more) attributes of the entity.
While
description are easy enough to understand,
objectClass is not as obvious. An object
class is a group of attributes that correspond to a particular entity
type. One object class may contain attributes for people and another
for UNIX accounts. By applying the two object classes to an entity,
both sets of attributes are available to be stored.
Each object class is assigned an object identifier (OID) that uniquely identifies it. The object class also specifies the attributes, and which ones are mandatory and which are optional. Mandatory attributes must have some data for the entity to be saved. The object class also identifies the type of data held and whether multiple attributes of the same name are allowed. For instance, a person might have only one employee number but multiple first names (for example, Bob, Robert, and Rob).
The bottom-level objects are not the only ones to have object classes
associated with them. These objects, called containers, also
have object classes and attributes. The
people ou is of type
organizationalUnit and has a description
attribute along with
ou=people to create
the RDN. The root of the tree is of type
dcObject and
organization. Knowing which object classes
to assign an object depends on what is being held in the object and
under it. Refer to the Schemas section for
more details.
The root DN also defines the namespace of the tree or, to be
more technical, the Directory Information Tree (DIT). Something
ending in
dc=ibm,dc=com would fall outside
of the namespace from Figure 3, whereas the record
for
Sean Walberg falls within the
namespace. With that in mind, though, it is possible that one LDAP
server contains multiple namespaces. A somewhat abstract item called
the Root DSE contains the information about all the namespaces
available on the server. DSE means the DSA-Specific Entry, and DSA
means Directory System Agent (that is, the LDAP server).
Figure 4 summarizes the terminology associated with the LDAP tree.
Figure 4. Summary of LDAP terminology
Finally, an LDAP tree can be synchronized with other trees or data sources. For instance, one branch of the tree could come from a security system, another from a customer database, and the rest could be stored in the LDAP server. This is called a meta-directory and is intended to be a single source of data for applications such as single sign-on.
The LDIF file
Data can get into an LDAP server in one of two ways. Either it can be loaded in over the network, using the LDAP protocol, or it can be imported from the server through a file in the LDAP Data Interchange Format (LDIF). LDIF can be used at any time, such as to create the initial tree, and to perform a bulk add or modify of the data some time later. The output of a search can also be in LDIF for easy parsing or import to another server. The full specification for LDIF is in RFC 2849 (see Resources for a link).
Adding records
The LDIF that generated the tree from Figure 3 is shown in Listing 1.
Listing 1. A simple LDIF file to populate a tree
# This is a comment dn: dc=ertw,dc=com dc: ertw description: This is my company the description continues on the next line indented by one space objectClass: dcObject objectClass: organization o: ERTW.COM dn: ou=people,dc=ertw,dc=com ou: people description: Container for users objectclass: organizationalunit dn: cn=Sean Walberg,ou=people,dc=ertw,dc=com objectclass: inetOrgPerson cn: Sean Walberg cn: Sean A. Walberg sn: Walberg homephone: 555-111-2222 mail: [email protected] description: Watch out for this guy ou: Engineering
Before delving into the details of the LDIF file, note that the
attribute names are case insensitive. That is,
objectclass is the same as both
objectClass and
OBJECTCLASS. Many people choose to
capitalize the first letter of each word except the first, such as
objectClass,
homePhone, and
thisIsAReallyLongAttribute.
The first line of the LDIF shows a UNIX-style comment, which is prefixed by a hash sign (#), otherwise known as a pound sign or an octothorpe. LDIF is a standard ASCII file and can be edited by humans, so comments can be helpful. Comments are ignored by the LDAP server, though.
Records in the LDIF file are separated by a blank line and contain a
list of attributes and values separated by a colon (:). Records begin
with the
dn attribute, which identifies the
distinguished name of the record. Figure 1,
therefore, shows three records: the
dc=ertw,
ou=people, and
cn=Sean Walberg RDNs, respectively.
Looking back at Figure 1, you can see the first record defined is the root of the tree. The distinguished name comes first. Next comes a list of all the attributes and values, separated by a colon. Colons within the value do not need any special treatment. The LDAP tools understand that the first colon separates the attribute from the value. If you need to define two values for an attribute, then simply list them as two separate lines. For example, the root object defines two object classes.
Each record must define at least one object class. The object class, in
turn, may require that certain attributes be present. In the case of
the root object, the
dcObject object class
requires that a domain component, or
dc, be defined, and the
organization object class requires that an
organization attribute, or
o, be
defined. Because an object must have an attribute and value
corresponding to the RDN, the
dcObject
object class is required to import the
dc
attribute. Defining an
o attribute is not
required to create a valid record.
A
description is also used on the root
object to describe the company. The purpose here is to demonstrate the
comment format. If your value needs to span multiple lines, start each
new line with a leading space instead of a value. Remember that
specifying multiple
attribute: value pairs
defines multiple instances of the attribute.
The second record in Figure 1 defines an
organizationalUnit, which is a container
for people objects in this case. The third defines a user of type
inetOrgPerson, which provides common
attributes for defining people within an organization. Note that two
cn attributes are defined; one is also used
in the DN of the record. The second, with the middle initial, will
help for searching, but it is the first that is required to satisfy
the condition that the RDN be defined.
In the user record there is also an
ou that
does not correspond to the
organizationalUnit the user is in. The
container the user object belongs to can always be found by parsing
the DN. This
ou attribute refers to
something defined by the user, in this case a department. No
referential integrity is imposed by the server, though the application
may be looking for a valid DN such as
ou=Engineering,ou=Groups,dc=ertw,dc=com.
The only other restriction placed on LDIF files that add records is
that the tree must be built in order, from the root.
Figure 1 shows the root object being built, then
an
ou, then a user within that
ou. Now that the structure is built, users
can be added directly to the
people
container, but if a new container is to be used, it must be created
first.
The LDIF behind adding objects is quite easy. The format gets more
complex when objects must be changed or deleted. LDIF defines a
changetype, which can be one of the
following:
addadds an item (default).
deletedeletes the item specified by the DN.
modrdnrenames the specified object within the current container, or moves the object to another part of the tree.
moddnis synonymous with
modrdn.
modifymakes changes to attributes within the current DN.
Deleting users
Deleting an item is the simplest case, only requiring the
dn and
changetype. Listing 2 shows a user being
deleted.
Listing 2. Deleting a user with LDIF
dn: cn=Fred Smith,ou=people,dc=ertw,dc=com changetype: delete
Manipulating the DN
Manipulating the DN of the object is slightly more complex. Despite the
fact that there are two commands,
moddn and
modrdn, they do the same thing! The
operation consists of three separate parts:
- Specify the new RDN (leftmost component of the DN).
- Determine if the old RDN should be replaced by the new RDN within the record, or if it should be left.
- Optionally, move the record to a new part of the tree by specifying a new parent DN.
Consider Jane Smith, who changes her name to Jane Doe. The first thing
to do is change her
cn attribute to reflect
the name change. Because the new name is the primary way she wishes to
be referred to, and the common name forms part of the DN, the
moddn operation is appropriate. (If the
common name weren't part of the DN, this would be an attribute change,
which is covered in the next section.) The second choice is to
determine if the
cn: Jane Smith should stay
in addition to
cn: Jane Doe, which allows
people to search for her under either name. Listing 3 shows the LDIF
that performs the change.
Listing 3. LDIF to change a user's RDN
# Specify the record to operate on dn: cn=Jane Smith,ou=people,dc=ertw,dc=com changetype: moddn # Specify the new RDN, including the attribute newrdn: cn=Jane Doe # Should the old RDN (cn=Jane Smith) be deleted? 1/0, Default = 1 (yes) deleteoldrdn: 0
Listing 3 begins by identifying Jane's record, then the
moddn operator. The new RDN is specified,
continuing to use a common name type but with the new name. Finally,
deleteoldrdn directs the server to keep the
old name. Note that while
newrdn is the
only necessary option to the
moddn
changetype, if you omit
deleteoldrdn, the
action is to delete the old RDN. According to RFC 2849,
deleteoldrdn is a required element.
Should the new Mrs. Jane Doe be sent to a new part of the tree, such as
a move to
ou=managers,dc=ertw,dc=com, the
LDIF must specify the new part of the tree somehow, such as in Listing
4.
Listing 4. Moving a record to a new part of the tree
dn: cn=Jane Doe,ou=people,dc=ertw,dc=com changetype: modrdn newrdn: cn=Jane Doe deleteoldrdn: 0 newsuperior: ou=managers,dc=ertw,dc=com
Curiously, a new RDN must be specified even though it is identical to
the old one, and the OpenLDAP parser now requires that
deleteoldrdn is present despite it being
meaningless when the RDN stays the same.
newsuperior follows, which is the DN of the
new parent in the tree.
One final note on the
modrdn operation is
that the order matters, unlike most other LDIF formats. After the
changetype comes the
newrdn, followed by
deleteoldrdn, and, optionally,
newsuperior.
Modifying attributes
The final
changetype is
modify, which is used to modify attributes
of a record. Based on the earlier discussion of
moddn, it should be clear that
modify does not apply to the DN or the RDN
of a record.
Listing 5 shows several modifications made to a single record.
Listing 5. Modifying a record through LDIF
dn: cn=Sean Walberg,dc=ertw,dc=com changetype: modify replace: homePhone homePhone: 555-222-3333 - changetype: modify add: title title: network guy - changetype: modify delete: mail -
The LDIF for the
modify operation looks
similar to the others. It begins with the DN of the record, then the
changetype. After that comes either
replace:,
add:,
or
delete:, followed by the attribute. For
delete, this is enough information. The
others require the attribute:value pair. Each change is followed by a
dash (-) on a blank line, including the final change.
LDIF has an easy-to-read format, both for humans and computers. For bulk import and export of data, LDIF is a useful tool.
Directory.
Organizing your DN.
Filling in the structure!
Schemas
This section covers material for topic 301.3 for the Senior Level Linux Professional (LPIC-3) exam 301. The topic has a weight of 3.
In this section, learn about:
- LDAP schema concepts
- How to create and modify schemas
- Attribute and object class syntax
Up until now, the schema has been mentioned several times but not fully explained. The schema is a collection of object classes and their attributes. A schema file contains one or more object classes and attributes in a text format the LDAP server can understand. You import the schema file into your LDAP server's configuration, and then use the object classes and attributes in your objects. If the available schemas don't fit your needs, you can create your own or extend an existing one.
LDAP schema concepts
Technically, a schema is a packaging mechanism for object classes and attributes. However, the grouping of object classes is not random. Schemas are generally organized along an application, such as a core, X.500 compatibility, UNIX network services, sendmail, and so on. If you have a need to integrate an application with LDAP, you generally have to add a schema to your LDAP server.
A more in-depth look at OpenLDAP configuration will be in a later
tutorial in this series, but the way to add a schema is with
include /path/to/file.schema. After
restarting the server, the new schema will be available.
When the schema is loaded, you then apply the new object classes to the relevant objects. This can be done through an LDIF file or through the LDAP application program interface (API). Applying the object classes gives you more attributes to use.
Creating and modifying schemas
Schemas have a fairly simple format. Listing 6 shows the schema for the
inetOrgPerson object class along with some
of its attributes.
Listing 6. Part of the inetOrgPerson definition
attributetype ( 2.16.840.1.113730.3.1.241 NAME 'displayName' DESC 'RFC2798: preferred name to be used when displaying entries' EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE ) attributetype ( 0.9.2342.19200300.100.1.60 NAME 'jpegPhoto' DESC 'RFC2798: a JPEG image' SYNTAX 1.3.6.1.4.1.1466.115.121.1.28 ) ) )
Line spacing is not important in schema files -- it is mostly there for
human readability. The first definition is an
attributetype, which means an attribute.
The parentheses enclose the definition of the attribute. First comes a
series of numbers, separated by periods, called the object ID,
or OID, which is a globally unique number. The OID is also
hierarchical, such that if you're assigned the 1.1 tree, you can
create anything like 1.1.1 or 1.1.2.3.4.5.6 without having to register
it.
Following the OID is a series of keywords, each of which may have a
value after it. First the
NAME of the
attribute defines the name that humans will use, such as in the LDIF
file or when retrieving the information through the LDAP API.
Sometimes you might see the name in the form of
NAME ( 'foo' 'bar' ), which means that
either
foo or
bar are acceptable. The server, however,
considers the first to be the primary name of the attribute.
DESC provides a description of the
attribute. This helps you understand the attribute if you're browsing
the schema file.
EQUALITY,
SUBSTR, and
ORDERING (not shown) require a matching
rule. This defines how strings are compared, searched, and
sorted, respectively.
caseIgnoreMatch is a
case-insensitive match, and
caseIgnoreSubstringsMatch is also case
insensitive. See the Resources section
for Web sites that define all the standard matching rules. Like most
things in LDAP, a server can define its own matching methods for its
own attributes, so there are no comprehensive lists of matching
rules.
The
SYNTAX of the attribute defines the
format of the data by referencing an OID. RFC 2252 lists the standard
syntaxes and their OIDs. If the OID is immediately followed by a
number in curly braces ({}), this represents the maximum length of the
data. 1.3.6.1.4.1.1466.115.121.1.15 represents a
DirectoryString that is a UTF-8 string.
Finally, the
SINGLE-VALUE keyword has no
arguments and specifies that only one instance of
displayName is allowed.
The
jpegPhoto attribute has a very short
definition: just the OID, the name and description, and a syntax
meaning a JPEG object, which is an encoded string of the binary data.
It is not practical to search or sort a picture, and multiple photos
can exist in a single record.
Defining an object class follows a similar method. The
objectclass keyword starts the definition,
followed by parentheses, the OID of the object class, and then the
definitions.
NAME and
DESC are the same as before.
SUP defines a superior object class, which
is another way of saying that the object class being defined inherits
from the object class specified by the
SUP
keyword. Thus, an
inetOrgPerson carries the
attributes of an
organizationalPerson.
The
STRUCTURAL keyword defines this as a
structural object class, which can be considered the primary type of
the object. Other options are
AUXILIARY,
which adds attributes to an existing object, and
ABSTRACT, which is the same as structural
but cannot be used directly. Abstract object classes must be inherited
by another object class, which can then be used. The
top object class is abstract. It is
inherited by most other structural object classes, including
person, which is the parent of
organizationalPerson, which, in turn, is
inherited by
inetOrgPerson.
Two keywords,
MAY and
MUST, define the attributes that are
allowed and mandatory, respectively, for records using that particular
object class. For mandatory items, you may not save a record without
all the items being defined. Each attribute is separated by a dollar
sign ($), even if the line continues on the next line.
It is not a good idea to modify structural object classes, or even
existing, well-known, auxiliary object classes. Because these are well
known, you may cause incompatibility issues in the future if your
server is different. Usually the best solution is to define your own
auxiliary object class, create a local schema, and apply it to your
records. For instance, if you are a university and want to store
student attributes, you might consider creating a student object class
that is inherited from either
organizationalPerson or
inetOrgPerson and adding your own
attributes. You could then create auxiliary object classes to add more
attributes such as class schedules.
Understanding which schemas to use
After learning about how schemas are created, it is tempting to start fresh -- to create your own schema based on your environment. This would certainly take care of your present needs, but it could quite possibly make things more difficult in the long run as you add more functionality to your LDAP tree and integrate other systems. The best approach is to stick with standard object classes and attributes when you can and extend when you must.
OpenLDAP usually stores its schema files in /etc/openldap/schema, in files with a .schema extension. Table 3 lists the default schemas along with their purposes.
Table 3. Schemas that ship with OpenLDAP
In addition, RFC 4519 explains common attributes. After finding the attributes you want, you can then look through the schema files to determine which files need to be included in your LDAP configuration and which object classes you must use for your records.
Summary
In this tutorial you learned about LDAP concepts, architecture, and design. LDAP grew out of a need to connect to X.500 directories over IP in a simplified way. A directory presents data to you in a hierarchical manner, much like a tree. Within this tree are records that are identified by a distinguished name and have many attribute-value pairs, including one or more object classes that determine what data can be stored in the record.
LDAP itself refers to the protocol used to search and modify the tree. Practically though, the term LDAP is used for all components, such as LDAP server, LDAP data, or just "It's in LDAP".
Data in LDAP is often imported and exported with LDIF, which is a
textual representation of the data. An LDIF file specifies a
changetype, such as
add,
delete,
modrdn,
moddn,
and
modify. These operations let you add
entries, delete entries, move data around in the tree, and change
attributes of the data.
Designing the tree correctly is crucial to long-term viability of the LDAP server. A correct design means fewer change operations are needed, which leads to consistent data that can easily be found by other applications. By choosing common attributes, you ensure that other consumers of the LDAP data understand the meaning of the attributes and that fewer translations are required.
The LDAP schema dictates which attributes can be used in your server. Within the schema are definitions of the attributes, including OIDs to uniquely identify them, instructions on how to store and sort the data, and textual descriptions of what the attributes do. Object classes group attributes together and can be defined as structural, auxiliary, or abstract.
Structural object classes define the record, so a record may only have one structural object class. Auxiliary object classes add more attributes for specific purposes and can be added to any record. An abstract object class must be inherited and cannot be used directly..
- RFC 1487, X.500 Lightweight Directory Access Protocol, gives you some insight into the development of LDAP and the history of X.500.
- RFC 2247, Using Domains in LDAP/X.500 Distinguished Names, is a brief description of the
domainComponentattribute and how to use it properly in your LDAP tree.
- RFC 2252, Lightweight Directory Access Protocol (v3): Attribute Syntax Definitions, lists the standard syntaxes for attributes, which can help you figure out what format a certain attribute is expecting.
- RFC 2849, The LDAP Data Interchange Format (LDIF) - Technical Specification, describes the LDIF language. It uses Backus-Naur Form (BNF) to specify the language, which can be tricky to understand. RFC 2234, Augmented BNF for Syntax Specifications: ABNF, might help you understand the various operators.
- RFC 4511, Lightweight Directory Access Protocol (LDAP): The Protocol, is the latest draft of the LDAP protocol.
- RFC 4519, Lightweight Directory Access Protocol (LDAP): Schema for User Applications, is an updated list of the commonly-used attributes; this list helps ensure you're using the same attributes everyone else is to describe the same data.
- The OID descriptions for 2.5.13 link to detailed descriptions of how each matching rule (string comparison, substrings, and ordering) works.
- This FAQ entry on object classes gives details on some of trickier rules of dealing with object classes. Some of OpenLDAP's error messages are terse, especially when dealing with LDIF imports.
- The overlay framework in OpenLDAP is key because the meta-directory concept can be carried further than just tying together multiple LDAP servers. A request for a particular tree or OID can be directed to custom code that can call a script, read a database, or call an API. Another description is on the Symas Corporation Web site.
- "Demystifying LDAP Data" (O'Reilly, November 2006) explains object class inheritance. It looks at the
inetOrgPersonobject class and describes structural and auxiliary object classes.
- LDAP for Rocket Scientists is an excellent open source guide,
- OpenLDAP is a great choice if you're looking for an LDAP server.
- phpLDAPadmin is a Web-based LDAP administration tool.
- Luma is a good GUI to look at if that's more your style.
-. | http://www.ibm.com/developerworks/linux/edu/l-dw-linux-lpic3301-i.html?S_TACT=105AGX59&S_CMP=GRsite-lnxw9d&ca=dgr-lnxw9dLPI301P1 | CC-MAIN-2014-15 | refinedweb | 5,403 | 63.19 |
(this post)
- Part 5 - Adding Decimal Support
- Part 6 - Supporting Negative Numbers
- Part 7 - Add Dirty State
- Part 8 - Support Keypad Input
- Part 9 - Combination Key Input
- Part 10 - Testing
- Part 11 - Netlify Deployment
- diff:
- ellie:
Now that we have a good-looking calculator, we can add some functionality. We need to do 2 things to have something working.
- Push stuff to the stack
- Do operations on the stack
Whenever I start a new feature in Elm I start with the `Model`.

```elm
type alias Model =
    { stack : List Float
    , currentNum : Float
    }


initialModel : Model
initialModel =
    { stack = []
    , currentNum = 0
    }
```
Let's also output some debugging information to see what's going on as we work. We can turn on the time travelling debugger by restarting our elm-live server with the `--debug` flag.

```
npx elm-live src/Main.elm --hot --open -- --output=elm.js --debug
```
Now you should see the debugger in the bottom left side of your browser window.
If you click on it the window will show you the model.
Display the stack
We also need a way to view the stack. Let's update the view function.
The resulting HTML will look something like this.
```html
<div class="calculator">
    <!-- THE STACK -->
    <div class="input-box"></div>
    <div class="input-box"></div>
    <div class="input-box"></div>
    <!-- THE INPUT BOX -->
    <div class="input-box"></div>
    <!-- THE CALCULATOR BUTTONS -->
    <div class="section">
        ...
    </div><!-- section -->
</div><!-- calculator -->
```
We already created the input box in the previous chapter. We can reuse that function to display the stack.
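As a reminder, the `inputBox` helper from the previous chapter looks roughly like this (a sketch; it assumes Elm 0.19's `String.fromFloat`, and your exact markup may differ):

```elm
inputBox : Float -> Html Msg
inputBox num =
    div [ class "input-box" ]
        [ text <| String.fromFloat num ]
```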
Now we need to loop through the stack and print it out. We do this by using `List.map`.

```elm
List.map inputBox model.stack
```
If we look at the function signature of `List.map` in the Elm repl, this is what we get.

```elm
> List.map
<function> : (a -> b) -> List a -> List b
```
What this is telling us is that it takes a function `a -> b` and a list `List a`, and outputs a new list `List b`. Notice how the first list matches the first argument of the input function, `a`. And the output list matches the output of the input function, `b`.
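For example, mapping a function of type `Float -> String` over a `List Float` produces a `List String` (this assumes Elm 0.19's `String.fromFloat`):

```elm
List.map String.fromFloat [ 1.5, 2 ] == [ "1.5", "2" ]
```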
If we do this, the stack will be displayed with the most recently pushed value at the top, because new values are added to the head of the list. RPN calculators usually show the newest value at the bottom of the stack, just above the input line, so we need to reverse the stack before we print it out.
```elm
List.map inputBox (List.reverse model.stack)
```
Here is the final Elm code in our view function. We place the stack on top of our input box and the button grid.
```elm
view : Model -> Html Msg
view model =
    div [ class "calculator" ]
        (List.map inputBox (List.reverse model.stack)
            ++ [ inputBox model.currentNum
               , section
               ]
        )
```
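The `++` operator here appends two lists, so the rendered stack rows and the two fixed elements (the input box and the button grid) end up in a single list of HTML nodes:

```elm
[ 1, 2 ] ++ [ 3 ] == [ 1, 2, 3 ]
```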
Input numbers
We now need to attach an event to each button in our button grid. All events in Elm are handled as messages to our update function. We'll need to tweak our `cell` function to take an event listener.
Let's first define the message that will be sent to the update function.
```elm
type Msg = InputNumber Float
```
Now we need to handle that message in our update function.
```elm
update : Msg -> Model -> Model
update msg model =
    case msg of
        InputNumber num ->
            { model | currentNum = num }
```
Now we can attach the event to our buttons. Be sure to import `onClick` at the top of the file.

```elm
import Html.Events exposing (onClick)

...

cell (onClick (InputNumber 1)) Single White "1"
```
And finally we need to change the `cell` function to receive this `onClick` message. If you are unsure what the type signature of a thing is, usually your code editor can tell you if you hover over the thing in question. Another way is to make your best guess and let the compiler error tell you what type it was expecting.

In this case `onClick` is an `Html.Attribute Msg`.

```elm
cell : Html.Attribute Msg -> Size -> Color -> String -> Html Msg
cell attr size color content =
    button
        [ ...
        , attr
        ]
        [ text content ]
```
Now we can add the onClick event to every button and we will be able to input any digit.
```elm
section : Html Msg
section =
    div [ class "section" ]
        [ cell ...
        , cell (onClick (InputNumber 1)) Single White "1"
        , cell (onClick (InputNumber 2)) Single White "2"
        , cell (onClick (InputNumber 3)) Single White "3"
        , ...
        ]
```
If you look back in the view function, we already set it up so that `model.currentNum` is being displayed.
Homework
You will learn best by struggling to do something yourself. Try to pick it up from here and do the following.
- Implement the clear button event.
- Implement the back button (i.e. 123 [press back] 12 [press back] 1)
- Input larger numbers, not only single digits.
And if you are really ambitious you could finish up this chapter by doing
- Push a number to the stack
- Perform operations on the stack
Try it first. If you get stuck I'm going to go through pushing to the stack and doing operations in the next two sections.
Push numbers onto the stack
Now that we can input numbers, we can push things onto the stack.
Let's first create an `Enter` message.

```elm
type Msg
    = InputNumber Float
    | Enter
```
And then add the event to our button.
```elm
cell (onClick Enter) Double Yellow "Enter"
```
Now what needs to happen to our model when the user clicks "Enter"?
We need to push the `model.currentNum` onto the stack. We can do that with the `::` cons operator.

```elm
1 :: [ 2, 3 ] == [ 1, 2, 3 ]
```

See the Elm `List` module documentation for more information about list operations.
And we will also reset `model.currentNum` to `0`.

```elm
update : Msg -> Model -> Model
update msg model =
    case msg of
        ...

        Enter ->
            { model
                | stack = model.currentNum :: model.stack
                , currentNum = 0
            }
```
Operate on the stack
Now that we have numbers on the stack we can operate on them.
We need a way to tie an event from our buttons to call a function on our stack.
Let's start by defining our message.

```elm
type Msg
    = InputNumber Float
    | ...
    | InputOperator Operator
```
We have no `Operator` type, so let's add that.

```elm
type Operator
    = Add
    | Sub
    | Mult
    | Div
```
And bind the event to our button.
cell (onClick (InputOperator Add)) Single Yellow "+"
Now we need a way to pop an element off our stack and apply the operator to it and `model.currentNum`.

```elm
case model.stack of
    -- the stack is empty, so do nothing
    [] ->
        -- just return the model
        model

    -- x is the head of the list
    -- xs is the rest of the list
    x :: xs ->
        -- do stuff here
```
Let's talk about this pattern a little more. I know for myself, coming from Python or JavaScript, this looks really weird. Why can't we just do a `foreach` over the list? In Elm there is no such thing as a for loop. Processing a list is a recursive operation.
Pattern match on a list
There are two cases when dealing with a list. Either it is empty or it has stuff in it.
If there is stuff in it, we can deconstruct it with `x :: xs`. `x :: xs` is a nifty way of popping off the first element of the list: `x` is the first element, and `xs` is the rest of the list.

Since we have popped off the first element, we can now operate on the element with the `currentNum`. We can then assign the stack to the remaining list, `xs`.
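To see why recursion replaces loops, here is a hand-rolled sum function using the same pattern (a minimal sketch; the standard library already provides `List.sum`):

```elm
-- add up a list recursively: handle the empty case,
-- then peel off the head and recurse on the tail
sum : List Float -> Float
sum list =
    case list of
        [] ->
            0

        x :: xs ->
            x + sum xs
```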
Handle the model update
Now let's handle the message in the update function.
```elm
update : Msg -> Model -> Model
update msg model =
    case msg of
        ...

        InputOperator operator ->
            case model.stack of
                -- stack is empty, do nothing
                [] ->
                    model

                -- split up the list and do stuff
                x :: xs ->
                    let
                        -- look up the function to use
                        op =
                            operatorFunction operator

                        -- do the math
                        newNum =
                            op model.currentNum x
                    in
                    -- now update the model
                    { model
                        | stack = xs
                        , currentNum = newNum
                    }
```
Ok. So I introduced some new stuff in this code chunk. Let's go through some of the pieces.
let ... in blocks
`let ... in` blocks allow you to define a local scope. I feel it can make the code a lot more readable if you need to manipulate your data.
Get the operator function
We need to assign a function to each of our `Operator` types. The function needs to return another function: each arithmetic function takes 2 numbers and returns a number, `Float -> Float -> Float`.
Putting parens around the `+` operator is syntax for "treat this as a function that takes 2 arguments." It can be used like this: `(+) 1 2 == 3`.
```elm
operatorFunction : Operator -> (Float -> Float -> Float)
operatorFunction operator =
    case operator of
        Add ->
            (+)

        Sub ->
            (-)

        ...
```
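Because `operatorFunction` returns a function, you can apply the result directly to two numbers:

```elm
operatorFunction Add 2 3 == 5
```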
Your homework solutions
Hopefully you tried to input larger numbers and to implement the clear and back buttons yourself. If you haven't, stop reading and try it. Even if you fail, you will learn better than me telling you how to do it.
Input larger numbers
Right now we can only put in a single digit. Let's fix that.
```elm
update : Msg -> Model -> Model
update msg model =
    case msg of
        ...

        InputNumber num ->
            { model | currentNum = (model.currentNum * 10) + num }

        ...
```
We need to do a little bit of math to create a larger number: shift the existing digits of `model.currentNum` over by one place (multiply by 10) and then add the new digit.
Clear and back buttons
Let's start by adding `Clear` and `Back` to our message type.

```elm
type Msg
    = InputOperator Operator
    | InputNumber Float
    | Clear
    | Back
    | ...
```
Now the compiler will squawk at you to add those two new types to the update function.
```
-- MISSING PATTERNS ------------------------------------------------

This `case` does not have branches for all possibilities:

102|>    case msg of
...
```
Let's add those.
```elm
update : Msg -> Model -> Model
update msg model =
    case msg of
        Clear ->
            { model | currentNum = 0 }

        Back ->
            { model | currentNum = toFloat <| floor <| model.currentNum / 10 }
```
For the `Back` message we need to undo what we did to create larger numbers, as the worked example below shows. Doing this math to input numbers is getting awkward, and it will have its limitations when we deal with decimal numbers; we will fix that in the next chapter.
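The `Back` math drops the last digit: divide by 10, floor the result down to an `Int`, and convert back to a `Float`:

```elm
123 / 10 == 12.3
floor 12.3 == 12
toFloat 12 == 12.0
```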
Now that we have some basic operations done, the next chapter will introduce decimal numbers.
Create an ASP.NET MVC app with auth and SQL DB and deploy to Azure App Service
This tutorial shows:
- How to create a secure ASP.NET MVC 5 web project in Visual Studio.
- How to authenticate and authorize users who log on with credentials from their Google or Facebook accounts (social provider authentication using OAuth 2.0).
- How to authenticate and authorize users who register in a database managed by the application (local authentication using ASP.NET Identity).
- How to use the ADO.NET Entity Framework 6 Code First to read and write data in a SQL database.
- How to use Entity Framework Code First Migrations to deploy a database.
- How to store relational data in the cloud by using Azure SQL Database.
- How to deploy a web project that uses a database to a web app in Azure App Service.
Note:
This is a long tutorial. If you want a quick introduction to Azure App Service and Visual Studio web projects, see Create an ASP.NET web app in Azure App Service. For troubleshooting info, see the Troubleshooting section.
Or if you want to get started with Azure App Service before signing up for an Azure account, go to Try App Service, where you can immediately create a short-lived starter web app in App Service. No credit cards required; no commitments.
Prerequisites
To complete this tutorial, you need a Microsoft Azure account. If you don't have an account, you can activate your Visual Studio subscriber benefits or sign up for a free trial.
To set up your development environment, you must install Visual Studio 2013 Update 5 or higher, and the latest version of the Azure SDK for .NET. This article was written for Visual Studio Update 4 and SDK 2.8.1. The same instructions work for Visual Studio 2015 with the latest Azure SDK for .NET installed, but some screens will look different from the illustrations.
Create an ASP.NET MVC 5 application
Create the project
From the File menu, click New Project.
In the New Project dialog box, expand C# and select Web under Installed Templates, and then select ASP.NET Web Application. Name the application ContactManager, and then click OK. In the New ASP.NET Project dialog box, select the MVC template and make sure that App Service is selected as the Azure hosting option.
Click OK.
The Configure Microsoft Azure Web App Settings dialog appears. You may need to sign in if you have not already done so, or reenter your credentials if your login is expired.
Optional - change the value in the Web App name box (see image below).
The URL of the web app will be {name}.azurewebsites.net, so the name has to be unique in the azurewebsites.net domain. The configuration wizard suggests a unique name by appending a number to the project name "ContactManager", and that's fine for this tutorial.
In the Resource group drop-down, select an existing group or Create new resource group (see image below).
If you prefer, you can select a resource group that you already have. But if you create a new resource group and only use it for this tutorial, it will be easy to delete all Azure resources you created for the tutorial when you're done with them. For information about resource groups, see Azure Resource Manager overview.
In the App Service plan drop-down, select an existing plan or Create new App Service plan (see image below).
If you prefer, you can select an App Service plan that you already have. For information about App Service plans, see Azure App Service plans in-depth overview.
Tap Explore additional Azure services to add a SQL database.
Tap the + icon to add a SQL database.
Tap New on the Configure SQL Database dialog:
Enter a name for the administrator and a strong password.
The server name must be unique. It can contain lower-case letters, numeric digits, and hyphens. It cannot contain a trailing hyphen. The user name and password are new credentials you're creating for the new server.
If you already have a database server, you can select that instead of creating one. Database servers are a precious resource, and you generally want to create multiple databases on the same server for testing and development rather than creating a database server per database. However, for this tutorial you only need the server temporarily, and by creating the server in the same resource group as the web site you make it easy to delete both web app and database resources by deleting the resource group when you're done with the tutorial.
If you select an existing database server, make sure your web app and database are in the same region.
Tap Create.
Visual Studio creates the ContactManager web project, creates the resource group and App Service plan that you specified, and creates a web app in Azure App Service with the name you specified.
Set the page header and footer
In Solution Explorer, open the _Layout.cshtml file in the Views\Shared folder.

Replace the ActionLink in the _Layout.cshtml file with the following code.
```cshtml
@Html.ActionLink("CM Demo", "Index", "Contacts", new { area = "" }, new { @class = "navbar-brand" })
```
Make sure you change the third parameter from "Home" to "Contacts". The markup above will create a "Contacts" link on each page to the Index method of the Contacts controller. Change the application name in the header and the footer from "My ASP.NET Application" and "Application name" to "Contact Manager" and "CM Demo".
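The application name appears in both the header navbar and the footer of _Layout.cshtml. A sketch of the updated footer markup (the generated template may differ slightly in structure):

```cshtml
<footer>
    <p>&copy; @DateTime.Now.Year - Contact Manager</p>
</footer>
```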
Run the application locally
Press CTRL+F5 to run the app.
The application home page appears in the default browser.
This is all you need to do for now to create the application that you'll deploy to Azure.
Deploy the application to Azure

In Solution Explorer, right-click the project and click Publish. The Publish Web dialog box opens with the publish profile that Visual Studio created along with the web app; click Publish. Visual Studio deploys the project to Azure and opens the site in your default browser.
Enable SSL for the Project
In Solution Explorer, click the ContactManager project, then click F4 to open the Properties window.
Change SSL Enabled to True.
Copy the SSL URL.
The SSL URL will be https://localhost:44300/ unless you've previously created SSL web apps.
In Solution Explorer, right click the Contact Manager project and click Properties.
Click the Web tab.
Change the Project Url to use the SSL URL and save the page (Control S).
Verify that Internet Explorer is the browser that Visual Studio launches, and then run the project (Ctrl+F5). Click Yes to start the process of trusting the self-signed certificate that IIS Express has generated.
Read the Security Warning dialog and then click Yes if you want to install the certificate representing localhost.
IE shows the Home page and there are no SSL warnings.
Internet Explorer is a good choice when you're using SSL because it accepts the certificate and shows HTTPS content without a warning. Microsoft Edge and Google Chrome also accept the certificate. Firefox uses its own certificate store, so it displays a warning.
Add a database to the application
Next, you'll update the app to add the ability to display and update contacts and store the data in a database. The app will use the Entity Framework (EF) to create the database and to read and update data.
Add data model classes for the contacts
You begin by creating a simple data model in code.
In Solution Explorer, right-click the Models folder, click Add, and then Class.
In the Add New Item dialog box, name the new class file Contact.cs, and then click Add.
Replace the contents of the Contact.cs file with the following code.
using System Contact class defines the data that you will store for each contact, plus a primary key, ContactID, that is needed by the database.
Create web pages that enable app users to work with the contacts
The ASP.NET MVC scaffolding feature can automatically generate code that performs create, read, update, and delete (CRUD) actions.
Build the project (Ctrl+Shift+B). (You must build the project before using the scaffolding mechanism.).
Click Add.
Visual Studio creates a controller with methods and views for CRUD database operations for Contact objects.
Enable Migrations, create the database, add sample data and a data initializer
The next task is to enable the Code First Migrations feature in order to create database tables based on the data model that. The code in this file creates the database tables. The first parameter ( Initial )
usingstatement.
using ContactManager.Models;
Replace the Seed method with the following code:
protected override void Seed(ContactManager.Models.ApplicationDbContext context) { context.Contacts.AddOrUpdate(p => p.Name, new Contact { Name = "Debra Garcia", Address = "1234 Main St", City = "Redmond", State = "WA", Zip = "10999", Email = "[email protected]", }, new Contact { Name = "Thorsten Weinrich", Address = "5678 1st Ave W", City = "Redmond", State = "WA", Zip = "10999", Email = "[email protected]", }, new Contact { Name = "Yuhong Li", Address = "9012 State st", City = "Redmond", State = "WA", Zip = "10999", Email = "[email protected]", }, new Contact { Name = "Jon Orton", Address = "3456 Maple St", City = "Redmond", State = "WA", Zip = "10999", Email = "[email protected]", }, new Contact { Name = "Diliana Alexieva-Bosseva", Address = "7890 2nd Ave E", City = "Redmond", State = "WA", Zip = "10999", Email = "[email protected]", } ); }
This code initializes (seeds) the database with contact information. For more information on seeding the database, see Seeding and Debugging Entity Framework (EF) DBs. Build the project to verify there are no compile errors.
In the Package Manager Console enter the command:
update-database
The update-database runs the first migration which creates the database. By default, the database is created as a SQL Server Express LocalDB database.
Press CTRL+F5 to run the application, and then click the CM Demo link; or navigate to.
The application shows the seed data and provides edit, details and delete links. You can create, edit, delete and view data.
Add an OAuth2 Provider
Note:
For detailed instructions on how to use the Google and Facebook developer portal sites, this tutorial links to tutorials on the ASP.NET site. However, Google and Facebook change their sites more frequently than those tutorials are updated, and they are now out of date. If you have trouble following the directions, see the featured Disqus comment at the end of this tutorial for a list of what has changed. see in this tutorial. To use Facebook as an authentication provider, see MVC 5 App with Facebook, Twitter, LinkedIn and Google OAuth2 Sign-on .
In addition to authentication, this tutorial uses roles to implement authorization. Only those users that you add to the canEdit role are able to change data (that is, create, edit, or delete contacts).
Follow the instructions in MVC 5 App with Facebook, Twitter, LinkedIn and Google OAuth2 Sign-on under Creating a Google app for OAuth 2 to set up a Google app for OAuth2.
Run and test the app to verify that you can log on using Google authentication.
If you want to create social login buttons with provider-specific icons, see Pretty social login buttons for ASP.NET MVC 5
Using the Membership APIstatements:
using Microsoft.AspNet.Identity; using Microsoft.AspNet.Identity.EntityFramework;
Add the following AddUserAndRole method to the class:
bool AddUserAndRole(ContactManager.Models.ApplicationDbContext context) { IdentityResult ir; var rm = new RoleManager<IdentityRole> (new RoleStore<IdentityRole>(context)); ir = rm.Create(new IdentityRole("canEdit")); var um = new UserManager<ApplicationUser>( new UserStore<ApplicationUser> the ASP.NET Identity tutorials on the ASP.NET site.
Use Temporary Code to Add New Social Login Users to the canEdit Role
In this section you will temporarily modify the ExternalLoginConfirmation method in the Account controller to add new users registering with an OAuth provider to the canEdit role. We hope to provide a tool similar to WSAT in the future which will allow you to create and edit user accounts and roles. Until then, you can accomplish the same function by using temporary code.
Open the Controllers\AccountController.cs file and navigate to the ExternalLoginConfirmation method.. The following snippet shows the new line of code in context.
//
The Update-Database command runs the Seed method, and that runs the AddUserAndRole method you added earlier. The AddUserAndRole method creates the user [email protected] and adds her to the canEdit role.
Protect the Application with SSL and the Authorize Attribute
In this section you apply the Authorize attribute to restrict access to the action methods. Anonymous users will be able to view only the Index action method of the home controller. Registered users will be able to see contact data (The Index and Details pages of the Cm controller), the About page, and the Contact page. Only users in the canEdit role will be able to access action methods that change data.()); }
This code adds the Authorize filter and the RequireHttps filter to the application. The Authorize filter prevents anonymous users from accessing any methods in the application. You will use the AllowAnonymous attribute to opt out of the authorization requirement in a couple methods, so anonymous users can log in and can view the home page. The RequireHttps requires that all access to the web app be through HTTPS.
An alternative approach is to add the Authorize attribute and the RequireHttps attribute to each controller, but it's considered a security best practice to apply them to the entire application. By adding them globally, every new controller and action method you add is automatically protected -- you don't need to remember to apply them. For more information see Securing your ASP.NET MVC App and the new AllowAnonymous Attribute.
Add the AllowAnonymous attribute to the Index method of the Home controller. The AllowAnonymous attribute enables you to white-list the methods you want to opt out of authorization.
public class HomeController : Controller { [AllowAnonymous] public ActionResult Index() { return View(); }
If you do a global search for AllowAnonymous, you'll see that it is used in the login); }
Press CTRL+F5 to run the application.
If you are still logged in from a previous session, hit the Log out link.
Click on the About or Contact links. You are redirected to the login page because anonymous users cannot view those pages.
Click the Register as a new user link and add a local user with email [email protected]. Verify Joe can view the Home, About and Contact pages.
Click the CM Demo link and verify that you see the data.
Click an edit link on the page, you will be redirected to the login page (because a new local user is not added to the canEdit role).
Log in as [email protected] with password of "P_assw0rd1" (the "0" in "word" is a zero). You.
Deploy the app to Azure
In Visual Studio, right-click the project in Solution Explorer and select Publish from the context menu.
The Publish Web wizard opens.
Click the Settings tab on the left side of the Publish Web dialog box.
Under ApplicationDbContext select the database that you created when you created the project..
Stop the web app to prevent other people from registering
In Server Explorer, navigate to Azure > App Service > {your resource group} > {your web app}.
Right-click the web app and select Stop.
Alternatively, from the Azure Portal, you can go to the web app's blade, then click the Stop icon at the top of the blade.
Remove AddToRoleAsync, Publish, and Test
Comment out or remove the following code from the ExternalLoginConfirmation method in the Account controller:
await UserManager.AddToRoleAsync(user.Id, "canEdit");
Build the project (which saves the file changes and verifies you don't have any compile errors). you will remove local account access.
Verify that you can navigate to the About and Contact pages.
Click the CM Demo link to navigate to the Cm controller. Alternatively, you can append Cm to the URL.
Click an Edit link.
You are redirected to the login that.
Examine the SQL Azure DB
In Server Explorer, navigate to Azure > SQL Databases > {your database}
Right click your database, and then select Open in SQL Server Object Explorer.
If you haven't connected to this database previously, you may be prompted to add a firewall rule to enable access for your current IP address. The IP address will be pre-filled. Simply click Add Firewall Rule to enable access.
Log in to the database with the user name and password that you specified when you created the database server.
Right click that the UserId is from [email protected] and the Google account you registered.
Troubleshooting
If you run into problems, here are some suggestions for what to try.
- Errors provisioning SQL Database - Make sure you have the current SDK installed. Versions before 2.8.1 have a bug that in some scenarios causes errors when VS tries to create the database server or the database.
- Error message "operation is not supported for your subscription offer type" when creating Azure resources - Same as above.
- Errors when deploying - Consider going through the basic ASP.NET deployment article. That deployment scenario is simpler and if you have the same problem there it may be easier to isolate. For example, in some enterprise environments a corporate firewall may prevent Web Deploy from making the kinds of connections to Azure that it requires.
- No option to select connection string in the Publish Web wizard when you deploy - If you used a different method to create your Azure resources (for example, you are trying to deploy to a web app and a SQL database created in the Portal), the SQL database may not be associated with the web app. The easiest solution is to create a new web app and database by using VS as shown in the tutorial. You don't have to start the tutorial over -- in the Publish Web wizard you can opt to create a new web app and you get the same Azure resource creation dialog that you get when you create the project.
- Directions for Google or Facebook developer portal are out of date - See the featured Disqus comment at the end of this tutorial.
Next steps
You've created a basic ASP.NET MVC web application that authenticates users. For more information about common authentication tasks and how to keep sensitive data secure, see the following tutorials.
- Create a secure ASP.NET MVC 5 web app with log in, email confirmation and password reset
- ASP.NET MVC 5 app with SMS and email Two-Factor Authentication
- Best practices for deploying passwords and other sensitive data to ASP.NET and Azure
- Create an ASP.NET MVC 5 App with Facebook and Google OAuth2 This includes instructions on how to add profile data to the user registration DB and for detailed instructions on using Facebook as an authentication provider.
- Getting Started with ASP.NET MVC 5
For a more advanced tutorial about how to use the Entity Framework, see Getting Started with EF and MVC.
This tutorial. You can also request and vote on new topics at Show Me How With Code.
What's changed
- For a guide to the change from Websites to App Service see: Azure App Service and Its Impact on Existing Azure Services | https://azure.microsoft.com/en-us/documentation/articles/web-sites-dotnet-deploy-aspnet-mvc-app-membership-oauth-sql-database/ | CC-MAIN-2016-30 | refinedweb | 3,136 | 65.12 |
.
1st. Thank you.
I agree. Thanks KDE team!
Please don't first up the dot. Anyway thanks KDE devs. I'm super stoked about 4.2!
Definitely wow! The stats say a lot, and the length of the changelog is huge! Compliments for the amazing progress and wonderful release announcement! :-)
Let's make those chairs fly ^^
Does anyone know if (or when) kde4daily will have the beta 2 update?
It still says I have the latest update but it was almost a week since it did the last update...
it's there now according to a blog ;-)
nice! (and updating)
What blog btw?
I installed a copy of Ubuntu8.10 when I saw an article about KDE4.2 daily 'neon' builds available here:
deb intrepid main
That was several weeks ago, and it installed fine. But there have been no updates available since. Also, there seems to be a virtual box 4.2 tester release too. Are these all the same thing (or at least different versions from the same folks)? Or is the 'neon' thing discontinued?
I guess I could wait till I get home tonight and check for an update, but I'm impatient...
Hi
There is a bunch of ppl using the Neon-project nightly builds of KDE;
The latest update is from the 17th Dec, so not really out-dated. I noticed it gets updated once or twice in a week.
For some reason I don't know, some parts are not included (ie. kate) but beside that it's pretty good and even includes debugging symboles pckages.
It's been a pretty rough week, compilation-wise - hopefully things will be a bit smoother now.
And yes - KDE4Daily now has, which is < 12 hours old :)
There's a typo in the first sentence of the second paragraph: "Since the first beta, which was released less than 4 weeks ago, 1665 bugs new bugs have been opened, and 2243 bugs have been closed."
s/1665 bugs new bugs/1665 new bugs/
fixed, thanks.
Is this proper English? (English is not my first language, but I'm not sure this sentence is grammatically correct).
For the 4.2 release, the KDE team has fixed literally thousands of bugs and has implemented dozens of features that were missing until now in KDE 4.2 fixes your issues.
Not quite. A better way to say it would be:
"For the 4.2 release, the KDE team has fixed literally thousands of bugs and has implemented dozens of features that were missing until now in KDE 4.2, which will fix a lot of your issues."
american or english english ?
What is English English? Is that British English with 20% more Oxford?
Fixed as well, thanks :)
Will this get onto the Kubuntu Jaunty alpha2 LiveCD?
Yes. It was uploaded the day before yesterda/yesterday just in time for the alpha 2 CDs.
Btw. I'm thinking about updateing my Kubuntu. Will there be KDE 4.2 final packages for Intrepid, or perhaps even Hardy (would rock even more, as I'd love to have a stable base)? I'd hate to wait for Jaunty ;)
KDE 4.2.0 is planned to be placed in intrepid-backports once it is released. Hardy will not be seeing any KDE 4.2 releases.
Downloading and compiling right now. With the debug flag set. ;-)
lets get the bug count up my fellow users , and report as many bugs as we can!
*ahem* Let's find as many bugs as possible, and check super-duper thoroughly to make sure we don't report duplicates.
No Administrator Button in System Settings. It's over 1 year old now and WE would be very happy if some kind coder out there would give it a little attention.
"No Administrator Button in System Settings."
and there won't be. running the whole GUI as root is not only slow and resource intensive, it's also not exactly great for security.
instead, as has been done already for things like the time and date control panel, helper apps, preferably with policyKit integration, should perform the actual final settings applications. and those little, non-gui apps should run with elevated priveleges only.
I am not talking about running the whole system as root. As a matter of fact if there is no administrator button in the system settings then I WILL HAVE TO RUN THE SYSTEM AS ROOT in order to make certain changeses like changing the KDM theme which is not possible as a normal user in KDE 4. In KDE 3.5 present version using KDM 3 all I have to do is press the administration button in the system settings which promps me to give my root password and then I can make the changes - no security risks to my system. In KDE 4 I have to log in as root in order to make those changes, which of course I personally hate because then my system is vulnerable.
You missed what Aaron said. You shouldn't have to be running any windows as root much less an entire session, what should be happening is PolicyKit will let you make certain changes (assuming you have the appopriate permissions from PolicyKit), or you'd supply the root password when applying the settings which'd spawn a process as root to do the actual changes, the application with the x11 session wouldn't have any actual special permissions at any point in time.
>In KDE 3.5 present version using KDM 3 all I have to do is press the
>administration button in the system settings which promps me to give my root
>password and then I can make the changes - no security risks to my system.
This has a number of issues:
1.) Speed: Pretty much an entire session must be started for the root user, making it *much* slower to open
2.) Memory usage: Because all the services have to be started for root, more memory is being used.
3.) Security: Different X clients can assert a *lot* of control over each other, it's generally not considered safe at all to be running X11 apps as root in a normal user's session (running X as root isn't considered a very good idea either, unfortunately there isn't too much choice with that at the moment).
So why can't I or a lot of other users make the afore mentioned changes on KDE 4? Why does this bug still exist as valid if the solution is already there?
PolicyKit is very new (in fact we made an exception for it being merged after the freeze). Apps like System Settings need to pick it up now.
Thanks for the info Sebas. So about when can we start enjoying it? Will I be able to change my login theme as a user by the release of KDE 4.2?
I guess this depends if such changes are seen as bug fixes or as new features. The former would appear in KDE 4.2 sometime, the latter would have to appear in 4.3 earliest.
As far as I understand PolicyKit is quite young and might not be fully functional or be officially implemented in 4.2 although openSuse seems to provide the packages. I am however not sure as to how usable this new implementation is.
plasma-workspace-4.1.85/plasma/applets/system-monitor/net.h:22:27: error: ui_cpu-config.h: No such file or directory
maybe...
net.h:22 #include "ui_cpu-config.h"
should be
#include "ui_net-config.h"
?
Ps. no bugzilla account, so here, sorry. :-)
plasma-devel at kde dot org is a better place for such reports; but in this case, it's correct. apparently it's not generating the ui_cpu-config.h from cpu-config.ui?
It is, but just like for the hdd_config.ui or temperatur_config.ui, it's generated for their use only, so all of them need their own .ui file.
the CMakeLists.txt even has
kde4_add_ui_files(net_SRCS net-config.ui)
so it's generated, but just not included (copy'n'paste bug of cpu.h obviously :-) ).
aah, it's in net.h .. yes, that would only work depending on what cmake decided to do. probably worked for us doing make -jN due to that ... anyways, fixed in svn. =)
you are The Man.
sure it's not that per se, but by you comment hint using -j1 did get the compilation go thru with that "broken" include.
I bet it's all about me having an "-j -l4" as my make conf. maybe I'm too used to have cmake do it's own configs in parallel (if that what happens with -lN).
i'm very sad that debian is not packaging beta2 not even in experimental
debian packagers RULE!! they put the latest SVN on this server !!! go KDE. Let's debug it!
A lot of bugs disappeared. There are still some plasma and krunner crashes, but the impression is that until final, all bugs will be easily crushed if developers keep the speed of fixes.
Congrats.
I need a stable system to work. And yet KDE 4.2 Beta 2 does the trick for me :) If I stick to the main plasmoids (panel and folderview) everything works great.
Some small tips:
- I had to remove the plasma config file for things to work properly. This should be removed when plasma is not running. File is .kde4/share/config/plasma-appletsrc
- When trying plasmoids some would crash on exit. This means you cannot exit plasms properly so they always appear on startup. Again deal with config file.
- MSN is not working in my kopete. Apparently a problem with libmsn which has been fixed in later versions.
Using openSUSE 10.3 by the way
> I had to remove the plasma config file for things to work properly.
please report problems you have at bugs.kde.org
> When trying plasmoids some would crash on exit.
backtraces -> bugs.kde.org =)
> This means you cannot exit plasms properly so they always appear on startup
fixed with r898746
Sure, Aaron will do all that :)
This was not meant as a report, just as small hints to users who might run into the same, to save them some time.
Keep up the good work!
Thank you very much for this. Please ignore the "it's not like KDE3.5"-morons, but keep on innovating. It's very much appreciated.
Yeah, few more years of "innovation" and we'll have a buggy version of gnome.
you evidently haven't tried 4.2 then, as your complaint is pretty stupid in light of the achievements in that release. you might even call 4.2 the "parity release" in which we have climbed over the "must have the features of 3.5" wall.
there are very, very few things there were available in 3.5 that aren't in 4, most of those things were very, very questionable to begin with, and there's a huge, huge amount of features that never existed in 3.5.
i'm sorry if that ruins your "clever quip". though i personally feel displays of sillyness such as yours deserve everything they have coming to them.
Awww, did the backlash to being called a "moron" hurt your little feelings?
It's one thing to say that you don't get any constructive criticism from your detractors, but it's another thing entirely to call them "morons" and "stupid" just because they have the audacity to think that [Software X] version 4 should be "feature parity with [Software X] version 3.5 plus extras."
That's the way it's been through most of history; to call them names because they expect it to work the same in KDE is... disingenuous at best.
They're not morons. They're not stupid. It's okay for them to be a bit miffed because of their confusion at the misnumbering of KDE 4. It's quite natural.
Be nice to them, and they WILL eventually become productive bug-reporters and evangelists again. Keep calling them "morons" and "stupid" and they are likely to keep heaping on you the same abuse you keep whining about.
You have an opportunity to be the bigger man. What you do with such an opportunity speaks more about you than about them. I look forward to seeing a positive response in the future, rather than additional fuel on the flamewar fire.
walter != Aaron Seigo - they spell and pronounce their names differently. | https://dot.kde.org/comment/44578 | CC-MAIN-2020-34 | refinedweb | 2,093 | 75.3 |
Guido van Rossum Interviewed 226
Qa1 writes "Guido von Rossum, creator of Python, was recently interviewed by the folks at O'Reilly Network. In this interview he discusses his view of the future of Python and the Open Source community and programming languages in general. Some more personal stuff is also mentioned, like his recent job change (including the Slashdot story about it) and a little about how he manages to fit developing Python into his busy schedule."
Python (Score:2, Interesting)
Re:Python (Score:2)
Start here (Score:5, Informative)
Re:Python (Score:5, Informative)
Re:Python (Score:5, Informative)
I second that (Score:3, Informative)
Re:Python - Python in a nutshell (Score:3, Interesting)
My recomendation:
Python in a Nutshell by Alex Martelli
Hands dow the best introduction to Python from a programmer's prespective. That is if you are already familiar with basic programming concepts. The great thing about the book is that covers just about every aspect in an extremely concise way that does not bore you to death.
I'm a certified Java and XML developer, gave up on Perl long time ago, discovered Python, somehow got over my initial suspicions regarding the whitespace
... within two weeks
Re:Python - Python in a nutshell (Score:2, Informative)
Python on the Zaurus (Score:2)
Re:Python on the Zaurus (Score:2, Informative)
Can anyone (Score:3, Interesting)
The MAJOR advantage is simplicity (Score:5, Informative)
Sure you can do the same things in other languages, at the end all general languages are Turing Machine equivalent. The difference is that Python is EASY to read [pm.org] (according to Master Yoda). It is bottom-up designed to be.
So it is good not only for scripting, but too for prototyping and for everything which needs to be flexible and not too much efficiency-critical. The logic of some videogames is encoded in Python, you know.
Re:The MAJOR advantage is simplicity (Score:3, Funny)
Re:The MAJOR advantage is simplicity (Score:2, Insightful)
I think that Python is a lot easier to read than Ruby or Java. Ruby allows a lot of the same punctuation-based idioms that make Perl so difficult to read and Java is too verbose to be easy to read. Consider the Java version of hello world:
Cheers,
Brian
Re:The MAJOR advantage is simplicity (Score:2)
Of course. Python is executable pseudocode, but Perl is executable line noise. (First saw this on the Slashdot "wisdom line".
:-))
PHP is not easy to read, in most cases (Score:4, Insightful)
PHP suffers readability not in syntax, but in archetecure design. With global namespaces for module functions (say, for example, to FTP a file), you do not have the ability to trace the logic between source files and modules in someone else's code. In addition, PHP encourages the inlining of code in presentation, and most PHP code is not modular (some is) - but on top of that the most popular mechanism for code reuse is eval() and include(), which simply pop more crap into the global namespace without being explicit what they do.
All this impacts readability. Python does not have these problems becuase it encourages explicit namespaces for all objects/modules/packages/classes/etc. Python also enforces readabilty by simple (easy) use of whitespace (this is a good thing.
Re:PHP is not easy to read, in most cases (Score:2)
As for you other critisims I think they are offbase. Both languages use include() hell all languages have some sort of an Include. PHP does not "encourage" inlining of code, smarty is an official part of PEAR but neither does it force people to use smarty. Finally speaking for myself I don't think I have ever used eval() nor seen it used in any major proj
Re:Can anyone (Score:3, Interesting)
If you are using Java then python is a step up because it offers first class functions and some other incredibly power constructs.
Unfortunately, although Python's effort is applaudable, it really is
Re:Can anyone (Score:5, Funny)
Re:Can anyone (Score:4, Insightful)
I look around and it seems to me like most "new" things in CS have been around for 20 years. Why is everybody so intent on rewriting smalltalk and lisp? Does it seem strange to you that every language eventually starts looking like smalltalk and lisp?
Re:Can anyone (Score:4, Funny)
Because they get embarrassed trying to make smalltalk with a lisp.
Re:Can anyone (Score:4, Interesting)
More and more, I'm thinking corporate America is decades behind what the academic world takes for granted. XSLT, for example, is something that never should have happened. And its not just Lisp. Take, for example, DAML+OIL. Its an XML-based language that can make statements in first order logic that can be verified by a theorem prover. Its so complicated and verbose, that its nearly impossible to write by hand, and most people use GUI tools to work wih it. In the next version, they're adding support for limited execution capability (ala XSLT). Meanwhile, I'm thinking, "hello --- Haskell?"
*> My impression of this mainly comes from comp.lang.lisp. I find some of the people who hang out there to be among the rudest I've seen on the Internet. Some pricks, like Eric Naggum (search for "arrogant" on c.l.c in Google Groups and see who gets the first 20 hits...) are actually revered for their rudeness! It might just be there are more math/pure-science types on that board, and while they're definately smart, they can also be rather rude.
Re:Can anyone (Score:2)
All languages nowadays are slowly adding individual pieces of Lisp functionality. Why not just use Lisp (no reason to wait a decade for all the "popular" languages to finally come fill circle and become Lisp dialects).
At first:
Lisp alone is only Lisp, if at all, some should use a lisp with oo extensions.
Even the simplest pascal algorith is hard to be coded in "pure" lisp, as you have to do the trick of converting all "record" like data structures in list/lisp like equivalences.
Second:
Some people thin
Well, (Score:2)
Re:Unfortunately... Re:Don't fully agree. (Score:4, Insightful)
You don't have to do any kind of language design when you do Lisp programming. You can get a long way with just using plain function definitions. Yet you can easily define new syntaxes, control structures and stuff.Back when I was the proud owner of a Commodore C 128, I used to think similar things about useless stuff like GOSUB. Why can't we just stay with the more familiar GOTO that everyone understands?
Get over it. Learning new tools is usefull, but it's work. Get a good book on Lisp macros [paulgraham.com], and dive in.You are not alone. And, given that you can actually define a new syntax, many people tried to come up with alternatives to raw s-expressions. And indeed succeeded. However, none of these alternatives ever got too popular (the most successfull attempt might by the Dylan language, which started with s-expressions, but dropped them). People could have used alternative syntaxes, but the vast majority chose not to.
Re:Unfortunately... Re:Don't fully agree. (Score:2)
(when i say y
LISP misconceptions (Score:3, Informative)
You've been fed a batch of bad Kool-Aid. Fortunately, it's not too late to come around.
It's what LISP brings to the table above and beyond being a programming language (as most programmers think of the term) w
Re:LISP misconceptions (Score:2)
So you admit it!!!
;-)
I think that's the crucial problem with Lisp syntax. It's not mean to be easily read, but easily accessed from the language itself. My point is that Lisp is a 40 years old programming language, and that has its burdens. Everything discovered later about user friendliness is lacking. Lisp syntax is too simple for most everyday tasks as, for example, reading other people's code.
If you have to remove the main attribute of the language in order t
Re:LISP misconceptions (Score:2)
It is easily read. Just invest the time getting over the initial hurdle of the parentheses and it's a very easy-to-read language.
My point is that Lisp is a 40 years old programming language
John McCarthy first started working on LISP in the 50s. It's older than 40 years.
Lisp syntax is too simple for most everyday tasks as, for example, reading other people's code.
This would be true if LISP programmers generally had diff
Think Different!! Re:LISP misconceptions (Score:2)
Your example
(define _merge (l1 l2 cmp)
(cond
((null l1) l2)
((null l2) l1)
((cmp (car l1) (car l2))
(cons (car l1) (_merge (cdr l1) l2 cmp)))
(t
(cons (car l2) (_merge l1 (cdr l2) cmp)))))
is excellent.
a) I'm at first german, and english I learned in school. So: That above is completely unreadable to me!
the
Re:Think Different!! Re:LISP misconceptions (Score:2)
It's unreadable in English, too. It's perfectly legible when thinking in terms of formal computational theory. English, German or other natural languages are not good choices for programming; there's too much context sensitivity, too much implicit meaning, for it to reduce well to mathematics.
the only word in my english german dictionary where I can find a translation is null.
Your dictionary doesn't h
Re:Don't fully agree. (Score:2)
See: grunge.cs.tu-berlin.de/~tolk/vmlanguages.html
angel'o'sphere
Re:Don't fully agree. (Score:2)
ML is strongly typed which means its immediately useless. OOP can viewed as a relaxation from strong types (in C) to slightly dynamic types. You watch, eventually most languages will be dynamically typed).
One of the problems I had when I tried to learn Lisp was that it needs special editor support. This meant, as far as I could tell, I had to learn emacs.
First off, emacs is butter. Emacs is gorgoes. Emacs is so extensible and so effic
Re:Or you can meet Lisp's cousin, Dylan. (Score:2)
when I looked at dylan, around 1995, it looked like pure lisp and I ditched it.
Then it seems the dylan lovers ditched it because of the infix notation and more keywords
angel'o'sphere
Re:Can anyone (Score:2)
I came to Python from Perl, dealing mainly with text manipulation and glue-type applications, in which both Perl and Python are very adept languages. For me, Python was a breath of fresh air, - no more curly braces, indentation became an integral part of the code, rather than an annoyance (when it went wrong), and mainly, Python is OO by design, whereas in P
Re:Can anyone (Score:5, Interesting)
o Python uses indentation to denote code blocks, rather than curly brackets {} or other methods. This, along with a few other layout rules, makes Python code very strictly laid out. This makes it both easy to read and code, and you really don't miss being able to use your own crazy layouts (ahhh, perl
o Python is totally object orientated, and very intelligently designed in this department. Whereas in Perl (5) you have to jump through hoops to create objects, especially OO modules, in Python it's as easy as assigning a variable a new value.
o Python has quite a few very useful built in object types, including strings, ints, floats, lists, tuples, dictionaries, functions, classes, and more. This makes things easy if you don't want to make complex matrices. It is also easy to make more complicated types by embedding C...
o It is really easy to embed C/C++ code in Python, and vice versa, so where Python suffers on performance you can boost it with C/C++, or use a Python tool appropriately called "boost"
Generally, Python is very handy for anything from one-time dirty scripts to full applications (there are some good GUI toolkit ports about.. PyGtk, PyQt, PyKDE, wxWindows, etc), and is also very handy when developing prototypes.
But what really makes me like Python (as I'm not a language nerd by any measure) is that it is just *easy* and *fast* to code in... it doesn't get in your way.
(Pimping out...)
Re:Can anyone (Score:3, Informative)
Python is totally object orientated, and very intelligently designed in this department. Whereas in Perl (5) you have to jump through hoops to create objects, especially OO modules, in Python it's as easy as assigning a variable a new value.
Alright, lets set something straight here. The world is on a huge object oriented high. As has been said about strict types, object oriented programming is a hammer and everything all of a sudden looks like a nail.
Any language that is
Re:Can anyone (Score:2, Informative)
From the Learning Python book [oreilly.com] (see sec. 1.1.1.1):
So, while Python supports object oriented programming, it doesn't force you to use it.
Re:Can anyone (Score:2)
You don't have to use Python's object oriented features. For example, you can find all of the 22-character long English words with only the tiniest sprinkling of OO:
>>>for w in filter(lambda x: len(x) == 22, file('/usr/share/dict/words').readlines()): print w
electroencephalograph
Mediterraneanizations
OTOH, people high on OO could write:
>>> print 22. __add__(3)
25
Python gives you both
Re:Can anyone (Score:2)
File "<stdin>", line 1
print 2.__add__(3)
^
SyntaxError: invalid syntax
>>>
Hum, seems you cannot do that. Of course, in Ruby you can do 10.+(29)
Re:Can anyone (Score:2)
Try:
.__add__(3)
2
or even:
.__add__(3)
2.
Re:Can anyone (Score:3, Informative)
It's available if you want to use it, but you're not forced to
use it when it's not appropriate.
Re:Can anyone (Score:2, Informative)
ANYTHING that you can do in Lisp/Ocaml (and other functional languages), you can do in Python. Go read a bit. Python does not force you into any style. You're free to use whatver you want.
Re:Can anyone (Score:2)
Personally I think this is a flaw and not a feature. One of my pet peeves is scrolling to the end of some function and seeing this.
}
}
}
With python you don't even have that.
I like the php alternative best.
endif;
endwhile;
endforeach;
}
Re:Can anyone (Score:3, Informative)
Not really. 2.2 and up get a little closer to that, but Python is really a procedural language with a very nice but very optional set of OO features. (Internally, the Python and Perl OO implementaions are very similar, even if Perl's hideous object syntax does a good job of hiding it.) This is a nice pragmatic approach, akin to what Objective C does.
If OO purity is one of those things that appeals to you, Ruby or Smalltalk might be fun toys.
Re:Can anyone (Score:5, Informative)
Re:Can anyone (Score:5, Informative)
1. Python as a scripting language has several features seen in Objective C(and other similar languages) not found in C++. Class members can be detected and bound at runtime, further it's possible to search a classes members for information.
2. Pydoc and documentation strings. Python has built in support for documentation strings, and a great utility for automatically generating documentation. Documentation is actually a part of the programming language, and not an after-market add-on.
3. Dictionary objects, tuples, lists - are all part of the basic language. Dictionary objects allow interesting hash tables to be created without much effort at all. This feature is seen in Perl.
4. Maybe a miss feature, but enforced indentation creates much easier to read code.
5a. The shelf object. This essentially allows any object to have it's runtime information stored in an easy and effecient matter. It can then be reloaded after a run.
5b. The pickle object again allows objects to easily be stored in files.
6. Python is _EXTREMELY_ easy to extend using the Python C API.
7. Python includes functional programming aspects such as mapping and lambda forms.
8. Python includes an extremely complete library that does just about everything one would desire to be able to do. Using the python runtime library allows your code to be easily portable without the headaches involved in C/C++ porting.
9. Using psyco, it's possible to have Python code JIT on i386 processors. This gives a significant performace boost.
10. A development community and support community second to none.
There are other aspects that I haven't touched on here, but these are the major things I've found helpful so far.
Re:Can anyone (Score:) ).
If you mean this class of languages as opposed to C, C++, Java and so on, well, it becomes a matter of what you want to accomplish. The great benefits of these interpreted languages are that they make development very fast, compared to the more traditional languages (yes, Java is interpreted, but it is still designed as a traditional language). You spend more time solving your task and less time managing the mechanics of development. Also, they really make use of the benefits of being interpreted with things like closures, dynamic code evaluation and so on. And they typically have very complete, transparent access to the surrounding system - why spend two days writing some hairy functionality when you can trivially filter your data through an external application that already does the whole job for you? Do not underestimate "scripting type glue".
They do make a pretty good fit running large systems - the Swedish pension management system is all written in Perl, for instance, and Zope is written in Python. They are also quite efficient; they are on the whole as fast as a Java implementation, and occasionally (when the task plays to the specific language's strengths), quite a bit faster.
I typically use C/C++ and Perl for development, and every time I've been using Perl for a while, I get bouts of frustration with traditional languages for the lack of such things as hash datatypes and inline regular expressions. But for some tasks, traditional languages are the way to go.
Re:Can anyone (Score:5, Interesting)
I'm a professional C++ programmer, and a devout pythonista. What I miss most in C++ are the easy-to-instantiate datatypes like tuples. It's so much easier to pass a relatively simple datatype as a tuple, as opposed to introducing a whole new class and even *gasp* a new file to do the trick.
For example I can trivially code a function that returns an array of (name, address) tuples, and I can easily manipulate such an array:
tuples = get_address_entries()
for name,address in tuples:
print name,"lives in",address
After doing Python for a while, one sees how much static typing gets in your way of doing things the "proper" way, and very often one tries to avoid doing the damn thing at all... resulting in a sub-optimal design. Python allows you to be all you can be
C++ and tuples (Score:3, Informative)
And regarding your example code, the same can be done trivially in C++ with the added significant bonus of strong static typing:
Re:C++ and tuples (Score:3, Interesting)
Yes, I have written code like this, and generally like STL (too bad I can't use any of it for my work). However, it requires quite a lot of typing, and the resulting code is not as easy to understand.
Three lines of Python, three lines of C++ (barring the typedef, which is only there to make the rest of it easier to read).
And therein lies the catch, typedef is needed.
Re:C++ and tuples (Score:2, Informative)
You should always use the prefix increment ++i when using STL iterators (although in the case of vector they might just be typedefs for pointers).
The reason is that since the postincrement must return the previous value, the iterator has to be copied.
Re:C++ and tuples (Score:2)
Explicit typing (Score:2, Informative)
However, you're right about the 'easy to instantiate' part, but I don't think static typing is really the problem. The problem with types in C++ is that you have to explicitly mention them, when a lot of the time the compiler could figure them out itself. In your example, you could do:
std::vector<std
Re:Explicit typing (Score:2)
Not built-in enough; In python I can do:
mylist = [("hello",3),("world",4)]
Additionally, some environments shun STL (and templates in general) because it leads to code bloat.
There's been some discussion about introducing type-inference into the language, so you could say:
auto entries = get_address_entries();
Like that is ever going to happen... I don't even think any C++ compiler has achieved ISO C++
Re:Can anyone (Score:2)
No. That's a big advantage, but the really great advantage is that you can modify things on the fly. You can add methods to a class, or to objects of a class, during the execution of the program. You can create code that you then execute, etc. (I know that you *can* do that in C... but just try it!)
Lisp has most of the advantages that I mentioned in the preceeding paragraph, but it is
Java is not interpreted (Score:2, Informative)
Re:Java is not interpreted (Score:2, Insightful)
Actually, it's not FUD.
For string processing, database access, and pretty much...well...everything, Perl *smokes* Java. Python is slower than Perl, but Guido acknowledges that.
Also, though one can buy into the Java Marketing Machine and proclaim that there is indeed a "Virtual Machine" and "Java is not interpreted", in fact, very few languages are actually "interpreted" i
(OT) Hello, world! (Score:2)
You can write "Hello, world!" in only 500k?
I'm impressed. In my youth, we did it using feeble tools like x86 assembler on MS-DOS, and it required a massive 25 bytes, more than half of them taken by the string in question.
I know about apples and oranges, but I can't help wondering if the w
Re:Java is not interpreted (Score:2)
Links to studies? Statistics?
Java bytecode is produced from source, but this bytecode, too, must be executed as data, not as a native instruction.
If you'd read the actual post you responded to, you'd learn that bytecode is nowadays compiled to native instructions.
All of this is relatively meaningless; arguing that Java's "virtual machine" is not an interpreter does not say anything for the language
Of
Re:Can anyone (Score:) ).
I don't think that is really true. How cheap and easy is it to make a COM object or Java class in Perl? What if you want
Why I like Python (Score:2, Insightful)
1) Indentation instead of bracing. Yes, I know some people hate it but for me it makes the structure so clear.
2) Object orientation. I did OO with C++. I actually understood it with Python.
3) The smoothest ever integration to low level languages like C. Gotta love it.
4) Easy to learn. Write ab initio code with C/Fortran and never-programmed-before people interface it with Python [fysik.dtu.dk]. Then, grind out those MSc and PhD theses...
Re:Why I like Python (Score:2)
No suprise here:) C++ implimentation of OO is marginal to say the least. The syntax agrees to the letter of OO design, but not the spirit. The result is that most C++ code consists of procedural function dressed up in OO clothing. Many of the advantages of OOD can not be realised in C++ without some ugly code ( uglier then it already is). I learned OOP in the late 80s the hard way, by implementing all of the concepts and all of
Re:Why I like Python (Score:2)
Third party modules (Score:2)
I don't have much experience with other scripting languages, but I've found python to have a lot of very easy to use modules. I've found modules for polynomial fitting to data, large data sets, polygons operations - just out there when I looked for them. And many a useful library in C or C++ has been wrapped in Python. For example, I've written some CAD software (for very specific design operations we do where I work), and needed a way to me
Re:Can anyone (Score:2)
Anyone who can't get use to indentation based block delimiters within a few minuets, ought not be programming in PERL. It demonstrates a complete lack of the programming discipline that is crucial for creating maintainable PE
From the interview (Score:3, Funny)
Guido: I smoked a lot of crack that day.
Re:Dump the significant whitespace! (Score:2, Insightful)
Proper indentation of a program is considered good style in all computer languages, including Perl. It is simply goo
Re:Dump the significant whitespace! (Score:2, Interesting)
In python, it is always 8 spaces. It's considered bad style to use tab. If you use emacs, then emacs will automatically use the correct settings in python mode.
The simple fact is that if you ignore the usual style guidelines for any programming language, there are obvious gotchas. The whitespace gotchas you mention are relatively harmless, as they are caught by the compi
Python vs. the others (Score:3, Interesting)
Many people use Python for tasks they used to do in Perl, but I don't see Python replacing Perl. They serve different purposes, for the most part.
Ruby is also an interesting language, although I don't personally know much about it, except that it aims to be truly OO. Again, slightly different purposes, but I don't think Ruby will ever be very widespread.
Re:Python vs. the others (Score:2)
Could you please explain how PERL and Python serve different purposes?
I'm curious because I use Python for exactly the kind of stuff that I used
to use PERL for. The whole reason I found Python was because I was looking
for a substitute for PERL. After having used Python for some time, I've
discovered that certain things are easier in Python than PERL, and vice
ver
Re:Python vs. the others (Score:3, Interesting)
For one thing, Perl is much more used by UNIX (*BSD and GNU/Linux included) system administrators. Some people think Perl is more in the UNIX spirit than Python.
Python focuses more on OO issues and the Pythonic way. Perl is more versatile in terms of syntax.
Basically, there's some differences in the overall design philosophy.
But you are right. You can easily use Python for things you used to do in Perl and vice versa. But there ar
Re:Python vs. the others (Score:3, Funny)
Difficult and surly?
;-)
Re:Python vs. the others (Score:3, Informative)
Perl is a better shellscript than shellscript. Systems administrators who are tired of dealing with the horrors of shell script like perl. Perl is also great for text manipulation. One can write insanely powerful and terse code for this in perl (like sed on steroids). People who yank a lot of text around (web developers, sys admins) often like perl for this reason.
Python is more of a "programmers language". You can't write insanely t
Is Python still lacking a macro system? (Score:3, Interesting)
I haven't been following python for a long time, though I've used it for a few projects. I know a lot of Lisp-like features such as lambda, eval, etc. have been added to it. (Java's adding a *lot* of features that Dylan has had since its inception, such as keyword arguments... but adding those features to Java makes the language even more ugly.) But what about a real macro system (and I don't mean a C style macro system)? I assume that it would be difficult to incorporate into Python because the Python syntax is not as consistent as the Lisp-family languages.
I assume that Python is still not efficiently compilable either, right? I think Guido was discussing a sealing mechanism for Python similar to Dylan's. Gywdion Dylan can produce code that's as fast as code written in C... and there's still many more optimizations that can be implemented into the compiler.
Re:Is Python still lacking a macro system? (Score:5, Interesting)
You don't need macros since Python is dynamically typed and even functions are first-class objects. At least I know I never missed the C preprocessor after moving to Python
I assume that Python is still not efficiently compilable either, right?
Not quite. There is however a dynamic compiler called Psyco [sourceforge.net], which works by creating static versions of functions at run-time to reduce type-checking.
My own experience is that Psyco makes Python code about 400% faster in real applications. Still an order of magnitude worse than C, but comparable to or better than other languages when it comes to tasks that Python used to do significantly slower.
Re:Is Python still lacking a macro system? (Score:2, Informative)
> typed and even functions are first-class objects.
You don't even know what you are missing.
Lisp and Dylan are dynamically typed with functions as first class objects. However those features are orthogonal to a true macro system... They're not related. A true macro system allows a capable programmer to extend the language itself. Need a new control struct to lock a resource and then automatically unlock it at the end of the block of code? W
Re:Is Python still lacking a macro system? (Score:2)
Not necessarily a good solution, but it might work.
Not necessarily a bad one. (Score:2)
The only difference I think it would have is that it would run at execution time, not compile time. But I'm sure that a not too far away future version of Python will do that...
Re:Is Python still lacking a macro system? (Score:5, Insightful)
Macros would indeed be more difficult to implement in Python, because data and code are not as interchangable as in Lisp (e.g., (car 1 2) being code, '(car 1 2) being data). Macro-like manipulations of Python code would be rather difficult. But there has been discussions about ways of achieving the same flexibility without quite so much generality.
In a related example, some people feel that code blocks, ala Ruby or Smalltalk, are the right way to do control structures. Indeed they are very general. Python instead has developed notions of iteration, generation, and the use of first-class functions, and together they are all quite general as well -- you can do what you need to do. While more eclectic than anonymous functions/lambdas/closures, they are arguably more transparent -- you don't know what a function might do with a code block, and it can greatly effect surrounding code.
So it is with macros -- they are extremely general, and can do unexpected and magic things, (which is not in fitting with core Python principals). As Python grows alternatives, more things need to be built into the languages, but the result is a set of predictable and well-known idioms. Python is a full language, not the basis for other languages, as Lisp can become.
As far as performance, there are a number of things like Psyco, Pyrex, Numeric, and Weave/SciPy, which can handle performance problems (noting that in most application performance is not a problem). The result is again somewhat eclectic, but pragmatic. There's a wide variety of ways to optimize a Python program, many of which are just normal programming optimization (caching, making a process persistent, lazy loading, etc), as well as Python modules written in C or other compiled languages (potentially aided by things like SWIG, Pyrex, or ctypes)
Re:Is Python still lacking a macro system? (Score:2)
Java is adding keyword arguments? Any pointers for that? How does it work with
Re:Is Python still lacking a macro system? (Score:2)
The ICFP competitions for the last few years make an interesting testing ground. While not large scale developments in the grand scheme of things, the requirements are non-trivial and well-specified. There have been entries in many languages, and performance comparisons can
Favorite quote (Score:2, Interesting)
This is mine (Score:3, Interesting)
GvR: That's a deep philosophical question. I'm optimistic about that in theory.
[...]
Given that I believe everybody can learn to read and write, given the right education and circumstances--obviously if your parents have no money and you're sent to work when you're seven ye
Python is great. (Score:5, Informative)
My experiences (Score:3, Interesting)
I've been a Python user myself and I find it quite remarkable how it has evolved since its 1.5.2 to the pointer where they are now 2.3. More and more (interesting) software is being written for it. But evenly important is the code base of Python. It's C implementation is very clean written and very easy to use so one can write extension modules very fast.
Why python rules (Score:3, Informative)
Re:Why python rules (Score:2)
GPL (Score:2)
WTF? The GPL didn't exist in 1991? I guess I was hallucinating when I was using GNU Emacs and GCC in the 80s.
Inaccuracy about GPL (Score:3, Informative)
van Rossum :
parent poster (Slothrup) :
Yes, I was also surprised at this large factual error.
GPL Version 2 [fsf.org] : Jun
Please change "Von" to "Van" (Score:2, Informative)
"Von" is the German version. Dutch people don't like to be taken for Germans, for historical reasons..
My favorite quote from the article (Score:3, Funny)
GvR:
...I do it myself by staying where I am and giving keynotes at conferences and making my personal life the subject of discussions on Slashdot. ...
ORN: Perhaps they should get lives of their own instead of discussing yours?I think he's talking about us...
Great for the occasional programmer (Score:4, Interesting)
I don't get to program much, since I have a day job, and to make matters worse, my formal training with computers was brief. Basically, I learned Python on public transport, communiting to and from work (the Python Cookbook causes people to turn their heads, by the way). I tried learning Java at one point, but the problem is that there are too many details and formalisms that you have to remember to even get anything off the ground.
Not so with Python. Basically, you just write what you want to code. Want to know if there are characters in a string?
(This is new in Python 2.3, and I can't get the indentation to work here). Fantastically intuitive.
The only "problem" is the way the library keeps growing from release to release: Something that you had to code yourself a while back suddenly is a trivial feature. More of an embarrassment of riches than a real problem, but it does make you feel like a fool sometimes. "Why code that socket server? Just use..."
One other nice thing about learning Python is how amazingly friendly and helpful their tutor list [python.org] is. I've asked some amazingly stupid questions in my time, and they have been very gentle and kind.
Re:Welcome to our Overlords (Score:2)
Re:Welcome to our Overlords (Score:2)
Re:New processors (Score:2, Informative)
Re:extensible AI game programming inputs? (Score:2)
PHP: Capable != Appropriate (Score:3, Insightful)
A more appropriate introduction for this
Re:Fool! (Score:3, Funny)
Re:YAPLWDN (Score:2) | https://developers.slashdot.org/story/03/08/16/1331240/guido-van-rossum-interviewed?sdsrc=next | CC-MAIN-2016-36 | refinedweb | 6,017 | 63.59 |
In the last column, I talked about how to read and write from multiple filedescriptors seemingly simultaneously with the select() function call. Using multiplexed I/Olets programs block while waiting for notification that some file descriptors are ready for readingor writing.
Using multiplexed I/O of this form is quite common. High-performance network servers --including mail, workgroup, and Web servers -- all must handle multiple simultaneous connections,often thousands of connections. While select() looks like it would handle this kind of loadjust fine, it has a fatal flaw that limits its performance. Figure 1 shows the prototype ofselect().
Figure 1: A Prototype of the select() FunctionCall
int select(int numfd, fd_set * readfds, fd_set * writefds,
fd_set * exceptfds, struct timeval * tv);
Think about what happens when a program wants to use select() to block until either filedescriptor 1000 or 1001 is ready to be read from. It will have to call select() with anumfd of 1002 and the appropriate value of readfds. Easy enough, right?
The Problem with select()
Now try to think about things from the kernel's point of view. The kernel gets a request from auser-space program to monitor some file descriptors for reading. It knows that the file descriptorsare smaller than 1002, but that's all it knows. To figure out which file descriptors the program isinterested in, the kernel needs to check all 1,002 file descriptors, safely, one by one.
Needless to say, this is terrible for performance. As most busy servers call select()quite often, checking 1,002 file descriptors when the program really only cares about two is quiteinefficient. Additionally, if the file descriptors were instead 10001 and 10002, things would beworse by an order of magnitude. All of this means that select() has a very poor property:
The performance of select() is directly related to the value of the file descriptorsbeing monitored. No other Linux system call depends on file descriptors' values. (The number of filedescriptors monitored would be a more natural characteristic for performance to depend on.) As aspecific example of this, select() performs very poorly once the file descriptors getlarge.
select() has one additional problem -- how many file descriptors should fd_set beable to hold? Ideally, it should be able to hold as many file descriptors as a process can haveopen. Traditionally, Linux allowed only 1,024 file descriptors per process, so this was reasonable.However, now that patches are available for the Linux kernel that raise that limit, fd_setneeds to grow. In fact, it now needs to hold hundreds of thousands of file descriptors. Iffd_set were actually enlarged, each fd_set structure would measure at least 12K insize! Needless to say, handling fd_set structures of that size would have a noticeableperformance impact!
The 2.2 Linux kernel introduced the poll() system call. poll() is supported bymost Unix systems and is included in the Single Unix Specification. It duplicates the function ofselect() but scales much better. Every glibc-based Linux system providespoll(); those running on older kernels emulate poll() via select() in the Clibrary, but applications need not know the difference. Here's what poll() looks like:
int poll(struct pollfd *ufds, unsigned int nfds, int timeout);
The first parameter, ufds, must point to an array of struct pollfd. Each elementin the array specifies a file descriptor that the program is interested in monitoring, and whatevents on that file descriptor the program would like to know about. The next parameter,nfds, tells the kernel how many total items are in the ufds array. The finalparameter, timeout, is the maximum time, in milliseconds, that the kernel should wait for theactivities the ufds array specifies. If timeout contains a negative value, the kernelwill wait forever. If nfds is zero, poll() becomes a simple millisecond sleep.
Building the ufds array is the most complicated part of using poll(). Eachstruct pollfd looks like:
struct pollfd {
int fd; /* file descriptor */
short events; /* requested events */
short revents; /* returned events */
};
The fd element contains the file descriptor whose events this structure describes. Theevents field is a bitmap of events the application would find interesting, and therevents field is filled in on return from poll() with a list of the events that haveactually occurred on the file descriptor. Programs can monitor one or more of POLLIN,POLLOUT, or POLLPRI. The first two monitor whether there is data to read and whetherwriting will block. The last tells the application if there is out-of-band data available on thesocket, which can occur on sockets. These values can be logically ORed together, letting the programmonitor a single file descriptor for both reading and writing.
On return from poll(), the revents field is filled with the same POLLIN,POLLOUT, and POLLPRI values, indicating which of the "interesting" events the filedescriptor is now ready for. The following bits may also be set in revents:
POLLERR: This is set if an error condition has occurred on the file descriptor.
POLLHUP: This is set when the file descriptor refers to a terminal that has been hung upon.
POLLNVAL: If fd doesn't refer to a valid (open) file descriptor, this value isset.
On return, poll() returns -1 if an error occurred, 0 if the timeout was reached, or thenumber of file descriptors that have some revents fields filled in.
Listing One is a sample program that performs exactly the same function as theselect() test program in last month's Compile Time.
Listing One: The poll() System Call atWork
#include <fcntl.h>
#include <stdio.h>
#include <sys/poll.h>
#include <sys/time.h>#include <unistd.h>
/* For simplicity, all error checking has been left out */
int main(int argc, char ** argv) { int fd; char buf[1024]; int i; struct pollfd pfds[2]; fd = open(argv[1], O_RDONLY);
while (1) { pfds[0].fd = 0; pfds[0].events = POLLIN;
pfds[1].fd = fd; pfds[1].events = POLLIN;
poll(pfds, 2, -1); if (pfds[0].revents & POLLIN) { i = read(0, buf, 1024); if (!i) { printf("stdin closed\n"); return 0; } write(1, buf, i); }
if (pfds[1].revents & POLLIN) { i = read(fd, buf, 1024); if (!i) { printf("file closed\n"); return 0; } write(1, buf, i); } } }
If you compare Listing One to select's test program, you'll find very fewchanges.
The easiest way to run this program is to use a named pipe for the file specified on the commandline. Here's how (assuming you have the source code for the above program in the filetestpoll.c):
# make testpoll
# mknod p mypipe
# ./testpoll mypipe
Since the poor scalability of select() motivated us to look at poll(), we shouldlook at poll()'s performance characteristics. The number of struct pollfd items passedto it is going to have an effect on performance, which is reasonable, since the amount of work thesystem call is doing is directly related to the number of items in that array. However, the value ofthe individual file descriptors has no effect. This is a major win for poll() overselect().
The other advantage to poll() is that the largest file descriptor it can handle islimited by the size of an int, not by the size of another structure (such as anfd_set). This lets poll() work with Linux kernels that allow hundreds of thousands offile descriptors per process, which is extremely useful in some cases.
So much for multiplexed I/O -- once you understand nonblocking I/O, select(), andpoll(), you're pretty much covered. There is one other method for multiplexing I/O requests-- asynchronous I/O -- which I'll cover in a future article.
Pipes
Since I have some room left in this column, I'll talk about pipes. Pipes use the same API asregular files -- in fact, I suggested using a pipe when running the poll() program I wrote.Since they look just like normal files, programs don't normally care whether they're writing to apipe or a file (or a terminal device for that matter). This flexibility lets most programs deal withpipes quite well, a fact Linux takes advantage of with the pipe (|) shell construct. This replacesone program's standard output and another program's standard input with a single pipe, which allowsthe first program to talk to the second.
There are two types of pipes: named and unnamed. Simply enough, named pipes have a name in thefilesystem and unnamed pipes do not. An unnamed pipe disappears once no processes are reading fromit, while named pipes survive process termination and reboots, and must be removed by unlink.We're only interested in discussing unnamed pipes right now, but the two are quite similar. The firstthing to know is how to create a pipe:
int pipe(int fds[2]);
By passing an array of two file descriptors to pipe(), you create a pipe. The first filedescriptor in the array becomes the end of the pipe that can be read from, and the second is thewritable end of the pipe. Any data written to fds[1] will show up in fds[0] in thesame order it was written. (You'll sometimes hear Linux pipes called FIFO pipes, because the Firstdata In is the First data Out.) Pipes are half-duplex -- you can send data only one direction. Thepipe() system call returns 0 on success, and nonzero on failure. The only reasonspipe() would fail are a shortage of system resources, the parameter being invalid, or theprocess having too many file descriptors already open.
Listing Two is a simple program that creates a pipe and sends a message down it.
Listing Two: Creating a Pipeline and Sending aMessage
#include <stdio.h>
#include <string.h>#include <unistd.h>
int main(int argc, char ** argv) { int fds[2]; char message[80]; char * string = "Hello World"; int size;
pipe(fds); write(fds[1], string, strlen(string));
/* we read -1 bytes to leave room for the trailing '\0' */ size = read(fds[0], message, sizeof(message) - 1); message[size] = '\0';
printf("Got the message '%s' from the pipe\n", message);
close(fds[0]); close(fds[1]);
return 0; }
Pretty simple stuff really. The most interesting bit is that the read() call doesn't waitfor (sizeof(message) - 1) bytes to show up in the pipe before returning; it returns the firsthunk of data that appears, no matter how short it is. A pipe can hold a relatively small amount ofdata before it fills up; once it fills up, any write()s to the pipe will block until somedata is removed from the pipe by a reader. (Of course, the various multiplexed I/O techniques we'veintroduced let programs avoid blocking if it becomes necessary!) I'll jazz this program up a bit inListing Three by having it create a child, which generates the message to send to theparent.
Listing Three: InterprocessCommunication
pipe(fds);
if (!fork()) { close(fds[0]); write(fds[1], string, strlen(string)); close(fds[1]); exit(0); }
close(fds[1]);
/* we read -1 bytes to lave room for the trailing '\0' */ size = read(fds[0], message, sizeof(message) - 1); message[size] = '\0';
close(fds[0]);
This isn't much different from the first example, but it shows something that would be much moredifficult without pipes -- interprocess communication. The two processes are communicating with eachother, albeit very simply.
There are two special cases to think about. The first is, what happens when the writer closesits end of the pipe, and the reader tries to read? When there are no writers, and no data in thepipe, read() requests on the pipe will return immediately with 0 bytes read (but not an errorcondition). It's also worth mentioning that both select() and poll() consider a pipewith no writers ready for reading, since a read() on the file descriptor won't block. Thenext question is, what happens when you write to a pipe with no remaining readers? Let's give that atry (Listing Four).
Listing Four: Writing to a Pipe With NoReaders
int main(int argc, char ** argv) { int fds[2]; char * string = "Hello World";
pipe(fds); close(fds[0]); write(fds[1], string, strlen(string));
printf("Wrote the message into the pipe\n");
Here's what happened when I ran it:
# gcc -Wall -o writetest
writetest.c
# ./writetest
Broken pipe
#
Broken pipe? Where did that come from? And why didn't I see the results of my printf()?Obviously, something more drastic then a failedwrite() is occurring here.
Linux considers writing to a pipe with no readers a pretty serious matter, since data could getlost. Rather then let a process blithely ignore error codes from write() and go about itsbusiness, Linux sends the process a signal called SIGPIPE. Unless the process hasspecifically asked to handle the SIGPIPE signal, Linux kills the process. The Broken pipemessage is from the shell, which is telling you that the processdied because of a broken pipe. We'll talk much more about signals in the next column.
For now, you can get write() to return EPIPE rather then die by telling the kernelyou'd like to ignore the signal. Listing Five is a test program that does just that. You'llunderstand more about what's going on in this program in another month.
Listing Five: IgnoringSIGPIPE
#include <stdio.h>
#include <string.h>
#include <sys/signal.h>#include <unistd.h>
signal(SIGPIPE, SIG_IGN);
Erik Troan is a developer for Red Hat Software and co-author of the book Linux ApplicationDevelopment. He can be reached at [email protected].. | https://www.linuxtoday.com/blog/multiplexed-i0-with-poll.html | CC-MAIN-2019-30 | refinedweb | 2,215 | 63.29 |
itext pdf
itext pdf i am generating pdf using java
i want to do alignment in pdf using java but i found mostly left and right alignment in a row.
i want to divide single row in 4 parts then how can i do
itext pdf - Java Interview Questions
itext pdf sample program to modify the dimensions of image file in itext in java HiIf you want to know deep knowledge then click here and get more information about itext pdf program.
Adding images in itext pdf
Adding images in itext pdf Hi,
How to add image in pdf file using itext?
Thanks
Hi,
You can use following code... image in the pdf file.
Thanks.
regarding the pdf table using itext
regarding the pdf table using itext if table exceeds the maximum width of the page how to manage
Open Source PDF
Open Source PDF
Open Source PDF Libraries in Java
iText is a library that allows you to generate PDF files on the fly...: The look and feel of HTML is browser dependent; with iText and PDF you can
about pdf file handeling
about pdf file handeling can i apend something in pdf file using java program?
if yes then give short code for it.
You need itext api to handle pdf files. You can find the related examples from the given link:.
Read PDF file
Read PDF file
Java provides itext api to perform read and write operations with pdf file. Here we are going to read a pdf file. For this, we have used PDFReader class. The data is first converted into bytes and then with the use
reading from pdf to java - Java Beginners
the following link:
Thanks...reading from pdf to java How can i read the pdf file to strings in java.
I need the methods of reading data from file and to place that data
PDF to Image
PDF to Image Java code to convert PDF to Image
Merging multiple PDF files - Framework
Merging multiple PDF files I m using the iText package to merge pdf files. Its working fine but on each corner of the merged filep there is some... the files are having different font.Please help code to convert pdf file to word file
Java code to convert pdf file to word file How to convert pdf file to word file using Java
How to make a Rectangle type pdf
How to make a Rectangle type pdf
...
make a pdf file in the rectangle shape irrespective of the fact whether it exists or not.
If it exists then its fine, otherwise a pdf file will be created
how to get the image path when inserting the image into pdf file in jsp - JSP-Servlet
://
Thanks...how to get the image path when inserting the image into pdf file in jsp I am using the below code but i am getting the error at .getInstance. i am
change pdf version
change pdf version
In this program we are going to change the version of
pdf file through java program.
In this example we need iText.jar file, without this jar file
Convert ZIP To PDF
Convert ZIP To PDF
Lets discuss the conversion of a zipped file
into pdf file with the help of an example.
Download iText API
required for the compilation
java - Java Beginners
java Hii
Can any ome help me to Write a programme to merge pdf iles using itext api. Hi friend,
For solving the problem visit to :
Thanks
interview path pdf
interview path pdf Plz send me the paths of java core questions and answers pdfs or interview questions pdfs... the interview for any company for <1 year experience thanks for all of u in advance
Please visit
How to convert a swing form to PDF
How to convert a swing form to PDF Sir,
I want to know about how convert a swing form containing textbox,JTable,JPanel,JLabel, Seperator etc swing menus to a PDF file using java code
Generate unicode malayalam PDF from JSP
PDF reports using IText,but I dont know how to generate unicode malayalam... a simple pdf generator code using itext api.
Try this:
<%@page import="java.io....Generate unicode malayalam PDF from JSP Hi,
I want to generate
Concatenate two pdf files
Concatenate two pdf files
In this program we are going to concatenate two pdf files
into a pdf file through java program. The all data of files
To convert Html to pdf in java - Java Beginners
To convert Html to pdf in java Hi all,
I want to convert html file to pdf file using java. can any one help me out.
Thanks & Regards
Santhosh Ejanthkar Hi Friend,
Try the following code:
import
pdf Table title
pdf Table title
... to the table of the pdf file. Suppose we have one pdf file in
which we have a table and we..., otherwise a pdf file will be created.
To make a program over this, firstly we
Rotating image in the pdf file
Rotating image in the pdf file
...
insert a image in a pdf file and rotate it irrespective of the fact whether... will help us to make and use pdf
file in our program.
Now create a file named
iText support android? - MobileApplications
iText support android?
would iText support android?
i ve linked the iText.jar file with my android project developed in eclipse...
//code
Document document = new Document(PageSize.A4, 50, 50, 50, 50
convert data from pdf to text file - Java Beginners
convert data from pdf to text file how to read the data from pdf file and put it into text file(.txt
How to read PDF files created from Java thru VB Script
How to read PDF files created from Java thru VB Script We have created the PDF file thru APache FOP but when we are unable to read the
data thru... file?
Java PDF Tutorials
combine two pdf files
combine two pdf files
In this program we are going to tell you how you can
read a pdf file and combine more than one pdf into one.
To make a program over this, firstly
pdf file measurement
pdf file measurement
... through java program. First ad the value in the paragraphs
then add it finally... on a pdf
file,.
To make the program for changing the pdf version firstly we have adjust a size of a pdf file
How to adjust a size of a pdf file
...
adjust the size of a pdf file irrespective of the fact whether it exists or not.
If it exists then its fine, otherwise a pdf file will be created.
Tips and Tricks
in the form of a PDF document using Servlet. This program uses iText, which is a java library containing classes to generate documents in PDF, XML, HTML, and RTF...;
Send
data from database in PDF file as servlet response
Insert pages pdf
Insert pages pdf
In this program we are going to insert a new blank pages
in pdf file through java program...,
com.lowagie.text.pdf.PdfWriter class is used to write the document on a pdf
How to Make a Pdf and inserting data
How to Make a Pdf and inserting data
... make a pdf
file and how we can insert a data into the pdf file. This all...* and com.lowagie.text.pdf.*
package which will help us to make a pdf file.
The logic
Java Code - Java Interview Questions
on PDF using Java visit to : Code Hi,
How to convert word document to PDF using java????
Can you give me a simple code and the libraries to be used?
Thanks
Java convert jtable data to pdf file
Java convert jtable data to pdf file
In this tutorial, you will learn how to convert jtable data to pdf file. Here
is an example where we have created... have fetched the data from the jtable and
save the data to pdf file.
Example
Probem while creating PDF through JSP-Servlet - JSP-Servlet
Probem while creating PDF through JSP-Servlet Hi,
I have a web-app in which I want to convert MS-Office documents to PDF online.
I'm using PDFCreator for this. If I call the PDFCreator through a standalone java app or through | http://www.roseindia.net/tutorialhelp/comment/98508 | CC-MAIN-2014-52 | refinedweb | 1,374 | 68.91 |
I've been trying to figure out what is wrong with my code for days; I can't seem to find it. If statements confuse me quite a bit, so I'm thinking I messed those up, but still not sure. All I know is that I'm super stumped and would love all of the help I can get.
#include <stdio.h>
// function main begins program execution
int main(void)
{
int numberOfDays = 0;
float numberOfMiles = 0;
float milesCharge = 0;
float milesTotal = 0;
float total = 0;
float subtotal = 0;
float tax = 0;
do {
printf("%s", "How many days was car rented?\t");
scanf("%d", &numberOfDays);
} while (numberOfDays < 1 );
do {
printf("%s", "How many miles were driven?\t");
scanf("%d", &numberOfMiles);
} while (numberOfMiles > 1);
if (numberOfMiles > 1 || numberOfMiles < 200) {
milesTotal = numberOfMiles * .40;
} else {
milesTotal = numberOfMiles * .35;
}
subtotal = milesTotal + numberOfDays * 15;
tax = subtotal * .06;
total = tax + subtotal;
printf("\nSubtotal:\t\t\t$%.2f\n", subtotal);
printf("Tax Amount:\t\t\t$%.2f\n", tax);
printf("Total:\t\t\t\t$%.2f\n", total);
printf("\n");
}
If
numberOfMiles should be of type
float you have to exchange the following line
scanf("%d", &numberOfMiles);
by
scanf("%f", &numberOfMiles);
or you can set the type to
int.
--
If you want to avoid an endless loop, exchange
} while (numberOfMiles > 1);
by
} while (numberOfMiles < 1);
Btw. why not allow distances that are shorter than a mile?
e.g. by
} while (numberOfMiles < 0);
For more specific answers, you have to be more precise. | https://codedump.io/share/g4aB85xZF5JV/1/cannot-figure-out-why-my-code-is-not-working | CC-MAIN-2017-09 | refinedweb | 245 | 82.65 |
I'm using Inkscape 0.48.3.1 r9886 on Gentoo Linux.
When I'm trying to run *any* extension from the Effects menu, I get error messages similar to the following one:
Traceback (most recent call last):
File "markers_
import random, inkex, simplestyle, copy
File "/usr/share/
u'sodipodi' :u'http://
^
SyntaxError: invalid syntax[/code]
It seems like some problem with Python, but I don't know why my Python doesn't recognize proper Python scripts from Inkscape. My python version is:
python --version
Python 3.2.3
I remember that few months ago, when I first compiled Inkscape, I used it without problems and everything worked fine, so I don't know why it stopped working.
Recompilation of the package doesn't change anything.
Any ideas what could be wrong and how to fix it?
Question information
- Language:
- English Edit question
- Status:
- Solved
- For:
- Inkscape Edit question
- Assignee:
- No assignee Edit question
- Solved:
-
- Last query:
-
- Last reply:
- | https://answers.launchpad.net/inkscape/+question/215655 | CC-MAIN-2021-39 | refinedweb | 159 | 61.67 |
JPEG File Format
Get mor data for your byte...
Jpeg (.jpg) File Format
Explained
by [email protected]
You can't use the internet or anything without coming across the jpg file
format. Its used everywhere, its principle is even used in video
compression as well....as its that popular. Basically, tga's and bmp's are
great, but there to big!....jpg with the aid of some tricky and sneaky
compression techniques can reduce your image size down to 10% of its original
size, and look almost identical to the original! Great eh!
But!...BUT!...its the file format is lossy...you loose information with the
jpg format....as if you did a pixel by pixel...or should I say byte by byte
comparison with the original, you'd find differences. This is because the
jpg format tries to throw away information that the eye can't notice...its
mostly used for compression of realistic images, which contain a whole variety
of colours and information...where as simple 2d simple art, can use other simple
techniques, like run time compression to gain good compression...as lots of the
information is the same....still jpg does good most times.
Its a dang hard format to get to grips with...its definetly not one of the
easiest to work with...and theres lots of different flavours, versions of
it...but once you've got your first simple image decompressed, its not to bad.
I'm going to try and take you through the process of how to write your own
simple jpg decoder!...it won't be fast, but you'll be able to see how it works
and use it for all your cool liittle projects :)
Starting from common ground is a good think....now rather than just use any
random jpg simple image, 8x8 pixels wide as shown below:
It can't get much simpler than that...and if we dump the RGB values to a txt
file, we can at least see what where looking for...ocne we can find this image,
its easy to find any image...just simpler to debug this way :)
/***************************************************************************/
/* */
/* File: main.cpp */
/* Autor: [email protected] */
/* URL: */
/* */
/***************************************************************************/
/*
Jpeg File Format Explained
*/
/***************************************************************************/
#include <windows.h>
#include <stdio.h> // sprintf(..), fopen(..)
#include <stdarg.h> // So we can use ... (in dprintf)
/***************************************************************************/
/* */
/* FeedBack Data */
/* */
/***************************************************************************/
//Saving debug information to a log file
void dprintf(const char *fmt, ...)
{
va_list parms;
char buf[256];
// Try to print in the allocated space.
va_start(parms, fmt);
vsprintf (buf, fmt, parms);
va_end(parms);
// Write the information out to a txt file
FILE *fp = fopen("output.txt", "a+");
fprintf(fp, "%s", buf);
fclose(fp);
}// End dprintf(..)
/***************************************************************************/
/* */
/* jpeg functions */
/* */
/***************************************************************************/
/* Okay before we start getting overwelmed by bits and bytes and
bit shifting and all sorts of special tricks.. we should first
read in the header...which is the first part of the file, and
can tell us a lot about the jpeg file. */
// First lets define some things
#define SOI /*Start of Image*/ 0xffd8
#define EOI /*End of Image */ 0xffd9
#define APP0 /**/ 0xffe0 /*to 0xffef APP15*/
void ReadJpgFile(char* szFileName)
{
byte chunk[2];
byte sizeofchunk[2];
FILE *f;
f = fopen(szFileName, "rb");
// Lets read in the first 8 bytes
fread(chunk, 1, 2, f);
// Output what we have read in.. see what it is?
dprintf("First 2 bytes are: 0x%x, 0x%x\n", chunk[0], chunk[1] );
fread(chunk, 1, 2, f);
fread(sizeofchunk, 1, 2, f);
short unsigned int size = ((sizeofchunk[0] << 8) | sizeofchunk[1]);
dprintf("Second 2 bytes are: 0x%x, 0x%x\n", chunk[0], chunk[1] );
dprintf("Size of our piece of data:%u\n", size);
// Remeber the size includes the 2 bytes for the size.
// Now we now how big the next chunk is, we can read it in.
// I know its an app0 chunk because the chunk was 0xffe0
char temp[100];
fread(&temp, size - 2, 1, f);
temp[size - 2 + 1] = '\0'; // Null terminate the string :)
dprintf("The APP0 value: %s\n", temp);
// A stage further, opening up the various sections.
// Lets try and read in all the data...see what we get...
// Remeber now, its a 2 byte value which tells us what it is,
// then a 2 byte value of how bit it is :)
while(true)
{
fread(chunk, 1, 2, f);
// If the chunk we read in doesn't begin with 0xff then
// then its not a valid chunk and so exit.
if( chunk[0] != 0xff )
{
dprintf("Error chunk[0] was: 0x%x, chunk[1]: 0x%x\n", chunk[0], chunk[1]);
break;
}
// If we get 0xffd9 then its the EOF (End Of File)
if( chunk[1] == 0xd9 )
{
dprintf("End Of File\n");
break;
}
fread(sizeofchunk, 2, 1, f);
short unsigned int size = ((sizeofchunk[0] << 8) | sizeofchunk[1]);
if( chunk[1] == 0xda )
{
// Okay this means we have started to scan the encoded data.
byte count;
fread(&count, 1, 1, f);
dprintf("Start of scan count: %u\n", count);
//fseek(f, -1, SEEK_CUR);
while(count != 0xff)
{
fread(&count, 1, 1, f);
if(count == 0xff)
{
fread(&count, 1, 1, f);
if(count != 0x00)
break;
}
}
dprintf("\nEnd of scan value: 0xff%x\n", count);
break;
}
dprintf("Chunk ID: 0x%x%x, Size:%u\n", chunk[0], chunk[1], size);
fseek(f, size-2, SEEK_CUR);
}
fclose(f);
}// End ReadJpgFile(..)
/***************************************************************************/
/* */
/* Entry Point */
/* */
/***************************************************************************/
int __stdcall WinMain (HINSTANCE hInst, HINSTANCE hPrev, LPSTR lpCmd, int nShow)
{
ReadJpgFile("cross.jpg");
return 0;
}// End WinMain(..)
And the output if you run the above demo, you'll find below.....this program
is pretty simple...its more of a binary dump I think, and just goes through each
tag and dumps its id and size, ....you'll find later one that the tags follow a
particular flow.. as you usually have a list of tags that describe the jpg
encoding tabs and quantization information, followed by a chunk of data called
the SOS (start of scan), where you decode this info using your jpg tables.
Theres a variety of information that is stashed away in the jpg format, but
you'll always notice it starting with 0xffd8...and if it contains an APP0
(0xffe0) tag then it can contain the image dimensions and a thumbnail...below
shows a layout of a typical jpg header.
Offset Length Contents
0 1 byte 0xff
1 1 byte 0xd8 (SOI)
2 1 byte 0xff
3 1 byte 0xe0 (APP0)
4 2 bytes length of APP0 block
6 5 bytes "JFIF\0"
11 1 byte [Major version]
12 1 byte Minor version
13 1 byte [Units for the X and Y densities]
units = 0: no units, X and Y specify the pixel aspect ratio
units = 1: X and Y are dots per inch
units = 2: X and Y are dots per cm
14 2 bytes [Xdensity: Horizontal pixel density]
16 2 bytes [Ydensity: Vertical pixel density]
18 1 byte [Xthumbnail: Thumbnail horizontal pixel count]
19 1 byte [Ythumbnail: Thumbnail vertical pixel count]
...
1 byte 0xff
1 byte 0xd9 (EOI) end-of-file
... ongoing... | https://xbdev.net/image_formats/jpeg/010_dump_tag_info/index.php | CC-MAIN-2021-25 | refinedweb | 1,161 | 71.24 |
I am trying to form URLs from different pieces, and having trouble understanding the behavior of this method. For example:
Python 3.x
from urllib.parse import urljoin
>>> urljoin('some', 'thing')
'thing'
>>> urljoin('', 'thing')
''
>>> urljoin('', 'thing')
''
>>> urljoin('', 'thing') # just a tad / after 'more'
''
urljoin('', '/thing')
''
The best way (for me) to think of this is the first argument,
base is like the page you are on in your browser. The second argument
url is the href of an anchor on that page. The result is the final url to which you will be directed should you click.
>>> urljoin('some', 'thing') 'thing'
This one makes sense give my description. Though one would hope base includes a scheme and domain.
>>> urljoin('', 'thing') ''
If you are on a vhost some, and there is an anchor like
<a href='thing'>Foo</a> then the link will take you to
>>> urljoin('', 'thing') ''
We are on
some/more here, so a relative link of
thing will take us to
/some/thing
>>> urljoin('', 'thing') # just a tad / after 'more' ''
Here, we aren't on
some/more, we are on
some/more/ which is different. Now, our relative link will take us to
some/more/thing
>>> urljoin('', '/thing') ''
And lastly. If on
some/more/ and the href is to
/thing, you will be linked to
some/thing. | https://codedump.io/share/3ETIllMZZMz2/1/python-confusions-with-urljoin | CC-MAIN-2017-04 | refinedweb | 219 | 78.08 |
Build a Command Line Weather App in Deno
If you’ve been following along with our introductory articles on Deno, you’re probably interested in having a go at writing your first program. In this article, we’re going to walk through installing the Deno runtime, and creating a command-line weather program that will take a city name as an argument and return the weather forecast for the next 24 hours.
To write code for Deno, I’d highly recommend Visual Studio Code with the official Deno plugin. To make things a little more interesting, we’re going to be writing the app in TypeScript.
Installing Deno
Firstly, let’s get Deno installed locally so we can begin writing our script. The process is straightforward, as there are installer scripts for all three major operating systems.
Windows
On windows, you can install Deno from PowerShell:
iwr -useb | iex
Linux
From the Linux terminal, you can use the following command:
curl -fsSL | sh
macOS
On a Mac, Deno can be installed with Brew:
brew install deno
After installing
Once the install process is finished, you can check that Deno has been correctly installed by running the following command:
deno --version
You should now see something similar to this:
deno 1.2.0 v8 8.5.216 typescript 3.9.2
Let’s create a folder for our new project (inside your home folder, or wherever you like to keep your coding projects) and add an
index.ts file:
mkdir weather-app cd weather-app code index.ts
Note: as I mentioned above, I’m using VS Code for this tutorial. If you’re using a different editor, replace the last line above.
Getting User Input
Our program is going to retrieve the weather forecast for a given city, so we’ll need to accept the city name as an argument when the program is run. Arguments supplied to a Deno script are available as
Deno.args. Let’s log this variable out to the console to see how it works:
console.log(Deno.args);
Now run the script, with the following command:
deno run index.ts --city London
You should see the following output:
[ "--city", "London" ]
Although we could parse this argument array ourselves, Deno’s standard library includes a module called flags that will take care of this for us. To use it, all we have to do is add an import statement to the top of our file:
import { parse } from "";
Note: the examples in the docs for standard library modules will give you an unversioned URL (such as), which will always point to the latest version of the code. It’s good practice to specify a version in your imports, to ensure your program isn’t broken by future updates.*
Let’s use the imported function to parse the arguments array into something more useful:
const args = parse(Deno.args);
We’ll also change the script to log out our new
args variable, to see what that looks like. So now your code should look like this:
import { parse } from ""; const args = parse(Deno.args); console.log(args);
Now, if you run the script with the same argument as before, you should see the following output:
Download Download Check { _: [], city: "London" }
Whenever Deno runs a script, it checks for new import statements. Any remotely hosted imports are downloaded, compiled, and cached for future use. The
parse function has provided us with an object, which has a
city property containing our input.
Note: if you need to re-download the imports for a script for any reason, you can run
deno cache --reload index.ts.
We should also add a check for the
city argument, and quit the program with an error message if it’s not supplied:
if (args.city === undefined) { console.error("No city supplied"); Deno.exit(); }
Talking to the Weather API
We’re going to be getting our forecast data from OpenWeatherMap. You’ll need to register for a free account, in order to obtain an API key. We’ll be using their 5 day forecast API, passing it a city name as a parameter.
Let’s add some code to fetch the forecast and log it out to the console, to see what we get:
import { parse } from ""; const args = parse(Deno.args); if (args.city === undefined) { console.error("No city supplied"); Deno.exit(); } const apiKey = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'; const res = await fetch(`{args.city}&units=metric&appid=${apiKey}`); const data = await res.json(); console.log(data);
Deno tries to support a lot of browser APIs where possible, so here we can use
fetch without having to import any external dependencies. We’re also making use of the support for top-level
await: normally we’d have to wrap any code that uses
await in an
async function, but TypeScript doesn’t make us do this, which makes the code a little nicer.
If you try running this script now, you’ll encounter an error message:
Check error: Uncaught PermissionDenied: network access to "", run again with the --allow-net flag at unwrapResponse ($deno$/ops/dispatch_json.ts:42:11) at Object.sendAsync ($deno$/ops/dispatch_json.ts:93:10) at async fetch ($deno$/web/fetch.ts:266:27) at async index.ts:12:13
By default, all Deno scripts are run in a secure sandbox: they don’t have access to the network, the filesystem, or things like environment variables. Scripts need to be explicitly granted permission for the system resources they need to access. In this case, the error message helpfully lets us know which permission we need and how to enable it.
Let’s call the script again, with the correct flag:
deno run --allow-net index.ts --city London
This time, we should get back a JSON response from the API:
{ cod: "200", message: 0, cnt: 40, list: [ { dt: 1595527200, main: { temp: 22.6, feels_like: 18.7, temp_min: 21.04, temp_max: 22.6, pressure: 1013, sea_level: 1013, grnd_level: 1011, humidity: 39, temp_kf: 1.56 }, weather: [ [Object] ], clouds: { all: 88 }, wind: { speed: 4.88, deg: 254 }, visibility: 10000, pop: 0, sys: { pod: "d" }, dt_txt: "2020-07-23 18:00:00" }, ... ], city: { id: 2643743, name: "London", coord: { lat: 51.5085, lon: -0.1257 }, country: "GB", population: 1000000, timezone: 3600, sunrise: 1595477494, sunset: 1595534525 } }
You can check out the full details of what gets returned in the response, but what we’re interested in mainly is the array of forecast data in
list. Each object in the array contains a timestamp (
dt), a
main object with details of the atmospheric conditions (temperature, humidity, pressure etc.), and a
weather array containing an object with a description of the predicted weather.
We’re going to iterate over the
main array to get the forecast time, temperature, and weather conditions. Let’s start by limiting the number of records to cover a 24-hour period only. The forecast data available to us on the free plan is only available in three-hour intervals, so we’ll need to get eight records:
const forecast = data.list.slice(0, 8)
We’ll map over each of the forecast items, and return an array of the data we’re interested in:
const forecast = data.list.slice(0, 8).map(item => [ item.dt, item.main.temp, item.weather[0].description, ]);
If we try to run the script now, we’ll get a compile error (if you’re using an IDE like VS Code, you’ll also get this error displayed as you type the code): Parameter ‘item’ implicitly has an ‘any’ type.
TypeScript requires us to tell it about the type of variable that
item is, in order to know if we’re doing anything with it that could cause an error at runtime. Let’s add an interface, to describe the structure of
item:
interface forecastItem { dt: string; main: { temp: number; }; weather: { description: string; }[]; }
Note that we’re not describing all the properties of the object here, only the ones we’re actually going to access. In our situation, we know which properties we want.
Let’s add our new type to our
map callback:
const forecast = data.list.slice(0, 8).map((item: forecastItem) => [ item.dt, item.main.temp, item.weather[0].description, ]);
If you’re using an IDE with TypeScript support, it should be able to autocomplete the properties of
item as you type, thanks to the interface type we’ve supplied.
- Create a service class
- Create an interface for the output
Formatting the Output
Now that we have the set of data we want, let’s look at formatting it nicely to display to the user.
First off, let’s transform the timestamp value into a human-readable date. If we take a look at Deno’s third-party module list and search for “date”, we can see date-fns in the list. We can use the link from here to import the functions we’re going to use into our Deno app:
import { fromUnixTime, format } from "";
We can now pass the timestamp through the
fromUnixTime function, to get a Date object, and then pass this object into
format in order to get a date string we want:
format(fromUnixTime(item.dt), "do LLL, k:mm", {})
The formatting string
do LLL, k:mm will give us a date in the following format: “24th Jul, 13:00”.
Note: we’re passing an empty object as the third argument to
format purely to silence an IDE warning about the expected number of arguments. The code will still run fine without it.
While we’re at it, let’s round the temperature value to a single decimal place, and add a units indicator:
`${item.main.temp.toFixed(1)}C`
Now that we have our forecast data formatted and ready to display, let’s present it to the user in a neat little table, using the ascii_table module:
import AsciiTable from ''; ... const table = AsciiTable.fromJSON({ title: `${data.city.name} Forecast`, heading: [ 'Time', 'Temp', 'Weather'], rows: forecast }) console.log(table.toString())
Save and run the script, and now we should have nicely formatted and presented forecast for our chosen city, for the next 24 hours:
.--------------------------------------------. | London Forecast | |--------------------------------------------| | Time | Temp | Weather | |-----------------|-------|------------------| | 23rd Jul, 19:00 | 17.8C | light rain | | 23rd Jul, 22:00 | 16.8C | light rain | | 24th Jul, 1:00 | 16.0C | broken clouds | | 24th Jul, 4:00 | 15.6C | light rain | | 24th Jul, 7:00 | 16.0C | broken clouds | | 24th Jul, 10:00 | 18.3C | scattered clouds | | 24th Jul, 13:00 | 20.2C | light rain | | 24th Jul, 16:00 | 20.2C | light rain | '--------------------------------------------'
Complete Code Listing
It’s quite a compact script, but here’s the complete code listing:
import { parse } from ""; import { fromUnixTime, format, } from ""; import AsciiTable from ""; const args = parse(Deno.args); if (args.city === undefined) { console.error("No city supplied"); Deno.exit(); } const apiKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"; const res = await fetch( `{args.city}&units=metric&appid=${apiKey}`, ); const data = await res.json(); interface forecastItem { dt: string; main: { temp: number }; weather: { description: string }[]; } const forecast = data.list.slice(0, 8).map((item: forecastItem) => [ format(fromUnixTime(item.dt), "do LLL, k:mm", {}), `${item.main.temp.toFixed(1)}C`, item.weather[0].description, ]); const table = AsciiTable.fromJSON({ title: `${data.city.name} Forecast`, heading: ["Time", "Temp", "Weather"], rows: forecast, }); console.log(table.toString());
Summary
You now have your own working Deno command-line program that will give you the weather forecast for the next 24 hours. By following along with this tutorial, you should now be familiar with how to start a new program, import dependencies from the standard library and third parties, and grant script permissions.
So, having got a taste for writing programs for Deno, where should you go next? I’d definitely recommend having a read through the manual to learn more about the various command-line options and built-in APIs, but also keep your eye on SitePoint for more Deno content!: | https://www.sitepoint.com/deno-build-command-line-weather-app/ | CC-MAIN-2022-27 | refinedweb | 1,979 | 63.7 |
Pawn
embedded scripting language


The Language


June 2011
ITB CompuPhase
Java is a trademark of Sun Microsystems, Inc. Microsoft and Microsoft Windows
are registered trademarks of Microsoft Corporation. Linux is a registered
trademark of Linus Torvalds. CompuPhase is a registered trademark of ITB
CompuPhase. Unicode is a registered trademark of Unicode, Inc.

Copyright (c) ITB CompuPhase
Eerste Industriestraat 19-21, 1401 VL Bussum, The Netherlands (Pays Bas)
telephone: (+31)-(0)
www:

The documentation is licensed under the Creative Commons
Attribution-ShareAlike 2.5 License. A summary of this license is in
appendix D. For more information on this licence, visit
http://creativecommons.org/licenses/by-sa/2.5/ or send a letter to Creative
Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

The information in this manual and the associated software are provided
"as is". There are no guarantees, explicit or implied, that the software and
the manual are accurate. Requests for corrections and additions to the
manual and the software can be directed to ITB CompuPhase at the above
address.

Typeset with TeX in the Computer Modern and Palatino typefaces at a base
size of 11 points.
Table of contents

Foreword ................................................ 1
A tutorial introduction ................................. 3
Data and declarations .................................. 60
Functions .............................................. 69
The preprocessor ....................................... 92
General syntax ......................................... 96
Operators and expressions
Statements
Directives
Proposed function library
Pitfalls: differences from C
Assorted tips
Appendices
    A: Error and warning messages
    B: The compiler
    C: Rationale
    D: License
Index ................................................. 187
Foreword

pawn is a simple, typeless, 32-bit scripting language with a C-like syntax.
Execution speed, stability, simplicity and a small footprint were essential
design criteria for both the language and the interpreter/abstract machine
that a pawn program runs on.

An application or tool cannot do or be everything for all users. This not
only justifies the diversity of editors, compilers, operating systems and
many other software systems, it also explains the presence of extensive
configuration options and macro or scripting languages in applications. My
own applications have contained a variety of little languages; most were
very simple, some were extensive... and most needs could have been solved by
a general purpose language with a special purpose library. Hence, pawn.

The pawn language was designed as a flexible language for manipulating
objects in a host application. The tool set (compiler, abstract machine) was
written so that it was easily extensible and would run on different
software/hardware architectures.

pawn is a descendent of the original Small C by Ron Cain and James Hendrix,
which in its turn was a subset of C. Some of the modifications that I did to
Small C, e.g. the removal of the type system and the substitution of
pointers by references, were so fundamental that I could hardly call my
language a subset of C or a C dialect any more. Therefore, I stripped off
the "C" from the title and used the name "Small" for the name of the
language in my publication in Dr. Dobb's Journal and in the years since.

During development and maintenance of the product, I received many requests
for changes. One of the frequently requested changes was to use a different
name for the language: searching for information on the "Small" scripting
language on the Internet was hindered by "small" being such a common word.
The name change occurred together with a significant change in the language:
the support of states (and state machines).

I am indebted to Ron Cain and James Hendrix (and more recently, Andy Yuen),
and to Dr. Dobb's Journal, for getting this ball rolling. Although I must
have touched nearly every line of the original code multiple times, the
Small C origins are still clearly visible.
A detailed treatise of the design goals and compromises is in appendix C;
here I would like to summarize a few key points.

As written in the previous paragraphs, pawn is for customizing applications
(by writing scripts), not for writing applications. pawn is weak on data
structuring, because pawn programs are intended to manipulate objects (text,
sprites, streams, queries, ...) in the host application, but the pawn
program is, by intent, denied direct access to any data outside its abstract
machine. The only means that a pawn program has to manipulate objects in the
host application is by calling subroutines, so-called "native functions",
that the host application provides.

pawn is flexible in that key area: calling functions. pawn supports default
values for any of the arguments of a function (not just the last),
call-by-reference as well as call-by-value, and named as well as positional
function arguments.

pawn does not have a type checking mechanism, by virtue of being a typeless
language, but it does offer in replacement a classification checking
mechanism, called "tags". The tag system is especially convenient for
function arguments, because each argument may specify multiple acceptable
tags.

For any language, the power (or weakness) lies not in the individual
features, but in their combination. For pawn, I feel that the combination of
named arguments, which let you specify function arguments in any order, and
default values, which allow you to skip specifying arguments that you are
not interested in, blend together to a convenient and descriptive way to
call (native) functions to manipulate objects in the host application.
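To give a taste of that combination, here is a small sketch; the native
function below and its arguments are hypothetical, purely for illustration:

    /* a (hypothetical) host function with a default value for every argument */
    native SetTimer(interval = 1000, repeat = false, id = 0)

    main()
        {
        SetTimer(500)                       /* positional: only "interval" given */
        SetTimer(.repeat = true)            /* named: skip "interval", keep its default */
        SetTimer(.id = 3, .interval = 250)  /* named arguments, in any order */
        }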
A tutorial introduction

pawn is a simple programming language with a syntax reminiscent of the C
programming language. A pawn program consists of a set of functions and a
set of variables. The variables are data objects and the functions contain
instructions (called "statements") that operate on the data objects or that
perform tasks.

The first program in almost any computer language is one that prints a
simple string; printing "Hello world" is a classic example. In pawn, the
program would look like (compiling and running scripts: see page 169):

    Listing: hello.p

    main()
        printf "Hello world\n"

This manual assumes that you know how to run a pawn program; if not, please
consult the application manual (more hints are at page 169).

A pawn program starts execution in an "entry" function;* in nearly all
examples of this manual, this entry function is called main. Here, the
function main contains only a single instruction, which is at the line below
the function head itself. Line breaks and indenting are insignificant; the
invocation of the function print could equally well be on the same line as
the head of function main.

    * This should not be confused with the state entry functions, which are
      called entry, but serve a different purpose (see page 40).

The definition of a function requires that a pair of parentheses follow the
function name. If a function takes parameters, their declarations appear
between the parentheses. The function main does not take any parameters.
The rules are different for a function invocation (or a function call);
parentheses are optional in the call to the print function.

The single argument of the print function is a string, which must be
enclosed in double quotes (string literals: page 99). The characters \n near
the end of the string form an escape sequence, in this case they indicate a
"newline" symbol (escape sequences: page 98). When print encounters the
newline escape sequence, it advances the cursor to the first column of the
next line. One has to use the \n escape sequence to insert a newline into
the string, because a string may not wrap over multiple lines.
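A short sketch of a few more escape sequences (this program is only for
illustration; \" embeds a literal double quote and \t a horizontal tab):

    main()
        {
        print "first line\nsecond line\n"
        print "she said: \"hello\"\n"
        print "one\ttwo\tthree\n"
        }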
pawn is a case-sensitive language: upper and lower case letters are
considered to be different letters. It would be an error to spell the
function printf in the above example as PrintF. Keywords and predefined
symbols, like the name of function main, must be typed in lower case.

If you know the C language, you may feel that the above example does not
look much like the equivalent "Hello world" program in C/C++. pawn can also
look very similar to C, though. The next example program is also valid pawn
syntax (and it has the same semantics as the earlier example):

    Listing: hello.p ("C style")

    #include <console>

    main()
        {
        printf("hello world\n");
        }

These first examples also reveal a few differences between pawn and the C
language (more function descriptions are at page 123):

- there is usually no need to include any system-defined "header file";

- semicolons are optional (except when writing multiple statements on one
  line);

- when the body of a function is a single instruction, the braces (for a
  compound instruction) are optional;

- when you do not use the result of a function in an expression or
  assignment, parentheses around the function argument are optional.

As an aside, the few preceding points refer to optional syntaxes. It is your
choice what syntax you wish to use: neither style is deprecated or
considered harmful. The examples in this manual position the braces and use
an indentation that is known as the "Whitesmith's style", but pawn is a free
format language and other indenting styles are just as good.

Because pawn is designed to be an extension language for applications, the
function set/library that a pawn program has at its disposal depends on the
host application. As a result, the pawn language has no intrinsic knowledge
of any function. The print function, used in this first example, must be
made available by the host application and be declared to the pawn parser.*

    * In the language specification, the term "parser" refers to any
      implementation that processes and runs conforming pawn programs,
      either interpreters or compilers.
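Declaring a host function to the parser is done with a "native" declaration,
usually collected in an include file. An illustrative sketch (whether such a
function exists depends on the host application):

    /* host function: move the cursor to column x, row y */
    native gotoxy(x = 1, y = 1)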
It is assumed, however, that all host applications provide a minimal set of
common functions, like print and printf.

In some environments, the display or terminal must be enabled before any
text can be output onto it. If this is the case, you must add a call to the
function console before the first call to function print or printf. The
console function also allows you to specify device characteristics, such as
the number of lines and columns of the display. The example programs in this
manual do not use the console functions, because many platforms do not
require or provide it.


Arithmetic

Fundamental elements of most programs are calculations, decisions
(conditional execution), iterations (loops) and variables to store input
data, output data and intermediate results. The next program example
illustrates many of these concepts. The program calculates the greatest
common divisor of two values, using an algorithm invented by Euclides.

    Listing: gcd.p

    /* The greatest common divisor of two values,
     * using Euclides' algorithm.
     */
    main()
        {
        print "Input two values\n"
        new a = getvalue()
        new b = getvalue()
        while (a != b)
            if (a > b)
                a = a - b
            else
                b = b - a
        printf "The greatest common divisor is %d\n", a
        }

Function main now contains more than just a single print statement. When the
body of a function contains more than one statement, these statements must
be embodied in braces, the "{" and "}" characters. This groups the
instructions to a single compound statement (compound statements: page 111).
The notion of grouping statements in a compound statement applies as well to
the bodies of if else and loop instructions.

The new keyword creates a variable. The name of the variable follows new
(data declarations are covered in detail starting at page 60).
It is common, but not imperative, to assign a value to the variable already
at the moment of its creation. The getvalue function (also a common
predefined function) reads in a value from the keyboard and returns the
result. Note that pawn is a typeless language: all variables are numeric
cells that can hold a signed integral value.

The getvalue function name is followed by a pair of parentheses. These are
required because the value that getvalue returns is stored in a variable.
Normally, the function's arguments (or parameters) would appear between the
parentheses, but getvalue (as used in this program) does not take any
explicit arguments. If you do not assign the result of a function to a
variable, or use it in an expression in another way, the parentheses are
optional. For example, the results of the print and printf statements are
not used. You may still use parentheses around the arguments, but it is not
required.

Loop instructions, like while, repeat a single instruction as long as the
loop condition (the expression between parentheses) is true (the while loop:
page 115). One can execute multiple instructions in a loop by grouping them
in a compound statement. The if else instruction has one instruction for the
"true" clause and one for the "false" (if else: page 113).

Observe that some statements, like while and if else, contain (or "fold
around") another instruction; in the case of if else even two other
instructions. The complete bundle is, again, a single instruction. That is:

- the assignment statements a = a - b below the if and b = b - a below the
  else are statements;

- the if else statement folds around these two assignment statements and
  forms a single statement of itself;

- the while statement folds around the if else statement and forms, again,
  a single statement.

It is common to make the nesting of the statements explicit by indenting any
sub-statements below a statement in the source text. In the "Greatest Common
Divisor" example, the left margin indent increases by four space characters
after the while statement, and again after the if and else keywords.
Statements that belong to the same level, such as the print and printf
invocations and the while loop, have the same indentation.

The loop condition for the while loop is "(a != b)"; the symbol != is the
"not equal to" operator (relational operators: page 106). That is, the
if else instruction is repeated until a equals b. It is good practice to
indent the instructions that run under control of another statement, as is
done in the preceding example.
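As a side note, grouping more than one instruction under a loop works the
same way as for a function body; a minimal sketch (the counting logic is
only illustrative):

    main()
        {
        new n = 1
        while (n <= 3)
            {
            printf "pass %d\n", n
            n = n + 1
            }
        }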
The call to printf, near the bottom of the example, differs from the print call right below the opening brace ("{"). The "f" in printf stands for "formatted", which means that the function can format and print numeric values and other data (in a user-specified format), as well as literal text. The %d symbol in the string is a token that indicates the position and the format in which the subsequent argument to function printf should be printed. At run time, the token %d is replaced by the value of variable a (the second argument of printf).

Function print can only print text; it is quicker than printf. If you want to print a literal "%" at the display, you have to use print, or you have to double it in the string that you give to printf. That is:

    print "20% of the personnel accounts for 80% of the costs\n"

and

    printf "20%% of the personnel accounts for 80%% of the costs\n"

print the same string.

Arrays & constants

Next to simple variables with a size of a single cell, pawn supports "array" variables that hold many cells/values. The following example program displays a series of prime numbers using the well known "sieve of Eratosthenes". The program also introduces another new concept: symbolic constants. Symbolic constants look like variables, but they cannot be changed.

Listing: sieve.p

    /* Print all primes below 100, using the "Sieve of Eratosthenes" */
    main()
        {
        const max_primes = 100
        new series[max_primes] = [ true, ... ]
        for (new i = 2; i < max_primes; ++i)
            if (series[i])
                {
                printf "%d ", i
                /* filter the multiples of this prime from the list */
                for (new j = 2 * i; j < max_primes; j += i)
                    series[j] = false
                }
        }
[Constant declaration: 101]
[Progressive initialisers: 63]

When a program or sub-program has some fixed limit built in, it is good practice to create a symbolic constant for it. In the preceding example, the symbol max_primes is a constant with the value 100. The program uses the symbol max_primes three times after its definition: in the declaration of the variable series and in both for loops. If we were to adapt the program to print all primes below 500, there is now only one line to change.

Like simple variables, arrays may be initialized upon creation. pawn offers a convenient shorthand to initialize all elements to a fixed value: all hundred elements of the series array are set to true without requiring that the programmer types in the word "true" a hundred times. The symbols true and false are predefined constants.

[for loop: 112]
[An overview of all operators: 103]

When a simple variable, like the variables i and j in the primes sieve example, is declared in the first expression of a for loop, the variable is valid only inside the loop. Variable declaration has its own rules; it is not a statement, although it looks like one. One of those rules is that the first expression of a for loop may contain a variable declaration.

Both for loops also introduce new operators in their third expression. The ++ operator increments its operand by one; meaning that ++i is equal to i = i + 1. The += operator adds the expression on its right to the variable on its left; that is, j += i is equal to j = j + i.

There is an "off-by-one" issue that you need to be aware of when working with arrays. The first element in the series array is series[0], so if the array holds max_primes elements, the last element in the array is series[max_primes-1]. If max_primes is 100, the last element, then, is series[99]. Accessing series[100] is invalid.
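The off-by-one rule can be compressed into a fragment (an illustration only, not part of the sieve program):

    new series[100]         /* valid indices run from 0 to 99 */
    series[0]  = true       /* first element */
    series[99] = true       /* last element */
    /* series[100] = true      would be an error: index out of bounds */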
Functions

Larger programs separate tasks and operations into functions. Using functions increases the modularity of programs, and functions, when well written, are portable to other programs. The following example implements a function to calculate numbers from the Fibonacci series.

The Fibonacci sequence was discovered by Leonardo "Fibonacci" of Pisa, an Italian mathematician of the 13th century whose greatest achievement was popularizing for the Western world the Hindu-Arabic numerals. The goal of the sequence was to describe the growth of a population of (idealized) rabbits; the sequence is 1, 1, 2, 3, 5, 8, 13, 21, ... (every next value is the sum of its two predecessors).

Listing: fib.p

    /* Calculation of Fibonacci numbers by iteration */
    main()
        {
        print "Enter a value: "
        new v = getvalue()
        if (v > 0)
            printf "The value of Fibonacci number %d is %d\n",
                   v, fibonacci(v)
        else
            printf "The Fibonacci number %d does not exist\n", v
        }

    fibonacci(n)
        {
        assert n > 0
        new a = 0, b = 1
        for (new i = 2; i < n; i++)
            {
            new c = a + b
            a = b
            b = c
            }
        return a + b
        }

[assert statement: 111]

The assert instruction at the top of the fibonacci function deserves explicit mention; it guards against "impossible" or invalid conditions. A negative Fibonacci number is invalid, and the assert statement flags it as a programmer's error if this case ever occurs. Assertions should only flag programmer's errors, never user input errors.

[Functions: properties & features: 69]

The implementation of a user-defined function is not much different from that of function main. Function fibonacci shows two new concepts, though: it receives an input value through a parameter and it returns a value (it has a "result"). Function parameters are declared in the function header; the single parameter in this example is n. Inside the function, a parameter behaves as a local variable, but one whose value is passed from the outside at the call to the function. The return statement ends a function and sets the result of the function. It need not appear at the very end of the function; early exits are permitted.
[Native function interface: 83]

The main function of the Fibonacci example calls predefined native functions, like getvalue and printf, as well as the user-defined function fibonacci. From the perspective of calling a function (as in function main), there is no difference between user-defined and native functions.

The Fibonacci numbers sequence describes a surprising variety of natural phenomena. For example, the two or three sets of spirals in pineapples, pine cones and sunflowers usually have consecutive Fibonacci numbers between 5 and 89 as their number of spirals. The numbers that occur naturally in branching patterns (e.g. that of plants) are indeed Fibonacci numbers. Finally, although the Fibonacci sequence is not a geometric sequence, the further the sequence is extended, the more closely the ratio between successive terms approaches the "Golden Ratio" that appears so often in art and architecture.*

Call-by-reference & call-by-value

Dates are a particularly rich source of algorithms and conversion routines, because the calendars that a date refers to have known such a diversity, through time and around the world. The "Julian Day Number" is attributed to Josephus Scaliger* and it counts the number of days since November 24, 4714 BC (proleptic Gregorian calendar*). Scaliger chose that date because it marked the coincidence of three well-established cycles: the 28-year Solar Cycle (of the old Julian calendar), the 19-year Metonic Cycle and the 15-year Indiction Cycle (periodic taxes or governmental requisitions in ancient Rome), and because no literature or recorded history was known to pre-date that particular date in the remote past. Scaliger used this concept to reconcile dates in historic documents; later astronomers embraced it to calculate intervals between two events more easily.

* The exact value for the Golden Ratio is (sqrt(5) + 1) / 2. The relation between Fibonacci numbers and the Golden Ratio also allows for a direct calculation of any sequence number, instead of the iterative method described here.
* There is some debate on exactly what Josephus Scaliger invented and who or what he called it after.
* The Gregorian calendar was decreed to start on 15 October 1582 by pope Gregory XIII, which means that earlier dates do not really exist in the Gregorian calendar. When extending the Gregorian calendar to days before 15 October 1582, we refer to it as the "proleptic" Gregorian calendar.
Julian Day numbers (sometimes denoted with the unit "jd") should not be confused with Julian Dates (the number of days since the start of the same year), or with the Julian calendar that was introduced by Julius Caesar.

Below is a program that calculates the Julian Day number from a date in the (proleptic) Gregorian calendar, and vice versa. Note that in the proleptic Gregorian calendar, the first year is 1 AD (Anno Domini) and the year before that is 1 BC (Before Christ): year zero does not exist! The program uses negative year values for BC years and positive (non-zero) values for AD years.

Listing: julian.p

    /* calculate Julian Day number from a date, and vice versa */
    main()
        {
        new d, m, y, jdn

        print "Give a date (dd-mm-yyyy): "
        d = getvalue(_, '-', '/')
        m = getvalue(_, '-', '/')
        y = getvalue()
        jdn = DateToJulian(d, m, y)
        printf("date %d/%d/%d = %d JD\n", d, m, y, jdn)

        print "Give a Julian Day Number: "
        jdn = getvalue()
        JulianToDate jdn, d, m, y
        printf "%d JD = %d/%d/%d\n", jdn, d, m, y
        }

    DateToJulian(day, month, year)
        {
        /* The first year is 1. Year 0 does not exist: it is 1 BC (or -1) */
        assert year != 0
        if (year < 0)
            year++

        /* move January and February to the end of the previous year */
        if (month <= 2)
            year--, month += 12

        new jdn = 365*year + year/4 - year/100 + year/400
                  + (153*month - 457) / 5
                  + day + 1721119
        return jdn
        }
    JulianToDate(jdn, &day, &month, &year)
        {
        jdn -= 1721119

        /* approximate the year, then adjust it in a loop */
        year = (400 * jdn) / 146097
        while (365*year + year/4 - year/100 + year/400 < jdn)
            year++
        year--

        /* determine the month */
        jdn -= 365*year + year/4 - year/100 + year/400
        month = (5*jdn + 457) / 153

        /* determine the day */
        day = jdn - (153*month - 457) / 5

        /* move January and February to the start of the year */
        if (month > 12)
            month -= 12, year++

        /* adjust negative years (year 0 must become 1 BC, or -1) */
        if (year <= 0)
            year--
        }

Function main starts with creating variables to hold the day, month and year, and the calculated Julian Day number. Then it reads in a date (three calls to getvalue) and calls function DateToJulian to calculate the day number. After calculating the result, main prints the date that you entered and the Julian Day number for that date. Now, let us focus on function DateToJulian...

[Call by value versus call by reference: 70]

Near the top of function DateToJulian, it increments the year value if it is negative; it does this to cope with the absence of a "zero year" in the proleptic Gregorian calendar. In other words, function DateToJulian modifies its function arguments (later, it also modifies month). Inside a function, an argument behaves like a local variable: you may modify it. These modifications remain local to the function DateToJulian, however. Function main passes the values of d, m and y into DateToJulian, which maps them to its function arguments day, month and year respectively. Although DateToJulian modifies year and month, it does not change y and m in function main; it only changes local copies of y and m. This concept is called "call by value".

The example intentionally uses different names for the local variables in the functions main and DateToJulian, for the purpose of making the above explanation easier.
Renaming main's variables d, m and y to day, month and year respectively does not change the matter: then you just happen to have two local variables called day, two called month and two called year, which is perfectly valid in pawn. The remainder of function DateToJulian is, regarding the pawn language, uninteresting arithmetic.

Returning to the second part of the function main, we see that it now asks for a day number and calls another function, JulianToDate, to find the date that matches the day number. Function JulianToDate is interesting because it takes one input argument (the Julian Day number) and needs to calculate three output values: the day, month and year. Alas, a function can only have a single return value; that is, a return statement in a function may only contain one expression. To solve this, JulianToDate specifically requests that changes that it makes to some of its function arguments are copied back to the variables of the caller of the function. Then, in main, the variables that must hold the result of JulianToDate are passed as arguments to JulianToDate.

Function JulianToDate marks the appropriate arguments for being "copied back to caller" by prefixing them with an & symbol. Arguments with an & are copied back; arguments without it are not. Copying back is actually not the correct term: an argument tagged with an & is passed to the function in a special way that allows the function to directly modify the original variable. This is called "call by reference", and an argument that uses it is a "reference argument". In other words, if main passes y to JulianToDate, which maps it to its function argument year, and JulianToDate changes year, then JulianToDate really changes y. Only through reference arguments can a function directly modify a variable that is declared in a different function.

To summarize the use of call-by-value versus call-by-reference: if a function has one output value, you typically use a return statement; if a function has more output values, you use reference arguments. You may combine the two inside a single function, for example in a function that returns its "normal" output via a reference argument and an error code in its return value.

As an aside, many desktop applications use conversions to and from Julian Day numbers (or varieties of it) to conveniently calculate the number of days between two dates, or to calculate the date that is 90 days from now, for example.
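To illustrate the combination mentioned in the summary above, here is a small sketch (the function and its names are invented for this illustration; they are not part of the manual's example set). The "real" output travels through the reference argument, while the return value signals success or failure:

    /* divide, guarding against division by zero */
    safediv(numerator, denominator, &quotient)
        {
        if (denominator == 0)
            {
            quotient = 0
            return false    /* error: division by zero */
            }
        quotient = numerator / denominator
        return true         /* success */
        }

A caller can then write, for example:

    new result
    if (!safediv(10, x, result))
        print "division by zero\n"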
Rational numbers

All calculations done up to this point involved only whole numbers: integer values. pawn also has support for numbers that can hold fractional values: these are called "rational numbers". However, whether this support is enabled depends on the host application.*

Rational numbers can be implemented as either floating-point or fixed-point numbers. Floating-point arithmetic is commonly used for general-purpose and scientific calculations, while fixed-point arithmetic is more suitable for financial processing and applications where rounding errors should not come into play (or at least, they should be predictable). The pawn toolkit has both a floating-point and a fixed-point module; the details (and trade-offs) of these modules are in their respective documentation. The issue is, however, that a host application may implement either floating-point or fixed-point, or both, or neither. The program below requires that at least one kind of rational number support is available; it will fail to run if the host application does not support rational numbers at all.

Listing: c2f.p

    #include <rational>

    main()
        {
        new Rational: Celsius
        new Rational: Fahrenheit

        print "Celsius\t Fahrenheit\n"
        for (Celsius = 5; Celsius <= 25; Celsius++)
            {
            Fahrenheit = (Celsius * 1.8) + 32
            printf "%r \t %r\n", Celsius, Fahrenheit
            }
        }

* Actually, this is already true of all native functions, including all native functions that the examples in this manual use.
The example program converts a table of degrees Celsius to degrees Fahrenheit. The first directive of this program is to import definitions for rational number support from an include file. The file "rational" includes either support for floating-point numbers or for fixed-point numbers, depending on what is available.

[Tag names: 66]

The variables Celsius and Fahrenheit are declared with a tag "Rational:" between the keyword new and the variable name. A tag name denotes the purpose of the variable, its permitted use and, as a special case for rational numbers, its memory lay-out. The Rational: tag tells the pawn parser that the variables Celsius and Fahrenheit contain fractional values, rather than whole numbers.

The equation for obtaining degrees Fahrenheit from degrees Celsius is

    F = C × 9/5 + 32

The program uses the value 1.8 for the quotient 9/5. When rational number support is enabled, pawn supports values with a fractional part behind the decimal point.

The only other non-trivial change from earlier programs is that the format string for the printf function now has variable placeholders denoted with %r instead of %d. The placeholder %r prints a rational number at that position; %d is only for integers ("whole numbers").

I used the include file "rational" rather than "float" or "fixed" in an attempt to make the example program portable. If you know that the host application supports floating point arithmetic, it may be more convenient to #include the definitions from the file "float" and use the tag Float: instead of Rational:; when doing so, you should also replace %r by %f in the call to printf. For details on fixed point and floating point support, please see the application notes "Fixed Point Support Library" and "Floating Point Support Library" that are available separately.
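A sketch of that substitution, assuming the host application indeed provides the floating-point module (the calculation is the one from the listing above, reduced to a single value):

    #include <float>

    main()
        {
        new Float: Celsius = 20.0
        new Float: Fahrenheit = (Celsius * 1.8) + 32
        printf "%f degrees Celsius is %f degrees Fahrenheit\n",
               Celsius, Fahrenheit
        }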
Strings

pawn has no intrinsic "string" type; character strings are stored in arrays, with the convention that the array element behind the last valid character is zero. Working with strings is therefore equivalent to working with arrays.

Among the simplest of encryption schemes is the one called "ROT13"; actually, the algorithm is quite weak from a cryptographical point of view. It is most widely used in public electronic forums (BBSes, Usenet) to hide texts from casual reading, such as the solution to puzzles or riddles. ROT13 simply rotates the alphabet by half its length, i.e. 13 characters. It is a symmetric operation: applying it twice on the same text reveals the original.

Listing: rot13.p

    /* Simple encryption, using ROT13 */
    main()
        {
        printf "Please type the string to mangle: "

        new str[100]
        getstring str, sizeof str, .pack = false

        rot13 str
        printf "After mangling, the string is: \"%s\"\n", str
        }

    rot13(string[])
        {
        for (new index = 0; string[index]; index++)
            if ('a' <= string[index] <= 'z')
                string[index] = (string[index] - 'a' + 13) % 26 + 'a'
            else if ('A' <= string[index] <= 'Z')
                string[index] = (string[index] - 'A' + 13) % 26 + 'A'
        }

In the function header of rot13, the parameter string is declared as an array, but without specifying the size of the array: there is no value between the square brackets. When you specify a size for an array in a function header, it must match the size of the actual parameter in the function call. Omitting the array size specification in the function header removes this restriction and allows the function to be called with arrays of any size. You must then have some other means of determining the (maximum) size of the array. In the case of a string parameter, one can simply search for the zero terminator.

The for loop that walks over the string is typical for string processing functions. Note that the loop condition is string[index]. The rule for true/false conditions in pawn is that any value is "true", except zero. That is, when the array cell at string[index] is zero, it is "false" and the loop aborts.

The ROT13 algorithm rotates only letters; digits, punctuation and special characters are left unaltered. Additionally, upper and lower case letters must be handled separately. Inside the for loop, two if statements filter out the characters of interest. The way that the second if is chained to the else clause of the first if is noteworthy, as it is a typical method of testing for multiple non-overlapping conditions.
[A function that takes an array as an argument, and that does not change it, may mark the argument as "const"; see page 71]

Earlier in this chapter, the concepts of call by value versus call by reference were discussed. When you are working with strings, or arrays in general, note that pawn always passes arrays by reference. It does this to conserve memory and to increase performance: arrays can be large data structures, and passing them by value requires a copy of this data structure to be made, taking both memory and time. Due to this rule, function rot13 can modify its function parameter (called string in the example) without needing to declare it as a reference argument.

[Relational operators: 106]

Another point of interest are the conditions in the two if statements. The first if, for example, holds the condition 'a' <= string[index] <= 'z', which means that the expression is true if (and only if) both 'a' <= string[index] and string[index] <= 'z' are true. In the combined expression, the relational operators are said to be "chained", as they chain multiple comparisons in one condition.

[Escape sequence: 98]

Finally, note how the last printf in function main uses the escape sequence \" to print a double quote. Normally a double quote ends the literal string; the escape sequence \" inserts a double quote into the string.
Staying on the subject of strings and arrays, below is a program that separates a string of text into individual words and counts them. It is a simple program that shows a few new features of the pawn language.

Listing: wcount.p

    /* word count: count words in a string that the user types */
    #include <string>

    main()
        {
        print "Please type a string: "
        new string[100]
        getstring string, sizeof string

        new count = 0
        new word[20]
        new index
        for ( ;; )
            {
            word = strtok(string, index)
            if (strlen(word) == 0)
                break
            count++
            printf "Word %d: '%s'\n", count, word
            }
        printf "\nnumber of words: %d\n", count
        }

    strtok(const string[], &index)
        {
        new length = strlen(string)

        /* skip leading white space */
        while (index < length && string[index] <= ' ')
            index++

        /* store the word letter for letter */
        new offset = index      /* save start position of token */
        new result[20]          /* string to store the word in */
        while (index < length
               && string[index] > ' '
               && index - offset < sizeof result - 1)
            {
            result[index - offset] = string[index]
            index++
            }
        result[index - offset] = EOS    /* zero-terminate the string */
        return result
        }

[for loop: 112]

Function main first displays a message and retrieves a string that the user must type. Then it enters a loop: writing for (;;) creates a loop without initialisation, without increment and without test; it is an infinite loop, equivalent to while (true). However, where the pawn parser will give you a warning if you type while (true) (something along the lines of "redundant test expression; always true"), for (;;) passes the parser without warning.

A typical use for an infinite loop is a case where you need a loop with the test in the middle, a hybrid between a while and a do...while loop, so to speak. pawn does not support loops-with-a-test-in-the-middle directly, but you can imitate one by coding an infinite loop with a conditional break. In this example program, the loop:
  * gets a word from the string (code before the test);
  * tests whether a new word is available, and breaks out of the loop if not (the test in the middle);
  * prints the word and its sequence number (code after the test).
As is apparent from the line word = strtok(string, index) (and the declaration of variable word), pawn supports array assignment and functions returning arrays. The pawn parser verifies that the array that strtok returns has the same size and dimensions as the variable that it is assigned into. Function strlen is a native function (predefined), but strtok is not: it must be implemented by ourselves. The function strtok was inspired by the function of the same name from C/C++, but it does not modify the source string. Instead, it copies characters from the source string, word for word, into a local array, which it then returns.

Arrays and symbolic subscripts (structured data)

In a typeless language, we might assign a different purpose to some array elements than to other elements in the same array. pawn supports symbolic subscripts that allow specific tag names or ranges to be assigned to individual array elements.

The example to illustrate symbolic subscripts and arrays is longer than the previous pawn programs, and it also displays a few other features, such as global variables and named parameters.

Listing: queue.p

    /* Priority queue (for simple text strings) */
    #include <string>

    main()
        {
        new msg[.text{40}, .priority]

        /* insert a few items (read from console input) */
        printf "Please insert a few messages and their priorities; " ...
               "end with an empty string\n"
        for ( ;; )
            {
            printf "Message: "
            getstring msg.text, .pack = true
            if (strlen(msg.text) == 0)
                break
            printf "Priority: "
            msg.priority = getvalue()
            if (!insert(msg))
                {
                printf "Queue is full, cannot insert more items\n"
                break
                }
            }
        /* now print the messages extracted from the queue */
        printf "\ncontents of the queue:\n"
        while (extract(msg))
            printf "[%d] %s\n", msg.priority, msg.text
        }

    const queuesize = 10
    new queue[queuesize][.text{40}, .priority]
    new queueitems = 0

    insert(const item[.text{40}, .priority])
        {
        /* check if the queue can hold one more message */
        if (queueitems == queuesize)
            return false        /* queue is full */

        /* find the position to insert it to */
        new pos = queueitems    /* start at the bottom */
        while (pos > 0 && item.priority > queue[pos-1].priority)
            --pos               /* higher priority: move up a slot */

        /* make place for the item at the insertion spot */
        for (new i = queueitems; i > pos; --i)
            queue[i] = queue[i-1]

        /* add the message to the correct slot */
        queue[pos] = item
        queueitems++

        return true
        }

    extract(item[.text{40}, .priority])
        {
        /* check whether the queue has one more message */
        if (queueitems == 0)
            return false        /* queue is empty */

        /* copy the topmost item */
        item = queue[0]
        --queueitems

        /* move the queue one position up */
        for (new i = 0; i < queueitems; ++i)
            queue[i] = queue[i+1]

        return true
        }

Function main starts with a declaration of the array variable msg. The array has two fields, .text and .priority; the .text field is declared as a sub-array holding 40 characters. The period is required for symbolic subscripts and there may be no space between the period and the name.
When an array is declared with symbolic subscripts, it may only be indexed with these subscripts. It would be an error to write, for example, msg[0]. On the other hand, since there can only be a single symbolic subscript between the brackets, the brackets become optional. That is, you can write msg.priority as a shorthand for msg.[priority].

Further in main are two loops. The for loop reads strings and priority values from the console and inserts them in a queue. The while loop below that extracts element by element from the queue and prints the information on the screen. The point to note is that the for loop stores both the string and the priority number (an integer) in the same variable msg; indeed, function main declares only a single variable. Function getstring stores the message text that you type starting at array msg.text, while the priority value is stored (by an assignment a few lines lower) in msg.priority. The printf function in the while loop reads the string and the value from those positions as well.

At the same time, the msg array is an entity on itself: it is passed in its entirety to function insert. That function, in turn, says near the end queue[pos] = item, where item is an array with the same declaration as the msg variable in main, and queue is a two-dimensional array that holds queuesize elements, with the minor dimension having symbolic subscripts. The declarations of queue and queuesize are just above function insert.

At several spots in the example program, the same symbolic subscripts are repeated. In practice, a program would declare the list of symbolic constants in a #define directive and declare the arrays using this text-substitution macro. This saves typing and makes modifications of the declaration easier to maintain. Concretely, when adding near the top of the program the following line:

    #define MESSAGE [.text{40}, .priority]

you can declare all arrays with these symbolic subscripts as msg[MESSAGE].
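With that macro in place, the other declarations in the program would shrink in the same way (a sketch of the substitution; the listing above spells the subscripts out in full):

    new msg[MESSAGE]
    new queue[queuesize][MESSAGE]

    insert(const item[MESSAGE])
    extract(item[MESSAGE])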
The example implements a "priority queue". You can insert a number of messages into the queue and when these messages all have the same priority, they are extracted from the queue in the same order. However, when the messages have different priorities, the one with the highest priority comes out first. The intelligence for this operation is inside function insert: it first determines the position of the new message to add, then moves a few messages one position upward to make space for the new message. Function extract simply always retrieves the first element of the queue and shifts all remaining elements down by one position.

Note that both functions insert and extract work on two shared variables, queue and queueitems. A variable that is declared inside a function, like variable msg in function main, can only be accessed from within that function. A global variable is accessible by all functions, and such a variable is declared outside the scope of any function. Variables must still be declared before they are used, so main cannot access variables queue and queueitems, but both insert and extract can.

Function extract returns the messages with the highest priority via its function argument item. That is, it changes its function argument by copying the first element of the queue array into item. Function insert copies in the other direction and it does not change its function argument item. In such a case, it is advised to mark the function argument as "const". This helps the pawn parser to both check for errors and to generate better (more compact, quicker) code.

[Named parameters: 73]
[getstring: 128]

A final remark on this latest sample is the call to getstring in function main: if you look up the function declaration, you will see that it takes three parameters, two of which are optional. In this example, only the first and the last parameters are passed in. Note how the example avoids ambiguity about which parameter follows the first, by putting the argument name in front of the value. By using "named parameters" rather than "positional parameters", the order in which the parameters are listed is not important. Named parameters are convenient in specifying and deciphering long parameter lists.
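Assuming that getstring takes its parameters in the order that the earlier rot13 listing shows (buffer, size, pack flag), the two calls below would be equivalent (an illustration, not a fragment of the queue program):

    new text[40]
    getstring text, .pack = false       /* named: the size keeps its default */
    getstring text, sizeof text, false  /* positional: every parameter given */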
Bit operations to manipulate sets

A few algorithms are most easily solved with set operations, like intersection, union and inversion. In the figure below, for example, we want to design an algorithm that returns us the points that can be reached from some other point in a specified maximum number of steps. For example, if we ask it to return the points that can be reached in two steps starting from B, the algorithm has to return C, D, E and F, but not G, because G takes three steps from B.

[Figure: a directed graph of the points A to G; the arrows mark which points each point can reach in a single step]

Our approach is to keep, for each point in the graph, the set of other points that it can reach in one step: this is the next_step set. We also have a result set that keeps all points that we have found so far. We start by setting the result set equal to the next_step set for the departure point. Now we have in the result set all points that one can reach in one step. Then, for every point in our result set, we create a union of the result set and the next_step set for that point. This process is iterated for a specified number of loops.

An example may clarify the procedure outlined above. When the departure point is B, we start by setting the result set to D and E: these are the points that one can reach from B in one step. Then, we walk through the result set. The first point that we encounter in the set is D, and we check what points can be reached from D in one step: these are C and F. So we add C and F to the result set. We knew that the points that can be reached from D in one step are C and F, because C and F are in the next_step set for D. So what we do is to merge the next_step set for point D into the result set. The merge is called a "union" in set theory. That handles D. The original result set also contained point E, but the next_step set for E is empty, so no more points are added. The new result set therefore now contains C, D, E and F.

A set is a general purpose container for elements. The only information that a set holds of an element is whether it is present in the set or not. The order of elements in a set is insignificant and a set cannot contain the same element multiple times. The pawn language does not provide a set data type or operators that work on sets.
However, sets with up to 32 elements can be simulated by bit operations. It takes just one bit to store a present/absent status, and a 32-bit cell can therefore maintain the status for 32 set elements, provided that each element is assigned a unique bit position. The relation between set operations and bitwise operations is summarized in the following table. In the table, an upper case letter stands for a set and a lower case letter for an element from that set.

    concept         mathematical notation    pawn expression
    intersection    A ∩ B                    A & B
    union           A ∪ B                    A | B
    complement      Ā                        ~A
    empty set       ε                        0
    membership      x ∈ A                    (1 << x) & A

To test for membership, that is, to query whether a set holds a particular element, create a set with just one element and take the intersection. If the result is 0 (the empty set), the element is not in the set. Bit numbering typically starts at zero; the lowest bit is bit 0 and the highest bit in a 32-bit cell is bit 31. To make a cell with only bit 7 set, shift the value 1 left by seven, or in a pawn expression: 1 << 7.
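A small fragment may help to see these idioms together (illustrative only; the element numbers are arbitrary):

    new set = 0             /* start with the empty set */
    set = set | (1 << 3)    /* add element 3 (union) */
    set = set & ~(1 << 3)   /* remove element 3 (intersect with complement) */
    if (set & (1 << 7))     /* membership test for element 7 */
        print "element 7 is in the set\n"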
Below is the program that implements the algorithm described earlier to find all points that can be reached from a specific departure in a given number of steps. The algorithm is completely in the findtargets function.

Listing: set.p

    /* Set operations, using bit arithmetic */
    const
        {
        A = 0b0000001,
        B = 0b0000010,
        C = 0b0000100,
        D = 0b0001000,
        E = 0b0010000,
        F = 0b0100000,
        G = 0b1000000,
        }

    main()
        {
        new nextstep[] =
            [ C | E,    /* A can reach C and E */
              D | E,    /* B  "    "   D and E */
              G,        /* C  "    "   G */
              C | F,    /* D  "    "   C and F */
              0,        /* E  "    "   none */
              0,        /* F  "    "   none */
              E | F,    /* G  "    "   E and F */
            ]

        print "The departure point: "
        new start = clamp(.value = toupper(getchar()) - 'A',
                          .min = 0,
                          .max = sizeof nextstep - 1
                         )
        print "\nThe number of steps: "
        new steps = getvalue()

        /* make the set */
        new result = findtargets(start, steps, nextstep)
        printf "The points in range of %c in %d steps: ", start + 'A', steps
        for (new i = 0; i < sizeof nextstep; i++)
            if (result & 1 << i)
                printf "%c ", i + 'A'
        }

    findtargets(start, steps, nextstep[], numpoints = sizeof nextstep)
        {
        new result = 0
        new addedpoints = nextstep[start]
        while (steps-- > 0 && result != addedpoints)
            {
            result = addedpoints
            for (new i = 0; i < numpoints; i++)
                if (result & 1 << i)
                    addedpoints |= nextstep[i]
            }
        return result
        }

The const statement just above function main declares the constants for the nodes A to G, using binary radix so that only a single bit is set in each value.

[const statement: 101]
[cellbits constant: 101]

When working with sets, a typical task that pops up is to determine the number of elements in the set. A straightforward function that does this is below:

Listing: simple bitcount function

    bitcount(set)
        {
        new count = 0
        for (new i = 0; i < cellbits; i++)
            if (set & (1 << i))
                count++
        return count
        }

With a cell size of 32 bits, this function's loop iterates 32 times to check for a single bit at each iteration. With a bit of binary arithmetic magic, we can reduce it to loop only for the number of bits that are set. That is, the following function iterates only once if the input value has only one bit set:

Listing: improved bitcount function

    bitcount(set)
        {
        new count = 0
        if (set)
            do
                count++
            while ((set = set & (set - 1)))
        return count
        }
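Why the loop runs once per set bit: the expression set & (set - 1) always clears the lowest bit that is set. A trace with an arbitrary example value (not from the manual) makes this visible:

    /* set             = 0b01010010   (3 bits set)
     * set - 1         = 0b01010001
     * set & (set - 1) = 0b01010000   after iteration 1 (count = 1)
     * next iteration  = 0b01000000   (count = 2)
     * next iteration  = 0b00000000   (count = 3), the loop ends
     */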
[Algebraic notation is also called "infix" notation]

A simple RPN calculator

The common mathematical notation, with expressions like 26 - 3 × (5 + 2), is known as the algebraic notation. It is a compact notation and we have grown accustomed to it. pawn, and by far most other programming languages, use the algebraic notation for their programming expressions.

The algebraic notation does have a few disadvantages, though. For instance, it occasionally requires that the order of operations is made explicit by folding a part of the expression in parentheses. The expression at the top of this paragraph can be rewritten to eliminate the parentheses, but at the cost of nearly doubling its length. In practice, the algebraic notation is augmented with precedence level rules that say, for example, that multiplication goes before addition and subtraction.* Precedence levels greatly reduce the need for parentheses, but they do not fully avoid them. Worse is that when the number of operators grows large, the hierarchy of precedence levels and the particular precedence level for each operator becomes hard to memorize, which is why an operator-rich language as APL does away with precedence levels altogether.

Around 1920, the Polish mathematician Jan Łukasiewicz demonstrated that by putting the operators in front of their operands, instead of between them, precedence levels became redundant and parentheses were never necessary. This notation became known as the "Polish Notation". Charles Hamblin proposed later to put the operators behind the operands, calling it the "Reverse Polish Notation".* The advantage of reversing the order is that the operators are listed in the same order as they must be executed: when reading the operators from the left to the right, you also have the operations to perform in that order.

The algebraic expression from the beginning of this section would read in rpn as:

    26 3 5 2 + × -

When looking at the operators only, we have: first an addition, then a multiplication and finally a subtraction. The operands of each operator are read from right to left: the operands for the + operator are the values 5 and 2, those for the × operator are the result of the previous addition and the value 3, and so on.

It is helpful to imagine the values to be stacked on a pile, where the operators take one or more operands from the top of the pile and put a result back on top of the pile. When reading through the rpn expression, the values 26, 3, 5 and 2 are stacked in that order. The + operator removes the top two elements from the stack (5 and 2) and pushes the sum of these values back; the stack now reads 26 3 7. Then, the × operator removes 3 and 7 and pushes the product of the values onto the stack; the stack is 26 21. Finally, the - operator subtracts 21 from 26 and stores the single value 5, the end result of the expression, back onto the stack.

Reverse Polish Notation became popular because it was easy to understand and easy to implement in (early) calculators. It also opens the way to operators with more than two operands (e.g. integration) or operators with more than one result (e.g. conversion between polar and Cartesian coordinates).

* These rules are often summarized in a mnemonic like "Please Excuse My Dear Aunt Sally" (Parentheses, Exponentiation, Multiplication, Division, Addition, Subtraction).
* Reverse Polish Notation is also called "postfix" notation. Polish Notation is completely unrelated to "Hungarian Notation", which is just the habit of adding type or purpose identification warts to names of variables or functions.

The main program for a Reverse Polish Notation calculator is below:

Listing: rpn.p

    /* a simple RPN calculator */
    #include strtok
    #include stack
    #include rpnparse
    main()
        {
        print "Type expressions in Reverse Polish Notation " ...
              "(or an empty line to quit)\n"
        new string{100}
        while (getstring(string, .pack = true))
            rpncalc string
        }

The main program contains very little code itself; instead, it includes the required code from three other files, each of which implements a few functions that, together, build the rpn calculator. When programs or scripts get larger, it is usually advised to spread the implementation over several files, in order to make maintenance easier.

Function main first puts up a prompt and calls the native function getstring to read an expression that the user types. Then it calls the custom function rpncalc to do the real work. Function rpncalc is implemented in the file rpnparse.inc, reproduced below:

Listing: rpnparse.inc

    /* main rpn parser and lexical analysis, part of the RPN calculator */
    #include <rational>
    #include <string>

    #define Token [
        .type,              /* operator or token type */
        Rational: .value,   /* value, if .type is "Number" */
        .word{20},          /* raw string */
        ]

    const Number = 0
    const EndOfExpr = '#'
    rpncalc(const string{})
        {
        new index
        new field[Token]
        for ( ;; )
            {
            field = gettoken(string, index)
            switch (field.type)
                {
                case Number:
                    push field.value
                case '+':
                    push pop() + pop()
                case '-':
                    push - pop() + pop()
                case '*':
                    push pop() * pop()
                case '/', ':':
                    push 1.0 / pop() * pop()
                case EndOfExpr:
                    break           /* exit "for" loop */
                default:
                    printf "Unknown operator '%s'\n", field.word
                }
            }
        printf "Result = %r\n", pop()
        if (clearstack())
            print "Stack not empty\n", red
        }

    gettoken(const string{}, &index)
        {
        /* first get the next "word" from the string */
        new word{20}
        word = strtok(string, index)

        /* then parse it */
        new field[Token]
        field.word = word
        if (strlen(word) == 0)
            {
            field.type = EndOfExpr  /* special "stop" symbol */
            field.value = 0
            }
        else if ('0' <= word{0} <= '9')
            {
            field.type = Number
            field.value = rationalstr(word)
            }
        else
            {
            field.type = word{0}
            field.value = 0
            }
        return field
        }

[Rational numbers: see also the Celsius to Fahrenheit example on page 14]

The rpn calculator uses rational number support, and rpnparse.inc includes the "rational" file for that purpose. Almost all of the operations on rational numbers are hidden in the arithmetic. The only direct references to rational numbers are the %r format code in the printf statement near the bottom of function rpncalc and the call to rationalstr halfway down function gettoken.
[Preprocessor macro: 92]

Near the top of the file rpnparse.inc is a preprocessor macro that declares the symbolic subscripts for an array. The macro name, Token, will be used throughout the program to declare arrays with those fields. For example, function rpncalc declares variable field as an array using the macro to declare the field names.

Arrays with symbolic subscripts were already introduced in the section "Arrays and symbolic subscripts" on page 19; this script shows another feature of symbolic subscripts: individual subscripts may have a tag name of their own. In this example, .type is a simple cell, .value is a rational value (with a fractional part) that is tagged as such, and .word can hold a string of 20 characters (including the terminating zero byte). See, for example, the line:

    printf "Unknown operator '%s'\n", field.word

for how the .word subscript of the field variable is used as a string.

[switch statement: 114]

If you know C/C++ or Java, you may want to look at the switch statement. The switch statement differs in a number of ways from the other languages that provide it. The cases are not fall-through, for example, which in turn means that the break statement for the case EndOfExpr breaks out of the enclosing loop, instead of out of the switch.
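A minimal fragment to contrast this with C (an illustration, using the token types from the listing): each case ends implicitly, so no per-case break is needed and at most one case runs:

    switch (field.type)
        {
        case Number:
            print "a number\n"
        case EndOfExpr:
            print "end of the expression\n"
        default:
            print "an operator\n"
        }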
At the top of the for loop in function rpncalc, you will find the instruction field = gettoken(string, index). As already exemplified in the wcount.p ("word count") program on page 17, functions may return arrays. It gets more interesting for a similar line in function gettoken: field.word = word, where word is an array for 20 characters and field is an array with 3 (symbolic) subscripts. However, as the .word subscript is declared as having a size of 20 characters, the expression field.word is considered a sub-array of 20 characters, precisely matching the array size of word.

Listing: strtok.inc

    /* extract words from a string (words are separated by white space) */
    #include <string>

    strtok(const string{}, &index)
        {
        new length = strlen(string)

        /* skip leading white space */
        while (index < length && string{index} <= ' ')
            index++

        /* store the word letter for letter */
        new offset = index          /* save start position of token */
        const wordlength = 20       /* maximum word length */
        new result{wordlength}      /* string to store the word in */
        while (index < length
               && string{index} > ' '
               && index - offset < wordlength - 1)
            {
            result{index - offset} = string{index}
            index++
            }
        result{index - offset} = EOS    /* zero-terminate the string */
        return result
        }

[wcount.p: 17]

Function strtok is essentially the same as the one used in the wcount.p example. It is implemented in a separate file for the rpn calculator program. Note that the strtok function as it is implemented here can only handle words with up to 19 characters; the 20th character is the zero terminator. A truly general purpose, re-usable implementation of an strtok function would pass the destination array as a parameter, so that it could handle words of any size. Supporting both packed and unpacked strings would also be a useful feature of a general purpose function.

When discussing the merits of Reverse Polish Notation, I mentioned that a stack is both an aid in visualizing the algorithm as well as a convenient method to implement an rpn parser. This example rpn calculator uses a stack with the ubiquitous functions push and pop. For error checking and resetting the stack, there is a third function that clears the stack.

Listing: stack.inc

    /* stack functions, part of the RPN calculator */
    #include <rational>

    static Rational: stack[50]
    static stackidx = 0

    push(Rational: value)
        {
        assert stackidx < sizeof stack
        stack[stackidx++] = value
        }

    Rational: pop()
        {
        assert stackidx > 0
        return stack[--stackidx]
        }

    clearstack()
        {
        assert stackidx >= 0
        if (stackidx == 0)
            return false
        stackidx = 0
        return true
        }

The file stack.inc includes the file "rational" again. This is technically not necessary (rpnparse.inc already included the definitions for rational number support), but it does not do any harm either and, for the sake of code re-use, it is better to make any file include the definitions of the libraries that it depends on.

Notice how the two global variables stack and stackidx are declared as static variables, using the keyword static instead of new. Doing this makes the global variables visible in that file only. For all other files in a larger project, the symbols stack and stackidx are invisible and they cannot (accidentally) modify the variables. It also allows the other modules to declare their own private variables with these names, so it avoids name clashing.
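The difference in visibility can be sketched with an invented fragment (not part of the calculator):

    /* file: module.inc */
    static hidden_count = 0     /* visible inside module.inc only */
    new shared_total = 0        /* visible to every file of the program */

Another file may declare its own static hidden_count without a name clash; there is only ever one shared_total, however.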
The rpn calculator is actually still a fairly small program, but it has been set up as if it were a larger program. It was also designed to demonstrate a set of elements of the pawn language; the example program could have been implemented more compactly.

Event-driven programming

All of the example programs that were developed in this chapter so far have used a flow-driven programming model: they start with main and the code determines what to do and when to request input. This programming model is easy to understand and it nicely fits most programming languages, but it is also a model that does not fit many real-life situations. Quite often, a program cannot simply process data and suggest that the user provides input only when it is ready for him/her. Instead, it is the user who decides when to provide input, and the program or script should be prepared to process it in an acceptable time, regardless of what it was doing at the moment.

The above description suggests that a program should therefore be able to interrupt its work and do other things before picking up the original task. In early implementations, this was indeed how such functionality was implemented: a multi-tasking system where one task (or thread) managed the background tasks and a second task/thread sat in a loop continuously requesting user input. This is a heavy-weight solution, however. A more light-weight implementation of a responsive system is what is called the event-driven programming model.

In the event-driven programming model, a program or script decomposes any lengthy (background) task into short manageable blocks and, in between, it is available for input. Instead of having the program poll for input, however, the host application (or some other sub-system) calls a function that is attached to the event, but only if the event occurs. A typical event is input.

Observe that input does not only come from human operators. Input packets can arrive over serial cables, network stacks, internal sub-systems such as timers and clocks, and all kinds of other equipment that you may have attached to your system. Many of the apparatus that produce input just send it. The arrival of such input is an event, just like a key press. If you do not catch the event, a few of them may be stored in an internal system queue, but once the queue is saturated, the events are simply dropped.

pawn directly supports the event-driven model, because it supports multiple entry points. The sole entry point of a flow-driven program is main; an event-driven program has an entry point for every event that it captures. When compared to the flow-driven model, event-driven programs often appear "bottom-up": instead of your program calling into the host application and deciding what to do next, your program is being called from the outside and it is required to respond appropriately and promptly.

pawn does not specify a standard library, and so there is no guarantee that a particular implementation provides functions like printf and getvalue. Although it is suggested that every implementation provides a minimal console/terminal interface with these functions, their availability is ultimately implementation-dependent. The same holds for the public functions, the entry points for a script: it is implementation-dependent which public functions a host application supports. The script in this section may therefore not run on your platform (even if all previous scripts ran fine). The tools in the standard distribution of the pawn system support all scripts developed in this manual, provided that your operating system or environment supports standard terminal functions such as setting the cursor position.

An early programming language that was developed solely for teaching the concepts of programming to children was Logo. This dialect of LISP made programming visual by having a small robot, the "turtle", drive over the floor under control of a simple program. This concept was then copied
[Public functions: 81]

to moving a (usually triangular) cursor of the computer display, again under control of a program. A novelty was that the turtle now left a trail behind it, allowing you to create drawings by properly programming the turtle; it became known as "turtle graphics". The term turtle graphics was also used for drawing interactively with the arrow keys on the keyboard and a "turtle" for the current position. This method of drawing pictures on the computer was briefly popular before the advent of the mouse.

Listing: turtle.p

    @keypressed(key)
        {
        /* get current position */
        new x, y
        wherexy x, y

        /* determine how to update the current position */
        switch (key)
            {
            case 'u': y--       /* up */
            case 'd': y++       /* down */
            case 'l': x--       /* left */
            case 'r': x++       /* right */
            case '\e': exit     /* Escape = exit */
            }

        /* adjust the cursor position and draw something */
        moveturtle x, y
        }

    moveturtle(x, y)
        {
        gotoxy x, y
        print "*"
        gotoxy x, y
        }

The entry point of the above program is @keypressed; it is called on a key press. If you run the program and do not type any key, @keypressed never runs; if you type ten keys, @keypressed runs ten times. Contrast this behaviour with main: function main runs immediately after you start the script, and it runs only once.

It is still allowed to add a main function to an event-driven program: the main function will then serve for one-time initialization. A simple addition to this example program is to add a main function, in order to clear the console/terminal window on entry and perhaps set the initial position of the turtle to the centre.
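Such a main function might look like the sketch below (an assumption for illustration: it uses the clrscr function from the console library and takes an 80 by 25 character display, so the "centre" coordinates are a guess rather than a given):

    main()
        {
        clrscr()            /* wipe the display on start-up */
        moveturtle 40, 12   /* put the turtle at the assumed centre */
        }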
Support for function keys and other special keys (e.g. the arrow keys) is highly system-dependent. On ANSI terminals, these keys produce different codes than in a Windows "DOS box". In the spirit of keeping the example program portable, I have used common letters ('u' for up, 'l' for left, etc.). This does not mean, however, that special keys are beyond pawn's capabilities.

In the turtle script, the Escape key terminates the host application through the instruction exit. For a simple pawn run-time host, this will indeed work. With host applications where the script is an add-on, or host applications that are embedded in a device, the script usually cannot terminate the host application.

Multiple events

The advantages of the event-driven programming model, for building reactive programs, become apparent in the presence of multiple events. In fact, the event-driven model is only useful if you have more than one entry point; if your script just handles a single event, it might as well enter a polling loop for that single event. The more events need to be handled, the harder the flow-driven programming model becomes.

The script below implements a bare-bones "chat" program, using only two events: one for sending and one for receiving. The script allows users on a network (or perhaps over another connection) to exchange single-line messages. The script depends on the host application to provide the native and public functions for sending and receiving datagrams and for responding to keys that are typed in. How the host application sends its messages, over a serial line or using TCP/IP, the host application may decide itself. The tools in the standard pawn distribution push the messages over the TCP/IP network, and allow for a "broadcast" mode so that more than two people can chat with each other.

Listing: chat.p

    #include <datagram>

    const cellchars = cellbits / charbits

    @receivestring(const message[], const source[])
        printf "[%s] says: %s\n", source, message

    @keypressed(key)
        {
        static string{100}
        static index
        if (key == '\e')
            exit                /* quit on Esc key */

        echo key
        if (key == '\r' || key == '\n'
            || index == sizeof string * cellchars)
            {
            string{index} = '\0'    /* terminate string */
            sendstring string
            index = 0
            string{index} = '\0'
            }
        else
            string{index++} = key
        }

    echo(key)
        {
        new string{2} = { 0 }
        string{0} = key == '\r' ? '\n' : key
        printf string
        }

The bulk of the above script handles gathering received key-presses into a string and sending that string after seeing the "enter" key. The Escape key ends the program. The function echo serves to give visual feedback of what the user types: it builds a zero-terminated string from the key and prints it.

Despite its simplicity, this script has the interesting property that there is no fixed or prescribed order in which the messages are to be sent or received; there is no query-reply scheme where each host takes its turn in talking and listening. A new message may even be received while the user is typing its own message.*

* As this script makes no attempt to separate received messages from typed messages (for example, in two different scrollable regions), the terminal/console will look confusing when this happens. With an improved user-interface, this simple script could indeed be a nice message-base chat program.

State programming

In a program following the event-driven model, events arrive individually, and they are also responded to individually. On occasion, though, an event is part of a sequential flow that must be handled in order. Examples are data transfer protocols over, for example, a serial line. Each event may carry a command, a snippet of data that is part of a larger file, an acknowledgement,
or other signals that take part in the protocol. For the stream of events (and the data packets that they carry) to make sense, the event-driven program must follow a precise hand-shaking protocol.

To adhere to a protocol, an event-driven program must respond to each event in compliance with the (recent) history of events received earlier and the responses to those events. In other words, the handling of one event may set up a condition or environment for the handling of one or more subsequent events.

A simple, but quite effective, abstraction for constructing reactive systems that need to follow (partially) sequential protocols is that of the automaton or state machine. As the number of states is usually finite, the theory often refers to such automatons as "Finite State Automatons" or "Finite State Machines". In an automaton, the context (or condition) of an event is its state. An event that arrives may be handled differently depending on the state of the automaton, and in response to an event, the automaton may switch to another state: this is called a transition. A transition, in other words, is a response of the automaton to an event in the context of its state.

Automatons are very common in software as well as in mechanical devices (you may see the Jacquard Loom as an early state machine). Automatons, with a finite number of states, are deterministic (i.e. predictable in behaviour) and their relatively simple design allows a straightforward implementation from a state diagram.

In a state diagram, the states are usually represented as circles or rounded rectangles and the arrows represent the transitions. As transitions are the
response of the automaton to events, an arrow may also be seen as an event that does something. An event/transition that is not defined in a particular state is assumed to have no effect: it is silently ignored. A filled dot represents the entry state, which your program (or the host application) must set on start-up. It is common to omit in a state diagram all event arrows that drop back into the same state, but for the preceding figure I have chosen to make the response to all events explicit.

This state diagram is for parsing comments that start with /* and end with */. There are states for plain text and for text inside a comment, plus two states for tentative entry into or exit from a comment. The automaton is intended to parse the comments interactively, from characters that the user types on the keyboard. Therefore, the only events that the automaton reacts on are key presses. Actually, there is only one event ("key-press") and the state switches are determined by the event's parameter: the key.

pawn supports automatons and states directly in the language. Every function* may optionally have one or more states assigned to it. pawn also supports multiple automatons, and each state is part of a particular automaton. The following script implements the preceding state diagram (in a single, anonymous, automaton). To differentiate plain text from comments, both are output in a different colour.

Listing: comment.p

/* parse C comments interactively, using events and a state machine */

main()
    state plain

@keypressed(key) <plain>
{
    state (key == '/') slash
    if (key != '/')
        echo key
}

@keypressed(key) <slash>
{
    state (key != '/') plain
    state (key == '*') comment
    echo '/'            /* print '/' held back from previous state */
    if (key != '/')
        echo key
}

@keypressed(key) <comment>
{
    echo key
    state (key == '*') star
}

@keypressed(key) <star>
{
    echo key
    state (key != '*') comment
    state (key == '/') plain
}

echo(key) <plain, slash>
    printchar key, yellow

echo(key) <comment, star>
    printchar key, green

printchar(ch, colour)
{
    setattr .foreground = colour
    printf "%c", ch
}

* With the exception of native functions and user-defined operators.

Function main sets the starting state to plain and exits; all logic is event-driven. When a key arrives in state plain, the program checks for a slash and conditionally prints the received key. The interaction between the states plain and slash demonstrates a complexity that is typical for automatons: you must decide how to respond to an event when it arrives, without being able to peek ahead or undo responses to earlier events. This is usually the case for event-driven systems: you neither know what event you will receive next, nor when you will receive it, and whatever your response to the current event, there is a good chance that you cannot erase it on a future event and pretend that it never happened. In our particular case, when a slash arrives, this might be the start of a comment sequence ("/*"), but it is not necessarily so. By inference, we cannot decide on reception of the slash character what colour to print it in. Hence, we hold it back. However, there is no global variable in the script that says that a character is held back; in fact, apart from function parameters, no variable is declared at all in this script. The information about a character being held back is hidden in the state of the automaton.

As is apparent in the script, state changes may be conditional. The condition is optional, and you can also use the common if else construct to change states.
Being state-dependent is not reserved for the event functions. Other functions may have state declarations as well, as the echo function demonstrates. When a function would have the same implementation for several states, you just need to write a single implementation and mention all applicable states. For function echo there are two implementations to handle the four states.

That said, an automaton must be prepared to handle all events in any state. Typically, the automaton has neither control over which events arrive nor over when they arrive, so not handling an event in some state could lead to wrong decisions. It frequently happens, then, that some events are meaningful only in a few specific states and that they should trigger an error or reset procedure in all other cases. The function for handling the event in such an error condition might then hold a lot of state names, if you were to mention them explicitly. There is a shorter way: by not mentioning any name between the angle brackets, the function matches all states that have no explicit implementation elsewhere. So, for example, you could use the signature echo(key) <> for either of the two implementations (but not for both).

A single anonymous automaton is pre-defined. If a program contains more than one automaton, the others must be explicitly mentioned, both in the state classifier of the function and in the state instruction. To do so, add the name of the automaton in front of the state name and separate the names of the automaton and the state with a colon. That is, parser:slash stands for the state slash of the automaton parser. A function can only be part of a single automaton; you can share one implementation of a function for several states of the same automaton, but you cannot share that function for states of different automatons.*

* A function that has the same implementation for all states does not need a state classifier at all; see printchar.
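To illustrate the syntax, here is a minimal sketch of my own (it is not taken from the manual; the state, automaton and colour names are invented, and "white" assumes such a colour constant exists alongside yellow and green):

echo(key) <>                        /* fallback: every state without an
                                     * explicit echo implementation */
    printchar key, white

@keypressed(key) <parser:slash>     /* state "slash" of automaton "parser" */
{
    state (key == '*') parser:comment
    state (key != '*') parser:plain
}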
Entry functions and automata theory

State machines, and the foundation of automata theory, originate from mechanical design and pneumatic/electric switching circuits (using relays rather than transistors). Typical examples are coin acceptors, traffic light control and communication switching circuits. In these applications, robustness and predictability are paramount, and it was found that these goals were best achieved when actions (output) were tied to the states rather than to the events (input). In this design, entering a state causes activity; events cause state changes, but do not carry out other operations.

[Figure 1: Pedestrian crossing lights]

In a pedestrian crossing lights system, the lights for the vehicles and the pedestrians must be synchronized. Technically, there are six possible combinations, but obviously the combination of a green light for the traffic and a "walk" sign for the pedestrians is a recipe for disaster. We can also immediately dismiss the combination of yellow/walk as too dangerous. Thus, four combinations remain to be handled. The figure below is a state diagram for the pedestrian crossing lights. The entire process is activated with a button, and operates on a timer.
When the state red/walk times out, the state cannot immediately go back to green/wait, because the pedestrians that are busy crossing the road at that moment need some time to clear the road; the state red/wait allows for this. For the purpose of demonstration, this pedestrian crossing has the added functionality that when a pedestrian pushes the button while the light for the traffic is already red, the time that the pedestrian has for crossing is lengthened: if the state is red/wait and the button is pressed, it switches back to red/walk. The enfolding box around the states red/walk and red/wait for handling the button event is just a notational convenience: I could also have drawn two arrows, one from either state, back to red/walk. The script source code (which follows below) reflects this same notational convenience, though.

In the implementation in the pawn language, the event functions now always have a single statement, which is either a state change or an empty statement. Events that do not cause a state change are absent in the diagram, but they must be handled in the script; hence the fall-back event functions that do nothing. The output, in this example program only messages printed on the console, is all done in the special functions entry. The function entry may be seen as a "main" for a state: it is implicitly called when the state that it is attached to is entered. Note that the entry function is also called when switching to the state that the automaton is already in: when the state is red_walk, an invocation of "state red_walk" sets the state to red_walk (which it is already in) and causes the entry function of red_walk to run; this is a re-entry of the state.

Listing: traffic.p

/* traffic light synchronizer, using states in an event-driven model */
#include <time>

main()                                  state green_wait

@keypressed(key) <green_wait>           state yellow_wait
@keypressed(key) <red_walk, red_wait>   state red_walk
@keypressed(key) <>                     { /* fallback */ }

@timer() <yellow_wait>                  state red_walk
@timer() <red_walk>                     state red_wait
@timer() <red_wait>                     state green_wait
@timer() <>                             { /* fallback */ }

entry() <green_wait>
    print "Green / Don't walk\n"

entry() <yellow_wait>
{
    print "Yellow / Don't walk\n"
    settimer 2000
}

entry() <red_walk>
{
    print "Red / Walk\n"
    settimer 5000
}

entry() <red_wait>
{
    print "Red / Don't walk\n"
    settimer 2000
}

This example program has an additional dependency on the host application/environment: in addition to the @keypressed event function, the host must also provide an @timer event. Because of the timing functions, the script includes the system file time.inc near the top of the script.

The event functions with the state changes are all in the top part of the script. The functions are laid out to take a single line each, to suggest a table-like structure. All state changes are unconditional in this example, but conditional state changes may be used with entry functions too. The bottom part holds the entry functions.

Two transitions to the state red_walk exist, or three if you count the attachment of multiple states to a single event function as a mere notational convenience: from yellow_wait and from the combination of red_walk and red_wait. These transitions all pass through the same entry function, thereby reducing and simplifying the code.

In automata theory, an automaton that associates activity with state entries, such as this pedestrian traffic lights example, is a "Moore automaton"; an automaton that associates activity with (state-dependent) events or transitions is a "Mealy automaton". The interactive comment parser on page 38 is a typical Mealy automaton. The two kinds are equivalent: a Mealy automaton can be converted to a Moore automaton and vice versa, although a Moore automaton may need more states to implement the same behaviour. In practice, the models are often mixed, with an overall Moore automaton design and a few Mealy states where that saves a state.
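To make the distinction concrete, here is a minimal sketch of my own (it is not from the manual; the state names and messages are invented) that implements the same two-state toggle in both styles:

/* Mealy style: the action is tied to the event/transition */
@keypressed(key) <mealy_off>
{
    print "switching on\n"      /* act on the event itself */
    state mealy_on
}

/* Moore style: the event only switches; the entry function acts */
@keypressed(key) <moore_off>
    state moore_on

entry() <moore_on>
    print "now on\n"            /* act on entering the state */

In the Mealy version, the action must be repeated on every transition that reaches the "on" state; in the Moore version, all such transitions share the single entry function.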
State variables

The model of a pedestrian crossing light in the previous example is not very realistic (its only goal is to demonstrate a few properties of state programming with pawn). The first thing that is lacking is a degree of fairness: pedestrians should not be able to block car traffic indefinitely. The car traffic should see a green light for a period of some minimum duration after pedestrians have had their time slot for crossing the road. Secondly, many traffic lights have a kind of remote control ability, so that emergency traffic (ambulance, fire truck, ...) can force green lights on their path. A well-known example of such remote control is the mirt system (Mobile Infra-Red Transmitter), but other systems exist; the Netherlands, for instance, use a radiographic system called vetag.

The new state diagram for the pedestrian crossing light has two more states, but more importantly: it needs to save data across events and share it between states. When the pedestrian presses the button while the state is red_wait, we neither want to react on the button immediately (this was our "fairness" rule), nor do we want the button press to be ignored or forgotten. In other words, we move to the state green_wait_interim regardless of the button press, but memorize the press for a decision made at the point of leaving state green_wait_interim.

Automatons excel in modelling control flow in reactive/interactive systems, but data flow has traditionally been a weak point. To see why, consider that each event is handled individually by a function and that the local variables in that function disappear when the function returns. Local variables can, hence, not be used to pass data from one event to the next. Global variables, while providing a work-around, have drawbacks: global scope and an eternal lifespan. If a variable is used only in the event handlers of a
single state, it is desirable to hide it from the other states, in order to protect it from accidental modification. Likewise, shortening the lifespan to the state(s) that the variable is active in reduces the memory footprint. State variables provide this mix of variable scope and variable lifespan that are tied to a series of states, rather than to functions or modules.

pawn enriches the standard finite state machine (or automaton) with variables that are declared with a state classifier. These variables are only accessible from the listed states, and the memory that these variables hold may be reused for other purposes while the automaton is in a different state (different from the ones listed). Apart from the state classifier, the declaration of a state variable is similar to that of a global variable. The declaration of the variable button_memo in the next listing illustrates the concept.

To reset the memorized button press, the script uses an exit function. Just like an entry function is called when entering a state, the exit function is called when leaving a state.

Listing: traffic2.p

/* a more realistic traffic light synchronizer, including an
 * "override" for emergency vehicles
 */
#include <time>

main()
    state green_wait_interim

new bool: button_memo <red_wait, green_wait_interim>

@keypressed(key)
{
    switch (key)
    {
    case ' ': button_press
    case '*': mirt_detect
    }
}

button_press() <green_wait>
    state yellow_wait

button_press() <red_wait, green_wait_interim>
    button_memo = true

button_press() <>                       /* fallback */
{
}

mirt_detect()
    state mirt_override

@timer() <yellow_wait>
    state red_walk
@timer() <red_walk>
    state red_wait

@timer() <red_wait>
    state green_wait_interim

@timer() <green_wait_interim>
{
    state (!button_memo) green_wait
    state (button_memo) yellow_wait
}

@timer() <mirt_override>
    state green_wait_interim

@timer() <>                             /* fallback */
{
}

entry() <green_wait_interim>
{
    print "Green / Don't walk\n"
    settimer 5000
}

exit() <green_wait_interim>
    button_memo = false

entry() <yellow_wait>
{
    print "Yellow / Don't walk\n"
    settimer 2000
}

entry() <red_walk>
{
    print "Red / Walk\n"
    settimer 5000
}

entry() <red_wait>
{
    print "Red / Don't walk\n"
    settimer 2000
}

entry() <mirt_override>
{
    print "Green / Don't walk\n"
    settimer 5000
}
State programming wrap-up

The common notation used in state diagrams is to indicate transitions with arrows and states with circles or rounded rectangles. The circle/rounded rectangle optionally also mentions the actions of an entry or exit function and of events that are handled internally without causing a transition. The arrow for a transition contains the name of the event (or pseudo-event), an optional condition between square brackets and an optional action behind a slash ("/").

States are ubiquitous, even if we do not always recognize them as such. The concept of finite state machines has traditionally been applied mostly to programs mimicking mechanical apparatus and software that implements communication protocols. With the appearance of event-driven windowing systems, state machines now also appear in the GUI design of desktop programs. States abound in web programs, because the browser and the web-site scripting host have only a weak link; that said, the state machine in web applications is typically implemented in an ad-hoc manner. States can also be recognized in common problems and riddles. In the well-known riddle of the man that must move a cabbage, a sheep and a wolf across a river,* the states are obvious; the trick of the riddle is to avoid the forbidden states.

But now that we are discovering states everywhere, we must be careful not to overdo it. For example, in the second implementation of a pedestrian crossing light (see page 45), I used a variable (button_memo) to hold a criterion for a decision made at a later time. An alternative implementation would be to throw in a couple more states to hold the situations "red-wait with button pressed" and "green-wait-interim with button pressed". No variable would then be needed, but at the cost of a more complex state diagram and implementation. In general, the number of states should be kept small.

Although automata provide a good abstraction to model reactive and interactive systems, coming to a correct diagram is not straightforward and sometimes just outright hard. Too often, the "sunny day" scenario of states and events is plotted out first, and everything straying from this path is then

* A man has to ferry a wolf, a sheep and a cabbage across a river in a boat, but the boat can only carry the man and a single additional item. If left unguarded, the wolf will eat the sheep and the sheep will eat the cabbage. How can the man ferry them across the river?
added on an impromptu basis. This approach carries the risk that some combinations of events and states are forgotten, and indeed I have encountered two comment parser diagrams (like the one at page 38) by different book/magazine authors that were flawed in such a way. Instead, I advise to focus on the events and on the responses to individual events. For every state, every event should be considered; do not route events through a general-purpose fall-back too eagerly.

It has become common practice, unfortunately, to introduce automata theory with applications for which better solutions exist. One oft-repeated example is that of an automaton that accumulates the value of a series of coins, or that calculates the remainder after division by 3 of a binary number. These applications may have made sense in mechanical/pneumatic design, where the state is the only memory that the automaton has, but in software, using variables and arithmetic operations is the better choice. Another typical example is that of matching words or patterns using a state machine: every next letter that is input switches to a new state. Lexical scanners, such as the ones that compilers and interpreters use to interpret source code, might use such state machines to filter out reserved words. However, for any practical set of reserved words, such automatons become unwieldy, and no one will design them by hand. In addition, there is no reason why a lexical scanner cannot peek ahead in the text or jump back to a mark that it set earlier (which is one of the criteria for choosing a state implementation in the first place), and finally, solutions like trie lookups are likely simpler to design and implement while being at least as quick.

Program verification

Should the compiler/interpreter not catch all bugs? This rhetorical question has both technical and philosophical sides. I will forego all non-technical aspects and only mention that, in practice, there is a trade-off between the expressiveness of a computer language and the enforced correctness (or "provable correctness") of programs in that language. Making a language very strict is not a solution if work needs to be done that exceeds the size of a toy program. A too strict language leaves the programmer struggling with the language, whereas the problem to solve should be the real struggle and the language should be a simple means to express the solution in.

The goal of the pawn language is to provide the developer with an informal, and convenient to use, mechanism to test whether the program behaves
as was intended. This mechanism is called "assertions" and, although the concept of assertions pre-dates the idea of "design by contract", it is most easily explained through the design-by-contract methodology.

The design-by-contract paradigm provides an alternative approach for dealing with erroneous conditions. The premise is that the programmer knows the task at hand, the conditions under which the software must operate and the environment. In such an environment, each function specifies the specific conditions, in the form of assertions, that must hold true before a client may execute the function. In addition, the function may also specify any conditions that hold true after it completes its operation. This is the "contract" of the function. The name "design by contract" was coined by Bertrand Meyer, and its principles trace back to predicate logic and algorithmic analysis.

- Preconditions specify the valid values of the input parameters and environmental attributes;
- Postconditions specify the output and the (possibly modified) environment;
- Invariants indicate the conditions that must hold true at key points in a function, regardless of the path taken through the function.

For example, a function that computes a square root of a number may specify its contract in these terms; an example square root function, using bisection, appears on page 77.

For preconditions, write assertions at the very start of the routine; for invariants, write an assertion where the invariant should hold; for postconditions, write an assertion before each return statement or at the end of the function. In pawn, the instruction is called assert; it is a simple statement that contains a test. If the test outcome is true, nothing happens. If the outcome is false, the assert instruction terminates the program with a message containing the details of the assertion that failed.
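As a minimal sketch of this style (the listing below is mine, not the manual's page-77 version, which may differ in detail), an integer square root computed by bisection could carry its contract in assert statements:

/* integer square root by bisection, with its contract as assertions */
sqroot(num)
{
    assert num >= 0             /* precondition: no negative input */
    new low = 0
    new high = num + 1
    while (high - low > 1)
    {
        /* invariant: low*low <= num < high*high on every iteration */
        assert low*low <= num && num < high*high
        new mid = (low + high) / 2
        if (mid*mid > num)
            high = mid
        else
            low = mid
    }
    /* postcondition: low is the integer square root of num */
    assert low*low <= num && num < (low+1)*(low+1)
    return low
}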
Assertions are checks that should never fail. Genuine errors, such as user input errors, should be handled with explicit tests in the program, and not with assertions. As a rule, the expressions contained in assertions should be free of side effects: an assertion should never contain code that your application requires for correct operation. This does have the effect, however, that assertions never "fire" in a bug-free program: they just make the code fatter and slower, without any user-visible benefit. It is not this bad, though. An additional feature of assertions is that you can build the source code without assertions, simply by using a flag or option to the pawn parser. The idea is that you enable assertions during development and build the retail version of the code without assertions. This is a better approach than removing the assertions, because all assertions are automatically back when recompiling the program, e.g. for maintenance.

During maintenance, or even during the initial development, if you catch a bug that was not trapped by an assertion, then before fixing the bug you should think of how an assertion could have trapped this error. Then, add this assertion and test whether it indeed catches the bug before fixing the bug. By doing this, the code will gradually become sturdier and more reliable.

Documentation comments

When programs become larger, documenting the program and the functions becomes vital for its maintenance, especially when working in a team. The pawn language tools have some features to assist you in documenting the code in comments. Documenting a program or library in its comments has a few advantages; for example: documentation is more easily kept up to date with the program, it is efficient in the sense that programming comments now double as documentation, and the parser helps your documentation efforts by generating syntax descriptions and cross-references.

Every comment that starts with three slashes ("///") followed by white space, or that starts with a slash and two stars ("/**") followed by white space, is a special documentation comment (see also "Comment syntax" on page 96). The pawn compiler extracts documentation comments and optionally writes these to a report file. See the application documentation, or appendix B, for how to enable the report generation.
As an aside, comments that start with /** must still be closed with */. Single-line documentation comments ("///") close at the end of the line.

The report file is an XML file that can subsequently be transformed to HTML documentation via an XSL/XSLT stylesheet, or be run through other tools to create printed documentation. The syntax of the report file is compatible with that of the .NET developer products, except that the pawn compiler stores more information in the report than just the extracted documentation strings. The report file contains a reference to the smalldoc.xsl stylesheet.

The example below illustrates documentation comments in a simple script that has a few functions. You may write documentation comments for a function above its declaration or in its body. All documentation comments that appear before the end of the function are attributed to the function. You can also add documentation comments to global variables and global constants; these comments must appear above the declaration of the variable or constant. Figure 2 shows part of the output for this (rather long) example. The style of the output is adjustable in the cascading style sheet (CSS file) associated with the XSLT transformation file.

Listing: weekday.p

/**
 * This program illustrates Zeller's congruence algorithm to calculate
 * the day of the week given a date.
 */

/**
 * <summary>
 *   The main program: asks the user to input a date and prints on
 *   what day of the week that date falls.
 * </summary>
 */
main()
{
    new day, month, year
    if (readdate(day, month, year))
    {
        new wkday = weekday(day, month, year)
        printf "The date %d-%d-%d falls on a ", day, month, year
        switch (wkday)
        {
        case 0: print "Saturday"
        case 1: print "Sunday"
        case 2: print "Monday"
        case 3: print "Tuesday"
        case 4: print "Wednesday"
        case 5: print "Thursday"
        case 6: print "Friday"
        }
    }
    else
        print "Invalid date"
    print "\n"
}

/**
 * <summary>
 *   The core function of Zeller's congruence algorithm. The function
 *   works for the Gregorian calendar.
 * </summary>
 *
 * <param name="day">
 *   The day in the month, a value between 1 and 31.
 * </param>
 * <param name="month">
 *   The month: a value between 1 and 12.
 * </param>
 * <param name="year">
 *   The year in four digits.
 * </param>
 *
 * <returns>
 *   The day of the week, where 0 is Saturday and 6 is Friday.
 * </returns>
 *
 * <remarks>
 *   This function does not check the validity of the date; when the
 *   date in the parameters is invalid, the returned "day of the week"
 *   will hold an incorrect value.
 *   <p/>
 *   This equation fails in many programming languages, notably most
 *   implementations of C, C++ and Pascal, because these languages have
 *   a loosely defined "remainder" operator. Pawn, on the other hand,
 *   provides the true modulus operator, as defined in mathematical
 *   theory and as was intended by Zeller.
 * </remarks>
 */
weekday(day, month, year)
{
    /**
     * <remarks>
     *   For Zeller's congruence algorithm, the months January and
     *   February are the 13th and 14th month of the <em>preceding</em>
     *   year. The idea is that the "difficult month" February (which
     *   has either 28 or 29 days) is moved to the end of the year.
     * </remarks>
     */
    if (month <= 2)
        month += 12, --year

    new j = year % 100
    new e = year / 100
    return (day + (month+1)*26/10 + j + j/4 + e/4 - 2*e) % 7
}

/**
 * <summary>
 *   Reads a date and stores it in three separate fields.
 * </summary>
 *
 * <param name="day">
 *   Will hold the day number upon return.
 * </param>
 * <param name="month">
 *   Will hold the month number upon return.
 * </param>
 * <param name="year">
 *   Will hold the year number upon return.
 * </param>
 *
 * <returns>
 *   <em>true</em> if the date is valid, <em>false</em> otherwise;
 *   if the function returns <em>false</em>, the values of
 *   <paramref name="day"/>, <paramref name="month"/> and
 *   <paramref name="year"/> cannot be relied upon.
 * </returns>
 */
bool: readdate(&day, &month, &year)
{
    print "Give a date (dd-mm-yyyy): "
    day = getvalue(_, '-', '/')
    month = getvalue(_, '-', '/')
    year = getvalue()
    return 1 <= month <= 12 && 1 <= day <= daysinmonth(month,year)
}

/**
 * <summary>
 *   Returns whether a year is a leap year.
 * </summary>
 *
 * <param name="year">
 *   The year in 4 digits.
 * </param>
 *
 * <remarks>
 *   A year is a leap year:
 *   <ul>
 *     <li> if it is divisible by 4, </li>
 *     <li> but <strong>not</strong> if it is divisible by 100, </li>
 *     <li> but it <strong>is</strong> if it is divisible by 400. </li>
 *   </ul>
 * </remarks>
 */
bool: isleapyear(year)
    return year % 400 == 0 || year % 100 != 0 && year % 4 == 0

/**
 * <summary>
 *   Returns the number of days in a month (the month is an integer
 *   in the range 1..12). One needs to pass in the year as well,
 *   because the function takes leap years into account.
 * </summary>
 *
 * <param name="month">
 *   The month number, a value between 1 and 12.
 * </param>
 * <param name="year">
 *   The year in 4 digits.
 * </param>
 */
daysinmonth(month, year)
{
    static daylist[] = [ 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 ]
    assert 1 <= month <= 12
    return daylist[month-1] + _:(month == 2 && isleapyear(year))
}

The format of the XML file created by the .NET developer products is documented in the Microsoft documentation. The pawn parser creates a minimal description of each function, global variable or global constant that is used in a project, regardless of whether you used documentation comments on that function/variable/constant. The parser also generates a few tags of its own:

attribute     Attributes of a function, such as "native" or "stock".

automaton     The automaton that the function belongs to (if any).

dependency    The names of the symbols (other functions, global variables
              and/or global constants) that the function requires. If
              desired, a call tree can be constructed from the dependencies.

[Figure 2: Documentation generated from the source code]

param         Function parameters. When you add a parameter description in
              a documentation comment, this description is combined with
              the auto-generated content for the parameter.

paraminfo     Tags and array or reference information on a parameter.

referrer      All functions that refer to this symbol; i.e., all functions
              that use or call this variable/function. This information is
              sufficient to serve as a cross-reference; the referrer tree
              is the inverse of the dependency tree.

stacksize     The estimated number of cells that the function will allocate
              on the stack and heap. This stack usage estimate excludes the
              stack requirements of any functions that are called from the
              function to which the documentation applies.
Opened 9 years ago
Closed 9 years ago
#9921 closed (duplicate)
request.urlconf incorrect behavior
Description
If the URLconf is loaded dynamically through request.urlconf, Django will not append the trailing slash. So if I have the rule

(r'^(?P<user>[a-zA-Z0-9-]{4,20})/$', 'user')

it can only be called with the trailing slash: the same URL without the slash will not work, and Django will try to load ROOT_URLCONF from the settings file instead.
If I have the same rules in the ROOT_URLCONF file, it works correctly.

Maybe it happens due to bad parsing of the hostname. I'm using subdomains, so I have x.site.com (which loads x.urls into request.urlconf) and y.site.com (which loads y.urls into request.urlconf). Prepending the project name (project.x.urls) does not solve the problem. The file is loaded correctly, but the slash is not appended. The global APPEND_SLASH setting is on. Maybe it is a bug in the cache system.

Don't know :(
Using the latest stable Django 1.0.2, Python 2.5, Windows XP SP3.
Dmitrij
Change History (11)
comment:3 Changed 9 years ago by
SOLVED:

In the file django\middleware\common.py, near line 55 (not sure, I've changed some sources):

if settings.APPEND_SLASH and (not old_url[1].endswith('/')):
    if (not _is_valid_path(request.path_info) and
            _is_valid_path("%s/" % request.path_info)):
        new_url[1] = new_url[1] + '/'

has to be

if settings.APPEND_SLASH and (not old_url[1].endswith('/')):
    print "not end with /"
    if (not _is_valid_path(request.path_info) &
            _is_valid_path("%s/" % request.path_info)):
        new_url[1] = new_url[1] + '/'
        print "new_url[1] = %s " % new_url[1]
if settings.DEBUG and request.method == 'POST':

For more info, see the Python documentation describing the difference between logical and arithmetic operators, or look through the following code executed in the standard Python shell ;)

>>> t = True
>>> f = False
>>> not t and f
False
>>> not t & f
True
comment:6 Changed 9 years ago by
I do not see how changing from using boolean to arithmetic/bitwise "AND" (but not "NOT") in code where we are dealing with boolean values could possibly be correct.
Given the precedence of boolean "AND" and "NOT", the code as written today is saying:

if (the path we have been asked to locate is not resolvable) and (adding a slash to the end makes it resolvable) then (proceed with setting things up so we'll do a redirect to the added-slash version of the url)
When you change the boolean "AND" to a bitwise one, but leave the boolean "NOT", the precedence changes (bitwise "AND" has higher precedence than boolean "NOT") and the code then says:
if not ("the path as given is resolvable" and "adding a slash to the end makes it resolvable") then (proceed with settings things up so we'll do a redirect to the added-slash version of the url)
which means that any not-resolvable-as-originally-specified path will get redirected to the added-slash version, which is incorrect. We only want to do the redirect when adding the slash actually results in a resolvable url.
I'm not re-closing as a dupe, because I'm not sure what it's supposed to be a dupe of. reverse() is not involved here. I do see there are two other tickets (#3530 and #5034) with other cases where
request.urlconf is not being taken into account when it should be. James tried to get those consolidated into one but that doesn't seem to have stuck. It does seem like we have a general problem with
request.urlconf not being considered in all the places it should be, and it would be good if they were all found and fixed together rather than piecemeal (higher likelihood of doing it in a consistent fashion), but I don't really know enough about the code involved here to say whether one consistent fix is a reasonable goal or if these really all need to be fixed individually as they are tripped over.
comment:7 Changed 9 years ago by
Thanks for the reply and the detailed description ;)

I've gone through the code again :)

The result: my changes go back ;)

Now look...
common.py:

def process_request(self, request):
    ...
    if (not _is_valid_path(request.path_info) and
            _is_valid_path("%s/" % request.path_info)):
        new_url[1] = new_url[1] + '/'
    ...
Going to _is_valid_path(...):

def _is_valid_path(path):
    try:
        urlresolvers.resolve(path)
        return True
    except urlresolvers.Resolver404:
        return False
(file urlresolvers.py)
def resolve(path, urlconf=None): #urlconf is always None (I hope) return get_resolver(urlconf).resolve(path)
Going to get_resolver(urlconf)  # urlconf = None

def get_resolver(urlconf):
    if urlconf is None:
        from django.conf import settings
        urlconf = settings.ROOT_URLCONF  # get default settings
    return RegexURLResolver(r'^/', urlconf)
get_resolver = memoize(get_resolver, _resolver_cache, 1)
I think this would be better, and no operators need to change ;)

common.py (+ means an added line; the urlconf argument in the _is_valid_path calls is also added):

def process_request(self, request):
    ...
    host = request.get_host()
    old_url = [host, request.path]
    new_url = old_url[:]
+   urlconf = None
    ...
    if (not _is_valid_path(request.path_info, urlconf) and
            _is_valid_path("%s/" % request.path_info, urlconf)):
        new_url[1] = new_url[1] + '/'
Going to _is_valid_path...

def _is_valid_path(path, urlconf=None):   # the urlconf parameter is added
    try:
        urlresolvers.resolve(path, urlconf)
        return True
    except urlresolvers.Resolver404:
        return False

and urlresolvers.resolve will get the correct urlconf from the request; if there is no urlconf, it will be taken from settings ;)
Any better ideas?
Thanks.
Dmitrij
comment:8 Changed 9 years ago by
Sorry, forgot one piece of code:

if settings.APPEND_SLASH and (not old_url[1].endswith('/')):
    # urlconf defaults to None
+   if hasattr(request, 'urlconf'):
+       urlconf = request.urlconf
    # was: if not (_is_valid_path(request.path_info) and _is_valid_path("%s/" % request.path_info)):
    if (not _is_valid_path(request.path_info, urlconf) and
            _is_valid_path("%s/" % request.path_info, urlconf)):
        new_url[1] = new_url[1] + '/'
comment:10 Changed 9 years ago by
Smiley, I agree with your assessment: the issue is that CommonMiddleware tries to see if it can reverse the url with a / appended. This is why I originally marked this as a dupe; it's a symptom, not a cause.

Marking as a duplicate: this is because reverse() doesn't take request.urlconf into account.