Dear,
How can I access the command history from Python?
Does the command history contain all the information about the command (not only the name of the command)?
Kind regards
Hi,
Does this give you what you want?
import rhinoscriptsyntax as rs
txt = rs.CommandWindow()
It’s a string that you’ll need to parse.
Alain
No, if you’re looking to reproduce the result of a command for example, you will not see anything in the command history like a mouse pick of a point, what objects were selected, etc. All it will give you is what the command line has reported back to you in serial order. It should more properly be called “CommandLineHistory”.
–Mitch
What I need is to get all necessary information to re-create the objects:
if I create a cylinder, and create a box than I do a union, I would like to get the 3 commands: cylinder,
box, and union along with the details (radius, position, height, length, …) to be able to recreate the
object.
Taoufik
When I run the python script I get an error: module object has no attribute CommandWindow
I am using Atom and Rhino5 on Mac
Taoufik
Yeah, it is rs.CommandHistory()
import rhinoscriptsyntax as rs
txt = rs.CommandHistory()
print txt
should work.
-Pascal
Hi
Where do I see the result of print? Does Rhino have a console?
Taoufik
You can actually show the command history in Mac Rhino in the right sidebar as one of the panels. --Mitch
But the command history does not show the result of 'print txt'. Where do I find
the result of 'print txt'?
Taoufik
With Python, any print statement will print to the “command line”. On Mac Rhino, that will only be one line to the tiny little lower left hand status bar, as Mac Rhino does not have a real command line per se. Therefore the last line is all you will see in that tiny space - unless you do something else. In order to see all of it you will either need to:
–Mitch
I wrote the following but the out.txt file is empty:
import rhinoscriptsyntax as rs
txt = rs.CommandHistory()
f = open('out.txt', 'w')
f.write(txt)
f.close()
Taoufik
Looks like you actually need a full path to the file for f.write() to work. --Mitch
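Following Mitch's advice, here is a small sketch of the fix — the helper name and the home-directory location are my own choices for illustration, not from the thread:

```python
import os

def save_history(txt, filename="out.txt"):
    # A relative path resolves against Rhino's current working directory,
    # which is usually not the folder you expect, so the file seems to be
    # missing or empty. Build an absolute path instead.
    path = os.path.join(os.path.expanduser("~"), filename)
    with open(path, "w") as f:  # 'with' closes the file even on error
        f.write(txt)
    return path

# Inside Rhino you would pass rs.CommandHistory() as txt.
print(save_history("sample command history"))
```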
Closed Bug 667132 Opened 11 years ago Closed 11 years ago
IonMonkey: Fix greedy register allocation around loops
Categories
(Core :: JavaScript Engine, defect)
Tracking
()
People
(Reporter: dvander, Assigned: dvander)
References
Details
Attachments
(2 files, 2 obsolete files)
This patch is really two things:

(1) Fixes lots of bugs around phis in the greedy register allocator.
(2) Introduces a new LDefinition policy, called REDEFINED.

adrake suggested this as a way to deal with two cases:
(a) box(constant-type, non-constant-data)
(b) unbox(constant->type, non-constant-data)

In both of these cases, we take in a register and basically do nothing. The allocator though sees a definition for these instructions and allocates a new register. "Redefinition" means that the definition specifies an existing virtual register, and the allocator ignores the definition completely. This required some rejiggering of the x86 code and I can't decide how ugly it is.

For the code: |var t; while (x) { t = t + 1; } return t;|

Before this patch:

B0:
5 parameter ([t:0 (arg -1)], [d:0 (arg -2)]) <|@
0 move ()[(arg:-2) -> (ebx)] <|@
7 box ([t:0 (edi)], [d:0 (esi)]) (c) <|@
9 goto (B1) <|@
B1:
0 move ()[(esi) -> (ecx)] <|@
12 unbox ([i:0 (ecx)]) (ecx), (edi) <|@
13 testvandbranch (B2, B3) (arg:-1), (ebx) <|@
B2:
0 move ()[(edi) -> (ecx)], [(esi) -> (edx)] <|@
14 return () (ecx), (edx) <|@
B3:
15 addi ([i:0 (ecx)]) (ecx), (c) <|@
16 box ([t:0 (ebx)], [d:0 (edx)]) (ecx) <|@
0 move ()[(ebx) -> (stack:i0)], [(edx) -> (stack:i1)] <|@
0 move ()[(stack:i0) -> (edi)], [(stack:i1) -> (esi)] <|@
18 goto (B1) <|@

After this patch:

B0:
5 parameter ([t (arg -1)], [d (arg -2)]) <|@
0 move ()[(arg:-2) -> (edi)] <|@
7 box ([t (ecx)], [d (esi)]) (c) <|@
9 goto () <|@
B1:
11 unbox ([i:11 (r)]) (ecx), (esi) <|@
12 testvandbranch (B2, B3) (arg:-1), (edi) <|@
B2:
0 move ()[(esi) -> (edx)] <|@
13 return () (ecx), (edx) <|@
B3:
14 addi ([i (esi)]) (esi), (c) <|@
15 box ([t (edi)], [d:15 (r)]) (esi) <|@
0 move ()[(edi) -> (ecx)], [(edi) -> (esi)] <|@
16 goto () <|@
Fixes two thinkos around virtual registers.
Attachment #541885 - Attachment is obsolete: true
Attachment #542218 - Flags: review?(adrake)
Comment on attachment 542218 [details] [diff] [review]
v2

Review of attachment 542218 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/ion/LIR-Common.h
@@ +49,5 @@
>
> // Register spill/restore/resolve marker.
> class LMove : public LInstructionHelper<0, 0, 0>
> {
> +    RegisterSet freeRegs;

Presumably this is so the code generator has access to a list of temporaries it can take advantage of? I can generate information of this form during reification or resolution without too much difficulty. It feels kind of hacky, but I can't think of a better solution to that problem.

::: js/src/ion/MIR.h
@@ +330,5 @@
>     void setEmitAtUses() {
>         setFlags(EMIT_AT_USES);
>     }
> +    bool isEffectful() {
> +        return hasFlags(EFFECTFUL);

This makes me want for the pseudo-monadic effect values Adobe's got...

::: js/src/ion/MIRGraph.h
@@ +411,4 @@
>     MBasicBlock *loopSuccessor_;
>
> +    // The predecessor block which is the backedge of a loop header.
> +    MBasicBlock *backedge_;

Isn't this necessarily the predecessor of the loop successor block?

::: js/src/ion/x86/Lowering-x86.cpp
@@ +102,5 @@
>     }
>
>     LUnbox *lir = new LUnbox(unbox->type());
> +    lir->setOperand(0, useType(inner));
> +    lir->setOperand(1, usePayloadInRegister(inner));

No use of VREG_DATA_OFFSET and VREG_TYPE_OFFSET in this kind of case?
(In reply to comment #2)
> ::: js/src/ion/MIRGraph.h
> @@ +411,4 @@
> >     MBasicBlock *loopSuccessor_;
> >
> > +    // The predecessor block which is the backedge of a loop header.
> > +    MBasicBlock *backedge_;
>
> Isn't this necessarily the predecessor of the loop successor block?

I've swung the other way on this one -- I actually want backedge in my register allocator, not loopSuccessor_ as the paper would have me believe. As I'm the only consumer of loopSuccessor_, want to just remove it while we're here?
I've rebased the patch in bug 657816 to make use of the backedge functionality.
This test case on at least x86_64 generates: Assertion failure: def->policy() == LDefinition::PRESET, at /home/adrake/moz/ionmonkey/js/src/ion/GreedyAllocator.cpp:713 with this patch applied. This assertion does not trip with this patch not applied.
Comment on attachment 542218 [details] [diff] [review]
v2

Due to regression.
Attachment #542218 - Flags: review?(adrake) → review-
After splitting critical edges the assert is gone. This is the same patch just rebased and removed stuff that shouldn't have been in there.
Attachment #542218 - Attachment is obsolete: true
Attachment #544006 - Flags: review?(adrake)
Comment on attachment 544006 [details] [diff] [review]
v3

Review of attachment 544006 [details] [diff] [review]:
-----------------------------------------------------------------
Attachment #544006 - Flags: review?(adrake) → review+
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Example is a little difficult to follow
I'm trying to follow the example for creating an application for weather and a night clock. I'm at the part where I need to turn each of my files into modules and components. I'm not sure how to start the first process and then create a qmldir. I tried going into the projects panel -> right click on the folder -> create a new file in the folder -> add the code and move it to another folder in my directory. After that I did the same thing but made the qmldir a text file. If anyone can explain to me where I misunderstood the instructions on how to do this it would be greatly appreciated. Thanks. Here are the example instructions:
bq. In order to create a component, you have to create a new file saved as
<NameOfComponent>.qml with only one root element in it (the same as you would
do with an ordinary Qt Quick application). It is important to underline that the name of the
file has to start with a capital letter. From now on, the new component will be available under
the name <NameOfComponent> to all other Qt Quick applications residing in the same
directory. Generally, files with a qml extension are referred to as QML Documents 1 .
When your work progresses, you will probably get many files in the application’s folder. Later
on, you might even have the need to host components in different versions. This is where Qt
Quick modules come to the rescue. The first step is to move all components (basically, files)
which belong to the same group of functionality in a new folder. Then you need to create a
qmldir file containing meta-information about these components in that folder.
- dheerendra Qt Champions 2017
How are you planning to keep your qml files? If your component directories are relative to the current directory, you don't have to do anything. Just place them in different directories under your current directory, then import those directories.
e.g.
dev3060/main.qml
dev3060/module1/Weather.qml
dev3060/module2/Cloud.qml
You can just import the directories module1 and module2 in main and start using Weather.qml and Cloud.qml.
Hope this helps.
In the project window I have it like this
Folder: SwordNote0_1
- SwordNote0_1.qmlproject
Folder: ModsComLib
- NightClockApp.qml
- WeatherApp.qml
- WeatherModelApp.qml
- SwordNote0_1.qml
Note sure if this is what you mean. Here is the code in the main qml file SwordNote0_1.qml:
@import QtQuick 2.2
import QtQuick.Controls 1.1
import QtQuick.Window 2.0
import "File:/home/phoenix/SwordNote0_1/ModsComLib/NightClockApp.qml"
import "File:/home/phoenix/SwordNote0_1/ModsComLib/WeatherApp.qml"
Rectangle {
id: root
property string defaultLocation: "Munich"
property int defaultInterval: 60 // in seconds
property bool showSeconds: true
property bool showDate: true
    width: 360
    height: 640

    Image {
        id: background
        source: "File:/home/phoenix/QtExamples/content/resources/light_background.png"
        fillMode: "Tile"
        anchors.fill: parent
        onStatusChanged: if (background.status == Image.Error)
                             console.log("Background image \"" + source + "\" cannot be loaded")
    }

    WeatherModelItem {
        id: weatherModelItem
        location: root.defaultLocation
        interval: root.defaultInterval
    }

    Component {
        id: weatherCurrentDelegate
        Weather {
            id: currentWeatherItem
            labelText: root.defaultLocation
            conditionText: model.condition
            tempText: model.temp_c + "C°"
        }
    }

    Component {
        id: weatherForecastDelegate
        Weather {
            id: forecastWeatherItem
            labelText: model.day_of_week
            conditionText: model.condition
            tempText: Logic.f2C(model.high) + "C° / " + Logic.f2C(model.low) + "C°"
        }
    }

    Column {
        id: clockAndWeatherScreen
        anchors.centerIn: root
        NightClock {
            id: clock
            height: 80
            width: 160
            showDate: root.showDate
            showSeconds: root.showSeconds
            textColor: Style.onlineClockTextColor
        }
        Repeater {
            id: currentWeatherView
            model: weatherModelItem.currentModel
            delegate: weatherCurrentDelegate
        }
        GridView {
            id: forecastWeatherView
            width: 300
            height: 300
            cellWidth: 150; cellHeight: 150
            model: weatherModelItem.forecastModel
            delegate: weatherForecastDelegate
        }
    }

    MouseArea {
        anchors.fill: parent
        onClicked: Qt.quit()
    }
}
@
I don't think you can import a QML file directly, this should not be working:
@
import "File:/home/phoenix/SwordNote0_1/ModsComLib/NightClockApp.qml"
@
You should import the directory instead, and QML files inside that directory can then be used. Eg:
@
import "File:/home/phoenix/SwordNote0_1/ModsComLib"
Rectangle {
NightClockApp {
...
}
...
}
@
qmldir is a file that defines your module name, components and their corresponding QML files. Here's an example of the directory structure of a module:
@
/home/phoenix/MyControls <-- This is a directory
/home/phoenix/MyControls/qmldir <-- qmldir is a plain text file. Note that it has no extension.
/home/phoenix/MyControls/MyButton.qml <-- a component
/home/phoenix/MyControls/MyRectangle.qml <-- another component
@
The contents of qmldir can be like the following. This will create a module called MyControls that has two components:
@
module MyControls
MyButton 1.0 MyButton.qml
MyRectangle 2.0 MyRectangle.qml
@
Finally you need to tell the QML engine how to locate the MyControls module. Otherwise you will get this error when you import: module "MyControls" is not installed. One way is to use the QML2_IMPORT_PATH environment variable; it must point at the directory that contains the module directory, so in our example you set QML2_IMPORT_PATH=/home/phoenix (not /home/phoenix/MyControls). The other way is to call QQmlEngine::addImportPath() with the same path.
After everything is set, you can import the module in your QML file:
@
import MyControls 2.0
MyRectangle {
...
MyButton {
...
}
}
@
How do I go about creating the qmldir correctly? I tried to right click on the ModsComLib folder and choose a general template but that would add a txt extension and I know the qmldir shouldn't have any. I could just modify the example qmldir file in the example and move it to the ModsComLib but it feels kind of cheap without knowing how its done.
So I messed up somewhere and the whole project is gone. I'm left with just the SwordNote0_1.qmlproject file. Back to the drawing board.
[quote author="dev3060" date="1408435649"]How do I go about creating the qmldir correctly?[/quote]
I am also puzzled by the inability to create qmldir elegantly within Qt Creator :) I usually create the file with another text editor, and may have to remove the extension by renaming it.
I did the same thing but poof everything just went away. I plan on just starting back at square one. Maybe, I'll discover what happened. I appreciate the help it did clear some things up. | https://forum.qt.io/topic/44982/example-is-a-little-difficult-to-follow | CC-MAIN-2018-34 | refinedweb | 999 | 51.75 |
Published in: JavaScript
A simple Javascript function to tell if one string starts with another.
Usage:
if (myString.startsWith("anything")) { do something… }
NOTE: As always, make sure this is defined before it is used!
if (typeof String.prototype.startsWith != 'function') {
  String.prototype.startsWith = function(str) {
    return (this.lastIndexOf(str, 0) === 0);
  };
}
Today for my 30 day challenge, I decided to learn how to do article extraction using the Python programming language. I have been interested in article extraction for a few month when I wanted to write a Prismatic clone. Prismatic creates a news feed based on user interest. Extracting article's main content, images, and other meta information is a very common requirement in most of the content discovery websites like Prismatic. In this blog post, we will learn how we can use a Python package called goose-extractor to accomplish this task. We will first cover some basics, and then we will develop a simple Flask application which will use the Goose Extractor API.
What is Goose Extractor?
Goose Extractor is an open source article extraction library written in Python. It can be used to extract the main text of an article, main image of an article, videos in an article, meta description, and meta tags in an article. Goose was originally written in Java by Gravity.com and then most recently converted to a scala project.
From the Goose Extractor website
Goose Extractor is a complete rewrite in python. The aim of the software is to take any news article or article type web page and not only extract what is the main body of the article but also all meta data and most probable image candidate.
Why should I care?
The reason I decided to learn Goose Extractor are as follows:
I wanted to develop applications which require article extraction. Goose Extractor stands on the strong shoulders of NTLK and Beautiful Soup, which are the leading libraries for text processing and HTML parsing.
I wanted to learn how article extraction can be done in Python.
Install Goose Extractor
Before we can get started with Goose Extractor, we need to install Python and virtualenv on the machine. The Python version I am using in this blog post is 2.7.
We will use pip to get started with Goose Extractor:

$ mkdir myapp && cd myapp
$ virtualenv venv --python=python2.7
$ source venv/bin/activate
$ pip install goose-extractor

The commands above will create a myapp directory on the local machine, then activate virtualenv with Python version 2.7, then install the goose-extractor package.
Github Repository
The code for today's demo application is available on github: day16-goose-extractor-demo.
Application
The demo application is running on OpenShift. It is a very simple example of using the Goose Extractor API. Users can submit a link, and the application will show the title, main image, and first 250 characters of the main text.

from flask import Flask, request, render_template, jsonify
from goose import Goose

app = Flask(__name__)

@app.route('/')
@app.route('/index')
def index():
    return render_template('index.html')

@app.route('/api/v1/extract')
def extract():
    url = request.args.get('url')
    g = Goose()
    article = g.extract(url=url)
    response = {'title': article.title, 'text': article.cleaned_text[:250], 'image': article.top_image.src}
    return jsonify(response)

if __name__ == "__main__":
    app.run(debug=True)
The code shown above does the following:
It imports the Flask class, request object, jsonify function, and render_template function from flask package.
It imports the Goose class from goose package.
It defines a route to '/' and 'index' url. So, if a user makes a GET request to either '/' or '/index', then the index.html will be rendered.
It defines a route to the '/api/v1/extract' url. We first get the 'url' query parameter from the request object. Then, we create an instance of the Goose class. Next, extract the article, and then finally, create a json object and return it back. The json object contains the title, cleaned text, and main image of the article.
<!DOCTYPE html>
<html>
<head>
    <title>Extract Title, Text, and Image from URL</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" type="text/css" href="static/css/bootstrap.css">
    <style type="text/css">
        body {
            padding-top: 60px;
            padding-bottom: 60px;
        }
    </style>
</head>
<body>
    <div class="navbar navbar-default navbar-fixed-top">
        <div class="container">
            <div class="navbar-header">
                <a class="navbar-brand" href="#">TextExtraction</a>
            </div>
        </div>
    </div>
    <div id="main" class="container">
        <form class="form-horizontal" role="form" id="myform">
            <div class="form-group">
                <div class="col-lg-4">
                    <input type="url" id="url" name="url" class="form-control" placeholder="Url you want to parse" required>
                </div>
            </div>
            <div class="form-group">
                <input type="submit" value="Extract" id="submitUrl" class="btn btn-success">
            </div>
        </form>
    </div>
    <div id="loading" style="display:none;" class="container">
        <img src="/static/images/loader.gif" alt="Please wait.." />
    </div>
    <div id="result" class="container"></div>
    <script type="text/javascript" src="static/js/jquery.js"></script>
    <script type="text/javascript">
        $("#myform").on("submit", function(event) {
            $("#result").empty();
            event.preventDefault();
            $('#loading').show();
            var url = $("#url").val();
            $.get('/api/v1/extract?url=' + url, function(result) {
                $('#loading').hide();
                $("#result").append("<h1>" + result.title + "</h1>");
                $("#result").append("<img src='" + result.image + "'/>");
                $("#result").append("<p>" + result.text + "</p>");
            });
        });
    </script>
</body>
</html>
You can copy the js and css files from my github repository.
In the HTML file shown above, we make a REST call on form submission. After we receive the response, we append it to result div.
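The JSON the success handler receives mirrors the dict that the Flask route passes to jsonify. As a sketch of that contract (this helper is purely illustrative, not code from the app):

```python
def build_response(title, cleaned_text, image_src):
    # Same shape as the /api/v1/extract payload: the article title,
    # the first 250 characters of the cleaned text, and the top image URL.
    return {
        'title': title,
        'text': cleaned_text[:250],
        'image': image_src,
    }

print(build_response('Sample', 'body text ' * 50, 'http://example.com/top.png'))
```

The frontend then reads result.title, result.text, and result.image from this object.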
Deploy to the cloud
Before we can deploy the application to our cloud environment, we need the rhc command-line tool installed. If you already have one, make sure it is the latest one. To update your rhc, execute the command shown below.
sudo gem update rhcFor additional assistance setting up the rhc command-line tool, see the following page:
Set up your OpenShift account using the rhc setup command. This command will help you create a namespace and upload your ssh keys to the OpenShift server.
To deploy the application on OpenShift just type the command shown below.
$ rhc create-app day16
I tried the demo, it returns a 500 error when you submit a URL.
Seems to be working for me.
Working fine for me as well. | https://www.openshift.com/blogs/day-16-goose-extractor-an-article-extractor-that-just-works | CC-MAIN-2014-15 | refinedweb | 923 | 58.58 |
When you select "Shut Down" from the Start menu,
a dialog appears with three options:
"Stand By", "Turn Off" and "Restart".
To get the secret fourth option "Hibernate" you have
to press the shift key.
Would the Alt key be the more obvious choice for
revealing alternate options?
You might think so, but it so happens that Alt already
has meaning.
In this dialog, the Alt key would be a disaster,
because the underlined letters indicate keyboard
accelerators, which are pressed in conjunction with the Alt
key.
In other words, from the Shut Down dialog,
you can type Alt+S to stand by, Alt+U to turn off,
or Alt+R to restart.
Since the Alt key was already taken,
the Shift key had to be used to reveal the bonus options.
Using the Shift key to reveal bonus options is not uncommon.
You can hold the Shift key while right-clicking on a file
to get
an extended context menu,
and of course there's the Shift+No option in file copy
dialogs to mean "No to all".
In fact, you don't need to press the Alt or Shift keys at all.
Recall that the rules for dialog box navigation permit omitting
the Alt key if the focus control does not accept character
input; since the only things on the Shut Down dialog are
pushbuttons, there is no character input and you can just press
"S", "U" or "R" without the Alt key.
What's more, you don't need to hold the Shift key if you want
to shut down; you can just type "H" and the Hibernate option
will be invoked,
because hotkeys for hidden controls are still active.
Last time,
I left you with a few questions.
Part of the answer to the first question
was given in the comments, so I'll just link to that.
The problem is more than just typeahead, though.
The dialog box doesn't show itself until all message traffic
has gone idle.
If you actually ran the code presented in the original message,
you'd find that it didn't actually work!
#include <windows.h>
INT_PTR CALLBACK
DlgProc(HWND hwnd, UINT uiMsg, WPARAM wParam, LPARAM lParam)
{
switch (uiMsg) {
case WM_INITDIALOG:
PostMessage(hwnd, WM_APP, 0, 0);
return TRUE;
case WM_APP:
MessageBox(hwnd,
IsWindowVisible(hwnd) ? TEXT("Visible")
: TEXT("Not Visible"),
TEXT("Title"), MB_OK);
break;
case WM_CLOSE:
EndDialog(hwnd, 0);
break;
}
return FALSE;
}
int WINAPI WinMain(HINSTANCE hinst, HINSTANCE hinstPrev,
LPSTR lpCmdLine, int nShowCmd)
{
DialogBox(hinst, MAKEINTRESOURCE(1), NULL, DlgProc);
return 0;
}
When you run this program, the message box says "Not Visible",
and in fact when it appears, you can see that the main dialog
is not yet visible.
It doesn't show up until after you dismiss the message box.
Mission: Not accomplished.
Along the way, there was some dispute over whether the
private message should be WM_USER or
WM_APP.
As we saw before,
window messages in the WM_USER range belong to the
window class,
and in this case, the window class is the dialog window class,
i.e., WC_DIALOG.
Since you are not the implementor of the dialog window class
(you didn't write the window procedure),
the WM_USER messages are not yours for the taking.
And in fact, if you had decided to use WM_USER
you would have run into all sorts of problems,
because it so happens that the dialog manager
already defined that message for its own purposes:
#define DM_GETDEFID (WM_USER+0)
When the dialog manager sends the dialog a DM_GETDEFID
message to obtain the default control ID,
you will think it's your WM_USER message and show your
dialog box.
It turns out that the dialog manager uses the default control ID
rather often, and as a result, you're going to display an awful
lot of dialog boxes.
(Even worse, your second dialog box will probably use the dialog
itself as the owner, which then leads to the problem of
having a dialog box with multiple modal children,
which then leads to disaster when they are dismissed by the
user in the wrong order.)
Okay, so we're agreed that we should use WM_APP
as the private message.
Some people suggested using a timer,
on the theory that timer messages are lower priority than
paint messages,
so the timer won't fire until all painting is done.
While that's true, it also doesn't help.
The relative priority of timer and paint messages comes
into play only if the window manager has to choose between
timers and paint messages when deciding which one to deliver
first.
But if there are no paint messages needed in the first place,
then timers are free to go ahead.
And when the window is not visible,
it doesn't need any paint messages.
In a sense, the timer approach misses the point entirely:
It's trying to take advantage of paint messages being higher priority
precisely in the scenario where there are no paint messages!
Let's demonstrate this by implementing the timer approach,
but I'm going to add a twist to make the race condition clearer:
...
INT_PTR CALLBACK
DlgProc(HWND hwnd, UINT uiMsg, WPARAM wParam, LPARAM lParam)
{
switch (uiMsg) {
case WM_INITDIALOG:
SetTimer(hwnd, 1, 1, 0);
Sleep(100); //simulate paging
return TRUE;
case WM_TIMER:
if (wParam == 1) {
KillTimer(hwnd, 1);
MessageBox(hwnd,
IsWindowVisible(hwnd) ? TEXT("Visible")
: TEXT("Not Visible"),
TEXT("Title"), MB_OK);
}
break;
case WM_CLOSE:
EndDialog(hwnd, 0);
break;
}
return FALSE;
}
If you run this program, you'll see the message "Not Visible".
I inserted an artificial Sleep(100) to simulate
the case where the code takes a page fault and has to wait
100ms for the code to arrive from the backing store.
(Maybe it's coming from the network or a CD-ROM,
or maybe the local hard drive is swamped with I/O and
you have to wait that long for your paging request to
become satisfied after all the other I/O requests active
on the drive.)
As a result of that Sleep(),
the dialog manager doesn't get a chance to empty the message
queue and show the window because the timer message is already
in the queue.
Result: The timer fires and the dialog is still hidden.
Some people waited for WM_ACTIVATE, but that
tells you when the window becomes active, which is not the
same as being shown, so it doesn't satisfy the original
requirements.
Others suggested waiting for WM_PAINT,
but a window can be visible without painting.
The WM_PAINT message arrives if the window's
client area is uncovered, but the caption bar might still be
visible even if the client area is covered.
Furthermore, while this addresses the problem if you interpret
"visible" as "results in pixels on the screen",
as opposed to IsWindowVisible,
you need to look behind the actual request to what the person
was really looking for.
(This is an important skill to have because people rarely ask
for what they want, but rather for what they think they want.)
The goal was to create a dialog box and have it look like the
user automatically clicked a button on it to call up a secondary dialog.
In order to get this look, the base dialog needs to be visible
before the secondary dialog can be displayed.
One approach is to show the second dialog on receipt of the
WM_SHOWWINDOW, but even that is too soon:
// In real life, this would be an instance variable
BOOL g_fShown = FALSE;
INT_PTR CALLBACK
DlgProc(HWND hwnd, UINT uiMsg, WPARAM wParam, LPARAM lParam)
{
switch (uiMsg) {
case WM_INITDIALOG:
return TRUE;
case WM_SHOWWINDOW:
if (wParam && !g_fShown) {
g_fShown = TRUE;
MessageBox(hwnd,
IsWindowVisible(hwnd) ? TEXT("Visible")
: TEXT("Not Visible"),
TEXT("Title"), MB_OK);
}
break;
case WM_CLOSE:
EndDialog(hwnd, 0);
break;
}
return FALSE;
}
(Subtlety: Why do I set g_fShown = TRUE before
displaying the message box?)
If you run this program, you will still get the message
"Not Visible" because WM_SHOWWINDOW is sent
as part of the entire window-showing process.
At the time you receive it, your window is in the
process of being show but it's not quite there yet.
The WM_SHOWWINDOW serves a similar purpose
to WM_INITDIALOG: To let you prepare the window
while it's still hidden so the user won't see ugly flashing
which would otherwise occur if you had done
your preparation after the window were visible.
Is there a message that is sent after the window has been shown?
There sure is: WM_WINDOWPOSCHANGED.
// In real life, this would be an instance variable
BOOL g_fShown = FALSE;

INT_PTR CALLBACK
DlgProc(HWND hwnd, UINT uiMsg, WPARAM wParam, LPARAM lParam)
{
 switch (uiMsg) {
 case WM_INITDIALOG:
  return TRUE;
 case WM_WINDOWPOSCHANGED:
  if ((((WINDOWPOS *)lParam)->flags & SWP_SHOWWINDOW) &&
      !g_fShown) {
   g_fShown = TRUE;
MessageBox(hwnd,
IsWindowVisible(hwnd) ? TEXT("Visible")
: TEXT("Not Visible"),
TEXT("Title"), MB_OK);
}
break;
case WM_CLOSE:
EndDialog(hwnd, 0);
break;
}
return FALSE;
}
This time, we get the message "Visible",
because WM_WINDOWPOSCHANGED is sent after
the window positioning negotiations are complete.
(The "ED" at the end emphasizes that it is delivered
after the operation has been done, as opposed to the "ING"
which is delivered while the operation is in progress.)
But wait, we're not out of the woods yet.
Although it's true that the window position negotiations
are complete, the message is nevertheless sent as part
of the whole window positioning process,
and there may be other things that need to be done
as part of the whole window-showing bookkeeping.
If you show the second dialog directly in your
WM_WINDOWPOSCHANGED handler,
then that clean-up won't happen until after the user
exits the second dialog.
For example, the window manager notifies Active Accessibility
of the completed window positioning operation after all
the window positions have settled down.
This reduces the likelihood that the accessibility tool will be told
"Okay, the window is shown" followed by
"Oh no wait, it moved again, ha ha!"
If you display the second dialog inside your
WM_WINDOWPOSCHANGED handler,
the screen reader will receive a bizarro sequence of events:
Notice that the "Main dialog shown" notification arrives out of
order because you did additional UI work before the previous operation
was complete.
As another example, the window may have been shown as part
of a multiple-window window positioning operation
such as one created by DeferWindowPos.
All the affected windows will get their WM_WINDOWPOSCHANGED
notifications one at a time,
and if your window happened to go first,
then those other windows won't know that they were repositioned
until after the user finishes with the nested dialog.
This may manifest itself in those other windows appearing to
be "stuck" since your dialog is holding up the subsequent
notifications with your nested dialog.
For example, a window might be trying to do
exactly what you're trying to do here,
but since you're holding up the remainder of the notifications,
that other window won't display its secondary dialog until
the user dismisses yours.
From the user's standpoint, that other window is "stuck"
for no apparent reason.
DeferWindowPos
Therefore, we need one more tweak to our solution.

// In real life, this would be an instance variable
BOOL g_fShown = FALSE;

INT_PTR CALLBACK
DlgProc(HWND hwnd, UINT uiMsg, WPARAM wParam, LPARAM lParam)
{
 switch (uiMsg) {
 case WM_INITDIALOG:
  return TRUE;
 case WM_WINDOWPOSCHANGED:
  if ((((WINDOWPOS *)lParam)->flags & SWP_SHOWWINDOW) &&
      !g_fShown) {
   g_fShown = TRUE;
   PostMessage(hwnd, WM_APP, 0, 0);
  }
break;
case WM_APP:
MessageBox(hwnd,
IsWindowVisible(hwnd) ? TEXT("Visible")
: TEXT("Not Visible"),
TEXT("Title"), MB_OK);
break;
case WM_CLOSE:
EndDialog(hwnd, 0);
break;
}
return FALSE;
}
When we learn that the dialog is being shown for the first time,
we post a message to ourselves to display the secondary dialog
and return from the WM_WINDOWPOSCHANGED handler.
This allows the window positioning operation to complete.
Everybody gets their notifications, they are all on board
with the state of the windows,
and only after everything has stabilized do we display our
message box.
This is a common thread to many types of window management.
Many window messages are notifications which are delivered
while the operation is still in progress.
You do not want to display new UI while handling those
notifications because that holds up the completion of the
original UI operation that generated the notification in the
first place.
Posting a message to yourself to complete the user interaction
after the original operation has stabilized is the standard solution.
It really all started with
Katamari Damacy
(塊魂).
The music for that game is so darned infectious, and it was
my fascination with that music that prompted my colleague to
loan me the CDs his wife bought while she traveled through Asia.
I already mentioned
China Dolls (中國娃娃).
Another of the CDs in the collection was
4th Ikimasshoi!
(4th いきまっしょい!
=
4th Let's Go!),
the um fourth album from the J-Pop group
Morning Musume
(モーニング娘 =
Morning Girls).
I'm sure somebody will correct my Japanese translation.
Yes, these are the girls who in the United States are probably known
only for
having pork chops tied to their foreheads while being stalked by
a lizard
or
being chased by American fighter Bob Sapp
or
being freaked out by a clip from the movie The Ring
or
traumatizing one of its members by dressing her up
like a seal and making her hang out at the polar bear tank.
From what I can gather, they aren't so much a pop music group
as a marketing phenomenon, what with their own television show
and endorsement contracts.
And yes, it's a singing group with thirteen members.
Thirteen.
When I first glanced at the album cover, I just assumed that it
was the same four or five singers dressed up in different costumes,
but no, it really is a group with a ridiculous number of members.
Their music is bubble-gum J-Pop, often
catchy,
but sometimes just plain
awful.
(And sometimes
really awful or
horrifically I-can't-even-watch-it awful.)
But I found at least the catchy tunes useful,
because they're energetic and kept me going on longer bicycle rides.
It probably helped that I didn't understand the words,
though I strongly suspect
they're singing about love.
(I also find that even the catchy songs tend to be ruined by the videos.)
Setting aside the musical merits,
I have to admire the logistics of organizing a performance of
such a large group.
Compare, for example,
this music video for
Osaka Koi no Uta
(大阪 恋の歌
= Osaka Love Song)
with a
live performance of same.
In the music video, you can just cut from one vocalist to the next,
but in the live performance, the singers have to physically trade places.
It's so complicated that some dedicated fans have
color-coded the lyrics to keep track of who sings what.
Another of my colleagues more tuned into the contemporary music scene
learned of my fascination with Japanese pop music
and dedicated himself to finding some good Japanese pop music,
just to show me that it's not all bubble-gum.
More on that in the next episode.
The purpose of quotation marks is to allow a character that
would normally be interpreted as a delimiter to be included
as part of a file name.
Most of the time, this delimiter is the space.
The CreateProcess function uses a space to
separate the program name from its arguments.
Most programs separate their command line arguments with a space.
But the PATH environment variable doesn't use spaces
to separate directories.
It uses semicolons.
CreateProcess
PATH
This means that if you want to add a directory with spaces in its
name to the path, you don't need quotation marks since spaces
mean nothing to the PATH environment variable.
The quotation marks don't hurt, mind you, but they don't help either.
On the other hand, if the directory you want to add contains
a semicolon in its name, then you do need the quotation marks,
because it's the semicolon that you need to protect...
The answer to the question
"What is the maximum number of window classes a program can register?"
is not a number.
Most user interface objects come from a shared pool of memory known
as the "desktop heap".
Although one could come up with a theoretical maximum number of
window classes that can fit in the desktop heap, that number is
not achievable in practice because the desktop heap is shared with
all other user interface objects on the desktop.
For example, menus and windows go into the desktop heap,
as well as more obscure objects like active window enumerations,
window positioning handles used by DeferWindowPos,
and even how many threads have attached input queues
(either implicitly done by having cross-thread parent/owner
relationships or explicitly by calling the
AttachThreadInput function).
The more windows and menus you have, the less space available
for other things like registered window classes.
AttachThreadInput
Typically, when somebody asks this question,
the real problem is that they designed a system to the point
where desktop heap exhaustion has become an issue,
and they need to redesign the program so they aren't so
wasteful of desktop heap resources in general.
(One customer, for example, was registering thousands of
window classes in their program, which is excessive.)
In the same way that somebody asking for the
maximum number of threads a process can create
is an indication that their program is in need of a redesign,
a program that registers thousands of window classes needs
to be given another look.
After all, even just creating a thousand windows is excessive—any
UI that shows the user a thousand windows is too confusing to be usable.
(Pre-emptive link:
Q126962: On the desktop heap.). | http://blogs.msdn.com/b/oldnewthing/archive/2006/09.aspx?PostSortBy=MostViewed&PageIndex=1 | CC-MAIN-2014-52 | refinedweb | 2,893 | 56.89 |
Here are the topics that have been cooking. Commits prefixed with '-' are only in 'pu' (proposed updates) while commits prefixed with '+' are in 'next'.
As we already have merged enough changes to 'master' during this
cycle that can potentially cause unforeseen regressions, let's not
merge topics that are not regression fixes from 'next' to 'master',
either, until the final release.

You can find the changes described here in the integration branches
of the repositories listed at

--------------------------------------------------
[Graduated to "master"]

* fc/remote-bzr (2013-04-30) 18 commits
 - remote-bzr: access branches only when needed
 - remote-bzr: delay peer branch usage
 - remote-bzr: iterate revisions properly
 - remote-bzr: improve progress reporting
 - remote-bzr: add option to specify branches
 - remote-bzr: add custom method to find branches
 - remote-bzr: improve author sanitazion
 - remote-bzr: add support for shared repo
 - remote-bzr: fix branch names
 - remote-bzr: add support for bzr repos
 - remote-bzr: use branch variable when appropriate
 - remote-bzr: fix partially pushed merge
 - remote-bzr: fixes for branch diverge
 - remote-bzr: add support to push merges
 - remote-bzr: always try to update the worktree
 - remote-bzr: fix order of locking in CustomTree
 - remote-bzr: delay blob fetching until the very end
 - remote-bzr: cleanup CustomTree

 To replace the one we pushed out in 1.8.2 after hearing that Emacs
 folks had a good experience with this version, this will be in
 1.8.3-rc2.

--------------------------------------------------
[New Topics]

* fc/fast-export-persistent-marks (2013-05-06) 3 commits
 - fast-export: don't parse commits while reading marks file
 - fast-export: do not parse non-commit objects while reading marks file
 - fast-{import,export}: use get_sha1_hex() directly

 Seems to break a handful of topics when merged to the tip of 'pu'.

* jc/core-checkstat-2.0 (2013-05-06) 2 commits
 - core.statinfo: remove as promised in Git 2.0
 - deprecate core.statinfo at Git 2.0 boundary

 The bottom one is a fix for a breakage of a new feature in 1.8.2
 but it is not all that urgent.
* jk/packed-refs-race (2013-05-06) 4 commits
 - for_each_ref: load all loose refs before packed refs
 - get_packed_refs: reload packed-refs file when it changes
 - add a stat_validity struct
 - resolve_ref: close race condition for packed refs

--------------------------------------------------
[Stalled]

* mg/more-textconv (2013-04-23) 7 commits
 - git grep: honor textconv by default
 - grep: honor --textconv for the case rev:path
 - grep: allow to use textconv filters
 - t7008: demonstrate behavior of grep with textconv
 - cat-file: do not die on --textconv without textconv filters
 - show: honor --textconv for blobs
 - t4030: demonstrate behavior of show with textconv

 Rerolled. I am not sure if I like "show <blob>" and "grep" that
 use textconv by default, though.

* mh/multimail (2013-04-21) 1 commit
 - git-multimail: a replacement for post-receive-email

 Waiting for comments.

*.

* jk/commit-info-slab (2013-04-19) 3 commits
 - commit-slab: introduce a macro to define a slab for new type
 - commit-slab: avoid large realloc
 - commit: allow associating auxiliary info on-demand
 (this branch is used by jc/show-branch.)

 Technology demonstration to show a way we could use unbound number
 of flag bits on commit objects.

*.

--------------------------------------------------
[Cooking]

* fc/at-head (2013-05-02) 5 commits
 - Add new @ shortcut for HEAD
 - sha1_name: refactor reinterpret()
 - sha1_name: compare variable with constant, not constant with variable
 - sha1_name: remove unnecessary braces
 - sha1_name: remove no-op

 Instead of typing four capital letters "HEAD", you can say "@"
 instead.

 There was another series from Ram that looked mostly test updates
 but I lost track of which one was which. In any case, are people
 happy with this series?

* jk/lookup-object-prefer-latest (2013-05-02) 1 commit
 (merged to 'next' on 2013-05-06 at cc59dcc)
 + lookup_object: prioritize recently found objects

 Optimizes object lookup when the object hashtable starts to become
 crowded.
* jk/subtree-do-not-push-if-split-fails (2013-05-01) 1 commit
 (merged to 'next' on 2013-05-06 at 81bdf37)
 + contrib/subtree: don't delete remote branches if split fails

 "git subtree" (in contrib/) had one codepath with loose error
 checks to lose data at the remote side.

* fc/completion (2013-04-27) 9 commits
 - completion: remove __git_index_file_list_filter()
 - completion: add space after completed filename
 - completion: add hack to enable file mode in bash < 4
 - completion: refactor __git_complete_index_file()
 - completion: refactor diff_index wrappers
 - completion: use __gitcompadd for __gitcomp_file
 - completion; remove unuseful comments
 - completion: document tilde expansion failure in tests
 - completion: add file completion tests

 I saw this discussed somewhat. Is everybody happy with this
 version? This is its v2, in the $gmane/222682 thread.

* jk/test-output (2013-05-06) 3 commits
 (merged to 'next' on 2013-05-06 at 7c03af3)
 + t/Makefile: don't define TEST_RESULTS_DIRECTORY recursively
 (merged to 'next' on 2013-05-01 at 63827c9)
 + test output: respect $TEST_OUTPUT_DIRECTORY
 + t/Makefile: fix result handling with TEST_OUTPUT_DIRECTORY

 When TEST_OUTPUT_DIRECTORY setting is used, it was handled somewhat
 inconsistently between the test framework and t/Makefile, and logic
 to summarize the results looked at a wrong place.

 Will cook in 'next'.

* rj/mingw-cygwin (2013-04-28) 2 commits
 - cygwin: Remove the CYGWIN_V15_WIN32API build variable
 - mingw: rename WIN32 cpp macro to GIT_WINDOWS_NATIVE

 Cygwin portability; both were reviewed by Jonathan, and the tip one
 seems to want a bit further explanation.

 Needs positive report from Cygwin 1.7 users who have been on 1.7 to
 make sure it does not regress for them.
* rj/sparse (2013-04-28) 10 commits
 (merged to 'next' on 2013-05-01 at 649e16c)
 + sparse: Fix mingw_main() argument number/type errors
 + compat/mingw.c: Fix some sparse warnings
 + compat/win32mmap.c: Fix some sparse warnings
 + compat/poll/poll.c: Fix a sparse warning
 + compat/win32/pthread.c: Fix a sparse warning
 + compat/unsetenv.c: Fix a sparse warning
 + compat/nedmalloc: Fix compiler warnings on linux
 + compat/nedmalloc: Fix some sparse warnings
 + compat/fnmatch/fnmatch.c: Fix a sparse error
 + compat/regex/regexec.c: Fix some sparse warnings

 Will cook in 'next'.

* js/transport-helper-error-reporting-fix (2013-04-28) 3 commits
 (merged to 'next' on 2013-04-29 at 8cc4bb8)
 + git-remote-testgit: build it to run under $SHELL_PATH
 + git-remote-testgit: further remove some bashisms
 + git-remote-testgit: avoid process substitution
 (this branch uses fc/transport-helper-error-reporting.)

 Finishing touches to fc/transport-helper-error-reporting topic.

 Will cook in 'next'.

* mh/fetch-into-shallow (2013-05-02) 2 commits
 (merged to 'next' on 2013-05-03 at 3fadc61)
 + t5500: add test for fetching with an unknown 'shallow'
 (merged to 'next' on 2013-04-29 at a167d3e)
 + upload-pack: ignore 'shallow' lines with unknown obj-ids

 Will cook in 'next'.

* kb/full-history-compute-treesame-carefully (2013-05-06) 11 commits
 - revision.c: treat A...B merge bases as if manually specified
 - revision.c: discount side branches when computing TREESAME
 - simplify-merges: drop merge from irrelevant side branch
 - simplify-merges: never remove all TREESAME parents
 - t6012: update test for tweaked full-history traversal
 - revision.c: Make --full-history consider more merges
 - rev-list-options.txt: correct TREESAME for P
 - t6111: allow checking the parents as well
 - t6111: new TREESAME test set
 - t6019: test file dropped in -s ours merge
 - decorate.c: compact table when growing

 Major update to a very core part of the system to improve culling of
 irrelevant parents while traversing a mergy history.
 Will not be a 1.8.3 material, but is an important topic.

* jh/checkout-auto-tracking (2013-04-21) 8 commits
 (merged to 'next' on 2013-04-22 at 2356700)
 +>'

 Updates ".

 Will cook in 'next'.

* jc/prune-all (2013-04-25) 4 commits
 (merged to 'next' on 2013-04-26 at 97a7387)
 + prune: introduce OPT_EXPIRY_DATE() and use it
 (merged to 'next' on 2013-04-22 at b00ccf6)
 + api-parse-options.txt: document "no-" for non-boolean options
 + git-gc.txt, git-reflog.txt: document new expiry options
 + date.c: add parse_expiry_date()
 (this branch is used by mh/packed-refs-various.).

 Will cook in 'next'.

* as/check-ignore (2013-04-29) 6 commits
 (merged to 'next' on 2013-04-30 at 646931f)
 + t0008: use named pipe (FIFO) to test check-ignore streaming
 (merged to 'next' on 2013-04-21 at 7515aa8)
 + Documentation: add caveats about I/O buffering for check-{attr,ignore}
 + check-ignore: allow incremental streaming of queries via --stdin
 + check-ignore: move setup into cmd_check_ignore()
 + check-ignore: add -n / --non-matching option
 + t0008: remove duplicated test fixture data

 Enhance "check-ignore" (1.8.2 update) to work more like "check-attr"
 over bidi-pipes.

 Will cook in 'next'.
* mh/packed-refs-various (2013-05-01) 33 commits
 (merged to 'next' on 2013-05-01 at e527153)
 + refs: handle the main ref_cache specially
 + refs: change do_for_each_*() functions to take ref_cache arguments
 + pack_one_ref(): do some cheap tests before a more expensive one
 + pack_one_ref(): use write_packed_entry() to do the writing
 + pack_one_ref(): use function peel_entry()
 + refs: inline function do_not_prune()
 + pack_refs(): change to use do_for_each_entry()
 + refs: use same lock_file object for both ref-packing functions
 + pack_one_ref(): rename "path" parameter to "refname"
 + pack-refs: merge code from pack-refs.{c,h} into refs.{c,h}
 + pack-refs: rename handle_one_ref() to pack_one_ref()
 + refs: extract a function write_packed_entry()
 + repack_without_ref(): write peeled refs in the rewritten file
 + t3211: demonstrate loss of peeled refs if a packed ref is deleted
 + refs: change how packed refs are deleted
 + search_ref_dir(): return an index rather than a pointer
 + repack_without_ref(): silence errors for dangling packed refs
 + t3210: test for spurious error messages for dangling packed refs
 + refs: change the internal reference-iteration API
 + refs: extract a function peel_entry()
 + peel_ref(): fix return value for non-peelable, not-current reference
 + peel_object(): give more specific information in return value
 + refs: extract function peel_object()
 + refs: extract a function ref_resolves_to_object()
 + repack_without_ref(): use function get_packed_ref()
 + peel_ref(): use function get_packed_ref()
 + get_packed_ref(): return a ref_entry
 + do_for_each_ref_in_dirs(): remove dead code
 + refs: define constant PEELED_LINE_LENGTH
 + refs: document how current_ref is used
 + refs: document do_for_each_ref() and do_one_ref()
 + refs: document the fields of struct ref_value
 + refs: document flags constants REF_*
 (this branch uses jc/prune-all.)

 Updates reading and updating packed-refs file, correcting corner
 case bugs.

 Will cook in 'next'.
* fc/transport-helper-error-reporting (2013-04-25) 10 commits
 (merged to 'next' on 2013-04-25 at 3358f1a)
 + t5801: "VAR=VAL shell_func args" is forbidden
 (merged to 'next' on 2013-04-22 at 5ba6467)
 + transport-helper: update remote helper namespace
 + transport-helper: trivial code shuffle
 + transport-helper: warn when refspec is not used
 + transport-helper: clarify pushing without refspecs
 + transport-helper: update refspec documentation
 + transport-helper: clarify *:* refspec
 + transport-helper: improve push messages
 + transport-helper: mention helper name when it dies
 + transport-helper: report errors properly
 (this branch is used by js/transport-helper-error-reporting-fix.)

 Update transport helper to report errors and maintain ref hierarchy
 used to keep track of remote helper state better.

 Will cook in 'next', but may be 1.8.3 material depending on how
 things go.

*.

 Will cook in 'next'.

* jl/submodule-mv (2013-04-23) 5 commits
 (merged to 'next' on 2013-04-23 at c04f574)
 + submodule.c: duplicate real_path's return value
 (merged to 'next' on 2013-04-19 at 45ae3c9)
 +.

 Will cook in 'next'.

* jn/add-2.0-u-A-sans-pathspec (2013-04-26) 1 commit
 - git add: -u/-A now affects the entire working tree

 Will cook in 'next' until Git 2.0.

* nd/magic-pathspecs (2013-03-31) 45 commits
 . Rename field "raw" to "_raw" in struct pathspec
 .: a special flag for max_depth feature
 . Convert some get_pathspec() calls to parse_pathspec()
 . parse_pathspec: add PATHSPEC_PREFER_{CWD,FULL}
 ."
 . setup.c: check that the pathspec magic ends with ")"

 Migrate the rest of codebase to use "struct pathspec" more.

 This has nasty conflicts with kb/status-ignored-optim-2,
 as/check-ignore and tr/line-log; I've already asked Duy to hold
 this and later rebase on top of them.

 Will defer.
* tr/line-log (2013-04-22) 13 commits
 (merged to 'next' on 2013-04-22 at 8f2c1de)
 + git-log(1): remove --full-line-diff description
 (merged to 'next' on 2013-04-21 at cd92620)
 + line-log: fix documentation formatting
 (merged to 'next' on 2013-04-15 at 504559e)
 + log -L: improve comments in process_all_files()
 + log -L: store the path instead of a diff_filespec
 + log -L: test merge of parallel modify/rename
 + t4211: pass -M to 'git log -M -L...' test
 (merged to 'next' on 2013-04-05 at 5afb00c)
 + log -L: fix overlapping input ranges
 + log -L: check range set invariants when we look it up
 (merged to 'next' on 2013-04-01 at 5be920c)
 + Speed up log -L... -M
 + log -L: :pattern:file syntax to find by funcname
 + Implement line-history search (git log -L)
 + Export rewrite_parents() for 'log -L'
 + Refactor parse_loc

 Will cook in 'next'.

* jc/push-2.0-default-to-simple (2013-04-03) 1 commit
 - push: switch default from "matching" to "simple"

 The early bits to adjust the tests have been merged to 'master'.

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to [email protected]
More majordomo info at
Fortunately, in our house we don't need to clean out the space before we put something else there. Since Object 2 is not needed, as the GC we'll move Object 3 down and fix the pointer in Object 1.
Next, as the GC, we'll copy Object 5 down
Now that everything is cleaned up we just need to write a sticky note and put it on the top of our compacted heap to let Claire know where to put new objects.
Knowing the nitty-gritty of GC helps in understanding that moving objects around can be very taxing. As you can see, it makes sense that if we can reduce the size of what we have to move, we'll improve the whole GC process because there will be less to copy.
What about things outside the managed heap?
As the person responsible for garbage collection, one problem we run into in cleaning house is how to handle objects in the car. When cleaning, we need to clean everything up. What if the laptop is in the house and the batteries are in the car?
There are situations where the GC needs to execute code to clean up non-managed resources such as files, database connections, network connections, etc. One possible way to handle this is through a finalizer.
class Sample
{
~Sample()
{
// FINALIZER: CLEAN UP HERE
}
}
Object 2 is treated in the usual fashion. However, when we get to Object 4, the GC sees that it is on the finalization queue and instead of reclaiming the memory Object 4 owns, Object 4 is moved and its finalizer is added to a special queue named freachable.
There is a dedicated thread for executing freachable queue items. Once the finalizer is executed by this thread on Object 4, it is removed from the freachable queue. Then and only then is Object 4 ready for collection.
So Object 4 lives on until the next round of GC.
Because adding a finalizer to our classes creates additional work for the GC, it can be very expensive and can adversely affect the performance of garbage collection and thus our program. Only use finalizers when you are absolutely sure you need them.
A better practice is to be sure to clean up non-managed resources. As you can imagine, it is preferable to explicitly close connections and use the IDisposable interface for cleaning up instead of a finalizer where possible.
IDisposable
Classes that implement IDisposable perform clean-up in the Dispose() method (which is the only signature of the interface). So if we have a ResourceUser class, instead of using a finalizer as follows:

public class ResourceUser
{
   ~ResourceUser() // THIS IS A FINALIZER
   {
      // DO CLEANUP HERE
   }
}

...it is preferable to implement IDisposable and do the clean-up in Dispose():

public class ResourceUser : IDisposable
{
   #region IDisposable Members

   public void Dispose()
   {
      // CLEAN UP HERE!!!
   }

   #endregion
}

The using block calls Dispose() for us as soon as the object goes out of scope:

public static void DoSomething()
{
   ResourceUser rec = new ResourceUser();
   using (rec)
   {
      // DO SOMETHING
   } // DISPOSE CALLED HERE

   // DON'T ACCESS rec HERE
}

A more compact form declares the variable in the using statement itself:

using (ResourceUser rec = new ResourceUser())
{
   // DO SOMETHING
} // DISPOSE CALLED HERE
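When a class wraps an unmanaged resource and you want both deterministic clean-up and a safety net, the two mechanisms are commonly combined in the standard "dispose pattern". This is a simplified sketch, not code from this article; GC.SuppressFinalize tells the GC that the finalizer no longer needs to run once Dispose() has already done the work:

```csharp
public class ResourceUser : IDisposable
{
    private bool _disposed = false;

    public void Dispose()
    {
        if (!_disposed)
        {
            // CLEAN UP HERE (release files, connections, etc.)
            _disposed = true;

            // Dispose already did the clean-up, so take this object off
            // the finalization bookkeeping; it can now be collected
            // without a trip through the freachable queue.
            GC.SuppressFinalize(this);
        }
    }

    ~ResourceUser() // safety net; runs only if Dispose() was never called
    {
        Dispose();
    }
}
```

This way the finalization cost described above (the extra GC round and the freachable queue) is paid only in the rare case where a caller forgets to dispose.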
Static Variables: Watch Out!
class Counter
{
private static int s_Number = 0;
public static int GetNextNumber()
{
int newNumber = s_Number;
// DO SOME STUFF
s_Number = newNumber + 1;
return newNumber;
}
}
If two threads call GetNextNumber() at the same time and both are assigned the same value for newNumber before s_Number is updated, both threads will return the same "unique" number. To keep this from happening we have to synchronize access to the shared state, for example with a lock:

class Counter
{
   private static int s_Number = 0;

   public static int GetNextNumber()
   {
      lock (typeof(Counter))
      {
         int newNumber = s_Number;

         // DO SOME STUFF

         newNumber += 1;
         s_Number = newNumber;
         return newNumber;
      }
   }
}
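For a counter this simple there is also a lock-free alternative worth knowing (a sketch, not from the article): the Interlocked class performs the read-increment-write as one atomic operation.

```csharp
using System.Threading;

class Counter
{
    private static int s_Number = 0;

    public static int GetNextNumber()
    {
        // Atomically adds 1 to s_Number and returns the new value, so no
        // two threads can ever observe the same result.
        return Interlocked.Increment(ref s_Number);
    }
}
```

As an aside, when a lock is used, locking a private static object is generally preferred over lock (typeof(Counter)), because the Type object is publicly reachable and any other code could take the same lock.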
The next thing we have to watch out for is objects referenced by static variables. Remember how anything that is referenced by a "root" is not cleaned up. Here's one of the ugliest examples I can come up with:

class Olympics
{
   public static Collection<Runner> TryoutRunners;
}

class Runner
{
   private string _fileName;
   private FileStream _fStream;

   public void GetStats()
   {
      FileInfo fInfo = new FileInfo(_fileName);
      _fStream = fInfo.OpenRead();

      // DO SOME STUFF
   }
}

Because the collection of runners is static, it is a root for as long as the program runs. Every Runner it references, and the FileStream each one opened in GetStats(), stays reachable and will never be collected, so the open file handles pile up for the life of the application.
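One way out, in line with the IDisposable advice above, is to scope the stream to the method instead of parking it in a field (a sketch, not from the original article):

```csharp
class Runner
{
    private string _fileName;

    public void GetStats()
    {
        FileInfo fInfo = new FileInfo(_fileName);

        // The stream lives only for the duration of this block; even
        // though the Runner itself stays rooted by the static collection,
        // the file handle is released as soon as we are done with it.
        using (FileStream fStream = fInfo.OpenRead())
        {
            // DO SOME STUFF
        }
    }
}
```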
Singleton
One trick to keep things light is to keep only one instance of a utility class in memory at all times. One easy way to do this is to use the GOF Singleton Pattern.
Singletons should be used with caution because they are really "global variables" and cause us many headaches and "strange" behavior in multi-threaded applications where different threads could be altering the state of the object. If we are using the singleton pattern (or any global variable) we should be able to justify it (in other words... don't do it without a good reason).
public class Earth
{
   private static Earth _instance = new Earth();

   private Earth() { }

   public static Earth GetInstance() { return _instance; }
}
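On .NET 4 and later, the same pattern can be written with Lazy&lt;T&gt;, which defers creating the instance until first use and makes that initialization thread-safe by default. This is a sketch, not code from the article:

```csharp
public class Earth
{
    private static readonly Lazy<Earth> _instance =
        new Lazy<Earth>(() => new Earth());

    private Earth() { }

    // Lazy<T>.Value builds the single instance the first time it is
    // asked for, and is safe to call from multiple threads.
    public static Earth GetInstance() { return _instance.Value; }
}
```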
In Conclusion...
So to wrap up, some things we can do to improve GC performance are:

- Clean up unmanaged resources (files, database and network connections) explicitly rather than relying on finalizers.
- Implement IDisposable and wrap the use of such objects in using blocks so Dispose() is called deterministically.
- Watch out for object references held by static variables; statics are roots, so everything they reach stays alive.
- Keep things light: don't keep more instances, or bigger object graphs, in memory than you can justify.
Next time we'll look even more closely at the GC process and look into ways to check under the hood as your program executes to discover problems that may need to be cleaned up.
Until then,

-Happy coding
Part I | Part II | Part III | Part IV
Really, thanks. I am Rajesh Kumar, from India. I have been a software engineer for the past 2.5 years.
After reading your article I got a good idea of the heap and stack...
Why can't you provide your articles in PDF format so that we can download them more easily? Anyhow, great thanks...
We can declare a class inside a structure. In that case what will happen? Can you please throw some light on it?
I am studying design patterns. I am not getting the idea of where each pattern should be used...
My personal email id is [email protected]
Happy New Year
Regards
Rajesh Kumar.C
In your article you gave the example below for a Singleton class.
public class Earth
{
   private static Earth _instance = new Earth();
   private Earth() { }
}
I believe (but might be wrong) the following example is better.
public class Earth
{
You are right (Guess I need to do some housecleaning now). Thanks for pointing this out. | http://www.c-sharpcorner.com/UploadFile/rmcochran/csharp_memory_401282006141834PM/csharp_memory_4.aspx | crawl-002 | refinedweb | 1,004 | 69.62 |
Stephan Ewen commented on FLINK-2999:
-------------------------------------
I need this as well!
> Support connected keyed streams
> -------------------------------
>
> Key: FLINK-2999
> URL:
> Project: Flink
> Issue Type: Improvement
> Components: Streaming
> Affects Versions: 1.0
> Reporter: Fabian Hueske
> Assignee: Stephan Ewen
>
> It would be nice to add support for connected keyed streams to enable key-partitioned state in Co*Functions.
> This could be done by simply connecting two keyed Streams or adding a new method to connect and key two streams as one operation.
> {code}
> DataStream<X> s1 = ...
> DataStream<Y> s2 = ...
> // alternative 1
> s1
> .keyBy(0)
> .connect(s2.keyBy(1))
> .map(new KeyedCoMap());
> // alternative 2
> s1
> .connectByKey(s2, 0, 1)
> .map(new KeyedCoMap());
> public class KeyedCoMap implements RichCoMapFunction<X,Y,Z> {
>
> OperatorState<A> s;
> public void open() {
> s = getRuntimeContext().getKeyValueState("abc", A.class, new A());
> }
> // ...
> }
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.us.apache.org/mod_mbox/flink-issues/201511.mbox/%[email protected]%3E | CC-MAIN-2020-16 | refinedweb | 142 | 60.61 |
Handling HTTP get Requests Containing Data
When requesting a document or resource from a Web server, it is possible to supply data as part of the request. The servlet WelcomeServlet2 of Fig. 30.12 responds to an HTTP get request that contains a name supplied by the user. The servlet uses the name as part of the response to the client.
// Fig. 9.12: WelcomeServlet2.java
// Processing HTTP get requests containing data.
package com.deitel.advjhtp1.servlets;

import javax.servlet.*;
import javax.servlet.http.*;
import java.io.*;

public class WelcomeServlet2 extends HttpServlet {

   // process "get" request from client
   protected void doGet( HttpServletRequest request,
      HttpServletResponse response )
         throws ServletException, IOException
   {
      String firstName = request.getParameter( "firstname" );

      response.setContentType( "text/html" );
      PrintWriter out = response.getWriter();

      // send XHTML document to client (the statements that write the
      // response markup are elided in this excerpt)
   }
}

Fig. 30.12 WelcomeServlet2 handles a get request containing data
Parameters are passed as name/value pairs in a get request. Line 16 demonstrates how to obtain information that was passed to the servlet as part of the client request. The request object’s getParameter method receives the parameter name as an argument and returns the corresponding String value, or null if the parameter is not part of the request. Line 41 uses the result of line 16 as part of the response to the client.
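Because getParameter returns null when the request carries no such name/value pair, code that uses the value usually guards it first. A minimal sketch of that contract (the helper and default value here are hypothetical, not from the book):

```java
public class ParamDemo {

    // Stand-in for the null contract of HttpServletRequest.getParameter:
    // a missing parameter comes back as null, so fall back to a default.
    static String orDefault(String value, String fallback) {
        return (value == null) ? fallback : value;
    }

    public static void main(String[] args) {
        System.out.println(orDefault(null, "guest"));   // parameter absent
        System.out.println(orDefault("Paul", "guest")); // ?firstname=Paul
    }
}
```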
The WelcomeServlet2.html document (Fig. 30.13) provides a form in which the user can input a name in the text input element firstname (line 17) and click the Submit button to invoke WelcomeServlet2. When the user presses the Submit button, the values of the input elements are placed in name/value pairs as part of the request to the server. In the second screen capture of Fig. 30.13, notice that the browser appended
?firstname=Paul
to the end of the action URL. The ? separates the query string (i.e., the data passed as part of the get request) from the rest of the URL in a get request. The name/value pairs are passed with the name and the value separated by =. If there is more than one name/value pair, each name/value pair is separated by &.
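The joining rules (? before the first pair, = between a name and its value, &amp; between pairs) can be sketched in plain Java using URLEncoder, which encodes values the same way a browser encodes form data. The helper class is hypothetical, not part of the Deitel example:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class QueryStringDemo {

    static String encode(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is guaranteed to be available on every JVM.
            throw new AssertionError(e);
        }
    }

    // Builds the query string a browser appends to the action URL of a
    // get request: '?', then name=value pairs joined with '&'.
    static String buildQuery(String[][] params) {
        StringBuilder query = new StringBuilder("?");
        for (int i = 0; i < params.length; i++) {
            if (i > 0)
                query.append('&');
            query.append(encode(params[i][0]))
                 .append('=')
                 .append(encode(params[i][1]));
        }
        return query.toString();
    }

    public static void main(String[] args) {
        String[][] params = { { "firstname", "Paul" } };
        System.out.println(buildQuery(params)); // prints: ?firstname=Paul
    }
}
```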
<?xml version = "1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"">
<!-- Fig. 9.13: WelcomeServlet2.html -->
<html xmlns = "">
<head>
<title>Processing get requests with data</title>
</head>
<body>
<form action = "/advjhtp1/welcome2" method = "get">
<p><label>
Type your first name and press the Submit button
<br /><input type = "text" name = "firstname" />
<input type = "submit" value = "Submit" />
</label></p>
</form>
</body>
</html>
Fig. 30.13 HTML document in which the form's action invokes WelcomeServlet2 using alias welcome2 specified in web.xml
Once again, we use our advjhtp1 context root to demonstrate the servlet of Fig. 30.12. Place WelcomeServlet2.html in the servlets directory created in Section 30.3.2. Place WelcomeServlet2.class in the classes subdirectory of WEB-INF in the advjhtp1 context root. Remember that classes in a package must be placed in the appropriate package directory structure. Then, edit the web.xml deployment descriptor in the WEB-INF directory to include the information specified in Fig. 30.14. This table contains the information for the servlet and servlet-mapping elements that you will add to the web.xml deployment descriptor. You should not type the italic text into the deployment descriptor. Restart Tomcat and type the following URL in your Web browser:
Type your name in the text field of the Web page, then click Submit to invoke the servlet.
Once again, note that the get request could have been typed directly into the browser’s Address or Location field as follows:
Try it with your own name.
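For reference, the servlet and servlet-mapping entries that Fig. 30.14 describes follow the standard deployment-descriptor pattern. The element values below are a hypothetical reconstruction, inferred from the /advjhtp1/welcome2 URL and the servlet class used above:

```xml
<servlet>
   <servlet-name>welcome2</servlet-name>
   <servlet-class>
      com.deitel.advjhtp1.servlets.WelcomeServlet2
   </servlet-class>
</servlet>

<servlet-mapping>
   <servlet-name>welcome2</servlet-name>
   <url-pattern>/welcome2</url-pattern>
</servlet-mapping>
```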
Copyright © 2018-2020 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai. | https://www.brainkart.com/article/Handling-HTTP-get-Requests-Containing-Data---Servlets_11137/ | CC-MAIN-2019-51 | refinedweb | 622 | 66.54 |
String.Format Method (String, Object)
Replaces one or more format items in a specified string with the string representation of a specified object.
Assembly: mscorlib (in mscorlib.dll)
Parameters
- format
- Type: System.String
A composite format string (see Remarks).
- arg0
- Type: System.Object
The object to format.
Return Value

Type: System.String
A copy of format in which any format items are replaced by the string representation of arg0.

To include a literal brace in the result string, specify it twice in format ("{{" or "}}").
Although the String.Format(String, Object) method has a single object in the parameter list, format can include more than one format item as long as each has the same index. In the following example, the format string includes two format items: one displays the decimal value of a number and the other displays its hexadecimal value.
public class Example
{
   public static void Main()
   {
      short[] values = { Int16.MinValue, -27, 0, 1042, Int16.MaxValue };
      Console.WriteLine("{0,10} {1,10}\n", "Decimal", "Hex");
      foreach (short value in values)
      {
         string formatString = String.Format("{0,10:G}: {0,10:X}", value);
         Console.WriteLine(formatString);
      }
   }
}
// The example displays the following output:
//    Decimal    Hex
//
//    -32768: 8000
//       -27: FFE5
//         0: 0
//      1042: 412
//     32767: 7FFF
The following example uses the Format(String, Object) method to embed an individual's age in the middle of a string.
using System;

[assembly: CLSCompliant(true)]
public class Example
{
   public static void Main()
   {
      // Reconstructed: the date list was lost from this page; the dates
      // below are chosen to reproduce the output shown at the end.
      DateTime birthdate = new DateTime(1993, 7, 28);
      DateTime[] dates = { new DateTime(1993, 8, 16),
                           new DateTime(1994, 7, 28),
                           new DateTime(2000, 10, 16),
                           new DateTime(2003, 7, 27),
                           new DateTime(2007, 5, 27) };

      foreach (DateTime dateValue in dates)
      {
         int years = dateValue.Year - birthdate.Year;
         string output;
         if (birthdate.AddYears(years) <= dateValue)
         {
            output = String.Format("You are now {0} years old.", years);
            Console.WriteLine(output);
         }
         else
         {
            output = String.Format("You are now {0} years old.", years - 1);
            Console.WriteLine(output);
         }
      }
   }
}
// The example displays the following output:
//    You are now 0 years old.
//    You are now 1 years old.
//    You are now 7 years old.
//    You are now 9 years old.
//    You are now 13 years old.
Click this for NoviceGuard main page
Based on years of teaching, and earlier designs for similar projects, in early 2015, I developed NoviceGuard... a bit of hardware that makes things easier for newcomers to the delights of "playing" with Arduinos.
In addition to making it easier to use an Arduino, the NoviceGuard protects the Arduino, to some extent, from the errors novices sometimes make.
If you haven't already had a general introduction to the NoviceGuard, please visit the NoviceGuard project's main page.
Good! "Met" the NoviceGuard? Then we can go on to....
To make using NoviceGuard easier, there is a free library to extend the Arduino programming environment. It is called NovGrdCore. You can download the NovGrdCore files with the link in this sentence. Once unzipped the files are all just text files... you can look in them, check for malware. (They install in the usual way. There's help on this further down the page, but first why install them...)
We'll examine NovGrdCore comprehensively in due course. Just to give you an idea, here are two little programs for a NoviceGuard. They can be compiled if the computer with the IDE has NovGrdCore, the free library....
First: the traditional microprocessor "Blink", for NoviceGuard + NovGrdCore....
---------------------------------------
//TESTED code... should be EXACTLY RIGHT!
#include <NovGrdCore.h>

NovGrdCore NGC;

void setup() {
}

void loop() {
  NGC.makeOuUL(0);
  delay(300);
  NGC.makeOuUL(1);
  delay(50);
}
---------------------------------------
And second: Here's an Arduino "doorbell" for a deaf person. An LED comes on when the button is pressed. Of course, the deaf person should be able to see the button presser if the deaf person can see the LED... but the button could be put on the end of a wire, and the LED replaced by a buzzer....
---------------------------------------
//TESTED code... should be EXACTLY RIGHT!
#include <NovGrdCore.h>

NovGrdCore NGC;

void setup() {
}

void loop() {
  if (NGC.boInHigh(0)) {NGC.makeOuUL(0);}
  else {NGC.makeOuUL(1);}
}
---------------------------------------
Those don't seem Really Simple?? They are, really! You need to have installed (no big deal) the free and open NovGrdCore library, if you want to run the programs exactly as above. (With minor tweaks, you can eschew NovGrdCore.)
The first two lines never change. You just use them as you see them.
NGC.makeOuUL(0); MAKES the OUtput line at the UpperLeft on the NoviceGuard low, turning off the LED on that line. The same word, but with a 1 makes the output high, and turns the NoviceGuard's LED on.
NGC.boInHigh(0) will return "true" or "false", hence the "bo", for "Boolean" at the start. It will be true if the Upper Right (that's the "0") button on the NoviceGuard is NOT pressed.
-------------
What's this "Should be exactly right" stuff??! Mea culpa, gentle reader... I fear that there may be bits and pieces of code still on the site with an extra semi-colon, or a missing "NGG.". If you come across any such wasters of readers' time, I would be grateful to hear from you... especially if you tell me what page the error is on.
If you accept my suggestion, and use NovGrdCore (which is what this page is about) then all you need in your program's loop() to test the 4 buttons and 4 LEDs on the basic NoviceGuard is a little loop something like this....

for (byte bWhich=0; bWhich<4; bWhich++) {
   if (NGC.boInHigh(bWhich)) {NGC.makeOu(bWhich,0);}
   else {NGC.makeOu(bWhich,1);}
   }
The number passed with boInHigh tells the Arduino which button to look at. The first number with makeOu tells it which LED to set or clear, and the second number says which, on or off.
The process of installing the library is covered later.
You do not need to use the library.
You may be saying to yourself, "Why would I want the library?", or "I haven't used libraries much... I'm not sure I want to use this one."
If you are saying either, then you do have a choice. I've written a page for you about "Do I have to install and use the NovGrdCore library?"... but I would suggest to you that unless you (or anyone you are putting in front of a NoviceGuard equipped Arduino) are at least an intermediate user, you would find mastering "using libraries" very, very worthwhile. If you decide not to use the library, you have to do numerous things "the hard way". Later in this page, you will be given help with the details of installing, using the library.
The library takes care of doing the following #defines. Whether you choose to use the library or not, you will find having the #defines in your work will make following examples at this site a lot easier!
If you use the library, the #defines will have been done, and you don't have to (and shouldn't) worry about which actual pins the NoviceGuard is using.
What did I mean when I said you shouldn't worry about which pins are in use?
I meant that when you want to refer to the pin connected to the input line connected to the button on the upper left of the NoviceGuard, you say "inPUL", instead of "2". (That input is done via pin 2... but you don't need to worry about the 2. You simply remember the (somewhat!) human friendly name for that pin, created by the #define, which "happens" as soon as you put #include <NovGrdCore.h> near the top of your code.)
One of the greatest weaknesses... and I don't consider it a very great weakness, especially if you use the NovGrdCore library, is that novices using the NoviceGuard must promise not to use any pinMode statements.
If you are using NoviceGuard with the NovGrdCore library, then the novices should never even see a pinMode statement. The following is taken care of for you as soon as you include....
#include <NovGrdCore.h>
... and...
NovGrdCore NGC;
... in your program. (But you do have to include them! Even if you are not going to use any of the things that the NGC object makes available to you.)
(A digression on the name NGC: You don't have to use that name. But it will make following various examples easier, if you do. Can anyone see any objection to "standardizing" on "NGC"? PLEASE WRITE... soon... if you do!!)
When you...
#include <NovGrdCore.h>
...and do the...
NovGrdCore NGC;
...you get access to the following. All but the first can be done other ways, which I will explain. (And the first only supplies a want which doesn't arise if you aren't using the library on a NoviceGuard board with the default (minimum) configuration.)
Use this link to download the NovGrdCore files. That will give you an ordinary .zip file. All of the files in it are just text files, but some of them have unusual extensions for a zip file, so that the Arduino programming environment knows what to do with them.
If you are totally new to libraries, installing them, etc, I have a page about libraries for Arduino work. But you probably don't really need it.
The page just mentioned explains that you can (I think) put libraries in your Arduino sketches folder. That approach may have certain charms, but for NovGrdCore, it would be best to put it in your "proper" Arduino Libraries folder.
Apologies, by the way, for explaining all of this in Windows terms. I would be delighted to create a small page with the equivalent instructions for Mac or Linux users, or create a link to such a page, if you would like to send me the information for creating it.
On my Windows XP machine, the right place was....
C:\Program Files\Arduino\libraries\
If you have trouble finding "the right place" either consult some of the general resources about installing libraries, or get in touch. (Contact details at bottom of page.) Tell me what operating system you are using.
Once you have determined the right place for the new library's folder, you just....
(*What is "Unpack"? You use Windows Explorer to navigate to the folder your .zip file is in. Double- click on it and a Windows Explorer window will open with the files in the zip listed. Open a separate Windows Explorer window. Navigate to the Arduino libraries folder. (You should see folders for the "standard" libraries, e.g. EEPROM, Ethernet, SoftwareSerial). If you haven't already done so, make the folder NovGrdCore I mentioned a moment ago. Drag and drop from the list of what's in the .zip to the folder made for the contents of the .zip.)
(If you examined the files, you will have seen from the rems that I am not an experienced writer of Arduino libraries... but that's good! (The library is simple, straightforward.) I would just mention, to reassure you, that I have done libraries for other environments before writing NovGrdCore. Also, the guides to library writing cited in the files are particularly good, and I just followed them.)
If ever you have a question relating to what the library does, what is in it, etc, etc, just go to the NovGrdCore.h and NovGrdCore.ccp files. Use a text editor to open them. See what you see. THAT is what the library has/ does! Web pages documenting something like a library do sometimes fall behind what is actually happening in the library.
Here is a snipped listing of what is in each of them at 28 Apr 15. Note the version ID embedded in the library. ("#define versNovGrdCore...") I won't promise that the .zip you download today will be the same as this, but I do not anticipate any "improvements"(!) which will change what the library currently does. I do hope to add... judiciously... to it in the future. In particular, the whole subject of the 12 way connector on the bottom edge of the NoviceGuard needs development at the moment.
I said the files were snipped... by all means consult the originals, if you want every detail.
#include <Arduino.h> //Note that if you are using an old Arduino (before 1.0 IDE)
                     // change "Arduino.h" to "WProgram.h"

class NovGrdCore {
#define versNovGrdCore "0.001" //no ; here

public:
   NovGrdCore();//Constructor
   ~NovGrdCore();//Destructor
   void reportVers();
   void makeOuUL(byte bState);
   void makeOuUR(byte bState);
   void makeOuLR(byte bState);
   void makeOuLL(byte bState);
   void makeOu(byte bWhich,byte bState);
   boolean boInHigh(byte bWhich);

private:
   int iTmp;
};
/*For information on writing libraries...
First... good place to start...
And then second, to confirm various, learn more... */

#include "NovGrdCore.h" //no ; here.

//<<constructor>>
NovGrdCore::NovGrdCore(){
pinMode(inPUL,INPUT_PULLUP); //make that pin an INPUT, with pull up
pinMode(inPLL,INPUT_PULLUP);
//The pins for inPUR, inPLR are ANALOG channels and the values
//in those constants are the ANALOG CHANNEL NUMBER...
//So it is all rather handy that the pins' modes are
// already as we want them.
pinMode(ouPUL,OUTPUT); //make that pin an OUTPUT
pinMode(ouPUR,OUTPUT);
pinMode(ouPLL,OUTPUT);
pinMode(ouPLR,OUTPUT);
}

//<<destructor>>
NovGrdCore::~NovGrdCore(){/*nothing to destruct*/}

void NovGrdCore::reportVers()
//One call of Serial.begin(9600) is needed at some point before first use
//of reportVers()
{
iTmp=0;//An iTmp created in the .h file.
while (versNovGrdCore[iTmp]!=0){
   Serial.print(versNovGrdCore[iTmp]);
   iTmp++;
   delay(20);
   }//end of while
}//end of reportVers

//Make UL output high or low (if bState 0 or not, respectively.)
void NovGrdCore::makeOuUL(byte bState)
{
if (bState==0) {makeOu(0,0);}//no ; here
   else {makeOu(0,1);};
}//end of makeOuUL()

void NovGrdCore::makeOuUR(byte bState)
{
if (bState==0) {makeOu(1,0);}//no ; here
   else {makeOu(1,1);};
}//end of makeOuUR()

void NovGrdCore::makeOuLL(byte bState)
{
if (bState==0) {makeOu(2,0);}//no ; here
   else {makeOu(2,1);};
}//end of makeOuLL()

void NovGrdCore::makeOuLR(byte bState)
{
if (bState==0) {makeOu(3,0);}//no ; here
   else {makeOu(3,1);};
}//end of makeOuLR()

void NovGrdCore::makeOu(byte bWhich,byte bState)
{
switch (bWhich)
   {
   case 0:if (bState==0) {digitalWrite(ouPUL, LOW);}//no ; here
             else {digitalWrite(ouPUL, HIGH);};//end of else
          break; //end of Case 0
   case 1:if (bState==0) {digitalWrite(ouPUR, LOW);}//no ; here
             else {digitalWrite(ouPUR, HIGH);};//end of else
          break; //end of Case 1
   case 2:if (bState==0) {digitalWrite(ouPLL, LOW);}//no ; here
             else {digitalWrite(ouPLL, HIGH);};//end of else
          break; //end of Case 2
   case 3:if (bState==0) {digitalWrite(ouPLR, LOW);}//no ; here
             else {digitalWrite(ouPLR, HIGH);};//end of else
          break; //end of Case 3
   }//end of switch
}//end of makeOu()

boolean NovGrdCore::boInHigh(byte bWhich)
/*Cases 0 and 2 for digital inputs, pulled high until button pressed,
which pulls them low. Cases 1 and 3 for analog inputs, pulled high
until button pressed, which pulls them low. The case numbers chosen
for NoviceGuard's layout, numbering the inputs in the order that
printed words would be read.*/
{
switch (bWhich)
   { //YES: presented 0,2,1,3... not 0,1,2,3
   case 0:if (digitalRead(inPUL)==HIGH)
             {return true;}
             else {return false;}//end of else
          break; //end of Case 0
   case 2:if (digitalRead(inPLL)==HIGH)
             {return true;}
             else {return false;}//end of else
          break; //end of Case 2
   case 1:if (analogRead(inPUR)>127) //A low value in case driving LED
                                     //is depressing Vin.
             {return true;}
             else {return false;}//end of else
          break; //end of Case 1
   case 3:if (analogRead(inPLR)>127) //A low value in case driving LED
                                     //is depressing Vin.
             {return true;}
             else {return false;}//end of else
          break; //end of Case 3
   }//end of switch
}//end of boInHigh()
12w... TBD... reserved...
I hope that helped you understand what NovGrdCore has to offer? Click here for the main page for NoviceGuard if you haven't already seen it (or if, as I hope, you want to go back to it!)
I have many other ideas and resources for you over ..... | http://rugguino.com/NovGrdCoreMain.htm | CC-MAIN-2018-17 | refinedweb | 2,291 | 73.78 |
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also
#include <stropts.h>

int fattach(int fildes, const char *path);
The fattach() function attaches a STREAMS- or doors-based file descriptor to an object in the file system name space, effectively associating a name with fildes. The fildes argument must be a valid open file descriptor representing a STREAMS or doors file. The path argument is a path name of an existing object and the user must have appropriate privileges or be the owner of the file and have write permissions. If the attributes of the attached object are subsequently changed (for example, chmod(2)), the attributes of the underlying object are not affected.
Upon successful completion, fattach() returns 0. Otherwise it returns -1 and sets errno to indicate an error.
The fattach() function will fail if:
The user is the owner of path but does not have write permissions on path or fildes is locked.
The fildes argument is not a valid open file descriptor.
The path argument is currently a mount point or has a STREAMS or doors file descriptor attached to it.
The path argument is a file in a remotely mounted directory.
The fildes argument does not represent a STREAMS or doors file.
Too many symbolic links were encountered in translating path.
The size of path exceeds {PATH_MAX}, or the component of a path name is longer than {NAME_MAX} while {_POSIX_NO_TRUNC} is in effect.
The path argument does not exist.
A component of a path prefix is not a directory.
The effective user ID is not the owner of path or a user with the appropriate privileges.
See attributes(5) for descriptions of the following attributes:
fdetach(1M), chmod(2), mount(2), stat(2), door_create(3DOOR), fdetach(3C), isastream(3C), attributes(5), standards(5), streamio(7I)
STREAMS Programming Guide
That’s not the first post I’m doing on incorporating a Web Server in .NET Microframework (NETMF). In some of my previous posts, I’ve explain how to do it using the existing .NET classes for this. And it is working very well!
The main concern I have is that I'm using a netduino board. This board is great, I love it. But there is a very limited amount of memory available and a limited amount of space to store the programs. Total storage is 64Kb and what is left when the code is in is about 48K… So a very small amount of memory. And the http .NET classes are great and use streams, but they are memory-intensive, and the main http class is huge…
So I had to find a solution and it was to redevelop a web server like IIS or Apache. OK, I can’t compare
I just want a web server which handles GET requests and responds with a simple web page. The other challenge I have is to be able to reuse the code I've already written for my Sprinkler, to pilot my Lego city and my Lego infrared receiver like my Lego trains…
So I was searching for code to reuse on the Internet and found some code. So I did a mix of existing code and spent some time testing various solutions
Most of the existing code is not really robust. It does fail if there is a network problem, if 2 requests arrive at the same time, etc. I’m not saying my code is perfect but it is working and working well for the last month with no problem at all.
A web server is simple: it's just a connection on a socket and a protocol of communication, which is HTTP. It is also very simple as it is text based. What is interesting is to see all that you can do with such a simple protocol and such a simple markup language like HTML and some javascript.

OK, so let's start with what is necessary: a thread that will run all the time and handle socket requests. We also need a socket, and a way to stop the thread.
private bool cancel = false;
private Thread serverThread = null;
public WebServer(int port, int timeout)
{
    this.Timeout = timeout;
    this.Port = port;
    this.serverThread = new Thread(StartServer);
    Debug.Print("Web server started on port " + port.ToString());
}
As you can see, it is quite simple, the WebServer object is initialize with a specific port and a timeout. By default, the http port is 80 but it can be anything. There is no limitation. And as it’s easy to implement, let make the code generic enough to be able to be use with different ports. And a new Thread is created to point on function StartServer. I will detail it later. I will explain also why we need a timeout later.
Now that we have this object initialized, let's start the Webserver:
public bool Start()
{
    bool bStarted = true;
    // start server
    try
    {
        cancel = false;
        serverThread.Start();
        Debug.Print("Started server in thread " + serverThread.GetHashCode().ToString());
    }
    catch
    {
        //if there is a problem, maybe due to the fact we did not wait enough
        cancel = true;
        bStarted = false;
    }
    return bStarted;
}
That is where the fun begins! We start listening and initialize a variable we will use later to stop the server if needed. The catch could contain something to retry the start; here, it just returns whether the server started or not. At this stage, it should work with no problem as it is only a thread starting. But who knows
private void StartServer()
{
    using (Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
    {
        //set a receive Timeout to avoid too long connection
        server.ReceiveTimeout = this.Timeout;
        server.Bind(new IPEndPoint(IPAddress.Any, this.Port));
        server.Listen(int.MaxValue);
        while (!cancel)
        {
            try
            {
                using (Socket connection = server.Accept())
                {
                    if (connection.Poll(-1, SelectMode.SelectRead))
                    {
                        // Create buffer and receive raw bytes.
                        byte[] bytes = new byte[connection.Available];
                        int count = connection.Receive(bytes);
                        Debug.Print("Request received from " + connection.RemoteEndPoint.ToString()
                            + " at " + DateTime.Now.ToString("dd MMM yyyy HH:mm:ss"));
                        //setup some time for send timeout as 10s.
                        //necessary to avoid any problem when multiple requests are done the same time.
                        connection.SendTimeout = this.Timeout;
                        // Convert to string, will include HTTP headers.
                        string rawData = new string(Encoding.UTF8.GetChars(bytes));
                        string mURI;
                        // Remove GET + Space
                        // pull out uri and remove the first /
                        if (rawData.Length > 5)
                        {
                            int uriStart = rawData.IndexOf(' ') + 2;
                            mURI = rawData.Substring(uriStart, rawData.IndexOf(' ', uriStart) - uriStart);
                        }
                        else
                            mURI = "";
                        // return a simple header
                        string header = "HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=utf-8\r\nConnection: close\r\n\r\n";
                        connection.Send(Encoding.UTF8.GetBytes(header), header.Length, SocketFlags.None);
                        if (CommandReceived != null)
                            CommandReceived(this, new WebServerEventArgs(connection, mURI));
                    }
                }
            }
            catch (Exception e)
            {
                //this may be due to a bad IP address
                Debug.Print(e.Message);
            }
        }
    }
}
This function will run all the time in a thread. It's in an infinite loop which can be broken by the cancel variable. First, we need to initialize the Socket. We will use IPv4 with a stream and the TCP protocol, and no timeout to receive the request. Then you'll have to bind this socket to a physical IP address. In our case, we will use all IP addresses on the port initialized before. Any IP address means all addresses, and in our case only 1 IP address as we have only 1 Ethernet interface. We are using "using" to make sure the server Socket will be closed and cleaned properly after usage.
The way it works is not too complicated. Remember that we've opened a Socket named server and set it up to listen on port 80. This is running in a separate thread. So in order to analyze the information returned when a connection is accepted (so when a browser asks for a page), we need to create another Socket pointing to the same socket, here "using (Socket connection = server.Accept())". In this case "using" allows the code to clean up in the "proper way" when the thread is finished, or when the loop ends, or when it goes back to the initial loop. It's threads in threads, and if you don't close things correctly, it can quickly leave lots of objects in memory, objects which will be seen as alive by the garbage collector.
When there are bytes ready to read with connection.Poll, we just read them. The request is transformed into a string. An http request looks like "GET /folder/name.ext?param1=foo&param2=bar HTTP/1.1". A real-life example looks more like this: "GET /folder/name.ext?param1=foo&param2=bar HTTP/1.1\r\nAccept: text/html, application/xhtml+xml, */*\r\nAccept-Language: fr-FR,fr;q=0.8,en-US;q=0.5,en;q=0.3\r\nUser-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)\r\nAccept-Encoding: gzip, deflate, peerdist\r\nHost: localhost:81\r\nConnection: Keep-Alive\r\nX-P2P-PeerDist: Version=1.1\r\nX-P2P-PeerDistEx: MinContentInformation=1.0, MaxContentInformation=1.0\r\n\r\n"
For a normal, full web server like IIS or Apache, you'd analyze all those parameters, and there are lots; see the W3C protocol here. For our usage, the only thing that interests us is the full URL. It is located between the first 2 spaces. So we will extract the URL and remove the first '/' as I will not use it in the rest of the code.
Now, the next step is to start answering the request. When someone asks you something, it's polite to answer
Like the request, the response needs to include a couple of header fields. And as my usage is extremely simple, I will always consider that it is OK, that I'm only delivering text content, and that the connection can be closed. By the way, whatever you put there, HTTP is a disconnected protocol, so you should never consider that you are always connected! It's an error and can lead you to very bad behaviors.
connection.Send sends the first part of the message and then I raise an event to tell the creator of the WebServer object that something happened. I send, of course, the connection object, so that the caller will be able to create an HTML page and answer, and also the URL, so that it can analyze it.
Last but not least, the try and catch is extremely important. With Sockets a problem can quickly arise due to a network issue. And I've seen it happening on the netduino for no reason. Just capturing the problem and not doing anything keeps the web server working for months! Even if you lose the network, the catch will capture the problem and the server will continue to work up to the point the network works again. The other reason to use it is because of the timeout. If something happens between the client and our webserver, after the timeout, you'll get into this catch, you'll start a new socket, and the process will go back to something normal. It can happen, and happened to me, with very long HTML pages I was generating. When I interrupted the creation and asked for a new page, the socket went into a kind of infinite loop waiting for a request. There should be a smarter way to check whether something goes well or not, but this is an easy way.
public delegate void GetRequestHandler(object obj, WebServerEventArgs e);

public class WebServerEventArgs: EventArgs
{
    public WebServerEventArgs(Socket mresponse, string mrawURL)
    {
        this.response = mresponse;
        this.rawURL = mrawURL;
    }

    public Socket response { get; protected set; }
    public string rawURL { get; protected set; }
}

public event GetRequestHandler CommandReceived;
Right after the header is sent back, an event is raised. The arguments are simple here, we do send the Socket object and the URL. If you want to enrich the web server, you can add other elements like the header element rather than sending them right away, the browser requesting the page, the IP address or whatever you want! Again, simple and efficient there.
Last but not least, if you need to stop the Server, you'll need a function for this, and also something to clean the code up at the end:
private bool Restart()
{
    Stop();
    return Start();
}

public void Stop()
{
    cancel = true;
    Thread.Sleep(100);
    serverThread.Suspend();
    Debug.Print("Stopped server in thread");
}

public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

protected virtual void Dispose(bool disposing)
{
    if (disposing)
    {
        serverThread = null;
    }
}
Nothing too complex there, it's just about pausing the thread (remember, there are other threads attached to it), closing the other threads living in the Server object and cleaning everything. I hope it's the right code to leave it all clean.
But at the end of the day, my only interest is to keep this server running all the time. So I do not really care if it will stop correctly!
Now, to use the server, it's easy:

private static WebServer server;

// Start the HTTP Server
server = new WebServer(80, 10000);
server.CommandReceived += new WebServer.GetRequestHandler(ProcessClientGetRequest);
// Start the server.
server.Start();
Declare a static WebServer if you want it to be unique. Technically, you can have multiple servers running on different ports. In my case, no need for this. Then, it's about creating the object, adding an event and starting the server!

private static void ProcessClientGetRequest(object obj, WebServer.WebServerEventArgs e)
And you are ready to do some treatment into this function. To return part of the answer, just use e.response.Send as for the header part and you’re done!
To simplify the process, as it's a function which you'll have to call often, I've created a function to do this:

public static string OutPutStream(Socket response, string strResponse)
{
    byte[] messageBody = Encoding.UTF8.GetBytes(strResponse);
    response.Send(messageBody, 0, messageBody.Length, SocketFlags.None);
    //allow time to physically send the bits
    Thread.Sleep(10);
    return "";
}
This can be a function you add in your main code or can add in the Web Server code.
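For completeness, here is what a minimal handler can look like; this is only a sketch (the HTML is illustrative), built on the e.rawURL, e.response and OutPutStream pieces shown above:

```
private static void ProcessClientGetRequest(object obj, WebServer.WebServerEventArgs e)
{
    // e.rawURL is everything after the leading '/', e.g. "status?param1=foo".
    // The server has already sent the HTTP header, so only the body is needed here.
    string strResp = "<html><body>You asked for: " + e.rawURL + "</body></html>";
    OutPutStream(e.response, strResp);
}
```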
Now you have a fully functional simple Web Server. You can read the previous article on how to handle parameters and analyze them. The code to manage parameters is now in the WebServer class.
I’ll post the code in CodePlex so anyone will be able to use it as a helper.
Enjoy this code
Again, I’m just a marketing director doing some code. And it’s the code running in my sprinkler management system for the last month without any problem!Again, I’m just a marketing director doing some code. And it’s the code running in my sprinkler management system for the last month without any problem!
Excellent article and I've been looking for a webserver that gives a bit more. I can't find the code on CodePlex, can you post a link please.
Thanks
Phil
This is great. Thank you.
Very nice code! I tried to find it at Codeplex with no success. Please post a link.
I just found it.
netmfwebserver.codeplex.com
Great example! I wish it would be possible to call the poll function on all connected sockets. | https://blogs.msdn.microsoft.com/laurelle/2012/05/29/creating-an-efficient-http-web-server-for-net-microframework-netmf/ | CC-MAIN-2019-09 | refinedweb | 2,271 | 66.23 |
3.11 Advice
Don't reinvent the wheel; use libraries.
Don't believe in magic; understand what your libraries do, how they do it, and at what cost they do it.
When you have a choice, prefer the standard library to other libraries.
Don't think that the standard library is ideal for everything.
Remember to #include the headers for the facilities you use; see §3.3.
Remember that standard library facilities are defined in namespace std; see §3.3.
Use string rather than char*; see §3.5, §3.6.
If in doubt, use a range-checked vector (such as Vec); see §3.7.2.
Prefer vector<T>, list<T>, and map<key,value> to T[]; see §3.7.1, §3.7.3, §3.7.4.
When adding elements to a container, use push_back() or back_inserter(); see §3.7.3, §3.8.
Use push_back() on a vector rather than realloc() on an array; see §3.8.
Catch common exceptions in main(); see §3.7.2. | http://www.informit.com/articles/article.aspx?p=25004&seqNum=12 | CC-MAIN-2017-43 | refinedweb | 165 | 76.72 |
Hello,
following decription is very easy.
I have a class UC_Main with constructor where I create EventHandler for SelectionChanged event in MainUCGrid control which is standard DataGridView. MainUCGrid control is part of UC_Main class.
public class UC_Main : UserControl
{
    public UC_Main(Evidence.Nodik nod, DataView columns)
    {
        Initialize();
        MainUCGrid.SelectionChanged += new System.EventHandler(MainUCGrid_SelectionChanged);
    }
    ...
    ...
    ...
}
Then I have another class called UC_Material.
public partial class UC_Material : UC_Main ... ... ...
I have more classes like UC_Material and I use them to create user controls that are placed in main form. MainUCGrid is initialized in UC_Main so this is the same process for any user control.
When I run code everything is almost fine. I click on any row in MainUCGrid in any user control and SelectionChanged event is fired as I would expect.
Problem is that somewhere during first initialization, I mean when I create instance of any user control, this event is fired 7x and I can not find out why.
How can I find the place in my code that fires this event 7x? I thought, for example, that there were 7 rows or 7 columns in the DataGridView, but there are not; every user control has a different number of rows and columns, and the event is always fired 7x. The problem is that I can see in the debugger that SelectionChanged fires, but the only thing I know is that the sender is MainUCGrid, and I do not know why. I tried changing SelectionChanged to CellClick, and that does not fire 7x.
I do not want to bother with the whole code, at least for now. | https://www.daniweb.com/programming/software-development/threads/261134/multiple-selectionchanged-event-in-datagridview | CC-MAIN-2018-05 | refinedweb | 258 | 65.62 |
In this section, you will learn how to get the size of a file.
Description of code:
You can see in the given example, we have created an object of File class and specified a file in the constructor. The object then calls the method length() which returns the size of the file in bytes. Then we have converted the size of file from bytes to kilo bytes.
Here is the code:
import java.io.*; public class FileSize { public static void main(String args[]) { File file = new File("C:/java.txt"); long filesize = file.length(); long filesizeInKB = filesize / 1024; System.out.println("Size of File is: " + filesizeInKB + " KB"); } }
Through the method length(), you can get the size of any file. This method returns the size in bytes.
Output:
If you enjoyed this post then why not add us on Google+? Add us to your Circles
Liked it! Share this Tutorial
Discuss: Getting the Size of a File in Java View All Comments
Post your Comment | http://www.roseindia.net/java/example/java/io/FileSize.shtml | CC-MAIN-2014-52 | refinedweb | 165 | 84.68 |
This chapter includes:.
The pfil interface is purely in the stack and supports packet-filtering hooks. Packet filters can register hooks that are called when packet processing is taking place; in essence, pfil is a list of callbacks for certain events. In addition to being able to register a filter for incoming and outgoing packets, pfil provides support for interface attach/detach and address change notifications.
The pfil interface is one of a number of different layers that a user-supplied application can register for, to operate within the stack process context. These modules, when compiled, are called Loadable Shared Modules (lsm) in QNX Neutrino nomenclature, or Loadable Kernel Modules (lkm) in BSD nomenclature.
There are two levels of registration required with io-pkt:
In Neutrino, shared modules are dynamically loaded into the stack. You can do this by specifying them on the command line when you start io-pkt, using the -p option, or you can add them subsequently to an existing io-pkt process by using the mount command.
The application module must include an initial module entry point defined as follows:
#include "sys/io-pkt.h" #include "nw_datastruct.h" int mod_entry( void *dll_hdl, struct _iopkt_self *iopkt, char *options) { }
The calling parameters to the entry function are:
This is followed by the registration structure that the stack will look for after calling dlopen() to load the module to retrieve the entry point:
struct _iopkt_lsm_entry IOPKT_LSM_ENTRY_SYM(mod) = IOPKT_LSM_ENTRY_SYM_INIT(mod_entry);
This entry point registration is used by all shared modules, regardless of which layer the remainder of the code is going to hook into. Use the following functions to register with the pfil layer:
#include <sys/param.h> #include <sys/mbuf.h> #include <net/if.h> #include <net/pfil.h>);
The head_get() function returns the start of the appropriate pfil hook list used for the hook functions. The af argument can be either PFIL_TYPE_AF (for an address family hook) or PFIL_TYPE_IFNET (for an interface hook) for the “standard” interfaces.
If you specify PFIL_TYPE_AF, the Data Link Type (dlt) argument is a protocol family. The current implementation has filtering points for only AF_INET (IPv4) or AF_INET6 (IPv6).
When you use the interface hook (PFIL_TYPE_IFNET), dlt is a pointer to a network interface structure. All hooks attached in this case will be in reference to the specified network interface.
Once you've selected the appropriate list head, you can use pfil_add_hook() to add a hook to the filter list. This function takes as arguments a filter hook function, an opaque pointer (which is passed into the user-supplied filter arg function), a flags value (described below), and the associated list head returned by pfil_head_get().
The flags value indicates when the hook function should be called and may be one of:
When a filter is invoked, the packet appears just as if it came off the wire. That is, all protocol fields are in network-byte order. The filter returns a nonzero value if the packet processing is to stop, or zero if the processing is to continue.
For interface hooks, the flags argument can be one of:
Here's an example of what a simple pfil hook would look like. It shows when an interface is attached or detached. Upon a detach (ifconfig iface destroy), the filter is unloaded.
#include <sys/types.h> #include <errno.h> #include <sys/param.h> #include <sys/conf.h> #include <sys/socket.h> #include <sys/mbuf.h> #include <net/if.h> #include <net/pfil.h> #include <netinet/in.h> #include <netinet/ip.h> #include "sys/io-pkt.h" #include "nw_datastruct.h" static int in_bytes = 0; static int out_bytes = 0; static int input_hook(void *arg, struct mbuf **m, struct ifnet *ifp, int dir) { in_bytes += (*m)->m_len; return 0; } static int output_hook(void *arg, struct mbuf **m, struct ifnet *ifp, int dir) { out_bytes += (*m)->m_len; return 0; } static int deinit_module(void); static int iface_hook(void *arg, struct mbuf **m, struct ifnet *ifp, int dir) { printf("Iface hook called ... "); if ( (int)m == PFIL_IFNET_ATTACH) { printf("Interface attached\n"); printf("%d bytes in, %d bytes out\n", in_bytes, out_bytes); } else if ((int)m == PFIL_IFNET_DETACH) { printf("Interface detached\n"); printf("%d bytes in, %d bytes out\n", in_bytes, out_bytes); deinit_module(); } return 0; } static int ifacecfg_hook(void *arg, struct mbuf **m, struct ifnet *ifp, int dir) { printf("Iface cfg hook called with 0x%08X\n", (int)(m)); return 0; } static int deinit_module(void) { struct pfil_head *pfh_inet; pfh_inet = pfil_head_get(PFIL_TYPE_AF, AF_INET); if (pfh_inet == NULL) { return ESRCH; } pfil_remove_hook(input_hook, NULL, PFIL_IN | PFIL_WAITOK, pfh_inet); pfil_remove_hook(output_hook, NULL, PFIL_OUT | PFIL_WAITOK, pfh_inet); pfh_inet = pfil_head_get(PFIL_TYPE_IFNET, 0); if (pfh_inet == NULL) { return ESRCH; } pfil_remove_hook(ifacecfg_hook, NULL, PFIL_IFNET, pfh_inet); pfil_remove_hook(iface_hook, NULL, PFIL_IFNET | PFIL_WAITOK, pfh_inet); printf("Unloaded pfil hook\n" ); return 0; } int pfil_entry(void *dll_hdl, struct _iopkt_self *iopkt, char *options) { struct pfil_head *pfh_inet; pfh_inet = pfil_head_get(PFIL_TYPE_AF, AF_INET); if (pfh_inet == NULL) { return ESRCH; } pfil_add_hook(input_hook, NULL, PFIL_IN | PFIL_WAITOK, pfh_inet); pfil_add_hook(output_hook, NULL, 
PFIL_OUT | PFIL_WAITOK, pfh_inet); pfh_inet = pfil_head_get(PFIL_TYPE_IFNET,0); if (pfh_inet == NULL) { return ESRCH; } pfil_add_hook(iface_hook, NULL, PFIL_IFNET, pfh_inet); pfil_add_hook(ifacecfg_hook, NULL, PFIL_IFADDR, pfh_inet); printf("Loaded pfil hook\n" ); return 0; } struct _iopkt_lsm_entry IOPKT_LSM_ENTRY_SYM(pfil) = IOPKT_LSM_ENTRY_SYM_INIT(pfil_entry);
You can use pfil hooks to implement an io-net filter; for more information, see the Migrating from io-net appendix in this guide.
The.
The Berkeley Packet Filter (BPF) provides link-layer access to data available on the network through interfaces attached to the system. To use BPF, open a device node, /dev/bpf, and then issue ioctl() commands to control the operation of the device. A popular example of a tool using BPF is tcpdump (see the Utilities Reference).
The device /dev/bpf is a cloning device, meaning you can open it multiple times. It is in principle similar to a cloning interface, except BPF provides no network interface, only a method to open the same device multiple times.
To capture network traffic, you must attach a BPF device to an interface. The traffic on this interface is then passed to BPF for evaluation. To attach an interface to an open BPF device, use the BIOCSETIF ioctl() command. The interface is identified by passing a struct ifreq, which contains the interface name in ASCII encoding. This is used to find the interface from the kernel tables. BPF registers itself to the interface's struct ifnet field, if_bpf, to inform the system that it's interested in traffic on this particular interface. The listener can also pass a set of filtering rules to capture only certain packets, for example ones matching a given combination of host and port.
BPF captures packets by supplying a bpf_tap() tapping interface to link layer drivers, and by relying on the drivers to always pass packets to it. Drivers honor this request and commonly have code which, along both the input and output paths, does:
#if NBPFILTER > 0 if (ifp->if_bpf) bpf_mtap(ifp->if_bpf, m0); #endif
This passes the mbuf to the BPF for inspection. BPF inspects the data and decides if anyone listening to this particular interface is interested in it. The filter inspecting the data is highly optimized to minimize the time spent inspecting each packet. If the filter matches, the packet is copied to await being read from the device.
The BPF tapping feature and the interfaces provided by pfil provide similar services, but their functionality is disjoint. The BPF mtap wants to access packets right off the wire without any alteration and possibly copy them for further use. Callers linking into pfil want to modify and possibly drop packets. The pfil interface is more analogous to io-net's filter interface.
BPF has quite a rich and complex syntax (e.g.) and is a standard interface that is used by a lot of networking software. It should be your interface of first choice when packet interception / transmission is required. It will also be a slightly lower performance interface given that it does operate across process boundaries with filtered packets being copied before being passed outside of the stack domain and into the application domain. The tcpdump and libpcap library operate using the BPF interface to intercept and display packet activity. For those of you currently using something like the nfm-nraw interface in io-net, BPF provides the equivalent functionality, with some extra complexity involved in setting things up, but with much more versatility in configuration. | https://www.qnx.com/developers/docs/6.5.0SP1.update/com.qnx.doc.io-pkt_en_user_guide/filtering.html | CC-MAIN-2021-25 | refinedweb | 1,381 | 52.7 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Coding the Blog Homepage15:52 with Zac Gordon
Now that we have our home.php template setup, we continue to code it out with the common blog meta data used on blogs.
- 0:00
If we look back at the original static template for
- 0:03
blog.html and come down inside of the primary class,
- 0:08
we could see the markup that we want to be using inside of our loop.
- 0:12
Specifically, we have an article tag with class post.
- 0:17
We have an h1 for the title, an h2 for the excerpt, and then an unordered list with
- 0:24
these specific classes for showing things like the author and the categories, etc.
- 0:32
There's a container for featured images if there is one.
- 0:35
And then we could see the rest is just normal content as it would appear.
- 0:40
So what we need to do is bring over some of this into our template.
- 0:45
And I'm going to do that by copying from the opening of
- 0:47
the article tag down through where the featured image appears.
- 0:54
And then we're going to come back into our home.php template.
- 1:00
And inside of the if statement before the alts,
- 1:02
we're going to place that markup that we just copied over.
- 1:05
[BLANK_AUDIO]
- 1:09
This step right here is an important one for
- 1:12
a WordPress developer, to know what markup to pull out from a static template and
- 1:17
then what WordPress code to put in its place.
- 1:20
We'll close this article tag and
- 1:23
then go through replacing the static code with the dynamic WordPress code we need.
- 1:28
First, I'm going to turn on word wrap.
- 1:32
Okay.
- 1:33
The first piece of code that we will need is, instead of linking like this,
- 1:37
we're going to use the PHP, the_permalink function, which we've seen before.
- 1:46
And this will automatically figure out what post it is and
- 1:49
where it's supposed to link.
- 1:51
Next, we could take out this and replace it with the_title.
- 1:55
[BLANK_AUDIO]
- 2:01
Followed by that, I'm going to introduce a new tag here called the_excerpt.
- 2:05
[BLANK_AUDIO]
- 2:09
the_excerpt is similar to the_content except it cuts the content off
- 2:14
after a certain point and it only displays a small excerpt of the post.
- 2:20
That's going to leave our main blog listing page a little bit cleaner.
- 2:24
We then have the post metadata,
- 2:26
like the author, their avatar, name, and then category.
- 2:33
The first thing I had to do here was update this class because
- 2:38
avatar is a class that WordPress used.
- 2:41
And I found afterwards that just using the avatar class alone was a problem, so
- 2:46
I namespaced it and then updated the CSS that you should already have.
- 2:52
Now, here's what we do in order to get a gravatar or avatar image in WordPress.
- 2:58
We're going to use a function called get_avatar.
- 3:02
[BLANK_AUDIO]
- 3:07
If we come over and look at this on the codex briefly.
- 3:10
[BLANK_AUDIO]
- 3:14
We could see that it takes a few parameters.
- 3:17
It takes the id or email of the user whose avatar we're trying to get.
- 3:22
And then the size of it, as well as if we want to use a custom default or
- 3:26
fallback image or any special alternate text.
- 3:34
In our case, we're going to pass it the ID of the author who wrote this post.
- 3:40
Now, this is a little complex to be passing in
- 3:43
functions within functions as parameters.
- 3:46
However, this is fairly conventional way in order for
- 3:50
us to get the data that we need into the function in this specific example.
- 3:56
I'm then going to set 24 as a small size so that we just have a small avatar.
- 4:01
And again, what's what's happening is that we're using a get_avatar function to
- 4:06
get the author meta information, specifically the ID of the author, and
- 4:11
then output it at a given size.
- 4:13
Now, because this is using get_avatar instead of something like the_avatar,
- 4:19
we have to use echo.
- 4:21
Remember, again, is that if you see something like the underscore,
- 4:25
it will automatically echo out that content.
- 4:28
However, in WordPress,
- 4:29
if you see get, you may have to echo out that content yourself.
- 4:36
So we'll get in their avatar and then in place of name here,
- 4:43
we're going to use a function called the_author_posts_link.
- 4:49
And what this does is generate a link to the special author page, which we'll be
- 4:54
able to control via an archives page which we'll build out shortly.
- 5:00
We can then come down and replace category.
- 5:03
[BLANK_AUDIO]
- 5:05
Excuse me, we're gonna take out the entire link because the the_category tag.
- 5:11
[BLANK_AUDIO]
- 5:16
Will automatically create links.
- 5:18
Now, the the_category tag takes a parameter.
- 5:24
And I'll show you when you start bouncing around one function to another to look up,
- 5:28
all you do is add the function to the end of this URL of Function Reference,
- 5:34
the codex and it will pull it up.
- 5:35
That there is a separator here, so
- 5:38
if you wanted to put a comma in between the categories or a pipe character, etc.
- 5:44
However, for now, we're gonna try leaving this blank and
- 5:47
seeing how it rolls out and looks.
- 5:50
Finally, we're going to add one more in here that I'm going to get by copying and
- 5:54
pasting this, and we're gonna update the class to date, and
- 5:58
then we're going to say when this was posted.
- 6:01
Now, this wasn't in our original static mockup.
- 6:05
However, I want to make sure we covered it so
- 6:08
that you could see the function here in action.
- 6:13
So, we're calling the date function, and then we come down and
- 6:18
we have space for our featured image.
- 6:21
Now, we looked earlier in our portfolio page at the code that we would need to
- 6:26
echo out the post thumbnail.
- 6:29
However, what we're going to do now is copy this, and
- 6:34
paste it in here as you might expect.
- 6:37
However, instead of leaving it just like this,
- 6:40
since some posts won't have a thumbnail, we want to run a conditional statement
- 6:45
that says if there is a post thumbnail, then go ahead and display it.
- 6:50
So what we wanna do here is write out a conditional statement that will check to
- 6:54
see if there is a featured image, and then if so, output the markup, and
- 6:58
if not, don't.
- 7:00
We could do that pretty easily by simply saying, if the_post_thumbnail.
- 7:07
And what that will do is check to see if there is a featured image for
- 7:10
this post and if there is,
- 7:12
it will output this code and then we'll close up our if statement here.
- 7:19
And that should all work properly.
- 7:21
This is something really great to know because in the future, you may find that
- 7:25
you need to do conditional statements on all sorts of different functions.
- 7:30
And it's nice to know that you could easily pass them in to
- 7:32
see if they're empty or test to see if the content is what you want it
- 7:37
to be in order to display conditional content.
- 7:41
So let's come over to our site now and
- 7:43
look at how this is outputted on our blog homepage.
- 7:48
Oh, before we do that, we do need to make sure that this is a colon.
- 7:51
If you're writing conditional statements this way,
- 7:54
you need to have an opening colon there.
- 7:57
So now when we come over and we view our blog, we could see that we have the title,
- 8:01
we have an excerpt displaying really big here and then we have this information.
- 8:09
It doesn't look like it's giving us the date here and
- 8:12
then this is a little bit styled funky.
- 8:15
So, let's, let's take one more swing through this and
- 8:19
see if we can work out some of these details here.
- 8:22
But before we do that,
- 8:23
we'll go into an individual post and add a featured image to it just so
- 8:28
we could test to see that our conditional statement that we wrote works properly.
- 8:33
So if we come into our blog, we could see that is showing there.
- 8:39
And if we inspect the code here and look at it, we could see it actually is
- 8:44
only outputting the image and not the markup that we wanted.
- 8:49
So, the reason for is when we ran this conditional statement, we're saying
- 8:52
the_post_thumbnail, which is echoing out the post thumbnail exactly there and
- 8:57
then not getting to this.
- 8:59
However, if we change that to if get_the_post_thumbnail and
- 9:04
then we refresh our page and look at the source code again for
- 9:07
this, we could see now the conditional statement is working.
- 9:12
So this is a helpful instance where we could see that get needs to be added in
- 9:16
order to run conditional statements like we were just talking about.
- 9:21
[BLANK_AUDIO]
- 9:23
The other piece I wanna show you before we go back is if we
- 9:28
come into testing a post and we add a bunch of lorem ipsum to it.
- 9:33
[BLANK_AUDIO]
- 9:47
Go back and view our blog.
- 9:51
We could see this shortening in effect here.
- 9:54
Now, this h2 is way too big.
- 9:57
It it looks like maybe we're missing a little bit of markup here or something.
- 10:03
Article class post, h2.
- 10:05
Article, h2.
- 10:06
They're looking.
- 10:07
[BLANK_AUDIO]
- 10:10
Because it’s putting the p tag inside of the h2.
- 10:16
So there is actually a fix that we could use that will remove these p tags and
- 10:20
just have the h2 showing because that’s what’s causing this text to appear real,
- 10:25
really big where it doesn’t end in our markup.
- 10:28
So what we're going to do is we're gonna take the excerpt and
- 10:32
we're gonna wrap it in the PHP functions strip_tags.
- 10:35
[BLANK_AUDIO]
- 10:38
And then pass it the excerpt.
- 10:40
However, instead of using just the_excerpt like we are,
- 10:43
we're going to say get the excerpt and then echo it here.
- 10:48
So again, we see the difference between having get at the beginning and
- 10:52
having just the_excerpt.
- 10:54
Get will get it, allow us to process this and then echo it out ourselves.
- 10:59
And if we refresh the page, we could see that's much nicer.
- 11:04
Now, the excerpt is controlled to set a certain number of characters and
- 11:10
then put the dot dot dot.
- 11:12
However, commonly you'll want to override this and control the amount of characters
- 11:19
that are showing, so we may want to make this, for example, a little bit shorter.
- 11:24
What we could do then is go into our functions.php file and
- 11:31
add a new function called wpt_excerpt_length.
- 11:36
[BLANK_AUDIO]
- 11:41
Which will have a parameter of length.
- 11:44
And we're just going to return the value 16 and
- 11:48
then add this to the filter or add this to the hook of excerpt_length.
- 11:55
[BLANK_AUDIO]
- 11:59
And then I'll explain again real quick what is going on here.
- 12:03
[BLANK_AUDIO]
- 12:06
And then we're gonna add this at the end.
- 12:08
Okay, so what's happening here is we have a custom function that's getting the value
- 12:12
of length that's being set default by WordPress,
- 12:16
returning 16 to override that and then telling WordPress, hey,
- 12:21
when you set what the excerpt_length is, first check our function.
- 12:27
And instead of first, we're actually putting this number here,
- 12:30
which will make it come much later in the process.
- 12:33
Just in case there's something before it that is overriding or
- 12:36
setting the function, we want ours to come later in that process.
- 12:40
So if we come back and refresh our page now, we could see, hey,
- 12:44
we have a nice shortened preview excerpt here and
- 12:48
that's a helpful thing to use in, in all sorts of different sites.
- 12:52
[BLANK_AUDIO]
- 12:55
It looks like this, this guy is slightly out of place here and
- 13:00
what we see when we click in here is that the cat, the, the categories tag or
- 13:06
function is outputting an entire unordered list and then echoing them out.
- 13:12
And I’d showed you before that if we look at the category, that it has options for
- 13:17
separators instead.
- 13:19
And if we choose to separate it by a space, for example, or a comma,
- 13:25
that will prevent this function from echoing out an entire unordered list.
- 13:31
So if we come back into our code, find where we have the category, and we pass
- 13:37
a parameter of separating by a space or even if we separate it by a comma,
- 13:43
this will prevent the categories from being echoed out in a list and
- 13:48
that should have everything displaying correctly.
- 13:55
Now, one more problem that we see on this page is that we
- 13:58
see the date displaying here, but not here.
- 14:03
The reason for this is a very interesting one at WordPress,
- 14:07
in that if the date is called, it will only show the date for
- 14:12
the first post of posts that have the same publish date.
- 14:16
For example, if there are three posts on the same date,
- 14:21
it will only echo out the date for the first one.
- 14:24
The reason being is that it's then assumed that the other ones were
- 14:26
also published on the same date.
- 14:29
Now, while that works for some layouts, it isn’t as good for this one.
- 14:33
And we could correct this by instead using the function the_time, and
- 14:37
the time is more unique than the date and
- 14:39
we could see it’s outputting the exact time by default here, which is
- 14:43
not what we really want, but we could see that it is still displaying for both.
- 14:49
So what we could do are set some more specific parameters here and
- 14:52
format how we want it to display because it is capable of displaying the date.
- 14:59
It just, by default, only puts out the time.
- 15:02
So, if we refresh that, we could see now we have the date.
- 15:06
Now we have the date for both posts.
- 15:09
So that's an important thing to remember, that the date is commonly used, but
- 15:13
more specifically, you wanna try to use the time and then format that correctly so
- 15:18
you don't have that issue coming up of it only displaying the date once.
- 15:23
At this point though, we have what looks like a pretty solid blog post listing page
- 15:29
where we could click to an individual blog post.
- 15:32
We could click to an author page and we could click to a category page.
- 15:36
Now, none of those we've set up and coded yet, but we'll do that once we
- 15:40
code out what we want our actual individual post page to look like.
- 15:45
So in the next step,
- 15:46
we'll jump in to looking at our single.php template which will control our posts. | https://teamtreehouse.com/library/wordpress-theme-development/adding-a-blog-to-a-wordpress-theme/coding-the-blog-homepage | CC-MAIN-2017-30 | refinedweb | 2,942 | 74.93 |
Mypy is one such tool, and it's an increasingly popular one. The idea is that you run Mypy on your code before running it. Mypy looks at your code and makes sure that your annotations correspond with actual usage. In that sense, it's far stricter than Python itself, but that's the whole point.
In my last article, I covered some basic uses for Mypy. Here, I want to expand upon those basics and show how Mypy really digs deeply into type definitions, allowing you to describe your code in a way that lets you be more confident of its stability.
Type Inference
Consider the following code:
x: int = 5
x = 'abc'
print(x)
This first defines the variable x, giving it a type annotation of int. It also assigns it to the integer 5. On the next line, it assigns x the string abc. And on the third line, it prints the value of x.
The Python language itself has no problems with the above code. But if you run mypy against it, you'll get an error message:
mytest.py:5: error: Incompatible types in assignment (expression has type "str", variable has type "int")
As the message says, the code declared the variable to have type int, but then assigned a string to it. Mypy can figure this out because, despite what many people believe, Python is a strongly typed language. That is, every object has one clearly defined type. Mypy notices this and then warns that the code is assigning values that are contrary to what the declarations said.
In the above code, you can see that I declared x to be of type int at definition time, but then assigned it to a string, and then I got an error. What if I don't add the annotation at all? That is, what if I run the following code via Mypy:
x = 5
x = 'abc'
print(x)
You might think that Mypy would ignore it, because I didn't add any annotation. But actually, Mypy infers the type of value a variable should contain from the first value assigned to it. Because I assigned an integer to x in the first line, Mypy assumed that x should always contain an integer.
This means that although you can annotate variables, you typically don't have to do so unless you're declaring one type and then might want to use another, and you want Mypy to accept both.
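For example, if a variable genuinely needs to hold more than one type over its lifetime, you can declare that up front with a Union annotation, and Mypy will accept both assignments. This is a small sketch; the variable name and values are invented for illustration:

```python
from typing import Union

# Annotating x as Union[int, str] tells Mypy that both of the
# following assignments are intentional, so neither one is flagged.
x: Union[int, str] = 5
x = 'abc'
print(x)
```

Running mypy on this file produces no errors, because the reassignment to a string matches the declared union.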
Defining Dictionaries
Python's dict ("dictionary") type is probably the most important in the entire language. It would seem, at first glance, that name-value pairs aren't very exciting or important. But when you think about how often programs use name-value pairs—for variables, namespaces, user name-ID associations—it becomes clear just how necessary this can be.
Dictionaries also are used as small databases, or structures, for keeping track of data. For many people new to Python, it seems natural to define a new class whenever they need a new data type. But for many Python users, it's more natural to use a dictionary. Or if you need a collection of them, a list of dicts.
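As a sketch of that style (the records here are invented), a list of dicts can be annotated so that Mypy checks every record and every field access against the declared shape:

```python
from typing import Dict, List

# A tiny "database": each record maps string field names to string values.
people: List[Dict[str, str]] = [
    {'name': 'Alice', 'role': 'admin'},
    {'name': 'Bob', 'role': 'user'},
]

for person in people:
    print(person['name'], person['role'])
```

With this annotation in place, appending a record whose keys or values aren't strings would draw a warning from Mypy.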
For example, assume that I want to keep track of prices on various items in a store. I can define the store's price list as a dictionary, in which the keys are the item names and the values are the item prices. For example:
menu = {'coffee': 5, 'sandwich': 7, 'soup': 8}
What happens if I accidentally try to add a new item to the menu, but mix up the name and value? For example:
menu[5] = 'muffin'
Python doesn't care; as far as it's concerned, you can have any hashable type as a key and absolutely any type as as value. But of course, you do care, and it might be nice to tighten up the code to ensure you don't make this mistake.
Here's a great thing about Mypy: it'll do this for you automatically, without you saying anything else. If I take the above two lines, put them into a Python file, and then check the program with Mypy, I get the following:
mytest.py:4: error: Invalid index type "int" for "Dict[str, int]"; expected type "str"
mytest.py:4: error: Incompatible types in assignment (expression has type "str", target has type "int")
In other words, Mypy noticed that the dictionary was (implicitly) set to have strings as keys and ints and values, simply because the initial definition was set that way. It then noticed that it was trying to assign a new key-value pair with different types and pointed to the problem.
Let's say, however, that you want to be explicit. You can do that by using the typing module, which defines annotation-friendly versions of many built-in types, as well as many new types designed for this purpose. Thus, I can say:

from typing import Dict

menu: Dict[str, int] = {'coffee': 5, 'sandwich': 7, 'soup': 8}
menu[5] = 'muffin'
In other words, when I define my menu variable, I also give it a type annotation. This type annotation makes explicit what Mypy implied from the dict's definition—namely that keys should be strings and values should be ints. So, I got the following error message from Mypy:
mytest.py:6: error: Invalid index type "int" for "Dict[str, int]"; expected type "str"
mytest.py:6: error: Incompatible types in assignment (expression has type "str", target has type "int")
What if I want to raise the price of the soup by 0.5? Then the code looks like this:
menu: Dict[str, int] = {'coffee': 5, 'sandwich': 7, 'soup': 8.5}
And I end up getting an additional warning:
mytest.py:5: error: Dict entry 2 has incompatible type "str": "float"; expected "str": "int"
As I explained in my last article, you can use a Union to define several different options:

from typing import Dict, Union

menu: Dict[str, Union[int, float]] = {'coffee': 5, 'sandwich': 7, 'soup': 8.5}
menu[5] = 'muffin'
With this in place, Mypy knows that the keys must be strings, but the values can be either ints or floats. So, this silences the complaint about the soup's price being 8.5, but retains the warning about the reversed assignment regarding muffins.
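One practical consequence: once the values are declared as Union[int, float], code that reads from the dict is checked against that union too. Here's a sketch—the total function and the Price alias are my own names, not from the article:

```python
from typing import Dict, Union

Price = Union[int, float]  # a type alias, purely for readability

menu: Dict[str, Price] = {'coffee': 5, 'sandwich': 7, 'soup': 8.5}

def total(prices: Dict[str, Price]) -> Price:
    # The result of summing ints and floats is itself an int or
    # a float, which matches the declared return type.
    return sum(prices.values())

print(total(menu))  # 20.5
```

A type alias like Price keeps the annotations short and gives you one place to change if the set of acceptable value types ever grows.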
Optional Values
In my last article, I showed how when you define a function, you can annotate not only the parameters, but also the return type. For example, let's say I want to implement a function, doubleget, that takes two arguments: a dictionary and a key. It returns the value associated with the key, but doubled. For example:
from typing import Dict

def doubleget(d: Dict[str, int], k) -> int:
    return d[k] * 2

menu: Dict[str, int] = {'coffee': 5, 'sandwich': 7, 'soup': 8}
print(doubleget(menu, 'sandwich'))
This is fine, but what happens if the user passes a key that isn't in the dict? This will end up raising a KeyError exception. I'd like to do what the dict.get method does—namely return None if the key is unknown. So, my implementation will look like this:
from typing import Dict

def doubleget(d: Dict[str, int], k) -> int:
    if k in d:
        return d[k] * 2
    else:
        return None

menu: Dict[str, int] = {'coffee': 5, 'sandwich': 7, 'soup': 8}
print(doubleget(menu, 'sandwich'))
print(doubleget(menu, 'elephant'))
From Python's perspective, this is totally fine; it'll get 14 back from the first call and None back from the second. But from Mypy's perspective, there is a problem: this indicated that the function will always return an integer, and now it's returning None:
mytest.py:10: error: Incompatible return value type (got "None", expected "int")
I should note that Mypy doesn't flag this problem when you call the function. Rather, it notices that you're allowing the function to return a None value in the function definition itself.
One solution is to use a Union type, as I showed earlier, allowing an integer or None to be returned. But that doesn't quite express what the goal is here. What I would like to do is say that it might return an integer, but it might not—meaning, more or less, that the returned integer is optional.
Sure enough, Mypy provides for this with its
Optional type:
from typing import Dict, Optional def doubleget(d: Dict[str, int], k) -> Optional[int]: if k in d: return d[k] * 2 else: return None
By annotating the function's return type with
Optional[int], this is
saying that if something is returned, it will be an integer. But, it's
also okay to return
None.
Optional is useful not only when you're returning values from a
function, but also when you're defining variables or object attributes.
It's pretty common, for example, for the
__init__ method in a class to
define all of an object's attributes, even those that aren't defined in
__init__ itself. Since you don't yet know what values you want to set,
you use the
None value. But of course, that then means the attribute
might be equal to
None, or it might be equal to (for example) an
integer. By using
Optional when setting the attribute, you signal that
it can be either an integer or a
None value.
For example, consider the following code:
class Foo(): def __init__(self, x): self.x = x self.y = None f = Foo(10) f.y = 'abcd' print(vars(f))
From Python's perspective, there isn't any issue. But you might like to
say that both
x and
y must be integers, except
for when
y is
initialized and set to
None. You can do that as follows:
from typing import Optional class Foo(): def __init__(self, x: int): self.x: int = x self.y: Optional[int] = None
Notice that there are three type annotations here: on the
parameter
x
(
int), on the attribute
self.x (also
int) and on the attribute
self.y (which is
Optional[int]). Python won't
complain if you break
these rules, but if you still have the code that was run before:
f = Foo(10) f.y = 'abcd' print(vars(f))
Mypy will complain:
mytest.py:13: error: Incompatible types in assignment ↪(expression has type "str", variable has type ↪"Optional[int]")
Sure enough, you now can assign either
None or an integer
to
f.y.
But if you try to set any other type, you'll get a warning from Mypy.
Conclusion
Mypy is a huge step forward for large-scale Python applications. It promises to keep Python the way you've known it for years, but with added reliability. If your team is working on a large Python project, it might well make sense to start incorporating Mypy into your integration tests. The fact that it runs outside the language means you can add Mypy slowly over time, making your code increasingly robust.
Resources
You can read more about Mypy here. That site
has documentation, tutorials and even information for people using
Python 2 who want to introduce
mypy via comments (rather than
annotations). | https://www.linuxjournal.com/content/pythons-mypy-advanced-usage | CC-MAIN-2020-29 | refinedweb | 1,875 | 69.31 |
Summary
A mixin is a collection of methods that can be injected into a class. The mixin technique consists in building classes by composing reusable mixins. The advantages and disadvantages of the technique are very much debated. I will focus on the negative aspects.
The idea of creating classes by composing reusable collections of methods is quite old: for instance Flavors, and old Lisp dialect, featured mixins more that 25 years ago. Nevertheless, there are still people thinking that the idea is new and cool: as a consequence, we see a lot of new code where the idea is misused or abused. Writing a paper showing the downfalls of mixins is in my opinion worth the effort.
Injecting methods into a class namespace is a bad idea for a very simple reason: every time you use a mixin, you are actually polluting your class namespace and losing track of the origin of your methods. In this sense, using a mixin in a class is just as bad as using from module import * in a module. However, everybody agrees that it is much better to use the full form mymodule.myname instead of importing all the names in the global namespace, but nobody realizes that importing methods into a class via a mixin is potentially just as bad.
At worst, using mixins can become just a modern way of writing spaghetti code. In order to explain what the problem is, I will show two real life Python frameworks making use of mixins: Tkinter e Zope.
Tkinter is a GUI framework which is part of Python standard library. It is a case where mixins work decently well, because it is a small framework, but it is also large enough that it is possible to see the beginning of the problem.
Every Tkinter class, even the simple Label is composed by multiple mixins:
>>> import Tkinter, inspect >>> for i, cls in enumerate(inspect.getmro(Tkinter.Label)): ... # show the ancestors ... print i, cls 0 Tkinter.Label 1 Tkinter.Widget 2 Tkinter.BaseWidget 3 Tkinter.Misc 4 Tkinter.Pack 5 Tkinter.Place 6 Tkinter.Grid
The standard library function inspect.getmro(cls) returns a tuple with the ancestors of cls, in the order specified by the Method Resolution Order (MRO). In our example the MRO of Label contains the geometry mixins (Grid, Pack and Place) and the generic mixin Misc which provides a lot of functionality by delegating to the underlying Tk library. The classes BaseWidget, Widget and Label have state and they take the role of base classes, not mixins.
You should see what I mean by namespace pollution: if you use any IDE with an autocompletion feature (or even ipython): if you try to complete the expression Tkinter.Label., you will get 181 choices. 181 attributes on a single class are a lot. If you invoke the builtin help
>>> help(Tkinter.Label)
you will see the origin of the different attributes, i.e. the classes from which they come: the output spans a few hundreds of lines.
Luckily, Tkinter is a very stable framework (I mean: it works) and there is no need to investigate the hierarchies to find bugs or the reason for unexpected behavior. Moreover Tkinter is a comparatively small framework: 181 attributes are many, but not too many, and with some effort one could manage to find out their origin. Things are however very much different in the case of Zope/Plone.
For instance, have a look at the hierarchy of the Plone Site class which I report in appendix.).
My hate for mixins comes from my experience with Zope/Plone. However the same abuses could be equally be done in other languages and object systems - with the notable exception of CLOS, where methods are defined outside run into this issue: I overrode a pre-defined method inadvertently, by causing hard to investigate problems in an unrelated part of the code.
The first thing I did after being bitten by Plone was to write an utility function to identify the overridden methods. Let me show here a simplified version of that function, called warn_overriding.
You can use it when you need to work with a big framework which you do not know well.
First of all, it is convenient to introduce a couple of utility functions: getpublicnames to get the public names in a namespace
def getpublicnames(obj): "Return the public names in obj.__dict__" return set(n for n in vars(obj) if not n.startswith('_'))
and find_common_names to extract the common names from a set of classes:
def find_common_names(classes): "Perform n*(n-1)/2 namespace overlapping checks on a set of n classes" n = len(classes) names = map(getpublicnames, classes) for i in range(0, n): for j in range(i+1, n): ci, cj = classes[i], classes[j] common = names[i] & names[j] if common: yield common, ci, cj
Moreover, it is convenient to define a warning class:
class OverridingWarning(Warning): pass
Now it is easy to implement warn_overriding as a class decorator.
def warn_overriding(cls): """ Print a warning for each public name which is overridden in the class hierarchy, unless if is listed in the "override" class attribute. """ override = set(vars(cls).get("override", [])) ancestors = inspect.getmro(cls) if ancestors[-1] is object: # remove the trivial ancestor <object> ancestors = ancestors[:-1] for common, c1, c2 in find_common_names(ancestors): overridden = ', '.join(common - override) if ',' in overridden: # for better display of the names overridden = '{%s}' % overridden if overridden: msg = '%s.%s overriding %s.%s' % ( c1.__name__, overridden, c2.__name__, overridden) warnings.warn(msg, OverridingWarning, stacklevel=2) return cls
Here is an example to show how it works in practice. Given the base classes
class Base(object): def m1(self): pass def spam(self): pass
class M1(object): def m2(self): pass def ham(self): pass
class M2(object): def m3(self): pass def spam(self): pass
we can define the subclass
class Child(Base, M1, M2): def ham(self): pass def spam(self): pass def m1(self): pass
which features method overriding. The overriding is specified by the MRO, which includes five classes:
>>> inspect.getmro(Child) (<class "Child">, <class "Base">, <class "M1">, <class "M2">, <type "object">)
find_common_names takes those classes (except object which does not provide any public name) and look for common names which are printed by warn_overriding:
>>> Child = warn_overriding(Child) OverridingWarning: Child.{m1, spam} overriding Base.{m1, spam} OverridingWarning: Child.ham overriding M1.ham OverridingWarning: Child.spam overriding M2.spam OverridingWarning: Base.spam overriding M2.spam
In recent versions of Python (2.6+) it is possible to use the elegant syntax
@warn_overriding class Child(Base, M1, M2): ...
The advantages of the class decorator syntax are clear: the decorator is much more visible since it comes before the class and not after; moreover the warning prints the line number corresponding to the class definition, the right place where to look in case of overriding. It is possible to avoid the warnings by listing explicitly the overriding methods in the .override class attribute. For instance, try to add the line:
override = ['m1', 'spam', 'ham']
in the definition of Child and you will see that the warnings will disappear
warn_overriding is a small tool which can help you when you are fighting with a big framework, but it is not a solution. The solution is not to use mixins in the first place. The next articles in this series will discuss a few alternatives.
Here is the picture: if it is too big to fit in your screen, that proves my point against mixins ;)
Have an opinion? Readers have already posted 12 comments about this weblog entry. Why not add yours?
If you'd like to be notified whenever Michele Simionato adds a new entry to his weblog, subscribe to his RSS feed. | http://www.artima.com/weblogs/viewpost.jsp?thread=246341 | CC-MAIN-2014-52 | refinedweb | 1,295 | 55.34 |
The earth’s Atmospheric CO2 level is increasing day by day. The global average atmospheric carbon dioxide in 2019 was 409.8 parts per million and in October-2020 it is 411.29. Carbon dioxide is a key greenhouse gas and responsible for about three-quarters of emissions. So CO2 level monitoring has also started to gain importance.
In our previous project, we used the Gravity Infrared CO2 sensor to measure the CO2 concentration in air. In this project, we are going to use an MQ-135 sensor with Arduino to measure the CO2 concentration. The measured CO2 concentration values will be displayed on the OLED module and last we will also compare the Arduino MQ-135 sensor readings with Infrared CO2 sensor readings. Apart from CO2, we have also measured the concentration of LPG, Smoke, and Ammonia gas using Arduino.
Components Required
- Arduino Nano
- MQ-135 Sensor
- Jumper Wires
- 0.96’ SPI OLED Display Module
- Breadboard
- 22KΩ Resistor_0<<
For this project, we are using a Monochrome 7-pin SSD1306 0.96” OLED display. It can work on three different communications Protocols: SPI 3 Wire mode, SPI four-wire mode, and I2C mode. You can also learn more about the basics of OLED display and its types by reading the linked article. The pins and its functions are explained in the table below:
OLED Specifications:
- OLED Driver IC: SSD1306
- Resolution: 128 x 64
- Visual Angle: >160°
- Input Voltage: 3.3V ~ 6V
- Pixel Colour: Blue
- Working temperature: -30°C ~ 70°C. The circuit diagram for the MQ-135 board is given below:
The load resistor RL plays a very important role in making the sensor work. This resistor changes its resistance value according to the concentration of gas. According to the MQ-135 datasheet, the load resistor value can range anywhere from 10KΩ to 47KΩ. The datasheet recommends that you calibrate the detector for 100ppm NH3 or 50ppm Alcohol concentration in air and use a value of load resistance (RL) of about 20 KΩ. But if you track your PCB traces to find the value of your RL in the board, you can see a 1KΩ (102) load resistor.
So to measure the appropriate CO2 concentration values, you have to replace the 1KΩ resistor with a 22KΩ resistor.
Circuit Diagram to Interface MQ135 with Arduino
The complete schematics to connect MQ-135 Gas Sensor with Arduino is given below:
The circuit is very simple as we are only connecting the MQ-135 Sensor and OLED Display module with Arduino Nano. MQ-135 Gas Sensor and OLED Display module both are powered with +5V and GND. The Analog Out pin of the MQ-135 sensor is connected to the A0 pin of Arduino Nano. Since the OLED Display module uses SPI communication, we have established an SPI communication between the OLED module and Arduino Nano. The connections are shown in the below table:
After connecting the hardware according to the circuit diagram, the Arduino MQ135 sensor setup should look something like below:
Calculating the Ro Value of MQ135 Sensor
Now that we know the value of RL, let’s proceed on how to calculate the Ro values in clean air. Here we are going to use MQ135.h to measure the CO2 concentration in the air. So first download the MQ-135 Library, then preheat the sensor for 24 hours before reading the Ro values. After the preheating process, use the below code to read the Ro values:
#include "MQ135.h" void setup (){ Serial.begin (9600); } void loop() { MQ135 gasSensor = MQ135(A0); // Attach sensor to pin A0 float rzero = gasSensor.getRZero(); Serial.println (rzero); delay(1000); }
Now once you got the Ro values, Go to Documents > Arduino > libraries > MQ135-master folder and open the MQ135.h file and change the RLOAD & RZERO values.
///The load resistance on the board #define RLOAD 22.0 ///Calibration resistence at atmospheric CO2 level #define RZERO 5804.99
Now scroll down and replace the ATMOCO2 value with the current Atmospheric CO2 that is 411.29
///Atmospheric CO2 level for calibration purposes #define ATMOCO2 397.13
Code to Measure CO2 Using Arduino MQ135 Sensor
The complete code for interfacing MQ-135 Sensor with Arduino is given at the end of the document. Here we are explaining some important parts of the MQ135 Arduino code.
The code uses the Adafruit_GFX, and Adafruit_SSD1306, and MQ135.h libraries. These libraries can be downloaded from the Library Manager in the Arduino IDE and install it from there. For that, open the Arduino IDE and go to Sketch < Include Library < Manage Libraries. Now search for Adafruit GFX and install the Adafruit GFX library by Adafruit.
Similarly, install the Adafruit SSD1306 libraries by Adafruit. MQ135 library can be downloaded from here.
After installing the libraries to Arduino IDE, start the code by including the needed libraries files.
#include "MQ135.h" #include <SPI.h> #include <Adafruit_GFX.h> #include <Adafruit_SSD1306.h>
Then, define the OLED width and height. In this project, we’re using a 128×64 SPI OLED display. You can change);
After that, define the Arduino pin where the MQ-135 sensor is connected.
int sensorIn = A0;
Now inside the setup() function, initialize the Serial Monitor at a baud rate of 9600 for debugging purposes. Also, Initialize the OLED display with the begin() function.
Serial.begin(9600); display.begin(SSD1306_SWITCHCAPVCC); display.clearDisplay();
Inside the loop() function, first read the signal values at the Analog pin of Arduino by calling the analogRead() function.
val = analogRead(A0); Serial.print ("raw = ");
Then in the next line, call the gasSensor.getPPM() to calculate the PPM values. The PPM values are calculated using the Load resistor, R0, and reading from the analog pin.
float ppm = gasSensor.getPPM(); Serial.print ("ppm: "); Serial.println (ppm);
After that, set the text size and text colour using the setTextSize() and setTextColor().
display.setTextSize(1); display.setTextColor(WHITE);
Then in the next line, define the position where the text starts using the setCursor(x,y) method. And print the CO2 Values on OLED Display using the display.println() function.
display.setCursor(18,43); display.println("CO2"); display.setCursor(63,43); display.println("(PPM)"); display.setTextSize(2); display.setCursor(28,5); display.println(ppm);
And in the last, call the display() method to display the text on OLED Display.
display.display(); display.clearDisplay();
Testing the Interfacing of MQ-135 Sensor with Arduino
Once the hardware and code are ready, it is time to test the sensor. For that, connect the Arduino to the laptop, select the Board and Port, and hit the upload button. Then open your serial monitor and wait for some time (preheat process), then you'll see the final data. The Values will be displayed on OLED display as shown below:
This is how an MQ-135 sensor can be used to measure accurate CO2 in the air. The complete MQ135 Air Quality Sensor Arduino Code and working video are given below. If you have any doubts, leave them in the comment section.
/* * Interfacing MQ135 Gas Senor with Arduino * Author: Ashish * Website: * Date: 11-11-2020 */ // The load resistance on the board #define RLOAD 22.0 #include "MQ135.h" #include <SPI.h> #include <Adafruit_GFX.h> #include <Adafruit_SSD1306.h> ); MQ135 gasSensor = MQ135(A0); int val; int sensorPin = A0; int sensorValue = 0; void setup() { Serial.begin(9600); pinMode(sensorPin, INPUT); display.begin(SSD1306_SWITCHCAPVCC); display.clearDisplay(); display.display(); } void loop() { val = analogRead(A0); Serial.print ("raw = "); Serial.println (val); // float zero = gasSensor.getRZero(); // Serial.print ("rzero: "); //Serial.println (zero); float ppm = gasSensor.getPPM(); Serial.print ("ppm: "); Serial.println (ppm); display.setTextSize(2); display.setTextColor(WHITE); display.setCursor(18,43); display.println("CO2"); display.setCursor(63,43); display.println("(PPM)"); display.setTextSize(2); display.setCursor(28,5); display.println(ppm); display.display(); display.clearDisplay(); delay(2000); } | https://circuitdigest.com/microcontroller-projects/interfacing-mq135-gas-sensor-with-arduino-to-measure-co2-levels-in-ppm | CC-MAIN-2020-50 | refinedweb | 1,290 | 57.98 |
Delegate.CreateDelegate Method (Type, MethodInfo)
Creates a delegate of the specified type to represent the specified static method.
Namespace: SystemNamespace: System
Assembly: mscorlib (in mscorlib.dll)
Parameters
- type
- Type: System.Type
The Type of delegate to create.
- method
- Type: System.Reflection.MethodInfo
The MethodInfo describing the static or instance method the delegate is to represent. Only static methods are supported in the .NET Framework version 1.0 and 1.1.
Return ValueType: System.Delegate
A delegate of the specified type to represent the specified static method.
In the .NET Framework version 1.0 and 1.1, this method overload creates delegates for static methods only. In the .NET Framework version 2.0, this method overload also can create open instance method delegates; that is, delegates that explicitly supply the hidden first argument of instance methods. For a detailed explanation, see the more general CreateDelegate(Type, Object, MethodInfo) method overload, which allows you to create all combinations of open or closed delegates for instance or static methods, and optionally to specify a first argument.
This method overload is equivalent to calling the CreateDelegate(Type, MethodInfo, Boolean) method overload and specifying true for throwOnBindFailure.
Compatible Parameter Types and Return Type
In the .NET Framework version 2.0, the parameter types and return type of a delegate created using this method overload must be compatible with the parameter types and return type of the method the delegate represents; the types do not have to match exactly. This represents a relaxation of the binding behavior in the .NET Framework version 1.0 and 1.1, where the types must.
This section contains two code examples. The first example demonstrates the two kinds of delegates that can be created with this method overload: open over an instance method and open over a static method.
The second code example demonstrates compatible parameter types and return types.
Example 1
The following code example demonstrates the two ways a delegate can be created using this overload of the CreateDelegate method.
The example declares a class C with a static method M2 and an instance method M1, and two delegate types: D1 takes an instance of C and a string, and D2 takes a string.
A second class named Example contains the code that creates the delegates.
A delegate of type D1, representing an open instance method, is created for the instance method M1. An instance must be passed when the delegate is invoked.
A delegate of type D2, representing an open static method, is created for the static method M2.
using System; using System.Reflection; using System.Security.Permissions; // Declare three delegate types for demonstrating the combinations // of static versus instance methods and open versus closed // delegates. // public delegate void D1(C c, string s); public delegate void D2(string s); public delegate void D3(); // A sample class with an instance method and a static method. // public class C { private int id; public C(int id) { this.id = id; } public void M1(string s) { Console.WriteLine("Instance method M1 on C: id = {0}, s = {1}", this.id, s); } public static void M2(string s) { Console.WriteLine("Static method M2 on C: s = {0}", s); } } public class Example { public static void Main() { C c1 = new C(42); // Get a MethodInfo for each method. // MethodInfo mi1 = typeof(C).GetMethod("M1", BindingFlags.Public | BindingFlags.Instance); MethodInfo mi2 = typeof(C).GetMethod("M2", BindingFlags.Public | BindingFlags.Static); D1 d1; D2 d2; D3 d3; Console.WriteLine("\nAn instance method closed over C."); // In this case, the delegate and the // method must have the same list of argument types; use // delegate type D2 with instance method M1. // Delegate test = Delegate.CreateDelegate(typeof(D2), c1, mi1, false); // Because false was specified for throwOnBindFailure // in the call to CreateDelegate, the variable 'test' // contains null if the method fails to bind (for // example, if mi1 happened to represent a method of // some class other than C). // if (test != null) { d2 = (D2) test; // The same instance of C is used every time the // delegate is invoked. d2("Hello, World!"); d2("Hi, Mom!"); } Console.WriteLine("\nAn open instance method."); // In this case, the delegate has one more // argument than the instance method; this argument comes // at the beginning, and represents the hidden instance // argument of the instance method. Use delegate type D1 // with instance method M1. 
// d1 = (D1) Delegate.CreateDelegate(typeof(D1), null, mi1); // An instance of C must be passed in each time the // delegate is invoked. // d1(c1, "Hello, World!"); d1(new C(5280), "Hi, Mom!"); Console.WriteLine("\nAn open static method."); // In this case, the delegate and the method must // have the same list of argument types; use delegate type // D2 with static method M2. // d2 = (D2) Delegate.CreateDelegate(typeof(D2), null, mi2); // No instances of C are involved, because this is a static // method. // d2("Hello, World!"); d2("Hi, Mom!"); Console.WriteLine("\nA static method closed over the first argument (String)."); // The delegate must omit the first argument of the method. // A string is passed as the firstArgument parameter, and // the delegate is bound to this string. Use delegate type // D3 with static method M2. // d3 = (D3) Delegate.CreateDelegate(typeof(D3), "Hello, World!", mi2); // Each time the delegate is invoked, the same string is // used. d3(); } } /* This code example produces the following output: An instance method closed over C. Instance method M1 on C: id = 42, s = Hello, World! Instance method M1 on C: id = 42, s = Hi, Mom! An open instance method. Instance method M1 on C: id = 42, s = Hello, World! Instance method M1 on C: id = 5280, s = Hi, Mom! An open static method. Static method M2 on C: s = Hello, World! Static method M2 on C: s = Hi, Mom! A static method closed over the first argument (String). Static method M2 on C: s = Hello, World! */
Example 2
The following code example demonstrates compatibility of parameter types and return types.
The code example defines a base class named Base and a class named Derived that derives from Base. The derived class has a static (Shared in Visual Basic) method named MyMethod with one parameter of type Base and a return type of Derived. The code example also defines a delegate named Example that has one parameter of type Derived and a return type of Base.
The code example demonstrates that the delegate named Example can be used to represent the method MyMethod. The method can be bound to the delegate because:
The parameter type of the delegate (Derived) is more restrictive than the parameter type of MyMethod (Base), so that it is always safe to pass the argument of the delegate to MyMethod.
The return type of MyMethod (Derived) is more restrictive than the parameter type of the delegate (Base), so that it is always safe to cast the return type of the method to the return type of the delegate.
The code example produces no output.
using System; using System.Reflection; // Define two classes to use in the demonstration, a base class and // a class that derives from it. // public class Base {} public class Derived : Base { // Define a static method to use in the demonstration. The method // takes an instance of Base and returns an instance of Derived. // For the purposes of the demonstration, it is not necessary for // the method to do anything useful. // public static Derived MyMethod(Base arg) { Base dummy = arg; return new Derived(); } } // Define a delegate that takes an instance of Derived and returns an // instance of Base. // public delegate Base Example(Derived arg); class Test { public static void Main() { // The binding flags needed to retrieve MyMethod. BindingFlags flags = BindingFlags.Public | BindingFlags.Static; // Get a MethodInfo that represents MyMethod. MethodInfo minfo = typeof(Derived).GetMethod("MyMethod", flags); // Demonstrate contravariance of parameter types and covariance // of return types by using the delegate Example to represent // MyMethod. The delegate binds to the method because the // parameter of the delegate is more restrictive than the // parameter of the method (that is, the delegate accepts an // instance of Derived, which can always be safely passed to // a parameter of type Base), and the return type of MyMethod // is more restrictive than the return type of Example (that // is, the method returns an instance of Derived, which can // always be safely cast to type Base). // Example ex = (Example) Delegate.CreateDelegate(typeof(Example), minfo); // Execute MyMethod using the delegate Example. // Base b = ex(new Derived()); } }
-Access. | https://msdn.microsoft.com/en-us/library/53cz7sc6 | CC-MAIN-2015-11 | refinedweb | 1,392 | 58.58 |
Firstly: Scala development in IDEA with the Scala plugin is excellent! I'm using it for a large multi-module project and it mostly 'just works'. Very nice work.
I've found a couple of weird cases with the error highlighting, though.
Here's a simple example where it thinks a comment is a syntax error:
class IDEAHighlight1 {
def comments: Unit = {
Some("foo")
.map { _.toString } // comment here is fine
.map { _.length }
// comment here seen as syntax error
.map { _ + 3 }
}
}
Screenshot attached.
Details:
- IntelliJ IDEA 10.0.1 Community, Build #IC-99.32, 23-Dec-2010
- Scala plugin: 0.4.345
Attachment(s):
Menu_047.png
Similar to this bug:
I've lodged a separate bug.
Thanks for the report.
-jason
Thanks, Jason. Appreciate it.
This is hacked now. Still not work with /**/ comments. But your cases is ok now.
I'm planning to fix such problems in general way, but this way needs more time.
Best regards,
Alexander Podkhalyuzin.
Thanks, Alexander!
Are there nightly or experimental builds that I could try out?
Mark
Wait for the next one here:
Verified that this is working in 0.4.428. Thanks for the quick fix. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206641005-Comment-highlighted-as-error | CC-MAIN-2020-40 | refinedweb | 195 | 80.68 |
Re: ANN: Kamaelia 0.2.0 released!
- From: Michael Sparks <ms@xxxxxxxxxxxx>
- Date: Wed, 03 Aug 2005 22:16:37 +0200
phil hunt wrote:
> On Wed, 03 Aug 2005 16:57:34 +0100, Michael Sparks <michaels@xxxxxxxxxxxx> wrote:
>>> Is the audience programmers or
>>> less technical people? A project that allows non-technical people
>>> to build complex network applications is an ambitious one, but not
....
>>It's a little ambitious at this stage, yes.
> But it couldbe there eventually?
Could? Yes. Will? Can't say. I can agree it would be nice, and given
time/resources (or someone sufficiently interested externally) then it
may happen. (IMO there's no real reason that it couldn't happen aside
from time/effort/resources)
>>> What sort of servers and clients?
>>Whatever you feel like. If you want a server to split and serve audio,
>>you could do that.
> This is streaming audio, right? For non-streaming I can just use an
> ftp or http server.
There's more to network servers and clients than just audio & video, or
unidirectional download.
The visualisation/introspection tool, for example, is a
client-server system. The server is the visualisation tool. It listens on a
specified port waiting for a connection. The client connects and sends
data to it about the internal structure, and the server displays this.
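To make that inversion of the usual roles concrete, here's a minimal sketch in
plain Python sockets (not Kamaelia's actual components, and the wire format is
invented purely for illustration): the visualisation tool is the listening
server, and the system being introspected is the client that pushes a
description of its own topology.

```python
import socket
import threading

received = []

# The "server" here plays the visualisation tool's role: it listens,
# and renders whatever structure descriptions a client pushes to it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def display():
    conn, _ = srv.accept()
    received.append(conn.recv(4096).decode())  # a real tool would draw this
    conn.close()

t = threading.Thread(target=display)
t.start()

# The "client" is the running system being introspected: it connects
# out and sends a description of its internal topology.  (This wire
# format is made up for the sketch.)
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"ADD NODE encoder\nADD LINK encoder decoder\n")
cli.close()
t.join()
srv.close()

print(received[0])
```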
>>>> *.
*Our* main interest, at the moment, is in /delivery/ of content.
Dealing with capture would be largely re-inventing wheels before we know
whether the framework is a suitable framework. We are looking at making it
possible to use pymedia for dealing with capture/encoding/decoding.
There's a number of things along the way we need to deal with this, but
we're not starting from the perspective of capture.
(After all for capture we can generally look at using dedicated encoder
hardware that will often spit out it's encoded information in the form of a
network connection. As a result capture and encoding hasn't been a priority
as yet. Sometimes looking into a project from the outside I can appreciate
that certain decisions might look strange, but consider that you don't need
to worry about capture in order
>>> ).
You said *can't*. That says to me it can never be broken. If you have a large
number of listeners, as your statement implied, the content must be decryptable by
many listeners - you then just need one compromised listener (essentially
you're asking for the equivalent of implementing a DRM system that the NSA
couldn't break...).
If you can provide me with a library that you can guarantee that it will
satisfy the following properties:
encoded = F(data)
and a piece of client code that can do this:
decoded = G(encoded)
Then yes, that can be wrapped. That's trivial in fact:
---(start)---
from magic import unbreakable_encryption

class encoder(component):
    def __init__(self, **args):
        self.encoder = unbreakable_encryption.encoder(**args)
    def main(self):
        while 1:
            if self.dataReady("inbox"):
                data = self.recv("inbox")
                encoded = self.encoder.encode(data)
                self.send(encoded, "outbox")
            yield 1

class decoder(component):
    def __init__(self, **args):
        self.decoder = unbreakable_encryption.decoder(**args)
    def main(self):
        while 1:
            if self.dataReady("inbox"):
                data = self.recv("inbox")
                decoded = self.decoder.decode(data)
                self.send(decoded, "outbox")
            yield 1
---(end)---
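The sketch above assumes Kamaelia's component base class. A minimal stand-in (not Kamaelia's real implementation — just enough of the inbox/outbox/generator pattern to run the idea, with rot13 standing in for the fictional unbreakable_encryption):

```python
import codecs
from collections import deque

class component:
    """Toy stand-in for a Kamaelia-style component: named mailboxes plus a
    main() generator that a scheduler steps one yield at a time."""
    def __init__(self):
        self.boxes = {"inbox": deque(), "outbox": deque()}

    def dataReady(self, box):
        return len(self.boxes[box]) > 0

    def recv(self, box):
        return self.boxes[box].popleft()

    def send(self, data, box):
        self.boxes[box].append(data)

class rot13_encoder(component):
    """Plays the role of the encoder component, with rot13 as the cipher."""
    def main(self):
        while 1:
            if self.dataReady("inbox"):
                data = self.recv("inbox")
                self.send(codecs.encode(data, "rot13"), "outbox")
            yield 1

# Drive it by hand, the way a scheduler would:
enc = rot13_encoder()
enc.send("hello", "inbox")   # deliver a message to the inbox
gen = enc.main()
next(gen)                    # one scheduler step
print(enc.recv("outbox"))    # -> uryyb
```

The point of the pattern is that the encoder never blocks: each yield hands control back to the scheduler, so many such components can be interleaved in one thread.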
If you believe you can implement F&G for general use such that F&G
can //never// be decrypted by anyone other than authorised recipients, I
suggest you cease this conversation - you have some highly marketable
technology.
>>>.
Realtime systems are a subset of all systems that are interesting in terms
of network delivery & multimedia systems. Realtime scheduling is a well
known area and if/when this becomes an issue, we'll look at adding it into
the mix. The real problem is dealing with concurrency and making it simple
to work with. Making realtime concurrent systems easy to work with strikes
me as running before you can walk.
(Network systems are naturally concurrent, so if you're aiming to make
network systems easy to build you're really talking about making concurrent
systems easy to build.)
>>>?
I mean a mesh. (see below)
>>This could be of use in many areas, such as GRID based systems
>>for distributed rendering, application layer multicast, and network
>>multicast island joining.
> Unpack, please.
They're all big systems, all of which utilise networks of collaborating
systems for different purposes.
Grid starting point:
*
*short* introduction to application level multicast (has many names,
including overlay multicast):
*
* Also puts the term "mesh" in context.
Multicast island joining is a special case and is exactly what it says -
joining multicast islands together.
>>Due to the illegal /uses/ of P2P, much work in this area is difficult to
>>reuse due to defensive coding.
>
> Oh. Could you give an example?
How many RFCs have you seen documenting the protocols used by (say)
Napster, Bit Torrent, Gnutella, Limewire, Freenet? The legitimate uses of
Bit Torrent for example tend to get ignored by certain large companies
when trying to shut down systems.
>.
It is (as mentioned on the page describing the visualisation tool).
Regards,
Michael.
- Re: ANN: Kamaelia 0.2.0 released!
- From: phil hunt
- Index(es): | http://coding.derkeiler.com/Archive/Python/comp.lang.python/2005-08/msg00922.html | crawl-001 | refinedweb | 892 | 56.15 |
Source code for sympy.core.singleton
"""Singleton mechanism""" from __future__ import print_function, division from .core import Registry from .assumptions import ManagedProperties from .sympify import sympify[docs]class SingletonRegistry(Registry): """ The registry for the singleton classes (accessible as ``S``). This class serves as two separate things. The first thing it is is the ``SingletonRegistry``. Several classes in SymPy appear so often that they are singletonized, that is, using some metaprogramming they are made so that they can only be instantiated once (see the :class:`sympy.core.singleton.Singleton` class for details). For instance, every time you create ``Integer(0)``, this will return the same instance, :class:`sympy.core.numbers.Zero`. All singleton instances are attributes of the ``S`` object, so ``Integer(0)`` can also be accessed as ``S.Zero``. Singletonization offers two advantages: it saves memory, and it allows fast comparison. It saves memory because no matter how many times the singletonized objects appear in expressions in memory, they all point to the same single instance in memory. The fast comparison comes from the fact that you can use ``is`` to compare exact instances in Python (usually, you need to use ``==`` to compare things). ``is`` compares objects by memory address, and is very fast. For instance >>> from sympy import S, Integer >>> a = Integer(0) >>> a is S.Zero True For the most part, the fact that certain objects are singletonized is an implementation detail that users shouldn't need to worry about. In SymPy library code, ``is`` comparison is often used for performance purposes The primary advantage of ``S`` for end users is the convenient access to certain instances that are otherwise difficult to type, like ``S.Half`` (instead of ``Rational(1, 2)``). When using ``is`` comparison, make sure the argument is sympified. 
For instance, >>> 0 is S.Zero False This problem is not an issue when using ``==``, which is recommended for most use-cases: >>> 0 == S.Zero True The second thing ``S`` is is a shortcut for :func:`sympy.core.sympify.sympify`. :func:`sympy.core.sympify.sympify` is the function that converts Python objects such as ``int(1)`` into SymPy objects such as ``Integer(1)``. It also converts the string form of an expression into a SymPy expression, like ``sympify("x**2")`` -> ``Symbol("x")**2``. ``S(1)`` is the same thing as ``sympify(1)`` (basically, ``S.__call__`` has been defined to call ``sympify``). This is for convenience, since ``S`` is a single letter. It's mostly useful for defining rational numbers. Consider an expression like ``x + 1/2``. If you enter this directly in Python, it will evaluate the ``1/2`` and give ``0.5`` (or just ``0`` in Python 2, because of integer division), because both arguments are ints (see also :ref:`tutorial-gotchas-final-notes`). However, in SymPy, you usually want the quotient of two integers to give an exact rational number. The way Python's evaluation works, at least one side of an operator needs to be a SymPy object for the SymPy evaluation to take over. You could write this as ``x + Rational(1, 2)``, but this is a lot more typing. A shorter version is ``x + S(1)/2``. Since ``S(1)`` returns ``Integer(1)``, the division will return a ``Rational`` type, since it will call ``Integer.__div__``, which knows how to return a ``Rational``. """ __slots__ = [] # Also allow things like S(5) __call__ = staticmethod(sympify) SymPy registry %s" % ( name, self)) class_to_install = self._classes_to_install[name] value_to_install = class_to_install() self.__setattr__(name, value_to_install) del self._classes_to_install[name] return value_to_install def __repr__(self): return "S"S = SingletonRegistry() class Singleton(ManagedProperties): """ Metaclass for singleton classes. 
A singleton class has only one instance which is returned every time the class is instantiated. Additionally, this instance can be accessed through the global registry object S as S.<class_name>. Examples ======== >>> from sympy import S, Basic >>> from sympy.core.singleton import Singleton >>> from sympy.core.compatibility import with_metaclass >>> class MySingleton(with_metaclass(Singleton, Basic)): ... pass >>> Basic() is Basic() False >>> MySingleton() is MySingleton() True >>> S.MySingleton is MySingleton() True Notes ===== Instance creation is delayed until the first time the value is accessed. (SymPy versions before 1.0 would create the instance during class creation time, which would be prone to import cycles.)(Singleton, cls).__new__(cls, *args, **kwargs) S.register(result) return result def __call__(self, *args, **kwargs): # Called when application code says SomeClass(), where SomeClass is a # class of which Singleton is the metaclas. # __call__ is invoked first, before __new__() and __init__(). if self not in Singleton._instances: Singleton._instances[self] = \ super(Singleton, self).__call__(*args, **kwargs) # Invokes the standard constructor of SomeClass. return Singleton._instances[self] # Inject pickling support. def __getnewargs__(self): return () self.__getnewargs__ = __getnewargs__ | http://docs.sympy.org/dev/_modules/sympy/core/singleton.html | CC-MAIN-2017-26 | refinedweb | 772 | 51.34 |
Create a new pager object.
#include <zircon/syscalls.h>

zx_status_t zx_pager_create(uint32_t options, zx_handle_t* out);
zx_pager_create() creates a new pager object.
When a pager object is destroyed, any accesses to its vmos that would have required communicating with the pager will fail as if
zx_pager_detach_vmo() had been called. Furthermore, the kernel will make an effort to ensure that the faults happen as quickly as possible (e.g. by evicting present pages), but the precise behavior is implementation dependent.
None.
zx_pager_create() returns ZX_OK on success, or one of the following error codes on failure.
ZX_ERR_INVALID_ARGS out is an invalid pointer or NULL, or options is any value other than 0.
ZX_ERR_NO_MEMORY Failure due to lack of memory.
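An illustrative call sequence (a sketch, not from the reference page; it assumes a Fuchsia build environment and uses the companion calls listed below):

```c
#include <zircon/syscalls.h>

zx_handle_t pager;
zx_status_t status = zx_pager_create(0u, &pager);  // options must be 0
if (status != ZX_OK) {
    // ZX_ERR_INVALID_ARGS or ZX_ERR_NO_MEMORY; bail out or retry
}
// ... create pager-backed VMOs with zx_pager_create_vmo(),
// service faults, and supply pages with zx_pager_supply_pages() ...
zx_handle_close(pager);  // destroying the pager detaches its vmos
```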
zx_pager_create_vmo()
zx_pager_detach_vmo()
zx_pager_supply_pages() | https://fuchsia.googlesource.com/fuchsia/+/419b51fe8a82d81b63b0e67951ec6e224c2194f7/zircon/docs/syscalls/pager_create.md | CC-MAIN-2020-24 | refinedweb | 119 | 50.02 |
The alternative syntax I had been suggesting was to use a clause introduced by a new keyword: where. (I had first discussed this idea more than 10 years ago.)
For example, instead of the approved type hinting syntax
def twice(i: int, next: Function[[int], int]) -> int:
    return next(next(i))
I was suggesting to use something like the following:
def twice(i, next):
    where:
        i: int
        next: Function[[int], int]
        returns: int
    return next(next(i))
Similarly, instead of using a comment to introduce type hinting for single variables
x = 3 # type: int
one would have used the following:
x = 3
where:
    x: int
However, this alternative was never considered as a serious contender for type hinting since it was not compatible with Python 2.7, unlike the accepted syntax from PEP 484.
Still, as an experiment, I decided to see if I could use the approach mentioned in my last few blog posts and use an nonstandard where clause. And, of course, it is possible to do so, and it is surprisingly easy. :-)
The main details are to be found in three previous blog posts. However, to summarize, suppose you wish to use a where clause like the one described above which would not have an effect on the actual execution of the Python program (same as for the current type hinting described in PEP 484). All you need to do is
1. include the line
from __nonstandard__ import where_clause
in your program.
2a) if the program to be run is the main program, instead of running it via
python my_program.py
you would do
python import_nonstandard.py my_program
instead, where import_nonstandard.py is in the directory "version 5" of this repository, and the relevant where_clause.py is in "version 6".
2b) If instead of having it run as the main program, you would like to import it, then you would include an extra import statement:
import import_nonstandard
import my_program
# rest of the code
and run this program in the usual way. By importing "import_nonstandard" first, a new import hook is created which pre-processes any modules to be imported afterwards - in this case, to remove the where clause or, as I have described in previous posts, to define new keywords or even an entirely different syntax (French Python).
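The pre-processing step can be illustrated with a much simpler sketch than the actual import hook in the author's repository (a hypothetical helper, not his code; it only strips where: blocks from source text, using indentation to find each block's extent):

```python
def strip_where(source):
    """Remove every ``where:`` block -- the header line plus all lines
    indented more deeply under it -- from a piece of Python source."""
    out = []
    skip_indent = None          # indentation of the active where: header
    for line in source.split("\n"):
        stripped = line.strip()
        indent = len(line) - len(line.lstrip())
        if skip_indent is not None:
            if stripped and indent <= skip_indent:
                skip_indent = None      # the block has ended
            else:
                continue                # still inside the where: block
        if stripped == "where:":
            skip_indent = indent
            continue
        out.append(line)
    return "\n".join(out)

# The annotated twice() from above, with the hints stripped back out:
src = """\
def twice(i, next):
    where:
        i: int
        next: Function[[int], int]
        returns: int
    return next(next(i))
"""
cleaned = strip_where(src)
namespace = {}
exec(cleaned, namespace)
print(namespace["twice"](3, lambda x: x + 1))  # -> 5
```

A real import hook would run a transformation like this on the module source just before compilation, which is why the where clause can vanish without affecting execution.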
Note: None of the recent experiments are to be taken as serious proposals to modify Python.
Instead, they are a demonstration of what can be done (often in a surprisingly easy way)
with Python as it exists today. | https://aroberge.blogspot.com/2015/12/revisiting-old-friend-yet-again.html | CC-MAIN-2018-13 | refinedweb | 446 | 61.06 |
Fragments.
If you are interested in creating custom templates, also see:
Custom Projects In Android Studio
When you first start working with Android it is difficult to see why you should bother with the strange new Fragment object. It really doesn't seem to do anything extra over and above a simple Activity and it adds another layer of complexity.
In addition finding out exactly what a Fragment does for you is very difficult.
Perhaps they are best ignored?
It is true that for many simple apps you really don't need to bother with the extra complexity of using a Fragment. If you are serious about developing an app over a long period of time then it probably is worth starting off by using a Fragment - even if it isn't actually necessary.
This is certainly the attitude that Google would like you to take and in Android Studio even the simplest apps start off using a Fragment.
This is, of course the reason the Simple Blank Activity template was introduced earlier in this book. However now it is time to find out what Fragments are all about and finally understand the standard template among other things.
This is a fairly deep look at using Fragments and it might well tell you more than you want to know on a first reading but if you are going to make use of Fragments properly you do need to know this.
In this chapter we focus on Fragment basics - creating a Fragment, displaying a Fragment and Fragment event handling. In the next chapter we look at Fragments and XML, the designer, the standard template and using Fragments in devices that run Android version earlier than Honeycomb. The third chapter on Fragments looks at the problem of managing Fragments to create flexible UIs.
A Fragment is like a very cut down Activity. It has a set of events that signal various stages of its lifecycle like an Activity. It also has its own associated View object which defines its UI - but it has no way of displaying that View object.
To display its View object a Fragment has to pass it on to an Activity. The Activity decides how best to display the Fragment's UI.
The basic idea is that a Fragment is a lot like having a View hierarchy that the Activity can decide to display as it needs. The idea is that if the device that the application is running on has enough space then Fragments can be displayed on the same screen. If the device is too small you can arrange for each Fragment to be displayed on its own.
One idea that it is important to dispel as early as possible. This management of Fragments according to screen size is entirely up to the programmer. The system doesn't support any automatic management of Fragments even though there is a class called FragmentManager.
What you do with a Fragment is up to you and it is still a fairly low level mechanism.
In short what you do with a Fragment is entirely up to you.
The key method in using a Fragment is
onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState)
this returns a single View object, always a ViewGroup, containing the set of View objects that defines the Fragment's UI. The Activity calls this event handler when it is time for the Fragment to provide its UI for display.
The Activity passes a LayoutInflater to help the Fragment create the View hierarchy from an XML file, the container ViewGroup that will hold the Fragment's UI, and finally a Bundle if the Fragment already exists and has been suspended - much like an Activity.
It looks as if the Fragment's only role in life is to create a View and this is true but it has more functionality than just a ViewGroup object. The Fragment is also an object that has a lifecycle and persists even when its View hierarchy isn't on display. This makes it easier to arrange for things like the back button and other state changes to be implemented. To make full use of the Fragment you have to know about its lifecycle but for the moment we can simply focus on the onCreateView event.
The best way to understand all this is to find out how Fragments work in detail.
So start a new Simple Blank Activity project and this time make sure you select Honeycomb Android 3.0 for the minimum required SDK - the reason is that Fragments were introduced with this version. The question of supporting Fragments in earlier versions just complicates things - but see the next chapter.
Creating a Fragment is a matter of deriving your own class from the Fragment class. To do this you can simply right click on the MainActivity class and select New, Java Class from the menu:
Next you have to give the class a suitable name:
This creates a new file in the project to store the class.
If you have programmed in other languages it is worth knowing that Java generally expects you to put each new class in its own file.
Next we need to edit the class to make it inherit from Fragment:
package com.example.app;

import android.app.Fragment;

public class myFragment extends Fragment {
}
If you start typing "extends" Android studio will autocomplete and the same for "Fragment". It will also automatically add the import statement for you.
Now we have to fill in the details of onCreateView.
We could define an XML layout file or use the designer - more of which in the next chapter where we look at using Android Studio's Fragment facilities - but the most direct way and the one that lets you see exactly what is happening is to create the View objects in code.
In practice you would normally use the XML to create the View hierarchy but working with code means there is no irrelevant details to confuse the issue.
If you are not happy with creating View objects in code then see the previous chapter: UI Graphics A Deep Dive
Android Studio will autogenerate a stub for the event handler if you simply start to type onCreateView and select the correct method from the list that appears. It will also automatically add the import statements you need.
To create a simple UI for the Fragment to supply to the Activity we will just create a LinearLayout containing a Button:
@Override
public View onCreateView(
        LayoutInflater inflater,
        ViewGroup container,
        Bundle savedInstanceState) {
    LinearLayout linLayout = new LinearLayout(getActivity());
    Button b = new Button(getActivity());
    b.setText("Hello Button");
    linLayout.addView(b);
    return linLayout;
}
This should be perfectly easy to follow but there are a few subtle points. When you create a View object you have to provide it with a "context" - this is the Activity it belongs to and you usually pass it this to indicate the current object i.e. the Activity instance. In this case we are in an instance of the Fragment object so you can't use this.
However at the time that the onCreateView is called the Fragment will be associated with a Activity i.e. the Activity that is going to display its View. You can use the getActivity method to return the Activity that the Fragment is associated with and this is what you use as the context when you create a View object within a Fragment - that is:
Button b = new Button(getActivity());
Notice also that you seem to have to return a ViewGroup object from onCreateView, even though this isn't documented. Finally you don't add the Fragment's View to the container you have been passed. You are only provided with it so you can find out details of the layout and adjust your View objects accordingly.
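The article stops short of showing the Activity side. One conventional way to put the Fragment on screen is a FragmentTransaction (a hedged sketch, not from the article; android.R.id.content is the Activity's built-in root container, and myFragment is the class defined above):

```java
// Inside the Activity's onCreate(), after any setContentView() call - or,
// as here, using the built-in content container so no layout XML is needed:
FragmentManager fm = getFragmentManager();
if (fm.findFragmentById(android.R.id.content) == null) {
    fm.beginTransaction()
      .add(android.R.id.content, new myFragment())
      .commit();
}
```

The findFragmentById() check stops a second copy of the Fragment being added when the Activity is recreated, for example on rotation.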
At this point we have a Fragment that returns a LinearLayout which contains a Button object with the text "Hello Button". Not a very useful Fragment but the simplest example I can think of. | http://i-programmer.info/programming/android/6882-introducing-android-fragments.html | CC-MAIN-2015-27 | refinedweb | 1,371 | 58.42 |
int main( int argc, char **argv )
{
    char szModuleName[MAX_PATH];

    // Extract the module name from argv[0]
    {
        char *szLastSlash = strrchr( argv[0], '\\' );
        strcpy( szModuleName, szLastSlash + 1 );
        char *szExtension = strchr( szModuleName, '.' );
        *szExtension = '\0';
    }
    ...
}

Maybe you could have passed argv[0] to a HelperFunction in this case, but it's unlikely you'll need to extract the module name again. Note how the two char * variables fall out of scope and thus are unavailable to the rest of the program. This also prevents name collisions.
doSomething();
{ //debug begins
    System.out.println ("foo = " + foo);
    System.out.println ("bar = " + bar);
    for (int i = 0; i < items.length; i++)
        System.out.println ("items [" + i + "] = " + items [i] );
} //debug ends
doSomethingElse();

this allows debug code to introduce variables that don't mess up the code you're trying to debug, and it makes it easy to figure out which code to remove after you fix the problem. -- anonymous

Wow, this is way cool! -- EdPoor

Cooler still can be:
if (debug) {
    //Do debug stuff when static or instance variable is set
}

or

if (debug()) {
    //Do debug stuff when protected member function is overridden
    //in anonymous class
}

and even

if (false && debug || debug()) {
    //Do debug stuff when protected member function is overridden
    //or static variable is set, but not for this compile
}

Even when (you think) the problem is fixed you may not need to remove most such blocks as they should be optimised out by the compiler - DavidWright

I tend to use braces this way also. I would also suggest marking the beginning and end of your debug with #if defined() / #endif markers. I also usually add a remove date comment. This way I can leave the debug statements in for a couple of iterations without affecting production. -- WayneMack

I like to start my blocks with something that explains the reason. For example:
#define DEBUG_BLOCK if (_NO_DEBUG) {} else

do_something1();
DEBUG_BLOCK {
    whatever();
}
do_something2();

Note that the block is the "else" clause, to avoid subtle bugs. An alternative is to create a macro whose arg is the block; but I find many editors aren't too good at highlighting that style. Wrapping debug blocks in #ifdef ... #endif is more visually distracting (and doesn't support the % key in vi). The DEBUG_BLOCK macro can be made more powerful by adding parameters. -- DaveWhipp
int main( int argc, char **argv )
{
    char *szModuleName = extractModuleName(argv[0]);
    ...
}

static char *extractModuleName(char *argv0)
{
    static char szModuleName[MAXPATHNAMELEN];
    // static? How lazy. Someone who understands std:string should fix it.
    // What if this is C code, not C++ code?
    strcpy(szModuleName, rindex(argv0, '/') + 1);
    strchr(szModuleName, '.')[0] = '\0';
    return szModuleName;
}

A better argument than the reuse argument is that szModuleName will certainly have additional methods related to it. By extracting the methods related to this data element into a common C module or a common C++ class, it makes it easier to modify the data element and methods later on. For example, since this is a call into Main(), there is no reason to create additional storage for the module name. One can merely hold on to the argument pointer and modify the [assumed] Compare() method later to find the last slash and stop at the last period. It is often best to defer operations until needed, but this is difficult to do if the methods operating on a data element are spread throughout the code. -- WayneMack
        }//while not eof
        filep.close();
    }//load(string filename)
};//MyDumbClass

-- DickBotting

Some reasons not to do ClosingBraceComments?:
int foo(void) {
    if (!boring()) {
        while (not_done()) {
            do_something();
        } // while not done
    } // if not boring
} // end of int foo(void)

I still think it's better to learn to do without, though... -- GarethMcCaughan

Why on earth would anyone use commented braces? How could you have that much fear of creating a method? After all, isn't a method just a comment with braces, and compiler enforcement? Why do:
//do stuff with foo
{
    stuff....
}

rather than:

doStuffWithFoo()
{
    stuff....
}

am I missing something?

Comments on closing braces can be useful when the brace encloses a region of code which is too big to see all at once. Although functions shouldn't get that big, it can happen with classes or namespaces, or with #endif. Some people like simple style rules. If some braces need comments and some don't, rather than let the programmer decide on a case by case basis, they mandate that all braces have comments, because it's a simpler rule. | http://c2.com/cgi/wiki?BracesAreGood | CC-MAIN-2014-15 | refinedweb | 734 | 64.3
Alternative, slightly easier to comprehend:
import Data.List

diag b = [b !! n !! n | n <- [0 .. length b - 1], n < (length $ transpose b)]

getAllDiags f g = map f [drop n . take (length g) $ g | n <- [1 .. (length g - 1)]]

problem_11 num = maximumBy (\(x, _) (y, _) -> compare x y) $
                 zip (map product allfours) allfours
  where
    rows = num
    cols = transpose rows
    allposs = rows ++ cols ++ diagLs ++ diagRs
    allfours = [x | xss <- allposs, xs <- inits xss, x <- tails xs, length x == 4]

split :: Char -> String -> [String]
split = unfoldr . split'

split' :: Char -> String -> Maybe (String, String)
split' c l
    | null l = Nothing
    | otherwise = Just (h, drop 1 t)
    where (h, t) = span (/= c) l

sToInt x = map ((+ 0) . read) $ split ' ' x

main = do
    a <- readFile "p11.log"
    let b = map sToInt $ lines a
    print $ problem_11 b
Second alternative:

    = getContents >>= print . maximum . prods . input
sToInt = (+ 0) . read

main = do
    a <- readFile "p13.log"
    let b = map sToInt $ lines a
    let c = take 10 $ show $ sum b
    print c
This is another solution. I think it is much cleaner than the one above. | http://www.haskell.org/haskellwiki/index.php?title=Euler_problems/11_to_20&oldid=18087 | CC-MAIN-2014-41 | refinedweb | 176 | 85.08
2008ING
Using Semantic Web Technology to Design Agent-to-Agent
Argumentation Mechanism in an E-Marketplace
Abstract
In existing e-marketplaces, buyers can use search engines to find products that exactly
match their demands, but some products those are potentially interesting to them cannot be found
out. This research aims to design a multi-agent e-marketplace in which buyers and sellers can
delegate their agents to argue over product attributes via an agent-to-agent argumentation
mechanism. To make the idea possible, this research adopts the Semantic Web technology to
express agents’ ontologies and uses an abstract argumentation framework with information
gathering approach to support defeasible reasoning. A laboratory experiment is conducted to
assess the performance of the argumentation mechanism. The experimental results show that the
proposed system can help buyers to search both exactly and potentially interesting products, and
e-marketplaces are supposed to help buyers to search potentially interesting products. The
proposed architecture and approaches can inspire existing and initiative e-marketplaces to design
their product searching and recommendation mechanisms.
Keywords: multi-agent e-marketplace, argumentation mechanism, Semantic Web, ontology,
abstract argumentation framework, defeasible reasoning.
1. Introduction
1.1. Research background and motivation
Persuasive presentation and negotiation are fundamental tasks in a selling process
(Oberhaus, Ratliffe, and Stauble, 1993; Anderson, 1995). A salesperson introduces potentially
interesting products to the prospect and promotes these products. After that, the salesperson deals
with prospect resistance and objections, and arranges the terms of an agreement with the
prospect in the negotiation stage. For online selling, many negotiation agents have been
researched (Matwin, Szapiro, and Haigh, 1991; Oliver, 1997; Wasfy and Hosni, 1998; Zeng and
Sycara, 1998; Lin and Chang, 2001; Dumas et al., 2002; Huang and Lin, 2007). However, how to
use agent technologies to facilitate persuasion for online selling is not well addressed yet. Huang
and Lin (2007) proposed a sales-agent, called Isa, to handle online persuasion and negotiation
dialogues with human buyers. Isa can stand for a seller to persuade a buyer into increasing
his/her product evaluation but it only focuses on agent-to-human argumentation. How to design
an agent-to-agent argumentation mechanism is needed to be researched for reducing both sellers
and buyers’ load and facilitating online selling.
Two obstacles must be broken through for designing an agent-to-agent argumentation
mechanism. The first obstacle is how to enable agents in an e-marketplace to understand other
agents’ arguments. Semantic Web technologies help Web information to be
machine-understandable (Berners-Lee, Hendler, and Lassila, 2001). These technologies enable
agents to understand arguments and transcend this obstacle. The second obstacle is how to prove
whose arguments are true or false. In a cell phone e-marketplace, for example, a buyer delegates
a buyer agent to search good-feature cell phones. The buyer and the sellers in this e-marketplace
probably have different definitions of the concept “good-feature cell phone.” Therefore, we need
a well-developed argumentation framework to describe relations among arguments and prove
their status. Argumentation in a multi-agent context is a process by which one agent attempts to
convince another agent of the truth (or falsity) of a state of affairs. This process involves agents
putting forward arguments for and against propositions, together with justifications for the
acceptability of these arguments (Wooldridge, 2002). In an argumentation process, a truth can be
defeated when new information appears. Through argumentation, following aforementioned
example, a seller agent can persuade a buyer agent to believe its cell phone is good at features
even their masters (the seller and buyer) have different definitions of a good-feature cell phone.
Dung (1995) developed an argumentation framework for defeasible reasoning. The advantage of
this framework is that it pays no special attention on the internal structure of the arguments and
therefore this framework can be applied to every domain. The suitability of this framework
motivates this research to adjust it to design an agent-to-agent argumentation mechanism in an
e-marketplace.
1.2. Research objectives
In existing e-marketplaces, buyers can set conditions and find out products exactly
matching these conditions using a search engine. The products those are potentially interesting to
the buyer but do not exactly match the conditions cannot be found out. Many chances to deal are
missed. This research aims to design a multi-agent e-marketplace, in which buyers and sellers
can delegate their agents to argue over product attributes via an agent-to-agent argumentation
mechanism. This research adopts Semantic Web technologies and refers to Dung’s abstract
argumentation framework to design this mechanism. This mechanism can help buyers to find out
not only exactly interesting but also potentially interesting products. Moreover, it gives sellers a
chance to persuade the buyer agents as well as buyers into considering or even buying their
products.
2. Related Works
2.1. Semantic Web
The Semantic Web inherits some concepts of the WWW and adds "meaning" to the Web, enabling
machines to comprehend semantic documents and data (Berners-Lee, 2001). Using software
agents to search for information and to handle time-consuming or complex tasks is more and more
popular; however, agents cannot understand all data on the Web the way people do. To make
agents understand what Web documents mean, in February 2004 the World
Wide Web Consortium (W3C) released the Resource Description Framework (RDF) and the
Wide Web Consortium (W3C) released the Resource Description Framework (RDF) and the
Web Ontology Language (OWL) as W3C Recommendations for the Semantic Web structure.
RDF is used to express information and to exchange knowledge on the Web. OWL is used to
publish and share ontologies, which support advanced Web search, software agents and
knowledge management ().
2.1.1. OWL
As mentioned, the most recent development in standard ontology languages is the Web
Ontology Language (OWL) from W3C. OWL evolved from DAML+OIL, a combination of OIL
and DAML. The Ontology Inference Layer (OIL) was the first ontology language to integrate
features from frame-based systems and description logics (DLs); it builds on RDF and XML to
express semantics. The DARPA Agent Markup Language (DAML) was developed to provide a
language and tools that facilitate the concept of the Semantic Web. Joining OIL and DAML for
the same purpose produced a powerful language for defining and instantiating Web ontologies.
W3C slightly revised DAML+OIL to form OWL, which builds on RDF and RDF Schema and
adds more vocabulary for describing properties and classes (Herman and Hendler, 2006).
Because OWL is grounded in Description Logic, concepts can be both defined and described.
Furthermore, OWL allows a reasoner to check consistency (whether a class can have any
instances) and subsumption (whether one class is a subclass of another).
Many scholars have defined what an ontology is; in brief, ontologies are used to capture
domain knowledge. An ontology describes the concepts (classes) in a domain and the
relationships (properties) between those concepts. The properties of each concept describe its
various characteristics and attributes (slots or roles), along with restrictions (facets or role
restrictions) on those slots. A knowledge base is composed of an ontology together with a set of
individual instances of its classes (Natalya and Deborah, 2001).
OWL ontologies can be categorized into three species according to their expressiveness:
OWL-Lite, OWL-DL, and OWL-Full. Readers can refer to the OWL Web Ontology Language
Overview (Herman and Hendler, 2006) for a more detailed synopsis of these species. This
research uses OWL-DL to express agents’ ontologies because it supports automatic reasoning
based on DLs. DLs are a family of logic formalisms for knowledge representation (Baader et al.,
2002). The DL syntax and the corresponding OWL elements are listed in Horrocks,
Patel-Schneider, and van Harmelen (2003). Ontologies using DLs can be easily described in
OWL-DL for the Semantic Web. In addition to OWL, another language, SWRL, is needed to
specify rules in ontologies.
2.1.2. SWRL
SWRL (Semantic Web Rule Language) is a language for describing rules for the Semantic
Web. Its syntax combines OWL and RuleML. RuleML is an XML-based rule language that
presents rules in a standardized, Web-oriented form (Grosof, 2004). SWRL also adopts OWL
syntax because RuleML standardizes only the structure of rules, not their content, and does not
stipulate how rules are used; OWL supplies the vocabulary and attributes used in the rules. A
common inference engine, such as the Jess rule engine, can reason over domain knowledge
described in SWRL.
In common with many other rule languages, SWRL rules are written as
antecedent-consequent pairs. In SWRL terminology, the antecedent corresponds to the rule body
and the consequent corresponds to the rule head. The head and body each consist of a
conjunction of one or more atoms. SWRL rules reason about OWL individuals, primarily in
terms of OWL classes and properties; they can also refer explicitly to OWL individuals and
support the common same-as and different-from concepts, where the “sameAs” atom expresses
that two OWL individuals are identical and the “differentFrom” atom expresses that they are
different. Moreover, SWRL has an atom that determines whether an individual, property, or
variable is of a particular type; the type specified must be an XML Schema data type. SWRL
also supports a range of built-in predicates, which greatly expand its expressive power. SWRL
built-ins are predicates that accept several arguments; they are described in detail in the SWRL
Built-in Specification. The simplest built-ins are comparison operations. All built-ins in SWRL
must be preceded by the namespace qualifier “swrlb:”.
2.2. Argumentation Theory
2.2.1. Toulmin Argument Structure
The Toulmin argument structure gives us a tool for both evaluating and making arguments.
The main parts of Toulmin’s model are the data, claim, backing, warrant, rebuttal, and qualifier.
The data are facts that describe the present situation. A claim is supported by data and by a
warrant, which is a general rule or principle supporting the step from data to a claim. The
backing is a justification for the warrant, and the rebuttal is a condition under which the warrant
does not hold. A qualifier expresses the applicability of the warrant (Toulmin, 1958). Figure 1
illustrates an argument based on Toulmin’s model.
Figure 1: Toulmin Argument Structure
The Toulmin argument structure is useful for organizing arguments and knowledge, but it
only loosely specifies how arguments relate to each other and provides no guidance on how to
evaluate arguments or prove their statuses.
(Figure 1 shows an example argument about rain: the statements “Ground was wet”, “The sky is covered with dark and heavy clouds”, and “Weather forecast said it will be rainy and cloudy” support the qualified claim “Certainly, it must rain”, subject to the rebuttal “Someone irrigates the flowers”.)
2.2.2. Abstract Argumentation Framework
An abstract approach to non-monotonic logic was developed in several articles by
Bondarenko, Dung, Toni, and Kowalski. The major innovation of the approach is that it provides
a framework and vocabulary for investigating the general features of argumentation systems, and
also of non-monotonic logics that are not argument-based. This section presents Dung’s
formulation (1995) because in Bondarenko et al. (1997) the basic notion is not an argument but a
set of what they call “assumptions”; they treat an argument as a set of assumptions.
Dung’s abstract argumentation framework completely abstracts from both the internal
structure of an argument and the origin of the set of arguments. The argumentation framework
(AF) is denoted as AF = <AR, attacks>, where AR is a set of arguments and attacks is a binary
relation on AR. Here, an argument is an abstract entity whose role is solely determined by its
relations to other arguments. The notation ‘←’ denotes an attack relation between two arguments:
arg1 ← arg2 means that arg1 is attacked by arg2. Dung also defined various notions of so-called
argument extensions, which are intended to capture various types of defeasible consequence.
These notions are declarative, simply declaring sets of arguments as having a certain status. The
basic formal notions are as follows.
An argument a is attacked by a set of arguments B if B contains an attacker of a (not all
members of B need attack a).
An argument a is acceptable with respect to a set of arguments C if every attacker of a is
attacked by a member of C: if a ← b, then b ← c for some c ∈ C. In that case we say c
defends a, and also that C defends a.
A set of arguments S is conflict-free if no argument in S attacks an argument in S.
A conflict-free set S of arguments is admissible if each argument in S is acceptable with
respect to S.
A set of arguments is a preferred extension if it is a ⊆-maximal admissible set.
A conflict-free set of arguments is a stable extension if it attacks every argument outside it.
Dung showed that many existing nonmonotonic logics can be reformulated as instances of
the abstract framework.
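The basic notions above can be sketched directly in code. The following minimal illustration is ours, not the paper’s implementation; it represents the attacks relation as a set of (attacker, attacked) pairs.

```python
# Dung's abstract framework AF = <AR, attacks>, with the basic notions
# defined above: conflict-free, acceptable, and admissible sets.

def attackers(a, attacks):
    """All arguments that attack argument a."""
    return {x for (x, y) in attacks if y == a}

def conflict_free(S, attacks):
    """No argument in S attacks an argument in S."""
    return not any(x in S and y in S for (x, y) in attacks)

def acceptable(a, C, attacks):
    """Every attacker of a is itself attacked by some member of C."""
    return all(attackers(b, attacks) & C for b in attackers(a, attacks))

def admissible(S, attacks):
    """S is conflict-free and defends each of its members."""
    return conflict_free(S, attacks) and all(acceptable(a, S, attacks) for a in S)

# Example: a <- b and b <- c (b attacks a, c attacks b).
AR = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}

print(admissible({"a", "c"}, attacks))  # c defends a against b, so True
print(admissible({"b"}, attacks))       # b's attacker c is unanswered, so False
```

Here {a, c} is admissible (and in fact the preferred extension of this tiny framework), while {b} is not, since nothing in it attacks b’s attacker c.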
2.2.3. Defeasible Argumentation Systems
An argumentation framework also needs a ‘proof theory’ to compute that a particular
argument has a certain status. One approach assigns a priority ordering to arguments, so that an
argument with lower priority cannot defeat a higher-priority argument. Vreeswijk and Prakken
(2000) proposed a dialectical form of an argumentation game between a proponent and an
opponent as a natural form of proof theory. The initial argument is acceptable if its proponent
has a winning strategy, that is, if the proponent can make the opponent run out of moves against
any possible counter-argument. Figure 2 illustrates two argumentation games, where each node
represents a move. A proponent’s moves are denoted as black nodes and an opponent’s moves as
white nodes. The relation P1 ← O1 denotes that P1 is attacked by O1. Prakken (2001) defined
the disputational status of a dispute move as follows: a move M of a dispute D is in if and only if
all moves in D that reply to it are out; otherwise M is out. A move being in means that the
argument of this move is acceptable. Note that a leaf node in a dialogue tree must be in because
it has no attackers. This approach makes it easy to calculate the status of each argument. In game
(a), for instance, P1 is acceptable and included in the admissible set {P1, P3, P4}. In game (b),
however, P1 is unacceptable.
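The in/out status computation can be sketched as a short recursion over a dispute tree. The tree shape below is our assumption for illustration; it is consistent with the statuses reported for game (a) (P1 in, admissible set {P1, P3, P4}), but the figure itself is not reproduced here.

```python
# Prakken's status computation: a move is "in" iff every reply to it is "out";
# leaves are therefore vacuously "in".

def status(move, replies):
    """Return 'in' if all replying moves are 'out' (true for leaves)."""
    return "in" if all(status(r, replies) == "out"
                       for r in replies.get(move, [])) else "out"

replies = {            # move -> counter-moves made against it (assumed tree)
    "P1": ["O1"],
    "O1": ["P2", "P3"],
    "P2": ["O2"],
    "P3": ["O3"],
    "O3": ["P4"],
}

print({m: status(m, replies) for m in ["P1", "P2", "P3", "P4", "O1", "O2", "O3"]})
```

With this tree, the leaf P4 is in, so O3 is out and P3 is in; the leaf O2 is in, so P2 is out; since one reply to O1 (namely P3) is in, O1 is out, and the initial move P1 is in.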
Figure 2: Argument Statuses in Dialectical Games
3. System Architecture
This research aims to design a multi-agent e-marketplace equipped with an agent-to-agent
argumentation mechanism. In this e-marketplace, buyers can delegate their buyer agents to
search for products matching their needs, and sellers can delegate their seller agents to persuade
buyer agents that their products can match the buyers’ needs. A buyer agent communicates
with each seller agent and initiates an argumentation dialogue. This research designs the
argumentation mechanism with reference to Dung’s argumentation framework and Vreeswijk
and Prakken’s dialectical game approach, but makes some adjustments. Two assumptions are
proposed to make an argumentation dialogue simpler and more suitable for e-marketplaces.
Assumption 1: Some beliefs are not changeable in a buyer’s mind. If a seller agent’s claim
conflicts with the buyer agent’s unchangeable beliefs, the dialogue need not be continued, and
the buyer agent cannot be persuaded to accept the seller agent’s proposal. This assumption makes
sense because it is unnecessary to persuade a buyer to buy a product that s/he definitely dislikes.
Buyers’ arguments should have higher priority than sellers’ because purchase decisions are made
by buyers.
Assumption 2: Agents’ ontologies are incomplete. It is difficult to ask buyers or sellers to
tell their agents all of their beliefs. Additionally, buyers may not be sure of their own needs.
Therefore, an argumentation dialogue in an e-marketplace should be an information-gathering
process instead of a dispute. In this system, when a seller agent’s argument conflicts with a buyer
agent’s changeable beliefs, or the buyer agent has no idea whether a seller agent’s argument is
true or false, the buyer agent queries the seller agent about the reasons for the argument rather
than starting a sequence of attack actions. If the seller’s reasons do not conflict with the buyer
agent’s unchangeable beliefs, the buyer agent can be persuaded to accept the seller agent’s
proposal.
Figure 3: Architecture of Buyer and Seller Agents
Figure 3 illustrates the architecture of the buyer and seller agents. Each agent has its own
ontology to represent its mental state and shares the e-marketplace ontology. An agent’s
mental-state ontology describes concepts, relations, and rules about products defined by its
master. The e-marketplace ontology defines the common vocabulary used in this e-marketplace
and contains undefeatable rules that are supported by most buyers and sellers. Ontologies are
described in OWL and SWRL formats. Once a dialogue starts, an agent’s argumentation
mechanism is responsible for choosing arguments from its ontology to utter, and these arguments
are formed in the Agent Communication Language (ACL) based on the communicative acts
specified by the Foundation for Intelligent Physical Agents (FIPA). The reasoner and rule engine
help the agent check the consistency between the opposite agent’s arguments and its own
mental-state ontology.
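As a rough illustration of the messages exchanged in this architecture: the performative names below follow FIPA communicative acts, but the field layout is our assumption; the paper’s implementation uses JADE’s Java ACL classes, and this is only a Python-flavoured sketch.

```python
from dataclasses import dataclass

# FIPA communicative acts used in the dialogues described in this section.
PERFORMATIVES = {"CFP", "PROPOSE", "REQUEST", "INFORM",
                 "QUERY_REF", "ACCEPT_PROPOSAL", "REJECT_PROPOSAL"}

@dataclass
class ACLMessage:
    performative: str   # one of PERFORMATIVES
    sender: str         # e.g. "buyer-agent-B"
    receiver: str       # e.g. "seller-agent-S1"
    content: str        # a claim, premise, or query over the shared ontology

    def __post_init__(self):
        if self.performative not in PERFORMATIVES:
            raise ValueError(f"unknown performative: {self.performative}")

msg = ACLMessage("QUERY_REF", "buyer-agent-B", "seller-agent-S1",
                 "premise-of: GoodBrandCellphone(C001)")
print(msg.performative)
```

Restricting messages to a fixed performative set is what lets the receiving agent dispatch on the speech act (propose, inform, query, accept, reject) independently of the ontology content carried in the message body.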
A product searching process is executed by a buyer agent according to the following steps:
1. Declaring demand: A buyer defines the products s/he needs using an interface free of
technical jargon; these definitions are automatically transformed into SWRL and added into
the buyer agent’s ontology. After that, the buyer can send his/her agent to find exactly and
potentially interesting products by communicating with seller agents.
2. Finding the products that exactly match the buyer’s demand: A buyer agent finds the products
that exactly match the buyer’s demand by the following procedure –
(1) Perform monotonic reasoning on the e-marketplace and the buyer agent’s ontologies to
reason out which products are exactly interesting (exactly compliant with the buyer’s
definitions of a good product).
(2) Add the products that exactly match the buyer’s demand into the Option List of Exactly
Interesting Products.
(3) Add the seller agents whose products cannot be proved to be exactly matching into the
Talk List.
3. Finding potentially matching products: For each seller agent in the Talk List –
(1) Call for a proposal.
(2) Receive the seller agent’s proposal.
(3) Request the claim and its supporting premises about the proposal.
(4) Receive the seller agent’s claim and its premises.
(5) Agree with this claim and add the proposal into the Option List of Potentially Interesting
Products if all premises can be proved true; otherwise refute this claim and reject the
proposal.
4. Updating the buyer agent’s ontology: A dialogue history for each potentially interesting
product is shown in the option list. The buyer can check the seller agent’s arguments that the
buyer agent could not disagree with and modify his/her beliefs and the buyer agent’s ontology.
Figure 4. The Algorithm for Proving a Seller Agent’s Claim.
prove(c)
  Query the reason that supports the claim c.
  Receive the reason R that supports the claim c.
  For each premise p in R
    Believe p is TRUE if p is consistent with the e-marketplace and buyer agent’s ontologies.
    Believe p is FALSE if p conflicts with the e-marketplace or buyer agent’s ontology.
    Otherwise p is TRUE if prove(p) returns TRUE, or p is FALSE if prove(p) returns FALSE.
  Return TRUE if all p in R are TRUE, or return FALSE if any p in R is FALSE.
The algorithm by which a buyer agent proves a seller agent’s claim c is stated in Figure 4.
The buyer agent first queries the reason that supports the claim (a claim is also a premise
supporting a former claim). The buyer agent checks each premise after receiving the argument
from the seller agent. If a premise cannot be proved true or false according to the buyer agent’s
ontology, additional reasons are queried in turn. Finally, the claim is proved true if no premise is
false.
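The prove(c) algorithm of Figure 4 can be sketched as follows. Here `query_reason` and `check` are hypothetical stand-ins for, respectively, the ACL round-trip that asks the seller agent for the premises supporting a claim and the reasoner call against the e-marketplace and buyer-agent ontologies; this is our illustration, not the paper’s Java code.

```python
def prove(claim, query_reason, check):
    """Return True iff every premise supporting `claim` can be proved true.

    check(p) returns True (consistent), False (conflicting), or None (unknown);
    unknown premises are proved recursively from their own supporting reasons.
    """
    premises = query_reason(claim)      # ask the seller agent for reasons
    for p in premises:
        verdict = check(p)              # consult the ontologies first
        if verdict is None:             # unknown: recurse on p's own reasons
            verdict = prove(p, query_reason, check)
        if not verdict:
            return False                # one false premise refutes the claim
    return True                         # all premises proved true

# Tiny assumed example: one premise is known, the other must be recursed into.
reasons = {"GoodCellphone(C001)": ["GoodPriceCellphone(C001)",
                                   "GoodBrandCellphone(C001)"],
           "GoodBrandCellphone(C001)": ["hasBrand(C001, PBrand)"]}
known = {"GoodPriceCellphone(C001)": True, "hasBrand(C001, PBrand)": True}

print(prove("GoodCellphone(C001)",
            lambda c: reasons.get(c, []),
            lambda p: known.get(p)))    # True
```

The recursion mirrors the dialogue: each unknown premise triggers a further QUERY-REF to the seller agent until every branch bottoms out in a belief the buyer agent can check directly.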
A seller agent tries to persuade a buyer agent to recommend the product to the buyer using
the following procedure:
1. Declaring supply: A seller defines his/her product using an interface free of technical jargon;
these definitions are automatically transformed into SWRL and added into the seller agent’s
ontology.
2. Persuading buyer agents: The seller agent persuades a buyer agent by the following steps –
(1) Propose a proposal when receiving a buyer agent’s call-for-proposal message.
(2) Inform the claim and supporting premises about the proposal when the buyer agent
requests them.
(3) Inform the premises of each queried claim.
(4) Terminate the dialogue when receiving a message that either accepts or rejects the
proposal.
Briefly speaking, this research treats an argumentation process in an e-marketplace as an
information-gathering process in which a buyer agent queries a seller agent about related
information. If the information provided by a seller agent does not conflict with the buyer’s
unchangeable beliefs, the seller agent can persuade the buyer agent; otherwise the buyer agent
cannot be persuaded. This argumentation process does not depend on the internal structure of
arguments, so the abstract character of the framework is retained.
3.1. Demonstration of Agent-to-Agent Dialogues
This research adopts a cell phone trading marketplace for demonstration. A buyer defines
the conditions of a good or a bad cell phone, and which conditions are non-negotiable, according
to his/her beliefs via a template (see Figure 5). These definitions are automatically transformed
into rules and added into the buyer agent’s ontology. A seller uses a similar template to define
his/her own rules and to input product information.
Figure 5. The Belief Acquisition Template.
(The template lets a buyer pick, for each attribute, options such as brand MBrand/SBrand/NBrand; battery time >300 hr, >250 hr, or >200 hr; price <NT$ 20000, <NT$ 10000, or <NT$ 5000; feature Bartype, Slider, or Flip; presented date after 2007/1, 2007/6, or 2008/1; and functions such as Media player, GPS, and Camera, and to mark each choice as defining a good or a bad cell phone for inference.)
The following scenario, with some cases of dialogue, demonstrates how an argumentation
proceeds using the argumentation mechanism.
Ariel wants to buy a cell phone, and she thinks feature and price are important criteria.
She believes that if a cell phone is of the slider type, its feature is good. Ariel’s budget is
under NT$ 5000, so she does not consider cell phones priced higher than NT$ 5000.
Besides, she thinks that a cell phone’s battery time should be at least 250 hours;
otherwise the battery time is not good. As to brand, she views NBrand and MBrand as
good brands.
Ariel’s need can be represented by the following rules in her agent B’s ontology:
B: GoodFeatureCellphone(x) ∧ GoodPriceCellphone(x) → GoodCellphone(x)
hasFeature(x, Slider) → GoodFeatureCellphone(x)
hasPrice(x, ≦5000) → GoodPriceCellphone(x)
hasPrice(x, >5000) → BadPriceCellphone(x)
BadPriceCellphone(x) → BadCellphone(x)
hasBatteryTime(x, ≧250) → GoodBatteryTimeCellphone(x)
hasBatteryTime(x, <250) → BadBatteryTimeCellphone(x)
hasBrand(x, NBrand) → GoodBrandCellphone(x)
hasBrand(x, MBrand) → GoodBrandCellphone(x)
There are three unchangeable beliefs in this ontology: the price of a good-price cell phone
must be lower than or equal to NT$ 5000; a bad-price cell phone cannot be a good cell phone;
and a good-battery-time cell phone must have a battery time of at least 250 hours.
Case 1:
The seller agent S1 sells the cell phone C001 and believes it is a good cell phone. To S1, a
cell phone has good function if it has GPS and an Email tool; good battery time if its battery
time is at least 300 hours; good brand if its brand is PBrand; good price if its price does not
exceed NT$ 4999; good feature if it is of the bar type; and a good presented date if its presented
date is not earlier than 2005/12/10. All these beliefs can be represented as the following rules:
S1: hasBrand(C001, PBrand) → GoodBrandCellphone(C001)
hasPrice(C001, ≦4999) → GoodPriceCellphone(C001)
hasPresentedDate(C001, ≧2005/12/10) → GoodPresentedDateCellphone(C001)
hasFunction(C001, GPS) ∧ hasFunction(C001, Email tool) → GoodFunctionCellphone(C001)
hasBatteryTime(C001, ≧300) → GoodBatteryTimeCellphone(C001)
hasFeature(C001, Bartype) → GoodFeatureCellphone(C001)
Table 1. The Specification of Cell Phone C001.
Cell phone C001
Model j44
Brand PBrand
Battery Time 300 hrs
Presented Date 2005/12/10
Price NT$ 4999
Feature Bartype
Function GPS, Email tool
Argumentation between the buyer agent B and the seller agent S1 includes the following
sequence of arguments:
B: Please recommend a good cell phone for me. [CALL FOR PROPOSAL]
S1: I think C001 is a good cell phone. [PROPOSE]
B: Please tell me why. [REQUEST]
S1: (informs the claim GoodCellphone(C001) together with its supporting premises) [INFORM]
B: What is the premise of GoodBrandCellphone(C001)? [QUERY REF]
S1: hasBrand(C001, PBrand) → GoodBrandCellphone(C001) [INFORM]
B: What is the premise of GoodFunctionCellphone(C001)? [QUERY REF]
S1: hasFunction(C001, GPS) ∧ hasFunction(C001, Email tool) → GoodFunctionCellphone(C001) [INFORM]
B: What is the premise of GoodPresentedDateCellphone(C001)? [QUERY REF]
S1: hasPresentedDate(C001, ≧2005/12/10) → GoodPresentedDateCellphone(C001) [INFORM]
B: What is the premise of GoodFeatureCellphone(C001)? [QUERY REF]
S1: hasFeature(C001, Bartype) → GoodFeatureCellphone(C001) [INFORM]
B: I agree that the cell phone C001 is a good cell phone. [ACCEPT PROPOSAL]
In Case 1, monotonic reasoning cannot infer that the cell phone C001 is a good cell phone
or a bad cell phone, so S1 is taken into the Talk List and an argumentation with S1 is started. In
the dialogue, agent S1 informs agent B that the cell phone C001 has good brand, price, presented
date, function, battery time, and feature, which makes S1 believe it is a good cell phone. The
buyer agent checks whether the premises of GoodCellphone(C001) can be proved true or false
according to its ontology and finds that the premises GoodPriceCellphone(C001) and
GoodBatteryTimeCellphone(C001) can be proved true, but the premises
GoodBrandCellphone(C001), GoodPresentedDateCellphone(C001),
GoodFunctionCellphone(C001), and GoodFeatureCellphone(C001) can be proved neither true
nor false. Therefore agent B further queries the premises of these four claims. Finally, all
premises can be proved true and the buyer agent accepts S1’s proposal. Agent S1 persuades
agent B into believing that the cell phone C001 is a good cell phone, and this cell phone can be
added into the List of Potentially Good Cell Phones.
Case 2:
The seller agent S2 sells the cell phone C002 and believes it is a good cell phone. To S2, a
cell phone has good function if it has GPS and an Email tool; good battery time if its battery
time is at least 150 hours; good brand if its brand is SBrand; good price if its price does not
exceed NT$ 3999; good feature if it is of the flip type; and a good presented date if its presented
date is not earlier than 2005/12/10. All these beliefs can be represented as the following rules:
S2: hasBrand(C002, SBrand) → GoodBrandCellphone(C002)
hasPrice(C002, ≦3999) → GoodPriceCellphone(C002)
hasPresentedDate(C002, ≧2005/12/10) → GoodPresentedDateCellphone(C002)
hasFunction(C002, GPS) ∧ hasFunction(C002, Email tool) → GoodFunctionCellphone(C002)
hasBatteryTime(C002, ≧150) → GoodBatteryTimeCellphone(C002)
hasFeature(C002, Flip) → GoodFeatureCellphone(C002)
Table 2. The Specification of Cell Phone C002.
Cell phone C002
Model Uu8
Brand SBrand
Battery Time 150 hrs
Presented Date 2005/12/10
Price NT$ 3999
Feature Flip
Function GPS, Email tool
Argumentation between the buyer agent B and the seller agent S2 includes the following
sequence of arguments:
B: Please recommend a good cell phone for me. [CALL FOR PROPOSAL]
S2: I think the cell phone C002 is a good cell phone. [PROPOSE]
B: Please tell me why. [REQUEST]
S2: (informs the claim GoodCellphone(C002) together with its supporting premises) [INFORM]
B: What is the premise of GoodBrandCellphone(C002)? [QUERY REF]
S2: hasBrand(C002, SBrand) → GoodBrandCellphone(C002) [INFORM]
B: I disagree that the cell phone C002 is a good battery time cell phone. [REJECT PROPOSAL]
In Case 2, monotonic reasoning cannot infer that the cell phone C002 is a good or bad cell
phone, so agent S2 is taken into the Talk List and an argumentation dialogue starts. Since the
claim that cell phone C002 is a good-price cell phone can be proved true according to agent B’s
ontology, agent B does not query its reason. Eventually, agent S2 claims that C002 is a
good-battery-time cell phone because its battery time exceeds 150 hours. This claim conflicts
with agent B’s unchangeable belief that a good-battery-time cell phone must have a battery time
of at least 250 hours. In this situation, the seller agent’s argument is attacked by the buyer
agent’s argument, and the buyer agent rejects the proposal, finishing this argumentation. The cell
phone C002 cannot be added into the List of Potentially Good Cell Phones; agent S2 cannot
persuade agent B into believing that the cell phone C002 is a good cell phone.
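The rejection logic of Case 2 can be sketched as a simple threshold check. Representing premises as (concept, threshold) pairs is our simplification for illustration, not the SWRL encoding used by the system.

```python
# Buyer B's unchangeable (non-negotiable) belief: a good-battery-time cell
# phone needs a battery time of at least 250 hours.
UNCHANGEABLE = {"GoodBatteryTimeCellphone": 250}

def conflicts(premise, unchangeable=UNCHANGEABLE):
    """True if a seller's threshold premise violates a non-negotiable belief,
    i.e. the seller's rule admits products the buyer has ruled out."""
    concept, claimed_threshold = premise
    if concept not in unchangeable:
        return False                       # no non-negotiable belief to violate
    return claimed_threshold < unchangeable[concept]

# S2's premise: good battery time because battery time exceeds 150 hours.
print(conflicts(("GoodBatteryTimeCellphone", 150)))  # True -> reject proposal
print(conflicts(("GoodBatteryTimeCellphone", 300)))  # False -> acceptable
```

A conflicting premise ends the dialogue immediately with REJECT PROPOSAL, in line with Assumption 1: no sequence of further attacks can overturn a non-negotiable belief.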
4. System Evaluation and Results
4.1. Experimental Procedure
This research implemented the e-marketplace in the Java programming language with the
JADE platform (jade.tilab.com) and the Jess rule engine (herzberg.ca.sandia.gov), based on the
proposed architecture and approach. We conducted a laboratory experiment to evaluate the
agent-to-agent argumentation mechanism. We built 50 seller agents in advance to sell cell
phones, based on well-known cell phone Web sites in Taiwan. Each seller agent sold one cell
phone, and these cell phones had no duplicates. 36 undergraduate students majoring in
Information Management joined this experiment. These subjects had experience searching for
and purchasing cell phones, and they were willing to buy cell phones in the future. This
experiment was held in a computer classroom; every computer was set up with the related
programs and execution environment in advance. In the experiment, an instructor first told the
subjects the experimental purpose, the procedure, and a cover story that let them put themselves
in the scenario of buying a cell phone in a multi-agent e-marketplace. In the next phase, each
subject logged on to the system and defined his/her need via a belief acquisition interface. Then
each subject could delegate his/her buyer agent to communicate with the seller
agents to search for appropriate products that exactly or potentially matched his/her need. After
the buyer agent had communicated with all seller agents in the Talk List, a list of exactly good
cell phones and a list of potentially good cell phones were recommended to the buyer. The
former list included the items exactly matching the buyer’s need, found by monotonic reasoning.
The latter list included the potentially interesting items found through argumentation
(non-monotonic reasoning). The subjects were asked to rate how interested they felt in every
exactly and potentially matching item on a 7-point Likert scale ranging from -3 to 3. In this way,
the experiment can assess whether the argumentation mechanism is able to recommend
potentially interesting items effectively. Besides, the system automatically recorded the time
cost and dialogue history of each argumentation dialogue. At the end of the experiment, the
subjects were asked to fill out a short questionnaire to acquire their feedback, including user
background, their feelings about system use, their comprehension of the argumentation process,
and their satisfaction with the lists of exactly good cell phones and potentially good cell phones.
Each answer was measured on a 7-point Likert scale ranging from -3 to 3.
When a subject used the system, s/he chose the important conditions that a good cell phone
must have using the condition setting panel. After choosing, the condition definition panel
appeared so the subject could define his/her detailed demands. For instance, if s/he thought a
good cell phone must have a good price and a good feature, s/he could define a good price as
less than NT$ 5000 and view the good-price attribute as non-negotiable, that is, s/he would not
consider a cell phone priced higher than NT$ 5000. S/he then checked the box of the
non-negotiable attribute, and the frame of the condition turned red. S/he could also specify that a
good feature means the cell phone is a slider. These two definitions were used to produce the
rules about what a good cell phone is. The conditions the subject did not choose in the first panel
were shown in another condition definition panel, where the subject could decide whether to
define his/her demands for the rest of the conditions and could also set the non-negotiable
attribute for each condition. The list panel presented a tree of the conditions the subject had
defined.
The subject filled out the forms step by step and submitted the information, which was
stored in SWRL format in an OWL file; the Jess rule engine was then started to perform
monotonic inference. Jess can infer which individuals are good cell phones and which are bad
cell phones. The seller agents whose cell phones could be inferred neither good nor bad were
added into the Talk List. Each exactly good cell phone was shown in the List of Exactly Good
Cell Phones, with its product photo, number, brand, model, battery time, presented date, feature,
price, and function in a table. The first column of the table is a score column designed to get the
subject’s feedback about how interested s/he was in the product. After the subject scored each
product, the buyer agent began to communicate with the seller agents in the Talk List, whose
arguments were generated from their ontologies and the algorithm described in Section 3.
Finally, the List of Potentially Good Cell Phones included the cell phones whose seller agents
made successful persuasions. The table in this list has two additional columns: a reason column,
which records the defeated arguments of the buyer agent (the reasons for accepting the proposal),
and a dialogue column, which helps the subject see the complete argumentation history of each
product.
4.2. Results of the Experiment
Figure 6 illustrates the subjects’ profiles.
Figure 6. Subjects’ Profiles.
The average number of potentially good items recommended by each buyer agent is 23, the
average number of dialogues is 24, and the average number of messages in a dialogue is 11.
14 buyer agents found no items that exactly matched the buyer’s need, and 6 buyer agents found
no items that potentially matched the buyer’s need. Additionally, the average time cost of a
dialogue is shorter than 1 second.
We measure a buyer’s interest in a list of recommended items by averaging the scores of the
items in the list. 22 subjects received a nonempty list of exactly good items; their average
interest score for the list is 0.636 with a standard deviation of 0.889. 30 subjects received a
nonempty list of potentially good items; their average interest score for the list is 0.711 with a
standard deviation of 0.414. For the 19 subjects whose lists of exactly and potentially good items
were both nonempty, this research uses a paired-sample t-test to compare their interests in the
two lists. The result, shown in Table 3, indicates no significant difference between interest in the
list of exactly good items and interest in the list of potentially good items. The average interest
in the list of potentially good items is positive and not lower than the average interest in the list
of exactly good items, which means the argumentation mechanism is able to find potentially
interesting items for buyers.
Table 3. Interests in the Lists of Exactly and Potentially Good Items.
                                                   Mean   SD     t-value (p-value)
Interest in the list of exactly good items         0.519  0.913  -1.113 (0.280)
Interest in the list of potentially good items     0.786  0.333
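The paired comparison reported above can be sketched as follows. The scores in the example are invented for illustration, since the paper does not list the raw per-subject data.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(xs, ys):
    """Paired-sample t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the pairwise differences x - y."""
    d = [x - y for x, y in zip(xs, ys)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical paired interest scores (7-point scale, -3 to 3) for illustration.
exact     = [1.0, 0.0, 0.5, 1.5, -0.5, 1.0]
potential = [1.0, 0.5, 1.0, 1.0,  0.5, 1.5]
print(round(paired_t(exact, potential), 3))  # about -1.581 for these scores
```

The sign convention matches the table: a negative t indicates that the second list (potentially good items) was scored higher on average than the first.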
Table 4 reveals the average scores of the questions in the questionnaire. We can see that
subjects had positive attitudes toward the system and agreed that it can help them search for
potentially interesting products.
Table 4. The Average Scores of the Questions.
1. Do you feel the system is easy to use? (average 1.583333, SD 0.953794)
2. Can you understand the system manipulation process? (average 1.805556, SD 0.966651)
3. Do you feel the system can help you to search interesting products? (average 1.527778, SD 0.83287)
4. Are you satisfied with the recommended items in the list of exactly good cell phones? (average 1.083333, SD 1.037492)
5. Are you satisfied with the recommended items in the list of potentially good cell phones? (average 1.083333, SD 0.829156)
6. Can you understand the dialogue contents provided in the list of potentially good cell phones? (average 1.027778, SD 0.86558)
7. Do you feel the dialogue contents can help you to understand why the agent recommends these items to you? (average 0.944444, SD 0.664348)
8. Do you agree that an e-store should help you to search not only the exactly interesting products but also potentially interesting products? (average 1.861111, SD 1.031525)
9. Do you agree that this system can help you to search potentially interesting products? (average 1.666667, SD 0.881917)
Gender
14
22
0
5
10
15
20
25
male female
Number of
subjects
Age
2
14
20
0
5
10
15
20
25
23 22 21
Experience of shopping on
Internet
29
7
0
10
20
30
40
Yes No
Experience of searching
product in an e-store
34
2
0
10
20
30
40
Yes No
2008ING
14
We further compare the 19 subjects’ satisfactions with the two lists. The result is depicted
in Table 5 and shows that the satisfaction with the list of exactly good items and the satisfaction
with the list of potentially good items are identical and positive. The potentially interesting items
searched by the argumentation mechanism can satisfy buyers. We also find that even the items
that exactly mach the conditions set by the buyers cannot fully satisfy the buyers. The possible
reason is that buyers usually cannot fully know their needs or cannot fully understand the
products they search for. Therefore, e-marketplaces should help buyers search not only exactly
but also potentially interesting items.
Table 5. Satisfactions with the Lists of Exactly and Potentially Good Items.
Mean SD. t-value (p-value)
Satisfaction with the list of exactly good items. 1.160 0.834
0.000 (1.000)
Satisfaction with the list of potentially good items. 1.160 0.834
5. Conclusions
This research designs a multi-agent e-marketplace with an agent-to-agent argumentation
mechanism. Using this mechanism, buyers can find out potentially interesting products through
their agents. Moreover, sellers can delegate their agents to make buyer agents change beliefs and
recommend their products to the buyers. To make agent-to-agent argumentation possible, this
research adopts OWL and SWRL to clearly express agents’ ontologies and uses an abstract
argumentation framework with information gathering approach to support defeasible reasoning.
A prototype system based on the proposed architecture and approaches was developed for
trading cell phones and a laboratory experiment was conducted to evaluate it. The experimental
results show that the proposed system is able to help buyers to search not only exactly but also
potentially interesting products, and e-marketplaces are supposed to help buyers to search
potentially interesting products.
This research indicates two innovation directions for electronic commerce. First,
argumentation mechanism is useful for online matchmaking and recommending potentially
interesting items. How to acquire users’ beliefs easily and how to present dialogue history
comprehensibly are also important. Therefore, more user-friendly argumentation-based agents
for searching various products should be developed. Second, the Semantic Web technology is
getting mature to express complex rules and information. Developing smarter agents using
Semantic Web technology is worthy to be researched. We believe that the proposed architecture
and approaches can help existing and initiative e-marketplaces to design their argumentation
mechanisms and facilitate the evolution of modern applications for electronic commerce.
6. References
Anderson, R. (1995) Essentials of Personal Selling: The New Professionalism, Prentice-Hall,
inc., New Jersey.
Baader, F., Calvanese, D., Mc Guinness, D., Nardi, D. and Patel-Schneider, P. (2002). The
Description Logic Handbook. Cambridge University Press.
Berners-Lee, T., Hendler, J., and Lassila, (2001). The Semantic Web, Scientific American, pages
34-44.
Bondarenko, A., Dung, P. M., Kowalski, R. A., and Toni, F. (1997). An abstract,
argumentation-theoretic approach to default reasoning. Artificial Intelligence, Vol.93, No.1-2,
pages 63-101.
Dumas, M., Governatori, G., Hofstede, A.H.M., & Oaks, P. (2002). A Formal Approach to
Negotiating Agents Development. Electronic Commerce Research and Applications, Vol.1,
2008ING
15
No.2, pages 193-207.
Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic
reasoning, logic programming and n-person games. Artificial Intelligence, Vol.77, No.2,
pages 321-358.
Grosof, B. N. (2004), Representing e-commerce rules via situated courteous logic programs in
RuleML, Electronic Commerce Research and Applications, Vol.3, No.1, pages 2-20.
Herman, I. and Hendler, J. (2006). Web Ontology Language (OWL). Retrieved from
Horrocks, I., Patel-Schneider P. F., and van Harmelen F. (2003). From SHIQ and RDF to OWL:
The making of a web ontology language. Journal of Web Semantics, Vol.1, No.1, pages 7-26.
Huang, S.-L. and Lin, F.-R. (2007). The Design and Evaluation of Intelligent Sales Agent for
Online Persuasion and Negotiation. Electronic Commerce Research and Applications, Vol. 6,
No. 3, pp. 285-296.
Lin, F.-r., & Chang, K.-y. (2001). Enhancing the On-Line Automated Bargaining Process Using
Bargaining Pattern and Dynamic Price Issuing Approaches. IEEE Intelligent Systems, special
issue on intelligent e-business, pages 41-47.
Matwin, S., Szapiro, T., & Haigh, K. (1991). Genetic Algorithm Approach to A Negotiation
Support System. IEEE Transactions on Systems, Man, and Cybernetics, Vol.21, No.1, pages
102-114.
Natalya F. N. and Deborah L. M.(2001). Ontology Development 101: A guide to Creating Your
First Ontology. Stanford University, Stanford, CA.
Oberhaus, M.A., S. Ratliffe, and V. Stauble (1993) Professional Selling: A Relationship Process,
Harcourt Brace Jovanovich College Publishers, Orlando.
Oliver, J.R. (1997) “A Machine Learning Approach to Automated Negotiation and Prospects for
Electronic Commerce,” Journal of Management Information Systems, Vol.13, No.3, pages
83-112.
Prakken, H. (2001). Relating Protocols for Dynamic Dispute with Logics for Defeasible
Argumentation. Synthese, Vol.127, No. 1-2, pages 187-219.
Toulmin, S. (1958). The uses of argument. Cambridge University Press. Cambridge.
Vreeswijk, G and H. Prakken (2000). Credulous and Sceptical Argument Games for Preferred
Semantics. Proceedings of JELIA'2000, The 7th European Workshop on Logic for Artificial
Intelligence, in: Lecture Notes in Artificial Intelligence, Springer, Berlin, pages 239-253.
Wasfy, A.M., & Hosni, Y.A. (1998). Two-Party Negotiation Modeling: An Integrated Fuzzy
Logic Approach. Group Decision and Negotiation, Vol.7, No.6, pages 491-518.
Wooldridge, M. (2002). Reaching Agreements. An Introduction to Multi-agent Systems. John
Wiley & Sons.
Zeng, D., & Sycara, K. (1998). Bayesian Learning in Negotiation. International Journal of
Human-Computer Studies, pages 125-141.
Enter the password to open this PDF file:
File name:
-
File size:
-
Title:
-
Author:
-
Subject:
-
Keywords:
-
Creation Date:
-
Modification Date:
-
Creator:
-
PDF Producer:
-
PDF Version:
-
Page Count:
-
Preparing document for printing…
0%
Log in to post a comment | https://www.techylib.com/en/view/rouleaupromise/using_semantic_web_technology_to_design_agent-to-agent | CC-MAIN-2018-17 | refinedweb | 7,577 | 53.31 |
For learning about the angle between two planes in 3D, we need to learn about planes and angles.
Plane is a two-dimensional surface that extends to infinity.
Angle is the space in degrees between two lines and surfaces which intersect at a point.
So, in this problem we need to find the angle between two 3D planes. For this we have two planes that intersect each other and we need to find the angle at which the are intersecting each other.
To calculate the angle between two 3D planes, we need to calculate the angle between the normal's of these planes.
Here, we have two planes,
p1 : ax + by + cz + d = 0 p2 : hx + iy + j z + k = 0
The directions of the normal's of the planes p1 and p2 are (a,b,c) and (h,i,j).
Using this a mathematical formula that is created to find the angle between the normal's of these two planes is,
Cos Ø = {(a*h) + (b*i) + (c*j)} / [(a2 + b2 + c2)*(h2 + i2 + j2)]1/2 Ø = Cos-1 { {(a*h) + (b*i) + (c*j)} / [(a2 + b2 + c2)*(h2 + i2 + j2)]1/2 }
#include <iostream> #include <math.h> using namespace std; int main() { float a = 2; float b = 2; float c = -1; float d = -5; float h = 3; float i = -3; float j = 5; float k = -3; float s = (a*h + b*i + c*j); float t = sqrt(a*a + b*b + c*c); float u = sqrt(h*h + i*i + j*j); s = s / (t * u); float pi = 3.14159; float A = (180 / pi) * (acos(s)); cout<<"Angle is "<<A<<" degree"; return 0; }
Angle is 104.724 degree | https://www.tutorialspoint.com/angle-between-two-planes-in-3d-in-cplusplus | CC-MAIN-2021-43 | refinedweb | 281 | 72.09 |
1.1 Camera webpage access tutorial
Part 1--Connect camera into Raspberry Pi board.
Part 2 -- Upgrading the system
Part 3 -- 4 -- Using camera
1.You can transfer the video which captured by the Raspberry Pi to the web page.
You need to log in to the WinSCP software and transfer the master.zip file to the pi directory of the Raspberry Pi.
2.You should input: #unzip master.zip
(This command to unzip mater.zip)
After complete unzip, input ls, you can see migp-streamer-master folder. As shown in the figure below..
4. You need to install library
You should input:
#sudo apt-get install libjpeg8-dev
After completion, As shown in the figure below.
5.Compiling
Note: we possess two formats of camera (JPEG/YUYV )
2-degree-of-freedom camera: YUYV/JPEG
If we want to change to the YUYV format, we need to modify the relevant files and compile:
Specific steps as shown below:
You should enter the input_uvc.c file in the /home/pi/mjpg-streamer-master/mjpg-streamer-experimental/plugins/input_uvc/ directory.
You should input:
cd /home/pi/mjpg-streamer-master/mjpg-streamer-experimental/plugins/input_uvc/
You should input:
ls
Just modify the format of 135 lines to V4L2_PIX_FMT_YUYV (The default is format=V4L2_PIX_FMT_MJPEG)
You should input command
nano -c input_uvc.c
If you want to change the resolution and frame rate of the USB camera, you can change it here.
After the modification is completed, press ctrl+X, press Y to save, and then press the Enter key.
Then return to the mjpg-streamer-experimental folder to compile
Then return to mjpg-streamer-experimental:
cd /home/pi/mjpg-streamer-master/mjpg-streamer-experimental
And input make clean all to complete the compilation.
You can wait for the compilation to complete, you can see the interface shown below.
6.Restart system
You need to input: sudo reboot to reboot system
Plug in the camera and restart the system.
7. After rebooting, enter the system.
You need to enter the mjpg-streamer-experimental directory by command. And use the following command to start the normal USB camera (for 2-DOF cameras):
#./mjpg_streamer -i "./input_uvc.so" -o "./output_http.so -w ./www"
Some cameras will report an error when executing this command. If they do not return to the command prompt and display "Starting Camera", it means success.
As shown in the figure below, the camera is successfully turned on:
8. Testing
View the image, open the browser on the PC side, you need to enter the following URL to see the static screenshot:
http://<RaspberryPi IP>:8080/?action=snapshot
My URL is:
As shown below.
You should input the following URL to see the dynamic image:
http://<RaspberryPi IP>:8080/javascript_simple.html
My URL is:
Note: After running the camera web page service, this process will occupy the camera, causing other camera commands to fail to run. Please end the process before running other camera commands.
View camera process number:
ps a
Input following command to kill PID
sudo kill -9 1118
Different Raspberry Pi process numbers are different. Please refer to the process shown in your own system.
1.2 Camera capture pictures
Part 1 2-- Using camera to capture pictures.
3. Input following command to install mplayer player.
sudo apt-get install mplayer -y
Wait patiently, after the installation is complete, you can see the interface shown below.
4. Input following command to install fswebcam video software.
sudo apt-get install fswebcam -y
Input following command to view USB camera picture.
sudo mplayer tv://
5. After confirming the screen, you need to exit through “ctrl+c” before proceeding to the next operation.
If you run mplayer and use the fswebcam command at the same time, the system will prompt an error that the camera is busy. As shown below.
6. Input following command to generate a real-time photo taken by the current camera in the /home/pi directory
fswebcam -d /dev/video0 --no-banner -r 320x240 -S 10 /home/pi/image.jpg
Parameter explanation:
-d -- configure which camera device to use
--no-banner --- There is no watermark in the photos taken. If this parameter is not used, the system may prompt a wrong font
-r -- Size of picture
-S -- Visibility, the range is from 1 to 10. If this parameter is not set or this parameter is set to 0, the photo will be black.
/home/pi/image.jpg -- Save the image path (if you do not add the path, picture will be saved in the current directory /home/pi/ by default ).
1.3 Camera python driver tutorial
Common API functions used by OpenCV:
1. cv2.VideoCapture()
cap = cv2.VideoCapture(0)
The parameter in VideoCapture () is 0, which means Raspberry Pi video0.
(Note: You can view the current camera through the command ls/dev/ )
cap = cv2.VideoCapture("…/1.avi")
VideoCapture("…/1.avi"), This parameter indicates that if the path of the video file is entered, the video is opened.
2.cap.set()
Camera parameters common configuration methods:
capture.set(CV_CAP_PROP_FRAME_WIDTH, 1920); # Width
capture.set(CV_CAP_PROP_FRAME_HEIGHT, 1080); # Height
capture.set(CV_CAP_PROP_FPS, 30); # Frame
capture.set(CV_CAP_PROP_BRIGHTNESS, 1); # Brightness 1
capture.set(CV_CAP_PROP_CONTRAST,40); # Contrast 40
capture.set(CV_CAP_PROP_SATURATION, 50); # Saturation 50
capture.set(CV_CAP_PROP_HUE, 50); # Hue 50
capture.set(CV_CAP_PROP_EXPOSURE, 50); # Visibility 50
Parameter explanation:
#define CV_CAP_PROP_POS_MSEC 0
// Calculate the current position in milliseconds
#define CV_CAP_PROP_POS_FRAMES 1
// Calculate the current position in frame
#define CV_CAP_PROP_POS_AVI_RATIO 2 // Relative position of the video
#define CV_CAP_PROP_FRAME_WIDTH 3 // Width
#define CV_CAP_PROP_FRAME_HEIGHT 4 // Height
#define CV_CAP_PROP_FPS 5 // Frame rate
#define CV_CAP_PROP_FOURCC 6 // 4 Character encoding
#define CV_CAP_PROP_FRAME_COUNT 7 // Video frames
#define CV_CAP_PROP_FORMAT 8 // Video format
#define CV_CAP_PROP_MODE 9
// Backend specific value indicating the current capture mode.
#define CV_CAP_PROP_BRIGHTNESS 10 // Brightness
#define CV_CAP_PROP_CONTRAST 11 // Contrast
#define CV_CAP_PROP_SATURATION 12 // Saturation
#define CV_CAP_PROP_HUE 13 // Hue
#define CV_CAP_PROP_GAIN 14 // Gain
#define CV_CAP_PROP_EXPOSURE 15 // Exposure
#define CV_CAP_PROP_CONVERT_RGB 16
// Mark whether the image should be converted to RGB.
#define CV_CAP_PROP_WHITE_BALANCE 17 // White balance
#define CV_CAP_PROP_RECTIFICATION 18 // Stereo camera calibration mark (note: only support DC1394 v2)
3.cap.isOpened()
Return true indicates open camera successful and false indicates open camera failure
4.ret,frame = cap.read()
cap.read () reads the video frame by frame. ret and frame are the two return values of the cap.read () function.
ret is a Boolean value, if the read frame is correct, it will return true, If the file has not been read to the end, it returns False.
Frame is the image of each frame, which is a three-dimensional matrix.
5.cv2.waitKey(n)
n represents the delay time, if the parameter is 1, it means a delay of 1ms to switch to the next frame of image.
If the parameter is too large, such as cv2.waitKey (1000), it will freeze because of the long delay.
The parameter is 0, such as, cv2.waitKey (0) only displays the current frame image, which is equivalent to video pause.
6.cap.release() and destroyAllWindows()
Call cap.release () to release the video.
Call destroyAllWindows () to close all image windows.
About Code
Since our entire tutorial runs in JupyterLab, we must understand the various components inside.
Here we need to use the image display component.
1.Import library
import ipywidgets.widgets as widgets
2.Set Image component
image_widget = widgets.Image(format='jpeg', width=600, height=500)
3.Display Image component
display(image_widget)
4.Open camera and read image
image = cv2.VideoCapture(0) # Open camera
ret, frame = image.read() # Read camera data
5.Assignment to components
#Convert the image to jpeg and assign it to the video display component
image_widget.value = bgr8_to_jpeg(frame)
import cv2
import ipywidgets.widgets as widgets
import threading
import time
#Set camera display component
image_widget = widgets.Image(format='jpeg', width=600, height=500)
display(image_widget) # display camera component
#bgr 8 to jpeg format
import enum
import cv2
def bgr8_to_jpeg(value, quality=75):
return bytes(cv2.imencode('.jpg', value)[1])
image = cv2.VideoCapture(0) # Open camera
# width=1280
# height=960
# cap.set(cv2.CAP_PROP_FRAME_WIDTH,width) # set width of image
# cap.set(cv2.CAP_PROP_FRAME_HEIGHT,height) # set height of image
image.set(3,600)
image.set(4,500)
image.set(5, 30) # set frame
image.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter.fourcc('M', 'J', 'P', 'G'))
image.set(cv2.CAP_PROP_BRIGHTNESS, 40) #set brightness -64 - 64 0.0
image.set(cv2.CAP_PROP_CONTRAST, 50) #set contrast -64 - 64 2.0
image.set(cv2.CAP_PROP_EXPOSURE, 156) #set exposure value 1.0 - 5000 156.0
ret, frame = image.read() # read camera data
image_widget.value = bgr8_to_jpeg(frame)
while 1:
ret, frame = image.read()
image_widget.value = bgr8_to_jpeg(frame)
time.sleep(0.010)
image.release() #After using the object, we need to release the object, otherwise when we use the object again, the system will prompt that the object be occupied, making it unusable.
The camera screen is shown below.
| https://www.elephantjay.com/blogs/tutorial/642 | CC-MAIN-2021-04 | refinedweb | 1,461 | 50.73 |
So you want to get Github to "do something" on a commit? Well, serverless technologies like IBM Cloud Functions (based on Apache Openwhisk) could be the answer for you.
A 1080p version of this video is on Cinnamon here.
This is part two of a live coding session in which I show how to create an IBM Cloud Function in Python to look up a user's PayID and resolve their payment address to pay them some XRP on commit. Part one is here:
PayID Hackathon live coding - Monetizing Github Commits
Matt Hamilton ・ Jul 2 '20 ・ 2 min read
Once again, I was joined by my colleague, Si Metson, for the show and we were working on this together with a view to submit it to the PayID hackathon currently running.
Recap of the session
I'm going to run through the basic steps we covered in both part I and II of this session. I will be taking examples from the code we wrote during the session, although may omit or simplify some bits for focus in this post.
Setting up Cloud Functions
Firstly you will need an account on IBM Cloud, which you can get here.
You will need to download the IBM Cloud CLI, login, install the cloud functions plugin, create and target a namespace:
$ ibmcloud login -a $ ibmcloud plugin install cloud-functions $ ibmcloud namespace create mynamespace $ ibmcloud fn property set --namespace mynamespace
Code For Our Webhook
We then need to create our code to be called. I've simplified this a bit from the video to make it a bit easier to blog about. But there are three main functions:
A function
pay_xrp_testnet that takes a wallet seed (secret key) for a wallet on the XRP Ledger testnet and will make a payment to a destination address of a certain amount using the Xpring Python API. For the sake of brevity this function is not waiting to confirm that the transaction was successful.
def pay_xrp_testnet(wallet_seed, address, amount): # Create a wallet instance using the seed wallet = xpring.Wallet.from_seed(wallet_seed) # The XRP testnet url = 'test.xrp.xpring.io:50051' client = xpring.Client.from_url(url) # Create a transaction txn = client.send(wallet, address, str(amount)) # Submit txn to the network res = client.submit(txn) return res
We then have a function,
get_address_from_payid, that given a PayID will form an HTTP request to fetch the contents of the PayID and parse it for the address for the network and environment we want:
def get_address_from_payid(payid, network, environment): # Convert the PayID into a URL e.g. # pay$username.github.io -> local_part, domain = payid.split('$') url = f'https://{domain}/{local_part}' response = requests.get(url) response.raise_for_status() data = response.json() # Look for an address that matches the network # and environment we want to use for address in data['addresses']: if address['paymentNetwork'] == network and \ address['environment'] == environment: return address['addressDetails']['address']
Again for the sake of brevity here, we are not handling errors if we can't resolve or parse the PayID.
And finally, we have our
main function that is actually called by the webhook:
def main(args): # The wallet seed (secret key) is passed in, as a bound # parameter to this function xrp_wallet_seed = args['xrp_wallet_seed'] # Extract the username of the pusher from the # Github hook payload pusher = args['pusher']['name'] # Assume a PayID on Github of this form payid = f'pay${pusher}.github.io' # Calculate the amount based on number of commits # this is just an example and could be any metric try: num_commits = len(args['commits']) except KeyError: num_commits = 1 amount = 1000 * num_commits # Get the address from the PayID and make payment address = get_address_from_payid(payid, 'XRPL', 'TESTNET') res = pay_xrp_testnet(xrp_wallet_seed, address, amount) return { 'address': address, 'amount': amount, }
In this version we are just assuming that a Github user with username
foo will have a PayID setup of
pay$foo.github.io. But in the future we'd like to parse the PayID from the commit message itself.
We calculate how much to pay the contributor based on the number of commits in this push multiplied by 1000 drops. There are 1e6 drops in 1 XRP.
I've detailed in another post how to setup a PayID using Github Pages:
Hosting a PayID on Github Pages
Matt Hamilton ・ Jun 20 '20 ・ 2 min read
Creating a Cloud Function
Now how to get create the Cloud function? Well this was slightly complicated by the fact we need to have access to the
xpring library, which is not included by default. So we created our own Docker image, the Dockerfile being:
FROM ibmfunctions/action-python-v3.7 RUN pip install --upgrade pip RUN pip install -r requirements.txt RUN pip install xpring[py] requests
This bases off the original docker image that IBM Cloud Functions uses for Python functions and installs the
xpring library to it.
I then built and pushed the Docker image to Dockerhub:
$ docker build -t hammertoe/twitch_payid:0.1 . $ docker push hammertoe/twitch_payid:0.1
I can then refer to it when I create my cloud function. I also bind a value of
xrp_wallet_seed to the function with the secret key of my wallet. Obviously take care with this secret as with it anyone can empty your account (in this case it is just the testnet, so no issue). By passing
--web true we allow this function to be accessed from the web.
$ ic fn action create webhook webhook.py --docker hammertoe/twitch_payid:0.1 --param xrp_wallet_seed snzBUmvTTAzCCRwGvGfKeA6Zqn4Yf --web true ok: created action github_pay
Configure the webhook in Github
To configure the webhook in Github, you need to first get the URL of the function we just created:
$ ic fn action get github_pay --url ok: got action github_pay
The can then put that URL with
.json on the end into our Github hook settings. Also note we are not setting a secret here, which we would want to configure in a real world system to ensure our Cloud Function can only be called by Github and not someone else randomly.
Results
Here is an animated gif showing a commit being pushed and the payment arriving a few seconds later in my XRP wallet:
The next steps will be to take this and expand it to be a bit more generic and secure to run in the real world.
The full code to this session is, as always, in the following Github Repo:
IBMDeveloperUK
/
ML-For-Everyone
Resources, notebooks, assets for ML for Everyone Twitch stream
ML for Everyone
ML For everyone is a live show broadcast on the IBM Developer Twitch channel every week at 2pm UK time.
The show is presented by Matt Hamilton and focusses on topics in and around machine learning for developers, with the odd smattering of random Python topics now and then.
Past shows
- 2020/05/26 Using Docker Images with IBM Cloud Functions
- 2020/05/19 Intro to IBM Cloud Functions
- 2020/05/12 Aligning Audio Files
- 2020/05/05 Finding Similar Images with an Autoencoder
- 2020/04/30 Creating an Autoencoder in Python with Keras
- 2020/04/15 Scraping web content from graphql with Python
- 2020/04/07 Intro to Watson Data Studio and Investigating Air Quaility Data
I hope you enjoyed the video, if you want to catch them live, I stream each week at 2pm UK time on the IBM Developer Twitch channel:
Top comments (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/hammertoe/using-ibm-cloud-functions-for-github-hooks-3md7 | CC-MAIN-2022-40 | refinedweb | 1,228 | 55.27 |
Section (3) posix_fallocate
Name
posix_fallocate — allocate file space
Synopsis
#include <fcntl.h>
DESCRIPTIONdis not a valid file descriptor, or is not opened for writing.
- EFBIG
offset+lenexceeds the maximum file size.
- EINTR
A signal was caught during execution.
- EINVAL
offsetwas less than 0, or
lenwas less than or equal to 0, or the underlying filesystem does not support the operation.
- ENODEV
fddoes not refer to a regular file.
- ENOSPC
There is not enough space left on the device containing the file referred to by
fd.
- ESPIPE
fdrefers to a pipe.
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7)..
NOTEShas been opened with the
O_APPENDor
O_WRONLYflags, the function fails.
SEE ALSO
fallocate(1), fallocate(2), lseek(2), posix_fadvise(2) | https://manpages.net/detail.php?name=posix_fallocate | CC-MAIN-2022-21 | refinedweb | 124 | 60.72 |
SFD2016
Software Freedom Day Raspberry Pi Workshop Ali Nik & Radi Almac
Workshop
Ali Nik & Radi Almac
#!/bin/bash
Topic="Raspberry Workshop"
SubTopic[0]="What is a Raspberry"
SubTopic[1]="Why Raspberry"
SubTopic[2]="Generations of Raspberry"
SubTopic[3]="Parts of the raspberry"
SubTopic[4]="Operating System"
SubTopic[5]="SSH"
SubTopic[6]="GNU/Linux"
SubTopic[7]="Bash"
SubTopic[8]="GPIOs"
SubTopic[9]="Compile/Run (C,Python)"
SubTopic[10]="IoT?"
SubTopic[11]="Q & A"
getContent(){
  echo ${SubTopic[$1]}
}
echo $Topic
getContent $1
# Run: bash workshop.sh 0
Is a low cost, credit-card sized computer.
Enables people of all ages to explore computing.
It’s capable of doing everything you’d expect a desktop computer to do.
It’s Cheap!
It’s Tiny
Can Run A Variety Of Operating Systems
Is Really Versatile
You Can Overclock It
Raspberry Pi 1 model B+
Release date: February 2012
CPU: 700 MHz single-core ARM1176JZF-S (model A, A+, B, B+, CM)
Memory: 256 MB (model A, A+, B rev 1); 512 MB (model B rev 2, B+, CM)
Storage: SDHC slot (model A and B), MicroSDHC slot (model A+ and B+), 4 GB eMMC IC chip (model CM)
Power: 1.5 - 3.5 W
Raspberry Pi 2 model B
Release date: February 2015
CPU: 900 MHz quad-core ARM Cortex-A7
Memory: 1 GB RAM
Storage: MicroSDHC slot
Power: 4.0 W
Raspberry Pi 3 model B
Release date: 29 February 2016
CPU: 1200 MHz quad-core ARM Cortex-A53
Memory: 1 GB RAM
Storage: MicroSDHC slot
Power: 4.0 W
Raspberry Pi Zero
Release date: November 2015
CPU: 1000 MHz single-core ARM1176JZF-S
Memory: 512 MB RAM
Storage: MicroSDHC slot
Power: 0.8 W
Price: US$5
Most Popular operating systems
We will use an SSH client to connect to the Raspberry Pi remotely over a Local Area Network (LAN).
No additional software is required on the Pi, but we need to activate the SSH service.
*PuTTY: free SSH and Telnet client for Windows
*Xming: X Window server for Windows (used later for graphic mode)
pi@raspberrypi ~ $ sudo raspi-config
Advanced Options >>
SSH >>
Enable
Restart your Pi and log in again.
pi@raspberrypi ~ $ ifconfig
A list of network interfaces will appear; look at eth0 and note its IP address.
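If you want to pull just the address out of that output for use in a script, grep and awk will do it. A small sketch; the sample text below stands in for the real ifconfig output on the Pi:

```shell
#!/bin/bash
# Sketch: extract the inet address for eth0 from ifconfig-style output.
# The "sample" string is illustrative; on the Pi you would pipe in `ifconfig`.
sample="eth0      Link encap:Ethernet  HWaddr b8:27:eb:00:00:01
          inet addr:192.168.1.42  Bcast:192.168.1.255  Mask:255.255.255.0"

# Split on colons and runs of spaces; the address is the 4th field
ip=$(echo "$sample" | grep 'inet addr' | awk -F'[: ]+' '{print $4}')
echo "connect with: ssh pi@$ip"   # prints: connect with: ssh pi@192.168.1.42
```

On the Pi itself you would replace the sample string with the real command output, e.g. `ifconfig eth0 | grep 'inet addr' | ...`.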
Using SSH in PuTTY
User: pi
Password: raspberry
Graphic mode?
On the Raspberry Pi, install a VNC server (such as TightVncServer).
In Windows, use an X server (such as Xming) or a VNC viewer.
In Linux:
pi@raspberrypi ~ $ xtightvncviewer raspberrypi.local:1
Four main parts make up a GNU/Linux system:
* The Linux kernel
* The GNU utilities
* A graphical desktop environment
* Application software
The kernel is primarily responsible for four main functions:
* System memory management
* Software program management
* Hardware management
* Filesystem management
The default shell used in all Linux distributions is the bash
shell. The bash shell was developed by the GNU project as a replacement for the standard Unix shell, called the Bourne shell (after its creator). The bash shell name is a play on this wording, referred to as the “Bourne again shell.”
The default prompt symbol for the bash shell is the dollar sign ($).
I will be using "$ " in the examples to represent terminal input; you don't need to type it.
You can check more basic commands
Display current directory:
$ pwd
Now create an empty file:
$ touch file.txt

Display the content of the folder:
$ ls -lh
Display a string:
$ echo "Hello world!"
Hello world!

Create a variable and display its content:
$ var="Hello world!"
$ echo $var
Hello world!
$ echo var
var
Remove a file or a directory:
$ rm file.zip
$ rm -R dir
If you need to write code that needs to run more than one time, you must consider to writing down in a file and then execute it when you need it.
Scripts need to have the shebang as their first line.
#!/bin/bash
The shebang is used to tell the system the name of the interpreter that should be used to execute the script that follows.
$ vi hello_world.sh
#!/bin/bash # This is a comment echo 'Hello World!' # "" and '' have the same effect
Press i to enter insert mode, then type the code above.
Save the script: press Esc, then type :wq and Enter.
We can execute the file by typing:
$ sh hello_world.sh
Output:
Hello World!
The other way to execute the file is to make it an executable file
chmod 755 hello_world.sh $ ls -l hello_world -rwxr-xr-x 1 pi pi 63 2015-10-07 10:10 hello_world
Execute the file:
$ ./hello_world.sh
Hello World!
We can evaluate an expression:
#!/bin/bash
x=1
if [ $x -eq 10 ]; then
  echo "x is equal to 10"
elif [ $x -gt 10 ]; then
  echo "x is greater than 10"
else
  echo "x is less than 10"
fi
Example 1:

#!/bin/bash
for i in $( ls ); do
  echo item: $i
done

Example 2:

#!/bin/bash
n="1 2 3 4 5 6 7 8"
for i in $n; do
  echo "number: "$i
done

Example 3:

#!/bin/bash
for i in `seq 10`; do
  echo $i
done

Example 4:

#!/bin/bash
for i in `seq 1 2 10`; do
  echo $i
done
Older UNIX shells like the Bourne shell and ksh88 have a clumsy, inefficient way of doing arithmetic based on the external expr command.

Example 1:

#!/bin/bash
count=0
while [ $count -lt 10 ]; do
  echo $count
  count=`expr $count + 1`
done

Example 2:

#!/bin/bash
count=0
while [ $count -lt 10 ]; do
  echo $count
  count=$((count + 1))
done
You can send parameters to a script. Write:

#!/bin/bash
echo $0
echo $1
echo $2
echo $#

Execute the file by typing:

$ sh myscript.sh Hello World
myscript.sh
Hello
World
2
Make a script that prints a count depending on the first parameter
?
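One possible solution, assuming the task means counting from 1 up to the first parameter (the default of 5 for a missing argument is our own choice):

```shell
#!/bin/bash
# count.sh (hypothetical name): count from 1 up to the first parameter
limit=${1:-5}   # default to 5 when no argument is given -- our assumption
count=1
while [ $count -le $limit ]; do
  echo $count
  count=$((count + 1))
done
```

Running `sh count.sh 3` prints 1, 2 and 3, one per line.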
Validate if a given number is odd
?
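One way to solve it, using the modulo operator inside $(( )); the is_odd helper name is our own:

```shell
#!/bin/bash
# Report whether a number is odd using arithmetic expansion
is_odd() {
  if [ $(($1 % 2)) -eq 1 ]; then
    echo "$1 is odd"
  else
    echo "$1 is even"
  fi
}

is_odd 7    # prints: 7 is odd
is_odd 10   # prints: 10 is even
```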
Calculate the factorial number of a given number (0-9)
?
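One possible solution, combining the for loop and arithmetic expansion seen above with a range check for the 0-9 constraint:

```shell
#!/bin/bash
# Factorial of a single digit (0-9), validated first
factorial() {
  if [ "$1" -lt 0 ] || [ "$1" -gt 9 ]; then
    echo "error: give a number between 0 and 9" >&2
    return 1
  fi
  local result=1 i
  for i in $(seq 1 "$1"); do
    result=$((result * i))
  done
  echo $result
}

factorial 5   # prints: 120
factorial 0   # prints: 1
```

Note that `seq 1 0` prints nothing, so the 0! = 1 case falls out of the loop naturally.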
* All GPIOs are 3.3 V tolerant
* Each pin can provide a maximum of 16 mA, but do not draw more than 50 mA from all pins at the same time
* The 3.3 V pin can deliver 50 mA maximum
* The 5 V pin can deliver what your power supply provides, minus the ~700 mA consumed by the Raspberry Pi itself
Several inputs:
You have a mouse, keyboard, Ethernet connection, monitor and SD card without needing to connect additional electronics.
Filesystem:
Being able to read and write data in the Linux file system will make many projects much easier.
Linux tools:
Packaged in the Raspberry Pi’s Linux distribution is a set of core command-line utilities, which let you work with files, control processes, and automate many different tasks.
Languages:
There are many programming languages out there and embedded Linux systems like the Raspberry Pi give you the flexibility to choose whichever language you’re most comfortable with
We will turn an LED on and off.
Connect an LED to GPIO25 using a 330 ohm resistor.
You can then use the Linux command line to turn the LED on and off.
Steps:
1) Connect to raspberry (SSH)
2) In order to access the input and output pins from the command line, you’ll need to run the commands as root, the superuseraccount on the Raspberry Pi. To start running commands as root, type sudo su at the command line and press enter:
The root account has administrative access to all the functions and files on the system and there is very little protecting you from damaging the operating system if you type a command that can harm it.
pi@raspberrypi ~ $ sudo su root@raspberrypi:/home/pi#
3).
4) Change to that directory with the cd command and list the contents of it with ls:
root@raspberrypi:/home/pi# cd /sys/class/gpio/gpio25 root@raspberrypi:/sys/class/gpio/gpio25# ls active_low direction edge power subsystem uevent value.
5) The directionfile is how you’ll set this pin to be an input (like a button) or an output (like an LED). Since you have an LED connected to pin 25 and you want to control it, you’re going to set this pin as an output:
root@raspberrypi:/sys/class/gpio/gpio25# echo out > direction
6) To turn the LED on, you’ll use the echo command again to write the number 1 to the value file:
root@raspberrypi:/sys/class/gpio/gpio25# echo 1 > value
7) After pressing enter, the LED will turn on! Turning it off is as simple as using echo to write a zero to the value file:
root@raspberrypi:/sys/class/gpio/gpio25# echo 0 > value
The files that you’re working with aren’t actually files on the Raspberry Pi’s SD card, but rather are a part of Linux’s virtual file system,.
We will read a digital input and display its status “0” for GND and “1” for 3.3v.
Connect the following diagram
Almost same instructions. Remember to run commands as root.
root@raspberrypi:/home/pi# echo 24 > /sys/class/gpio/export root@raspberrypi:/home/pi# cd /sys/class/gpio/gpio24 root@raspberrypi:/sys/class/gpio/gpio24# echo in > direction root@raspberrypi:/sys/class/gpio/gpio24# cat value 0
(1) Export the pin input to userspace.
(2) Change directory.
(3) Set the direction of the pin to input.
(4) Read the value of the of the pin using cat command.
(5) Print the result of the pin, zero when you aren’t not pressing the button.
We can use Python to control the GPIOs.
Open Python by typing on the Linux console:
sudo python
First make sure this library its already installed on the Raspberry Pi. In console type:
>>> import RPi.GPIO as GPIO
If you don’t get an error, you’re all set.
Close the console
Type the following command on the Linux console
$ wget $ tar zxf RPi.GPIO-0.1.0.tar.gz $ cd RPi.GPIO-0.1.0 $ sudo python setup.py install
The instructions here refer to an early version of RPi.GPIO. Please search the web for the latest version and replace the version numbers in the instructions below. On newer Raspbian distributions library is included.
On the Python console type:
>>> import RPi.GPIO as GPIO >>> GPIO.setmode(GPIO.BCM) >>> GPIO.setup(25, GPIO.OUT) >>> GPIO.output(25, GPIO.HIGH) >>> GPIO.output(25, GPIO.LOW) >>> exit()
1) Import GPIO library
2) Use BCM convention for the names of the GPIOs
3) Pin 25 as output
4) Turn on pin 25 (send 3.3v)
5) Turn off pin 25 (connect to ground)
6) Close python interpreter
BCM is for Broadcom BCM 2835 chip, the chip that is containned in the Raspberry Pi. When we set mode as BCM we are telling to the library that I want to use the real pin names of the BCM chip. There are other configurations that we will not use in this class (such as board).
We will use a Python Script.
Create a new file, name it blink.py
Write the following code:
import RPi.GPIO as GPIO import time GPIO.setmode(GPIO.BCM) GPIO.setup(25, GPIO.OUT) while True: GPIO.output(25, GPIO.HIGH) time.sleep(1) GPIO.output(25, GPIO.LOW) time.sleep(1)
touch blink.py vi blink.py
Run:
pi@raspberrypi ~ $ sudo python blink.py
Your LED should now be blinking!!!
Hit Ctrl+C to stop the script and return to the command line
import RPi.GPIO as GPIO import time GPIO.setmode(GPIO.BCM) GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP) count = 0 while True: inputValue = GPIO.input(18) if (inputValue == False): count = count + 1 print("Button pressed " + str(count) + " times.") time.sleep(.3) #Wait for the user time.sleep(.01) #So the CPU is not at 100%
Amazing IoT Project
Open this link Now !
By SFD2016
Raspberry Pi Workshop - Software Freedom Day 2016 - Tehran Ali Nik & Radi Almac | https://slides.com/sfd2016/raspberryworkshop | CC-MAIN-2021-21 | refinedweb | 1,892 | 62.88 |
Conversational C# for Java Programmers
Pages: 1, 2, 3
Let's dissect this program, starting with the files that the source code goes into. In Java, each class needs to exist in a like-named file (usually with the like-named extension .java) -- there are, of course, exceptions with class visible and inner classes, as those are defined within another class's .java file. C# does not have this, or any, restriction when it comes to defining classes, even when they are in different namespaces -- any part of a C# program may exist in the same .cs file, so as to form a self-contained unit.
.java
.cs
Java programs almost always follow one standard convention for naming: mixed-caps,
called camel cased. (Conventions are not imposed by the compiler, but simply
appear in the Java Language Specification and provide a style that programmers
use and become familiar with within certain languages -- as always, there are
author-personal exceptions to these rules.) Mixed-caps style doesn't use spaces
or underscores between logical splits in a name, but instead uses capital letters.
Object names typically have their first letter capitalized and then follow wimpy
caps for the rest of the name, for example ThisIsAnObjectName.
The first letter of a method, or variable, is usually in lowercase, followed
by the capitalization of all other logical splits (thisIsAMethodName
and thisIsAVariableName). Often, named constants (static final
variables) break from this convention and are usually declared in Java by having
names in all capital letters such as THISISACONSTANT.
ThisIsAnObjectName
thisIsAMethodName
thisIsAVariableName
THISISACONSTANT
C# follows this mixed-caps style, except that method names typically also capitalize
their first letter. This is often referred to as Pascal cased. It is the standard
to have the .NET Framework use Pascal case for object, method and property names,
but use camel casing for parameters.
Java packages place classes into different author-defined namespaces for organization and readability; the rule of thumb is that related classes go into the same package (and subpackages contain closely related classes within a package, etc.). Coders can know to look in a specific package to find classes that can provide related functionality. In Java, to use classes within a package, either the "fully qualified name" (i.e. java.io.BufferedReader) can be specified within the code, or the entire package or specific class can be imported into the namespace using the "import" statement at the top of the class's source. All of the standard classes provided by the JRE are partitioned out into the java package.
java.io.BufferedReader
C# has a similar name-partitioning scheme. To delimit different namespaces, a namespace block is specified. In the Hello.cs file, the namespace com.oreilly.hello block defines that all code within the braces are to be assigned to the com.oreilly.hello "package." What is not illustrated in the above example is that these namespaces may be nested. For example, the above namespace may have been defined as
Hello.cs
com.oreilly.hello
namespace com {
namespace oreilly {
namespace hello {
}
}
}
This allows different structures to be placed within different namespaces while
within the same source file. C#'s "using" statement brings an entire specified
namespace into the program's space; it does not have the exact same functionality
as Java's "import" statement, as it cannot bring a specific object into the
program's namespace.
All classes in C# descend from the System.Object class, just as all classes in Java descend from the java.lang.Object class. And just as in Java, if a class is simply extending Object (the default), then it is not necessary to specify it in the source code. When extending a class, the use of ":" is required (in the place of the "extends" keyword in Java). In the above example, the C# Hello class's fully qualified name is com.oreilly.hello.Hello, as it is defined in that namespace. What is not shown above is how to implement interfaces -- Java provides a very clean way of differentiating when the code is extending another object (via the
"extends" keyword), and when the code is implementing an interface (via the "implements" keyword). Java programmers may get annoyed with C# as it muddles the two together: interfaces may also be placed after the colon, and the compiler will only demand that the class it extends (if the class directly extends another) is the
first listed after the colon.
com.oreilly.hello.Hello
class ClassName : ExtendedClass, Interface1, Interface2, ...
The default access modifier for classes in C# is to make them "internal," meaning
they are not accessible from any other object outside the .cs file
in which they are defined. The more familiar "public," "private," and "protected"
modifiers are provided along with a fifth "protected internal" modifier which
is the union of "protected" and "internal" properties (the modified can be accessed
from a class which extends the enclosing class, or from within the file in which
it has been defined). In the Hello example above, both the Java and the C# class
are defined to be "public," although they do not need to be. Neither class provides
functionality that any other object may wish to use -- they simply provide the
proper signature so that the run-time environment may execute them.
[Ed. Note: This paragraph is no longer completely true as of current versions
of C#. While the default access modifier for C# class statements is "internal".
Internal means it is only visible within the assembly in which it is defined,
not just within the .cs file as the author previously stated. The default access
modifier for members of the class is private (not internal).]
In order for the runtime environment to load and start a class, it needs to know where to begin. The Java language has the programmer define a public static void main( String[] args ) method so that the JVM may be able to pass the command line arguments into that method and start the program. In order to be more flexible (and more C-like), C# allows the programmer three different method signatures for the entry point. The simplest one is the public static void Main() method, followed by the public static void Main( string[] args ) and the public static int Main( string[] args )" methods. The last two signatures have the ability to take in the command line parameters passed into the program, and the third has the ability of returning an exit code.
public static void main( String[] args )
public static void Main()
public static void Main( string[] args )
public static int Main( string[] args )
But just like Java, having an entry point that does not have a return value does not mean that the exit code of the program cannot be set. In Java, the programmer can call System.exit( int code ) with the value to exit with. A C# programmer can set the ExitCode property in the System.Environment class and when System.WinForms.Application.Exit() is called, the value of
ExitCode is returned to the run time environment.
System.exit( int code )
ExitCode
System.Environment
System.WinForms.Application.Exit(). | http://archive.oreilly.com/pub/a/dotnet/2001/05/31/csharp_4_java.html?page=2 | CC-MAIN-2016-30 | refinedweb | 1,186 | 52.29 |
I want to find a link which contains a text and some noise by BeautifulSoup4:
<a href="#">
<span>gggggggggggg</span>
Some text123
<div>fdsfdsfdsfd</div>
<span> fdsfdsfdsfd</span>
</a>
soup123.find("a", "Some text123") # => NoneType
The following might suit your needs. It simply finds all
a tags and determines if the search text you are looking for is present. It then displays the associated
href tag for any matching entries:
from bs4 import BeautifulSoup<span>gggggggggggg</span>Some text123<div>fdsfdsfdsfd</div><span> fdsfdsfdsfd</span></a> <a href="#2"><span>gggggggggggg</span>Some text124<div>fdsfdsfdsfd</div><span> fdsfdsfdsfd</span></a>""" soup = BeautifulSoup(html, "html.parser") search = "Some text123" for a in soup.find_all('a'): if search in a.text: print a['href']
So for my example, it would display:
#1 | https://codedump.io/share/FnZYtaYP6kjp/1/find-a-text-by-its-text-with-can-contain-noise | CC-MAIN-2017-13 | refinedweb | 131 | 57.37 |
17 July 2012 11:23 [Source: ICIS news]
LONDON (ICIS)--New registrations for passenger cars in the 27-member EU in June fell for the ninth consecutive month, an industry body said on Tuesday.
According to data from the European Automobile Manufacturers’ Association (ACEA), 1,201,578 new passenger cars were registered in the region in June, down by 2.8% from the same month in 2011, continuing the downward trend which began in October last year.
“June results were diverse across the EU, leading to an overall 2.8% downturn,” the ACEA said.
“Looking at the major markets, ?xml:namespace>
In the first half of 2012, new registrations for cars in the EU fell by 6.8% year on year to 6,644,829.
“From January to June, | http://www.icis.com/Articles/2012/07/17/9578614/eu-new-passenger-car-registrations-fall-for-ninth-consecutive-month.html | CC-MAIN-2014-41 | refinedweb | 129 | 66.64 |
For each node in the kernel device tree, the system selects a driver for the node based on the node name and the compatible property (see Binding a Driver to a Device). The same driver might bind to multiple device nodes. The driver can differentiate different nodes by instance numbers assigned by the system.
After a driver is selected for a device node, the driver's probe(9E) entry point is called to determine the presence of the device on the system. If probe() is successful, the driver's attach(9E) entry point is invoked to set up and manage the device. The device can be opened if and only if attach() returns success (see attach() Entry Point).
A device might be unconfigured to conserve system memory resources or to enable the device to be removed while the system is still running. To enable the device to be unconfigured, the system first checks whether the device instance is referenced. This check involves calling the driver's getinfo(9E) entry point to obtain information known only to the driver (see getinfo() Entry Point). If the device instance is not referenced, the driver's detach(9E) routine is invoked to unconfigure the device (see detach() Entry Point).
To recap, each driver must define the following entry points that are used by the kernel for device configuration:
Note that attach(), detach(), and getinfo() are required. probe() is only required for devices that cannot identify themselves. For self-identifying devices, an explicit probe() routine can be provided, or nulldev(9F) can be specified in the dev_ops structure for the probe() entry point.
The system assigns an instance number to each device. The driver might not reliably predict the value of the instance number assigned to a particular device. The driver should retrieve the particular instance number that has been assigned by calling ddi_get_instance(9F).
Instance numbers represent the system's notion of devices. Each dev_info, that is, each node in the device tree, for a particular driver is assigned an instance number by the kernel. Furthermore, instance numbers provide a convenient mechanism for indexing data specific to a particular physical device. The most common use of instance numbers is ddi_get_soft_state(9F), which uses instance numbers to retrieve soft state data for specific physical devices.
Drivers are responsible for managing their minor number namespace. For example, the sd driver needs to export eight character minor nodes and eight block minor nodes to the file system for each disk. Each minor node represents either a block interface or a character interface to a portion of the disk. The getinfo(9E) entry point informs the system about the mapping from minor number to device instance (see getinfo() Entry Point).
For non-self-identifying devices, the probe(9E) entry point should determine whether the hardware device is present on the system.
For probe() to determine whether the instance of the device is present, probe() needs to perform many tasks that are also commonly done by attach(9E). In particular, probe() might need to map the device registers.
Probing the device registers is device-specific. The driver often has to perform a series of tests of the hardware to assure that the hardware is really present. The test criteria must be rigorous enough to avoid misidentifying devices. For example, a device might appear to be present when in fact that device is not available, because a different device seems to behave like the expected device.
The test returns the following flags:
DDI_PROBE_SUCCESS if the probe was successful
DDI_PROBE_FAILURE if the probe failed
DDI_PROBE_DONTCARE if the probe was unsuccessful yet attach(9E) still needs to be called
DDI_PROBE_PARTIAL if the instance is not present now, but might be present in the future
For a given device instance, attach(9E) will not be called until probe(9E) has succeeded at least once on that device.
probe(9E) must free all the resources that probe() has allocated, because probe() might be called multiple times. However, attach(9E) is not necessarily called even if probe(9E) has succeeded
ddi_dev_is_sid(9F) can be used in a driver's probe(9E) routine to determine whether the device is self-identifying. ddi_dev_is_sid() is useful in drivers written for self-identifying and non-self-identifying versions of the same device.
The following example is a sample probe() routine.
Example 6-3 probe(9E) Routine); /* * Initalize the device access attributes and map in * the devices); /* * Reset the device * Once the reset completes the CSR should read back * (PIO_DEV_READY | PIO_IDLE_INTR) */ ddi_put8(dev_hdl, csrp, PIO_RESET);); }
When the driver's probe(9E) routine is called, the driver does not know whether the device being probed exists on the bus. Therefore, the driver might attempt to access device registers for a nonexistent device. A bus fault might be generated on some buses as a result.
The following example shows a probe(9E) routine that uses ddi_poke8(9F) to check for the existence of the device. ddi_poke8() cautiously attempts to write a value to a specified virtual address, using the parent nexus driver to assist in the process where necessary. If the address is not valid or the value cannot be written without an error occurring, an error code is returned. See also ddi_peek(9F).
In this example, ddi_regs_map_setup(9F) is used to map the device registers.
Example 6-4 probe(9E) Routine Using ddi_poke8(9F)); /* * Initialize the device access attrributes and map in * the device's); /* * The bus can generate a fault when probing for devices that * do not exist. Use ddi_poke8(9f) to handle any faults that * might occur. * * Reset the device. Once the reset completes the CSR should read * back (PIO_DEV_READY | PIO_IDLE_INTR) */ if (ddi_poke8(dip, csrp, PIO_RESET) != DDI_SUCCESS) { ddi_regs_map_free(&dev_hdl); return (DDI_FAILURE);); }
The Oracle Oracle.
Example 6-5 Typical attach() Entry Point
/* *); } }
Note - The attach() routine must not make any assumptions about the order of invocations on different device instances. The system might invoke attach() concurrently on different device instances. The system might also invoke attach() and detach() concurrently on different device instances.
The kernel calls a driver's detach(9E) entry point to detach an instance of a device or to suspend operation for an instance of a device by power management. This section discusses the operation of detaching device instances. Refer to Chapter 12, Power Management for a discussion of power management issues.
A driver's detach() entry point is called to detach an instance of a device that is bound to the driver. The entry point is called with the instance of the device node to be detached and with DDI_DETACH, which is specified as the cmd argument to the entry point.
A driver is required to cancel or wait for any time outs or callbacks to complete, then release any resources that are allocated to the device instance before returning. If for some reason a driver cannot cancel outstanding callbacks for free resources, the driver is required to return the device to its original state and return DDI_FAILURE from the entry point, leaving the device instance in the attached state.
There are two types of callback routines: those callbacks that can be canceled and those that cannot be canceled. timeout(9F) and bufcall(9F) callbacks can be atomically cancelled by the driver during detach(9E). Other types of callbacks such as scsi_init_pkt(9F) and ddi_dma_buf_bind_handle(9F) cannot be canceled. The driver must either block in detach() until the callback completes or else fail the request to detach.
Example 6-6 Typical detach() Entry Point
/* * detach(9e) * free the resources that were allocated in attach(9e) */ static int xxdetach(dev_info_t *dip, ddi_detach_cmd_t cmd) { Pio *pio_p; int instance; switch (cmd) { case DDI_DETACH: instance = ddi_get_instance(dip); pio_p = ddi_get_soft_state(pio_softstate, instance); /* * turn off the device * free any resources allocated in attach */ ddi_put8(pio_p->csr_handle, pio_p->csr, PIO_RESET); ddi_remove_minor_node(dip, NULL); ddi_regs_map_free(&pio_p->csr_handle); ddi_regs_map_free(&pio_p->data_handle); ddi_remove_intr(pio_p->dip, 0, pio_p->iblock_cookie); mutex_destroy(&pio_p->mutex); ddi_soft_state_free(pio_softstate, instance); return (DDI_SUCCESS); case DDI_SUSPEND: default: return (DDI_FAILURE); } }
The system calls getinfo(9E) to obtain configuration information that only the driver knows. The mapping of minor numbers to device instances is entirely under the control of the driver. The system sometimes needs to ask the driver which device a particular dev_t represents.
The getinfo() function can take either DDI_INFO_DEVT2INSTANCE or DDI_INFO_DEVT2DEVINFO as its infocmd argument. The DDI_INFO_DEVT2INSTANCE command requests the instance number of a device. The DDI_INFO_DEVT2DEVINFO command requests a pointer to the dev_info structure of a device.() might be called before attach(). The mapping defined by the driver. getinfo() then passes back the dev_info pointer saved in the driver's soft state structure for the appropriate device, as shown in the following example.
Example 6-7 Typical getinfo() Entry Point
/* * instance); }
Note - The getinfo() routine must be kept in sync with the minor nodes that the driver creates. If the minor nodes get out of sync, any hotplug operations might fail and cause a system panic. | http://docs.oracle.com/cd/E23824_01/html/819-3196/autoconf-60641.html | CC-MAIN-2013-48 | refinedweb | 1,484 | 52.19 |
Hello!
How is it possible to connect to an existing sqlite db in the folder of the app? All my trys just resulted in an sqlite db created in the isolated storage of the browser, but it seem's not to be possible to connect to an existing sqlite db in the file system.
Where is my mistake ?
Regards,
Michael
For Android, phonegap already has a way of using an external data source you simply must use it instead of the generic window.openDatabase.
Here is what I have done:
phonegap.1.4.1.js defines a DroidDB_opendatabase which can be used in place of the browsers implementation.
In SQLConnection, I do this:
Ext.ns('Ext.Sqlite');
Ext.Sqlite.Connection = Ext.define('Ext.Sqlite.Connection', {
extend: 'Ext.util.Observable',
/**
* @cfg {String} dbName
* Name of database
*/
dbName: undefined,
/**
* @cfg {String} version
* database version. If different than current, use updatedb event to update database
*/
dbVersion: '1.19',
/**
* @cfg {String} dbDescription
* Description of the database
*/
dbDescription: '',
/**
* @cfg {String} dbSize
* Max storage size in bytes
*/
dbSize: 5 * 1024 * 1024,
/**
* @cfg {String} dbConn
* database connection object
*/
dbConn : undefined,
constructor : function(config) {
config = config || {};
Ext.apply(this, config);
var me = this;
me.callParent([this]);
if(navigator.userAgent.toLowerCase().match(/android/)){
window.droiddb = new DroidDB();
me.dbConn = DroidDB_openDatabase(me.dbName, me.dbVersion, me.dbDescription, me.dbSize);
} else {
me.dbConn = openDatabase(me.dbName, me.dbVersion, me.dbDescription, me.dbSize);
}
return me;
}
});
This uses the dbname in the app namespace /data/data/app.package.name/app_database:appname.db which is the default path used by Storage.java -> openOrCreateDatabase.
Perhaps the same can be done for IOS with a SQLite plugin.
My team is currently implementing davibe's plugin for IOS so that a second conditional may be used:
if(navigator.userAgent.toLowerCase().match(/android/)){
window.droiddb = new DroidDB();
me.dbConn = DroidDB_openDatabase(me.dbName, me.dbVersion, me.dbDescription, me.dbSize);
} else if (navigator.userAgent.toLowerCase().match(/ios/)){
me.dbConn =PGSQLite_openDatabase(me.dbName, me.dbVersion, me.dbDescription, me.dbSize);
} else{
me.dbConn = openDatabase(me.dbName, me.dbVersion, me.dbDescription, me.dbSize);
}
Thanks for the info regarding android sqlite plugin,
Is that working fine ?, do let me know how ios phonegap works ?, we can include some config option in current proxy to prefer type of DB.
Being a nitpicker here as this work is great and looking forward to sticking my teeth into it - but could the classes be renamed so they work better with the loader? For the time being I have put the two files at:
sdk/src/data/proxy/SqliteStorage.js, and
sdk/src/sqlite/Connection.js
So their paths match their class names - a small issue but makes it easier for people using the proxy for the first time!
Hey Tom,
Thanks again for this work, I can't tell you how useful it is to my project which really needs more than localstorage can provide. Saving me so much time!
I just had a couple more suggestions for you, changes I've made to the SqliteStorage.js file, that again others might find useful.
I put a debug config on the class and when this is set to false, no errors are logged to the console; useful when you trust that the proxy is doing what it should and want to debug other areas of your project without the console being swamped!
I also had a case where I was trying to use a callback on record.save() to then use the record's id as a foreign key in another db table. The issue I ran into was that the operation of the create method on the Ext.data.proxy.SqliteStorage class sets the operation to completed and successful when the query has been sent to the db connection; not when the query has been successfully executed. Therefore, record.get('id'); as in the code below was returning undefined:
Code:
record.save(function() { record.get('id'); });
Thanks again for all the hard work, invaluable for those looking to leverage the advantages Sqlite can get them in ST2, particularly important with how awkward localstorage can be with ST2.
I'm trying to get it running with the loader. I'm now stuck with this error message:
Code:
Cannot read property 'dbConn' of undefined SqliteStorage.js:120
found the problem. DBConnection need to be defined before the stores and models. I got the hint from here:
thanks
Need a Simple ST2 Demo with SqliteProxy2 for PhoneGap SQLite-plugin testing
Need a Simple ST2 Demo with SqliteProxy2 for PhoneGap SQLite-plugin testing
Hi,
(Edited).
My mistake, the github has already the demo example.
I need just to figure out how to work this demo in PhoneGap.
Thanks.
Noli | http://www.sencha.com/forum/showthread.php?151444-SqliteProxy-for-ST2/page4 | CC-MAIN-2013-48 | refinedweb | 779 | 56.66 |
In the last two posts, we started building up a simple system to reuse a common set of C code in Android and iOS:
- OpenGL from C on Android by using the NDK
- Calling OpenGL from C on iOS, Sharing Common Code with Android
In this post, we’ll also add support for emscripten, an LLVM-to-JavaScript compiler that can convert C and C++ code into JavaScript. Emscripten is quite a neat piece of technology, and has led to further improvements to JavaScript engines, such as asm.js. Check out all of the demos over at the wiki.
Prerequisites
For this post, you’ll need to have Emscripten installed and configured; we’ll cover installation instructions further below. It’ll also be helpful if you’ve completed the first two posts in this series: OpenGL from C on Android by using the NDK and Calling OpenGL from C on iOS, Sharing Common Code with Android. If not, then you can also download the code from GitHub and follow along.
Installing emscripten
Installing on Windows (tested on Windows 8)
There is a set of detailed instructions available at. There’s no need to build anything from source as there’s prebuilt binaries for everything you need.
Here are a few gotchas that you might run into during the install:
- The GCC and Clang archives need to be extracted to the same location, such as C:\mingw64.
- The paths in .emscripten should be specified with forward slashes, as in ‘C:/mingw64’, or double backward slashes, as in ‘C:\\mingw64’.
- TEMP_DIR in .emscripten should be set to a valid path, such as ‘C:\\Windows\\Temp’.
You can then test the install by entering the following commands into a command prompt from the emscripten directory:
python emcc tests\hello_world.cpp -o hello_world.html
hello_world.html
Installing on Mac OS X (tested on OS X 10.8.4)
The instructions over at should get you up and running. Instead of
brew install node, you can also enter
sudo port install nodejs, if using MacPorts. I installed emscripten and LLVM into the /opt directory.
First you should run emcc from the emscripten directory to create a default config file in ~/.emscripten. After configuring ~/.emscripten and checking that all paths are correct, you can test the install by entering the following into a terminal shell from the emscripten directory:
./emcc tests/hello_world.cpp -o hello_world.html
open hello_world.html
Installing on Ubuntu Linux (tested on Ubuntu 13.04)
The following commands should be entered into a terminal shell; they were adapted from:
Installing prerequisites
sudo apt-get update; sudo apt-get install build-essential openjdk-7-jdk openjdk-7-jre-headless git
Installing node.js:
Download the latest node.js from, extract it, and then build & install it with the following commands from inside the nodejs source directory:
./configure
make
sudo make install
Installing LLVM
sudo apt-get install llvm clang
To download and install LLVM and Clang from source instead, see the instructions on this page:
Installing emscripten
sudo mkdir /opt/emscripten
sudo chmod 777 /opt/emscripten
cd /opt
git clone git://github.com/kripken/emscripten.git emscripten
Configuring emscripten
cd emscripten
./emcc
This command will print out a listing with the auto-detected paths for LLVM and other utilities. Check that all paths are correct, and edit ~/.emscripten if any are not.
You can then test out the install by entering the following commands:
./emcc tests/hello_world.cpp -o hello_world.html
xdg-open hello_world.html
If all goes well, you should then see a browser window open with “hello, world!” printed out in a box.
Adding support for emscripten
Let’s start by creating a new folder called emscripten in the airhockey folder. In that new folder, let’s create a new source file called main.c, beginning with the following contents:
#include <stdlib.h>
#include <stdio.h>
#include <GL/glfw.h>
#include <emscripten/emscripten.h>
#include "game.h"

int init_gl();
void do_frame();
void shutdown_gl();

int main() {
    if (init_gl() == GL_TRUE) {
        on_surface_created();
        on_surface_changed();
        emscripten_set_main_loop(do_frame, 0, 1);
    }
    shutdown_gl();
    return 0;
}
In this C source file, we’ve cleared a few functions, and then we’ve defined the main body of our program. The program will begin by calling
init_gl() (a function that we’ll define further below) to initialize OpenGL, then it will call
on_surface_created() and
on_surface_changed() from our common code, and then it will call a special emscripten function,
emscripten_set_main_loop(), which can simulate an infinite loop by using the browser’s
requestAnimationFrame mechanism.
Let’s complete the rest of the source file:
int init_gl() { const int width = 480, height = 800; if (glfwInit() != GL_TRUE) { printf("glfwInit() failed\n"); return GL_FALSE; } if (glfwOpenWindow(width, height, 8, 8, 8, 8, 16, 0, GLFW_WINDOW) != GL_TRUE) { printf("glfwOpenWindow() failed\n"); return GL_FALSE; } return GL_TRUE; } void do_frame() { on_draw_frame(); glfwSwapBuffers(); } void shutdown_gl() { glfwTerminate(); }
In the rest of this code, we use GLFW, an OpenGL library for managing OpenGL contexts, creating windows, and handling input. Emscripten has special support for GLFW built into it, so that the calls will be translated to matching JavaScript code on compilation.
Like we did for Android and iOS, we also need to define where the OpenGL headers are stored for our common code. Save the following into a new file called glwrapper.h in airhockey/emscripten/:
#include <GLES2/gl2.h>
Building the code and running it in a browser
To build the program, run the following command in a terminal shell from airhockey/emscripten/:
emcc -I. -I../common main.c ../common/game.c -o airhockey.html
In the GitHub project, there’s also a Makefile which will build airhockey.html when
emmake make is called. This Makefile can also be used on Windows by running
python emmake mingw32-make, putting the right paths where appropriate. To see the code in action, just open up airhockey.html in a browser.
When we ask emscripten to generate an HTML file, it will generate an HTML file that contains the embedded code, which you can see further below (WebGL support is required to see the OpenGL code in action):
Exploring further
The full source code for this lesson can be found at the GitHub project. Now that we have a base setup in Android, iOS, and emscripten, we can start fleshing out our project in the next few posts. Emscripten is pretty neat, and I definitely recommend checking out the samples over.
10 thoughts on “Calling OpenGL from C on the Web by Using Emscripten, Sharing Common Code with Android and iOS”
Awesome article – I’m curious though, where is the process happening by which the browser knows to translate OpenGL into OpenGL ES? The browser only knows ES. I imagine either:
– Emscripten is doing it (unlikely)
– GLFW determines it
Or maybe those functions you are using are strictly OpenGL ES (I’m familiar with OpenGL but not so much ES, and they do look pretty foreign to me).
Thanks!
I believe that it all gets translated into WebGL calls by the emscripten compiler (it has special behaviour for certain APIs including GL); so in effect only what WebGL supports would be supported, AFAIK.
It seems there’s a combination of native support & emulation: Here’s the authoritative source. 😉
Hi,
Do you have a github repo for the source code of your book? Because there’s some part that really confusing. I got so many red marks on my class. Just want to check if im following them correctly.
Hi Skadush,
There is no Github for the book but there is downloadable source code from. For this specific post (which is not part of the book), there is GitHub code available at. The C code in this post will unfortunately have some red underlined errors in Eclipse, but that’s OK — you can manually delete the errors from the “Problems” view and the code should still compile fine. | http://www.learnopengles.com/calling-opengl-from-c-on-the-web-by-using-emscripten-sharing-common-code-with-android-and-ios/ | CC-MAIN-2018-39 | refinedweb | 1,309 | 63.19 |
Timers can be used for a great variety of tasks, like measuring time spans or being notified that a specific interval has elapsed.
These two concepts are grouped into two different subclasses:
Chrono: used to measure time spans.
Alarm: to get interrupted after a specific interval.
You can create as many of these objects as needed.
Create a chronometer object.
Create an Alarm object.
handler: will be called after the interval has elapsed. If set to None, the alarm will be disabled after creation.
arg: an optional argument can be passed to the callback handler function. If None is specified, the function will receive the object that triggered the alarm.
s, ms, us: the interval can be specified in seconds (float), milliseconds (integer) or microseconds (integer). Only one at a time can be specified.
periodic: an alarm can be set to trigger repeatedly by setting this parameter to True.
Delay for a given number of microseconds; the value should be positive or 0 (for speed, this condition is not enforced). Internally it uses the same timer as the other elements of the Timer class. It compensates for the calling overhead, so, for example, 100 us should be really close to 100 us. For delays longer than 10,000 us it releases the GIL to let other threads run, so accuracy is not guaranteed beyond that.
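The overhead-compensation idea can be illustrated in desktop Python with a busy-wait on a monotonic clock (purely a sketch — CPython cannot approach the microsecond accuracy of the firmware, and delay_us here is just a stand-in name, not the Pycom function):

```python
import time

def delay_us(us):
    # Busy-wait until roughly `us` microseconds have elapsed on a monotonic clock.
    deadline = time.perf_counter() + us / 1_000_000
    while time.perf_counter() < deadline:
        pass

t0 = time.perf_counter()
delay_us(5000)  # request a 5 ms delay
elapsed_us = (time.perf_counter() - t0) * 1_000_000
```

A firmware implementation would additionally subtract the measured call overhead from the deadline, which is the compensation described above.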
Can be used to measure time spans.
Start the chronometer.
Stop the chronometer.
Reset the time count to 0.
Get the elapsed time in seconds.
Get the elapsed time in milliseconds.
Get the elapsed time in microseconds.
Example:
from machine import Timer
import time

chrono = Timer.Chrono()

chrono.start()
time.sleep(1.25)          # simulate the first lap took 1.25 seconds
lap = chrono.read()       # read elapsed time without stopping
time.sleep(1.5)
chrono.stop()
total = chrono.read()

print()
print("\nthe racer took %f seconds to finish the race" % total)
print("   %f seconds in the first lap" % lap)
print("   %f seconds in the last lap" % (total - lap))
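For experimenting away from the board, the same stopwatch behaviour can be emulated in desktop Python with the standard time module. This is an illustrative stand-in for the Pycom API, not the real implementation:

```python
import time

class Chrono:
    """Minimal desktop emulation of Timer.Chrono using time.monotonic()."""

    def __init__(self):
        self._start = None      # monotonic timestamp while running, else None
        self._elapsed = 0.0     # time accumulated across previous runs

    def start(self):
        if self._start is None:
            self._start = time.monotonic()

    def stop(self):
        if self._start is not None:
            self._elapsed += time.monotonic() - self._start
            self._start = None

    def reset(self):
        self._start = None
        self._elapsed = 0.0

    def read(self):
        running = time.monotonic() - self._start if self._start is not None else 0.0
        return self._elapsed + running

chrono = Chrono()
chrono.start()
time.sleep(0.05)            # stand-in for the first "lap"
lap = chrono.read()         # read elapsed time without stopping
chrono.stop()
total = chrono.read()
```

The emulation keeps the same start/stop/reset/read contract, so code written against it ports directly to the real Timer.Chrono.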
Used to get interrupted after a specific interval.
Specify a callback handler for the alarm. If set to None, the alarm will be disabled.
An optional argument arg can be passed to the callback handler function. If None is specified, the function will receive the object that triggered the alarm.
Disables the alarm.
Example:
from machine import Timer

class Clock:

    def __init__(self):
        self.seconds = 0
        self.__alarm = Timer.Alarm(self._seconds_handler, 1, periodic=True)

    def _seconds_handler(self, alarm):
        self.seconds += 1
        print("%02d seconds have passed" % self.seconds)
        if self.seconds == 10:
            alarm.cancel()  # stop counting after 10 seconds

clock = Clock()
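The periodic-callback pattern can likewise be sketched on desktop Python, with threading.Timer standing in for Timer.Alarm. This is a toy illustration only — the real Alarm fires from a hardware timer interrupt:

```python
import threading

class Alarm:
    """Toy stand-in for Timer.Alarm built on threading.Timer."""

    def __init__(self, handler, s, periodic=False):
        self._handler = handler
        self._interval = s
        self._periodic = periodic
        self._cancelled = False
        self._schedule()

    def _schedule(self):
        self._timer = threading.Timer(self._interval, self._fire)
        self._timer.start()

    def _fire(self):
        if self._cancelled:
            return
        self._handler(self)                 # the handler receives the alarm object
        if self._periodic and not self._cancelled:
            self._schedule()

    def cancel(self):
        self._cancelled = True
        self._timer.cancel()

ticks = []

def on_tick(alarm):
    ticks.append(len(ticks) + 1)
    if len(ticks) == 3:
        alarm.cancel()                      # stop after three ticks

alarm = Alarm(on_tick, 0.01, periodic=True)
threading.Event().wait(0.5)                 # give the timers time to fire
```

As in the documentation above, the handler receives the alarm object itself, so it can cancel the alarm from inside the callback.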
For more information on how Pycom’s products handle interrupts, see notes. | https://docs.pycom.io/firmwareapi/pycom/machine/timer/ | CC-MAIN-2021-10 | refinedweb | 443 | 69.89 |
ZF-2544: Add message type as argument to FlashMessenger.
Description
Currently you can't specify what type of message you're sending with the FlashMessenger in Zend_Controller. It's just a message.
I would like to be able to differentiate the messages I'm sending via FlashMessenger (e.g. error, warning, information, tip, etc.) so users can see more clearly what's going on. Therefore I propose to enhance the addMessage method of the FlashMessenger with an additional argument that allows setting the type of message.
Change: addMessage (string $message, string $namespace)
To: addMessage (string $message, string $message_type, string $namespace)
Posted by Matthew Weier O'Phinney (matthew) on 2008-02-14T11:04:34.000+0000
This issue duplicates ZF-1705
Posted by Wil Sinclair (wil) on 2008-03-25T20:41:12.000+0000
Please categorize/fix as needed.
Posted by Matthew Weier O'Phinney (matthew) on 2008-04-22T13:39:51.000+0000
Assigning to Ralph to evaluate and schedule.
Posted by Sean P. O. MacCath-Moran (emanaton) on 2009-04-25T20:24:13.000+0000
Greetings All,
FYI, I've created a PriorityMessenger - please check it out and tell me what you think?
Regards,
Sean P. O. MacCath-Moran
Posted by Marc Hodgins (mjh_ca) on 2010-11-26T22:39:33.000+0000
Closing as duplicate of ZF-1705 | http://framework.zend.com/issues/browse/ZF-2544?focusedCommentId=43350&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2014-15 | refinedweb | 219 | 61.63 |
Last week we released our latest watch face for Android Wear, and it is a little bit special: it is the first one we have built using something other than Java. It is written entirely in Kotlin. In this post I would like to share our experience building it and how working in the language created by JetBrains improved our development process.
To add a little bit of context: we are a small Madrid-based company which, among other things, has built a lot of wearable applications. We have worked with Pebble, Sony SmartWatch, Samsung's Tizen, Apple Watch and, of course, Android Wear. This is our seventh watch face for Google's take on watch OSes, and all of the previous ones were developed in the ubiquitous Java.
If you don't know Kotlin, it is an open-source language designed by JetBrains — yes, the people behind the powerful IntelliJ IDEA, the IDE which powers Android Studio. Kotlin is yet another language for the JVM, and it brings a lot of fresh air to JVM development. Since it does not use any "strange" things like AOT compilation or bytecode generation, it is fully interoperable with Java and works wherever Java works — so it works on Android. If you haven't taken a look at it yet, I recommend you do; you can get started at
Now that the introductions are done, let's get to the heart of the post: what we learned from our watch face development.
Ready for Production
During the development of the watch face, Kotlin 1.0 was officially released, and truth be told, everything worked perfectly. The developer tooling is made up of several packages, and all of them worked pretty well even while they were still betas, and they became stable afterwards.
The Android Studio Kotlin plugin works pretty well: it helps you with code completion, syntax highlighting, code navigation, converting code from Java, and all the goodies you can expect from a mature IDE.
Kotlin's Android Gradle plugin integrated perfectly with Android: you just click Run and it works on your phone, zero problems. There is just a single issue: Instant Run, the new Android build tools feature, doesn't work yet. This was somewhat expected — knowing that Instant Run works by patching the dex file, it makes sense that Kotlin's compilation process might have problems with it.
Since we started using Kotlin when it was about to reach 1.0, the language itself was already pretty stable: the syntax changes, if any, had no impact on our code, and we had no problems with the compiler or the standard library either.
Fully interoperable with Java
One of the pillars of the language is that it is fully compatible with all existing Java code. That means you can use all the Android libraries, and if you already have libraries and code, you can call them without any problems or changes.
This feature is key to adopting Kotlin in new developments. In fact, we started by using some of the libraries we had developed for all of our projects. What happened is that we ended up converting all those libraries to Kotlin, and now our previous projects are using libraries written in Kotlin.
No more if (foo != null) and less NullPointerException
If you are a Java developer, you have probably suffered the curse of writing if (foo != null) thousands of times — and when you forget it, a NullPointerException will remind you to write it.
Kotlin tries to eliminate this infamous check with some syntax sugar. If a variable can be null, you need to say so explicitly: you cannot assign null to a String, but you can to a String? (note the question mark at the end). You also cannot dereference a nullable variable directly; you either unwrap it explicitly with !! or use a safe call with ?. If the variable is null and you use a safe call, nothing happens and no NPE is thrown. That means you can write rectangle?.size?.width — if rectangle or size is null, nothing happens.
Fill your listeners using cool lambdas
When it comes to Java, you know you are going to put your typing skills to use. Chances are you are using a modern IDE that saves you a little bit of that pain, but all the boilerplate code is still there.
If you write applications which respond to events, like an Android app or any UI app, this is particularly noticeable when you need to write control Listeners. Kotlin does magic on this front: if you need to implement an interface which has only one method, you can use a Kotlin lambda instead, saving you hundreds of characters and making your code more expressive with fewer useless decorations. Let's see an example to show this better:
// Java (129 characters of boilerplate code)
view.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
// Do something cool
}
});
// Kotlin (37 characters to do exactly the same)
view.onClick { /* Do something cool */ }
No more findViewById
This point is specifically related with Android. Kotlin has a library called Android Extensions which will save you from typing useless code.
When I started in Android Programming some years ago, I hated why I needed to write so many findViewById() and hated more the typecast of that method. For non Android programmers, that method returns a generic View and you probably will need to cast it to a Button or a TextView to do something useful with it. Among other cool things, the Android Extensions library allows you to use any view from a layout just by using its id. You only need to import the layout like it was another class and all ids will be visible in your code magically.
// Java
TextView tv = (TextView) findViewById(R.id.my_textview);
tv.setText("Java y u make me write so much!");
// Kotlin
import kotlinx.android.synthetic.main_layout.*
my_textview.text = “isn’t kotlin cool?”
Syntax sugar
Kotlin has a lot more to offer in terms of making your code more readable and allowing you to type less. Java getters and setters are automatically converted to properties, instead of write person.getName() or setName(“”) , Kotlin allows you call that java code by writing person.name or person.name = “”.
Like modern languages, you don’t need to write the type of a variable when the compiler can infer the type, which can be done most of the times.
Assignments can contain conditional expressions. Kotlin has mutable and immutable references, and I usually feel safer when a variable is immutable. Sometimes, though, the value of a variable depends on some condition, and in Java this can force you to make the variable mutable. I think this is better explained by this piece of code:
// Java
String greeting = null;
if (isDaytime()) {
greeting = “Good Morning”;
} else {
greeting = "Good Evening";
}
Kotlin allows you to write it in this way:
val greeting = if (isDaytime()) {
“Good Morning”
} else {
“Good Evening”
}
Using this code, greeting is immutable, and you don't need to check whether it is null when using the variable.
Vibrant community
One of the important things when adopting a new technology is how its community communicates. The Kotlin community is great: you only need to connect to its Slack channels and you can talk with the developers of the language themselves. Moreover, an increasing number of projects are being developed in Kotlin, so you can share your experiences with others and learn by talking with them.
Conclusions
I could keep talking for hours about the Kotlin features we used while developing the watch face: the functional features of Kotlin, the pattern matching of its "switch"-like when expressions, the debugging time that optional types and immutability saved us, and so on. In our experience, choosing Kotlin has been a great decision: it made our code more readable, and — one of the most important outcomes — we enjoyed ourselves a lot while developing the application, which in the end is all that matters.
An implementation of the Porter2 English stemming algorithm.
Project description
- Free software: BSD license
- Documentation:
What is stemming?
Stemming is a technique used in Natural Language Processing to reduce different inflected forms of words to a single invariant root form. The root form is called the stem and may or may not be identical to the morphological root of the word.
What is it good for?
Lots of things, but query expansion in information retrieval is the canonical example. Let’s say you are building a search engine. If someone searches for “cat” it would be nice if they were shown documents that contained the word “cats” too. Unless the query and document index are stemmed, that won’t happen. Stemming can be thought of as a method to reduce the specificity of queries in order to pull back more relevant results. As such, it involves a trade-off.
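To make the query-expansion idea concrete, here is a toy sketch in Python. It uses a deliberately naive suffix-stripping function as a stand-in for the real Porter2 algorithm (so the stems it produces are not Porter2's), but the indexing pattern is the same:

```python
def naive_stem(word):
    # Deliberately naive stand-in for Porter2: strip a few common suffixes.
    for suffix in ("ing", "ed", "ies", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

documents = {1: "my cats are sleeping", 2: "the dog barked"}

# Build an inverted index keyed by stemmed terms.
index = {}
for doc_id, text in documents.items():
    for term in text.split():
        index.setdefault(naive_stem(term), set()).add(doc_id)

def search(query):
    # Stem the query terms the same way the index was stemmed.
    ids = set()
    for term in query.split():
        ids |= index.get(naive_stem(term), set())
    return ids

print(search("cat"))      # finds document 1 via the shared stem of "cats"
```

Because both the query and the index are reduced to the same stems, a search for "cat" pulls back the document containing "cats" — the recall gain (and the precision trade-off) described above.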
What type of stemmer is this?
Porter2 is a suffix-stripping stemmer. It transforms words into stems by applying a deterministic sequence of changes to the final portion of the word. Other stemmers work differently. They may, for instance, simply look up the inflected form in a table and map it to a morphological root, or they may use a clustering approach to map diverse forms to a centre form. Different approaches have different advantages and disadvantages.
How do I use it?
Very simply. Import it, instantiate a stemmer, and away you go:
from porter2stemmer import Porter2Stemmer

stemmer = Porter2Stemmer()
print(stemmer.stem('conspicuous'))
History
1.0 (2016-03-31)
- First release on PyPI.
Also, this code seems to be much easier to understand, and it makes calibrating the ranges for the flex sensors easier.
So the code for the glove is as follows:
// This will be the code for the glove. I got parts from two different existing glove codes:
// one from Gabry295 on Instructables and the other from dschurman, also from
// Instructables.
int flexpinky = A0;
int flexring = A1;
int flexmiddle = A2;
int flexindex = A3;
int flexthumb = A4;
void setup()
{
Serial.begin(9600);
pinMode(flexpinky, INPUT);
pinMode(flexring, INPUT);
pinMode(flexmiddle, INPUT);
pinMode(flexindex, INPUT);
pinMode(flexthumb, INPUT);
}
void loop()
{
// Defines analog input variables
int flex1 = analogRead(flexpinky);
int flex2 = analogRead(flexring);
int flex3 = analogRead(flexmiddle);
int flex4 = analogRead(flexindex);
int flex5 = analogRead(flexthumb);
// Uncomment this section to print the raw readings of each sensor:
//Serial.println(flex1);
//Serial.println(flex2);
//Serial.println(flex3);
//Serial.println(flex4);
//Serial.println(flex5);
// The ranges for each sensor are different. You will need to find out the max and min
// of each sensor by uncommenting the section right before these comments.
int pos1 = map(flex1,600,830,0,180);
pos1 = constrain(pos1,0,180);
int pos2 = map(flex2,600,830,0,180); // placeholder range - calibrate with your own min/max
pos2 = constrain(pos2,0,180);
int pos3 = map(flex3,600,830,10,180); // placeholder range - calibrate with your own min/max
pos3 = constrain(pos3,0,180);
int pos4 = map(flex4,640,840,0,180);
pos4 = constrain(pos4,0,180);
int pos5 = map(flex5,720,810,0,180);
pos5 = constrain(pos5,0,180);
Serial.write("<"); // This is what is being sent from the glove's XBee module.
Serial.write(pos1);
Serial.write(pos2);
Serial.write(pos3);
Serial.write(pos4);
Serial.write(pos5);
//Serial.println("<");
//Serial.println(pos1);
//Serial.println(pos2);
//Serial.println(pos3);
//Serial.println(pos4);
//Serial.println(pos5);
delay(50);
}
Just make sure you choose the correct board under Tools, and that you have the XBee disconnected from the LilyPad when uploading. This code was put together from parts by Gabry295 and dschurman on Instructables.com. Their work can be seen at the following links:
The final code used was the code for the hand itself. Again, this code was put together using different parts and methods from Gabry295 and dschurman. The last code is as follows:
// This code is made by combining different parts of two existing codes
// One from Gabry295 and the other from dschurman on Instructables.com
#include <Servo.h> // Includes the servo library
Servo pinky, ring, middle, index, thumb;
int servoPin1 = 5;
int servoPin2 = 6;
int servoPin3 =11;
int servoPin4 = 10;
int servoPin5 = 3;
int angpinky = 0;
int angring = 0;
int angmiddle = 0;
int angindex = 0;
int angthumb = 0;
byte startPackage; // Variable that will contain the character of start package
void setup()
{
Serial.begin(9600);
// Attach the servos to their respective pins
pinky.attach(servoPin1);
ring.attach(servoPin2);
middle.attach(servoPin3);
index.attach(servoPin4);
thumb.attach(servoPin5);
// Set each servo pin to output
pinMode(servoPin1, OUTPUT);
pinMode(servoPin2,OUTPUT);
pinMode(servoPin3, OUTPUT);
pinMode(servoPin4, OUTPUT);
pinMode(servoPin5, OUTPUT);
}
void loop()
{
if(Serial.available()) {
startPackage = Serial.read();
angpinky = Serial.read();
angring = Serial.read();
angmiddle = Serial.read();
angindex = Serial.read();
angthumb = Serial.read();
if(startPackage == '<'){
pinky.write(angpinky);
ring.write(angring);
middle.write(angmiddle);
index.write(angindex);
thumb.write(angthumb);
}
}
delay(50);
}
Just make sure you have the XBee disconnected here as well when uploading the code. To calibrate the range of the flex sensors on the glove, uncomment the section that is commented out. Once it is uncommented, you will also need to comment out the section where the data is sent to the XBee. Once this is done, you can see the flex sensor readings in the serial monitor.
Here you will see the maximum and the minimum readings for each individual flex sensor. You will use these maximum and minimum readings when it comes to remapping the flex sensor readings to 0 and 180 for the degrees of the servos.
Once you have chosen your max and min values for each individual flex sensor, re-comment the lines you had uncommented. To make sure your chosen maximum and minimum readings translate well to 0 and 180, you can uncomment the last section in the glove code. Once it is uncommented, you will be able to see the degree positions the servos will receive; watch them change in the serial monitor as you close and open the glove.
Once everything is in working order, you can comment out the last section again. Just make sure the section right before it isn't commented out — that is the data that will be sent from the glove.
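As an aside, the '<' start-marker framing used between the glove and the hand can be prototyped off the hardware. Below is a small Python sketch of the same parsing logic, with a made-up byte string standing in for the serial stream (the real hand sketch reads these bytes from Serial instead):

```python
START = ord('<')  # the start-of-packet marker the glove writes

def parse_packets(stream):
    """Return a list of 5-tuples of finger angles from a '<'-framed byte stream."""
    packets = []
    i = 0
    while i < len(stream):
        if stream[i] == START and i + 5 < len(stream):
            packets.append(tuple(stream[i + 1:i + 6]))  # pinky..thumb angles
            i += 6
        else:
            i += 1  # skip noise until the next start marker
    return packets

# Two packets with a stray noise byte between them.
stream = bytes([ord('<'), 180, 142, 163, 152, 180, 99,
                ord('<'), 0, 10, 20, 30, 40])
print(parse_packets(stream))
```

Note one limitation this sketch shares with the real protocol: an angle value of 60 is the same byte as '<', so a more robust design would pick a marker outside the 0–180 angle range or add a checksum.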
The only changes that I have made to the 3D printed hand here at Boca Bearings were the servos and the strings used with them. I swapped out the smaller SG90 micro servos for TowerPro MG996R servos. The reason was that the MG996R servos have higher torque, which translates to a stronger grip. The only downside of this new setup was that the response time was significantly delayed. It could be because of the current required to move these servos, but the specs of the MG996R, specifically the operating speed, are also lower than those of the micro servos.
The other change was the strings. The previous ones I used were a nylon-type material, and that type of string would stretch over time, so I later opted for braided fishing line, which is much stronger than the first type of string I used.
The hand was also displayed at the MakerFaire in Orlando, FL at the Orlando Science Center.
I hope the code above can help anyone who is trying to do this project. This last code still has one issue: for some reason the thumb periodically jumps. I still haven't figured out the cause, but maybe someone else can.
Hello Gerado, I have uploaded your codes but they still do not work. I tried using the code from dschurman; it worked on the Uno and LilyPad, so I think the problem is from the XBee. This is my XBee:
Is it the correct version?
One more thing: I use 2.2 inch flex sensors with 47k ohm resistors, and MG90S servos.
Hope you will reply soon,
I'm really sorry to hear that. I can't really tell you what the problem is, then. I don't know if it may have to do with your wiring, or maybe it is the XBee that you are using. Your XBee is different from the one that I used and from the one that Gabry295 used. Our XBees were SP1 XBees; yours says MaxStream on it. So I think your XBee may need some configuration, but I am not sure — I could be wrong, as I'm not an expert on these topics. I found information on your XBee at:
I looked it over really quickly and saw a configuration section in it. The SP1 XBees don't need any configuration as far as I know; I didn't do any configuration on the XBees for this project. Also, I think the XBee must be powered by 3.3 volts — I'm not sure if you're supplying more than that to the XBee with the LilyPad.
I've found this forum that talks about the XBee MaxStream. It may provide some information for your XBee.
In that forum thread they mention some website that helps with MaxStream but I don't know if it can be of any help at all and the thread is from 2006.
I think your flex sensor is fine. I think it's shorter than mine, but it's the one that Gabry used. You just need to figure out the maximum and minimum values of the flex sensors. Your resistors are fine, and I think your MG90S servos are also fine. Just make sure you provide enough voltage to them, since servos can draw large amounts of current. The first servos I used were similar to yours, and they would not work at all from the Arduino's 5V pin, even with just one servo hooked up to it. The only way they worked was when I provided a separate 6V power source from 4 AA batteries.
I hope you figure out the problem with your project. But things like this happen all the time. There is always something that doesn't go according to plan but eventually you will figure out what the problem is. It happened to me on all the projects that I have worked on here at Boca Bearings.
When I was trying to calibrate my flex sensors using the serial monitor, there were 5 values displayed; for example, if the hand was unflexed it showed
<´Ž£˜´<
180
142
163
152
180
<
830
832
847
808
815
<´�¢--´<
180
141
162
151
180
<
Do I just pick the lowest which is 808 to be my min value?
The first value after < is the reading for the pinky. The second is for the ring finger, the third is for the middle finger, the fourth for the index, and the fifth is for the thumb.
So 808 is the fourth value, meaning it's for the index finger. I think 808 would be your max value for the index finger, which corresponds to pos4. All the other numbers in that set after the < are max values for the other flex sensors.
When you connected your xbees did you do anything or did you just turn on the glove and Arduino and it started working?
Because I'm following exactly what you did but its not transmitting for some reason
also whenever I turn on my power supply one servo makes a noise but the rest don't do anything, and one time I saw smoke coming out. Is this a wiring problem or can it be fixed in the code?
hi Jared i have the same problems you mentioned, my xbees dont seem to be transmitting and receiving. and also when i turn on the power one servo makes a noise, no smoke though...
how did u fix these problems??
I just turned on the glove and the hand. The order in which they are turned on doesn't matter. Just make sure the switch on the SparkFun XBee shield is switched to UART. If not, data will not be transmitted. That's what happened to me. Another thing to check is to make sure in the final program of the glove that you are using the Serial.write commands at the end of the program and not the Serial.println command. Also, make sure you are using "<" in the first Serial.write command in the glove and that '<' is used for the if statement in the hand code. This is what marks in where the data should start to be read by the hand code.
Sometimes the servo noise can happen because of the range they are programmed with — the angle range obtained from the flex sensor readings. If the flex sensor's range is small, the servo may twitch. But I am not sure if that's the problem. One of my servos always twitched no matter what I did to try to fix it. I think the smoke coming out is a wiring problem — maybe some wires that shouldn't be touching are making contact with each other. Just review your wiring. Hopefully this helps and you get it up and running.
Sorry for bothering you this much, but I have one last question to ask. I got new xbees and they seem to be communicating, the glove wiring is also good, each servo gets a signal from the arduino when running the sample sweep code.
But as soon as I hook up the signal wires from the 5 servos to the Adafruit motor shield, nothing happens, even when giving external power. But if I run the sample servo sweep code and connect one servo to the digital pin 9 slot, that one runs the code. Could it be a problem with the actual Adafruit motor shield, or the wiring? By the way, I'm using a 3 AA battery pack to power all 5 — is that too little? I could replace it with a 4 AA battery pack.
Is it possible to show me exactly how you set up the wiring for the inmoov arm.
Once again sorry to be asking so many questions your probably very busy.
It's not a problem, you can ask me as much as you want and I will try to help as much as I can.
The main reason why it may not be working is that the code shown here is meant to be used without an Adafruit shield, because I found the Adafruit servo shield difficult to use.
Another reason the Adafruit shield isn't working could be that you haven't included its library in the code. If you have included the library in the code, make sure the library is saved in a libraries folder inside the Arduino folder. But you also have to have code that actually makes use of the servo shield.
Another thing, I'm not sure which Adafruit shield you're using. There is the Adafruit motor shield which could only control 2 servos.
The other shield is the Adafruit PWM/Servo shield which can control up to 16 servos. This is the shield you would want to use out of the two. But, it requires some knowledge of how to really use it. I had some trouble using it. I could get it to work but I couldn't obtain the full 180 degrees of motion of the servos. If you want me to, I can send the codes or upload them here that used this particular shield. Then you can probably look over it and maybe try to fix it or tweak it to make it fully work.
Maybe 3 AA batteries might not be enough for the servos. I would go with the 4 AA batteries. 4 AA batteries should be enough. I used it for the small micro servos I first used and it worked perfectly. When I changed out the small servos to standard size high torque servos, TowerPro MG 996R, the movement of the servos was much slower. I think it could be probably because the high torque motors may draw more current or that is how they operate.
Yes, I can try to show you how I set up the wiring of the Inmoov arm. Just give me a day or two to create the wiring setup in a program to display it.
I would also really appreciate you showing me the wiring, thank you.
So I got most of it to working but the thumb keeps tewtching, like I would go up and down and when I stop it keeps replaying
Yeah, that's the same thing that happened to me. I tried to find a solution but I couldn't figure it out. I have heard that it happens because of large current draws from the servos. I've read online that you can use a capacitor to try to fix this by providing energy when it's needed. I played around with it a little, but I couldn't figure out things like what capacitor size to use and whether the capacitor type matters.
But that's good you got it to work this far. Do you have any pictures or video of your work? I would really like to check it out. I also added a new post about the wiring and the codes I used for the Servo shield if you want to check it out.
I found this on Adafruit. It may help.
So I bought some capacitors which I'm planning to test with. I am currently restringing the tendons. I will post a link to the video when that's done (probably in a couple of days).
This comment has been removed by the author.
Here is the video:
That looks pretty cool, Thanks for sharing! Glad that you got it to work now.
What do I do after finding the max and min values of each flex sensor?
Once you find the max and min of each flex sensor, you can use those values in the map function when converting from the flex sensor readings to degrees that the servos would use. For example, in the following line of the code
int pos1 = map(flex1,600,830,0,180);
600 and 830 were the min and max values I found for the first flex sensor. Each finger will have a different range. So you need to find the min and max of each flex sensor and then use those values when converting from flex sensor readings to degrees. I hope this helped clarify things for you.
Yeah, I figured that out after asking that question :p.
Thanks for the neat explanation btw.
However, it still doesn't work.
In the serial monitor I can see the flex sensor readings, and I also used the min and max values to map from 0 to 180 and saw the results in the serial monitor. So the glove part is fine. However, after I connected everything it still doesn't work... I have 2 XBee S1 modules and two Arduino XBee Pro shields, so I connected the TX and RX pins of the LilyPad to the RX and TX pins of the XBee shield and simply mounted the other shield on top of the Arduino Uno. Is there a way to check if the Arduino Uno board is receiving a signal from the LilyPad?
Hi, it's been a while since I've worked on this project but I'll try my best to help you troubleshoot what the problem is. I think there is a way to see if the Arduino Uno is receiving some kind of signal from the glove.
In the hand code, you can temporarily replace the following lines of code
pinky.write(angpinky);
ring.write(angring);
middle.write(angmiddle);
index.write(angindex);
thumb.write(angthumb);
with some form of the command
Serial.println();
so that you can see the degree positions displayed in the serial monitor. But even if you just leave the hand code unchanged and run it, you should see some random characters displayed in the serial monitor. Those characters will change as you open and close the glove. Then you would know that the hand is receiving some form of signal from the glove.
I also looked online and found that the Arduino XBee shield has some sort of jumper on board that would change its settings. So that may also be a reason for no communication.
This comment has been removed by the author.
It's working, but not properly. When I bend the flex sensors individually, two or three of the fingers work properly. When I bend one of the flex sensors, two servo motors react together. Sometimes three of them rotate together... and sometimes none of them rotate...
What do I do?
It's very difficult to pinpoint what the problem could be in your situation. It could be that there isn't enough power being supplied to the servos. Make sure when you are bending one flex sensor that you are not flexing the nearby sensors; a slight movement of the nearby sensors can cause the servos to move. I hope you figure it out. Sorry I couldn't really help much.
Pauly wrote:Pascal's lovely. Everybody should learn Pascal before any other language...
Entroper wrote:Pauly wrote:Pascal's lovely. Everybody should learn Pascal before any other language...
Learned Pascal after I learned C/C++. It seemed cumbersome and impotent. Became easier as I used it more often, but I don't think I could ever describe the language as "lovely".
#include <iostream>

int main()
{
    for (int x = 0; x < 3000; ++x)
        std::cout << "I will not use redundant, self-incrementing loops in my code." << std::endl;
}
muyuubyou wrote:C++ is a half-assed piece of crap.
C rocks, Java too. C++ is a half-assed piece of crap
mac_h8r1 wrote:but with the iostream commands, we've not been taught to use the std::... stuff in my computer science class, because the college board and AP testers don't expect us to use it.
Testing a class for correctness using Unit Tests means to exercise all methods of a class. This is especially easy for public methods. You can freely choose where to locate your test code, it always has access to public methods of the class. But how about internal/friend methods for example? They are only visible to test code located in the same project/assembly as the class to be tested. And how about private methods? And how about methods without any "directly visible effect", i.e. methods which change only the private state of an object of the class to be tested?
I favor separating test code from implementation code. My "component test beds" look like this:
Let's assume I'm developing a stack component called "MyStack". I'd then create a Visual Studio solution, e.g. MyStack Component.sln, and realize the component as an assembly (Visual Studio project), e.g. MyStack.csproj. All tests for that component, though, go into a separate project, e.g. test.MyStack.csproj. Why's this? Because then I can potentially reuse the tests for different implementations. However, this depends on defining an interface/contract for the component and requires the test to load an implementation dynamically.
Although the above example does not define a separate interface for the component, I nevertheless put the tests in their own project. Call it a habit, if you like ;-)
Now, on to testing the component. Here's an outline of the class MyStack:
What could a test for Push() look like? Maybe something like this:
s.Push(42);
Assert.AreEqual(42, s.Pop());
But if this test failed, I would not know whether Push() or Pop() was incorrect. That's why I want to check Push()'s success by looking at the private state (stackElements) of the MyStack object.
Since this state is not visible to any client of a MyStack object, be it within the same component or outside, I can't put this test in my separate test project.
But there's a solution to this dilemma: I put the test code in a static method of the class to be tested, and call it from my test project. However, I don't want to clutter the code of the MyStack class with additional test methods, so I separate any test methods (probably it's just a few of this kind that need to be this close to the code under test) into another file by making MyStack a partial class:
MyStack.csproj/MyStack.cs:
public partial class MyStack<T>
{
    private List<T> stackElements = new List<T>();

    public void Push(T element)...
    ...
}

MyStack.csproj/tests.cs:
public partial class MyStack<T>
{
    public static void TestPush()
    {
        MyStack<int> s = new MyStack<int>();
        s.Push(42);
        Assert.AreEqual(1, s.stackElements.Count, "Push failed");
        Assert.AreEqual(42, s.stackElements[0], "Element not on stack after Push");
        ...
    }
}
Now, you might be asking how such a static test method gets executed:
Although the intra-component test method is static and not a regular unit test, it uses the NUnit framework by calling Assert methods. So it acts as much like a unit test method as possible. In order to execute it, though, it needs to be called from a regular unit test method, which I locate in the test client project:
test.MyStack.csproj/tests.cs:
[Test]
public void TestPush()
{
    MyStack<int>.TestPush();
}
That's it! All tests are factored out into a separate test client, except for just a few which need to be very close to the class to be tested. Those live in static methods to which regular test methods delegate their work.
There's only one drawback so far: the MyStack component now contains a test method which any client could call. That's not how production code should look.
To get rid of this problem, I surround the static test method as well as the regular Unit Test method calling it with a
#if DEBUG
...
#endif
preprocessor statement.
My component test bed now looks like this:
And I'm very happy with its separation between implementation and tests, which is as clear as possible. Using a partial class for the few necessary tests within the component under development makes them very easy to spot, maintain, and exclude from release code.
Interesting method and I like it. Just as a tip, rather than using a #IFDEF statement around code you don't want compiled, I use a [Conditional("DEBUG")] attribute on the method. I find it a little cleaner.
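As a sketch of that suggestion (reusing the TestPush() from the article; the attribute usage is standard .NET, but treat the exact placement as illustrative):

```csharp
using System.Diagnostics;

public partial class MyStack<T>
{
    [Conditional("DEBUG")] // call sites are stripped from non-DEBUG builds
    public static void TestPush()
    {
        MyStack<int> s = new MyStack<int>();
        s.Push(42);
        Assert.AreEqual(1, s.stackElements.Count, "Push failed");
    }
}
```

One tradeoff: [Conditional] removes the calls, but the method body itself is still compiled into the release assembly, whereas #if DEBUG removes the code entirely.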
I totally disagree with this...
1) You're exposing your classes as partial classes which means that anyone can come along and extend them but without using OO. This is just a bad use of partial classes.
2) If you're using VS to do your testing, utilize the PrivateObject class to access private members, etc. Otherwise, create a general "UnitTesting" assembly you can use in your test cases that use Reflection to get into the private members.
@Matt: 1) A partial class does not expose anything to the outside. It cannot be extended in another assembly. So not just "anyone can come along and extend" it.
2) I'm not using VSTS and wanted to show a way to structure tests involving access to private members which works for any unit testing framework.
And I'm not fond of using reflection, because it introduces dependencies on hidden implementation details into tests lying outside the class to be tested.
Code within the class to be tested, like my static method in the partial class, though, "is allowed" to know about private details of the class.
-Ralf
Wouldn't friend assemblies solve the unit testing problem much easier?
@James: No, the InternalsVisibleToAttribute does not help, since then the assembly of the class to be tested would need to know the test client assembly. This would be the wrong direction for any dependency/coupling.
Not sure I like this approach. While tantalizingly easy and simple, I'd argue that you're Unit Testing the wrong thing to begin with.
The Stack<T> class contract only states that you pop back what you pushed in. How it does that is an implementation detail - even to your TestFixture.
I don't know if/how you can seperate a Stack into a Push and a Pop responsibility - one implies the other. I think the Unit Test should test the contract - not the implementation. To bolster my argument, this particular Stack example is even listed in "JUnit AntiPatterns".
Of course, you're free to have implementation tests as well to verify your own class. But, in that case, inheriting your TestFixture from Stack<T> would probably be cleaner and allow a lot more flexibility in your testing.
Why are you testing the internals? If your class is passing 100% of the tests you wrote around its public operations, then it really doesn't matter to any user of your class whether the private implementation fails in some way.
Also, whenever you are tempted to test private implementation, you should take that as a signal to refactor your code into more classes and assemble your solution using composition.
@MB: Of course the TestPush() method tests an implementation detail. And I think I made clear I don't like that very much, but at the same time I think it's necessary. Testing of MyStack has to start somewhere, so I picked the Push() method. And a new test should add only one new method to test. Testing two untested methods is like changing two parameters in an experiment: not a good idea. So I'm testing just Push(). In order to check if it's working correctly, I thus need to rely on implementation details.
This approach should of course be kept to a minimum, as I demonstrated by using it for just one method.
Deriving a test class from MyStack<T> does not work, since it's a generic class which a testing framework cannot instantiate.
The source you listed shows how not to test a stack class, and I agree with them. They criticize exposing implementation details in the public interface just for testing purposes. But that's not what I'm doing. Quite the contrary: I'm showing an approach to keep such details hidden and (!) be able to use them when needed.
@bonder: I'm using knowledge of implementation details in one of four tests, because not using it would complicate the test by using two untested methods where I just want to test one.
I agree with you that this is not the best way to test. Testing just the public interface of a class is the way to go, wherever possible. But as I tried to show, it's not possible here.
I don't agree with you that any perceived need to test with the help of implementation details is a sign that refactoring is needed, as my example shows. Refactoring would not help in testing the MyStack<T> class in any way.
I prefer having my test projects share the same namespace as my production code. Instead of "test.MyStack" I'd have "MyStack.Tests". An added benefit is that you get an automatic "using" statement.
@Udi: I see what you mean. Yes, why not... But since I like my tests to be "real clients" of the code under test, I want them that close to it. A production client of the component to test likely is not in the same namespace, so I like to see what´s necessary to work with the component (e.g. a namespace, an interface).
Hi Ralf,
I still think that your method misses one important test case: you must also test the release build of your application. Testing only the debug build is in my opinion not an option. I would even go so far as to test and debug only release builds, since this is what the customer will get. Since all debug assemblies have JIT optimizations turned off, you can miss a) JIT errors, b) garbage collection issues, c) performance/timing/threading dependent errors.
I therefore vote for the InternalsVisibleTo attribute:
You could test your internals much more simply by making the class members which must be covered by white box tests internal, and still get full coverage. The dependency issue is not really one: I hope your product code does not depend on test code. This is what the customer gets; he will not see any changed dependencies in your product assemblies. It is additional effort to change the product assembly's source if you change the test assembly name, but hey, doing tests is also additional effort, so I am willing to take this. With your solution you would need to build the same assembly four times: debug/release/debugTest/releaseTest. This way you have generated much more effort to maintain consistency of these many different builds for your configuration management team. It will lead to confusion when you accidentally mix, during your tests, an assembly from one build with the assemblies from another one and find yourself in nirvana. With InternalsVisibleTo I only need debug/release builds. Nothing more.
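For reference, a minimal sketch of the InternalsVisibleTo approach being advocated here (the assembly name is the test project from this article; the attribute goes into the production project):

```csharp
// In the MyStack production assembly, e.g. in AssemblyInfo.cs
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("test.MyStack")]
```

With this in place, members declared internal in MyStack become visible to the test.MyStack assembly in both debug and release builds, which is exactly the coupling direction Ralf objects to in his replies.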
@Alois: You have a point here: my test code within the assembly under test would not be included in a release build. If that's to be tested, you can't use my suggestion, or you remove the #if DEBUG pragma.
Nevertheless it doesn't seem right to include the name of a test project in the assembly under test. This would force any test code to go into that test assembly.
Also, publishing private stuff as internal does not feel right either.
-Ralf
I think we should differentiate between two test scenarios:
1. Testing of the components (public) contract.
2. Testing during development, while the contract is not fulfilled yet.
For 1) you need only access public stuff. Right!
For 2) you often need to access private or internal stuff. You don't want to wait until your component is working in order to do tests. You want to see if things work as you intended. Did you close an open file, release an unused buffer? ...
For these reasons I like Ralf’s idea of partial classes.
I would however not necessarily put test code in the partial class, but implement public accessors there. Like making "private List<T> stackElements" public through a public property like:
public List<T> TestAccess_stackElements
{
    get { return stackElements; }
}
-Olivier
Unit testing a library with a nice public interface is one thing. Unit testing an application is quite another. Consider an active object with a TCP/IP interface and no public methods, except maybe Dispose(). You cannot unit test this without using reflection. I don't like reflection, since its string-based name binding is brittle and complicates refactoring.
Using the partial keyword provides a neat solution. This keyword only instructs the compiler to allow other source files to add to the class, and does not change the behaviour or interface of the class in any way. The final type is created at compile time and cannot be changed later.
I also like to keep unit tests and implementation code separate, but I don't like the #if DEBUG approach. One solution is not to reference the assembly you are testing, but to include the class's source code file directly in the unit tests' assembly (add it as a link if necessary). In this way the implementation code stays completely untouched, and you can test any kind of build. Now it also does not matter how you change the class's interface during testing, as clients would have to reference the unit test assembly to see the modified class (you can also not reference the unit test and production assemblies at the same time). | http://weblogs.asp.net/ralfw/archive/2006/04/14/442836.aspx | crawl-002 | refinedweb | 2,302 | 63.9 |
A Deque is a double-ended queue: a data structure that allows adding or removing elements at either end. In this tutorial, we will cover what the Deque interface is, its declaration, how a Java deque works, how to create a deque in Java using the classes that implement it, and its methods, with example programs.
This tutorial on Deque Interface in Java includes the following topics
- Java DeQue
- Deque Interface Declaration
- Working of Deque
- Classes that implement Deque
- Creating a Deque
- Java Deque Example
- Methods of Deque Interface in Java
- Deque Implementation In Java
- Deque as Stack Data Structure
- Implementation of Deque in ArrayDeque Class
Java DeQue
Deque in Java is an interface in the java.util package, added in Java 6. It extends the Queue interface and declares the behavior of a double-ended queue. With a Deque we can add or remove elements at both ends of the queue. A Deque can function as a standard first-in, first-out queue or as a last-in, first-out stack. Most Deque implementations (such as ArrayDeque) do not allow inserting null elements, although LinkedList does.
Deque Interface Declaration
public interface Deque<E> extends Queue<E>
Working of Deque
In a normal queue, we add elements at the rear and remove them from the front. In a deque, we can insert and remove elements at both the front and the rear.
Classes that implement Deque
To use the Deque functionality, you create an instance of one of the two classes that implement the Deque interface. They are as follows:
- LinkedList
- ArrayDeque
Creating a Deque
Before using a deque in Java, we must create an instance of one of the classes that implement the Deque interface. Here is an example of creating a deque instance backed by a LinkedList or by an ArrayDeque:
// LinkedList implementation of Deque
Deque deque = new LinkedList();

// Array implementation of Deque
Deque deque = new ArrayDeque();
Java Deque Example
import java.util.*;

class DequeExample {
    public static void main(String args[]) {
        Deque<String> dq = new LinkedList<>();

        // adding elements to the deque
        dq.add("Ajay");
        dq.add("Vijay");
        dq.add("Rahul");
        dq.addFirst("Amit");
        dq.addLast("Sumit");
        System.out.println("Deque elements are: " + dq);

        // remove the last element
        System.out.println("remove last: " + dq.removeLast());

        // returns the element at the head of the deque without removing it;
        // returns null if the deque is empty
        System.out.println("peek(): " + dq.peek());

        // returns and removes the head element of the deque
        System.out.println("poll(): " + dq.poll());

        // returns and removes the first element of the deque;
        // returns null if the deque is empty
        System.out.println("pollFirst(): " + dq.pollFirst());

        // display the remaining deque elements
        System.out.println("After all operation deque elements are: " + dq);
    }
}
Output:

Deque elements are: [Amit, Ajay, Vijay, Rahul, Sumit]
remove last: Sumit
peek(): Amit
poll(): Amit
pollFirst(): Ajay
After all operation deque elements are: [Vijay, Rahul]
Methods of Deque Interface in Java
1. add(E e): This method is used to insert the specified element at the tail.
2. addFirst(E e): This method is used to insert the specified element at the head.
3. addLast(E e): This method is used to insert the specified element at the tail.
4. E getFirst(): This method returns the first element of the deque.
5. E getLast(): This method returns the last element of the deque.
6. offer(E e): This method adds an element to the tail of the deque and returns true if the element was added.
7. offerFirst(E e): This method adds an element to the head of the deque and returns true if the insertion was successful.
8. offerLast(E e): This method adds an element to the tail of the deque and returns true if the insertion was successful.
9. removeFirst(): It removes the element at the head of the deque.
10. removeLast(): It removes the element at the tail of the deque.
11. push(E e): This method adds the specified element at the head of the queue
12. pop(): It removes the element from the head and returns it.
13. poll(): Returns and removes the head element of the deque; returns null if the deque is empty.
14. pollFirst(): Returns and removes the first element of the deque; returns null if the deque is empty.
15. pollLast(): Returns and removes the last element of the deque; returns null if the deque is empty.
16. peek(): Returns the element at the head of the deque without removing it; returns null if the deque is empty.
17. peekFirst(): Returns the first (head) element of the deque without removing it; returns null if the deque is empty.
18. peekLast(): Returns the last (tail) element of the deque without removing it; returns null if the deque is empty.
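The offer/peek/poll variants can be seen side by side in a short sketch (the class name is arbitrary):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class DequeMethodsDemo {
    public static void main(String[] args) {
        Deque<Integer> dq = new ArrayDeque<>();
        dq.offerFirst(2);                    // [2]
        dq.offerLast(3);                     // [2, 3]
        dq.offerFirst(1);                    // [1, 2, 3]
        System.out.println(dq.peekFirst());  // 1 (head, not removed)
        System.out.println(dq.peekLast());   // 3 (tail, not removed)
        System.out.println(dq.pollLast());   // 3 (tail, removed)
        System.out.println(dq);              // [1, 2]
    }
}
```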
Deque Implementation In Java:
Deque as Stack Data Structure
The Java Collections Framework also provides a Stack class. However, it is recommended to use a Deque as a stack instead of the Stack class, because the methods of Stack are synchronized, which adds overhead.
The following are the methods the Deque interface provides to implement a stack:
- push(E e) – Adds the specified element at the head of the deque.
- pop() – Removes the element at the head and returns it.
- peek() – Returns the element at the head of the deque without removing it; returns null if the deque is empty.
Implementation of Deque in ArrayDeque Class Example
import java.util.Deque;
import java.util.ArrayDeque;

class Main {
    public static void main(String[] args) {
        Deque<String> stack = new ArrayDeque<>();

        // push() adds elements at the head, so the deque behaves as a stack
        stack.push("Java");
        stack.push("Kotlin");
        stack.push("Scala");
        System.out.println("Stack: " + stack);

        System.out.println("peek(): " + stack.peek());
        System.out.println("pop(): " + stack.pop());
        System.out.println("Stack after pop: " + stack);
    }
}
Using Guardian OpenPlatform API
Guardian Open Platform is a new API from The Guardian for accessing its publications, news, and articles. The API is plain HTTP REST, so it is very easy to use. It is currently in beta, so after you register your app you will have to wait a few days to be activated. Using the API you can search for and fetch articles published by the Guardian and a few other newspapers. There is an interesting example of a Google Wave robot that inserts search results (with Guardian publications) into a Wave.
OpenPlatform and Python

There is a complete library, openplatform-python, which covers the whole API:
The following code returns articles matching the query term "Robert Kubica":
from guardianapi import Client

client = Client('Your API Key')
results = client.search(q='robert kubica')

print results.count()
for item in results:
    print item['headline']
    # content
    # print item['typeSpecific']['body']
    print
Japanese grand prix - as it happened
Bahrain grand prix - as it happened
European grand prix - live
Chinese grand prix lap-by-lap - as it happened
Barcelona grand prix - as it happened
Malaysian grand prix - as it happened
Kubica off the pace but fast enough
Breakthrough for Kubica but the pits for Hamilton
Breakthrough for Kubica but the pits for Hamilton
Kubica announces his arrival as championship contender

More examples can be found on the library website.
OpenPlatform and PHP

There is a similar library, openplatform-php, that allows PHP scripts to use the API easily. There is an example in the library package.
RkBlog | https://rk.edu.pl/en/using-guardian-openplatform-api/ | CC-MAIN-2021-31 | refinedweb | 242 | 56.69 |
Java 9 provides an interactive REPL tool for testing code snippets rapidly without a test project or a main method, so we can learn and evaluate Java features easily. In this tutorial, we will look at how to use the Java 9 JShell REPL.
Contents
I. Start JShell
1. Run
We can run JShell with the jshell command, available in the ${JAVA_HOME}/bin directory:
This is how the JShell command line tool looks like:
Now we do not need to create a Java project or define a public static void main(String[] args) method to test code. We can just write and run it immediately.
2. Some commands
After executing the /help command, we can see other useful commands:
For example, /help /list shows the detailed options of the /list command:
II. JShell features
1. Default import types
To see the set of default imports, just use the /import command:
For example, java.util.stream.* is imported by default, so stream code runs without an explicit import:
If we want to use classes or interfaces from another package, we have to import them:
import java.nio.CharBuffer
*Note: for simple statements, we do not need a trailing semicolon.
2. Expressions
A valid Java expression is evaluated and its value is assigned to a scratch variable.
In the example, $1 and $2 are the scratch variables that receive the values.
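A hypothetical transcript (the scratch-variable numbers depend on your session):

```text
jshell> 2 + 3
$1 ==> 5

jshell> $1 * 10
$2 ==> 50
```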
3. Variables
We can declare and initialize our variables:
4. Method
We can also define methods with JShell:
If we redefine a method with the same name, it replaces the previous definition.
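For example, a hypothetical session defining and then redefining a method:

```text
jshell> int addTwoNumber(int a, int b) { return a + b; }
|  created method addTwoNumber(int,int)

jshell> addTwoNumber(2, 3)
$1 ==> 5

jshell> int addTwoNumber(int a, int b) { return Math.addExact(a, b); }
|  modified method addTwoNumber(int,int)
```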
5. Tab-Completion
Tab completion uses the javac lexer plus custom, table-driven code. It saves time when typing, since we only need to enter part of a name.
For example, just type add and press the Tab key; JShell will complete the code to addTwoNumber(.
If we define another method:
Type add and press the Tab key; JShell now shows the matching candidates. That means we just need to type a few more characters to make the name unique, and then JShell will complete it.
6. List
– list variables using /vars
– list methods using /methods
– list everything we have typed using /list
Using the /help command, we can see more functions.
Last updated on September 11, 2018. | https://grokonez.com/java/java-9/java-9-jshell-repl | CC-MAIN-2021-31 | refinedweb | 343 | 63.9 |
I have a method in my class that uses an API to get data. Next I use JsonConvert.DeserializeObject to create another instance of the same class, then I copy the values to the object I'm in, which is where I wanted the values in the first place. Although this works just fine, it seems like there must be a better way to do this. (I know it could be further refactored for SRP. I'm just trying to find a more efficient way to get the values into the members.)
Can anyone show me a better way?
public class MyModel
{
    public string Description { get; set; }
    public string Last_Name { get; set; }
    public string Nickname { get; set; }

    public void Load()
    {
        var results = {code that gets stuff}
        MyModel item = JsonConvert.DeserializeObject<MyModel>(results.ToString());
        this.Description = item.Description;
        this.Last_Name = item.Last_Name;
        this.Nickname = item.Nickname;
    }
    .
    .
    .
}
Do you want this?
class A
{
    public int Id { get; set; }
    public string Value { get; set; }

    public void Load()
    {
        var json = @"{Id:1,Value:""Value""}";
        JsonConvert.PopulateObject(json, this);
    }
}
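A quick usage sketch (assuming the Json.NET package is referenced; JsonConvert.PopulateObject deserializes into the existing instance instead of allocating a new one):

```csharp
var a = new A();
a.Load();
Console.WriteLine(a.Id);    // 1
Console.WriteLine(a.Value); // Value
```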
NAME
mi_switch, cpu_switch, cpu_throw - switch to another thread context
SYNOPSIS
#include <sys/param.h>
#include <sys/proc.h>

void
mi_switch(void);

void
cpu_switch(void);

void
cpu_throw(void);
DESCRIPTION
The mi_switch() function implements the machine independent prelude to a thread context switch. It is called from only a few distinguished places in the kernel code as a result of the principle of non-preemptable kernel mode execution. The various major uses of mi_switch() can be enumerated as follows:

1. From within sleep(9), tsleep(9) and msleep(9) when the current thread voluntarily relinquishes the CPU to wait for some resource to become available.

2. After handling a trap (e.g. a system call, device interrupt) when the kernel prepares a return to user-mode execution. This case is typically handled by machine dependent trap-handling code after detection of a change in the signal disposition of the current process, or when a higher priority thread might be available to run. The latter event is communicated by the machine independent scheduling routines by calling the machine defined need_resched().

3. In the signal handling code (see issignal(9)) if a signal is delivered that causes a process to stop.

4. When a thread dies in thread_exit(9) and control of the processor can be passed to the next runnable thread.

5. In thread_suspend_check(9) where a thread needs to stop execution due to the suspension state of the process as a whole.

mi_switch() records the amount of time the current thread has been running in the process structures and checks this value against the CPU time limits allocated to the process (see getrlimit(2)). Exceeding the soft limit results in a SIGXCPU signal being posted to the process, while exceeding the hard limit will cause a SIGKILL. If the thread is still in the TDS_RUNNING state, mi_switch() will put it back onto the run queue, assuming that it will want to run again soon. If it is in one of the other states and KSE threading is enabled, the associated KSE will be made available to any higher priority threads from the same group, to allow them to be scheduled next.
After these administrative tasks are done, mi_switch() hands over control to the machine dependent routine cpu_switch(), which will perform the actual thread context switch. cpu_switch() first saves the context of the current thread. Next, it calls choosethread() to determine which thread to run next. Finally, it reads in the saved context of the new thread and starts to execute the new thread.

cpu_throw() is similar to cpu_switch() except that it does not save the context of the old thread. This function is useful when the kernel does not have an old thread context to save, such as when CPUs other than the boot CPU perform their first task switch, or when the kernel does not care about the state of the old thread, such as in thread_exit() when the kernel terminates the current thread and switches into a new thread.

To protect the runqueue(9), all of these functions must be called with the sched_lock mutex held.
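As an illustrative sketch only (not taken from the kernel sources), a voluntary switch site therefore follows this locking pattern:

```c
/* Hypothetical in-kernel call site; sched_lock must be held across the switch. */
mtx_lock_spin(&sched_lock);
/* ... record why we are yielding, update the thread state ... */
mi_switch();                    /* does not return until we are rescheduled */
mtx_unlock_spin(&sched_lock);
```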
SEE ALSO
issignal(9), mutex(9), runqueue(9), tsleep(9), wakeup(9) | http://manpages.ubuntu.com/manpages/hardy/man9/mi_switch.9.html | CC-MAIN-2013-48 | refinedweb | 529 | 67.89 |
Asynchronous programming was simplified greatly with the release of .NET Framework 4.5, which introduced the async and await keywords for writing asynchronous methods. Prior to this, the .NET Framework had support for asynchronous programming, but it was a bit complicated to implement and understand. For example, there was the Asynchronous Programming Model (APM), which exposed long-running framework API methods as pairs of Begin and End methods. For example, the ADO.NET ExecuteReader() method has the APM counterpart methods BeginExecuteReader() and EndExecuteReader() in .NET Framework 2.0. Similarly, there was another methodology called the Event-based Asynchronous Pattern (EAP), which modeled asynchronous work as events. For example, WebClient.DownloadStringAsync and WebClient.DownloadStringCompleted. Though these approaches worked well, they were not simple to implement. Let's see how async and await help us implement asynchronous methods in a simpler way.
How to use async and await for Asynchronous Programming?
The biggest advantage of using async and await is, it is very simple and the method looks very similar to a normal synchronous methods. It does not change programming structure like old models (As mentioned above) and the resultant asynchronous method look similar to synchronous methods. The compiler takes the complexity part and takes care everything when we use async and await.
Asynchronous method Definition
An async method is represented by using async modifier in the method signature.
If the method has any return types they are enclosed as part of Task<TResult> object.
If the method does not return any values then the return type is just Task. Void is also a valid return type and it is used for asynchronous event handlers.
Every async method should include at least one await operator in the method body to take the advantage of asynchronous programming.
It is a recommended practise to name the asynchronous methods with ending string as Async (WriteAsync(), ReadAsync()). Let’s move ahead and implement a very simple method to demonstrate asynchronous methods.
Example:
Let’s build simple async methods that greets us with a welcome message. Include System.Threading.Tasks namespace for writing asynchronous methods.
public async static Task GreetAsync()
{
Console.WriteLine("Async Application Started!");
//Call Async method
Task<string> greetMsg = GetGreetingsAsync();
//Do your other stuffs synchronously while async method is in progress
Console.WriteLine("Async Method in started....");
Console.WriteLine("Current Time: " + DateTime.Now);
Console.WriteLine("Awaiting result from Async method...");
//All work completed, wait for async method to complete
string msg = await greetMsg;
//Print Async Method Result
Console.WriteLine("Async method completed!");
Console.WriteLine("Current Time: " + DateTime.Now);
Console.WriteLine("Async method output: " + greetMsg.Result);
Console.WriteLine("Async Application Ended!");
Console.Read();
}
public async static Task<string> GetGreetingsAsync()
{
//simulate long running process
await Task.Delay(10000);
return "Welcome to Async C# Demo!";
}
As you can see, the asynchronous methods are decorated by using async modifier. When the request arrives and the GreetAsync() method is called, the asynchronous call is made when the compiler calls GetGreetingsAsync() method.
Task<string> greetMsg = GetGreetingsAsync();
This method(GetGreetingsAsync()) represents (actually simulates) a thread blocking operation or an I/O call which can make the current thread to potentially wait if it is synchronous method. For easy understanding, i will just make a delay of 10 seconds by calling Task.Delay(). Since, they are async methods, the compiler calls the method and returns a Task<string> object without waiting for its completion. For now, let’s just forget the await in GetGreetingsAsync() method which is used to just simulate a thread blocking process like reading a file in network share. The Task<string> object represents an ongoing asynchronous task that will return a string value on completion. This means, the current thread that calls the GetGreetingsAsync() returns without waiting and continues to execute subsequent lines of code after this method call synchronously. In the above code, it prints current datetime and executes next statement that prints “Async Method in started....” while the async method GetGreetingsAsync() is still executing.
When the thread subsequently sees the next line which has await operator the thread suspends its execution and it is returned to the method that called GreetAsync() or it is placed back into the thread pool if it is the top method(like Controller Action or Button click event handler in windows app) until the async method completes it execution and returns. The await operator hints that the code after this point requires the output of the async method that is still executing and cannot continue untill it finishes.
string msg = await greetMsg;
It is here the benefit of using asynchronous methods are taking place. The threads that is returned to the thread pool is available for other requests for processing. If it is a synchronous method the thread will be busy waiting without doing anything. Once the asynchronous method completes the execution, the async methods updates the state to Task object marking it completed and the string value is returned to msg variable. Now, when the value is returned the subsequent lines of code in the method is executed by a new thread from thread pool. This means, the thread that called the async method and the thread that does the remaining execution may be different.
This process is repeated in multiple nested async method calls and the thread is returned to callee or thread pool instead of just waiting for blocking operation to complete.
When executed you will get output like below,
After printing the Current Time (first occurrence in the above output) it waits for the async method to complete. You can see the start and end timestamp (squared) has 10 second delay before and after async method completion. During this waiting time the thread will be available for other request processing.
Note – The above method can be called from Main() method like below. Without calling Wait() the main method will exit without waiting for async method to complete.
static void Main(string[] args)
{
GreetAsync().Wait();
}
What an Asynchronous method not do?
The asynchronous method does not create new thread to run the method. The asynchronous methods runs on the current synchronization context and uses the time on the calling thread. It is just used when there is a blocking operations like calling a web service, reading a file, reading a website page that blocks current thread.
An async method cannot declare ref or out parameters, but it can call methods that have ref or out parameters.
Advantages
Using asynchronous methods increases the responsiveness of the application. For example, in a desktop application when the compiler awaits a blocking operation in an asynchronous event handler, the application UI will still respond to user actions like resize, minimizing, etc since the UI thread is returned back. In an synchronous application, this will freeze the window as if the application stopped responding. This is because the UI thread will be waiting for the blocking operation to complete and the app UI will not respond to any user actions.
Helps in application scaling in server based application. When await is encountered in Asp.Net application, the thread is returned to the thread pool for other request to use the thread. In synchronous method, the thread will be waiting for the blocking operation to complete. When the number of concurrent request increases and if all threads were waiting then the application will throw 503 (Service unavailable) or when the waiting request queue limit exceeds it will throw 503 status (Server Too Busy)
Async Methods in Base Class Library
.Net Framework Base Class libraries already have some useful asynchronous methods in some of the in-built classes. Some commonly used classes are,
Happy Learning!! | http://www.codedigest.com/quick-start/10/learn-asynchronous-programming-async-await-in-c-in-10-minutes | CC-MAIN-2017-22 | refinedweb | 1,267 | 55.84 |
Most discussion is on Typelevel Discord:
extendsin the same project very similar code compiles without problems. I'll look further tomorrow.
If I have an IO with a timeout:
IO(somefn).timeout(100 millis)
Is it possible for
somefn to create an IO which doesnt get timed out?
def somefn = { readDb >> computeSomething >> fireOffaTask >> updateDb }
I'm trying to have the
fireOffaTask run without being effected by timeout.
ioa.start.voidwill still get timeout exception.
never prints second partnever prints second part
def firenforgetExample: IO[Unit] = { (for { _ <- IO(println("start")) _ <- IO.shift >> longTask.start.void _ <- IO(println("end")) _ <- IO.never } yield ()).timeout(500 millis) } def longTask: IO[Unit] = for { _ <- IO(println("first part")) _ <- IO.sleep(600 millis) _ <- IO(println("second part")) } yield ()
start first part end
IO.shiftdoesn't do anything.
startalready introduces an async boundary IIUC.
@djspiewak
it's interesting that it's actually slower than the new Throwable().getStackTrace() approach and less flexible
The only way to get
new Throwable in the correct place is to modify every
IO constructor – if you do that it’s impossible to turn tracing on/off NON-globally – ZIO supports
_.traced and
_.untraced regions for lexically-scoped control of tracing, which allows you to omit tracing overhead for the parts that are heavy on flatMaps and low on information (
fs2,
Gen, etc.) or in hot spots and gain back 2x performance before tracing patch
@neko-kai Lexical control definitely becomes more complicated with a stack trace solution, but it isn’t impossible by any means. Both solutions suffer from instrumenting call sites which evaluate prior to the run loop, though the costs of such instrumentation is fairly low because it’s bounded by the size of the static program. Dynamic loops are the only place where performance has to be really really tight in the disabled case, and those are the easiest to disable from the run loop.
Really, the problem with bytecode instrumentation is it doesn’t work well at all with polymorphism. ZIO’s ecosystem is pretty monomorphic, so that limitation is felt less frequently, but Cats Effect tends to be used in polymorphic contexts essentially by default, where instrumentation based tracing doesn’t provide much useful information at all. Single frame (fast) traces are also limited, but at least you have the option then of multi frame (slow) tracing when you need it.
Not saying either design is wrong, really. Just different trade offs and challenges, sparked by very different ecosystems and standard usage patterns.
Also agreed that tracing should always be on by default for the same reason we include debug symbols in production builds, and the runtime costs need to be negligible.
@djspiewak
Both solutions suffer from instrumenting call sites which evaluate prior to the run loop
Well, all ZIO tracing happens directly in the run loop – it can’t happen prior, since the data is just not created yet.
Really, the problem with bytecode instrumentation is it doesn’t work well at all with polymorphism. ZIO’s ecosystem is pretty monomorphic
Not saying either design is wrong, really. Just different trade offs and challenges, sparked by very different ecosystems and standard usage patterns.
I’ve made ZIO tracing specifically so that it would work well with tagless final – rejecting e.g. static instrumentation with macros because that would never work with TF. Now, it doesn’t work well with monad transformers, that’s unfortunately correct, but I disagree that default even in CE ecosystem is to use monad transformers, total IME ahead: I’ve seen most people stick to F=IO in business logic or use monix Task directly, and the only person I’ve seen use cats-mtl also used it with Ref instances – that tracing is ok with, not transformers.
I sketched the kind of traces you’d get from an exception in place of constructor…
import cats.data.OptionT import zio._ import zio.interop.catz._ import zio.syntax._ object TestTraceOfOptionT extends zio.App { def x(z: Any => Task[Int]): OptionT[Task, Int] = OptionT.liftF[Task, Int](1.succeed).flatMap(_ => y(z)) def y(z: Any => Task[Int]): OptionT[Task, Int] = OptionT[Task, Int](ZIO.some(()).flatMap { case Some(value) => z(value).map(Some(_)) case None => ZIO.none }) def z: Any => Task[Int] = _ => throw new RuntimeException override def run(args: List[String]): UIO[Int] = { x(z).getOrElse(0).orDie } }
java.lang.RuntimeException at TestTraceOfOptionT$.$anonfun$z$1(TestTraceOfOptionT.scala:13) at TestTraceOfOptionT$.$anonfun$y$1(TestTraceOfOptionT.scala:10) at zio.internal.FiberContext.evaluateNow(FiberContext.scala:272) at zio.internal.FiberContext.$anonfun$fork$1(FiberContext.scala:596))
For OptionT this does look pretty relevant, but I don’t know what’ll happen with suspending transformers like Free or Stream. I do have a gut feeling that supporting every transformer will require a lot of manual work, though.
Certainly hard coding transformer or third party library call sites is intractable. Stack frame instrumentation though can ameliorate this issue since you have the thunk class and the full runtime stack, so you have the opportunity to trace up. Also, as I mention in the gist, providing extra frame information is really helpful for when that kind of thing fails and you’re trying to track down an error.
To be clear, I don’t think there’s anything wrong with ZIO’s tracing. It gives information that is very useful in a monomorphic or bijective polymorphic context (like your tagless final example). The problem is it is defeated immediately by something like Stream or even some library usage (tracing through an http4s app is varyingly useful, for example), and it doesn’t work at all with transformers. Again, that’s a fair tradeoff, particularly given the ZIO ecosystem’s focus on a single master effect without induction. I just think it’s possible to improve on that, albeit by accepting a totally different tradeoff (greater complexity in implementing lexical configuration), and given how cats IO is often used, it seems worthwhile to at least explore that direction.
Either way, all of these approaches have some unfortunate caveats. It’s possible to build stellar examples of the strengths of both, and also of the weaknesses of both. I’m not sure there’s ever going to be a silver bullet.
I'm trying to understand a bit more about cats-effect, and thus did some experiments by modifying the example here:
what I did to make it have only one
ExecutionContext and
ContextShift and a simple producer/consumer using an
MVar. However, here I stumbled upon a (to me) puzzling behaviour. In my first example using an explicit
ExecutionContext and
ContextShift it works as I expect, it keeps running till I terminate it:
in my second example though, I tried to trim down my application to rely on the default in IOApp, but here my program terminates almost immediately:
why does this happen? What should I read/see/etc to understand this behavior better? | https://gitter.im/typelevel/cats-effect?at=5daa34500ac62f4acd98c66e | CC-MAIN-2022-40 | refinedweb | 1,170 | 54.83 |
Hello everyone, I have had this problem for quite a while but never thought to come and ask on a forum. I just looked around because I thought it was a simple problem that required a key word in the coding or something. Turns out, I can't find any help anywhere.
My problem is I want to turn a user specified graphic made in a simple JFrame program into a jpeg (or really any 'picture' format) but I don't know how. The way I have saved it is by saving the pixels in the pic into an array that is printed into a text file. So really, my jpeg file is a text file. This runs far too slow because I need to be able to load the pic several times in animations.
Here is the code. It's a very simple, user size specified, black and white checkered pic that is saved into a text file to either be loaded or written over by a new pic the next time you run the program:
Code :
/* Hello, welcome to SimplePic. This program:
 * 1. asks whether you want to create a new pic or load the last pic created.
 * 2. if you choose to make a new pic, it asks for the width and height in pixels
 * 3. the program then creates a checkered pic of the specified size (or loads the last pic you made) starting with a black pixel in the top left corner always.
 * To see the save file, look for a text file called 'picText.txt' in the same folder as this program. It will have 2 numbers for the first 2 lines (width then height),
 * and then it will have (if word wrap is turned off; it doesn't have to be) a square of letters ababab... representing the pixel array in this program.
 *
 * MY MAIN PROBLEM: I want to convert this text file/pic into a jpeg or some kind of picture file so that:
 * 1. a text file is not necessary
 * 2. the program loads the pic much faster, like it was opening a pic you drew in microsoft paint.
 * I need this because I want to be able to create this pic, but then load it repeatedly in animations.
 *
 * Good luck and thank you.
 * -Ben */

//needed libraries
import javax.swing.JFrame;
import java.awt.*;
import java.util.*;
import java.io.*;

public class SimplePic extends JFrame {

    //variables
    private Color c1 = new Color(0, 0, 0);       //black
    private Color c2 = new Color(255, 255, 255); //white
    private char char1 = 'a';
    private char char2 = 'b';
    private char[][] pixels;
    private int width, height;
    private Container contents;

    //new SimplePic
    public SimplePic(int xsize, int ysize) {
        width = xsize;
        height = ysize;
        pixels = new char[xsize][ysize];
        generatePic();
        contents = getContentPane();
        setSize(xsize, ysize);
        setVisible(true);
    }

    //load SimplePic
    public SimplePic() {
        super("SimplePic");
        loadPic("picText.txt");
        contents = getContentPane();
        setSize(width, height);
        setVisible(true);
    }

    //generate pic: creates a fine checkered pic based on remainders when dividing by 2
    public void generatePic() {
        for (int y = 0; y < height; y++) {       //top to bottom
            for (int x = 0; x < width; x++) {    //left to right
                //if x and y coordinate pixel has no remainder, set to black
                if (x % 2 == 0 && y % 2 == 0) {
                    pixels[x][y] = char1;
                }
                //if x coordinate pixel has a remainder of 1, but y does not, set to white
                if (x % 2 == 1 && y % 2 == 0) {
                    pixels[x][y] = char2;
                }
                //if y coordinate pixel has a remainder of 1, but x does not, set to white
                if (x % 2 == 0 && y % 2 == 1) {
                    pixels[x][y] = char2;
                }
                //if x and y coordinate pixel has a remainder of 1, set to black
                if (x % 2 == 1 && y % 2 == 1) {
                    pixels[x][y] = char1;
                }
            }
        }
    }

    //render pic
    public void paint(Graphics g) {
        super.paint(g);
        Color color = c1;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                //convert char array to colors black and white
                switch (pixels[x][y]) {
                    case 'a': color = c1; break;
                    case 'b': color = c2; break;
                }
                g.setColor(color);
                g.fillRect(x, y, 1, 1);
            }
        }
    }

    //main method
    public static void main(String[] args) {
        //i did not bother making any code for input error checking, this is a bare bones example
        Scanner scan = new Scanner(System.in);
        System.out.println("new pic or load pic? \n(n or l)");
        String choice = scan.nextLine();
        char c = choice.charAt(0);
        if (c == 'n') {
            System.out.println("please enter width of pic (int)");
            int x = scan.nextInt();
            System.out.println("please enter height of pic (int)");
            int y = scan.nextInt();
            SimplePic newPic = new SimplePic(x, y);
            newPic.savePic("picText.txt");
            newPic.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        }
        if (c == 'l') {
            SimplePic loadPic = new SimplePic();
            loadPic.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        }
    }

    //save current pic
    public void savePic(String fn) {
        try {
            PrintWriter out = new PrintWriter(new FileWriter(fn));
            //the first 2 lines of the txt file are reserved for width and height
            out.println(width);
            out.println(height);
            //turn word wrap off in the txt file to see a checkered square of letters representing the pixels array
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    out.print(pixels[x][y]);
                }
                out.println();
            }
            out.close();
            System.out.println("Saved to " + fn);
        } catch (IOException e) {
            System.out.println("Error saving to " + fn);
        }
    }

    //load previous pic
    public void loadPic(String fn) {
        System.out.println("Reading " + fn + ", loading...");
        try {
            BufferedReader br = new BufferedReader(new FileReader(fn));
            String line;
            char charLine[];
            int numRows = 0;
            //read the first 2 lines of the txt file to get the width and height
            String swidth = br.readLine();
            String sheight = br.readLine();
            width = Integer.parseInt(swidth);
            height = Integer.parseInt(sheight);
            //turn the square of letters in the txt file into the pixel array (turn word wrap off in the txt file to see more clearly)
            pixels = new char[width][height];
            while ((line = br.readLine()) != null) {
                while (numRows < height) {
                    charLine = line.toCharArray();
                    int x = 0;
                    while (x < width) {
                        pixels[x][numRows] = charLine[x];
                        x++;
                    }
                    numRows++;
                    line = br.readLine();
                }
            }
            br.close();
        } catch (IOException e) {
            System.out.println("Error reading maze plan from " + fn);
            System.exit(0);
        }
    }
}
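Since the "MY MAIN PROBLEM" in the header comment is turning the pixel array into a real image file, here is a hedged sketch using java.awt.image.BufferedImage and javax.imageio.ImageIO. The class and method names below are illustrative, not part of the program above; PNG keeps the one-pixel checkering exact, whereas JPEG's lossy compression can blur it:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

class PixelImageWriter {

    // Convert the char-based pixel array ('a' = black, 'b' = white) into a
    // BufferedImage and write it out with ImageIO. Using "png" here keeps the
    // checkerboard lossless; pass "jpg" instead for JPEG output.
    static void writePng(char[][] pixels, File out) throws IOException {
        int w = pixels.length, h = pixels[0].length;
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = (pixels[x][y] == 'a') ? 0x000000 : 0xFFFFFF;
                img.setRGB(x, y, rgb);
            }
        }
        ImageIO.write(img, "png", out);
    }

    // Load the image back; this is fast enough to call repeatedly in animations,
    // which removes the need for the picText.txt save file entirely.
    static BufferedImage readBack(File in) throws IOException {
        return ImageIO.read(in);
    }
}
```

In paint(), you could then draw the loaded image in one call with g.drawImage(...) instead of one fillRect per pixel.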
I think I made it as clear as possible, but if you have any questions, I'd be glad to answer them. I really need this answered. | http://www.javaprogrammingforums.com/%20awt-java-swing/4626-convert-jframe-graphics-into-jpeg-file-printingthethread.html | CC-MAIN-2013-48 | refinedweb | 1,039 | 63.59 |
This document contains important information about the Sun StorageTek 5800 system, Version 1.0.1. Read this document so that you are aware of issues or requirements that can affect the installation and operation of the Sun StorageTek 5800 system.
Use the upgrade command to upgrade the Sun StorageTek 5800 system cluster to Version 1.0.1.
To perform an upgrade, follow these steps:
1. Make sure you have a copy of your current schema definition file.
Upgrading to Version 1.0.1 deletes your schema definition file, but you can replace it after the upgrade. Your existing data is not lost, but querying will not be available for approximately 12 hours while the system repopulates the database.
If you do not have a copy of your schema definition file, use the CLI command mdconfig -d to print the current schema file.
2. Edit your schema definition file to define lengths for fields and include table layouts appropriate to your configuration. See Enhanced Schema Definitions for information about these new features for this version.
3. Insert the DVD into the drive in the service node. Then, log into the CLI and type the following:

upgrade
Once the upgrade begins, the system checks to see that all nodes and disks are available and requires that you answer a few questions confirming the upgrade. After that, the process does not permit any changes or interruptions. However, it does display information about what is going on throughout the process. For example, the system displays messages as it unpacks the install image, upgrades the service node, the switches, and finally each node in the cluster.
If there is a problem with the images during an upgrade, the nodes will boot from the previous version.
If there is a power outage during an upgrade, the system might or might not come back online, depending on where in the process the upgrade was interrupted. If the system does not become operational when the power resumes, call your Sun Service representative.
During the last phase of the upgrade process, the system reboots. First, the service node reboots. Next, each node except one reboots. Then, the switches reboot, and finally, the last node reboots.
4. Use the CLI command netcfg clients to reconfigure your authorized clients.
You must reconfigure your authorized clients after the upgrade or no data traffic will be allowed on the system. If all clients are to be allowed access, issue the command with the all option (netcfg clients all).
5. Use the CLI command mdconfig to test your updated schema definition file.
Make sure the file reports no errors and that it defines the table layout you expect. Refer to the Sun StorageTek 5800 System Administration Guide for more information about the syntax and use of the mdconfig command.
6. Use the CLI command mdconfig -c to activate your updated schema definition file.
Refer to the Sun StorageTek 5800 System Administration Guide for more information about the syntax and use of the mdconfig -c command.
7. You can access data stored on the Sun StorageTek 5800 system immediately, but wait approximately 12 hours before issuing queries on the data to allow for full repopulation of the query engine.
You can use the CLI command sysstat to get some feedback about the progress of repopulation. See Enhanced System Status Monitoring for more information about the sysstat command.
This section describes enhancements included in this version of the Sun StorageTek 5800 system software.
The version command has been enhanced to display the system firmware version. To display the firmware version for each node, issue the version command followed by -v as follows:

version -v
The sysstat command allows you to monitor the StorageTek 5800 cluster state. For Version 1.0.1, the output of the sysstat command has been enhanced to provide more detailed information about the data checks that the system performs.
Issue the sysstat command from the CLI:

sysstat
The data that the sysstat command reports covers all online disks in the entire cluster.
The ifconfig command has been enhanced to allow you to set the address of the service node. If there is a conflict between the factory configured IP address on the service node and an address on your network, you can now set a new address for the service node using ifconfig with the sp0 option.
Issue the ifconfig command followed by sp0 and the IP address you want to use:

ifconfig sp0 <IP-address>
The shutdown command has been enhanced to allow you to power off the service node as well as the nodes. Issue the shutdown command with the -a option to power off the nodes and the service node. (The shutdown command without the -a option powers off just the nodes.)
This version of the Sun StorageTek 5800 system includes enhancements to the Document Type Definition (DTD) for the schema definition file.
The updated DTD is as follows. The sections that follow the DTD provide an example of a schema definition file that uses the new DTD and also describe the new features in the DTD for this version.
The following is an example of a valid schema using the revised DTD.
Using the new table element in the DTD, you can partition the schema into tables and specify each metadata field as a column within a particular table. You can greatly improve the performance of query and store operations by grouping metadata fields that commonly occur together in the same table and by separating metadata fields that do not commonly occur together into separate tables. Objects stored in the Sun StorageTek 5800 system become rows in one or more tables, depending on which fields are associated with that data.
For example, suppose you specify columns in a table named reference in the schema definition file as follows:
<table name="reference">
<column name="mp3.artist"/>
<column name="mp3.album"/>
<column name="mp3.title"/>
<column name="dates.year"/>
</table>
The reference table you create would have the layout shown in TABLE 1.
When an object is stored in the Sun StorageTek 5800 system with values for these metadata fields, its values become a row in the reference table.
To take full advantage of the new tables element to maximize query and store performance, it is important to understand field length values and to carefully plan which fields are grouped together in which tables. It is also important to plan which fields will be listed together in which indexes. Planning Tables and Planning Indexes provide more information on these considerations. For a summary of how to use the new table feature to maximize query performance, see Summary of Best Practices for Optimal Query Performance.
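To make the cost of poor grouping concrete, here is a hypothetical sketch of the row-per-table behavior just described. The table layouts come from this document's examples, but the mapping logic is an illustration, not the actual storage engine. Because dates.year below appears in two tables, storing a song also creates a mostly empty row in the books table:

```python
# Two tables as laid out in this document's examples; dates.year is
# (deliberately, for illustration) a column in both.
TABLES = {
    "reference": ["mp3.artist", "mp3.album", "mp3.title", "dates.year"],
    "books": ["book.author", "book.series", "book.title", "dates.year"],
}

def rows_for(metadata):
    """An object's metadata becomes one row in each table that holds any of
    its fields; fields the object lacks become empty (None) cells."""
    rows = {}
    for table, columns in TABLES.items():
        if any(c in metadata for c in columns):
            rows[table] = {c: metadata.get(c) for c in columns}
    return rows

song = {"mp3.artist": "Rod Stewart", "mp3.title": "Maggie May", "dates.year": 1971}
print(sorted(rows_for(song)))  # -> ['books', 'reference']
```

The wasted, nearly all-None books row for a song is exactly why fields that do not co-occur belong in separate tables, and why shared fields force extra rows.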
With this version, you can specify a length attribute for fields of type string. The length attribute is important because there are limits to the number of bytes that each table row and each index can store. See Planning Tables and Planning Indexes for more information.
In future releases of the Sun StorageTek 5800 system, trying to store a string that is longer than the specified length will result in an error message. For Version 1.0.1, you can successfully store a string that is longer than the specified length, but the string is truncated to fit the specified length, which might affect query results.
For example, suppose you specify the length for the field mp3.album as 12 and then store The Very Best of Rod Stewart in field mp3.album with object ID 010001287969bbdbb511dab28000e08159646c000019970200000000.
If you issue a query that includes the following specification:
"mp3.album = `The Very Best of Benny Goodman'"
The system will return object ID 010001287969bbdbb511dab28000e08159646c000019970200000000, because only the truncated value, the first 12 characters of The Very Best of Rod Stewart, was stored, and the first 12 characters of the query string match it.
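The effect is easy to reason about with a small sketch of the truncation semantics described above (this models the documented behavior; it is not the system's actual implementation):

```python
LENGTH = 12  # declared length of mp3.album in the schema

def store(value: str) -> str:
    """In Version 1.0.1, over-long strings are silently truncated on store."""
    return value[:LENGTH]

def matches(stored: str, query_literal: str) -> bool:
    """Equality is evaluated against the truncated stored value, so only the
    first LENGTH characters of the query literal can affect the result."""
    return stored == query_literal[:LENGTH]

stored = store("The Very Best of Rod Stewart")            # -> "The Very Bes"
print(matches(stored, "The Very Best of Benny Goodman"))  # -> True
```

Any two album titles sharing their first 12 characters become indistinguishable to queries, which is why declared lengths should comfortably exceed real data.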
Both store and query operations are more efficient if you store metadata attributes that always occur together, or even very commonly occur together, in the same table. For best performance, pay close attention to which metadata attributes occur together in your data, especially if those attributes are used in queries, and group those fields together into the same table.
Conversely, avoid putting fields into the same table that do not often occur together.
TABLE 2 lists the number of bytes that each element in a column consumes. The total amount of space consumed by all the columns in a table cannot exceed 8080 bytes.
Suppose the fields listed in TABLE 3 commonly occur together on data and will be used together in queries. (Note that three of these fields are in the namespace mp3 and one is in the namespace dates.)
To maximize query and store performance, you could include each of these fields as columns in the same table, called, for example, reference. When planning the reference table, calculate the total number of bytes used by all the columns combined, to make sure it is less than 8080, as follows:
100 (for system overhead) +
8 (2 per column for column overhead) +
512 (for mp3.artist) +
512 (for mp3.album) +
1024 (for mp3.title) +
8 (for dates.year)
____________________________
= 2164 bytes total
Since 2164 bytes is less than 8080 bytes, the total combined size of all columns is acceptable.
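The arithmetic above can be wrapped in a small checker. The per-type costs here are inferred from the worked example (roughly 2 bytes per declared character for strings, 8 bytes for dates.year, 100 bytes of per-row overhead, 2 bytes of per-column overhead) — verify them against TABLE 2 for your release before relying on them:

```python
ROW_OVERHEAD = 100     # per-row system overhead (assumed from the example)
COLUMN_OVERHEAD = 2    # per-column overhead
ROW_LIMIT = 8080       # maximum bytes for all columns in one table row

def column_bytes(col_type: str, length: int = 0) -> int:
    """Bytes one column consumes; strings cost 2x their declared length."""
    if col_type == "string":
        return 2 * length
    if col_type == "long":
        return 8
    raise ValueError(f"add this type's cost from TABLE 2: {col_type}")

def row_bytes(columns) -> int:
    """Total bytes for a table row, given (type, declared_length) pairs."""
    return ROW_OVERHEAD + sum(
        COLUMN_OVERHEAD + column_bytes(t, ln) for t, ln in columns
    )

reference = [("string", 256),  # mp3.artist -> 512 bytes
             ("string", 256),  # mp3.album  -> 512 bytes
             ("string", 512),  # mp3.title  -> 1024 bytes
             ("long", 0)]      # dates.year -> 8 bytes
total = row_bytes(reference)
print(total, total <= ROW_LIMIT)  # -> 2164 True
```

Running this while planning a schema catches over-budget tables before mdconfig does.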
In addition to the limits on the number of bytes consumed by the columns in a table, keep in mind the following considerations when planning the tables:
The system creates indexes on metadata fields to allow those fields to be queried more efficiently. Although you may use the fsViews element in the schema definition file to create virtual views for applications that have nothing to do with indexes, you also use the fsViews elements to specify the content of the indexes the system creates and maximize query performance.
For each fsView you create, the system creates an index of up to 15 fields that includes first the attributes in the fsView, followed by the fields specified in the filename element of the fsView, as long as those fields and attributes all come from the same table.
For fsViews that you create in order to specify indexes that improve query performance, follow these guidelines:
Suppose you want to have a query on the fields listed in TABLE 5.
To maximize query performance, you include each of these fields as columns within the same table, called books. To maximize performance even further, you create an fsView called, for example, bookview, that includes these fields and no others so that an index is created on these fields for querying.
Since there are fewer than 15 fields in the fsView and all of the fields are from the same table, the system creates an index that includes all of these fields, as long as the total number of bytes required for the index does not exceed 1024.
Calculate the number of bytes required for the index as follows:
100 (for system overhead) +
8 (2 per column for column overhead) +
100 (for book.author) +
100 (for book.series) +
100 (for book.title) +
8 (for dates.year)
____________________________
= 416 bytes total
Since 416 bytes is less than the 1024-byte limit, the system can create the index. If the total had exceeded 1024 bytes, you would need to create an fsView with a smaller set of fields. An index of a subset of the fields in the query may still help to speed up query performance.
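The same style of check works for index planning. The 1024-byte and 15-field limits come from the text above; the per-field overhead values are assumptions carried over from the table calculation:

```python
INDEX_OVERHEAD = 100    # per-index overhead (assumed, mirroring the example)
FIELD_OVERHEAD = 2      # per-field overhead
INDEX_BYTE_LIMIT = 1024
INDEX_FIELD_LIMIT = 15

def index_fits(field_sizes) -> bool:
    """field_sizes: raw byte cost of each indexed field (e.g. 2x a string's
    declared length, 8 for a long). True if an index on them can be built."""
    if len(field_sizes) > INDEX_FIELD_LIMIT:
        return False
    total = INDEX_OVERHEAD + sum(FIELD_OVERHEAD + s for s in field_sizes)
    return total <= INDEX_BYTE_LIMIT

# bookview: book.author, book.series, book.title (100 bytes each), dates.year (8)
bookview = [100, 100, 100, 8]
print(index_fits(bookview))  # -> True (416 bytes total)
```

A False result tells you to split the fsView into a smaller field set before committing the schema.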
To maximize the performance of queries, follow these steps:
1. Plan tables that include metadata fields that commonly occur together in queries in the same tables and separate metadata fields that do not occur together into separate tables. Take into account the space limitations on tables, as described in Planning Tables.
2. Determine which fields should be in which indexes. Each query uses at most one index, so the fields in indexes should match the fields in queries as much as possible. Take into account the limitations described in Planning Indexes.
3. Define fsViews whose purpose is to create the indexes you want, as determined in Step 2. Limit the number of fsViews you configure to those required for these indexes and those required for applications.
A new readonly attribute for fsViews allows you to specify that users accessing data through an application (such as WebDAV) cannot upload new objects into the Sun StorageTek 5800 system and also cannot modify metadata attributes for objects.
With this version, you may use queryable as a synonym for indexable.
By specifying the indexable or queryable attribute for a field as false, you can exclude that field from the metadata that is indexed and available for queries. You might want to specify a field as indexable = false or queryable = false, for example, if you will access that field only through the retrieveMetadata example application, and never through queries.
When you issue the mdconfig -c command to activate a new schema definition file, you may see the following error message if the system is under heavy load:
Timed out waiting for the state machine.
This message indicates that the new schema definition file has been committed to the system, but not all of the tables may have been created.
In this case, reduce the load on the system if possible, and use the new -r option for the mdconfig command to finish the table creation:
When you issue the mdconfig -r command, the system finishes creating any tables that were not completed during the mdconfig -c operation. Tables that had already been created are not affected. You may have to issue the mdconfig -r command several times until all tables are created.
The Sun StorageTek 5800 System Administration Guide might not make it clear that the namespace attribute within an fsView is optional. If all of the fields in that fsView are from the same namespace, you can specify the namespace attribute so that you do not have to use fully qualified names for the fields in the fsView.
You can include fields from different namespaces within the same fsView, however. In that case, you should not specify the namespace attribute for the fsView and you must specify fully qualified names.
For example, the following fsView specification includes fields that are all from the namespace mp3. The namespace attribute is specified so the field names are not required to be fully qualified:
<fsView name= "byArtist" namespace = "mp3" filename="${title}.${type}">
<attribute name="artist"/>
<attribute name="album"/>
<fsView>
In the following fsView specification, some of the fields are from the namespace mp3, but one field is from the namespace dates. In this case, the namespace attribute is not specified and the field names are fully qualified:
<fsView name= "byArtist" filename="${mp3.title}.${mp3.type}">
<attribute name="mp3.artist"/>
<attribute name="mp3.album"/>
<attribute name="dates.year"/>
<fsView>
This section lists the clients you can use to access data stored on the Sun StorageTek 5800 system:
This section provides information about functional limitations and bugs that have been resolved in this Version 1.0.1 product release.
Bug 6421293 - If you power up the system, access the CLI, and then enter sysstat, a Data Integrity Verified indication appears. At this point, however, the Data Doctor tool has not completed a lost fragment recovery cycle and is unable to determine whether or not all data fragments are indeed accounted for.
Resolution Note - For this version, the sysstat command has been updated to correct this problem. See Enhanced System Status Monitoring for information about the updated command.
Bug 6407770 - If you try to install the Software Development Kit (SDK), the emulator configuration script fails if the specified directory name contains spaces.
Bug 6402543 - When moving a rack, there is no way to power off all of the components since the CLI shutdown command powers off only the nodes.
Resolution Note - The shutdown command has a new -all option that powers off all components. See New CLI Option to Power Off All Components for more information.
Bug 6408010 - If you use the wipe command, a NullPointerException appears on the console. The operation will continue despite this error.
Bug 6408658 - If you attempt to determine when a file or directory was last modified or created, inconsistencies occur in date reporting. For example, the directory listing getlastmodified might show a significant offset from Coordinated Universal Time (UTC), while the creationdate is about the same time without the offset. These times should be similar or very close.
Bug 6405531 - You can use netcfg and ifconfig to change the cluster administrative and data virtual IP addresses (VIPs). However, you cannot change the service node's IP address. Thus, if you have a conflict with the predefined factory service node IP, you will be unable to change it.
Resolution Note - You can now set the IP address of the service node. See New CLI Option to Set Address for Service Node for more information.
Bug 6425530 - If your primary switch is not functioning and you are operating on the secondary switch, rebooting a node repeatedly fails to bring up the data VIP at startup. In addition, the system configures some unusual interfaces, eventually escalates, and then reboots once more. Rebooting the entire cluster while running on the secondary switch causes the cluster to become nonoperational.
Resolution Note - You can now successfully reboot a node and the cluster when the system is running on the secondary switch. However, you should not perform any networking-related configuration operations on the system, such as those using the CLI commands ifconfig and netcfg, while operating on the secondary switch.
Bug 6427699 - If you attempt to update the schema using mdconfig while a system load is occurring, the database becomes suspended in a create schema state.
Resolution Note - The mdconfig command has a new -r option that allows you to work around this issue. See New CLI Option to Recover from Timeout While Activating New Schema for more information.
Bug 6421314 - If you enter the sysstat command and its --verbose option, the system output that is displayed does not include node identifiers.
Bug 6411146 - If you attempt to commit an invalid schema.xml file with the mdconfig -c command, the CLI becomes non-operational and eventually times out.
This section provide information about functional limitations and bugs in this version of the product release. Note that if a recommended workaround is available for a bug, it follows the bug description.
The section contains the following topics:
The functional limitations of this version are as follows:
Do not remove a disk from a live system. When Sun service personnel replace disks, they first power down the system.
In the case of multiple failed disks, Sun service personnel shut down the node for each disk, one at a time, replace the failed disk in that node, and bring the node fully online again before proceeding to the next node.
This section describes known issues and bugs related to installing and initially configuring the Sun StorageTek 5800 system.
Bug 6403228 - When you attempt to bring up a server node with two or more missing or faulty disks, that server node might fail during the startup procedure. Contact Sun Support to schedule replacement of the failed drives.
Bug 6466326 - If you set an invalid IP address for an NTP server using the CLI command netcfg ntp, the system's nodes might not function properly and might not come up at all.
Workaround - Be very careful to enter the correct IP address for an NTP server.
Bug 6470857 - When you upgrade the Sun StorageTek 5800 system to a new version of software, any authorized clients you have configured are lost and no data traffic can occur until you reconfigure authorized clients, even if previously all clients had been allowed.
Workaround - After the upgrade is complete, you must use the netcfg clients command to re-enter your authorized clients. If all clients are allowed, issue the command with the all option. (netcfg clients all).
Bug 6471001 - If you want to upgrade your software version over the network from an http server, that server must be configured as an authorized client on the Sun StorageTek 5800 system.
Workaround - Use the netcfg clients command to configure the HTTP server as an authorized client.
This section describes known issues and bugs related to using the command-line interface (CLI).
Bug 6247537 - After a master failover occurs, the following message might appear on the console when you log in to the CLI:
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that the RSA host key has just been changed...
This message is innocuous and can be ignored.
Bug 6327227 - If disabling a disk fails (for example, because the disk cannot be unmounted), no warning message is displayed. Use hwstat -v to view the status of the disk.
Bug 6380366 - If you access the CLI and type the mdconfig command before the database is ready, the following error message is displayed:
Timed out waiting for the state machine
Workaround - Ensure that the database is operational by entering the sysstat command before using the mdconfig command. The output of the sysstat command should specify Query Available. If the sysstat command reports Query Unavailable, do not issue the mdconfig command.
Bug 6405506 - If you configure an SMTP server and a port in the CLI using the netcfg command, the SMTP configuration does not honor the port setting. Regardless of the port value you specify with the netconfg command, the system sends mail to your SMTP server using the default port 25.
Bug 6406170 - When you make a configuration change, certain properties require a reboot to take effect. Once the change is entered, however, you can no longer determine the current value since the netcfg command shows the new (pending) value instead. You also cannot tell that the displayed value is a pending value and that a reboot is still required.
Bug 6241900 - If you log in to the system when it is not operational, enter the CLI help command, and then enter additional commands, such as version and sysstat, an immediate exit from the login session results. No error message is displayed.
Bug 6403938 - If you inadvertently add the -v or --verbose option to certain CLI commands, such as hwcfg, where its use is not supported, the following error is displayed:
unknown error: Can't find resource for bundle java.util.PropertyResourceBundle, key common.version.name
Bug 6421305 - Note the following with the df command:
You can use df effectively to view raw storage statistics for each disk in the cluster. Therefore, used is not equivalent to the total number of object bytes stored in the system. For example, it includes space consumed by data parity, object headers and footers, query indexes and so on.
Storage utilization statistics reflected by df are refreshed every three minutes.
When using df to view storage utilization, be aware that the system reserves 15% of raw storage. This space is available so data recovery can be completed should a node or disk fail on a full cluster.
Bug 6450643 - The system might hang after you issue the reboot --all command. This issue is most commonly seen during initial configuration.
Workaround - If the system does not reboot after ten minutes, turn off the power to the Sun StorageTek 5800 cell and then turn the power back on to reboot all components.
Bug 6466323 - The syntax for the passwd command is documented incorrectly in the command help and also in the Sun StorageTek 5800 System Administration Guide. The correct syntax is as follows:
passwd [-K | --pubkey]
Use either the -K or the --pubkey options to provide a public key to be used for login authorization.
Bug 6194366 - The supported metadata type field double does not work in views.
Bug 6464055 - In the schema definition file, you can specify a metadata field as queryable = false. If you later change the schema definition file to indicate that queryable = true for that field, any data that you add to the system after the change includes that field as a queryable field. However, data that was previously stored on the system is not updated and is not queryable with that field.
Bug 6464058 - The total number of bytes allowed in a schema table is 8080. However, the system does not prevent you from specifying a table in the schema definition file that would require more than 8080 bytes if full-sized strings of all fields were stored in the table at the same time. If you specify a table that requires more than 8080 bytes, the metadata for some objects might never be stored in a table and therefore might never be queryable.
Workaround - Be careful to keep the maximum size of any table that you specify in the schema definition file under 8080 bytes. See Planning Tables for information about calculating the number of bytes that a table requires.
Bug 6464061 - If an fsView includes more than 15 fields, the system does not create an index for any of those fields.
Workaround - Do not create fsViews with more than 15 fields.
Bug 6467475 - During a complex query operation that requires the query to access more than one table, the system might generate the error message "Table memory space exhausted" and the query might time out.
Workaround - Follow the guidelines in Planning Tables and Planning Indexes to maximize query performance.
This section describes known issues and bugs related to using the client API (application program interface) and SDK (software developer's kit).
Bug 6395771 - Harmless errors are printed to stdout during query operations with a select clause. For a cluster, they are displayed in the log. For the emulator, these errors are written to the shell where the emulator is launched.
Bug 6403951 - The emulator supports the Delete Metadata operation of NameValueObjectArchive.delete and hc_delete_ez. However, the emulator does not remove the underlying data file when the last metadata record is deleted. The semantics are correct, but the underlying space is not reclaimed.
Bug 6427145 - When using the C API, the overall metadata size of a stored data item is limited to 76384 bytes. (The exact maximum size depends on many factors). This limitation does not apply to data stored using the Java API.
Bug 6427141 - The behavior of metadata values that contain nonprintable characters is not guaranteed. In particular, two known limitations are that metadata values cannot contain the null character, and that metadata values that contain the <LF> character will have the <LF> values silently removed in the stored value.
Bug 6466803 - System performance might degrade significantly if you have a large number of authorized clients configured.
Workaround - Limit the number of authorized clients to 4 or fewer.
Bug 6468507 - When you configure authorized clients using the netcfg clients command, the CLI seems to hang. After a few minutes, however, the configuration is updated and you can enter new CLI commands.
Workaround - Wait a few minutes after issuing the netcfg clients command to allow the system to update the configuration.
Bug 6471588 - You may encounter problems running the SDK example scripts for Unix systems from the C shell (csh).
Workaround - Run the scripts from a bash or sh shell only.
Bug 6472509 - If you specify the FILE argument in the RetrieveMetadata SDK example application, the application fails.
Workaround - Use RetrieveMetadata example application without the FILE argument.
This section describes general issues related to the Sun StorageTek 5800 system.
Bug 6187582 - If you try to delete the same object twice, the system produces the following error message:
ERROR: failed to retrieve object: request failed with status 400: no oa ctx.
The error message should say:
ERROR: failed to retrieve object: request failed with status 400: noSuchObject
Bug 6268321 - If the client connection is lost during store and delete operations, the following error is displayed:
INFO: request failed with IOException...
Error in parsing the status line from the response: unable to find line starting with "HTTP"
Workaround - The store failed because of a network problem. Retry the operation.
Bug 6291970 - Known issues arise when you perform concurrent delete operations.
Workaround - Until this issue is resolved, perform delete operations from a single client connection at a time.
Bugs 6355668 and 6403926 - Initial input/output (I/O) operations on an otherwise idle cluster might be slower than usual.
Bug 6398940 - With the exception of the mail header itself, email alerts do not provide cluster-specific information to enable you to distinguish between systems.
Workaround - Read the mail headers to determine where the alert originated. The source address will include the Administrative VIP configured for the cluster.
Bug 6392770 - Starting up or shutting down the cluster without all nodes being online might force the query indexes to be rebuilt. Until the rebuilding process is finished, query results might be incomplete. See the sysstat command for the status of the query engine.
Bug 6413587 - When the Sun StorageTek 5800 node BIOS is booting up, very rarely, the following two conditions might occur:
Workaround - Power on the node again through the front panel power switch. If this does not fix the problem, remove the power cord, wait 30 seconds, and then connect the power cord once more.
Bug 6402478 - When the switch fails over, it does not send an email alert. The following alert is displayed indicating that you should perform a cluster assessment:
Cluster is booting or master failed over
Other than by visually inspecting the cluster, there is no way to determine if the switch has failed over.
Workaround - Visually inspect the cluster to determine if the switch has failed over.
Bug 6422741 - If you use a good disk from another node during a disk replacement, the node might assume the identity of the previous disk owner.
Workaround - Use only new disks when performing a disk replacement.
Bug 6424800 - With disk write cache enabled, there is a very small probability that a complete power outage can result in data loss for recently stored files. Testing of the failure scenarios has not yet resulted in any data loss.
Bug 6423238 - In some instances, five disk failures will cause the query engine to become disabled.
Workaround - Reboot the cluster and the query engine will automatically repair itself in approximately 12 hours.
Bug 6465815 - If six disks fail on a full cluster (greater than 85% utilized), the system cannot heal itself in a reasonable time.
Workaround - In the unlikely event that five disks fail within the quarterly maintenance window, schedule immediate replacement of those disks; do not wait for the next quarterly maintenance.
Bug 6451150 - Sometimes when you issue the CLI commands shutdown or reboot, the system returns the messages "It is not safe to shut down the system" or "It is not safe to reboot the system." These messages indicate that the system is in the process of initializing the query engine.
Workaround - Although you can continue with the shut down or reboot process, for best performance, you might want to wait until the query engine is fully initialized before proceeding.
Bug 6422739 - If the Sun StorageTek 5800 system experiences an unexpected power loss, the query engine must be recreated and repopulated with user metadata. If, after such a power loss, the query engine remains unavailable for more than four hours following system reboot, it may indicate that the internal recovery load on the system is too high.
Workaround - Contact your Sun service representative. A system technician will tune internal recovery to allow the query engine to be recreated. Once the query engine has been recreated, internal healing processes will repopulate the query engine's indexes with user metadata. This healing process may take 12 or more hours to complete. During the healing process, query and WebDAV access will be operational. Not all objects will be accessible, however, until the query engine's indexes are fully repopulated.
Bug 6481942 - In rare cases, when some (but not all) nodes are rebooted after a temporary loss of internal network connectivity in the Sun StorageTek 5800 system, the nodes that were not rebooted may not be able to utilize the disks on the rebooted nodes. In this situation, Sun StorageTek 5800 clients experience intermittent failures to store or retrieve data. For example, a client cannot retrieve an object, but then a few seconds later, the client does successfully retrieve the same object.
Workaround - Clear this state using the reboot command. (It is not necessary to use the reboot - - all option; it is sufficient to just reboot the nodes):
Bug 6473958 - If the Sun StorageTek 5800 system is configured to send email alerts, the system may send a message indicating The node XXX has left the cluster and then several seconds later another message indicating The node XXX has joined the cluster several times a day during normal operation.
Workaround - Messages indicating that a node has rejoined the cluster a few seconds after it left are innocuous. You can ignore them.
Bug 6481952 - If a reboot or shutdown operation occurs while clients are accessing the Sun StorageTek 5800 system, the query engine database may be corrupted. If the query engine is corrupted, the system may require 12 or more hours after the reboot or shutdown to fully repopulate the database with all user metadata so that query and WebDAV can access all objects.
Workaround - If you need to reboot or shut down the system for administrative reasons, stop all clients first and then issue the reboot or shutdown commands.
Bug 6483145 - If the Sun StorageTek 5800 system is configured to use a mail server that is nonoperational, the query engine database may be corrupted during a reboot or shutdown operation. If the query engine is corrupted, the system may require 12 or more hours after the reboot or shutdown to fully repopulate the database with all user metadata so that query and WebDAV can access all objects.
Workaround - Make sure your SMTP host setting is for a valid and operational mail server. If there is no valid SMTP host available, before shutting down or rebooting, configure an invalid SMTP host (for example, 0.0.0.1).
Bug 6450643 - If the Sun StorageTek 5800 system is configured to use a mail server that is nonoperational, some CLI commands may appear to hang because they are trying to send email alerts and timing out.
Workaround - Make sure your SMTP host setting is for a valid and operational mail server. If the mail server is not operational and the CLI commands appear to hang, try waiting three to five minutes for the commands to take effect.
Bug 6481476 - The system may respond to some queries with an out-of-memory error message.
Workaround - When developing queries using the Java API, set the maxResults parameter to a value less than 10000.
Bug 6407787 - Even after the system has healed a disk, that disk may still be included in the disks unrecovered count displayed by the sysstat command.
Workaround - When the system is rebooted, the disks unrecovered count is reset to an accurate number.
If you need help installing or using this product, go to: | http://docs.oracle.com/cd/E19851-01/819-7553-11/relnotes_body.html | CC-MAIN-2015-22 | refinedweb | 5,793 | 60.75 |
IMAPClient introduction
I gave a presentation introducing IMAPClient at the monthly Christchurch Python meetup on Thursday. It included a brief introduction to the IMAP protocol, the motivation for creating the IMAPClient package, some examples of how it compares to using imaplib from the standard library, and some discussion of future plans. There may also have been some silly pictures. The talk seemed to be well received. Thanks to everyone who attended.
I'm not sure how useful they'll be without the vocal part but the slides are now available online.
Apart from my talk, a wide range of topics came up including installing and managing multiple versions of Python side-by-side, TLS (SSL) and Python and the suckiness (or not) of JavaScript. As always, lots of fun. There's always a great bunch of people who make it along.
posted: Sun, 08 Mar 2015 08:59 | permalink | comments
IMAPClient now all at Bitbucket
I've been wanting to do this for a while: IMAPClient is now completely hosted on Bitbucket. The Trac instance is no more and all tickets have been migrated to the Bitbucket issue tracker. The old project URL now redirects to the Bitbucket site.
The primary motivation for this change is that it makes it possible for anyone to easily interact with IMAPClient's tickets using the Bitbucket account they probably already have. Due to spam issues, I had to turn off public write access to the Trac bug tracker a long time ago meaning I was the only one creating and updating tickets. This change also means less maintenance overhead for me, so more time to spend on IMAPClient.
Please let me know if you see anything related to the migration that doesn't look right.
posted: Sat, 28 Feb 2015 11:32 | permalink | comments
IMAPClient 0.12
I'm very happy to announce that IMAPClient 0.12 is out!
This is a big release. Some highlights:
- Unicode handling has been fixed. Some bad decisions were made during the Python 3 port (v0.10) and this release fixes that. Bytes are now returned in most places (instead of unicode strings).
- MODSEQ parts in SEARCH responses are now handled correctly. A crash has been fixed when MODSEQ queries (part of the CONDSTORE extension) are made with the search method. The returned MODSEQ value is now available via the "modseq" attribute on the returned list of ids.
- Extra __init__ keyword args are passed through. This allows access to SSL options that the underlying imaplib library might support (Python version dependent).
- Python 3.4 is now officially supported.
- More control over OAUTH2 parameters.
- The deprecated get_folder_delimiter() method has been removed.
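To make the MODSEQ change above concrete, here is a hedged sketch of the interface it describes — the class name is illustrative, not IMAPClient's internals. Search results behave like a plain list of message ids, with the server's MODSEQ value (from a CONDSTORE-aware SEARCH) exposed as a modseq attribute:

```python
# Illustrative sketch (not IMAPClient's actual class): search results act as
# a normal list of message ids, with the server's MODSEQ value attached.
class SearchIds(list):
    def __init__(self, ids, modseq=None):
        super().__init__(ids)
        self.modseq = modseq  # populated for CONDSTORE/MODSEQ searches

ids = SearchIds([101, 102, 107], modseq=987654)
print(list(ids))   # [101, 102, 107]
print(ids.modseq)  # 987654
```

Subclassing list keeps existing code that iterates over search results working unchanged while still carrying the extra per-response value.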
See the NEWS.rst file and manual for more details.
Many thanks go to Inbox for sponsoring the significant unicode changes in this release.
IMAPClient can be installed from PyPI (pip install imapclient) or downloaded from the IMAPClient site.
The next major version of IMAPClient will be 1.0.0, and will be primarily focussed on enhancing TLS/SSL support.
posted: Mon, 12 Jan 2015 12:14 | permalink | comments
IMAPClient 0.10
IMAPClient 0.10 has just been released. This is an important release because it's the first to support Python 3!
Here's the highlights:
- Python 3.2 and 3.3 are now officially supported. This release also means that Python versions older than 2.6 are no longer supported. Be sure to see the NEWS.rst file for more information on this change.
- The HIGHESTMODSEQ item in SELECT responses is now parsed correctly
- Fixed daylight saving handling in FixedOffset class
- Fixed --port command line bug in imapclient.interact when SSL connections are made.
- Michael Foord's excellent Mock library is no longer included with the IMAPClient package (it is listed as an external test dependency)
- Live tests that aren't UID related are now only run once per run
- Live tests now perform far less logins to the server under test
- Unit tests can now be run for all supported Python versions using tox.
- python setup.py test now runs the unit tests
- Many documentation fixes and improvements.
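To illustrate the FixedOffset fix listed above, here is a hedged sketch of what such a fixed-offset tzinfo looks like (an illustration only, not IMAPClient's actual implementation). The essence of the fix: a fixed UTC offset never observes daylight saving, so dst() must always return a zero timedelta:

```python
# Hedged sketch of a fixed-offset tzinfo like IMAPClient's FixedOffset helper
# (illustrative, not the library's actual code).
from datetime import tzinfo, timedelta, datetime

class FixedOffset(tzinfo):
    def __init__(self, minutes):
        self._offset = timedelta(minutes=minutes)

    def utcoffset(self, dt):
        return self._offset

    def dst(self, dt):
        return timedelta(0)  # fixed offsets have no daylight saving component

    def tzname(self, dt):
        sign = '+' if self._offset >= timedelta(0) else '-'
        total = abs(int(self._offset.total_seconds())) // 60
        return '%s%02d%02d' % (sign, total // 60, total % 60)

tz = FixedOffset(12 * 60 + 45)  # +12:45, a Chatham Islands style offset
dt = datetime(2013, 6, 6, 0, 15, tzinfo=tz)
print(dt.isoformat())  # 2013-06-06T00:15:00+12:45
```

IMAP date headers carry fixed numeric offsets like +1245, so modelling them as DST-free tzinfo objects avoids any local daylight saving rules leaking into parsed message times.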
A massive thank you to Mathieu Agopian for his massive contribution to getting the Python 3 support finished. His changes and ideas feature heavily in this release.
See the NEWS.rst file and manual for more details.
IMAPClient can be installed from PyPI (pip install imapclient) or downloaded from the IMAPClient site.
posted: Thu, 06 Jun 2013 00:15 | permalink | comments
IMAPClient 0.9.2
IMAPClient 0.9.2 was released yesterday. In this release:
- The IMAP THREAD command is now supported. Thanks to Lukasz Mierzwa for the patches.
- Enhanced CAPABILITY querying
- Better documentation for contributors (see HACKING file)
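To show what the THREAD command gives you, here is a toy parser — illustrative only, IMAPClient does this parsing for you — that turns the parenthesised THREAD response body into nested tuples of message ids, one tuple per conversation:

```python
# Toy parser for an IMAP THREAD response body (illustration only; the real
# parsing lives inside IMAPClient).
def parse_threads(text):
    """Parse e.g. '(2)(3 6 (4 23)(44 7 96))' into nested tuples of ids."""
    def parse(tokens, i):
        items = []
        while i < len(tokens):
            tok = tokens[i]
            if tok == '(':
                sub, i = parse(tokens, i + 1)
                items.append(sub)
            elif tok == ')':
                return tuple(items), i + 1
            else:
                items.append(int(tok))
                i += 1
        return tuple(items), i
    tokens = text.replace('(', ' ( ').replace(')', ' ) ').split()
    result, _ = parse(tokens, 0)
    return result

print(parse_threads('(2)(3 6 (4 23)(44 7 96))'))
# ((2,), (3, 6, (4, 23), (44, 7, 96)))
```

Each top-level tuple is a thread; nesting reflects reply structure, which is what makes THREAD handy for building conversation views.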
See the NEWS file and manual for more details.
IMAPClient can be installed from PyPI (pip install imapclient) or downloaded from the IMAPClient site.
Note that the official project source repository is now on Bitbucket. The project site is still the official home page and is still used for project tracking. It is only the source repository that has moved.
posted: Thu, 28 Mar 2013 23:35 | permalink | comments
IMAPClient 0.9 released
I'm pleased to announce version 0.9 of IMAPClient, the easy-to-use and Pythonic IMAP client library.
Highlights for this release:
- Support for Gmail's label API. Thanks to Brian Neal for the patches for this.
- Significant cleanup and refactoring in preparation for Python 3 compatibility.
- The "livetest" module can now be safely used against IMAP accounts with real data. Previously it could only be used with dummy accounts due to the destructive nature of the tests.
- Fixed handling of IMAP servers that return all-digit folder name without quotes. Thanks to Rhett Garber for the bug report.
- Much improved test coverage (again, in preparation for Python 3 support)
- Fixed rename live test so that it uses folder namespaces
- Parse STATUS responses robustly - fixes folder_status() with MS Exchange.
- Numerous livetest fixes to work around oddities with the MS Exchange IMAP implementation.
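As a hedged illustration of the STATUS parsing fix above (IMAPClient's folder_status() handles this for you; this toy parser just shows the idea), "robust" here means tolerating the irregular whitespace that servers such as MS Exchange emit in their responses:

```python
# Toy STATUS-response parser (illustration only): tolerate uneven whitespace
# inside the parenthesised item list, as some servers produce.
def parse_status(raw):
    """'"INBOX" (MESSAGES 231 UIDNEXT 44292)' -> {'MESSAGES': 231, ...}"""
    inner = raw[raw.index('(') + 1:raw.rindex(')')]
    fields = inner.split()  # split() collapses runs of whitespace
    return {fields[i]: int(fields[i + 1]) for i in range(0, len(fields), 2)}

print(parse_status('"INBOX"  ( MESSAGES 231  UIDNEXT 44292 )'))
# {'MESSAGES': 231, 'UIDNEXT': 44292}
```

Splitting on arbitrary whitespace rather than single spaces is the whole trick: a strict space-delimited parse is exactly what broke against Exchange.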
The NEWS file and manual have more details on all of the above.
As always, IMAPClient can be installed from PyPI (pip install imapclient) or downloaded from the IMAPClient site.
posted: Wed, 16 May 2012 16:02 | permalink | comments
posted: Fri, 28 Aug 2009 01:46 | permalink | comments
posted: Wed, 29 Apr 2009 00:54 | permalink | comments
Announcing IMAPClient 0.3
I've just made a new release of IMAPClient (0.3). Changes are:
- The distribution has been renamed from imapclient to IMAPClient so as follow the conventions used by modern Python packages. The Python package name remains imapclient however.
- Fixed a bug reported by Brian Jackson which meant more complex fetch part selections weren't being handled correctly (eg. "BODY[HEADER.FIELDS (FROM)]"). Thanks Brian!
- IMAPClient is now distributed using setuptools.
IMAPClient can be installed from PyPI using easy_install IMAPClient or downloaded from my Code page. As always, feedback and patches are most welcome.
posted: Tue, 16 Oct 2007 15:20 | permalink | comments
Introducing imapclient!
posted: Wed, 02 May 2007 21:08 | permalink | comments
My love for stored procedures
When I first started coding Oracle PL/SQL for my employer, I realised that I found myself back in medieval ages. A syntax reminding me of Ada and Pascal, ultra-strong typing, compilers that find compilation errors about 100'000 lines later, etc etc. I was young (OK I still am), full of visions and ideas, so this was about as hard as reality could get. Especially since I have friends working over at the Scala labs in Lausanne, Switzerland... developing for the "future", not the "past". I guess, today I would have joined this facebook group.
But I haven't. Actually, I have started to completely change my mind about those archaic pieces of logic deep down in the database. They're very much alive! In our company, we're using stored procedures as a means of interfacing various systems with our huge database (several billions of records in some tables). Actually, the database has become a middleware between our own software components. On one server (or from a script, etc) we call a stored procedure with a UDT message parameter. The procedure updates some tables and puts the UDT in an Oracle AQ. The AQ is consumed by another server. While this is not a "J2EE standard" architecture, it works very well and efficiently.
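The flow just described can be caricatured in a few lines of plain Python — a toy stand-in only, of course; the real thing is PL/SQL, UDTs, and Oracle AQ. One component calls a "procedure" that updates a table and enqueues the message; another component consumes the queue:

```python
# Toy stand-in (plain Python, not PL/SQL) for the pattern described above:
# a "stored procedure" updates a table and enqueues a UDT-style message,
# and a separate consumer picks it up from the queue (Oracle AQ's role).
import queue

aq = queue.Queue()   # stands in for the Oracle Advanced Queue
audit_table = []     # stands in for the tables the procedure updates

def store_message(udt):
    """Mimics the stored procedure: update tables, then enqueue the message."""
    audit_table.append(udt)
    aq.put(udt)

# Producer side: a server or script calls the "procedure" with a UDT message.
store_message({'type': 'ORDER', 'id': 42})

# Consumer side: another server dequeues and processes the message.
msg = aq.get()
print(msg)  # {'type': 'ORDER', 'id': 42}
```

The point of the pattern is that the database holds both the durable state and the hand-off channel, so the two servers never need to talk to each other directly.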
Stored procedure evolution
Check out this hilariously angry article from 2004. People have furiously hated stored procedures for a long time and with a lot of passion, and they still do today. But the stored procedures aren't gone. Au contraire, they're even more important today, as databases tend to hold more and more data that cannot be handled on the business tier as far as many operations are concerned. Check out "newer" databases such as MySQL, Postgres, H2, HSQLDB, etc. They all ship with their own stored procedure languages. So it's not just about the "archaic", "evil" monoliths provided by Oracle, IBM (DB2), or Microsoft.
So, where is this going?
In the IT-business, we live in a fast-paced world. None of us actually knows where we're going in the next 10 years. I'm not here to discuss that overly religious topic. Maybe I'm wrong about the stored procedure business. But I love them for their performance and closeness to the data they manipulate. And I think I share this love with many of you out there. The same applies for SQL. I'm not here to discuss the object-relational impedance mismatch either and whose fault it is etc... It's a boring and over-rated discussion. After all, we don't know where these things are heading either.
jOOQ
Instead I'm focusing on something else. I recently published an article on DZone and on TheServerSide about jOOQ. I had lots of interesting, constructive, and motivating feedback, and a first committer! In the meantime, we have established sound support for stored procedures (and UDT's) that integrates well into jOOQ's DSL. That matters because one of the most tiresome tasks when using stored procedures is to call and map them from a higher-level language such as Java via JDBC.
Generate PROCEDURES as methods, PACKAGES as packages, UDTs as classes
Like many other persistence tools, jOOQ ships with code generation. It will also generate classes for your stored procedures and for your UDT's. Check out this PL/SQL example package:
CREATE OR REPLACE PACKAGE library AS
-- A procedure checking the existence of an author and
-- providing the result as an OUT parameter
PROCEDURE p_author_exists (author_name VARCHAR2, result OUT NUMBER);
-- A function checking the existence of an author and
-- providing the result as a RETURN value
FUNCTION f_author_exists (author_name VARCHAR2) RETURN NUMBER;
END library;
And its generated (simplified) Java representation:
public final class Library {
// Invoke the stored procedure on a connection. The return value
// represents the OUT parameter.
public static BigDecimal pAuthorExists(
Connection connection,
String authorName)
throws SQLException { ... }
// Invoke the stored function on a connection
public static BigDecimal fAuthorExists(
Connection connection,
String authorName)
throws SQLException { ... }
// Use the stored function as a Field in jOOQ's query DSL
public static Field<BigDecimal> fAuthorExists(String authorName) { ... }
// Use the stored function as a Field taking another field as parameter
public static Field<BigDecimal> fAuthorExists(Field<String> authorName) { ... }
}
Apart from calling stored functions and procedures directly from your Java application, you can also embed them in a jOOQ query like this:
// Let's say PERSONS is a table holding FIRST_NAME and LAST_NAME fields
// Then the following code will use those fields as generated Java objects
Factory create = new Factory(connection);
Field<BigDecimal> isAuthor = Library.fAuthorExists(LAST_NAME).as("is_author");
// Find persons born in 1981 and check whether they are authors
Result<?> result = create
.select(FIRST_NAME, LAST_NAME, isAuthor)
.from(PERSONS)
.where(YEAR_OF_BIRTH.equal(1981)).fetch();
for (Record record : result) {
System.out.println(
record.getValue(LAST_NAME) + " is an author : " +
record.getValue(isAuthor));
}
You can embed the stored function wherever you can in SQL, i.e. in SELECT, ORDER BY, or GROUP BY clauses. For example, you can filter the authors directly in the database, not in your Java code:
Result<?> result = create
.select(FIRST_NAME, LAST_NAME)
.from(PERSONS)
.where(YEAR_OF_BIRTH.equal(1981))
.and(Library.fAuthorExists(LAST_NAME).equal(1))
.fetch();
Now if you have UDT's in your stored procedure (which is a lot more common than having UDT's in tables), then you can have jOOQ map them easily to the Java world for you as well. Let's say you have this UDT:
CREATE TYPE u_address_type AS OBJECT (
street u_street_type, -- A nested UDT!
zip VARCHAR2(50),
city VARCHAR2(50),
country VARCHAR2(50)
)
And this UDT is used in the following standalone (i.e. not in a package) stored procedure:
CREATE OR REPLACE PROCEDURE p_check_address
(address IN OUT u_address_type);
Then, these pieces of Java code (simplified for the example) are generated for you:
public class UAddressTypeRecord extends UDTRecordImpl<UAddressTypeRecord> {
// The nested UDT is referenced
public UStreetTypeRecord getStreet();
public String getZip();
public String getCity();
public String getCountry();
}
And you will be able to communicate with your database using code similar to the following one:
// Create the nested UDT structure
UAddressTypeRecord address = new UAddressTypeRecord();
UStreetTypeRecord street = new UStreetTypeRecord();
street.setStreet("Bahnhofstrasse");
street.setNo("1");
address.setStreet(street);
address.setZip("8001");
address.setCity("Zurich");
address.setCountry("Switzerland");
// Pass the UDT as a parameter to the database. Note how the OUT parameter
// is mapped as a return value to the method. If you have several OUT parameters
// A value object containing all qualified parameters is generated and returned
UAddressTypeRecord resultingAddress = Procedures.pCheckAddress(connection, address);
Conclusion
In the early days of jOOQ, I thought I was creating an alternative to Hibernate or JPA, which are excellent tools, but they are OR-mappers (see the introduction about religious persistence beliefs). Since I'm in love with SQL, I don't want any mapping to the OO-world. (I'm also in love with OO, that's why it's called j-OO-Q).
But in the meantime, I have realised that jOOQ is something entirely different. It can go side-by-side with Hibernate, JPA, or other tools, providing much ease of access to your beloved (or hated) database functionality in the way you're used to in Java. With code generation, access to vendor-specific constructs becomes as easy as it can be. You can call your stored procedures as if they were regular Java methods, without knowing the details about JDBC's advanced features. You can make your database an integral part of your application. You can make stored procedures fun! Heck, you can even change your mind like I did! :-)
Check back on jOOQ for future versions, with more features and databases to come. And maybe you'll commit the odd patch to integrate that beloved database-feature of yours in jOOQ! | http://www.theserverside.com/discussions/thread.tss?thread_id=61675 | CC-MAIN-2016-50 | refinedweb | 1,305 | 54.73 |
A minimal, modular, client side application framework.
Tinyapp is a simple event-driven client-side JavaScript application architecture and module management framework that serves the following needs:
Tinyapp is based on Applitude: Simple module management. View the slideshow: "Introducing Applitude: Simple Module Management"
Status - Developer preview (stick to tested, documented features for best results).
The guiding philosophy of Tinyapp is “Less is more.” Tinyapp lays the wiring and then gets out of the way of your modules. Hence the subtitle, “Simple Module Management.”
Tinyapp was created to illustrate how to implement a client-side JavaScript application architecture for the upcoming book "Programming JavaScript Applications" (O'Reilly).
Tinyapp uses npm for dependency management and CommonJS to use the dependencies in your modules.
If you don't have node installed, you can download it from the Node.js website.
In your project's package.json file, make sure to include tinyapp:
dependencies: {
  "tinyapp": "*" // use latest version
}
You'll also need something like browserify and grunt to build your app:
"devDependencies": { "grunt-browserify": "~0.1.x", "grunt": "~0.3.x", "traverse": "~0.6.x", "browserify": "~1.15.x" }
Initialize the app in your main script (e.g. app.js):
var app = require('tinyapp');

app.init({
  // Pass your app configuration in here.
  environment: myEnvironment,

  // A promise that must be resolved before the app
  // renders.
  beforeRender: myPromise
});
Create a namespace:
var namespace = 'hello';
Include tinyapp:
var namespace = 'hello',
    app = require('tinyapp');

// use app as desired
Provide an API:
var namespace = 'hello',
    app = require('tinyapp'),

    hello = function hello() {
      return 'hello, world';
    },

    api = {
      hello: hello
    };
Export your module:
var namespace = 'hello',
    app = require('tinyapp'),

    hello = function hello() {
      return 'hello, world';
    },

    api = {
      hello: hello
    };

module.exports = api;
Exporting your module makes it available to require() in other modules.
It's nice to declare a namespace, because you'll be using a lot of events, and declaring a namespace lets you do things like this:
var eventData = {
  namespace: namespace,
  detail: 'my custom event data'
};

app.trigger('something_happened', eventData);
That makes refactoring really easy. You can move code from one module to another without breaking it, or change a module's namespace without impacting any of the module code.
Declaring your API explicitly makes it immediately clear which parts of your module constitute the exposed interface.
api = { hello: hello };
In this case, it's just hello, but most interfaces will be more complicated.
Module initialization is broken into two phases:
The first is the load phase. Tinyapp exposes an app.loadReady() method that takes a callback function that you define in your module. The intention of the app.loadReady() method is to allow you to begin setting up your data models, including firing any asynchronous Ajax load methods that need to happen before you can render your module to the document.
Similarly, the app.renderReady() callback is called after:

- .beforeRender callbacks have fired, and
- the DOM is ready.

This allows you to defer render until it's safe to do so. For example, say you want to render Skrillex tour dates from BandsInTown:
var namespace = 'skrillexInfo',
    app = require('tinyapp'),
    data,
    whenLoaded,

    load = function load(url) {
      var url = url || '.' +
        'json?api_version=2.0&app_id=YOUR_APP_ID';

      whenLoaded = app.get(url);
      whenLoaded.done(function (response) {
        data = response;
      });

      return whenLoaded.promise();
    },

    render = function render(data) {
    },

    // Expose API for flexibility and unit testing.
    api = {
      load: load,
      render: render
    };

// Register load and render callbacks.
app.loadReady(load);
app.renderReady(render);

module.exports = api;
Tip: Try not to do anything blocking in your app.loadReady() callback. For example, you might want to asynchronously fetch the data that you need to complete your page render, but if you're loading a fairly large collection and you need to iterate over the collection and do some data processing, save the data processing step for app.renderReady(), when you're not blocking the page load process.
You can't safely manipulate the DOM in your .loadReady() callback.
Environment is made up of things like image hosting URLs which might vary from one host or CDN to another. Generally server-side environments will also contain passwords, secrets, or tokens for communicating with third-party APIs. Since the client-side environment is not secure, you should not pass those secrets through to the client. Hard-coding environment details into application code would also make your application less portable.
As a general rule of thumb, your app should be ready to open-source at any time, even if you never intend to do it. That mode of thought will help establish the proper separation of environment configuration and secrets from application code.
For more on application configuration, see "The Twelve-Factor App"
beforeRender
beforeRender is a list of application-level promises which all must finish before the render process begins. For example, many apps will need translations to load before modules are allowed to render. By adding an i18n (internationalization) promise to the application's beforeRender queue, you can postpone render until the translations are loaded. Using beforeRender can prevent tricky race condition bugs from cropping up, and provide a neat solution if you need a guaranteed way to handle tasks before the modules render.
You can resolve beforeRender promises by listening for an expected event to fire. Inside app.js:
var namespace = 'app',
    whenI18nLoaded = app.deferred();

app.on('translations_loaded.' + namespace, function () {
  whenI18nLoaded.resolve();
});

app.init({
  beforeRender: [whenI18nLoaded.promise()]
});
Later:
whenTranslationsLoaded.done(function () {
  app.trigger('translations_loaded.' + namespace);
});
Modules should know as little as possible about each other. To that end, modules should communicate through events. There is an app-level event bus supplied by the tinyapp sandbox. You can use app.on() to subscribe to events, app.off() to unsubscribe, and app.trigger() to publish.
app.on('a.*', function (data) {
  console.log(data);
});

// later
app.trigger('a.b', 'hello, world'); // logs 'hello, world'
Be specific about the events you report, and namespace your events. For example:
var namespace = 'videoPlayer.view',

    // Capture click events and relay them to the app
    // event bus.
    bindEvents = function bindEvents() {
      // Delegate to the view's parent element.
      app.$('#playerview').on('click.' + namespace, '.play-button',
        function (event) {
          event.namespace = 'videoPlayer.view';
          app.trigger('click', event);
        });
    },

    // Expose the bindEvents method for testing.
    api = {
      bindEvents: bindEvents
    };

// Let Tinyapp call bindEvents when the DOM is ready
// to attach events to.
app.renderReady(bindEvents);

module.exports = api;
Access libraries and utilities through a canonical interface (a facade), rather than calling library code directly. Doing so allows you to modify the implementation, or swap out the library completely with transparency to the application code.
.init() - Initialize the app.
.register() - Register your module with the app sandbox.
.loadReady() - Pass in callbacks to run at load time.
.renderReady() - Pass in callbacks to run at render time.
.events - Node Event Emitter compatible event emitter.
.on() - Delegates to events.on().
.off() - Delegates to events.off().
.trigger() - Delegates to events.emit().
.$() - A selector engine for DOM utilities.
.get() - jQuery compatible Ajax .get().
.ajax() - jQuery compatible .ajax().
.when - jQuery compatible .when().
.deferred() - jQuery compatible deferred API.
.resolved - A resolved promise.
.rejected - A rejected promise.
Modules that you wish to add to the sandbox (to be used by many other modules) need to be registered. Tinyapp supports deep namespaces. For example:
var namespace = 'i18n.date',
    api;

// Later...
app.register(namespace, api);

module.exports = api;
This example will create an i18n object if it doesn't already exist, and attach the api object at app.i18n.date.
Tinyapp relies on promises and deferreds from the jQuery library. Tinyapp exposes a few Deferred utilities, including:
.resolved - A resolved promise
.rejected - A rejected promise
.when() - A utility that allows you to run callbacks only after all promises passed to it are resolved.
These utilities can be helpful for coordinating asynchronous events in your application. | https://www.npmjs.com/package/tinyapp | CC-MAIN-2017-09 | refinedweb | 1,253 | 51.34 |
Let's start by understanding the interpreted nature of Python.
Unlike C/C++ etc., Python is an interpreted, object-oriented programming language. By interpreted it is meant that each time a program is run, the interpreter checks through the code for errors and translates the instructions into machine-readable bytecode.
An interpreter is a translator in the computer's world: it translates the given code line by line into machine-readable bytecode, and if any error is encountered it stops the translation until the error is fixed. C, by contrast, is a compiled programming language: the compiler translates the whole code in one go rather than line by line. This is the reason why, in C, all the errors are listed during compilation only.
# Demonstrating interpreted Python

print "\n\n----This line is correct----\n\n"   # line 1
print Hello  # this is wrong                   # line 2
In the above illustration, you can see that line 1 was syntactically correct and hence got executed successfully, whereas line 2 contains an error. The interpreter therefore stopped executing the script at line 2. This is not how a compiled programming language behaves.
The illustration below is a C++ program. Notice that although lines 1 and 2 were correct, the program still reports an error, due to the error on line 3. This is the basic difference between an interpreted language and a compiled language.
// Demonstrating a compiled language
#include <iostream>
using namespace std;

int main()
{
    cout << "---This is correct line---";       // line 1
    cout << "---This line is also correct---";  // line 2
    cout "this is Incorrect Line"               // line 3
    return 0;
}
Python is interactive. When a Python statement is entered and followed by the Return key, the result (if any) is printed on the screen immediately, on the next line. This is particularly advantageous in the debugging process. In interactive mode, Python is used in much the same way as the Unix command line or terminal.
The interactive Python shell looks like:
Interactive Python is very helpful for debugging. It simply returns the >>> prompt, or the corresponding output of the statement if appropriate, and reports an error for incorrect statements. In this way, if you have any doubts, like whether a syntax is correct, whether the module you are importing exists, or anything like that, you can be sure within seconds using Python's interactive mode.
An illustration for interactive python is shown below: | https://www.studytonight.com/network-programming-in-python/interpreted-and-interactive-python | CC-MAIN-2022-05 | refinedweb | 409 | 53.41 |
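The screenshot originally shown here didn't survive the page conversion. A short made-up session (in Python 2 syntax, to match the rest of this article) gives the flavor:

```
>>> 2 + 3
5
>>> greeting = "Hello"
>>> print greeting
Hello
>>> print undefined_name
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'undefined_name' is not defined
>>>
```

Each statement is evaluated as soon as you press Return, and errors are reported immediately, which is what makes the shell so handy for quick checks.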
/**
 * GET method.
 * <p>
 * The HTTP GET method is defined in section 9.3 of
 * <a href="">RFC2616</a>:
 * <blockquote>
 * The GET method means retrieve whatever information (in the form of an
 * entity) is identified by the Request-URI. If the Request-URI refers
 * to a data-producing process, it is the produced data which shall be
 * returned as the entity in the response and not the source text of the
 * process, unless that text happens to be the output of the process.
 * </blockquote>
 * </p>
 *
 * @since 4.0
 */
@NotThreadSafe
public class HttpGet extends HttpRequestBase {

    public final static String METHOD_NAME = "GET";

    public HttpGet() {
        super();
    }

    public HttpGet(final URI uri) {
        super();
        setURI(uri);
    }

    /**
     * @throws IllegalArgumentException if the uri is invalid.
     */
    public HttpGet(final String uri) {
        super();
        setURI(URI.create(uri));
    }

    @Override
    public String getMethod() {
        return METHOD_NAME;
    }

}
I'm working on an application whose workflow is managed by passing messages in SQS, using boto.
My SQS queue is growing gradually, and I have no way to check how many elements it is supposed to contain.
Now I have a daemon that periodically polls the queue and checks if I have a fixed-size set of elements. For example, consider the following "queue":
q = ["msg1_comp1", "msg2_comp1", "msg1_comp2", "msg3_comp1", "msg2_comp2"]
>>> rs = q.get_messages()
>>> len(rs)
1
>>> rs = q.get_messages(10)
>>> len(rs)
10
Put your call to q.get_messages(n) inside a while loop:
all_messages = []
rs = q.get_messages(10)
while len(rs) > 0:
    all_messages.extend(rs)
    rs = q.get_messages(10)
Additionally, dump won't support more than 10 messages either:
def dump(self, file_name, page_size=10, vtimeout=10, sep='\n'):
    """Utility function to dump the messages in a queue to a file
    NOTE: Page size must be < 10 else SQS errors"""
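If you only need an approximate queue size rather than the messages themselves, boto's Queue also exposes a count() method backed by SQS's ApproximateNumberOfMessages attribute. The draining loop above can also be wrapped in a small helper. The sketch below uses a hypothetical FakeQueue stub in place of a real boto queue (which would need AWS credentials), just to show the logic:

```python
def drain_queue(q, batch_size=10):
    """Collect every currently visible message from a queue.

    SQS returns at most 10 messages per receive call, so keep
    calling get_messages() until it comes back empty.  Note that
    received messages stay invisible to other consumers until
    their visibility timeout expires (or they are deleted).
    """
    messages = []
    batch = q.get_messages(batch_size)
    while batch:
        messages.extend(batch)
        batch = q.get_messages(batch_size)
    return messages


class FakeQueue(object):
    """Stand-in for boto.sqs.queue.Queue, for demonstration only."""
    def __init__(self, items):
        self.items = list(items)

    def get_messages(self, num_messages=1):
        batch = self.items[:num_messages]
        self.items = self.items[num_messages:]
        return batch


print(len(drain_queue(FakeQueue(range(25)))))  # 25
```

With a real queue you would pass the boto Queue object instead of the stub, and remember that the "messages in flight" are not deleted by fetching them alone.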
Your source for hot information on Microsoft SharePoint Portal Server and Windows SharePoint Services
******* REMOVED SOME HARSH WORDS ON THE SESSION *******
I took some notes, and augmented it with some of my own thoughts and information.
-------
SharePoint Online provides:

Managed services on the net:
- No server deployment needed, just a few clicks to bring your instance up and running
- Unified admin center for online services
- Single sign-on system, no federated Active Directory yet

Enterprise class reliability:
- Good uptime
- Anti-virus
- ...

SharePoint Online is available in two tastes: standard (hosted in the cloud) and dedicated (on premises). Standard is most interesting I think: minimum of 5 seats, max 1TB storage. On standard we have no custom code deployment, so we need to be inventive!

SharePoint Online is a subset of the standard SharePoint product (extensive slide on this in the slide deck, no access to that yet). SharePoint Online is for intranet, not for anonymous internet publishing.

$15 for the complete suite: Exchange, SharePoint, SharePoint, Office Live Meeting. Separate parts are a few dollars a piece.

Base of SharePoint Online is MOSS, but just a subset of functionality is available. Also just the usual suspect set of site templates is available: blank, team, wiki, blog, meeting.

SharePoint Online can be accessed through the Office apps, SharePoint Designer and through the web services.

SharePoint Designer:
- No-code workflows
- Customize content types
- Design custom look and feel

Silverlight:
- Talk to the web services of SharePoint Online
- Uses authentication of the current user accessing the page hosting the Silverlight control
- See for some discussion on getting a SharePoint web service call working

Data View Web Part:
- Consume data from a data source
- Consume RSS feeds through HTTP GET
- Consume HTTP data through HTTP GET/POST
- Consume web services
- ...
- Configure filter, order, paging etc.
- Select columns, rename columns, ...
- Result is an XSLT file

This XSLT code can be modified at will. There are infinite formatting capabilities with XSLT.
Also a set of powerful XSLT extension functions is available in the ddwrt namespace (see for a SharePoint 2003 article on this function set, see Reflector for additional functions in the 2007 version ;-)). See for writing XSLT extension functions when you are able to deploy code (so not for the online scenario; this was not possible on SharePoint 2003).

Note that the Data View Web Part can only be constructed with SharePoint Designer.

InfoPath client: custom forms for workflows.

Web services:
- Can be used from custom apps (command line, WinForms, ...), but also from Silverlight to have functionality that is hosted in your SharePoint Online site itself.
- You can also host custom web pages on your own server or in the cloud on Windows Azure (the new Microsoft cloud platform), and call SharePoint Online web services in the code-behind of these pages.

What can't be done:
- No farm-wide configurations
- No server-side code
- No custom web parts
- No site definitions
- No coded workflows
- No features
- No ...
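As an illustration of those ddwrt extension functions (a hypothetical fragment, not taken from the presentation), a Data View Web Part's XSLT can call them once the ddwrt namespace is declared:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:ddwrt="http://schemas.microsoft.com/WebParts/v2/DataView/runtime">
  <xsl:template match="Row">
    <!-- Format the Modified field as a short date for LCID 1033 (en-US). -->
    <xsl:value-of select="ddwrt:FormatDate(string(@Modified), 1033, 1)" />
  </xsl:template>
</xsl:stylesheet>
```

The field name, locale ID, and format flag here are placeholders; the point is only that the extension functions are called like any other XPath function once the namespace is in scope.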
There is still a lot that can be done, but that will be an adventure to find out exactly....
Hmm. I found it quite useful. You have a nice summary of the points I took away. I was not super familiar with SharePoint Online so perhaps that's the difference. Nice to see Silverlight options exist.
Just got a mail from Troy Hopwood, the presenter of the session, who followed up with some good points on the presentation. He wrote: "Sorry to hear you didn't find any value in my Extending SharePoint Online session. The summary on your blog was perfect though, suggesting you were able to come away with the key points." Actually I feel a bit embarrassed by my harsh words in my blog post. Maybe I felt that way because I knew too much about SharePoint and its extensibility possibilities and shouldn't have attended the session. For other people it was useful, like for Tom in the reaction on this blog.
Let’s say you created new file
Math.re. Boom, you have a new module in your app called
Math.
By convention, all filenames are capitalized to match the module name. You can name a file math.re, but the module name is still capitalized: Math.
let sum = (a, b) => a + b; /* Now you can use `Math.sum` in another module */reMath.re.
let onePlusTwo = Math.sum(1, 2);reApp:
export default AuthButton extends React.Component {}jsLoginButton.js
import Button from "./LoginButton";jsLoginForm.js
What a mess! If you ever decide to rename your component you have to change all these places to keep your naming accurate. Meh. In Reason world, you have only one source of truth: file name. So it’s always guaranteed accurate across the entire app.
More about the ReasonML modules in the official documentation. | https://alexfedoseev.com/2018/reasonml-modules | CC-MAIN-2018-51 | refinedweb | 161 | 62.24 |
The.
RenamespaceTask.java.
renameFile
dojo.js.
djConfig.
Greg, would this mean I would be able to use jMaki also within the AjaxTableContainerProvider of Sun Java System Portal Server 7.1 u1, which relies on Dojo 0.3.1?
-Christian
Posted by: knothec on August 11, 2007 at 11:40 AM
Hi, Yes you could use them together. This was the whole idea behind the namespacing. I've successfully used Dojo 0.4.3 and Dojo 0.9 (early access) together. The only thing to keep in mind is that if you're doing anything with the djConfig it will be shared (this is a configuration file for Dojo).
Give it a try and let me know if there is a problem.
-Greg
Posted by: gmurray71 on August 11, 2007 at 07:26 PM | http://weblogs.java.net/blog/gmurray71/archive/2007/08/renamespacing_d.html | crawl-002 | refinedweb | 130 | 67.76 |
Primitives
Primitives are the built-in object types that all other objects are composed from. They can be created through literals, atomic expressions that evaluate to a value.
All primitive values in Magpie are immutable. That means that once created, they cannot be changed.
3 is always 3 and "hi" is always "hi".
Booleans
A boolean value represents truth or falsehood. There are two boolean literals, true and false. Its class is Bool.
Numbers
Magpie has two numeric types, integers and floating-point. Number literals look like you expect:
// Ints
0
1234
-5678

// Floats
3.14159
1.0
-12.34
Integers and floating-point numbers are instances of different classes (Int and Float respectively) and there are no implicit conversions in Magpie. This means that if you have a method that expects a Float, then passing 1 won't work. You'll need to pass 1.0.
In practice, this is rarely an issue. Most arithmetic operations have specializations for both kinds of numbers and will work fine regardless of what you pass. In cases where mixed argument types are passed, then the result will be a float.
If you want to write code that works generically with either kind of number, then you want the Num class. Both Int and Float inherit from that, so a method specialized to Num will accept either type.
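For illustration (a made-up method, following the def syntax shown later on this page), a method specialized to Num accepts both:

```magpie
def double(n is Num)
    n + n
end

double(2)    // Works on an Int...
double(1.5)  // ...and on a Float.
```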
Strings
String literals are surrounded in double quotes:
"hi there"
A couple of escape characters are supported:
"\n" // Newline. "\"" // A double quote character. "\\" // A backslash.
Their class is String.
Characters
Characters are instances of the Char class and represent the individual Unicode code points that make up strings. When you index into a string or iterate over it, character objects will be returned. They also have a literal form, which is the character surrounded by single quotes:
'A'
'!'
There will also be escape sequences for characters, but they haven't been implemented yet.
Nothing
Magpie has a special primitive value nothing, which is the only value of the class Nothing. (Note the difference in case.) It functions a bit like void in some languages: it indicates the absence of a value. An if expression with no else block whose condition is false evaluates to nothing. Likewise, a method like print() that doesn't return anything actually returns nothing.
It's also similar to null in some ways, but it doesn't have the problems that null has in most other languages. It's rare that you'll actually need to write nothing in code since it can usually be inferred from context, but it's there if you need it.
Since nothing is in its own class, a method that is specialized to another class won't receive nothing instead. In Magpie, you never need to do this:
def length(string is String)
    // Make sure we got a string.
    if string == nothing then return 0
    string count
end
If your method expects a string, you'll get a string, not a string-or-maybe-nothing. | http://magpie.stuffwithstuff.com/primitives.html | CC-MAIN-2017-30 | refinedweb | 503 | 74.49 |
Query regarding appending to a string in Scala
You can perform this task in two ways, as below.
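The code for the two approaches was lost from this page; a sketch of what they commonly look like (an illustrative reconstruction, not the original answer, assuming the goal is to build up a String, which is immutable in Scala) might be:

```scala
// Way 1: plain concatenation on an immutable String.
// Each + allocates a brand-new String.
def appendConcat(base: String, suffix: String): String =
  base + suffix

// Way 2: a mutable StringBuilder, which is cheaper when
// appending many pieces in a loop.
def appendBuilder(base: String, parts: Seq[String]): String = {
  val sb = new StringBuilder(base)
  parts.foreach(p => sb.append(p))
  sb.toString
}

println(appendConcat("Hello", ", world"))            // Hello, world
println(appendBuilder("Hello", Seq(", ", "world")))  // Hello, world
```

The first form is fine for a handful of concatenations; the StringBuilder form avoids allocating a new String on every step, which matters inside loops.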
Hope this helps!
Thanks!!
Java 2 Software Development Kit
Since its introduction four years ago, Sun's Java technology has appeared on nearly every size and type of computer system, from IBM's MVS-powered mainframes to 3Com's Palm organizers.
New versions of the Java Development Kit and Java Virtual Machine are available first on Sun's systems and on Microsoft Windows, but many months may pass before they appear on other platforms. Sun's Java Software division produces reference implementations only for Solaris and for Windows, and licenses source code to other system vendors who want to port the JDK or JVM to their own hardware and operating systems.
Some vendors, like IBM, have their ports of new Java versions ready almost immediately. Unfortunately, users of platforms such as Apple's Macintosh have had long waits for the latest JDK. Linux users didn't see JDK 1.1 until mid-1997, although Sun introduced it in December 1996, and they have been eagerly awaiting Java 2 since November of last year.
Sun had no problem figuring out with whom to arrange Java licensing agreements at the major system vendors, but it did take them a while to identify a licensee for the widely dispersed Linux community. They eventually licensed the JDK source code to Steve Byrne and the Blackdown Java-Linux Porting Team, a group of Linux developers.
The Blackdown team recently released a preliminary version of the Java 2 SDK after testing it extensively with the Java Compatibility Kit, a suite of more than 16,000 tests which verify a port's conformance to the published Java language and runtime specifications.
You can download the Java 2 SDK for Linux from the Blackdown mirror web sites, found at. Note that Sun recently changed the name of the software from JDK 1.2 to Java 2 SDK, although much of the on-line material still refers to the earlier name. The compressed SDK is nearly 24MB, so use a fast network connection if possible. Full documentation for Java 2 is packaged separately from the SDK. Download it directly from Sun's web site at java.sun.com/products/jdk/1.2/docs/index.html.
Installation consists of uncompressing and extracting the jdk1.2pre-v2.tar.bz2 file archive into your home directory or other location. Use the command:
bzip2 -d jdk1.2pre-v2.tar.bz2 tar xvf jdk1.2pre-v2.tar
In order to use the JDK, you must have a Linux distribution that includes the glibc2 version of the GNU C library. Most distributions, such as Red Hat 5.2, include this library, which is also called libc6.
After installation, simply set your PATH environment variable to include the JDK's bin directory, and you're ready to go:
export PATH={JDK-install-directory}/jdk1.2/bin:$PATH
To verify your installation, type:
java -versionThe JVM should run and report its version as “java version 1.2”, along with additional information about the implementation.
The Java 2 SDK includes several tools for development and testing, including the javac compiler, the jar Java archive manager that works similarly to the tar command, and the appletviewer program for testing applets. “Applets” are Java programs intended to run within a browser; Java “applications” are like any other program and don't need a browser to run. Current versions of Netscape and Internet Explorer do not directly support Java 2, so you will need to use appletviewer to test your applets.
The Java 2 SDK comes with a compiler, but does not include an interactive development environment. You can, of course, use vi or emacs to create your source files, then compile and run them from your command line. For example, create the file HelloLinux.java containing the lines:
public class HelloLinux { public static void main(String[] args) { System.out.println("Hello, Linux!"); } }
Compile your program with the command javac HelloLinux.java. The compiler will produce the file HelloLinux.class, which you run using the command java HelloLinux. If you encounter problems trying to compile and run Java programs, check your PATH and CLASSPATH environment variables. CLASSPATH lists the directories where the JVM looks for class files. If your own files or the class files required for a separately installed Java library are not referenced in CLASSPATH, the JVM cannot find and | http://www.linuxjournal.com/article/3424?quicktabs_1=1 | CC-MAIN-2014-49 | refinedweb | 718 | 54.22 |
Hello, I've got a problem with APM and Raptor 50 heli
I trying to set up APM on Raptor 50 with Glow engine. I connect my JR RC control to APM next variant:
RC APM
rx in out
1 trotle(pitch) ----> 3 3
2 ailer ----> 1 1
3 elevator ----> 2 2
4 rudder ----> 4 4
my transmitter sets up in "airoplane mode"
The essence of the problem: I don't understand how set up the swash plate. On helicopter installed 90 degrees swash plate
I tried different variants of degrees setup. For example:
servo1 90 (aileron??)
servo2 0 (elevator??)
servo3 180 (pitch??)
ailerons works normal, but elevator and pitch mix some strange.
please help me....
p.s. Sorry for bad English
Views: 1736
<![if !IE]>▶<![endif]> Reply to This
I will give it a go today, Robert has a good plan, I was waiting for 2.4 release before starting, it has the best chance of working well.
<![if !IE]>▶<![endif]> Reply
Sweet! I look forward to seeing the results.
<![if !IE]>▶<![endif]> Reply
Here's what Randy said about the issue in another thread.
"It's not terribly difficult to get a non-mixed input working but it's lower down on the to-do list so it won't be done for a while.
In case someone else wants to have a try at it - there are separate routines in the Heli.pde for both turning the ccpm inputs from the RX into separate roll, pitch, yaw and collective commands and the opposite of turning these commands back into ccpm outputs to the servos. The catch is that then we would need to have a new set-up routine to capture min + max servo values, servo direction, etc. At the moment, the set-up only works for CCPM."
Sounds like the routines for setting up the servos is designed around CCPM. The unmixed swashplate setup will apparently need entirely new setup routines.
This is getting a little over my head. I don't have ArduPilot hardware yet myself. I just recently ordered my first setup, and I'll be waiting for it for a while because it's a new APM 2.0 preorder (should ship this next week I think, judging by the order numbers they're posting on their status updates). I've just been doing all my flying in HiL mode to get a feel for how the software is supposed to work, using a barebones Arduino Mega 2560 as the platform until I get the real hardware.
Once I get my own hardware and get to play with the real thing, I'll have more insight into how the software's put together, and hopefully I'll be more help.
<![if !IE]>▶<![endif]> Reply
@Chris, All,
That comment was from the old ccpm set-up which was much more complicated and actually required you to send in mixed signals from the TX. I.e. the TX had to be set-up in helicopter mode. That's been changed now so the TX sends regular unmixed signals just like it does for the quad. So the solution is easier now.
<![if !IE]>▶<![endif]> Reply
<![if !IE]>▶<![endif]> Reply
The scaling is actually quite easy. If I knew exactly what I had to scale TO, I'd be done in 1 hour. I just haven't got my head around that yet. It's that RC_Channel library, it's not clear. It's a good library I think, it's just not clear, needs commenting.
<![if !IE]>▶<![endif]> Reply
While working on latest ACM 2.4 pull from Git, I find this problem,
How do you enable show line numbers in Arduino 23 relax MacOS IDE? I can't find the options.
I have a compile error, ArduCopter.cpp:6388:2: error: #endif without #if
but, Arducopter.pde only has ~ 2150 lines??
Any Ideas?
There are so many #if #endif conditions to check...
<![if !IE]>▶<![endif]> Reply
That's strange Mark. I would suggest pulling from the download section and try compiling from that. The GIT is in a constant state of flux. You never know what you're going to get. You could have pulled while somebody was in the middle of checking in changes.
<![if !IE]>▶<![endif]> Reply
Thanks Robert, I wish it was that easy.
I started with clean code, all was compiling fine with my changes before I applied my unknown Vodoo?
I must have deleted a #if statement some where?
Current compile errors from code for new Frame_Config = Heli_90_Frame for Raptors etc.
I can probably fix all soon but, the 1st one, I have spent hours on !.
I usually compile after a few changes but this error got by me.
ArduCopter.cpp:6691:2: error: #endif without #if
ArduCopter.cpp: In function 'void heli_reset_swash()':
Heli_90:64: error: 'heli_rollFactor' was not declared in this scope
Heli_90:74: error: 'heli_pitchFactor' was not declared in this scope
ArduCopter.cpp: In function 'void heli_init_swash()':
Heli_90:110: error: 'heli_pitchFactor' was not declared in this scope
Heli_90:114: error: 'heli_rollFactor' was not declared in this scope
Heli_90:128: error: 'heli_servo_out' was not declared in this scope
ArduCopter.cpp: In function 'void heli_move_swash(int, int, int, int)':
Heli_90:194: error: 'heli_rollFactor' was not declared in this scope
Heli_90:195: error: 'heli_pitchFactor' was not declared in this scope
Heli_90:207: error: 'heli_servo_out' was not declared in this scope
Heli_90:211: error: 'heli_servo_out_count' was not declared in this scope
ArduCopter.cpp: In function 'void init_motors_out()':
motors_quad:32: error: redefinition of 'void init_motors_out()'
Heli_90:250: error: 'void init_motors_out()' previously defined here
ArduCopter.cpp: In function 'void motors_output_enable()':
motors_quad:39: error: redefinition of 'void motors_output_enable()'
Heli_90:257: error: 'void motors_output_enable()' previously defined here
ArduCopter.cpp: In function 'void output_motors_armed()':
motors_quad:47: error: redefinition of 'void output_motors_armed()'
Heli_90:270: error: 'void output_motors_armed()' previously defined here
ArduCopter.cpp: In function 'void output_motors_disarmed()':
motors_quad:154: error: redefinition of 'void output_motors_disarmed()'
Log:1: error: 'void output_motors_disarmed()' previously defined here
ArduCopter.cpp: In function 'void output_motor_test()':
motors_quad:185: error: redefinition of 'void output_motor_test()'
Log:11: error: 'void output_motor_test()' previously defined here\
<![if !IE]>▶<![endif]> Reply
I don't think I can help you with that, from here.
I might as well mention this here. I've created a remote clone of the Arducopter project which I'll be using for this work. I don't know if maybe we should all use it to avoid duplicating work?
(I haven't made any changes yet).
But, I think that you got a good idea to start with. What I propose is to change all instances in the code of "Heli_Frame" to "Heli_Frame_CCPM". This will represent our current code.
Then we make a new #define, but I would prefer to call it "Heli_Frame_Mechanical_Mixed" or "Heli_Frame_H1". I want to make sure it's very clear what is what. Which is best? I think that "Heli_90_Frame" could be confused with 90° CCPM mixing, which does exist. Some helis have 4 swash servos at 90°. Yes, this means it's overconstrained, but for some reason it is done.
<![if !IE]>▶<![endif]> Reply
I would not change existing HELI_FRAME just use it as is, less change is better /easy?
HELI_H1_FRAME seems less descriptive than HELI_90_FRAME, Maybe HELI_NORM_FRAME?
Or maybe CCPM_FRAME and NORM_FRAME fewer characters, but still descriptive.
I am aware of 90° CCPM, but It is a special case that requires no Pitch servo, but uses 4 servos for collective mix (more power to move swash plate) similar to 120° CCPM mixing but at 90°
I found the missing "}" that caused Error #1, Whoo!
<![if !IE]>▶<![endif]> Reply
A lot of times its called single servo, CCPM_FRAME ........ SINGLE_SERVO_FRAME....although its a bit long in the name :)
<![if !IE]>▶<![endif]> Reply | http://diydrones.com/forum/topics/problems-with-apm-and-raptor-50?commentId=705844%3AComment%3A787761&xg_source=activity | CC-MAIN-2013-48 | refinedweb | 1,288 | 73.78 |
question about how to Search for custom text in the ListView
By
nacerbaaziz, in AutoIt General Help and Support
Recommended Posts
Similar Content
- AndyS19
I'm trying to implement a Ctl-F popup box that looks something like the one that Notepad uses, but I'm not havine much luck. I intend to get it working, then beef up the popup's contents to add several checkboxes, buttons and radio boxes.
What my example code does is to use InputBox(), but that's not what I want.
Here is my test code:
#include <Array.au3> #include <GUIConstantsEx.au3> #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 Opt("GUICloseOnESC", 1) Opt("GUIOnEventMode", 1) Opt('MustDeclareVars', 1) OnAutoItExitRegister("ExitStageLeft") Opt("WinTitleMatchMode", -2) Global $hGUI _Main() Func _Main() $hGUI = GUICreate("Test ^F", 300, 200) setupSpecialKeysHandlers() GUISetOnEvent($GUI_EVENT_CLOSE, "Event_GUIClose") GUISetState() While (1) Sleep(157) WEnd EndFunc ;==>_Main Func handle_CTRL_F_key() Local $str $str = InputBox("Search", "Enter the string to search for:") ConsoleWrite("+++: $str ==>" & $str & "<==" & @CRLF) EndFunc ;==>handle_CTRL_F_key Func ExitStageLeft() Exit (99) EndFunc ;==>ExitStageLeft Func Event_GUIClose() Exit (1) EndFunc ;==>Event_GUIClose Func setupSpecialKeysHandlers() Local $ar, $parts, $key, $handler, $id Local $aAccelKeys[1][2] ; Create a table of Special keys and their handlers $ar = StringSplit("", "") _ArrayAdd($ar, "^f - handle_CTRL_F_key ") ReDim $aAccelKeys[UBound($ar) - 1][2] ; Now, create $aAccelKeys array with the table data. ; For each entry, create a Dummy GUI and associate its ; ID with the special key. For $ndx = 1 To UBound($ar) - 1 $parts = StringSplit($ar[$ndx], "-", 2) $key = StringStripWS($parts[0], 8) $handler = StringStripWS($parts[1], 8) $id = GUICtrlCreateDummy() $aAccelKeys[$ndx - 1][0] = $key $aAccelKeys[$ndx - 1][1] = $id GUICtrlSetOnEvent($id, $handler) Next GUISetAccelerators($aAccelKeys) ; Setup the Special keys hooks EndFunc ;==>setupSpecialKeysHandlers
- By Atoxis
Howdy, I've gone through a lot of au3 forums, and I once had a working Imagesearch script that I got from here. However, and i'm just totally not sure how but my imagesearch scripts aren't working anymore.
I'm not new to au3 but i'm not the most experienced with it's syntax/commands.
Anyways, I've looked over the big threads involving imagesearch.
Does anyone have a working Imagesearch x64 for win10 that is currently working as of the date with the post.
Dll's and what not is fine, just when I tell the script to run, I want to be able to find the image on the screen!
Can't find a working copy so if anyone has one please send it my way lol.
I've taken all the imagesearch downloads and what not and have played with them but I can't get any of them working on my end, despite others saying they're working.
Thanks.
- By lenclstr746
HELLO GUYS
I'm a work on a background see and click bot project
I can complete it if your help me
(using imagesearch , gdi+ and fastfind)
-! | https://www.autoitscript.com/forum/topic/192393-question-about-how-to-search-for-custom-text-in-the-listview/?tab=comments | CC-MAIN-2018-51 | refinedweb | 480 | 52.63 |
iPcMovable Struct ReferenceControl moving an iPcMesh. More...
#include <propclass/move.h>
Inheritance diagram for iPcMovable:
Detailed DescriptionControl moving an iPcMesh.
Definition at line 44 of file move.h.
Member Function Documentation
Add a constraint.
Get the current mesh on which we're working.
Relative move.
Check constraints too. This function will correctly update the current sector if a portal is traversed. Returns:
- CEL_MOVE_FAIL: if no movement was possible.
- CEL_MOVE_SUCCEED: if movement was possible.
- CEL_MOVE_PARTIAL: if object could move partially.
Move object while checking constraints.
This routine will ignore the previous position and will just check if it is possible to move to the given position. Returns:
- CEL_MOVE_FAIL: if no movement was possible.
- CEL_MOVE_SUCCEED: if movement was possible.
Remove all constraints.
Remove a constraint.
Set mesh to move.
If not set a default mesh will be found from the parent entity.
The documentation for this struct was generated from the following file:
Generated for CEL: Crystal Entity Layer 1.2 by doxygen 1.4.7 | http://crystalspace3d.org/cel/docs/online/api-1.2/structiPcMovable.html | CC-MAIN-2014-42 | refinedweb | 164 | 55 |
pwauth - authenticator for mod_authnz_external and the Apache HTTP Daemon
pwauth
Pwauth is an authenticator designed to be used with mod_auth_external or mod_authnz_external and the Apache HTTP Daemon to support reasonably secure web authentication out of the system password database on most versions of Unix. Particulary - secure authentication against PAM. The simplest test pwauth is to start a root shell and just run pwauth. It will attempt to read the login and password from standard input, so type a login name, hit return, then type a password, and hit return (the password will echo on your screen). The check the status code that was returned (in csh: "echo $status" in sh: "echo $?"). If the login/password were correct you should get a zero status code. If not, you will get some other value. See below the list of status codes to find the meaning of the various values returned. Any values 50 or greater indicate a configuration error.
0 STATUS_OK Login OK. 1 STATUS_UNKNOWN Nonexistant login or (for some configurations) incorrect password. 2 STATUS_INVALID Incorrect password (for some configurations). 3 STATUS_BLOCKED Uid number is below MIN_UNIX_UID value configured in config.h. 4 STATUS_EXPIRED Login ID has expired. 5 STATUS_PW_EXPIRED Login’s password has expired. 6 SSTATUS_NOLOGIN Logins to system have been turned off (usually by /etc/nologin file). 7 STATUS_MANYFAILES Limit on number of bad logins exceeded. 50 STATUS_INT_USER pwauth was invoked by a uid not on the SERVER_UIDS list. If you get this error code, you probably have SERVER_UIDS set incorrectly in pwauth’s config.h file. 51 STATUS_INT_ARGS pwauth was not given a login & password to check. The means the passing of data from mod_auth_external to pwauth is messed up. Most likely one is trying to pass data via environment variables, while the other is trying to pass data via a pipe. 52 STATUS_INT_ERR one of several rare and unlikely internal errors occurred. You’ll have to read the source code to figure these out. 53 STATUS_INT_NOROOT pwauth was not able to read the password database. Usually this means it is not running as root. (PAM and login.conf configurations will return 1 in this case.)
pwauth was written by Jan Wolter <[email protected]>. This manual page was written by Hai Zaar <[email protected]>, for the Debian project (but may be used by others). 2009-05-02 | http://huge-man-linux.net/man8/pwauth.html | CC-MAIN-2017-26 | refinedweb | 390 | 67.65 |
daily,
paycheck,
saverate,
loanrate, one per line. If there is no file with the inputted name, the program should print out an error message and return 1 instead of zero. If any one of the above values is missing in the input file or has a negative value, the program should print an error message and return 2 instead of zero.
sudo apt-get install gnuplot-x11You'll have to enter your VM password. After this, gnuplot should be available. To make sure everything's good, try the command:
echo "plot sin(x) with lines" | gnuplot -pIf a window with a plot of the sine function pops up, you're good to go.
.close()to close the file stream you use to to this. Then add the following code at the end of your program right before "return 0"
system("gnuplot -p -e 'plot \"out.txt\" with lines'");which instructs your program to ask the operating system to run the command
gnuplot -p -e 'plot "out.txt" with lines'which is a command that launches gnuplot and has it plot out.txt. In order for this to compile, you will have to add
#include <cstdlib>to your list of #include's. This is required in order to use the
system( )function.
$ ./part3 Enter input file name: input0.txt daily = 40 paycheck = 550 saverate = 3.5 loanrate = 13.25 Enter initial account balance: 500 Enter length of simulation in weeks: 40 Ending balance = 300 Daily average balance = 183.571
interest = 85 * 0.02 / 365but when the balance is $-85.00 the interest for the day would be
interest = -85 * 0.14 / 365Note that your interest is negative when you're account is in the negative indicating that you'll lose money, and positive when your account is in the positive. That means when you owe money, you'll owe even more money. Although interest is compounded daily, it is only added to your account every 30 days. That means interest accrues (or builds up) in its own little side account until 30 days have been completed. Then at the very beginning of the 31st day, the interest is added or deducted from your account. Note that at the end of the 31st day, the first bit of the next installment of interest is accrued. Now your program should output the current amount of accured in interest along with the ending balance and daily average balance.
$ ./part4 Enter input file name: input1.txt daily = 40 paycheck = 550 saverate = 2.75 loanrate = 19 Enter initial account balance: 450 Enter length of simulation in weeks: 64 Ending balance = 125.062 Ending interest = -1.61362 Daily average balance = 73.4497
submit proj01 *.cppThis will submit all C++ files in that directory, the syntax of the command is submit Project_Name (in this case proj01) and the files you wish to submit. Feel free to include any additional files you believe may be beneficial. | https://www.usna.edu/Users/cs/wcbrown/courses/F14IC210/project/p01/ | CC-MAIN-2018-22 | refinedweb | 487 | 66.33 |
In this tutorial, you will learn how to create your first Web API in ASP.Net Core 2. You will also learn how to deploy your first ASP.Net Core Web API and how to install it on your live server. In addition, I will also demonstrate on how to consume those ASP.Net Core Web APIs using Postman program.
Let's get started with creating a new project in Visual studio 2017 by going to File Menu > New > Project.
Select the ASP.Net Core Web Application and enter the project name as MyFirstASPNetCoreWebAPI and click OK button. Feel free to change the project name if you prefer.
It will then load a new dialog window to select a template. Make sure the .Net Core and ASP.Net Core 2.0 is selected. Choose an Empty and click OK button.
Your new ASP.Net Core project will be created and will have the following files structures.
We are going to modify the Startup class file. What we want to do is to add a start up code for the Web API services. Open the Startup class file and paste the following code.
using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using Microsoft.AspNetCore.Builder; using Microsoft.AspNetCore.Hosting; using Microsoft.AspNetCore.Http; using Microsoft.Extensions.DependencyInjection; namespace MyFirstASPNetCoreWebAPI { public class Startup { // This method gets called by the runtime. Use this method to add services to the container. // For more information on how to configure your application, visit public void ConfigureServices(IServiceCollection services) { services.AddMvcCore(); } // This method gets called by the runtime. Use this method to configure the HTTP request pipeline. public void Configure(IApplicationBuilder app, IHostingEnvironment env) { app.UseMvc(builder => { builder.MapRoute("default", "api/{controller}/{action}/{id?}"); }); } } } a folder named WebAPI.
Right click of the new created folder and Click Add menu > New Item.
Then choose add the Web API Controller and name the API controller as TestingController.
Open the TestingController.cs file and paste the following code. Wha we are going to do is to add multiple example of API Method that will perform HttpGet and HttpPost methods.
using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using Microsoft.AspNetCore.Mvc; namespace MyFirstASPNetCoreWebAPI.WebAPI { [Route("api/[controller]")] public class TestingController : Controller { //Get /api/testing [HttpGet] public string Get() { return "Welcome to your first ASP.Net Core Web API Tutorial."; } //Get /api/testing/value1 [HttpGet("{value}")] public string Get(string value) { return "You have entered a string value : " + value; } //Get /api/testing/value1/value2/value3 [HttpGet("{value1}/{value2}/{value3?}")] public string Get(int value1, int value2, int? value3) { if (value3.HasValue) { return "You have entered a three int values : " + value1 + " " + value2 + " " + value3; } else { return "You have entered a two int values : " + value1 + " " + value2; } } //Get /api/testing/GetProducts [HttpGet("GetProducts")] public IEnumerable<string> GetProducts() { return new string[] { "product1", "product2" }; } //Get /api/testing/GetItems/2 [HttpGet("GetItems/{value}")] public IEnumerable<string> GetItems(string value) { return new string[] { "item1", "item2" }; } //Post /api/testing/PostItems [HttpPost("PostItems")] public string PostItems() { return "You have performed a basic post."; } ////Post /api/testing/PostItemsWithInt [HttpPost("PostItemsWithInt")] public string PostItemsWithInt([FromBody] int num) { return "You have post a message int value: " + num.ToString(); } } }
One thing to remember, if you do not specify the routing name, the method has to contain unique parameters types. See the following example:
//Get /api/testing/value1 [HttpGet("{value}")] public string Get(string value) { return "You have entered a string value : " + value; } //Get /api/testing/value1 [HttpGet("{value}")] public string Get2(string value) { return "This is the second method. You have entered a string value : " + value; }
Although above both methods have different name but contains the same string parameter types, this will cause a problem. To fix this issue, you can include a routing name. See the following example to fix above issue.
//Get /api/testing/value1 [HttpGet("{value}")] public string Get(string value) { return "You have entered a string value : " + value; } //Get /api/testing/Get2/value1 [HttpGet("Get2/{value}")] public string Get2(string value) { return "This is the second method. You have entered a string value : " + value; }
At the end of article, I will show you how we can consume the ASP.Net Core Web API using Postman program.
Open the Notepad program by right click and run As Administrator as we are going to edit hosts file.
Then open the hosts file.
In the hosts file, please add the following domain address. Note: You change the host name to any domain address.
127.0.0.1 local.coreapi.com
The next step is to create a new IIS local site. Open your IIS manager and Add a New Website. Note if you do not have IIS, you can just debug the site by pressing F5. The reason I use IIS so I can show you how to run the site on your local computer. Add the information like below.
Under the application pools section in IIS. Select the local.apicore.com application pool and right click and select Advanced Settings. Set the ASP.Net framework to No Managed Code. ASP.Net runs in a separate process and does not rely on loading the CLR. So there is no need to set the .Net CLR version. Then set the identity to ApplicationPoolIdentity.
The next important part is to setup the security folder of local.apicore.com. We need to give the read and write access for the local.coreapi.com application pool. Right click the folder of local.apicore.com in the wwwroot and select Properties. In the Security Tab, click the Edit button.
Then click Add button, and add the user IIS AppPool/local.coreapi.com.
Grant the user permission as below.
It is time to published our First ASP.Net Core API project. Firstly we need to create our profile for deployment. Right click of the project and select Publish.
In the publish app section, choose the midlle option which is (IIS,Ftp,etc) and click Publish button.
Choose the publish method as File System and specify the location of the IIS site click Next and Save.
Once the profile deployment has been saved and the files will be published on the selected IIS site folder. The next part is to install the .Net Core Windows Server Hosting Bundle. Please go to the following url and download the latest version of Server Hosting Installer. This need to be installed otherwise you will receive Http Error 500.19 Internal Server Error.
Once the installer has been installed, you may want to restart your IIS. You can now start view your Web api by browsing the following url in your browser.
Here is a list of screenshots of using Postman program to consume our ASP.Net Core Web API.
Download Files
You can download the ASP.Net Core Web API example on the following link.
If you need further help or have any questions, feel free to post your comment or question below. | https://bytutorial.com/blogs/asp-net-core/how-to-create-your-first-web-api-in-aspnet-core-2 | CC-MAIN-2021-04 | refinedweb | 1,159 | 61.43 |
This section describes how to unpack, make, and run LAMMPS, for both new and experienced users.

2.1 What's in the LAMMPS distribution
When you download LAMMPS you will need to unzip and untar the downloaded file with the following commands, after placing the file in an appropriate directory.
gunzip lammps*.tar.gz tar xvf lammps*.tar
This will create a LAMMPS directory containing two files and several sub-directories:
Read this first:
Building LAMMPS can be non-trivial. You will likely need to edit a makefile, there are compiler options, additional libraries can be used (MPI, FFT), etc. Please read this section carefully. If you are not comfortable with makefiles, or building codes on a Unix platform, or running an MPI job on your machine, please find a local expert to help you. Many compiling, linking, and run problems that users encounter are specific to their own system rather than to LAMMPS; if you cannot resolve such a problem locally, send an email to the developers.
If you succeed in building LAMMPS on a new kind of machine (one for which there isn't a similar Makefile in the distribution), send your Makefile to the developers and we'll include it in future LAMMPS releases.
Building a LAMMPS executable:
The src directory contains the C++ source and header files for LAMMPS. It also contains a top-level Makefile and a MAKE sub-directory with low-level Makefile.* files for several machines. Typing "make" or "gmake" from within the src directory with no target lists the available machine choices; typing a command like "make foo" then builds for the machine named foo.
If you get no errors and an executable like lmp_linux or lmp_mac is produced, you're done; it's your lucky day.
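The target-per-machine convention can be sketched with a mock src/MAKE layout. The platform names below are placeholders and nothing is compiled here; the point is just that each MAKE/Makefile.name file corresponds to one "make name" build target.

```shell
# Mock the src/MAKE layout: one low-level Makefile.<name> per target
# (placeholder names, no real compilation happens).
cd "$(mktemp -d)"
mkdir -p src/MAKE
touch src/MAKE/Makefile.linux src/MAKE/Makefile.mac src/MAKE/Makefile.foo
# Derive the list of build targets from the Makefile.* suffixes,
# which is essentially the list "make" prints when run with no target:
ls src/MAKE | sed 's/^Makefile\./make /'
```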
Errors that can occur when making LAMMPS:
(1) If the make command breaks immediately with errors that indicate it can't find files with a "*" in their names, this can be because your machine's make doesn't support wildcard expansion in a makefile. Try gmake instead of make. If that doesn't work, try invoking make with a -f switch that explicitly names the makefile it should use.
(2) Other errors typically occur because the low-level Makefile isn't setup correctly for your machine. If your platform is named "foo", you need to create a Makefile.foo in the MAKE sub-directory, using whichever existing Makefile.* is closest to your platform as a starting point. For the compiler, we suggest using the free Intel icc compiler, which you can download from Intel's compiler site.
(3) If you want LAMMPS to run in parallel, you must have an MPI library installed on your platform. If you do not use "mpicc" as your compiler/linker, then Makefile.foo needs to specify where the mpi.h file (-I switch) and the libmpi.a library (-L switch) is found. If you are installing MPI yourself, we recommend Argonne's MPICH 1.2 which can be downloaded from the Argonne MPI site. LAM MPI should also work. If you are running on a big parallel platform, your system people or the vendor should have already installed a version of MPI, which will be faster than MPICH or LAM, so find out how to build and link with it. If you use MPICH or LAM, you will have to configure and build it for your platform. The MPI configure script should have compiler options to enable you to use the same compiler you are using for the LAMMPS build, which can avoid problems that may arise when linking LAMMPS to the MPI library.
(4) If you just want LAMMPS to run on a single processor, you can use the STUBS library in place of MPI, since you don't need an MPI library installed on your system. See the Makefile.serial file for how to specify the -I and -L switches. You will also need to build the STUBS library for your platform before making LAMMPS itself. From the STUBS dir, type "make" and it will hopefully create a libmpi.a suitable for linking to LAMMPS. If the build fails, you will need to edit the STUBS/Makefile for your platform.
The file STUBS/mpi.cpp has a CPU timer function.
(5) If you want to use the particle-particle particle-mesh (PPPM) option in LAMMPS for long-range Coulombics, you must have a 1d FFT library installed on your platform. This is specified by a switch of the form -DFFT_XXX where XXX = INTEL, DEC, SGI, SCSL, or FFTW. All but the last one are native vendor-provided libraries. FFTW is a fast, portable library that should work on any platform. You can download it from www.fftw.org. Use version 2.1.X, not the newer 3.0.X. Building FFTW for your box should be as simple as ./configure; make. Whichever FFT library you have on your platform, you'll need to set the appropriate -I and -L switches in Makefile.foo.
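Pulling together the MPI settings from item (3) and the FFT settings from item (5), the relevant lines of a Makefile.foo might look like the fragment written out below. This is only a hypothetical sketch: every path and library name (/usr/local/mpich, /usr/local/fftw, -lmpich, -lfftw) is a placeholder for wherever MPI and FFTW are actually installed on your machine.

```shell
# Write a hypothetical Makefile.foo fragment showing the -I/-L/-D
# switches discussed above (all paths are placeholders).
cd "$(mktemp -d)"
cat > Makefile.foo <<'EOF'
CC =        g++
CCFLAGS =   -O -DFFT_FFTW -I/usr/local/mpich/include -I/usr/local/fftw/include
LINK =      g++
LINKFLAGS = -O -L/usr/local/mpich/lib -L/usr/local/fftw/lib
USRLIB =    -lmpich -lfftw
EOF
# Confirm the FFT switch ended up in the compile flags:
grep -n 'FFT_FFTW' Makefile.foo
```

Swap -DFFT_FFTW for -DFFT_NONE (and drop the FFTW paths) if you exclude PPPM as described in item (6).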
If you examine fft3d.c and fft3d.h you'll see it's possible to add other vendor FFT libraries via #ifdef statements in the appropriate places. If you successfully add a new FFT option, like -DFFT_IBM, please send the developers an email; we'd like to add it to LAMMPS.
(6) If you don't plan to use PPPM, you don't need an FFT library. Use a -DFFT_NONE switch in the CCFLAGS setting of Makefile.foo, or exclude the KSPACE package (see below).
(7) There are a few other -D compiler switches you can set as part of CCFLAGS. The read_data and dump commands will read/write gzipped files if you compile with -DLAMMPS_GZIP. It requires that your Unix support the "popen" command. Using one of the -DPACK_ARRAY, -DPACK_POINTER, and -DPACK_MEMCPY options can make for faster parallel FFTs (in the PPPM solver) on some platforms. The -DPACK_ARRAY setting is the default. If you compile with -DLAMMPS_XDR, the build will include XDR compatibility files for doing particle dumps in XTC format. This is only necessary if your platform does have its own XDR files available. See the Restrictions section of the dump command for details.
(8) FFT libraries it will use, all you need to do from the src directory is type one of these 2 commands:
make foo gmake foo
You should get the executable lmp_foo when the build is complete.
Additional build tips:
(1) Building LAMMPS for multiple platforms.
You can make LAMMPS for multiple platforms from the same src directory. Each target creates its own object sub-directory called Obj_name where it stores the system-specific *.o files.
(2) Cleaning up.
Typing "make clean" will delete all *.o object files created when LAMMPS is built.
(3) On some machines with some compiler options, the Coulomb tabling option that is enabled by default for "long" pair styles such as lj/cut/coul/long and lj/charmm/coul/long does not work. Tables are used by these styles since it can offer a 2x speed-up. A symptom of this problem is getting wildly large energies on timestep 0 of the examples/peptide simulation.
Here are several work-arounds. Coulomb tables can be disabled by setting "table 0" in the pair_modify command.
The associated files (e.g. pair_lj_cut_coul_long.cpp) can be compiled at a lower optimization level like -O2, or with the compiler flag -fno-strict-aliasing. The latter can be done by adding something like these lines in your Makefile.machine:
NOALIAS = -fno-strict-aliasing
pair_lj_cut_coul_long.o : pair_lj_cut_coul_long.cpp $(CC) $(CCFLAGS) $(NOALIAS) -c $<
pair_lj_charmm_coul_long.o : pair_lj_charmm_coul_long.cpp $(CC) $(CCFLAGS) $(NOALIAS) -c $<
On a Macintosh, try compiling the pair "long" files without the -fast compiler option.
(4) Building for a Macintosh.
OS X is BSD Unix, so it already works. See the Makefile.mac file.
(5) Building for MicroSoft Windows.
I've never done this, but LAMMPS is just standard C++ with MPI and FFT calls. You can use cygwin to build LAMMPS with a Unix make; see Makefile.cygwin. Or you should be able to pull all the source files into Visual C++ (ugh) or some similar development environment and build it. In the src/MAKE/Windows directory are some notes from users on how they built LAMMPS under Windows, so you can look at their instructions for tips. Good luck - we can't help you on this one.
The source code for LAMMPS is structured as a large set of core files which are always used, plus optional packages, which are groups of files that enable a specific set of features. For example, force fields for molecular systems or granular systems are in packages. You can see the list of both standard and user-contributed packages by typing "make package".
The current list of standard packages is as follows:
There are also user-contributed packages which may be as simple as a single additional file or many files grouped together which add a specific functionality to the code. The difference between a standard package versus a user package don't necessarily meet these requirements. If you have problems using a feature provided in a user package, you will likely need to contact the contributor directly to get help. Information on how to submit additions you make to LAMMPS as a user-contributed package is given in this section of the documentation.
Any or all packages can be included or excluded when LAMMPS is built. The one exception is that to use the standard "opt" package, you must also be using the "molecule" and "manybody" packages. You may wish to exclude certain packages if you will never run certain kinds of simulations. This will keep you from having to build auxiliary libraries (see below) and will produce a smaller executable which may run a bit faster.
By default, LAMMPS includes only the "kspace", "manybody", and "molecule" packages. As described below, some standard packages require LAMMPS be linked to separately built library files, which will require editing of your src/MAKE/Makefile.machine.
Packages are included or excluded by typing "make yes-name" or "make no-name", where "name" is the name of the package. You can also type "make yes-standard", "make no-standard", "make yes-user", "make no-user", "make yes-all" or "make no-all" to include/exclude various sets of packages. These commands work by simply moving files back and forth between the main src directory and sub-directories with the package name, so that the files are seen or not seen when LAMMPS is built. After you have included or excluded a package, you must re-build LAMMPS.
Additional" will overwrite src files with files from the package directories if the package has been included. It should be used after a patch is installed, since patches only update the master package version of a file. Typing "make package-overwrite" will overwrite files in the package directories with src files. Typing "make package-check" will list differences between src and package versions of the same files.
To use the "meam" package you must build LAMMPS with the MEAM library in lib/meam, which computes the modified embedded atom method potential, which is a generalization of EAM potentials that can be used to model a wider variety of materials. This MEAM implementation was written by Greg Wagner at Sandia. To build LAMMPS with MEAM, you must use a low-level LAMMPS Makefile that includes the MEAM directory in its paths. See Makefile.linux_meam as an example. You must also build MEAM itself as a library before building LAMMPS, so that LAMMPS can link against it. This requires a F90 compiler. The library is built by typing "make" from within the meam directory with the appropriate Makefile, e.g. "make -f Makefile.icc". If one of the provided Makefiles is not appropriate for your system you can edit or add one as needed.
Note that linking a Fortran library to a C++ code can be problematic (e.g. Fortran routine names can't be found due to non-standard underscore rules) and typically requires additional C++ or F90 libraries be included in the link. You may need to read documentation for your compiler about how to do this correctly.
To use the "poems" package you must build LAMMPS with the POEMS library in lib/poems, which computes the constrained rigid-body motion of articulated (jointed) multibody systems. POEMS was written and is distributed by Prof Kurt Anderson's group at Rensselaer Polytechnic Institute (RPI). To build LAMMPS with POEMS, you must use a low-level LAMMPS Makefile that includes the POEMS directory in its paths. See Makefile.g++_poems as an example. You must also build POEMS itself as a library before building LAMMPS, so that LAMMPS can link against it. The POEMS library is built by typing "make" from within the poems directory with the appropriate Makefile, e.g. "make -f Makefile.g++". If one of the provided Makefiles is not appropriate for your system you can edit or add one as needed.
LAMMPS can be built as a library, which can then be called from another application or a scripting language. See this section for more info on coupling LAMMPS to other codes. Building LAMMPS as a library is done by typing
make makelib make -f Makefile.lib foo
where foo is the machine name. The first "make" command will create a current Makefile.lib with all the file names in your src dir. The 2nd "make" command will use it to build LAMMPS as a library. This requires that Makefile.foo have a library target (lib) and system-specific settings for ARCHIVE and ARFLAGS. See Makefile.linux for an example. The build will create the file liblmp_foo.a which another application can link to.
When used from a C++ program, the library allows one or more LAMMPS objects to be instantiated. All of LAMMPS is wrapped in a LAMMPS_NS namespace; you can safely use any of its classes and methods from within your application code, as needed. See the sample code examples/couple/c++_driver.cpp as an example.
When used from a C or Fortran program or a scripting language, the library has a simple function-style interface, provided in library.cpp and library.h. See the sample code examples/couple/c_driver.cpp as an example.
You can add as many functions as you wish to library.cpp and library.h. In a general sense, those functions can access LAMMPS data and return it to the caller or set LAMMPS data values as specified by the caller. These 4 functions are currently included in library.cpp:
void lammps_open(int, char **, MPI_Comm, void **ptr); void lammps_close(void *ptr); int lammps_file(void *ptr, char *); int lammps_command(void *ptr, char *);
The lammps_open() function is used to initialize LAMMPS, passing in a list of strings as if they were command-line arguments when LAMMPS is run from the command line and a MPI communicator for LAMMPS to run under. It returns a ptr to the LAMMPS object that is created, and which should be used in subsequent library calls. Note that lammps_open() can be called multiple times to create multiple LAMMPS objects.
The lammps_close() function is used to shut down LAMMPS and free all its memory. The lammps_file() and lammps_command() functions are used to pass a file or string to LAMMPS as if it were an input file or single command read from an input script.
By default, LAMMPS runs by reading commands from stdin; e.g. lmp_linux < in.file. This means you first create an input script (e.g. in.file) containing the desired commands. This section describes how input scripts are structured and what commands they contain.
You can test LAMMPS on any of the sample inputs provided in the examples directory. Input scripts are named in.* and sample outputs are named log.*.name.P where name is a machine and P is the number of processors it was run on.
Here is how you might run one of the Lennard-Jones tests on a Linux box, using mpirun to launch a parallel job:
cd src make linux cp lmp_linux ../examples/lj cd ../examples/lj mpirun -np 4 lmp_linux < in.lj.nve this section. For example, lmp_ibm might be launched as follows:
mpirun -np 16 lmp_ibm -var f tmp.out -log my.log -screen none < in.alloy
These are the command-line options:
.
The input script specifies what simulation is run on which partition; see the variable and next commands. This howto section gives examples of how to use these commands in this way. Simulations running on different partitions can also communicate with each other; see the temper command.
-in file
Specify a file to use as an input script. This is an optional switch when running LAMMPS in one-partition mode. If it is not specified, LAMMPS reads its input script from stdin - e.g. lmp_linux < in.run. This is a required switch when running LAMMPS in multi-partition mode, since multiple processors cannot all read from stdin.
.
-var name value
Specify a variable that will be defined for substitution purposes when the input script is read. "Name" is the variable name which can be a single character (referenced as $x in the input script) or a full string (referenced as ${abc}). The value can be any string. Using this command-line option is equivalent to putting the line "variable name index value" at the beginning of the input script. Defining a variable as a command-line argument overrides any setting for the same variable in the input script, since variables cannot be re-defined. See the variable command for more info on defining variables and this section for more info on using variables in input scripts..
LAMMPS 2003 is a complete C++ rewrite of LAMMPS 2001, which was written in F90. Features of earlier versions of LAMMPS are listed in this section. LAMMPS 2003.
If you are a previous user of LAMMPS 2001, these are the most significant changes you will notice in LAMMPS 2003:
(1) The names and arguments of many input script commands have changed. All commands are now a single word (e.g. read_data instead of read data).
(2) All the functionality of LAMMPS 2001 is included in LAMMPS 2003, but you may need to specify the relevant commands in different ways.
(3) The format of the data file can be streamlined for some problems. See the read_data command for details. The data file section "Nonbond Coeff" has been renamed to "Pair Coeff" in LAMMPS 2003.
(4) Binary restart files written by LAMMPS 2001 cannot be read by LAMMPS 2003 LAMMPS 2003 read_data command to read it in.
(5) There are numerous small numerical changes in LAMMPS 2003 that mean you will not get identical answers when comparing to a 2001 run. However, your initial thermodynamic energy and MD trajectory should be close if you have setup the problem for both codes the same. | http://lammps.sandia.gov/doc/Section_start.html | crawl-001 | refinedweb | 3,076 | 65.01 |
posix_mem_offset - find offset and length of a mapped typed memory block (ADVANCED REALTIME)
[TYM]
#include <sys/mman.h>#include <sys/mman.h>
int posix_mem_offset(const void *restrict addr, size_t len,
off_t *restrict off, size_t *restrict contig_len,
int *restrict fildes);.
Upon successful completion, the posix_mem_offset() function shall return zero; otherwise, the corresponding error status value shall be returned.
The posix_mem_offset() function shall fail if:
- [EACCES]
- The process has not mapped a memory object supported by this function at the given address addr.
This function shall not return an error code of [EINTR].
None.
None.
None.
None.
mmap(), posix_typed_mem_open(), the Base Definitions volume of IEEE Std 1003.1-2001, <sys/mman.h>
First released in Issue 6. Derived from IEEE Std 1003.1j-2000. | https://pubs.opengroup.org/onlinepubs/007904875/functions/posix_mem_offset.html | CC-MAIN-2020-29 | refinedweb | 122 | 57.47 |
Here.
Fortunately, Dojo’s complex implementation came to the rescue. Dojo injects quite a lot of extra markup around its buttons; in particular the buttons are wrapped with spans. So it occurred to me that I could attach the tooltip to one of the wrappers instead of the button itself. Then even if the button were disabled, the wrapper would still be enabled and the tooltip would appear. The simplest way is just to attach the tooltip to the button’s parents.
And it worked! Even better, it worked on both Firefox and Internet Explorer. I only tested on FF3 and IE6, but it may well also work on IE7, IE8, Chrome, Safari et al.
Here’s some code, with an example. It’s self-explanatory if you have some familiarity with Dojo.
dojo.require("dijit.Tooltip"); // Some examples of my tooltip functions dojo.addOnLoad(function() { // Add a tooltip to the button whose ID is "save-button"; only works // if it's enabled addTooltip("save-button", "Save the information to the database."); // Add an appropriate tooltip to the Delete button that works // even if it's disabled var deleteButton = dojo.byId("delete-button"); var deleteTip = "Delete the information from the database."; if (deleteButton.attr("disabled")) { deleteTip += " Unavailable because it's too dangerous."; } addParentTooltip(deleteButton, deleteTip); // There are two Next buttons with no IDs: add tooltips to // them that work even if disabled addParentTooltipByName("next-button", "Go to the next page."); }); /** Add a tooltip to any number of nodes passed directly or as IDs. */ function addTooltip(/*node or string or array*/nodes, /*string*/text) { if (!dojo.isArray(nodes)) { nodes = [nodes]; } // connectId can be an id or a node (not that you'd know from the // documentation) or an array of ids and nodes new dijit.Tooltip({connectId: nodes, label: text}); } /** Add a tooltip to all form elements with the given name. */ function addTooltipByName(/*string*/name, /*string*/text) { addTooltip(dojo.query('[name="' + name + '"]'), text); } /** Add a tooltip to the parents of all form elements with the given name. The tooltips will show even if the element is disabled, provided the parent is not disabled. */ function addParentTooltip(/*string*/name, /*string*/text) { var nodes = dojo.query('[name="' + name + '"]'); var parents = dojo.map(nodes, function(node) { return node.parentNode; }); addTooltip(parents, text); } /** Add a tooltip to the parents of all nodes passed either directly or as IDs. The tooltips will show even if the element is disabled, provided the parent is not disabled. 
*/ function addParentTooltipByName(/*node or string or array*/nodes, /*string*/text) { var parents = dojo.map(nodes, function(node) { return dojo.byId(node).parentNode; }); addTooltip(parents, text); }
You could extend this in lots of ways:
- Put it in some fancy namespace
- Extend the Tooltip class itself to add these functions
- Automatically wrap the control (in case it’s not a Dojo button)
- Use jQuery instead of Dojo 🙂
When this problem first came up, my colleague David googled for a solution but couldn’t find one. Hopefully this post has filled the gap.
AWESOME AWESOME AWESOME PLUG IN . . . . exactly what i’ve been looking for and saves a task with my automated content!
thanks!
really cool… thank you very much!!!
a quick question, would you happen to know why the tooltip works perfectly well in IE, but, acts very weird in Firefox (by weird, I mean, it takes a longer time to load, even after loading, sometimes, it just freezes on the scree.)
@newbie, I haven’t noticed that. I must say that working with Dojo is quite nice when it works, but it’s a nightmare to debug when it doesn’t work. | https://thunderguy.com/semicolon/2009/05/23/tooltips-on-disabled-buttons-with-dojo/ | CC-MAIN-2018-51 | refinedweb | 599 | 65.62 |
NDKshell variable to point to the location of your NDK installation and then in your working dir issue the following command:
$NDK/build/tools/make-standalone-toolchain.sh --platform=android-8 --install-dir=./ndk-toolchainReplace
android-8with the desired version for your app. Such prepared toolchain targets arm arch (both armv5te and armv7-a). If you want to prepare toolchain for another arch add
--arch=mipsor
--arch=x86accordingly.
ndk-toolchainfolder to your path and export
CXXshell var to g++ from
./ndk-toolchain/binfolder (it will be named
something-something-g++).
CXXFLAGSshell vars and it should all work just by typing 'make'.
GNUmakefilefile:
-marchoption) from
nativeto
armv5te
LDFLAGS += -pthreadoption)
makeshould successfully build static version of Crypto++ lib for Andy. To build a shared version type
make libcryptopp.so
-lgnustl_sharedlinker option:
LDLIBS += -lgnustl_sharedAlso neither
LDFLAGSnor
LDLIBSvar is originally used in the command that builds shared library in the
GNUmakefilefile so you need to append it at its end like this:
libcryptopp.so: $(LIBOBJS) $(CXX) -shared -o $@ $(LIBOBJS) $(LDFLAGS) $(LDLIBS)
GNUmakefiledefine the
LDFLAGS,
LDLIBSand
CXXFLAGSvars like this:
CXXFLAGS += -nostdinc++ -I$(NDK)/sources/cxx-stl/stlport/stlport LDFLAGS += -nodefaultlibs -L$(NDK)/sources/cxx-stl/stlport/libs/armeabi LDLIBS += -lstlport_shared -lc -lm -ldl -lgcc
-nostdinc++tells the compiler not to include stdlib++ header files which belong to gnustl in our case. The
-Iflag points the compiler to the header files for STLport.
-nodefaultlibstells the linker not to link against standard C and C++ libs that may belog to gnustl: it will link only against the platform minimal C runtime (crt* stuff) and the libraries specified by
-lflags that we specify in
LDLIBS: STLport, libc and the low level compiler stuff. Finally
-Lflag points the linker to the location of these libs. If you are building for different arch than armeabi you need to change the path in this flag accordingly.
fd_set. This is not the case when using STLport so you need to add the following preprocessor command somewhere in
wait.hfile:
#include <sys/select.h>
diff -r -u cryptopp-orig/GNUmakefile cryptopp-andy.armeabi.stlport/GNUmakefile --- cryptopp-orig/GNUmakefile 2010-08-09 14:22:42.000000000 +0200 +++ cryptopp-andy.armeabi.stlport/GNUmakefile 2013-03-09 19:17:12.934327979 +0100 @@ -37,7 +37,7 @@ ifeq ($(UNAME),Darwin) CXXFLAGS += -arch x86_64 -arch i386 else -CXXFLAGS += -march=native +CXXFLAGS += -march=armv5te endif endif @@ -78,7 +78,12 @@ endif ifeq ($(UNAME),Linux) -LDFLAGS += -pthread +## uncomment the below line to use GNU STL +#LDLIBS += -lgnustl_shared +## uncomment the below 3 lines to use STLport +LDFLAGS += -nodefaultlibs -L$(NDK)/sources/cxx-stl/stlport/libs/armeabi -Wl,--no-undefined +LDLIBS += -lstlport_shared -lc -lm -ldl -lgcc +CXXFLAGS += -nostdinc++ -I$(NDK)/sources/cxx-stl/stlport/stlport ifneq ($(shell uname -i | $(EGREP) -c "(_64|d64)"),0) M32OR64 = -m64 endif @@ -151,7 +156,7 @@ $(RANLIB) $@ libcryptopp.so: $(LIBOBJS) - $(CXX) -shared -o $@ $(LIBOBJS) + $(CXX) -shared -o $@ $(LIBOBJS) $(LDFLAGS) $(LDLIBS) cryptest.exe: libcryptopp.a $(TESTOBJS) $(CXX) -o $@ $(CXXFLAGS) $(TESTOBJS) -L. -lcryptopp $(LDFLAGS) $(LDLIBS) diff -r -u cryptopp-orig/wait.h cryptopp-andy.armeabi.stlport/wait.h --- cryptopp-orig/wait.h 2010-08-06 18:44:32.000000000 +0200 +++ cryptopp-andy.armeabi.stlport/wait.h 2013-03-08 15:50:01.758533453 +0100 @@ -13,6 +13,7 @@ #include <winsock2.h> #else #include <sys/types.h> +#include <sys/select.h> #endif #include "hrtimer.h"
Nativethat will be responsible for loading native libs and will contain the native method. Libs have to be loaded in the order of their dependencies, ie if liba.so depends on libb.so you first need to load libb.so and then liba.so. Thus you need to start from loading the STL lib you decided to use, then load Crypto++, and finally our small lib that uses it (let's call it
libcrypt_user.so)
package pl.morgwai.ndktutorial; public class Native { static { System.loadLibrary("stlport_shared"); //System.loadLibrary("gnustl_shared"); System.loadLibrary("cryptopp"); System.loadLibrary("crypt_user"); } public native long fun(int i); }
javahtool: go to the
bin/classessubfolder of your app project folder and run
javahwith the fully qualified name of your Native class as an argument:
javah pl.morgwai.ndktutorial.Native
JNIEXPORTand
JNICALL: most of the sample apps from NDK don't use them and Eclipse ADT marks them as errors. To strip them you can use the below
sedcommand:
sed -e 's#JNIEXPORT##' < pl_morgwai_ndktutorial_Native.h | sed -e 's#JNICALL##' >native.hNow you can include
native.hfile in your sources or just copy the relevant part from it.
javahtool remember to put your function or its header in
extern "C"braces.
crypt_user.cpp. Inside the implementation just use some global var from Crypto++:
#include <jni.h> #include <cryptlib.h> extern "C" { jlong Java_pl_morgwai_ndktutorial_Native_fun (JNIEnv* env, jobject o, jint i) { long long t = CryptoPP::INFINITE_TIME / i; return t; } }
binand
objand all arch specific result subfolders from
libfolder. Few times I myself have lost several hours because of stale objects lying there around.
Android.mkfile so the main
Android.mkfile from the
jnifolder just needs to include them:
include $(call all-subdir-makefiles)
Application.mkfile in the
jnifolder:
APP_ABI := armeabi APP_CPPFLAGS += -fexceptions -frtti APP_STL := stlport_shared #APP_STL := gnustl_shared
APP_ABIspecifies the arch(s) which our libs should be built for. If you want to build apk for multiple archs enumerate them all separated by space. For example
APP_ABI := armeabi armeabi-v7a x86 mips
APP_CPPFLAGSspecifies CXXFLAGS to be used by NDK when building your shared libs. By default NDK compiles CPP sources without exception nor RTTI support. As we are using STL version that uses them we need to turn them on here. (Contrary to this, the standalone toolchain has exception and RTTI support turned on by default)
APP_STLspecifies which version of STL to include in your apk.
cryptoppin the
jnifolder. Inside it create subfolders named after each arch you want to build your shared libs for. Place your prebuilt libcryptopp.so files in them accordingly. Next to them (ie inside
cryptoppfolder) create a subfolder named
includeand place there all header files that you may need to include in the sources of your libs that use this prebuilt lib. In case of Crypto++ just copy all header files (*.h) from its source folder.
Android.mkfile inside the cryptopp folder:
LOCAL_PATH:= $(call my-dir) include $(CLEAR_VARS) LOCAL_MODULE := cryptopp LOCAL_SRC_FILES := $(TARGET_ARCH_ABI)/libcryptopp.so LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/include include $(PREBUILT_SHARED_LIBRARY)The last line says that this module is a prebuilt shared library and the
LOCAL_SRC_FILESpoints to the location of its binary. The
TARGET_ARCH_ABIvar will be substituted accordingly during the build process for the given arch.
LOCAL_EXPORT_C_INCLUDESspecifies the location of header files for other libs that use this one.
crypt_user, placing there the source file of our small cpp lib (
crypt_user.cpp) we created previously and creating
Android.mkfile for it:
LOCAL_PATH:= $(call my-dir) include $(CLEAR_VARS) LOCAL_MODULE := crypt_user LOCAL_SRC_FILES := crypt_user.cpp LOCAL_SHARED_LIBRARIES := cryptopp include $(BUILD_SHARED_LIBRARY)
LOCAL_SHARED_LIBRARIESspecifies other shared lib modules this module depends on so NDK knows where to find include files and what to link against.
NDK_MODULE_PATHto contain a path somewhere outside of your app project tree: you will store your external modules there. Next move
cryptoppsubfolder of
jnifolder there. In the main
Android.mkfile from
jnifolder append the following line at the end:
$(call import-module,cryptopp)The second argument
cryptoppmust correspond to the subfolder name of your external module in
NDK_MODULE_PATHfolder. Inside its
Android.mkfile you can actually define several modules that you can later use as
LOCAL_SHARED_LIBRARIESor
LOCAL_STATIC_LIBRARIESin your projects. None of them has to be named after the subfolder name.
PREBUILT_SHARED_LIBRARYin
cryptoppmodule's
Android.mkfile:
LOCAL_EXPORT_CPPFLAGS := -fexceptions -frttiAs our only 'local' module
crypt_userdoes not depend on these flags directly anymore, you can safely remove
APPAPP_CPPFLAGSline from your
Application.mkfile.
crypt_userneeds to be compiled with these flags as well to be properly linked with
cryptopp, but they will be imported for him thanks to the declared dependency in
LOCAL_SHARED_LIBRARIESin its
Android.mkfile.
cryptoppfolder create subfolders named after STL versions, for example
stlport_sharedand
gnustl_sharedand inside them create subfolders for supported archs and place prebuilt binaries there. Now change the definition of
LOCAL_SRC_FILESin
Android.mkfile to look like this:
LOCAL_SRC_FILES := $(APP_STL)/$(TARGET_ARCH_ABI)/libcryptopp.soSometimes even a single app may need to have different builts with different STL versions. There's a small problem here that in such case you don't know which lib to load in your java code. The solution is simply to try to load all of them and catch the error when the wrong ones fail. Our
Nativeclass would look like this:
package pl.morgwai.ndktutorial; import android.util.Log; public class Native { public static final String LOG_TAG = Native.class.getSimpleName(); static { try { System.loadLibrary("stlport_shared"); } catch (UnsatisfiedLinkError e) { Log.i(LOG_TAG, "stlport not found"); } try { System.loadLibrary("gnustl_shared"); } catch (UnsatisfiedLinkError e) { Log.i(LOG_TAG, "gnustl not found"); } System.loadLibrary("cryptopp"); System.loadLibrary("crypt_user"); } public native long fun(int i); }Now it will all work just by changing
APP_STLvar in the
Application.mkfile. | http://morgwai.pl/ndkTutorial/ | CC-MAIN-2019-18 | refinedweb | 1,465 | 50.23 |
MQTT
In the previous tutorial you learned how to set up and use the ESP8266 chip to collect data from your sensors, connect to the internet over Wi-Fi and send those values to Thingspeak over the HTTP protocol. You have also seen how to graph that data, and how to extend your system with some of the features that Thingspeak offers.
Let’s take a look at the program from the previous example. Our device connects to the server, sends a POST request with the sensor value, disconnects from the server, waits for 15 seconds and repeats the same process over and over. Each time there is something to send, our device has to re-establish the connection to the server – one connection per data write. On the other hand, if we want to transfer data from the server to the device, the only way to achieve this is for our device to constantly connect to the server and check if there are any updates. Clearly, this is not the most efficient way of transferring data. The HTTP protocol uses a request–response method of communication. If you take a look at the requests we send and the responses we get, you will see that a lot of additional headers are transferred – all kinds of status information that we are usually not interested in. HTTP is good for large data transfers (websites, for example), but it is clearly not the most efficient protocol when we want to send just a few bytes of data – our sensor values. It is not that fast, either. This is why we use the MQTT protocol.
MQTT is lightweight and fast. It takes very few bytes to connect to the server, and the connection can be kept open all the time. Communication consumes less data and time than the HTTP protocol – the advantages are clear. How does MQTT work? The whole system consists of many clients and one broker. Our devices act as clients; clients can also be our cellphones or laptops. Each client communicates with the broker only – clients don’t communicate among themselves. The whole system is based on a publish–subscribe method of communication. Each client can be a publisher, which publishes (sends) messages, a subscriber, which listens for incoming messages, or both at the same time. The broker is a kind of server whose task is to accept published messages from publishers and forward them to the subscribers.
Publishers and subscribers
So, how does the broker know which messages it should forward to each subscriber? Obviously, it would be too messy if every subscriber received every message, especially if there are many publishers in the system. The answer is topics. Each publisher publishes to one or more topics, and each subscriber subscribes to one or more topics. Topics have a tree structure, with levels separated by slashes. Let’s say I have temperature sensors in several places, and I am interested in the temperature in my garden. I use my cellphone and subscribe to the topic Sensors/Garden/Temperature. The device that measures the temperature in the garden publishes the value to the same topic, and at that moment my cellphone receives the message. Let’s say I also have humidity sensors, alarm sensors and some others. What if I want to check all the sensors in my garden with a single subscription? I can use wildcards – I subscribe to the topic Sensors/Garden/#. What if I want to check the temperature sensor at each place (garden, living room, basement…)? Again I use a wildcard and subscribe to the topic Sensors/+/Temperature. Wildcards are one more convenience that MQTT offers. MQTT has other features like Quality of Service (guaranteed message delivery), Last Will, Retained Messages etc. There are good resources online for learning more about the MQTT protocol.
PubSubClient Library
How can we use MQTT with the ESP8266? Fortunately, there is a library that gives us that possibility, and we have to install it. Download the zip file of the PubSubClient library from its GitHub repository. To add a library to your Arduino IDE, go to Sketch > Include Library > Add .ZIP Library and select the downloaded file. You are now ready to use the MQTT protocol with the ESP8266.
Let’s see how to use the MQTT protocol with the ESP8266 through a practical example. We will consider the simple and classic example of turning an LED on and off. By sending an appropriate message from our PC or cellphone, we want to control the LED.
You already know that we need a broker in order to communicate over MQTT. The simplest way to begin is to use a public broker, such as. It is handy that you can go to the broker's page and monitor all published messages in a dashboard. This is a good way to verify that everything works. Each time the ESP8266 connects to the broker, it will publish a notification message.
If you want to use your PC as a client, you can use MQTT Lens. For the cellphone, search for an MQTT client app; I used this one for my Android cellphone. You have to configure the connection: enter the broker domain name and port number (1883 by default). After you establish the connection, you are ready to publish and subscribe to topics. You can start publishing 'on' and 'off' messages to the topic ESP8266/LED status (the same topic the ESP8266 subscribes to), and you should see your LED (attached to pin 4 in this example) turning on and off. Subscribe to the topic ESP8266/connection status (using spaces in topic names, as here, is not good practice!) to receive the connection status message published by the ESP8266.
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

const char* ssid = "";
const char* password = "";
//const char* mqtt_server = "test.mosquitto.org";
//const char* mqtt_server = "iot.eclipse.org";
const char* mqtt_server = "broker.mqtt-dashboard.com";

WiFiClient espClient;
PubSubClient client(espClient);

void setup() {
  pinMode(4, OUTPUT);
  Serial.begin(115200);
  setup_wifi();
  client.setServer(mqtt_server, 1883);
  client.setCallback(callback);
  reconnect();
}

// NOTE: setup_wifi() and the opening lines of callback() were missing from
// the page this sketch was recovered from; the following lines, up to the
// payload-printing loop, are a minimal reconstruction of the lost part.
void setup_wifi() {
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
}

void callback(char* topic, byte* payload, unsigned int length) {
  Serial.print("Message arrived [");
  Serial.print(topic);
  Serial.print("] ");
  for (unsigned int i = 0; i < length; i++) {
    Serial.print((char)payload[i]);
  }
  if ((char)payload[0] == 'o' && (char)payload[1] == 'n')  // on
    digitalWrite(4, HIGH);
  else if ((char)payload[0] == 'o' && (char)payload[1] == 'f' && (char)payload[2] == 'f')  // off
    digitalWrite(4, LOW);
  Serial.println();
}

void reconnect() {
  // Loop until we're reconnected
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    // Attempt to connect
    if (client.connect("ESP8266Client")) {
      Serial.println("connected");
      // Once connected, publish an announcement...
      client.publish("ESP8266/connection status", "Connected!");
      // ... and resubscribe
      client.subscribe("ESP8266/LED status");
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      // Wait 5 seconds before retrying
      delay(5000);
    }
  }
}

void loop() {
  if (!client.connected()) {
    reconnect();
  }
  client.loop();
}
MQTT lens
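The callback in the sketch identifies the command by looking at the first bytes of the raw payload — MQTT payloads are byte arrays, not NUL-terminated strings, which is why it indexes payload[0] and payload[1] instead of using strcmp. The same decision logic can be mirrored in plain Python for testing (`decode_command` is a hypothetical helper, not part of the sketch or of PubSubClient):

```python
def decode_command(payload: bytes):
    """Mirror of the sketch's payload check: returns 'on', 'off' or None.
    Like the C version, it only inspects the leading bytes, so e.g.
    b'online' is still treated as 'on'."""
    if payload[:2] == b"on":
        return "on"
    if payload[:3] == b"off":
        return "off"
    return None

print(decode_command(b"on"))    # on
print(decode_command(b"off"))   # off
print(decode_command(b"dim"))   # None
```

The prefix check keeps the microcontroller code tiny, at the cost of accepting any message that merely starts with "on" or "off".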
You will see some other public brokers in the comments in the code. You may have noticed that when using a public broker the connection is not stable. Moreover, the data transfer is not secure. Although in some cases a public broker would work just fine for us, this option is not always acceptable. One solution is to install a broker locally on your own machine. I installed the open-source broker called Mosquitto on my Windows machine; you can find instructions on how to install it here. You then have to use your IP address instead of a domain name. The problem with this is that your computer has to be constantly on. Using a smaller computer such as a Raspberry Pi would be a better option for this purpose.
Now you know how to implement MQTT on the ESP8266 and exploit its features. In the next tutorial we will use the MQTT protocol to connect our devices to IBM's powerful platform called Bluemix, which will allow us to quickly build interesting and useful IoT applications.
 * Portions created by the Initial Developer are Copyright (C) 2007
 * the Initial Developer. All Rights Reserved.
 *
 * Contributor(s):
 *   Felix Gnass [fgnass at neteye dot de]
 *
 * ***** END LICENSE BLOCK ***** */
package org.riotfamily.pages;

/**
 * Alias for a page. Aliases are created whenever a page (or one of its
 * ancestors) is renamed or moved.
 *
 * @author Felix Gnass [fgnass at neteye dot de]
 * @author Jan-Frederic Linde [jfl at neteye dot de]
 * @since 6.5
 */
public class PageAlias {

    private Page page;

    private PageLocation location;

    public PageAlias() {
    }

    public PageAlias(Page page, PageLocation location) {
        this.page = page;
        this.location = location;
    }

    public PageLocation getLocation() {
        return this.location;
    }

    public void setLocation(PageLocation location) {
        this.location = location;
    }

    public Page getPage() {
        return this.page;
    }

    public void setPage(Page page) {
        this.page = page;
    }

    public String toString() {
        return "PageAlias[" + page + " --> " + location + "]";
    }
}
"transaction failed" using injected JavaBean — Daniel Roth, Nov 27, 2007 8:03 AM
Hi all,
I have struggled for some time with a problem I get when trying to persist some objects. I use the exact same setup as jboss-seam-jpa running on Tomcat 6 without JBoss Embedded (JavaBeans and POJOs).
Persisting an object with
@Name("register")
public class RegisterAction {

    @In
    private EntityManager em;

    public void register() {
        em.persist(someObject);
    }
}
works perfectly, even without the @Transactional annotation. However this:
@Name("register")
public class RegisterAction {

    @In
    private MyService service;

    public void register() {
        service.register(someObject);
    }
}

@Name("service")
@AutoCreate
public class MyService {

    @In
    private EntityManager em;

    public void register(Object someObject) {
        em.persist(someObject);
    }
}
does not work (and I have tried using @Transactional on register() and on MyService). I get a "Transaction failed" in my JSF h:messages, but apart from that there is no error message whatsoever.
I guess I have missed something fundamental, especially when I read this in the Seam doc:
25.3....
1.2.1.2....
1. Re: transaction failed — Jarek Gilewski, Jan 25, 2008 5:40 PM (in response to Daniel Roth)
Hi,
I have the same problem. Did you resolve it?
Regards
Jarek
2. Re: transaction failed — Daniel Roth, Jan 29, 2008 3:43 AM (in response to Daniel Roth)
Unfortunately not. I have no idea why.
I 'solved' this issue by having transactional spring dao classes do _everything_ for me, i.e. something like:
myDao.addLogToJob(LogItem log, Job job) {
    job = em.find(Job.class, job.getId()); // find() needs the entity class as well as the id
    job.getLogs().add(log);
    em.merge(job);
}
Ugly and bad in every way possible, but at least it worked and was ready by the deadline... :-)
Debug Assertion failed error while using an mfc dll via import library in an managed console application
- Wednesday, March 25, 2009 4:38 AM
I have compiled the libtorrent library in a Visual Studio 2008 MFC DLL project. I want to use this DLL via an import library in a managed console EXE application, also developed in VS 2008.
The managed application project compiles ok. But when I try to run it I get the following assertion failed error message:
Debug Assertion Failed!
File: F:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\dllinit.cpp
Line: 587
And when I press ignore I get the the following assertion failed error message
Debug Assertion Failed!
File: F:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\dllmodul.cpp
Line: 137
After I press the ignore button again the rest of the application works fine.
Although the path for the above files given by the VS 2008 environment is not correct, these files do exist in the following folder on my system
C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\src\mfc
The following line in dllinit.cpp is causing the first assertion failure:
ASSERT(AfxGetModuleState() != AfxGetAppModuleState());
And the following line in dllmodul.cpp is causing the second assertion failure:
VERIFY(AfxSetModuleState(AfxGetThreadState()->m_pPrevModuleState) ==
&afxModuleState);
Can anyone help me out as to why these assertion failures are occurring and how to resolve them?
Thanks
- Moved by nobugzMVP, Moderator Wednesday, March 25, 2009 12:34 PM not a clr q (Moved from Common Language Runtime to Visual C++ General)
All Replies
- Wednesday, March 25, 2009 5:52 PMDebug asserts flags bugs for you. Maybe you did not initialize MFC's module state or CWinApp. Check for an example.
MSMVP VC++
- Marked As Answer by Nancy Shao, Moderator, Tuesday, March 31, 2009 3:30 AM
- Wednesday, April 01, 2009 7:04 AM
I am now certain that the problem is due to a bug in the libtorrent library. Any ideas on how I can trace the source (or location) of the error?
- Friday, April 10, 2009 8:23 PM
I am having a similar problem. Part way through compilation (actually it's during the linking phase), I get the debug assertion thrown on line 587 of dllinit.cpp.
I am pretty sure its not that I didn't initialize MFC's module state or CWinApp, since the assertion is thrown before the application actually runs.
Some background: we are using the CGAL 3.4 libraries which depend on the Boost libraries. Everything works just fine, until we upgraded to the latest version of Boost (1.38.0). With this version, CGAL compiles as expected, and most portions of our code that use CGAL or Boost compile as normal. Then, one portion of code that uses CGAL (but not Boost explicitly) throws this odd assertion at link time.
If you press Ignore or Retry, linking/compilation continues as if nothing went wrong, although the resulting binary quickly falls over due to heap corruption.
If you press Abort, linking fails with the message "Error Code: Performing Registration".
Other than rolling back to the old version of Boost, does anybody have any ideas about how we might track down the source of our problem/get this working correctly?
- Friday, April 10, 2009 8:28 PM
Debug asserts flag bugs. If a linker throws a debug assert on you, then your linker has bugs. Find another linker instead.
MSMVP VC++
- Saturday, April 11, 2009 5:26 AM
We could not find the solution to our problem for the MFC-enabled DLL project. So we tried another DLL project (a non-MFC DLL project) and were able to build the DLL successfully. But we still don't know what caused the assertion failure or how to fix it in the MFC-enabled DLL. The error occurs in the built-in DllMain function of MFC. Perhaps you can try a DLL project where you write your own DllMain function; that way you will have greater control over the DLL initialization process.
- Saturday, April 11, 2009 5:10 PM
I am using Visual Studio 2008 with the latest service pack. You are telling me that Microsoft's linker is broken and I need to use another one? I find it somewhat hard to believe that the linker shipped with VS2008 simply doesn't work, but what would you suggest instead?
EDIT: I should also add that the documentation for both of Boost 1.38.0 and CGAL 3.4 explicitly state that they are compatible with VS2008.
- Saturday, April 11, 2009 6:00 PM
Then you should not get a debug assert at compile or link time. Maybe you are confusing runtime and compile/link time.
MSMVP VC++
- Saturday, April 11, 2009 6:19 PM
I agree that I should not be getting a debug assertion at compile time or link time. That's why I am here asking the question.
I am quite certain that I know the distinction between run time and compile/link time.
As I stated in my original posting, the problem occurs during link time and pressing "Abort" on the assertion causes linking to fail with the message "Error Code: Performing Registration". In this case NO EXECUTABLE IS PRODUCED AT ALL.
With no executable produced, it is hard to imagine how we could possibly be in run time yet.
As I also stated in my first post, if you press 'Retry' or 'Ignore' on the assertion, then visual studio continues to compile/link as if nothing went wrong. In this case it eventually produces an executable, which falls over very quickly due to heap corruption.
So, now that we are clear that the debug assertion is thrown BEFORE ANY EXECUTABLE HAS BEEN PRODUCED, does anybody have any suggestions?
- Saturday, April 11, 2009 11:57 PM
Ah, now the problem is clear. The building process is registering your COM component via REGSVR32.EXE, and your DllRegisterServer function throws an error. To find out why, debug your project with REGSVR32.EXE as the target. Search "regsvr32 debug" in this forum for previous discussions about debugging a COM component via RegSvr32.
MSMVP VC++
- Sunday, April 12, 2009 2:46 AM
Okay, now this is starting to sound promising. I will give it a try either tomorrow or Monday. Thanks for the advice.
- Thursday, July 23, 2009 1:29 PM
I have a similar problem. When I upgrade Boost from 1.33.1 to 1.39, I get the following assertion failed error message:
Debug Assertion Failed!
File: F:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\dllinit.cpp
Line: 587
When I roll back Boost, everything runs OK.
There are breaking changes in boost.thread since 1.33: the interface of scoped_lock has been changed, and boost::mutex is now boost::recursive_mutex.
Any help is appreciated.
- Monday, August 03, 2009 10:23 AM
We had the same problem; it is probably because boost.thread overwrites the _pRawDllMain pointer. To fix this you need to patch your boost.thread library: in the file boost_1_39_0/libs/thread/src/win32/tss_pe.cpp, comment out the line which overwrites the pointer:
< // extern BOOL (WINAPI * const _pRawDllMain)(HANDLE, DWORD, LPVOID)=&dll_callback;
---
> extern BOOL (WINAPI * const _pRawDllMain)(HANDLE, DWORD, LPVOID)=&dll_callback;
and make sure that you call the on_process_exit(); function when you exit/unload your library.
you need to include the following for this call:
#include <boost/thread/detail/tss_hooks.hpp>
Don't forget to recompile your boost.thread library after patching :-)
This issue is known to the boost.thread author and will be fixed in an upcoming release of Boost; the fix above is suggested by the author here:
- Proposed As Answer by Willem de Jonge Monday, August 03, 2009 10:24 AM
Mastering a language is never easy, but simply picking up some of its usage and putting it to work should not be difficult. Scala and Java have a lot in common, so coming from Java it should be easy to get into Scala.
Configuring the Scala environment
Here I configure Scala version 2.11.8
1. First, configure the local environment variables.
2. Install the Scala plug-in under Settings -> Plugins.
3. Add the Scala SDK under Libraries in Project Structure.
Differences and commonalities between Scala and Java
Scala features
Extensible
Object-oriented
Functional programming
Java compatible
Class library calls
Interoperability
Concise syntax
Short lines of code
Type inference
Abstract control
Static typing (you can't change a variable's type arbitrarily; var and val only control whether the value can change)
Verifiable
Security reconfiguration
Supports concurrency control
Strong computing power
Custom control structures
In Java
Scala is very compatible with Java. Here we simply define a Java class:
public class Demo1Java {
    public void print() {
        System.out.println("Print java");
    }

    public static void main(String[] args) {
        Demo1Java demo1Java = new Demo1Java();
        demo1Java.print();     // called through an instance of the class
        Demo1Java.main(args);  // called through the class itself (note: calling main from main recurses!)
        // new Demo2Scala().print_scala(); // classes written in Scala can be called directly from Java
        new Demo4ScalaStu().print();
    }
}
Here we can see that in Java, if a method in a class is not marked static, it is just an ordinary member method and must be called through an instance of the class, as follows. (Our print method is not decorated with static — it is a plain method, so to call it we need to create an instance of the class.)
Only static methods can be called directly through the class itself (the class name), as follows. (The main method in Java is invoked by the virtual machine through the class — the Class object is what gets loaded into memory from the compiled .class bytecode file. The main function carries the static modifier; removing static produces an error, i.e. main must be callable without an instance.)
In Scala
The main method in Scala can only live inside an object. A class modified with object is equivalent to a static, singleton class: its code is loaded into the virtual machine once, and its methods and fields behave as if static were automatically added, so they can be called directly through the object name.
object Demo2Scala {
  val i = 10

  /**
   * The main method in Scala.
   * Methods in Scala cannot be decorated with static.
   *
   * def                   keyword that defines a function
   * main                  method name
   * args: Array[String]   parameter name : parameter type
   * Unit                  return type, equivalent to void in Java
   * {}                    method body
   */
  def print_scala(): Unit = {
    print("scala")
  }

  def print_s: String = {
    val s = "S"
    return s
  }

  def main(args: Array[String]): Unit = {
    print("Hello World")
    Demo2Scala.print_scala()
    print(Demo2Scala.i)
  }
}
Commonality
Java can directly call classes written in Scala and use the methods inside them.
Classes in Scala:
class Demo4ScalaStu {
  def print(): Unit = {
    println("Scala Stu")
  }
}
Differences (some differences in use)
import java.io.{BufferedReader, FileReader}
import java.util
import scala.io.{BufferedSource, Source}

object Demo5ScalaBase {
  // Basic syntax of Scala
  def main(args: Array[String]): Unit = {
    /**
     * Variables
     * You can use val and var to define variables. You don't need to specify
     * the type, but you'd better specify it anyway.
     * Variables defined with val cannot be modified; they are equivalent to constants.
     * Variables defined with var can be modified.
     * Use val if you can.
     */
    val i = 100
    var j = 200
    j = 300
    println(i + 1) // This is not a modification of the variable

    // Automatic type inference
    val k = 100
    // It is best to add the type manually
    val kk: Int = 200
    // If you don't know what type it is, use the Object type
    val k1: Object = "100"
    println(k)
    println(kk)
    println(k1)

    // Type conversion
    // Type conversion in Java
    val str1 = "123"
    println(Integer.parseInt(str1))
    // Type conversion in Scala: as long as the data fits, you can convert directly with .toXxx
    val i1 = str1.toInt
    println(i1)

    // String type
    // 1.
    val str: String = "Hello,World"
    println(str.getClass.getSimpleName)
    val strings = str.split(",")
    strings.foreach(println) // traversal can print directly
    println(strings{0}) // indexing here needs {} braces
    val str2: String = "abc" + "def"
    println(str2)

    // 2.
    val builder: StringBuilder = new StringBuilder()
    builder.append("a")
    builder.append("bc")
    println(builder)

    // 3.
    // You can use $variableName to take the value of a variable inside a string.
    // The bottom layer is StringBuilder.
    val str3: String = "abc"
    val str4: String = "def"
    val str5: String = "hij"
    println(s"$str3,$str4,$str5") // prefix the string with s

    // File reading and writing
    // Java style
    val br = new BufferedReader(new FileReader("data/students.txt"))
    var line: String = br.readLine()
    while (line != null) {
      println(line)
      line = br.readLine()
    }
    br.close()
    println("*" * 50)

    // Scala style
    val source: BufferedSource = Source.fromFile("data/students.txt")
    val iter: Iterator[String] = source.getLines() // returns an iterator, but it can only be traversed once

    // foreach in Scala
    for (elem <- iter) {
      println(elem)
    }
    println("*" * 50)

    // First simplification
    for (elem <- Source
      .fromFile("data/students.txt")
      .getLines()) {
      println(elem)
    }
    println("*" * 50)

    // Second simplification: chained calls
    Source
      .fromFile("data/students.txt")
      .getLines()
      .foreach(println)
    println("*" * 50)
  }
}
Scala object-oriented programming
Construction method
class Student(id: String, name: String, age: Int) {
  println("default constructor")

  // Directly define the properties of the class
  val _id: String = id
  val _name: String = name
  val _age: Int = age
}

object Demo6ScalaClass {
  def main(args: Array[String]): Unit = {
    val stu: Student = new Student("001", "Zhang San", 21)
    println(stu._id)
    println(stu._name)
    println(stu._age)
  }
}
Overloading of construction methods
class Student(id: String, name: String, age: Int) {
  println("default constructor")

  // Directly define the properties of the class
  val _id: String = id
  val _name: String = name
  val _age: Int = age
  var _clazz: String = _

  // Overloading the constructor requires implementing a this method;
  // the first line of code must call the default constructor
  def this(id: String, name: String, age: Int, clazz: String) {
    // Call the default constructor
    this(id, name, age)
    _clazz = clazz
  }
}

object Demo6ScalaClass {
  def main(args: Array[String]): Unit = {
    val stu: Student = new Student("001", "Zhang San", 21)
    println(stu._id)
    println(stu._name)
    println(stu._age)

    val stu2: Student = new Student("002", "Li Si", 22, "Liberal arts class 1")
    println(stu2._id)
    println(stu2._name)
    println(stu2._age)
    println(stu2._clazz)
  }
}
Note:
- In string interpolation, a variable whose name starts with an underscore must be wrapped in curly braces (e.g. ${_id}).
- Calling a method on a variable inside an interpolated string also requires curly braces.
- return can be omitted; the last line of a method body is its return value.
Inheritance
class A(id: String, name: String) {
  println("Constructor of A")

  // Definition is assignment
  val _id: String = id
  val _name: String = name

  override def toString = s"A(${_id}, ${_name})"
}

/**
 * Inheritance uses the extends keyword,
 * but when inheriting you need to call the constructor of the parent class.
 */
class B(id: String, name: String, age: Int) extends A(id, name) {
  println("Constructor of B")

  val _age: Int = age

  override def toString = s"B(${_id}, ${_name}, ${_age})"
}

object Demo7ScalaExtend {
  def main(args: Array[String]): Unit = {
    val b: A = new B("001", "Zhang San", 21)
    println(b.toString)
  }
}
Sample class
object Demo8CaseClass {
  def main(args: Array[String]): Unit = {
    // When creating a sample-class object, the new keyword can be omitted
    val stu: Stu = Stu("001", "Zhang San", 21, "male", "Liberal arts class 1")
    println(stu.id)
    println(stu.name)
    println(stu.age)
    println(stu.gender)
    println(stu.clazz)

    // The parameters of a sample class are the properties of the class.
    // By default they are vals; if you want to modify one, declare it with var.
    stu.age = 22
    println(stu.age)
  }
}

/**
 * Sample class: at compile time it automatically adds "get/set" methods
 * for each attribute and implements the serialization interface.
 */
case class Stu(id: String, name: String, var age: Int, gender: String, clazz: String)
Thank you for reading. I am shuaihe, a senior majoring in big data. I wish you happiness. | https://programmer.help/blogs/simple-introduction-to-scala.html | CC-MAIN-2021-49 | refinedweb | 1,386 | 50.46 |
#include <wx/wx.h>
#include <wx/log.h>
#include <wx/intl.h>
#include <whatever.h>
EDIT: And another thing is possible... It may actually be that CVS is working *properly*. Check the editor which you are using under Linux. Some editors offer a "feature" to save with non-native line endings, so if your Linux editor is configured to write CR+LF in an attempt to be "compatible", and CVS (correctly) inserts a LF after CR because you migrate from Linux to Windows, then you may get that very same effect. In that case, simply change the editor's preferences to "use CR".
At the risk of being cheeky... use Subversion
Yiannis: Don't even think about migrating? What is so scary about running cvs2svn? It should really not cause too much grief? I mean, you can say "no" after thinking about it, but not think about it at all? There are even GPLed hook scripts available that allow back-versioning to a read-only CVS repository, so while committing to SVN, people can still check out via CVS if they want.
Well, the option I added to set the line-endings mode was taken directly from Scintilla. I see no reason why files modified in Linux would be different from those in Windows (see the editor preferences, EOL mode). So I'm sure it's a CVS setting.
Hmmm... that's very interesting. Anyway, was this option being enabled in Linux what was causing the problem? (Now that I think of it, is it safe to have it enabled in Windows?)
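The CR+LF confusion discussed in this thread is easy to diagnose by looking at the raw bytes of a checked-out file. A short script like the following — illustrative only, unrelated to Code::Blocks or CVS themselves — counts the line-ending styles and normalizes everything to LF:

```python
def count_eols(data: bytes):
    """Count CRLF, bare LF and bare CR line endings in raw file bytes."""
    crlf = data.count(b"\r\n")
    lf = data.count(b"\n") - crlf   # LFs not preceded by CR
    cr = data.count(b"\r") - crlf   # CRs not followed by LF
    return {"crlf": crlf, "lf": lf, "cr": cr}

def normalize_to_lf(data: bytes) -> bytes:
    """Rewrite any mixture of CRLF/CR/LF endings as plain LF."""
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

sample = b"one\r\ntwo\nthree\r"
print(count_eols(sample))       # {'crlf': 1, 'lf': 1, 'cr': 1}
print(normalize_to_lf(sample))  # b'one\ntwo\nthree\n'
```

Reading the file in binary mode ("rb") is essential here — opening it in text mode would let the runtime translate the endings before you ever see them.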
by Michael S. Kaplan, published on 2005/09/18 16:01 -04:00, original URI:
Prior posts in the series:
Extending collation support in SQL Server and Jet, Part 0 (HISTORY)
Extending collation support in SQL Server and Jet, Part 1 (the broad strokes)
Okay, let's dig into some details, now. :-)
If you look back to June when I talked about how string.Compare is for sissies (not for people who want SQLCLR consistency) and July when I went a bit further and talked about how real developers use CompareInfo's Compare (Part 1), I really did spend some time talking about how if you want configurable collation in .NET, you have to use the CompareInfo object. Certainly for sort keys, there is no way other than the CompareInfo.GetSortKey method....
Now the first point is how many of the workarounds I discussed to move collation settings between Win32 and .NET or .NET and SQL Server are needed:
NONE.
That is right, you don't need to do any of those workarounds. Because the method in .NET will do the work for you of mapping out of .NET and Win32 through the "Windows-only CultureInfo" support. Cool, no? :-)
You do have to decide what you want to do with the various settings you have available to you, basically the members of the CompareOptions enumeration (None, IgnoreCase, IgnoreNonSpace, IgnoreSymbols, IgnoreKanaType, IgnoreWidth, Ordinal and StringSort):
For the sake of consistency with other SQL Server collations, I would recommend staying away from IgnoreSymbols and StringSort (though at this point some of you may be thinking about how this method could be used to customize the support for existing collations!).
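For readers who want a feel for what flags like IgnoreCase and IgnoreNonSpace actually do to a comparison, the effect can be roughly approximated outside .NET with Unicode normalization. This is only a sketch of the idea — Python's unicodedata is not CompareInfo, and real collation weights are far richer than "strip marks, then casefold":

```python
import unicodedata

def fold(s: str, ignore_case: bool = False, ignore_nonspace: bool = False) -> str:
    """Very rough analogue of CompareOptions.IgnoreCase / IgnoreNonSpace:
    decompose, optionally drop combining marks, optionally casefold."""
    s = unicodedata.normalize("NFD", s)
    if ignore_nonspace:
        # combining() > 0 marks non-spacing/combining characters
        s = "".join(ch for ch in s if not unicodedata.combining(ch))
    if ignore_case:
        s = s.casefold()
    return unicodedata.normalize("NFC", s)

print(fold("Résumé", True, True) == fold("resume", True, True))  # True
print(fold("Résumé") == fold("resume"))                          # False
```

Two strings compare equal under a given flag combination exactly when their folded forms match, which is the same intuition behind ignoring a weight level in a real sort key.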
We will start with our CustomSqlCollation class, which will take a culture name and the comparison options to use (I'll build on this class more, in the future):
sealed public class CustomSqlCollation {
// Private members
private CultureInfo m_cultureInfo;
private CompareInfo m_compareInfo;
private CompareOptions m_options;
private string m_name;
// Constructor
public CustomSqlCollation(string name, CompareOptions options) {
this.m_cultureInfo = new CultureInfo(name, false); // e.g. "mn-CN-Mong"
this.m_compareInfo = this.m_cultureInfo.CompareInfo;
this.m_options = options;
this.m_name = name;
}
// Method to return an index value
public byte[] GetSortKey(string input) {
return this.m_compareInfo.GetSortKey(input, this.m_options);
}
// Compare method -- note that the many other forms of
// CompareInfo.Compare can also be done here
public int Compare(string string1, string string2) {
return this.m_compareInfo.Compare(string1, string2, this.m_options);
}
}
So, let's say that you have decided you want to create a custom collation for one of the new locales in Vista added for the Chinese minority language support -- Mongolian. The name of this culture that you would use to create it is mn-CN-Mong, and it will create a CultureInfo object whose NativeName is something like
ᠮᠣᠩᠭᠤᠯ ᠬᠡᠯᠡ (ᠪᠦᠭᠦᠳᠡ ᠨᠯᠣᠰ)
and whose EnglishName is something like
Mongolian (Mongolian, People's Republic of China)
(That native name will show up much more effectively in Vista, which has the font and rendering engine support!)
On the whole it would likely be best not to ignore any of the weights when the language in question does not have any of those characteristics (case, non-spacing marks, different Kana types, or different widths). So none of those other flags should be passed in this scenario -- the weights may be leveraged for other purposes in the script.
But now any time you need to create a value for your index (either inserting a row or changing the value in the column), you can call CustomSqlCollation.GetSortKey() to get the value to use -- using an insert trigger and an update trigger to create or regenerate the index value, as needed. And you can sort using that index column at any time.
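The trigger-maintained pattern described here — store a binary sort key next to the string and ORDER BY the key column — can be simulated in a few lines. In this sketch, Python's locale.strxfrm stands in for CompareInfo.GetSortKey (a loose analogue only; this illustrates the pattern, not SQL Server itself):

```python
import locale

locale.setlocale(locale.LC_COLLATE, "C")  # the C locale keeps the demo deterministic

rows = [{"text": t} for t in ["banana", "Apple", "cherry"]]

# "insert trigger": compute and store the sort key alongside the value
for row in rows:
    row["sortkey"] = locale.strxfrm(row["text"])

# "ORDER BY sortkey": no per-comparison collation work at query time,
# just an ordinary comparison of the precomputed keys
ordered = [row["text"] for row in sorted(rows, key=lambda r: r["sortkey"])]
print(ordered)  # ['Apple', 'banana', 'cherry'] under the C locale
```

The payoff is the same as in the article: the expensive, culture-sensitive transformation runs once per write, while every query-time sort is a cheap binary comparison.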
If anybody told me that SQL Server was going to give good Mongolian script support, I would have been really skeptical -- until I realized it would not be that hard to do the actual work!
Next time, I'll cover those triggers and how to write them, as well as filling out more of the methods of the CustomSqlCollation class....
This post brought to you by "ᠯ" (U+182f, a.k.a. MONGOLIAN LETTER LA)
referenced by
2005/10/21 Extending collation support in SQL Server and Jet, Part 4 (What about Jet?)
2005/10/09 Extending collation support in SQL Server and Jet, Part 3 (THAT CLASS)
2005/09/25 Extending collation support in SQL Server and Jet, Part 2.1 (is this on?)
CHKHELP(1) General Commands Manual CHKHELP(1)
NAME
chkhelp - check performance metrics help text files

SYNOPSIS
$PCP_BINADM_DIR/chkhelp [-eHiOp] [-n pmnsfile] [-v version] helpfile [metricname ...]

DESCRIPTION
chkhelp checks the consistency of Performance Co-Pilot help text files generated by newhelp(1) and used by Performance Metric Domain Agents (PMDAs). The checking involves scanning the files, and optionally displaying selected entries. The files helpfile.dir and helpfile.pag are created by newhelp(1), and are assumed to already exist.

Without any options or metricname arguments, chkhelp silently verifies the structural integrity of the help files.

If any metricname arguments are specified, then the help entries for only the corresponding metrics will be processed. If no metricname arguments are specified, then at least one of the options -i or -p must be given. The -i option causes entries for all instance domains to be processed (ignoring entries for performance metrics). The -p option causes entries for all metrics to be displayed (ignoring entries for instance domains).

When metric entries are to be processed (via the metricname arguments, the -p option or the -i option), the -O and -H options request the display of the one-line and verbose help text respectively. The default is -O.

Although historically there have been multiple help text file formats, the only format currently supported using the -v option is version 2, and this is the default if no -v flag is provided.

Normally chkhelp operates on the default Performance Metrics Name Space (PMNS), however if the -n option is specified an alternative namespace is loaded from the file pmnsfile.

The -e option provides an existence check, where all of the specified metrics from the PMNS (note, not from helpfile) are scanned and only the names of the metrics for which no help text exists are reported. The -e option is mutually exclusive with the -i and/or -p options.

SEE ALSO
newhelp(1), PMAPI(3), pmLookupInDomText(3), pmLookupText(3), pcp.conf(5) and pcp.env(5).

DIAGNOSTICS
There are all sorts of reasons a help database may be inconsistent; the most likely is that a performance metric in the database is not defined in the loaded PMNS.
How about starting a campaign to grow and develop the community around ?
*Edward Saperia*
Conference Director, Wikimania London
email <[email protected]> • facebook • twitter • 07796955572
133-135 Bethnal Green Road, E2 7DG

On 26 August 2014 13:03, svetlana <[email protected]> wrote:

> Hi,
>
> David Goodman.
>
> Thanks, I agree. I'm pretty passionate about making a difference in this area. I would personally go and start doing that /right now/, but the question remains open: Which activity should I engage in for all that to happen?
>
> - Look at recent edits and collaborate with new people? That's a most thankless item on this list, perhaps, as people edit more than anything else.
> - Look at newly created pages and collaborate on those with due care and attention to the new people? That'd be nice. (although imo the drafts process at English Wikipedia creates an unnecessary hierarchy -- I'd love to remain a peer and treat the newcomer as a source of wonderful knowledge, not as a reviewee or mentoree. For this reason, I might perhaps only do this to articles created in main namespace.)
> - I had written a script [2] which makes draft review things more personal by not using a template in review comments, but I couldn't figure out whom to approach to get it deployed, or how to prevent ugly [3] templates on talk pages of people who submitted a draft for review.
> - Reworking the welcome template into something else? Into what specifically?
> - There are other things I tried to do, such as leave simple short messages such as [4], but I have not been doing enough of them to figure out who likes them.
> - Many many examples, warning vandals for example, completely template thing, they get reborn as trolls, etc. see also [5]. But there is a need to not feed them still, i.e. put some effort into personal communication but not too much.
> - Figuring out how to provide IP contributors with more software, up to the point it's technically possible? ([1] lists some software limitations).
> - <add your thought here>
>
> How do I set priorities in such list? Where to start tackling the problem?
>
> svetlana
>
> [1]
> [2]
> [3]
> [4]
23 July 2009 09:44 [Source: ICIS news]
GUANGZHOU (ICIS news)--
The prosperity index for the oil and gas-exploration industry increased 17 points from the first quarter to 153.5 points in the second quarter, while the chemical industry rose 13.3 points to 99.4 points.
The figures were published in a joint report by the China Economic Monitoring and
The production index for the petroleum sector gained 1.43 points in the second quarter to 99.02 points, while the chemical sector rose 0.59 points to 99.22 points, the report said.
Total profits for the petroleum sector increased 73.3% quarter on quarter to yuan (CNY) 71bn ($10.4bn), while the chemical sector went up more than twofold to CNY47bn, according to the report.
Around 22.1% of
Almost 20.6% of the companies in the chemical sector reported losses in the second quarter, down 3.8 percentage points from the first quarter, it said.
Overall, the data showed that the petroleum and chemical industries were on an uphill trend, and performance in the second half of the year would be even better, industry analysts said.
“With further implementation of stimulus plans like tariff breaks on exports, the industries would see higher profitability and growth,” said an analyst from the
A recovery in the global economy would help boost demand and
“
($1 = CNY6.83)
See John Richardson’s Asian Chemical Connections | http://www.icis.com/Articles/2009/07/23/9234393/china-petchems-move-up-on-prosperity-index-in-second.html | CC-MAIN-2014-10 | refinedweb | 238 | 67.55 |
RTSP communication problem
Hi,
It is my first question in this forum, after looking for a long time any solution. I'm using python 2.7 with OPENCV '2.4.13' (I already tried with 3.1) and I can't open streams. I already solved the ffmpeg problem (dll) and tried to run the local camera and after a local video with success.
Could anyone help me? My code follows below.

PS: Windows 10, x86; the RTSP link works (tried in the VLC player); FFmpeg works (tried running a local video in the code).
import cv2, platform
#import numpy as np

cam = ""
#cam = 0 # Use local webcam.

cap = cv2.VideoCapture(cam)
if not cap.isOpened():
    print("not opened")

while(True):
    # Capture frame-by-frame
    ret, current_frame = cap.read()
    if type(current_frame) == type(None):
        print("!!! Couldn't read frame!")
        break

    # Display the resulting frame
    cv2.imshow('frame', current_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# release the capture
cap.release()
cv2.destroyAllWindows()
When I try to run I get back
C:\Python27\python.exe C:/Users/brleme/PycharmProjects/opencv/main.py
not opened
!!! Couldn't read frame!
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:545)
Process finished with exit code 0
image of the camera working in the browser
kind regards, Bruno
...and the problem is ?
sorry :-s...
well, after a long time (probably 3 minutes) it returns the error below (nothing was loaded)
^^ now please put that into your question | https://answers.opencv.org/question/113185/rtsp-communication-problem/ | CC-MAIN-2019-39 | refinedweb | 243 | 70.6 |
#include <db.h>
int db_env_create(DB_ENV **dbenvp, u_int32_t flags);
The db_env_create function creates a DB_ENV structure that is the handle for a Berkeley DB environment. A pointer to this structure is returned in the memory to which dbenvp refers.
The flags value must be set to 0 or the following value:
The DB_CLIENT.
The DB_ENV handle contains a special field, "app_private", which is declared as type "void *". This field is provided for the use of the application program. It is initialized to NULL and is not further used by Berkeley DB in any way.
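The "app_private" slot is the classic opaque user-data pattern: the library initializes the field, never interprets it, and the application hangs whatever context it likes off the handle. Here is a minimal sketch of the idea in Python (illustrative only; the `Handle` class is invented for this sketch and is not part of the Berkeley DB API):

```python
class Handle:
    """A library handle carrying an opaque, caller-owned field."""

    def __init__(self):
        # Initialized to None (the C API uses NULL) and never
        # touched by the library itself.
        self.app_private = None

# The application may attach arbitrary context to the handle:
env = Handle()
env.app_private = {"request_id": 42}
```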
The db_env_create function returns a non-zero error value on failure and 0 on success.
The db_env_create function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the db_env_create function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way. | http://pybsddb.sourceforge.net/api_c/env_create.html | crawl-001 | refinedweb | 161 | 62.27 |
@GolamMostafa, Serial.read() returns an int, not a byte. So you should store it in an int and rerun your tests.
Serial.read() returns an int, not a byte.
Does it mean that the serial FIFO buffer is word oriented? If so, it agrees with our good old days’ knowledge of UART Port of IBMPC. The upper 8-bit of the return value contains the status bits and the lower 8-bit contains the actual character received. In Arduino IDE, execution of byte x = Serial.read(); automatically takes the lower 8-bit of the 16-bit buffer location. In old days, we had to do it in a different way. (Thanks to clever Arduino IDE.)
Old Days’ Method:
#include <stdio.h>
#include <bios.h>
#include <conio.h>

#define COM1INI  bioscom(0x00, 0xC7, 0x0000)  /* C7 = 2 stop bits, C3 = 1 stop bit */
#define COM1WR   bioscom(0x01, ch1, 0x0000)
#define COM1RD   comval.z = bioscom(0x02, 0x00, 0x0000)
#define COM1STS  comval.z = bioscom(0x03, 0x00, 0x0000)  /* status read */
#define TxRDY    ((comval.ch2[1]) & 0x20)  /* != 0x00 */
#define RxRDY    ((comval.ch2[1]) & 0x01)
#define RxDATA   (comval.ch2[0])

void main(void)
{
    unsigned char ch1;
    union {
        int z;
        unsigned char ch2[2];
    } comval;

    clrscr();
    COM1INI;  /* initialize COM1 at 4800 Bd, no parity, 2 stop bits */
    while ((ch1 = getche()) != 'Q')  /* read ASCII code of pressed-down key */
    {
        COM1STS;  /* read COM1 status */
        if (TxRDY != 0x00)
            COM1WR;  /* write ASCII code of the pressed-down key to COM1 */
        if (RxRDY != 0x00)
        {
            COM1RD;           /* data has arrived at COM1; read it */
            putchar(RxDATA);  /* show the received character on screen */
        }
    }
    exit(0);
}
Serial.read() returns an int, not a byte.
GolamMostafa: Does it mean that the serial FIFO buffer is word oriented?
No, it means that, when no data is available to read, Serial.read() returns a value (-1) that cannot possibly be read from the serial port. When there is data available, Serial.read() returns a value in the range 0 to 255. There are no status bits with Serial.read().
In short, Serial.read() returns 256 values (-1 to 255) which cannot possibly be contained in an 8 bit data type thus Serial.read() returns a number of signed int type.
adwsystems: In short, Serial.read() returns 256 values (-1 to 255) which cannot possibly be contained in an 8 bit data type thus Serial.read() returns a number of signed int type.
Correction: Serial.read() can return one of 257 values. 0 through 255 is 256 values (which can be stored in a byte), and -1 is a 257th value. | https://forum.arduino.cc/t/understanding-serial-available/524713?page=2 | CC-MAIN-2021-31 | refinedweb | 420 | 78.04 |
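A small Python model makes the arithmetic concrete (the `FakeSerial` class below is hypothetical, not Arduino code): a read() that must signal "no data" alongside the byte values 0 through 255 needs a wider, signed return type, which is exactly why Serial.read() returns an int rather than a byte.

```python
from collections import deque

class FakeSerial:
    """Models Arduino Serial.read(): -1 when empty, else one byte (0-255)."""

    def __init__(self, data=b""):
        self._rx = deque(data)  # receive buffer of raw bytes

    def available(self):
        return len(self._rx)

    def read(self):
        if not self._rx:
            return -1              # sentinel: cannot be a real byte value
        return self._rx.popleft()  # 0..255

s = FakeSerial(b"\x00\xff")
print(s.read())  # 0 -- valid data, distinct from the -1 sentinel
print(s.read())  # 255
print(s.read())  # -1 -- buffer empty
```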
Like any network client library, libmongoc cannot be fully tested as a black box. Traditional black box tests enter some input and check the output—this only validates one side of the system at a time. But libmongoc has two sides, working in concert. One side is its public API, its structs and functions and so on. The other is its communication over the network with the MongoDB server. Only by treating it as a black pipe can we fully test its two sides.
Origin
I began thinking about black-pipe testing early this year. I was reading the libmongoc test suite in preparation for taking over the project from Christian Hergert and Jason Carey, and I came across Christian's
mock_server_t struct. Test code in C does not ordinarily make lively reading, but I woke up when I saw this. Had he really written a MongoDB wire protocol server in order to test the client library?
If you know Christian Hergert's work, you know the answer. Of course he had. His mock server listened on a random TCP port, parsed the client's network messages, and sent MongoDB responses. At the time,
mock_server_t used callbacks: you created a mock server with a pointer to a function that handled requests and chose how to reply. And if you think callbacks are ungainly in Javascript or Python, try them in C.
Despite its awkward API, the mock server was indispensable for certain tests. For example, Christian had a mock server that reported it only spoke wire protocol versions 10 and 11. Since the latest MongoDB protocol version is only 3, the driver does not know how to talk to such a futuristic server and should refuse to, but the only way to test that behavior is by simulating the server.
Besides the protocol-version test, Christian also used the mock server to validate the client's handling of "read preferences". That is, how the client expresses whether it wants to read from a primary server, a secondary, or some subtler criterion. A mock server is required here because a correct client and a buggy one appear the same at the API level: it is only when we test its behavior at the network layer that bugs are caught.
In these two tests I saw the two use cases for "black-pipe" testing. First, black-pipe tests simulate unusual server behavior and network events. Second, in cases where the client's API behavior can appear correct even when there are bugs at the network layer, black-pipe tests validate the network-level logic too.
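The pattern generalizes beyond MongoDB drivers. As a sketch (standard-library Python with a toy wire protocol; this is not the libmongoc or MockupDB API), a black-pipe test drives the client call on a background thread while the test thread plays the server, asserting on the bytes that cross the socket before deciding how to reply:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def ping(addr):
    """Toy client under test: sends b'PING', expects b'PONG' back."""
    with socket.create_connection(addr) as sock:
        sock.sendall(b"PING")
        return sock.recv(4)

def test_ping():
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))  # pick a random free port
    listener.listen(1)
    addr = listener.getsockname()

    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(ping, addr)   # client blocks in the background
        conn, _ = listener.accept()        # the test thread plays the server
        assert conn.recv(4) == b"PING"     # validate the network side
        conn.sendall(b"PONG")              # unblock the client
        assert future.result(timeout=5) == b"PONG"  # validate the API side
        conn.close()
    listener.close()

test_ping()
```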
Evolution: From C to Python
I had not yet taken leadership of libmongoc—I was finishing up some Python work. So, inspired by Christian's idea, I wrote a mock server in Python, called MockupDB. MockupDB is the subject of my earlier article in this series: "Testing PyMongo As A Black-Pipe."
Since I was working in my native tongue Python, I could afford to be finicky about MockupDB's interface. I didn't want callbacks, dammit, I wanted to make something nice! As I wrote in the MockupDB article, I came up with a future-based programming interface that let me neatly interleave client and server operations in a single test function:
from mockupdb import MockupDB, Command, go
from pymongo import MongoClient

def test():
    server = MockupDB(auto_ismaster={"maxWireVersion": 3})
    server.run()
    client = MongoClient(server.uri)
    collection = client.db.collection
    future = go(collection.insert_one, {"_id": 1})
    request = server.receives(Command({"insert": "collection"}))
    request.reply({'ok': 1})
    assert(future().inserted_id == 1)
Let's break this down. I use MockupDB's
go function to start a PyMongo operation on a background thread, obtaining a handle to its future result:
future = go(collection.insert_one, {"_id": 1})
The driver sends an "insert" command to the mock server and blocks waiting for the server response. I retrieve that command from the server and validate that it has the expected format:
request = server.receives(Command({"insert": "collection"}))
MockupDB asserts that the command arrives promptly and has the right format before it returns the command to me. I reply to the client, which unblocks it and lets me retrieve the future value:
request.reply({'ok': 1}) assert(future().inserted_id == 1)
More Evolution: From Python Back to C
Once Bernie Hackett and I released PyMongo 3.0, I devoted myself to libmongoc full-time. I set to work updating its
mock_server_t with the ideas I had developed in Python. I wrote an example with the API I wanted:
mock_server_t *server;
mongoc_client_t *client;
mongoc_collection_t *collection;
bson_t *document;
bson_error_t error;
future_t *future;
request_t *request;

/* protocol version 3 includes the new "insert" command */
server = mock_server_with_autoismaster (3);
mock_server_run (server);
client = mongoc_client_new_from_uri (mock_server_get_uri (server));
collection = mongoc_client_get_collection (client, "test", "collection");
document = BCON_NEW ("_id", BCON_INT64 (1));
future = future_collection_insert (collection,
                                   MONGOC_INSERT_NONE, /* flags */
                                   document,
                                   NULL,               /* writeConcern */
                                   &error);

request = mock_server_receives_command (server, "test", MONGOC_QUERY_NONE,
                                        "{'insert': 'collection'}");
mock_server_replies_simple (request, "{'ok': 1}");
assert (future_get_bool (future));

future_destroy (future);
request_destroy (request);
bson_destroy (document);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mock_server_destroy (server);
Alas, C is prolix; this was as lean as I could make it. I doubt that you read that block of code. Let's focus on some key lines.
First, the mock server starts up and binds an unused port. Just like in Python, I connect a real client object to the mock server's URI:
client = mongoc_client_new_from_uri (mock_server_get_uri (server));
Now I insert a document. The client sends an "insert" command to the mock server, and blocks waiting for the response:
future = future_collection_insert (collection,
                                   MONGOC_INSERT_NONE, /* flags */
                                   document,
                                   NULL,               /* writeConcern */
                                   &error);
The
future_collection_insert function starts a background thread and runs the libmongoc function
mongoc_collection_insert. It returns a future value, which will be resolved once the background thread completes.
Meanwhile, the mock server receives the client's "insert" command:
request = mock_server_receives_command (server,
                                        "test",            /* DB name */
                                        MONGOC_QUERY_NONE, /* no flags */
                                        "{'insert': 'collection'}");
This statement accomplishes several goals. First, it waits (using a condition variable) for the background thread to send the "insert" command. Second, it validates that the command has the proper format: its database name is "test", its flags are unset, the command itself is named "insert," and the target collection is named "collection."
The test completes when I reply to the client:
mock_server_replies_simple (request, "{'ok': 1}");
assert (future_get_bool (future));
This unblocks the background thread. The future is resolved with the return value of
mongoc_collection_insert. I assert that its return value was
true, meaning it succeeded. My test framework detects if
future_get_bool stays blocked: this means
mongoc_collection_insert is not finishing for some reason, and this too will cause my test to fail.
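That hang-detection is just a future with a deadline. In standard-library Python the same guard can be sketched as follows (illustrative; this is not the actual libmongoc test framework):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def insert_like_operation():
    """Stand-in for a client call running on a background thread."""
    time.sleep(0.05)
    return True

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(insert_like_operation)
    try:
        # Resolve the future, but give up (and fail the test) after 2 seconds.
        assert future.result(timeout=2) is True
    except TimeoutError:
        raise AssertionError("operation stayed blocked; the test fails")
```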
Conclusion
When I first saw Christian Hergert's
mock_server_t its brilliance inspired me: To test a MongoDB client, impersonate a MongoDB server!
I wrote the MockupDB package in Python, and then I overhauled Christian's mock server in C. As I developed and used this idea over the last year, I generalized it beyond the problem of testing MongoDB drivers. What I call a "black pipe test" applies to any networked application whose API behavior and network protocol must be validated simultaneously.
BBC micro:bit MicroPython Documentation, Release 0.5.0
Multiple authors
1 Introduction
  1.1 Hello, World!
  1.2 Images
  1.3 Buttons
  1.4 Input/Output
  1.5 Music
  1.6 Random
  1.7 Movement
  1.8 Gestures
  1.9 Direction
  1.10 Storage
  1.11 Speech
  1.12 Network
  1.13 Radio
  1.14 Next Steps

3 Microbit Module
  3.1 Functions
  3.2 Attributes
  3.3 Classes
  3.4 Modules

4 Bluetooth

6 Music
  6.1 Musical Notation
  6.2 Functions

7 NeoPixel
  7.1 Classes
  7.2 Operations
  7.3 Using Neopixels
  7.4 Example

8 The os Module
  8.1 Functions

9 Radio
  9.1 Constants
  9.2 Functions

11 Speech
  11.1 Functions
  11.2 Punctuation
  11.3 Timbre
  11.4 Phonemes
  11.5 Singing
  11.6 How Does it Work?
  11.7 Example

12 Installation
  12.1 Dependencies
  12.2 Development Environment
  12.3 Installation Scenarios
  12.4 Next steps

16 Contributing
  16.1 Checklist
Welcome!

The BBC micro:bit is a small computing device for children. One of the languages it understands is the popular Python programming language. The version of Python that runs on the BBC micro:bit is called MicroPython.

This documentation includes lessons for teachers and API documentation for developers (check out the index on the left). ... this document. Thanks!
Tutorials
CHAPTER 1
Introduction
We suggest you download and use the Mu editor when working through these tutorials. Instructions for downloading and installing Mu are on its website. You may need to install a driver, depending on your platform (instructions are on the website).

Mu works with Windows, OSX and Linux.

Once Mu is installed, connect your micro:bit to your computer via a USB lead.

Write your script in the editor window and click the "Flash" button to transfer it to the micro:bit. If it doesn't work, make sure your micro:bit appears as a USB storage device in your file system explorer.
The traditional way to start programming in a new language is to get your computer to say, “Hello, World!”.
from microbit import *
display.scroll("Hello, World!")

The first line:

from microbit import *
. . . tells MicroPython to get all the stuff it needs to work with the BBC micro:bit. All this stuff is in a module calledmicrobit (a module is a library of pre-existing code). When you import something you’re telling MicroPythonthat you want to use it, and * is Python’s way to say everything. So, from microbit import * means, inEnglish, “I want to be able to use everything from the microbit code library”.The second line:
display.scroll("Hello, World!")
. . . tells MicroPython to use the display to scroll the string of characters “Hello, World!”. The display part of thatline is an object from the microbit module that represents the device’s physical display (we say “object” instead of“thingy”, “whatsit” or “doodah”). We can tell the display to do things with a full-stop . followed by what looks likea command (in fact it’s something we call a method). In this case we’re using the scroll method. Since scrollneeds to know what characters to scroll across the physical display we specify them between double quotes (") withinparenthesis (( and )). These are called the arguments. So, display.scroll("Hello, World!") means, inEnglish, “I want you to use the display to scroll the text ‘Hello, World!’”. If a method doesn’t need any arguments wemake this clear by using empty parenthesis like this: ().Copy the “Hello, World!” code into your editor and flash it onto the device. Can you work out how to change themessage? Can you make it say hello to you? For example, I might make it say “Hello, Nicholas!”. Here’s a clue, youneed to change the scroll method’s argument.
1.2 Images
MicroPython is about as good at art as you can be if the only thing you have is a 5x5 grid of red LEDs (light emittingdiodes - the things that light up on the front of the device). MicroPython gives you quite a lot of control over thedisplay so you can create all sorts of interesting effects.MicroPython comes with lots of built-in pictures to show on the display. For example, to make the device appearhappy you type:
from microbit import *

display.show(Image.HAPPY)
I suspect you can remember what the first line does. The second line uses the display object to show a built-inimage. The happy image we want to display is a part of the Image object and called HAPPY. We tell show to use itby putting it between the parenthesis (( and )).
• Image.DUCK
• Image.HOUSE
• Image.TORTOISE
• Image.BUTTERFLY
• Image.STICKFIGURE
• Image.GHOST
• Image.SWORD
• Image.GIRAFFE
• Image.SKULL
• Image.UMBRELLA
• Image.SNAKE

There's quite a lot! Why not modify the code that makes the micro:bit look happy to see what some of the other built-in images look like? (Just replace Image.HAPPY with one of the built-in images listed above.)
Of course, you want to make your own image to display on the micro:bit, right?That’s easy.Each LED pixel on the physical display can be set to one of ten values. If a pixel is set to 0 (zero) then it’s off. Itliterally has zero brightness. However, if it is set to 9 then it is at its brightest level. The values 1 to 8 represent thebrightness levels between off (0) and full on (9).Armed with this information, it’s possible to create a new image like this:
boat = Image("05050:" "05050:" "05050:" "99999:" "09990")
display.show(boat)
(When run, the device should display an old-fashioned “Blue Peter” sailing ship with the masts dimmer than the boat’shull.)Have you figured out how to draw a picture? Have you noticed that each line of the physical display is represented bya line of numbers ending in : and enclosed between " double quotes? Each number specifies a brightness. There arefive lines of five numbers so it’s possible to specify the individual brightness for each of the five pixels on each of thefive lines on the physical display. That’s how to create a new image.Simple!In fact, you don’t need to write this over several lines. If you think you can keep track of each line, you can rewrite itlike this:
boat = Image("05050:05050:05050:99999:09990")
1.2.2 Animation
Static images are fun, but it's even more fun to make them move. This is also amazingly simple to do with MicroPython ~ just use a list of images!

Here is a shopping list:

shopping = ["Eggs", "Bacon", "Tomatoes"]
I’ve simply created a list called shopping and it contains three items. Python knows it’s a list because it’s enclosedin square brackets ([ and ]). Items in the list are separated by a comma (,) and in this instance the items are threestrings of characters: "Eggs", "Bacon" and "Tomatoes". We know they are strings of characters because they’reenclosed in quotation marks ".You can store anything in a list with Python. Here’s a list of numbers:
Note: Numbers don’t need to be quoted since they represent a value (rather than a string of characters). It’s thedifference between 2 (the numeric value 2) and "2" (the character/digit representing the number 2). Don’t worry ifthis doesn’t make sense right now. You’ll soon get used to it.
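This aside runs in any Python, not just on the micro:bit; it shows the difference between the numeric value 2 and the string "2" mentioned in the note:

```python
print(2 + 2)          # 4: arithmetic on numeric values
print("2" + "2")      # 22: "adding" strings just joins the characters
print(str(2) == "2")  # True: str converts a number into characters
```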
It’s even possible to store different sorts of things in the same list:
mixed_up_list = ["hello!", 1.234, Image.HAPPY]

To animate the built-in clock images:

from microbit import *

display.show(Image.ALL_CLOCKS, loop=True, delay=100)
As with a single image, we use display.show to show it on the device’s display. However, we tell MicroPythonto use Image.ALL_CLOCKS and it understands that it needs to show each image in the list, one after the other. Wealso tell MicroPython to keep looping over the list of images (so the animation lasts forever) by saying loop=True.Furthermore, we tell it that we want the delay between each image to be only 100 milliseconds (a tenth of a second)with the argument delay=100.Can you work out how to animate over the Image.ALL_ARROWS list? How do you avoid looping forever (hint:the opposite of True is False although the default value for loop is False)? Can you change the speed of theanimation?Finally, here’s how to create your own animation. In my example I’m going to make my boat sink into the bottom ofthe display:
boat1 = Image("05050:" "05050:" "05050:" "99999:" "09990")
boat2 = Image("00000:" "05050:" "05050:" "05050:" "99999")
boat3 = Image("00000:" "00000:" "05050:" "05050:" "05050")
boat4 = Image("00000:" "00000:" "00000:" "05050:"
"05050")
boat5 = Image("00000:" "00000:" "00000:" "00000:" "05050")
boat6 = Image("00000:" "00000:" "00000:" "00000:" "00000")
1.3 Buttons
So far we have created code that makes the device do something. This is called output. However, we also need thedevice to react to things. Such things are called inputs.It’s easy to remember: output is what the device puts out to the world whereas input is what goes into the device for itto process.The most obvious means of input on the micro:bit are its two buttons, labelled A and B. Somehow, we need MicroPy-thon to react to button presses.This is remarkably simple:
from microbit import *

sleep(10000)
display.scroll(str(button_a.get_presses()))
All this script does is sleep for ten thousand milliseconds (i.e. 10 seconds) and then scrolls the number of times youpressed button A. That’s it!While it’s a pretty useless script, it introduces a couple of interesting new ideas: 1. The sleep function will make the micro:bit sleep for a certain number of milliseconds. If you want a pause in your program, this is how to do it. A function is just like a method, but it isn’t attached by a dot to an object. 2. There is an object called button_a and it allows you to get the number of times it has been pressed with the get_presses method.
Since get_presses gives a numeric value and display.scroll only displays characters, we need to convertthe numeric value into a string of characters. We do this with the str function (short for “string” ~ it converts thingsinto strings of characters).The third line is a bit like an onion. If the parenthesis are the onion skins then you’ll notice that display.scrollcontains str that itself contains button_a.get_presses. Python attempts to work out the inner-most answerfirst before starting on the next layer out. This is called nesting - the coding equivalent of a Russian Matrioshka doll.
Let’s pretend you’ve pressed the button 10 times. Here’s how Python works out what’s happening on the third line:Python sees the complete line and gets the value of get_presses:
display.scroll(str(button_a.get_presses()))
Now that Python knows how many button presses there have been, it converts the numeric value into a string ofcharacters:
display.scroll(str(10))
display.scroll("10")
While this might seem like a lot of work, MicroPython makes this happen extraordinarily fast.
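You can watch the same inside-out evaluation in ordinary Python with a stub standing in for the button (the stub below is invented for illustration; it is not part of the microbit module):

```python
def get_presses():
    # Stub: pretend button A was pressed ten times.
    return 10

def scroll(text):
    print("scrolling:", text)

# Python peels the onion from the inside out:
# get_presses() -> 10, then str(10) -> "10", then scroll("10").
scroll(str(get_presses()))
```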
Often you need your program to hang around waiting for something to happen. To do this you make it loop around apiece of code that defines how to react to certain expected events such as a button press.To make loops in Python you use the while keyword. It checks if something is True. If it is, it runs a block of codecalled the body of the loop. If it isn’t, it breaks out of the loop (ignoring the body) and the rest of the program cancontinue.Python makes it easy to define blocks of code. Say I have a to-do list written on a piece of paper. It probably lookssomething like this:
Shopping
Fix broken gutter
Mow the lawn
If I wanted to break down my to-do list a bit further, I might write something like this:
Shopping: Eggs Bacon TomatoesFix broken gutter: Borrow ladder from next door Find hammer and nails Return ladderMow the lawn: Check lawn around pond for frogs Check mower fuel level
It’s obvious that the main tasks are broken down into sub-tasks that are indented underneath the main task to whichthey are related. So Eggs, Bacon and Tomatoes are obviously related to Shopping. By indenting things wemake it easy to see, at a glance, how the tasks relate to each other.This is called nesting. We use nesting to define blocks of code like this:
from microbit import *

while running_time() < 10000:
    display.show(Image.ASLEEP)

display.show(Image.SURPRISED)
The running_time function returns the number of milliseconds since the device started.The while running_time() < 10000: line checks if the running time is less than 10000 milliseconds (i.e.10 seconds). If it is, and this is where we can see scoping in action, then it’ll display Image.ASLEEP. Notice howthis is indented underneath the while statement just like in our to-do list.Obviously, if the running time is equal to or greater than 10000 milliseconds then the display will show Image.SURPRISED. Why? Because the while condition will be False (running_time is no longer < 10000). In thatcase the loop is finished and the program will continue after the while loop’s block of code. It’ll look like yourdevice is asleep for 10 seconds before waking up with a surprised look on its face.Try it!
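The shape of that loop can also be tried in ordinary Python, with time.monotonic() standing in for running_time() (an analogy only: seconds instead of milliseconds, and no display):

```python
import time

start = time.monotonic()
while (time.monotonic() - start) < 0.01:  # "asleep" for 10 milliseconds
    pass  # the loop body runs only while the condition is True

print("surprised!")  # runs once the condition becomes False
```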
If we want MicroPython to react to button press events we should put it into an infinite loop and check if the button is_pressed.

An infinite loop is easy:
while True:
    # Do stuff
(Remember, while checks if something is True to work out if it should run its block of code. Since True is obviously True for all time, you get an infinite loop!)
Let’s make a very simple cyber-pet. It’s always sad unless you’re pressing button A. If you press button B it dies. (Irealise this isn’t a very pleasant game, so perhaps you can figure out how to improve it.):
while True:
    if button_a.is_pressed():
        display.show(Image.HAPPY)
    elif button_b.is_pressed():
        break
    else:
        display.show(Image.SAD)
display.clear()
Can you see how we check what buttons are pressed? We used if, elif (short for "else if") and else. These are called conditionals and work like this:
if something is True:
    # do one thing
elif some other thing is True:
    # do another thing
else:
    # do yet another thing.
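The pattern above is pseudocode; here is a runnable plain-Python sketch of the same if/elif/else structure, using a made-up mood function to stand in for the cyber-pet logic:

```python
def mood(a_pressed, b_pressed):
    # Mirror the cyber-pet logic: happy if A is held, dead if B is
    # pressed, sad otherwise. The names here are illustrative only.
    if a_pressed:
        return "HAPPY"
    elif b_pressed:
        return "DEAD"
    else:
        return "SAD"

print(mood(True, False))   # A held down
print(mood(False, True))   # B pressed
print(mood(False, False))  # nothing pressed
```

Note that only the first matching branch runs: if both buttons are held at once, the if wins and the pet stays happy.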
1.4 Input/Output
There are strips of metal along the bottom edge of the BBC micro:bit that make it look as if the device has teeth. These are the input/output pins (or I/O pins for short).
Some of the pins are bigger than others so it's possible to attach crocodile clips to them. These are the ones labelled 0, 1, 2, 3V and GND (computers always start counting from zero). If you attach an edge connector board to the device it's possible to plug in wires connected to the other (smaller) pins.

Each pin on the BBC micro:bit is represented by an object called pinN where N is the pin number. So, for example, to do things with the pin labelled with a 0 (zero), use the object called pin0.

Simple!

These objects have various methods associated with them depending upon what the specific pin is capable of.
The simplest example of input via the pins is a check to see if they are touched. So, you can tickle your device to make it laugh like this:
while True:
    if pin0.is_touched():
        display.show(Image.HAPPY)
    else:
        display.show(Image.SAD)
With one hand, hold your device by the GND pin. Then, with your other hand, touch (or tickle) the 0 (zero) pin. You should see the display change from grumpy to happy!
This is a form of very basic input measurement. However, the fun really starts when you plug in circuits and other devices via the pins.
The simplest thing we can attach to the device is a Piezo buzzer. We’re going to use it for output.
These small devices play a high-pitched bleep when connected to a circuit. To attach one to your BBC micro:bit you should attach crocodile clips to pin 0 and GND (as shown below).
The wire from pin 0 should be attached to the positive connector on the buzzer and the wire from GND to the negative connector.

The following program will cause the buzzer to make a sound:
pin0.write_digital(1)
This is fun for about 5 seconds and then you'll want to make the horrible squeaking stop. Let's improve our example and make the device bleep:
while True:
    pin0.write_digital(1)
    sleep(20)
    pin0.write_digital(0)
    sleep(480)
Can you work out how this script works? Remember that 1 is "on" and 0 is "off" in the digital world.

The device is put into an infinite loop and immediately switches pin 0 on. This causes the buzzer to emit a beep. While the buzzer is beeping, the device sleeps for twenty milliseconds and then switches pin 0 off. This gives the effect of a short bleep. Finally, the device sleeps for 480 milliseconds before looping back and starting all over again. This means you'll get two bleeps per second (one every 500 milliseconds).

We've made a very simple metronome!
1.5 Music
MicroPython on the BBC micro:bit comes with a powerful music and sound module. It's very easy to generate bleeps and bloops from the device if you attach a speaker. Use crocodile clips to attach pin 0 and GND to the positive and negative inputs on the speaker - it doesn't matter which way round they are connected to the speaker.
Note: Do not attempt this with a Piezo buzzer - such buzzers are only able to play a single tone.
import music
music.play(music.NYAN)
Notice that we import the music module. It contains methods used to make and control sound.

MicroPython has quite a lot of built-in melodies. Here's a complete list:

• music.DADADADUM
• music.ENTERTAINER
• music.PRELUDE
• music.ODE
• music.NYAN
• music.RINGTONE
• music.FUNK
• music.BLUES
• music.BIRTHDAY
• music.WEDDING
• music.FUNERAL
• music.PUNCHLINE
• music.PYTHON
• music.BADDY
• music.CHASE
• music.BA_DING
• music.WAWAWAWAA
• music.JUMP_UP
• music.JUMP_DOWN
• music.POWER_UP
• music.POWER_DOWN

Take the example code and change the melody. Which one is your favourite? How would you use such tunes as signals or cues?
Note: MicroPython helps you to simplify such melodies. It'll remember the octave and duration values until you next change them.

A frequency of 440 is the same as the concert A used to tune a symphony orchestra.

In the example above the range function is used to generate ranges of numeric values. These numbers are used to define the pitch of the tone. The three arguments for the range function are the start value, end value and step size. Therefore, the first use of range is saying, in English, "create a range of numbers between 880 and 1760 in steps of 16". The second use of range is saying, "create a range of values between 1760 and 880 in steps of -16". This is how we generate a rising and then falling scale: for each frequency in the specified range of frequencies, play the pitch of that frequency for 6 milliseconds. Notice how the thing to do for each item in a for loop is indented (as discussed earlier) so Python knows exactly which code to run to handle the individual items.
1.6 Random
Sometimes you want to leave things to chance, or mix it up a little: you want the device to act randomly.

MicroPython comes with a random module to make it easy to introduce chance and a little chaos into your code. For example, here's how to scroll a random name across the display:

from microbit import *
import random
display.scroll(random.choice(names))
The list (names) contains seven names defined as strings of characters. The final line is nested (the "onion" effect introduced earlier): the random.choice method takes the names list as an argument and returns an item chosen at random. This item (the randomly chosen name) is the argument for display.scroll.

Can you modify the list to include your own set of names?
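Here is a self-contained plain-Python sketch of the same random.choice idea (the names below are made up for illustration):

```python
import random

# Any seven strings will do; these stand in for the original list.
names = ["Mary", "Yolanda", "Damien", "Alia",
         "Kushal", "Mei", "Zoltan"]

# random.choice picks exactly one item from the list at random.
picked = random.choice(names)
print(picked)
```

Run it a few times and you'll see a different name most runs, but always one drawn from the list.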
Random numbers are very useful. They're common in games. Why else do we have dice?

MicroPython comes with several useful random number methods. Here's how to make a simple dice:
display.show(str(random.randint(1, 6)))
Every time the device is reset it displays a number between 1 and 6. You're starting to get familiar with nesting, so it's important to note that random.randint returns a whole number between the two arguments, inclusive (a whole number is also called an integer - hence the name of the method). Notice that because display.show expects a character we use the str function to turn the numeric value into a character (we turn, for example, 6 into "6").

If you know you'll always want a number between 0 and N then use the random.randrange method. If you give it a single argument it'll return random integers up to, but not including, the value of the argument N (this is different to the behaviour of random.randint).

Sometimes you need numbers with a decimal point in them. These are called floating point numbers and it's possible to generate such a number with the random.random method. This only returns values between 0.0 and 1.0 inclusive. If you need larger random floating point numbers add the results of random.randrange and random.random like this:
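A plain-Python sketch of that combination, together with the inclusive/exclusive behaviour just described (the range of 100 is an illustrative assumption):

```python
import random

# randint(1, 6) includes both end points, like a dice.
rolls = [random.randint(1, 6) for _ in range(1000)]

# randrange(6) returns 0..5 - the upper bound is excluded.
ranged = [random.randrange(6) for _ in range(1000)]

# A larger floating point number: a whole part plus a fraction.
answer = random.randrange(100) + random.random()
print(answer)
```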
The random number generators used by computers are not truly random. They just give random-like results given a starting seed value. The seed is often generated from random-ish values such as the current time and/or readings from sensors such as the thermometers built into chips.

Sometimes you want to have repeatable random-ish behaviour: a source of randomness that is reproducible. It's like saying that you need the same five random values each time you throw a dice.

This is easy to achieve by setting the seed value. Given a known seed the random number generator will create the same set of random numbers. The seed is set with random.seed and any whole number (integer). This version of the dice program always produces the same results:
random.seed(1337)

while True:
    if button_a.was_pressed():
        display.show(str(random.randint(1, 6)))
Can you work out why this program needs us to press button A instead of resetting the device as in the first dice example?
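The effect of seeding is easy to check with plain Python: reseeding with the same value replays exactly the same sequence of "random" numbers:

```python
import random

random.seed(1337)
first_run = [random.randint(1, 6) for _ in range(5)]

random.seed(1337)  # reset the generator to the same seed
second_run = [random.randint(1, 6) for _ in range(5)]

print(first_run)
print(second_run)  # identical to the first run
```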
1.7 Movement
Your BBC micro:bit comes with an accelerometer. It measures movement along three axes:

• X - tilting from left to right.
• Y - tilting forwards and backwards.
• Z - moving up and down.

There is a method for each axis that returns a positive or negative number indicating a measurement in milli-g's. When the reading is 0 you are "level" along that particular axis.

For example, here's a very simple spirit-level that uses get_x to measure how level the device is along the X axis. Within the body of the loop is a measurement along the X axis which is called reading. Because the accelerometer is so sensitive I've made level +/-20 in range. It's why the if and elif conditionals check for > 20 and < -20. The else statement means that if the reading is between -20 and 20 then we consider it level. For each of these conditions the display is updated appropriately. The screen on a mobile phone uses an accelerometer in exactly the same way as the program above. Game controllers also contain accelerometers to help you steer and move around in games.
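The thresholding just described can be sketched in plain Python (the returned labels are illustrative stand-ins for whatever images the original spirit-level showed):

```python
def level_state(reading):
    # Classify an X-axis reading in milli-g's using the +/-20 band
    # described above.
    if reading > 20:
        return "right"
    elif reading < -20:
        return "left"
    else:
        return "level"

print(level_state(120), level_state(-120), level_state(5))
```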
One of the most wonderful aspects of MicroPython on the BBC micro:bit is how it lets you easily link different capabilities of the device together. For example, let's turn it into a musical instrument (of sorts).

Connect a speaker as you did in the music tutorial. Use crocodile clips to attach pin 0 and GND to the positive and negative inputs on the speaker - it doesn't matter which way round they are connected to the speaker.
What happens if we take the readings from the accelerometer and play them as pitches? Let’s find out:
while True:
    music.pitch(accelerometer.get_y(), 10)
The key line is at the end and remarkably simple. We nest the reading from the Y axis as the frequency to feed into the music.pitch method. We only let it play for 10 milliseconds because we want the tone to change quickly as the device is tipped. Because the device is in an infinite while loop it is constantly reacting to changes in the Y axis measurement.

That's it!

Tip the device forwards and backwards. If the reading along the Y axis is positive it'll change the pitch of the tone.
1.8 Gestures
The really interesting side-effect of having an accelerometer is gesture detection. If you move your BBC micro:bit in a certain way (as a gesture) then MicroPython is able to detect this. Gestures are always represented as strings, such as "up", "down", "face up", "face down" and "shake". While most of the names are obvious, the 3g, 6g and 8g gestures apply when the device encounters these levels of g-force (like when an astronaut is launched into space).

To get the current gesture use the accelerometer.current_gesture method. Its result is going to be one of the named gestures. For example, this program will only make your device happy if it is face up:
while True:
    gesture = accelerometer.current_gesture()
    if gesture == "face up":
        display.show(Image.HAPPY)
    else:
        display.show(Image.ANGRY)
Once again, because we want the device to react to changing circumstances we use a while loop. Within the scope of the loop the current gesture is read and put into gesture. The if conditional checks if gesture is equal to "face up" (Python uses == to test for equality; a single equals sign = is used for assignment - just like how we assign the gesture reading to the gesture object). If the gesture is equal to "face up" then use the display to show a happy face. Otherwise, the device is made to look angry!
1.8.1 Magic-8
A Magic-8 ball is a toy first invented in the 1950s. The idea is to ask it a yes/no question, shake it and wait for it to reveal the truth. It's rather easy to turn into a program:
answers = [
    "It is certain",
    "It is decidedly so",
    "Without a doubt",
    "Yes, definitely",
    "You may rely on it",
    "As I see it, yes",
    "Most likely",
    "Outlook good",
    "Yes",
    "Signs point to yes",
    "Reply hazy try again",
    "Ask again later",
    "Better not tell you now",
The program clears the screen, waits for a second (so the device appears to be thinking about your question) and displays a randomly chosen answer.

Why not ask it if this is the greatest program ever written? What could you do to "cheat" and make the answer always positive or negative? (Hint: use the buttons.)
1.9 Direction
There is a compass on the BBC micro:bit. If you ever make a weather station, use the device to work out the wind direction.
1.9.1 Compass
compass.calibrate()
while True:
    needle = ((15 - compass.heading()) // 30) % 12
    display.show(Image.ALL_CLOCKS[needle])
Note: You must calibrate the compass before taking readings. Failure to do so will produce garbage results. The calibration method runs a fun little game to help the device work out where it is in relation to the Earth's magnetic field.
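The needle expression above maps a heading in degrees (0-359) onto one of the twelve clock images. Its arithmetic can be checked in plain Python:

```python
def needle_for(heading):
    # Shift by 15 degrees so each clock position covers a 30 degree
    # slice centred on its direction, then wrap into 0..11.
    return ((15 - heading) // 30) % 12

# Headings near north (0 degrees) pick clock position 0.
print(needle_for(0), needle_for(359), needle_for(180))
```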
1.10 Storage
Sometimes you need to store useful information. Such information is stored as data: representation of information (in a digital form when stored on computers). If you store data on a computer it should persist, even if you switch the device off and on again.

Happily MicroPython on the micro:bit allows you to do this with a very simple file system. Because of memory constraints there is approximately 30k of storage available on the file system.

What is a file system?

It's a means of storing and organising data in a persistent manner - any data stored in a file system should survive restarts of the device. As the name suggests, data stored on a file system is organised into files.
A computer file is a named digital resource that's stored on a file system. Such resources contain useful information as data. A file name usually ends with a suffix that indicates what sort of data it contains. For example, .txt indicates a text file, .jpg a JPEG image and .mp3 sound data encoded as MP3.

Some file systems (such as the one found on your laptop or PC) allow you to organise your files into directories: named containers that group related files and sub-directories together. However, the file system provided by MicroPython is a flat file system. A flat file system does not have directories - all your files are just stored in the same place.

The Python programming language contains easy to use and powerful ways in which to work with a computer's file system. MicroPython on the micro:bit implements a useful subset of these features to make it easy to read and write
files on the device, while also providing consistency with other versions of Python.
Warning: Re-flashing the device will destroy the file system, so any files stored on it will be lost.
Reading and writing a file on the file system is achieved by the open function. Once a file is opened you can do stuff with it until you close it (analogous with the way we use paper files). It is essential you close a file so MicroPython knows you've finished with it. The best way to make sure of this is to use the with statement like this:

with open('story.txt') as my_file:
    content = my_file.read()
print(content)
The with statement uses the open function to open a file and assign it to an object. In the example above, the open function opens the file called story.txt (obviously a text file containing a story of some sort). The object that's used to represent the file in the Python code is called my_file. Subsequently, in the code block indented underneath the with statement, the my_file object is used to read() the content of the file and assign it to the content object.

Here's the important point: the next line containing the print statement is not indented. The code block associated with the with statement is only the single line that reads the file. Once the code block associated with the with statement is closed then Python (and MicroPython) will automatically close the file for you. This is called context handling and the open function creates objects that are context handlers for files.

Put simply, the scope of your interaction with a file is defined by the code block associated with the with statement that opens the file.

Confused?

Don't be. I'm simply saying your code should look like this:

with open('some_file') as some_object:
    # Do stuff with some_object
Just like a paper file, a digital file is opened for two reasons: to read its content (as demonstrated above) or to write something to the file. The default mode is to read the file. If you want to write to a file you need to tell the open function in the following way:

with open('hello.txt', 'w') as my_file:
    my_file.write("Hello, World!")
Notice the 'w' argument is used to set the my_file object into write mode. You could also pass an 'r' argumentto set the file object to read mode, but since this is the default, it’s often left off.Writing data to the file is done with the (you guessed it) write method that takes the string you want to write to thefile as an argument. In the example above, I write the text “Hello, World!” to a file called “hello.txt”.
Simple!
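The read and write patterns above can be exercised with regular Python on your computer too (using a temporary directory so no real files are disturbed):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as workdir:
    path = os.path.join(workdir, "hello.txt")

    # Write mode ('w') creates the file, or overwrites it if present.
    with open(path, "w") as my_file:
        my_file.write("Hello, World!")

    # The default mode is read, so no second argument is needed.
    with open(path) as my_file:
        content = my_file.read()

print(content)
```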
Note: When you open a file and write (perhaps several times while the file is in an open state) you will be writing OVER the content of the file if it already exists.

If you want to append data to a file you should first read it, store the content somewhere, close it, append your data to the content and then open it to write again with the revised content.

While this is the case in MicroPython, "normal" Python can open files to write in "append" mode. That we can't do this on the micro:bit is a result of the simple implementation of the file system.
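The read-then-rewrite workaround described in the note looks something like this in practice (plain Python; the helper and file names are made up for illustration):

```python
import os
import tempfile

def append_to_file(path, extra):
    # Read the existing content (an empty string if absent), then
    # rewrite the whole file with the new data added on the end.
    try:
        with open(path) as f:
            content = f.read()
    except OSError:
        content = ""
    with open(path, "w") as f:
        f.write(content + extra)

with tempfile.TemporaryDirectory() as workdir:
    log = os.path.join(workdir, "log.txt")
    append_to_file(log, "first line\n")
    append_to_file(log, "second line\n")
    with open(log) as f:
        result = f.read()

print(result)
```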
1.10.2 OS SOS
As well as reading and writing files, Python can manipulate them. You certainly need to know what files are on the file system and sometimes you need to delete them too.

On a regular computer, it is the role of the operating system (like Windows, OSX or Linux) to manage this on Python's behalf. Such functionality is made available in Python via a module called os. Since MicroPython is the operating system we've decided to keep the appropriate functions in the os module for consistency so you'll know where to find them when you use "regular" Python on a device like a laptop or Raspberry Pi.

Essentially, you can do three operations related to the file system: list the files, remove a file and ask for the size of a file.

To list the files on your file system use the listdir function. It returns a list of strings indicating the file names of the files on the file system:
import os

my_files = os.listdir()
To delete a file use the remove function. It takes a string representing the file name of the file you want to delete as an argument, like this:
import os

os.remove('filename.txt')
Finally, sometimes it’s useful to know how big a file is before reading from it. To achieve this use the size function.Like the remove function, it takes a string representing the file name of the file whose size you want to know. Itreturns an integer (whole number) telling you the number of bytes the file takes up:
import os

file_size = os.size('a_big_file.txt')
It’s all very well having a file system, but what if we want to put or get files on or off the device?Just use the microfs utility!
If you have Python installed on the computer you use to program your BBC micro:bit then you can use a special utility called microfs (shortened to ufs when using it in the command line). Full instructions for installing and using all the features of microfs can be found in its documentation.

Nevertheless it's possible to do most of the things you need with just four simple commands:
$ ufs ls
story.txt
The ls sub-command lists the files on the file system (it’s named after the common Unix command, ls, that servesthe same function).
The get sub-command gets a file from the connected micro:bit and saves it into your current location on your computer (it's named after the get command that's part of the common file transfer protocol [FTP] that serves the same function).
$ ufs rm story.txt
The rm sub-command removes the named file from the file system on the connected micro:bit (it's named after the common Unix command, rm, that serves the same function).
Finally, the put sub-command puts a file from your computer onto the connected device (it's named after the put command that's part of FTP that serves the same function).
The file system also has an interesting property: if you just flashed the MicroPython runtime onto the device then when it starts it's simply waiting for something to do. However, if you copy a special file called main.py onto the file system, upon restarting the device, MicroPython will run the contents of the main.py file.

Furthermore, if you copy other Python files onto the file system then you can import them as you would any other Python module. For example, if you had a hello.py file that contained the following simple code:
def say_hello(name="World"): return "Hello, {}!".format(name)
. . . you could import and use the say_hello function like this:
from hello import say_hello

display.scroll(say_hello())
Of course, it results in the text "Hello, World!" scrolling across the display. The important point is that such an example is split between two Python modules and the import statement is used to share code.
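You can try the say_hello function itself with regular Python, since it doesn't depend on any micro:bit hardware:

```python
def say_hello(name="World"):
    # Same function as in hello.py above: format the name into a
    # greeting, defaulting to "World" when no name is given.
    return "Hello, {}!".format(name)

print(say_hello())
print(say_hello("micro:bit"))
```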
Note: If you have flashed a script onto the device in addition to the MicroPython runtime, then MicroPython will ignore main.py and run your embedded script instead.

To flash just the MicroPython runtime, simply make sure the script you may have written in your editor has zero characters in it. Once flashed you'll be able to copy over a main.py file.
1.11 Speech
It’s a little known fact that DALEKs enjoy poetry ~ especially limericks. They go wild for anapestic meter with a strictA.
Before the device can talk you need to plug in a speaker like this:
The simplest way to get the device to speak is to import the speech module and use the say function like this:
import speech
speech.say("Hello, World")
While this is cute it's certainly not DALEK enough for our taste, so we need to change some of the parameters that the speech synthesiser uses to produce the voice. Our speech synthesiser is quite powerful in this respect because we can change four parameters:

• pitch - how high or low the voice sounds (0 = high, 255 = Barry White)
• speed - how quickly the device talks (0 = impossible, 255 = bedtime story)
• mouth - how tight-lipped or overtly enunciating the voice sounds (0 = ventriloquist's dummy, 255 = Foghorn Leghorn)
• throat - how relaxed or tense is the tone of voice (0 = falling apart, 255 = totally chilled)

Collectively, these parameters control the quality of sound - a.k.a. the timbre. To be honest, the best way to get the tone of voice you want is to experiment, use your judgement and adjust.

To adjust the settings you pass them in as arguments to the say function. More details can be found in the speech module's API documentation.

After some experimentation we've worked out this sounds quite DALEK-esque:
Being cyborgs, DALEKs use their robot capabilities to compose poetry, and it turns out that the algorithm they use is written in Python like this:
# Loop over each line in the poem and use the speech module to recite it.
for line in poem:
1.11.4 Phonemes

The say function works by converting English words into phonemes - an approximation of the phonemes it would use to generate the audio. This result can be hand-edited to improve the accuracy of the pronunciation.
By changing the pitch setting and calling the sing function it's possible to make the device sing (although it's not going to win Eurovision any time soon).

The mapping from pitch numbers to musical notes is shown below:
The sing function must take phonemes and pitch as input like this:

speech.sing("#115DOWWWW")
Notice how the pitch to be sung is prepended to the phoneme with a hash (#). The pitch will remain the same for subsequent phonemes until a new pitch is annotated.

The following example demonstrates how all three generative functions (say, pronounce and sing) can be used to produce speech like output:

import speech
from microbit import sleep
    "#62TIYYYYYY",   # Ti
    "#58DOWWWWWW",   # Doh
]
1.12 Network

The really interesting thing about this project is it contains all the common aspects of network programming you need to know about. It's also remarkably simple and fun.

But first, let's set the scene. . .
1.12.1 Connection
Imagine a network as a series of layers. At the very bottom is the most fundamental aspect of communication: there needs to be some sort of way for a signal to get from one device to the other. Sometimes this is done via a radio connection, but in this example we're simply going to use two wires.
It is upon this foundation that we can build all the other layers in the network stack.

As the diagram shows, blue and red micro:bits are connected via crocodile leads. Both use pin 1 for output and pin 2 for input. The output from one device is connected to the input on the other. It's a bit like knowing which way round to hold a telephone handset - one end has a microphone (the input) and the other a speaker (the output). The recording of your voice via your microphone is played out of the other person's speaker. If you hold the phone the wrong way up, you'll get strange results!

It's exactly the same in this instance: you must connect the wires properly!
1.12.2 Signal
The next layer in the network stack is the signal. Often this will depend upon the characteristics of the connection. In our example it's simply digital on and off signals sent down the wires via the IO pins.

If you remember, it's possible to use the IO pins like this:

pin1.write_digital(1)  # switch the signal on
pin1.write_digital(0)  # switch the signal off
input = pin2.read_digital()  # read the value of the signal (either 1 or 0)
The next step involves describing how to use and handle a signal. For that we need a. . .
1.12.3 Protocol
If you ever meet the Queen there are expectations about how you ought to behave. For example, when she arrives you may bow or curtsy. This sort of etiquette tells everyone how to behave when a given situation arises.
It is for this reason that we define and use protocols for communicating messages via a computer network. Computers need to agree beforehand how to send and receive messages. Perhaps the best known protocol is the hypertext transfer protocol (HTTP) used by the world wide web.

Another famous protocol for sending messages (that pre-dates computers) is Morse code. It defines how to send character-based messages via on/off signals of long or short durations. Often such signals are played as bleeps. Long durations are called dashes (-) whereas short durations are dots (.). By combining dashes and dots Morse defines a way to send characters. For example, here's how the standard Morse alphabet is defined:
Given the chart above, to send the character "H" the signal is switched on four times for a short duration, indicating four dots (....). For the letter "L" the signal is also switched on four times, but the second signal has a longer duration (.-..).

Obviously, the timing of the signal is important: we need to tell a dot from a dash. That's another point of a protocol, to agree such things so everyone's implementation of the protocol will work with everyone else's. In this instance we'll just say that:

• A signal with a duration less than 250 milliseconds is a dot.
• A signal with a duration from 250 milliseconds to less than 500 milliseconds is a dash.
• Any other duration of signal is ignored.
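A minimal plain-Python sketch of those timing rules (the function name and return values are illustrative; the threshold names match the constants used later in the program):

```python
DOT_THRESHOLD = 250   # a signal shorter than this is a dot
DASH_THRESHOLD = 500  # ...otherwise, shorter than this is a dash

def classify(duration_ms):
    # Turn a signal duration in milliseconds into a Morse symbol,
    # or None when the signal should be ignored.
    if duration_ms < DOT_THRESHOLD:
        return "."
    elif duration_ms < DASH_THRESHOLD:
        return "-"
    else:
        return None

print(classify(100), classify(300), classify(900))
```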
1.12.4 Message
We’re finally at a stage where we can build a message - a message that actually means something to us humans. Thisis the top-most layer of our network stack.Using the protocol defined above I can send the following sequence of signals down the physical wire to the othermicro:bit:
...././.-../.-../---/.--/---/.-./.-../-..
1.12.5 Application
It’s all very well having a network stack, but you also need a way to interact with it - some form of application to sendview?Obviously, to send a message you should be able to input dots and dashes (we can use button A for that). If we want tosee the message we sent or just received we should be able to trigger it to scroll across the display (we can use buttonB for that). Finally, this being Morse code, if a speaker is attached, we should be able to play the beeps as a form ofaural feedback while the user is entering their message.
Here’s the program, in all its glory and annotated with plenty of comments so you can see what’s going on:
def decode(buffer):
    # Attempts to get the buffer of Morse code data from the lookup table. If
    # it's not there, just return a full stop.
    return MORSE_CODE_LOOKUP.get(buffer, '.')
    "00900:"
    "00000:"
    "00000:")
# To create a DOT you need to hold the button for less than 250ms.
DOT_THRESHOLD = 250
# To create a DASH you need to hold the button for less than 500ms.
DASH_THRESHOLD = 500
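The lookup-table decoding can be sketched with a partial, hand-made table (just enough entries to decode the "HELLO WORLD" signal from earlier; the real program's table covers the whole Morse alphabet):

```python
# A partial Morse lookup table - only the letters needed here.
MORSE_CODE_LOOKUP = {
    "....": "H", ".": "E", ".-..": "L", "---": "O",
    ".--": "W", ".-.": "R", "-..": "D",
}

def decode(buffer):
    # Unknown buffers fall back to a full stop, as in the program above.
    return MORSE_CODE_LOOKUP.get(buffer, '.')

signal = "...././.-../.-../---/.--/---/.-./.-../-.."
message = "".join(decode(chunk) for chunk in signal.split("/"))
print(message)
```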
1.13 Radio
1.13.2 Bytes
A byte is a unit of information that (usually) consists of eight bits. A bit is the smallest possible unit of information since it can only be in two states: on or off.

Bytes work like a sort of abacus: each position in the byte is like a column in an abacus - they represent an associated number. By turning the bits on or off, a byte can represent any number between 0 and 255. The image below shows how this works with five bits and counting up from zero:
If we can agree what each one of the 256 numbers (encoded by a byte) represents ~ such as a character ~ then we can start to send meaningful information one byte at a time. The details of how such encodings work are explained on the website.
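The abacus analogy is easy to verify in plain Python, where int can parse a string of bits directly:

```python
# Each bit position carries a value: 16, 8, 4, 2, 1 for five bits.
bits = "10110"
value = int(bits, 2)  # 16 + 4 + 2
print(value)

# Eight bits span 0..255 - the range a single byte can represent.
lowest = int("00000000", 2)
highest = int("11111111", 2)
print(lowest, highest)
```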
1.13.3 Addressing
The problem with radio is that you can't transmit directly to one person. Anyone with an appropriate aerial can receive the messages you transmit. As a result it's important to be able to differentiate who should be receiving broadcasts. The way the radio built into the micro:bit solves this problem is quite simple:

• Broadcasts happen on a numbered channel, so devices tuned to different channels don't hear each other.
• Messages can be filtered by address and group, so your device only acts upon broadcasts intended for it.

The great thing is you don't need to worry about filtering those out. Nevertheless, if someone were clever enough, they could just read all the wireless network traffic no matter what the target address/group was supposed to be. In this case, it's essential to use encrypted means of communication so only the desired recipient can actually read the message that was broadcast. Cryptography is a fascinating subject but, unfortunately, beyond the scope of this tutorial.
1.13.4 Fireflies
This is a firefly:
It’s a sort of bug that uses bioluminescence to signal (without wires) to its friends. Here’s what they look like whenthey signal to each other:
radio.send("a message")
The example uses the send function to simply broadcast the string “a message”. To receive a message is even easier:
new_message = radio.receive()
As messages are received they are put on a message queue. The receive function returns the oldest message from the queue as a string, making space for a new incoming message. If the message queue fills up, then new incoming messages are ignored.

That's really all there is to it! (Although the radio module is also powerful enough that you can send any arbitrary type of data, not just strings. See the API documentation for how this works.)

Armed with this knowledge, it's simple to make micro:bit fireflies like this:
# A micro:bit Firefly.
# By Nicholas H. Tollervey. Released to the public domain.
import radio
import random
from microbit import display, Image, button_a, sleep
# Create the "flash" animation frames. Can you work out how it's done?flash = [Image().invert()*(i/9) for i in range(9, -1, -1)]
# Event loop.
while True:
    # Button A sends a "flash" message.
    if button_a.was_pressed():
        radio.send('flash')  # a-ha
    # Read any incoming messages.
    incoming = radio.receive()
    if incoming == 'flash':
        # If there's an incoming "flash" message, wait a short random
        # period of time and then animate a firefly flash.
        sleep(random.randint(50, 350))
        display.show(flash, delay=100, wait=False)
        # 1 in 10 chance of re-broadcasting the "flash" message, after
        # a half-second pause so the initial flash can die down.
        if random.randint(0, 9) == 0:
            sleep(500)
            radio.send('flash')  # a-ha
The important stuff happens in the event loop. First, it checks if button A was pressed and, if it was, uses the radio to send the message "flash". Then it reads any messages from the message queue with radio.receive(). If there is a message it sleeps a short, random period of time (to make the display more interesting) and uses display.show() to animate a firefly flash. Finally, to make things a bit exciting, it chooses a random number so that it has a 1 in 10 chance of re-broadcasting the "flash" message to anyone else (this is how it's possible to sustain the firefly display among several devices). If it decides to re-broadcast then it waits for half a second (so the display from the initial flash message has a chance to die down) before sending the "flash" signal again. Because this code is enclosed within a while True block, it loops back to the beginning of the event loop and repeats this process forever.

The end result (using a group of micro:bits) should look something like this:
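As an aside, the queue policy described earlier (oldest message returned first, new messages dropped once the queue is full) can be modelled in plain Python. The `MessageQueue` class below is purely illustrative - it is not the actual radio implementation:

```python
class MessageQueue:
    """Illustrative model of the radio's message queue: receive()
    returns the oldest message, and incoming messages are ignored
    once the queue is full."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.messages = []

    def incoming(self, message):
        # Queue full -> the new message is silently dropped.
        if len(self.messages) < self.capacity:
            self.messages.append(message)

    def receive(self):
        # Oldest first; None when nothing is waiting.
        return self.messages.pop(0) if self.messages else None

q = MessageQueue(capacity=2)
for msg in ('a', 'b', 'c'):   # 'c' arrives while the queue is full
    q.incoming(msg)
print(q.receive())  # a  (oldest)
print(q.receive())  # b
print(q.receive())  # None ('c' was dropped)
```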
These tutorials are only the first steps in using MicroPython with the BBC micro:bit. A musical analogy: you've got a start on a very simple instrument. Python has a secret weapon: the most amazing community of programmers on the planet. Connect with this community and you will go far. Once you know how to make MicroPython fly on the BBC micro:bit then read the API reference documentation. It contains information about all the capabilities available to you.

Explore, experiment and be fearless trying things out - for these are the attributes of a virtuoso coder.

Python is one of the world's most popular programming languages. Every day, without realising, you probably use software written using Python. All sorts of companies and organisations use Python for a diverse range of applications. Google, NASA, Bank of America, Disney, CERN, YouTube, Mozilla, The Guardian - the list goes on and covers all sectors of the economy, science and the arts.

For example, do you remember the announcement of the discovery of gravitational waves? The instruments used to make the measurements were controlled with Python.
Put simply, if you teach or learn Python, you are developing a highly valuable skill that applies to all areas of human endeavour.

One such area is the BBC's amazing micro:bit device. It runs a version of Python called MicroPython that's designed to run on small computers like the BBC micro:bit. It's a full implementation of Python 3 so when you move onto other things (such as programming Python on a Raspberry Pi) you'll use exactly the same language.

MicroPython does not include all the standard code libraries that come with "regular" Python. However, we have created a special microbit module in MicroPython that lets you control the device.

Python and MicroPython are free software. Not only does this mean you don't pay anything to use Python, but you are also free to contribute back to the Python community. This may be in the form of code, documentation, bug reports, running a community group or writing tutorials (like this one). In fact, all the Python related resources for the BBC micro:bit have been created by an international team of volunteers working in their free time.

These lessons introduce MicroPython and the BBC micro:bit in easy-to-follow steps. Feel free to adopt and adapt them for classroom based lessons, or perhaps just follow them on your own at home.

You'll have most success if you explore, experiment and play. You can't break a BBC micro:bit by writing incorrect code. Just dive in!

A word of warning: you will fail many times, and that is fine. Failure is how good software developers learn. Those of us who work as software developers have a lot of fun tracking down bugs and avoiding the repetition of mistakes.

If in doubt, remember the Zen of MicroPython:
Code,
Hack it,
Less is more,
Keep it simple,
Small is beautiful,
Best of luck!
CHAPTER 2
Everything directly related to interacting with the hardware lives in the microbit module. For ease of use it's recommended you start all scripts with:

from microbit import *
The rest of the functionality is provided by objects and classes in the microbit module, as described below. Note that the API exposes integers only (i.e. no floats are needed, but they may be accepted). We thus use milliseconds for the standard time unit.
2.1.1 Buttons
button_a
button_b
2.1.3 Pins
Provide digital and analog input and output functionality, for the pins in the connector. Some pins are connected internally to the I/O that drives the LED matrix and the buttons.

Each pin is provided as an object directly in the microbit module. This keeps the API relatively flat, making it very easy to use:

• pin0
• pin1
• ...
• pin15
• pin16

Warning: P17-P18 (inclusive) are unavailable.
• pin19
• pin20

Each of these pins is an instance of the MicroBitPin class, which offers the following API:

# value can be 0, 1, False, True
pin.write_digital(value)
# returns either 1 or 0
pin.read_digital()
# value is between 0 and 1023
pin.write_analog(value)
# returns an integer between 0 and 1023
pin.read_analog()
# sets the period of the PWM output of the pin in milliseconds
# (see)
pin.set_analog_period(int)
# sets the period of the PWM output of the pin in microseconds
# (see)
pin.set_analog_period_microseconds(int)
# returns boolean
pin.is_touched()
2.1.4 Images
Note: You don’t always need to create one of these yourself - you can access the image shown on the display directlywith display.image. display.image is just an instance of Image, so you can use all of the same methods.
Images API:

# creates an empty 5x5 image
image = Image()
# creates an empty image of the specified width and height
image = Image(width, height)
# initialises an Image with the specified width and height. The buffer
# should be an array of length width * height
image = Image(width, height, buffer)
# operators
# returns a new image created by superimposing the two images
image + image
# returns a new image created by multiplying the brightness of each pixel by n
image * n
# built-in images.
Image.HEART
Image.HEART_SMALL
Image.HAPPY
Image.SMILE
Image.SAD
Image.CONFUSED
Image.ANGRY
Image.ASLEEP
Image.SURPRISED
Image.SILLY
Image.FABULOUS
Image.MEH
Image.YES
Image.NO
Image.CLOCK12    # clock at 12 o'clock
Image.CLOCK11
...              # many clocks (Image.CLOCKn)
Image.CLOCK1     # clock at 1 o'clock
Image.ARROW_N
...              # arrows pointing N, NE, E, SE, S, SW, W, NW (microbit.Image.ARROW_direction)
Image.ARROW_NW
Image.TRIANGLE
Image.TRIANGLE_LEFT
Image.CHESSBOARD
Image.DIAMOND
Image.DIAMOND_SMALL
Image.SQUARE
Image.SQUARE_SMALL
Image.RABBIT
Image.COW
Image.MUSIC_CROTCHET
Image.MUSIC_QUAVER
Image.MUSIC_QUAVERS
Image.PITCHFORK
Image.XMAS
Image.PACMAN
Image.TARGET
Image.TSHIRT
Image.ROLLERSKATE
Image.DUCK
Image.HOUSE
Image.TORTOISE
Image.BUTTERFLY
Image.STICKFIGURE
Image.GHOST
Image.SWORD
Image.GIRAFFE
Image.SKULL
Image.UMBRELLA
Image.SNAKE
# built-in lists - useful for animations, e.g. display.show(Image.ALL_CLOCKS)
Image.ALL_CLOCKS
Image.ALL_ARROWS
The recognised gestures are: up, down, left, right, face up, face down, freefall, 3g, 6g, 8g, shake.
2.1.8 UART
Microbit Module
The microbit module gives you access to all the hardware that is built into your board.
3.1 Functions
microbit.panic(n)
    Enter a panic mode. Requires restart. Pass in an arbitrary integer <= 255 to indicate a status:
microbit.panic(255)
microbit.reset()
    Restart the board.
microbit.sleep(n)
    Wait for n milliseconds. One second is 1000 milliseconds, so:
microbit.sleep(1000)
will pause the execution for one second. n can be an integer or a floating point number.

microbit.running_time()
    Return the number of milliseconds since the board was switched on or restarted.
microbit.temperature()
    Return the temperature of the micro:bit in degrees Celsius.
3.2 Attributes
3.2.1 Buttons
There are two buttons on the board, called button_a and button_b.
Attributes
button_a
    A Button instance (see below) representing the left button.
button_b
    Represents the right button.
Classes
class Button Represents a button.
Note: This class is not actually available to the user, it is only used by the two button instances, which are provided already initialized.
is_pressed()
    Returns True if the button is pressed, and False otherwise.
was_pressed()
    Returns True or False to indicate if the button was pressed since the device started or the last time this method was called.
get_presses()
    Returns the running total of button presses, and resets this total to zero before returning.
Example
import microbit
while True:
    if microbit.button_a.is_pressed() and microbit.button_b.is_pressed():
        microbit.display.scroll("AB")
        break
    elif microbit.button_a.is_pressed():
        microbit.display.scroll("A")
    elif microbit.button_b.is_pressed():
        microbit.display.scroll("B")
    microbit.sleep(100)
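The latching behaviour of was_pressed() and the reset-on-read behaviour of get_presses() can be modelled off-device. The `MockButton` class below is purely illustrative (the real Button class lives in the firmware, as the Note above explains):

```python
class MockButton:
    """Host-side model of the Button semantics described above."""
    def __init__(self):
        self._pressed_since_last_call = False
        self._presses = 0

    def press(self):
        # Simulate a physical button press.
        self._pressed_since_last_call = True
        self._presses += 1

    def was_pressed(self):
        # True if pressed since start-up or the last call, then reset.
        result = self._pressed_since_last_call
        self._pressed_since_last_call = False
        return result

    def get_presses(self):
        # Running total, reset to zero on read.
        result = self._presses
        self._presses = 0
        return result

b = MockButton()
b.press()
print(b.was_pressed())  # True
print(b.was_pressed())  # False (flag was reset by the first call)
print(b.get_presses())  # 1
print(b.get_presses())  # 0 (total was reset by the first call)
```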
The pins are your board's way to communicate with external devices connected to it. There are 19 pins at your disposal, numbered 0-16 and 19-20. Pins 17 and 18 are not available.

For example, the script below will change the display on the micro:bit depending upon the digital reading on pin 0:
while True:
    if pin0.read_digital():
        display.show(Image.HAPPY)
    else:
        display.show(Image.SAD)
Pin Functions
The above table summarizes the pins available, their types (see below) and what they are internally connected to.
Pulse-Width Modulation
The pins of your board cannot output analog signal the way an audio amplifier can do it - by modulating the voltage on the pin. Those pins can only either enable the full 3.3V output, or pull it down to 0V. However, it is still possible to control the brightness of LEDs or speed of an electric motor, by switching that voltage on and off very fast, and controlling how long it is on and how long it is off. This technique is called Pulse-Width Modulation (PWM), and that's what the write_analog method below does.
Above you can see the diagrams of three different PWM signals. All of them have the same period (and thus frequency), but they have different duty cycles.

The first one would be generated by write_analog(511), as it has exactly 50% duty - the power is on half of the time, and off half of the time. The result of that is that the total energy of this signal is the same as if it was 1.65V instead of 3.3V.

The second signal has 25% duty cycle, and could be generated with write_analog(255). It has a similar effect as if 0.825V was being output on that pin.

The third signal has 75% duty cycle, and can be generated with write_analog(767). It has three times as much energy as the second signal, and is equivalent to outputting 2.475V on the pin.

Note that this works well with devices such as motors, which have huge inertia by themselves, or LEDs, which blink too fast for the human eye to see the difference, but it will not work so well for generating sound waves. This board can only generate square wave sounds on itself, which sound pretty much like the very old computer games - mostly because those games also could only do that.
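The duty-cycle arithmetic above is easy to verify: the value passed to write_analog maps 0-1023 onto 0-100% duty, and the equivalent average voltage is that fraction of 3.3V. A quick host-Python check (the helper names are ours, not part of the API):

```python
SUPPLY_VOLTS = 3.3
ANALOG_MAX = 1023

def duty_cycle(value):
    """Fraction of the time the pin is high for a write_analog value."""
    return value / ANALOG_MAX

def average_volts(value):
    """Equivalent average voltage of the resulting PWM signal."""
    return SUPPLY_VOLTS * duty_cycle(value)

# The three examples from the text: ~50%/1.65V, ~25%/0.825V, ~75%/2.475V.
for value in (511, 255, 767):
    print(value, round(duty_cycle(value) * 100), '%',
          round(average_volts(value), 3), 'V')
```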
There are three kinds of pins, differing in what is available for them. They are represented by the classes listed below. Note that they form a hierarchy, so that each class has all the functionality of the previous class, and adds its own to that.
Note: Those classes are not actually available for the user, you can’t create new instances of them. You can only usethe instances already provided, representing the physical pins on your board.
class microbit.MicroBitDigitalPin
read_digital()
    Return 1 if the pin is high, and 0 if it's low.
write_digital(value)
    Set the pin to high if value is 1, or to low, if it is 0.

class microbit.MicroBitAnalogDigitalPin
read_analog()
    Read the voltage applied to the pin, and return it as an integer between 0 (meaning 0V) and 1023 (meaning 3.3V).
write_analog(value)
    Output a PWM signal on the pin, with the duty cycle proportional to the provided value. The value may be either an integer or a floating point number between 0 (0% duty cycle) and 1023 (100% duty).
set_analog_period(period)
    Set the period of the PWM signal being output to period in milliseconds. The minimum valid value is 1ms.
set_analog_period_microseconds(period)
    Set the period of the PWM signal being output to period in microseconds. The minimum valid value is 256µs.

class microbit.MicroBitTouchPin
is_touched()
    Return True if the pin is being touched with a finger, otherwise return False.

The pull mode for a pin is automatically configured when the pin changes to an input mode. Input modes are when you call read_analog / read_digital / is_touched. The default pull mode for these is, respectively, NO_PULL, PULL_DOWN, PULL_UP. Calling set_pull will configure the pin to be in read_digital mode with the given pull mode.
Note: Also note, the micro:bit has external weak (10M) pull-ups fitted on pins 0, 1 and 2 only, in order for the touch sensing to work. See the edge connector data sheet here:
3.3 Classes
3.3.1 Image
The Image class is used to create images that can be displayed easily on the device's LED matrix. Given an image object it's possible to display it via the display API:
display.show(Image.HAPPY)
class microbit.Image(string)
class microbit.Image(width=None, height=None, buffer=None)
    If string is used, it has to consist of digits 0-9 arranged into lines, describing the image, for example:
image = Image("90009:" "09090:" "00900:" "09090:" "90009")
will create a 5×5 image of an X. The end of a line is indicated by a colon. It's also possible to use a newline (\n) to indicate the end of a line like this:
image = Image("90009\n" "09090\n" "00900\n" "09090\n" "90009")
The other form creates an empty image with width columns and height rows. Optionally buffer can be an array of width × height integers in range 0-9 to initialize the image.
width()
    Return the number of columns in the image.
height()
    Return the number of rows in the image.
set_pixel(x, y, value)
    Set the brightness of the pixel at column x and row y to the value, which has to be between 0 (dark) and 9 (bright). This method will raise an exception when called on any of the built-in read-only images, like Image.HEART.
get_pixel(x, y)
    Return the brightness of pixel at column x and row y as an integer between 0 and 9.
shift_left(n)
    Return a new image created by shifting the picture left by n columns.
shift_right(n)
    Same as image.shift_left(-n).
shift_up(n)
    Return a new image created by shifting the picture up by n rows.
shift_down(n)
    Same as image.shift_up(-n).
crop(x, y, w, h)
    Return a new image by cropping the picture to a width of w and a height of h, starting with the pixel at column x and row y.
copy()
    Return an exact copy of the image.
invert()
    Return a new image by inverting the brightness of the pixels in the source image.
fill(value)
    Set the brightness of all the pixels in the image to the value, which has to be between 0 (dark) and 9 (bright). This method will raise an exception when called on any of the built-in read-only images, like Image.HEART.
blit(src, x, y, w, h, xdest=0, ydest=0)
    Copy the rectangle defined by x, y, w, h from the image src into this image at xdest, ydest. Areas in the source rectangle, but outside the source image are treated as having a value of 0.

    shift_left(), shift_right(), shift_up(), shift_down() and crop() are all implemented by using blit(). For example, img.crop(x, y, w, h) can be implemented as:

    def crop(self, x, y, w, h):
        res = Image(w, h)
        res.blit(self, x, y, w, h)
        return res
The Image class also has the following built-in instances of itself included as its attributes (the attribute names indicatewhat the image represents):
• Image.HEART
• Image.HEART_SMALL
• Image.HAPPY
• Image.SMILE
• Image.SAD
• Image.CONFUSED
• Image.ANGRY
• Image.ASLEEP
• Image.SURPRISED
• Image.SILLY
• Image.FABULOUS
• Image.MEH
• Image.YES
• Image.NO
• Image.CLOCK12, Image.CLOCK11, Image.CLOCK10, Image.CLOCK9, Image.CLOCK8, Image.CLOCK7, Image.CLOCK6, Image.CLOCK5, Image.CLOCK4, Image.CLOCK3, Image.CLOCK2,
• Image.DUCK
• Image.HOUSE
• Image.TORTOISE
• Image.BUTTERFLY
• Image.STICKFIGURE
• Image.GHOST
• Image.SWORD
• Image.GIRAFFE
• Image.SKULL
• Image.UMBRELLA
• Image.SNAKE

Finally, related collections of images have been grouped together:
• Image.ALL_CLOCKS
• Image.ALL_ARROWS
Operations
repr(image)
str(image)
image1 + image2
Create a new image by adding the brightness values from the two images for each pixel.
image * n

Create a new image by multiplying the brightness of each pixel by n.
3.4 Modules
3.4.1 Display
This module controls the 5×5 LED display on the front of your board. It can be used to display images, animationsand even text.
Functions
microbit.display.get_pixel(x, y)
    Return the brightness of the LED at column x and row y as an integer between 0 (off) and 9 (bright).
microbit.display.set_pixel(x, y, value)
    Set the brightness of the LED at column x and row y to value, which has to be an integer between 0 and 9.
microbit.display.clear()
    Set the brightness of all LEDs to 0 (off).
microbit.display.show(image)
    Display the image.
microbit.display.show(iterable, delay=400, *, wait=True, loop=False, clear=False)
    Display images or letters from the iterable in sequence, with delay milliseconds between them. If wait is True, this function will block until the animation is finished, otherwise the animation will happen in the background. If loop is True, the animation will repeat forever. If clear is True, the display will be cleared after the iterable has finished. Note that the wait, loop and clear arguments must be specified using their keyword.
Note: If using a generator as the iterable, then take care not to allocate any memory in the generator as allocating memory in an interrupt is prohibited and will raise a MemoryError.
To continuously scroll a string across the display, and do it in the background, you can use:

import microbit
microbit.display.scroll('Hello!', wait=False, loop=True)
3.4.2 UART
The uart module lets you talk to a device connected to your board using a serial interface.
Warning: Initializing the UART on external pins will cause the Python console on USB to become inaccessible, as it uses the same hardware. To bring the console back you must reinitialize the UART without passing anything for tx or rx (or passing None to these arguments). This means that calling uart.init(115200) is enough to restore the Python console.
The baudrate defines the speed of communication. Common baud rates include:

• 9600
• 14400
• 19200
• 28800
• 38400
• 57600
• 115200

The bits defines the size of bytes being transmitted, and the board only supports 8. The parity parameter defines how parity is checked, and it can be None, microbit.uart.ODD or microbit.uart.EVEN. The stop parameter tells the number of stop bits, and has to be 1 for this board.

If tx and rx are not specified then the internal USB-UART TX/RX pins are used which connect to the USB serial convertor on the micro:bit, thus connecting the UART to your PC. You can specify any other pins you want by passing the desired pin objects to the tx and rx parameters.
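With 8 data bits, no parity and 1 stop bit (the framing implied by the defaults above), each byte costs ten bit-times on the wire, so transfer time is easy to estimate. This is a rough host-side calculation, not part of the uart API:

```python
def byte_time_ms(baudrate, data_bits=8, parity_bits=0, stop_bits=1):
    """Milliseconds to transmit one byte: start bit + data + parity + stop."""
    bits_per_byte = 1 + data_bits + parity_bits + stop_bits
    return 1000.0 * bits_per_byte / baudrate

# At 9600 baud a byte takes about 1ms; at 115200 under a tenth of that.
for baud in (9600, 115200):
    print(baud, round(byte_time_ms(baud), 4), 'ms per byte')
```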
Note: When connecting the device, make sure you “cross” the wires – the TX pin on your board needs to be connected with the RX pin on the device, and the RX pin – with the TX pin on the device. Also make sure the ground pins of both devices are connected.
uart.any()
    Return True if any characters waiting, else False.
uart.read([nbytes])
    Read characters. If nbytes is specified then read at most that many bytes.
uart.readall()
    Read as much data as possible. Return value: a bytes object or None on timeout.
uart.readinto(buf[, nbytes])
    Read bytes into the buf. If nbytes is specified then read at most that many bytes. Otherwise, read at most len(buf) bytes.
    Return value: number of bytes read and stored into buf or None on timeout.
uart.readline()
    Read a line, ending in a newline character. Return value: the line read or None on timeout. The newline character is included in the returned bytes.
uart.write(buf)
    Write the buffer of bytes to the bus. Return value: number of bytes written or None on timeout.
3.4.3 SPI
The spi module lets you talk to a device connected to your board using a serial peripheral interface (SPI) bus. SPI uses a so-called master-slave architecture with a single master. You will need to specify the connections for three signals:

• SCLK : Serial Clock (output from master).
• MOSI : Master Output, Slave Input (output from master).
• MISO : Master Input, Slave Output (output from slave).

The sclk, mosi and miso arguments specify the pins to use for each type of signal.

spi.read(nbytes)
    Read at most nbytes. Returns what was read.
spi.write(buffer)
    Write the buffer of bytes to the bus.
spi.write_readinto(out, in)
    Write the out buffer to the bus and read any response into the in buffer. The length of the buffers should be the same. The buffers can be the same object.
3.4.4 I2C
The i2c module lets you communicate with devices connected to your board using the I2C bus protocol. There can be multiple slave devices connected at the same time, and each one has its own unique address, that is either fixed for the device or configured on it. Your board acts as the I2C master.

We use 7-bit addressing for devices because of the reasons stated here. This may be different to other micro:bit related solutions.

How exactly you should communicate with the devices, that is, what bytes to send and how to interpret the responses, depends on the device in question and should be described separately in that device's documentation.
Warning: Changing the I2C pins from defaults will make the accelerometer and compass stop working, as they are connected internally to those pins.
microbit.i2c.scan()
    Scan the bus for devices. Returns a list of 7-bit addresses corresponding to those devices that responded to the scan.
microbit.i2c.read(addr, n, repeat=False)
    Read n bytes from the device with 7-bit address addr. If repeat is True, no stop bit will be sent.
microbit.i2c.write(addr, buf, repeat=False)
    Write bytes from buf to the device with 7-bit address addr. If repeat is True, no stop bit will be sent.
Connecting
You should connect the device's SCL pin to micro:bit pin 19, and the device's SDA pin to micro:bit pin 20. You also must connect the device's ground to the micro:bit ground (pin GND). You may need to power the device using an external power supply or the micro:bit.

There are internal pull-up resistors on the I2C lines of the board, but with particularly long wires or large number of devices you may need to add additional pull-up resistors, to ensure noise-free communication.
3.4.5 Accelerometer
This object gives you access to the on-board accelerometer. The accelerometer also provides convenience functions for detecting gestures. The recognised gestures are: up, down, left, right, face up, face down, freefall, 3g, 6g, 8g, shake.
microbit.accelerometer.get_x()
    Get the acceleration measurement in the x axis, as a positive or negative integer, depending on the direction.
microbit.accelerometer.get_y()
    Get the acceleration measurement in the y axis, as a positive or negative integer, depending on the direction.
microbit.accelerometer.get_z()
    Get the acceleration measurement in the z axis, as a positive or negative integer, depending on the direction.
microbit.accelerometer.get_values()
    Get the acceleration measurements in all axes at once, as a three-element tuple of integers ordered as X, Y, Z.
microbit.accelerometer.current_gesture()
    Return the name of the current gesture.
Note: MicroPython understands the following gesture names: "up", "down", "left", "right", "face up","face down", "freefall", "3g", "6g", "8g", "shake". Gestures are always represented as strings.
microbit.accelerometer.is_gesture(name)
    Return True or False to indicate if the named gesture is currently active.
microbit.accelerometer.was_gesture(name)
    Return True or False to indicate if the named gesture was active since the last call.
microbit.accelerometer.get_gestures()
    Return a tuple of the gesture history. The most recent is listed last. Also clears the gesture history before returning.
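The three per-axis readings returned by get_values() can be combined into an overall magnitude with Pythagoras. This is a host-side illustration of the arithmetic only; the gesture detection itself lives in the firmware:

```python
import math

def magnitude(x, y, z):
    """Overall acceleration magnitude from the three axis readings."""
    return math.sqrt(x * x + y * y + z * z)

# With only one axis non-zero the magnitude is just that axis reading.
print(round(magnitude(0, 0, -1024)))  # 1024
# A classic 3-4-5 triangle, scaled up:
print(round(magnitude(300, 400, 0)))  # 500
```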
Examples
A fortune telling magic 8-ball. Ask a question then shake the device for an answer.

# Magic 8 ball by Nicholas Tollervey. February 2016.
#
# Ask a question then shake.
#
# This program has been placed into the public domain.
while True:
    display.show('8')
    if accelerometer.was_gesture('shake'):
        display.clear()
        sleep(1000)
        display.scroll(random.choice(answers))
    sleep(10)
import microbit as m
import random
p = m.display.show
min_x = -1024
max_x = 1024
range_x = max_x - min_x

wall_min_speed = 400
player_min_speed = 200

wall_max_speed = 100
player_max_speed = 50
speed_max = 12
while True:
    i = m.Image('00000:'*5)
    s = i.set_pixel

    player_x = 2

    wall_y = -1
    hole = 0

    score = 0
    handled_this_wall = False

    wall_speed = wall_min_speed
    player_speed = player_min_speed

    wall_next = 0
    player_next = 0
    while True:
        t = m.running_time()
        player_update = t >= player_next
        wall_update = t >= wall_next
        if not (player_update or wall_update):
            next_event = min(wall_next, player_next)
            delta = next_event - t
            m.sleep(delta)
            continue

        if wall_update:
            # calculate new speeds
            speed = min(score, speed_max)
            wall_speed = wall_min_speed + int(
                (wall_max_speed - wall_min_speed) * speed / speed_max)
            wall_next = t + wall_speed
            if wall_y < 5:
                # erase old wall
                use_wall_y = max(wall_y, 0)
                for wall_x in range(5):
                    if wall_x != hole:
                        s(wall_x, use_wall_y, 0)

        wall_reached_player = (wall_y == 4)
        if player_update:
            player_next = t + player_speed
            # find new x coord
            x = m.accelerometer.get_x()
            x = min(max(min_x, x), max_x)
            # print("x accel", x)
            s(player_x, 4, 0)  # turn off old pixel
            x = ((x - min_x) / range_x) * 5
            x = min(max(0, x), 4)
            x = int(x + 0.5)
            # print("have", position, "want", x)
            if not handled_this_wall:
                if player_x < x:
                    player_x += 1
                elif player_x > x:
                    player_x -= 1
            # print("new", position)
            # print()

        if wall_update:
            # update wall position
            wall_y += 1
            if wall_y == 7:
                wall_y = -1
                hole = random.randrange(5)
                handled_this_wall = False
            if wall_y < 5:
                # draw new wall
                use_wall_y = max(wall_y, 0)
                for wall_x in range(5):
                    if wall_x != hole:
                        s(wall_x, use_wall_y, 6)

        if player_update:
            s(player_x, 4, 9)  # turn on new pixel

        p(i)

    p(i.SAD)
    m.sleep(1000)
    m.display.scroll("Score:" + str(score))

    while True:
        if (m.button_a.is_pressed() and m.button_b.is_pressed()):
            break
        m.sleep(100)
3.4.6 Compass
This module lets you access the built-in electronic compass. Before using, the compass should be calibrated, otherwisethe readings may be wrong.
Warning: Calibrating the compass will cause your program to pause until calibration is complete. Calibration consists of a little game to draw a circle on the LED display by rotating the device.
microbit.compass.calibrate()
    Starts the calibration process. An instructive message will be scrolled to the user after which they will need to rotate the device in order to draw a circle on the LED display.
microbit.compass.is_calibrated()
    Returns True if the compass has been successfully calibrated, and returns False otherwise.
microbit.compass.clear_calibration()
    Undoes the calibration, making the compass uncalibrated again.
microbit.compass.get_x()
    Gives the reading of the magnetic force on the x axis, as a positive or negative integer, depending on the direction of the force.
microbit.compass.get_y()
    Gives the reading of the magnetic force on the y axis, as a positive or negative integer, depending on the direction of the force.
microbit.compass.get_z()
    Gives the reading of the magnetic force on the z axis, as a positive or negative integer, depending on the direction of the force.
microbit.compass.heading()
    Gives the compass heading, calculated from the above readings, as an integer in the range from 0 to 360, representing the angle in degrees, clockwise, with north as 0.
microbit.compass.get_field_strength()
    Returns an integer indication of the magnitude of the magnetic field around the device.
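The heading value can be understood as the angle of the horizontal field components. A simplified sketch of that idea follows; the real firmware also uses the calibration data (and the z axis), so this is an illustration of the arithmetic, not the actual algorithm, and `heading_from_xy` is our own hypothetical helper:

```python
import math

def heading_from_xy(x, y):
    """Angle in degrees (0-359) of the horizontal magnetic-field
    components. Simplified illustration of the heading idea only."""
    angle = math.degrees(math.atan2(y, x))
    return int(angle) % 360

print(heading_from_xy(0, 100))  # 90
print(heading_from_xy(-50, 0))  # 180
```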
""" compass.py ~~~~~~~~~~
Creates a compass.
The user will need to calibrate the compass first. The compass uses the built-in clock images to display the position of the needle.
"""from microbit import *
# Start calibratingcompass.calibrate()
Bluetooth
While the BBC micro:bit has hardware capable of allowing the device to work as a Bluetooth Low Energy (BLE) device, it only has 16k of RAM. The BLE stack alone takes up 12k RAM which means there's not enough room to run MicroPython.

Future versions of the device may come with 32k RAM which would be sufficient. However, until such time it's highly unlikely MicroPython will support BLE.
Note: MicroPython uses the radio hardware with the radio module. This allows users to create simple yet effective wireless networks of micro:bit devices.

Furthermore, the protocol used in the radio module is a lot simpler than BLE, making it far easier to use in an educational context.
CHAPTER 5
It is useful to store data in a persistent manner so that it remains intact between restarts of the device. On traditional computers this is often achieved by a file system consisting of named files that hold raw data, and named directories that contain files. Python supports the various operations needed to work with such file systems.

However, since the micro:bit is a limited device in terms of both hardware and storage capacity MicroPython provides a small subset of the functions needed to persist data on the device. Because of memory constraints there is approximately 30k of storage available on the file system.
MicroPython on the micro:bit provides a flat file system; i.e. there is no notion of a directory hierarchy, the file system is just a list of named files. Reading and writing a file is achieved via the standard Python open function and the resulting file-like object (representing the file) of types TextIO or BytesIO. Operations for working with files on the file system (for example, listing or deleting files) are contained within the os module.

If a file ends in the .py file extension then it can be imported. For example, a file named hello.py can be imported like this: import hello.

An example session in the MicroPython REPL may look something like this:
>>> with open('hello.py', 'w') as my_file:
...     my_file.write("print('Hello')")
...
>>> import os
>>> os.listdir()
['hello.py']
>>> os.remove('hello.py')
>>> os.listdir()
[]
open(filename, mode='r')

class TextIO
class BytesIO
    Instances of these classes represent files and are returned by the open function described above.

    close()
        Flush and close the file. This method has no effect if the file is already closed. Once the file is closed, any operation on the file (e.g. reading or writing) will raise an exception.
    name()
        Returns the name of the file the object represents. This will be the same as the filename argument passed into the call to the open function that instantiated the object.
    read(size)
        Read and return at most size characters as a single string or size bytes from the file. As a convenience, if size is unspecified or -1, all the data contained in the file is returned. Fewer than size characters or bytes may be returned if there are fewer than size characters or bytes remaining to be read from the file. If 0 characters or bytes are returned, and size was not 0, this indicates end of file. A MemoryError exception will occur if size is larger than the available RAM.
    readinto(buf, n=-1)
        Read characters or bytes into the buffer buf. If n is supplied, read n number of bytes or characters into the buffer buf.
    readline(size)
        Read and return one line from the file. If size is specified, at most size characters will be read. The line terminator is always '\n' for strings or b'\n' for bytes.
    writable()
        Return True if the file supports writing. If False, write() will raise OSError.
    write(buf)
        Write the string or bytes buf to the file and return the number of characters or bytes written.
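Because the file-like objects follow the standard Python open protocol described above, the basic round trip looks the same as on an ordinary PC, and this snippet also runs under regular Python (the filename matches the earlier hello.py example):

```python
import os

# Write a small file, then read it back with the standard open() calls.
with open('hello.py', 'w') as my_file:
    my_file.write("print('Hello')")

with open('hello.py', 'r') as my_file:
    content = my_file.read()

print(content)                      # print('Hello')
print('hello.py' in os.listdir())   # True

os.remove('hello.py')               # tidy up
print('hello.py' in os.listdir())   # False
```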
Music
This is the music module. You can use it to play simple tunes, provided that you connect a speaker to your board. By default the music module expects the speaker to be connected via pin 0:
For example, A1:4 refers to the note "A" in octave 1 that lasts for four ticks (a tick is an arbitrary length of time defined by the tempo functions described below). Notes are put together in a list to make a melody; the opening of Beethoven's 5th Symphony, for example, states:
['r4:2', 'g', 'g', 'g', 'eb:8', 'r:2', 'f', 'f', 'f', 'd:8']
The definition and scope of an octave conforms to the table listed on this page about scientific pitch notation. For example, middle "C" is 'c4' and concert "A" (440) is 'a4'. Octaves start on the note "C".
6.2 Functions
music.set_tempo(ticks=4, bpm=120)
    • music.set_tempo() - reset the tempo to default of ticks = 4, bpm = 120
    • music.set_tempo(ticks=8) - change the "definition" of a beat
    • music.set_tempo(bpm=180) - just change the tempo

    To work out the length of a tick in milliseconds is very simple arithmetic: 60000/bpm/ticks_per_beat. For the default values that's 60000/120/4 = 125 milliseconds, or 1 beat = 500 milliseconds.

music.get_tempo()
    Gets the current tempo as a tuple of integers: (ticks, bpm).

music.play(music, pin=microbit.pin0, wait=True, loop=False)
    Plays music containing the musical DSL defined above.

    If music is a string it is expected to be a single note such as, 'c1:4'.

    If music is specified as a list of notes (as defined in the section on the musical DSL, above) then they are played one after the other to perform a melody.

    In both cases, the duration and octave values are reset to their defaults before the music (whatever it may be) is played.

    An optional argument to specify the output pin can be used to override the default of microbit.pin0.

    If wait is set to True, this function is blocking. If loop is set to True, the tune repeats until stop is called (see below) or the blocking call is interrupted.
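The tick-length arithmetic above is easy to check in ordinary Python; the helper name here is illustrative, not part of the music API:

```python
def tick_length_ms(ticks_per_beat=4, bpm=120):
    """Length of one tick in milliseconds: 60000 / bpm / ticks_per_beat."""
    return 60000 / bpm / ticks_per_beat

print(tick_length_ms())         # 125.0 ms per tick with the defaults
print(tick_length_ms() * 4)     # 500.0 ms per beat (4 ticks per beat)
print(tick_length_ms(bpm=180))  # raising bpm shortens each tick
```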
For the purposes of education and entertainment, the module contains several example tunes that are expressed as Python lists. They can be used like this:
All the tunes are either out of copyright, composed by Nicholas H. Tollervey and released to the public domain, or have an unknown composer and are covered by a fair (educational) use provision.

They are:

• DADADADUM - the opening to Beethoven's 5th Symphony in C minor.
• ENTERTAINER - the opening fragment of Scott Joplin's Ragtime classic "The Entertainer".
• PRELUDE - the opening of the first Prelude in C Major of J.S. Bach's 48 Preludes and Fugues.
• ODE - the "Ode to Joy" theme from Beethoven's 9th Symphony in D minor.
• NYAN - the Nyan Cat theme (). The composer is unknown. This is fair use for educational porpoises (as they say in New York).
• RINGTONE - something that sounds like a mobile phone ringtone. To be used to indicate an incoming message.
• FUNK - a funky bass line for secret agents and criminal masterminds.
• BLUES - a boogie-woogie 12-bar blues walking bass.
• BIRTHDAY - "Happy Birthday to You..." For copyright status see: world-us-canada-34332853
• WEDDING - the bridal chorus from Wagner's opera "Lohengrin".
• FUNERAL - the "funeral march" otherwise known as Frédéric Chopin's Piano Sonata No. 2 in B-flat minor, Op. 35.
• PUNCHLINE - a fun fragment that signifies a joke has been made.
• PYTHON - John Philip Sousa's march "Liberty Bell" aka, the theme for "Monty Python's Flying Circus" (after which the Python programming language is named).
6.2.2 Example
""" music.py ~~~~~~~~
# play Prelude in C.
notes = [
    'c4:1', 'e', 'g', 'c5', 'e5', 'g4', 'c5', 'e5', 'c4', 'e', 'g', 'c5', 'e5', 'g4', 'c5', 'e5',
    'c4', 'd', 'a', 'd5', 'f5', 'a4', 'd5', 'f5', 'c4', 'd', 'a', 'd5', 'f5', 'a4', 'd5', 'f5',
    'b3', 'd4', 'g', 'd5', 'f5', 'g4', 'd5', 'f5', 'b3', 'd4', 'g', 'd5', 'f5', 'g4', 'd5', 'f5',
    'c4', 'e', 'g', 'c5', 'e5', 'g4', 'c5', 'e5', 'c4', 'e', 'g', 'c5', 'e5', 'g4', 'c5', 'e5',
    'c4', 'e', 'a', 'e5', 'a5', 'a4', 'e5', 'a5', 'c4', 'e', 'a', 'e5', 'a5', 'a4', 'e5', 'a5',
    'c4', 'd', 'f#', 'a', 'd5', 'f#4', 'a', 'd5', 'c4', 'd', 'f#', 'a', 'd5', 'f#4', 'a', 'd5',
    'b3', 'd4', 'g', 'd5', 'g5', 'g4', 'd5', 'g5', 'b3', 'd4', 'g', 'd5', 'g5', 'g4', 'd5', 'g5',
    'b3', 'c4', 'e', 'g', 'c5', 'e4', 'g', 'c5', 'b3', 'c4', 'e', 'g', 'c5', 'e4', 'g', 'c5',
    'a3', 'c4', 'e', 'g', 'c5', 'e4', 'g', 'c5', 'a3', 'c4', 'e', 'g', 'c5', 'e4', 'g', 'c5',
    'd3', 'a', 'd4', 'f#', 'c5', 'd4', 'f#', 'c5', 'd3', 'a', 'd4', 'f#', 'c5', 'd4', 'f#', 'c5',
    'g3', 'b', 'd4', 'g', 'b', 'd', 'g', 'b', 'g3', 'b3', 'd4', 'g', 'b', 'd', 'g', 'b'
]

music.play(notes)
NeoPixel
NeoPixels are fun strips of multi-coloured programmable LEDs. This module contains everything to plug them into a micro:bit and create funky displays, art and games such as the demo shown below.
To connect a strip of neopixels you'll need to attach the micro:bit as shown below (assuming you want to drive the pixels from pin 0 - you can connect neopixels to pins 1 and 2 too). The label on the crocodile clip tells you where to attach the other end on the neopixel strip.
7.1 Classes
class neopixel.NeoPixel(pin, n)
    Initialise a new strip of n number of neopixel LEDs controlled via pin pin. Each pixel is addressed by a position (starting from 0). Neopixels are given RGB (red, green, blue) values between 0-255 as a tuple. For example, (255,255,255) is white.

    clear()
        Clear all the pixels.

    show()
        Show the pixels. Must be called for any updates to become visible.
7.2 Operations
Writing the colour doesn't update the display (use show() for that).

np[0] = (255, 0, 128)  # first element
np[-1] = (0, 255, 0)   # last element
np.show()              # only now will the updated value be shown
Interact with Neopixels as if they were a list of tuples. Each tuple represents the RGB (red, green and blue) mix of colours for a specific pixel. The RGB values can range between 0 and 255.

For example, initialise a strip of 8 neopixels connected to pin0 like this:

import neopixel
np = neopixel.NeoPixel(pin0, 8)
Set pixels by indexing them (like with a Python list). For instance, to set the first pixel to full brightness red, you would use:

np[0] = (255, 0, 0)
Note: If you're not seeing anything change on your Neopixel strip, make sure you have show() at least somewhere, otherwise your updates won't be shown.
7.4 Example
""" neopixel_random.py
"""from microbit import *import neopixelfrom random import randint
np = neopixel.NeoPixel(pin0, 8)

while True:
    # Iterate over each LED in the strip
    for pixel_id in range(len(np)):
        red = randint(0, 60)
        green = randint(0, 60)
        blue = randint(0, 60)
        # Assign the current LED a random red, green and blue value between 0 and 60
        np[pixel_id] = (red, green, blue)
    np.show()
The os Module
MicroPython contains an os module based upon the os module in the Python standard library. It's used for accessing what would traditionally be termed as operating system dependent functionality. Since there is no operating system in MicroPython the module provides functions relating to the management of the simple on-device persistent file system and information about the current system.

To access this module you need to:
import os
8.1 Functions
os.listdir()
    Returns a list of the names of all the files contained within the local persistent on-device file system.

os.remove(filename)
    Removes (deletes) the file named in the argument filename. If the file does not exist an OSError exception will occur.

os.size(filename)
    Returns the size, in bytes, of the file named in the argument filename. If the file does not exist an OSError exception will occur.

os.uname()
    Returns information identifying the current operating system. The return value is an object with five attributes:

    • sysname - operating system name
    • nodename - name of machine on network (implementation-defined)
    • release - operating system release
    • version - operating system version
Note: There is no underlying operating system in MicroPython. As a result the information returned by the uname function is mostly useful for versioning details.
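The listing and removal calls behave like their desktop-Python counterparts, so the REPL session from the file system chapter can be sketched on an ordinary computer. Note the differences: this sketch uses a scratch directory, whereas on the micro:bit os.listdir() takes no argument because the file system is flat, and os.size() is micro:bit-specific (approximated here with os.stat):

```python
import os
import tempfile

# Work in a scratch directory so no real files are touched.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "hello.py")
with open(path, "w") as f:
    f.write("print('Hello')\n")

print(os.listdir(workdir))    # ['hello.py']
print(os.stat(path).st_size)  # size in bytes (os.size() on the micro:bit)
os.remove(path)
print(os.listdir(workdir))    # []
```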
Radio
The radio module allows devices to work together via simple wireless networks.

The radio module is conceptually very simple:

• Broadcast messages are of a certain configurable length (up to 251 bytes).
• Messages received are read from a queue of configurable size (the larger the queue the more RAM is used). If the queue is full, new messages are ignored. Reading a message removes it from the queue.
• Messages are broadcast and received on a preselected channel (numbered 0-83).
• Broadcasts are at a certain level of power - more power means more range.
• Messages are filtered by address (like a house number) and group (like a named recipient at the specified address).
• The rate of throughput can be one of three pre-determined settings.
• Send and receive bytes to work with arbitrary data.
• Use receive_full to obtain full details about an incoming message: the data, receiving signal strength, and a microsecond timestamp when the message arrived.
• As a convenience for children, it's easy to send and receive messages as strings.
• The default configuration is both sensible and compatible with other platforms that target the BBC micro:bit.

To access this module you need to:
import radio
9.1 Constants

radio.RATE_250KBIT
    Constant used to indicate a throughput of 256 Kbit a second.
radio.RATE_1MBIT
    Constant used to indicate a throughput of 1 MBit a second.

radio.RATE_2MBIT
    Constant used to indicate a throughput of 2 MBit a second.
9.2 Functions

radio.on()
    Turns the radio on. This needs to be explicitly called since the radio draws power and takes up memory that you may otherwise need.

radio.off()
    Turns off the radio, thus saving power and memory.

radio.config(**kwargs)
    Configures the radio's keyword-based settings, for example the address and group used when filtering messages, and the data rate, which can be one of: RATE_250KBIT, RATE_1MBIT or RATE_2MBIT. If config is not called then the defaults described above are assumed.

radio.reset()
    Reset the settings to their default values (as listed in the documentation for the config function above).
Note: None of the following send or receive methods will work until the radio is turned on.
radio.send_bytes(message)
    Sends a message containing bytes.
radio.receive_bytes()
    Receive the next incoming message on the message queue. Returns None if there are no pending messages. Messages are returned as bytes.

radio.receive_bytes_into(buffer)
    Receive the next incoming message on the message queue. Copies the message into buffer, trimming the end of the message if necessary. Returns None if there are no pending messages, otherwise it returns the length of the message (which might be more than the length of the buffer).

radio.send(message)
    Sends a message string. This is the equivalent of send_bytes(bytes(message, 'utf8')) but with b'\x01\x00\x01' prepended to the front (to make it compatible with other platforms that target the micro:bit).

radio.receive()
    Works in exactly the same way as receive_bytes but returns the message as a string. A ValueError exception is raised if conversion to string fails.

radio.receive_full()
    Returns a tuple containing three values representing the next incoming message on the message queue. If there are no pending messages then None is returned. The three values in the tuple represent:

    • the next incoming message on the message queue as bytes.
    • the RSSI (signal strength): a value between 0 (strongest) and -255 (weakest) as measured in dBm.
    • a microsecond timestamp of when the message arrived.
9.2.1 Examples
# Event loop.
while True:
Random Number Generation
This module is based upon the random module in the Python standard library. It contains functions for generating random behaviour.

To access this module you need to:
import random
10.1 Functions
random.getrandbits(n) Returns an integer with n random bits.
Warning: Because the underlying generator function returns at most 30 bits, n may only be a value between 1-30 (inclusive).
random.seed(n)
    Initialize the random number generator with a known integer n. This will give you reproducibly deterministic randomness from a given starting state (n).

random.randint(a, b)
    Return a random integer N such that a <= N <= b. Alias for randrange(a, b+1).

random.randrange(stop)
    Return a randomly selected integer between zero and up to (but not including) stop.

random.randrange(start, stop)
    Return a randomly selected integer from range(start, stop).

random.randrange(start, stop, step)
    Return a randomly selected element from range(start, stop, step).
random.choice(seq)
    Return a random element from the non-empty sequence seq. If seq is empty, raises IndexError.

random.random()
    Return the next random floating point number in the range [0.0, 1.0).

random.uniform(a, b)
    Return a random floating point number N such that a <= N <= b for a <= b and b <= N <= a for b < a.
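These functions mirror CPython's random module, so their behaviour (in particular the reproducibility promised by seed()) can be tried on a desktop interpreter:

```python
import random

random.seed(123)              # fix the starting state
first = random.randint(1, 6)  # 1 <= N <= 6, like a die roll
random.seed(123)              # same seed...
second = random.randint(1, 6) # ...same "random" value

print(first == second)                    # True: deterministic given the seed
print(0 <= random.random() < 1.0)         # True: random() is in [0.0, 1.0)
print(random.randrange(10) in range(10))  # True: stop is excluded
```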
Speech
This module makes the micro:bit talk, sing and make other speech-like sounds provided that you connect a speaker to your board as shown below:
Note: This work is based upon the amazing reverse engineering efforts of Sebastian Macke based upon an old text-to-speech (TTS) program called SAM (Software Automated Mouth) originally released in 1982 for the Commodore 64. The result is a small C library that we have adopted and adapted for the micro:bit. You can find out more from his homepage. Much of the information in this document was gleaned from the original user's manual which can be found here.
The speech synthesiser can produce around 2.5 seconds worth of sound from up to 255 characters of textual input.

To access this module you need to:

import speech
11.1 Functions
speech.translate(words)
    Given English words in the string words, return a string containing a best guess at the appropriate phonemes to pronounce. The output is generated from this text to phoneme translation table.

    This function should be used to generate a first approximation of phonemes that can be further hand-edited to improve accuracy, inflection and emphasis.

speech.pronounce(phonemes, *, pitch=64, speed=72, mouth=128, throat=128)
    Pronounce the phonemes in the string phonemes. See below for details of how to use phonemes to finely control the output of the speech synthesiser. Override the optional pitch, speed, mouth and throat settings to change the timbre (quality) of the voice.

speech.say(words, *, pitch=64, speed=72, mouth=128, throat=128)
    Say the English words in the string words. The result is semi-accurate for English. Override the optional pitch, speed, mouth and throat settings to change the timbre (quality) of the voice. This is a short-hand equivalent of: speech.pronounce(speech.translate(words))

speech.sing(phonemes, *, pitch=64, speed=72, mouth=128, throat=128)
    Sing the phonemes contained in the string phonemes. Changing the pitch and duration of the note is described below. Override the optional pitch, speed, mouth and throat settings to change the timbre (quality) of the voice.
11.2 Punctuation
Punctuation is used to alter the delivery of speech. The synthesiser understands four punctuation marks: hyphen, comma, full-stop and question mark.

The hyphen (-) marks clause boundaries by inserting a short pause in the speech.

The comma (,) marks phrase boundaries and inserts a pause of approximately double that of the hyphen.

The full-stop (.) and question mark (?) end sentences.

The full-stop inserts a pause and causes the pitch to fall.

The question mark also inserts a pause but causes the pitch to rise. This works well with yes/no questions such as, "are we home yet?" rather than more complex questions such as "why are we going home?". In the latter case, use a full-stop.
11.3 Timbre
The timbre of a sound is the quality of the sound. It's the difference between the voice of a DALEK and the voice of a human (for example). To control the timbre change the numeric settings of the pitch, speed, mouth and throat arguments.

The pitch (how high or low the voice sounds) and speed (how quickly the speech is delivered) settings are rather obvious and generally fall into the following categories:

Pitch:

• 0-20 impractical
• 20-30 very high
• 30-40 high
11.4 Phonemes
The say function makes it easy to produce speech - but often it's not accurate. To make sure the speech synthesiser pronounces things exactly how you'd like, you need to use phonemes: the smallest perceptually distinct units of sound that can be used to distinguish different words. Essentially, they are the building-block sounds of speech.

The pronounce function takes a string containing a simplified and readable version of the International Phonetic Alphabet and optional annotations to indicate inflection and emphasis.

The advantage of using phonemes is that you don't have to know how to spell! Rather, you only have to know how to say the word in order to spell it phonetically.
Note: The table contains the phoneme as characters, and an example word. The example words have the sound of the phoneme (in parentheses), but not necessarily the same letters.

Often overlooked: the symbol for the "H" sound is /H. A glottal stop is a forced stoppage of sound.
Here are some seldom used phoneme combinations (and suggested alternatives):
If you use anything other than the phonemes described above, a ValueError exception will be raised. Pass in the phonemes as a string like this:
speech.pronounce("/HEHLOW") # "Hello"
The phonemes are classified into two broad groups: vowels and consonants.

Vowels are further subdivided into simple vowels and diphthongs. Simple vowels don't change their sound as you say them whereas diphthongs start with one sound and end with another. For example, when you say the word "oil" the "oi" vowel starts with an "oh" sound but changes to an "ee" sound.

Consonants are also subdivided into two groups: voiced and unvoiced. Voiced consonants require the speaker to use their vocal cords to produce the sound. For example, consonants like "L", "N" and "Z" are voiced. Unvoiced consonants are produced by rushing air, such as "P", "T" and "SH".

Once you get used to it, the phoneme system is easy. To begin with some spellings may seem tricky (for example, "adventure" has a "CH" in it) but the rule is to write what you say, not what you spell. Experimentation is the best way to resolve problematic words.

It's also important that speech sounds natural and understandable. To help with improving the quality of spoken output it's often good to use the built-in stress system to add inflection or emphasis.

There are eight stress markers indicated by the numbers 1 - 8. Simply insert the required number after the vowel to be stressed. For example, the lack of expression of "/HEHLOW" is much improved (and friendlier) when spelled out "/HEH3LOW".

It's also possible to change the meaning of words through the way they are stressed. Consider the phrase "Why should I walk to the store?". It could be pronounced in several different ways:
Put simply, different stresses in the speech create a more expressive tone of voice.

They work by raising or lowering pitch and elongating the associated vowel sound depending on the number you give:

1. very emotional stress
2. very emphatic stress
3. rather strong stress
4. ordinary stress
5. tight stress
6. neutral (no pitch change) stress
7. pitch-dropping stress
8. extreme pitch-dropping stress
The smaller the number, the more extreme the emphasis will be. However, such stress markers will help pronounce difficult words correctly. For example, if a syllable is not enunciated sufficiently, put in a neutral stress marker.

It's also possible to elongate words with stress markers:
11.5 Singing
Annotations work by pre-pending a hash (#) sign and the pitch number in front of the phoneme. The pitch will remain the same until a new annotation is given. For example, make MicroPython sing a scale like this:

song = ''.join(solfa)
speech.sing(song, speed=100)
In order to sing a note for a certain duration extend the note by repeating vowel or voiced consonant phonemes (as demonstrated in the example above). Beware diphthongs - to extend them you need to break them into their component parts. For example, "OY" can be extended with "OHOHIYIYIY".

Experimentation, listening carefully and adjusting is the only sure way to work out how many times to repeat a phoneme so the note lasts for the desired duration.
11.7 Example
import speech
from microbit import sleep
song = ''.join(solfa)
# Sing the scale descending in pitch.
speech.sing(song, speed=100)
Installation
This section will help you set up the tools and programs needed for developing programs and firmware to flash to the BBC micro:bit using MicroPython.
12.1 Dependencies
• Windows
• OS X
• Linux
• Debian and Ubuntu
• Red Hat Fedora/CentOS
• Raspberry Pi
12.3.1 Windows
When installing Yotta, make sure you have these components ticked to install.

• python
• gcc
• cMake
• ninja
• Yotta
• git-scm
• mbed serial driver
12.3.2 OS X
12.3.3 Linux
These steps will cover the basic flavors of Linux and working with the micro:bit and MicroPython. See also the specific sections for Raspberry Pi, Debian/Ubuntu, and Red Hat Fedora/CentOS.
Raspberry Pi
Congratulations. You have installed your development environment and are ready to begin flashing firmware to themicro:bit.
Flashing Firmware
yt target bbc-microbit-classic-gcc-nosd
yt up
yt build

There is also a Makefile, which does some extra preprocessing needed only when you add new interned strings to qstrdefsport.h. The Makefile also puts the resulting firmware at build/firmware.hex, and includes some convenience targets.
tools/makecombinedhexlify
Installation Scenarios

• Windows
• OS X
• Linux
• Debian and Ubuntu
• Red Hat Fedora/CentOS
• Raspberry Pi
To access the REPL, you need to select a program to use for serial communication. Some common options are picocom and screen. You will need to install the program and understand the basics of connecting to a device.
The micro:bit will have a port identifier (tty, usb) that can be used by the computer for communicating. Before connecting to the micro:bit, we must determine the port identifier.
Depending on your operating system, environment, and serial communication program, the settings and commands will vary a bit. Here are some common settings for different systems (please suggest additions that might help others).

Settings

• Windows
• OS X
• Linux
• Debian and Ubuntu
• Red Hat Fedora/CentOS
• Raspberry Pi
Developer FAQ
Q: Where do I get a copy of the DAL?

A: Ask Nicholas Tollervey for details.
Contributing
16.1 Checklist
Let me first discuss how to construct the matrix of distances between a set of vectors $\{v_i\}$. The idea is, obviously, to use the fact that the p-distance between two vectors is given by the formula
d_p(v_1,v_2) = \|v_1-v_2\|_p = \left(\sum_k |v^k_1-v^k_2|^p\right)^{1/p}
For the Euclidean distance $p=2$, and we have
\|v\|^2 = (v,v). So the squared distance is nothing but
\|v_1-v_2\|^2 = (v_1-v_2,v_1-v_2)
This is good, then using the linearity of the scalar product, we obtain
\|v_1-v_2\|^2 = \|v_1\|^2 + \|v_2\|^2 - 2(v_1,v_2)

This expression can be computed with matrix multiplications. In Python you can do it using numpy as follows. First, construct the matrix of the positions, i.e. stack all 'size' vectors of length 'dimension' on top of each other:
import numpy

dimension = 2
size = 100
positions = numpy.random.uniform(0, 1, (size, dimension))

Here I have chosen uniformly distributed vectors, but you can use others of course.
Now, we construct the matrix
s_{ij} = \|v_i\|^2 + \|v_j\|^2

by repeating, reshaping and transposing the vector of the norms. This is as easy as this:

# construct the matrix s_ij = |v_i|**2 + |v_j|**2
norms = numpy.sum(positions**2., axis=1)
tmp = numpy.reshape(norms.repeat(size), (size, size))
sum_matrix = tmp + tmp.transpose()

'sum_matrix' is what you are looking for. The scalar product is even easier. Indeed the matrix of the products

x_{ij} = (v_i,v_j)

is just the multiplication of the vector matrix with its transpose (try it on a 2x2 example to see that it works). So you can do it easily by:
# construct the matrix x_ij = (v_i,v_j)
scalars = numpy.dot(positions,positions.transpose())
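Putting the two pieces together gives the full distance matrix. The helper name distance_matrix below is mine, not from the post, and the final square root guards against tiny negative values from floating point round-off:

```python
import numpy

def distance_matrix(positions):
    """Euclidean distances d_ij = |v_i - v_j| via |v_i|^2 + |v_j|^2 - 2 (v_i, v_j)."""
    size = positions.shape[0]
    norms = numpy.sum(positions**2., axis=1)
    tmp = numpy.reshape(norms.repeat(size), (size, size))
    sum_matrix = tmp + tmp.transpose()
    scalars = numpy.dot(positions, positions.transpose())
    squared = sum_matrix - 2. * scalars
    # Clip round-off negatives to zero before taking the square root.
    return numpy.sqrt(numpy.maximum(squared, 0.))

# Two points at (0, 0) and (3, 4): their distance is 5.
d = distance_matrix(numpy.array([[0., 0.], [3., 4.]]))
print(d)  # [[0. 5.]
          #  [5. 0.]]
```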
jGuru Forums
Posted By:
Rick_Bradshaw
Posted On:
Thursday, March 21, 2002 03:20 PM
I would like to host XML and XSL files with Tomcat as in
so that I and others can use valid URIs for our XML development (namespaces, schemas, XSLs too). I have tried adding XML files to
/ROOT and they are not found when I type in the associated URL.
order.html works, so how do I add XML as a valid type or otherwise make this work?
Re: Host an XML and XSL files with Tomcat
Posted By:
srinivas_murthy
Posted On:
Thursday, March 21, 2002 05:17 PM | http://www.jguru.com/forums/view.jsp?EID=807147 | CC-MAIN-2015-48 | refinedweb | 101 | 72.5 |
Type classes
Type classes are a powerful tool used in functional programming to enable ad-hoc polymorphism, more commonly known as overloading. Where many object-oriented languages leverage subtyping for polymorphic code, functional programming tends towards a combination of parametric polymorphism (think type parameters, like Java generics) and ad-hoc polymorphism.
Example: collapsing a list
The following code snippets show code that sums a list of integers, concatenates a list of strings, and unions a list of sets.
def sumInts(list: List[Int]): Int = list.foldRight(0)(_ + _) def concatStrings(list: List[String]): String = list.foldRight("")(_ ++ _) def unionSets[A](list: List[Set[A]]): Set[A] = list.foldRight(Set.empty[A])(_ union _)
All of these follow the same pattern: an initial value (0, empty string, empty set) and a combining function
(
+,
++,
union). We’d like to abstract over this so we can write the function once instead of once for every type
so we pull out the necessary pieces into an interface.
trait Monoid[A] { def empty: A def combine(x: A, y: A): A } // Implementation for Int val intAdditionMonoid: Monoid[Int] = new Monoid[Int] { def empty: Int = 0 def combine(x: Int, y: Int): Int = x + y }
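For instance, the string and set versions from the opening example fit the same interface; this is a sketch (the value names are mine, not from the text):

```scala
val stringConcatMonoid: Monoid[String] = new Monoid[String] {
  def empty: String = ""
  def combine(x: String, y: String): String = x ++ y
}

def setUnionMonoid[A]: Monoid[Set[A]] = new Monoid[Set[A]] {
  def empty: Set[A] = Set.empty[A]
  def combine(x: Set[A], y: Set[A]): Set[A] = x union y
}
```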
The name
Monoid is taken from abstract algebra which specifies precisely this kind of structure.
We can now write the functions above against this interface.
def combineAll[A](list: List[A], A: Monoid[A]): A = list.foldRight(A.empty)(A.combine)
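A quick check of the explicit-argument version (the value name is illustrative):

```scala
val sum = combineAll(List(1, 2, 3), intAdditionMonoid)
// sum == 6, i.e. List(1, 2, 3).foldRight(0)(_ + _)
```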
Type classes vs. subtyping
The definition above takes an actual monoid argument instead of doing the usual object-oriented practice of using subtype constraints.
// Subtyping def combineAll[A <: Monoid[A]](list: List[A]): A = ???
This has a subtle difference with the earlier explicit example. In order to seed the
foldRight with the empty value,
we need to get a hold of it given only the type
A. Taking
Monoid[A] as an argument gives us this by calling the
appropriate
empty method on it. With the subtype example, the
empty method would be on a value of type
Monoid[A] itself, which we are only getting from the
list argument. If
list is empty, we have no values to work
with and therefore can’t get the empty value. Not to mention the oddity of getting a constant value from a non-static
object.
For another motivating difference, consider the simple pair type.
final case class Pair[A, B](first: A, second: B)
Defining a
Monoid[Pair[A, B]] depends on the ability to define a
Monoid[A] and
Monoid[B], where the definition
is point-wise, i.e. the first element of the first pair combines with the first element of the second pair and the second element of the first pair combines with the second element of the second pair. With subtyping such a constraint would be encoded as something like
final case class Pair[A <: Monoid[A], B <: Monoid[B]](first: A, second: B) extends Monoid[Pair[A, B]] { def empty: Pair[A, B] = ??? def combine(x: Pair[A, B], y: Pair[A, B]): Pair[A, B] = ??? }
Not only is the type signature of Pair now messy, but it also forces all instances of Pair to have a Monoid instance, whereas Pair should be able to carry any types it wants; if those types happen to have a Monoid instance then so would it. We could try bubbling down the constraint into the methods themselves.
final case class Pair[A, B](first: A, second: B) extends Monoid[Pair[A, B]] {
  def empty(implicit eva: A <:< Monoid[A], evb: B <:< Monoid[B]): Pair[A, B] = ???

  def combine(x: Pair[A, B], y: Pair[A, B])(implicit eva: A <:< Monoid[A], evb: B <:< Monoid[B]): Pair[A, B] = ???
}
// <console>:15: error: class Pair needs to be abstract, since:
// it has 2 unimplemented members.
// /** As seen from class Pair, the missing signatures are as follows.
//  *  For convenience, these are usable as stub implementations.
//  */
//   def combine(x: Pair[A,B],y: Pair[A,B]): Pair[A,B] = ???
//   def empty: Pair[A,B] = ???
//
//        final case class Pair[A, B](first: A, second: B) extends Monoid[Pair[A, B]] {
//                         ^
But now these don't conform to the interface of Monoid due to the implicit constraints.
Implicit derivation
Note that a Monoid[Pair[A, B]] is derivable given Monoid[A] and Monoid[B]:
final case class Pair[A, B](first: A, second: B)

def deriveMonoidPair[A, B](A: Monoid[A], B: Monoid[B]): Monoid[Pair[A, B]] =
  new Monoid[Pair[A, B]] {
    def empty: Pair[A, B] = Pair(A.empty, B.empty)

    def combine(x: Pair[A, B], y: Pair[A, B]): Pair[A, B] =
      Pair(A.combine(x.first, y.first), B.combine(x.second, y.second))
  }
One of the most powerful features of type classes is the ability to do this kind of derivation automatically. We can do this through Scala’s implicit mechanism.
object Demo { // needed for tut, irrelevant to demonstration
  final case class Pair[A, B](first: A, second: B)

  object Pair {
    implicit def tuple2Instance[A, B](implicit A: Monoid[A], B: Monoid[B]): Monoid[Pair[A, B]] =
      new Monoid[Pair[A, B]] {
        def empty: Pair[A, B] = Pair(A.empty, B.empty)

        def combine(x: Pair[A, B], y: Pair[A, B]): Pair[A, B] =
          Pair(A.combine(x.first, y.first), B.combine(x.second, y.second))
      }
  }
}
We also change any functions that have a Monoid constraint on the type parameter to take the argument implicitly, and any instances of the type class to be implicit.
implicit val intAdditionMonoid: Monoid[Int] = new Monoid[Int] {
  def empty: Int = 0
  def combine(x: Int, y: Int): Int = x + y
}

def combineAll[A](list: List[A])(implicit A: Monoid[A]): A = list.foldRight(A.empty)(A.combine)
Now we can also combineAll a list of Pairs so long as Pair's type parameters themselves have Monoid instances.
implicit val stringMonoid: Monoid[String] = new Monoid[String] {
  def empty: String = ""
  def combine(x: String, y: String): String = x ++ y
}
import Demo.{Pair => Paired}
// import Demo.{Pair=>Paired}

combineAll(List(Paired(1, "hello"), Paired(2, " "), Paired(3, "world")))
// res2: Demo.Pair[Int,String] = Pair(6,hello world)
A note on syntax
In many cases, including the combineAll function above, the implicit arguments can be written with syntactic sugar.
def combineAll[A : Monoid](list: List[A]): A = ???
While nicer to read as a user, it comes at a cost for the implementer.
// Defined in the standard library, shown for illustration purposes
// Implicitly looks in implicit scope for a value of type `A` and just hands it back
def implicitly[A](implicit ev: A): A = ev

def combineAll[A : Monoid](list: List[A]): A =
  list.foldRight(implicitly[Monoid[A]].empty)(implicitly[Monoid[A]].combine)
For this reason, many libraries that provide type classes provide a utility method on the companion object of the type class, usually under the name apply, that skirts the need to call implicitly everywhere.
object Monoid {
  def apply[A : Monoid]: Monoid[A] = implicitly[Monoid[A]]
}

def combineAll[A : Monoid](list: List[A]): A =
  list.foldRight(Monoid[A].empty)(Monoid[A].combine)
Cats uses simulacrum for defining type classes, which will auto-generate such an apply method.
Laws
Conceptually, all type classes come with laws. These laws constrain implementations for a given type and can be exploited and used to reason about generic code.
For instance, the Monoid type class requires that combine be associative and empty be an identity element for combine. That means the following equalities should hold for any choice of x, y, and z.
combine(x, combine(y, z)) = combine(combine(x, y), z)
combine(x, id) = combine(id, x) = x
With these laws in place, functions parametrized over a Monoid can leverage them for, say, performance reasons. A function that collapses a List[A] into a single A can do so with foldLeft or foldRight since combine is assumed to be associative, or it can break apart the list into smaller lists and collapse in parallel, such as
val list = List(1, 2, 3, 4, 5)
val (left, right) = list.splitAt(2)

// Imagine the following two operations run in parallel
val sumLeft = combineAll(left)
// sumLeft: Int = 3

val sumRight = combineAll(right)
// sumRight: Int = 12

// Now gather the results
val result = Monoid[Int].combine(sumLeft, sumRight)
// result: Int = 15
Cats provides laws for type classes via the kernel-laws and laws modules, which makes law checking type class instances easy.
You can find out more about law testing here.
Type classes in cats
From cats-infographic by @tpolecat.
Incomplete type class instances in cats
Originally from @alexknvl | https://typelevel.org/cats/typeclasses.html | CC-MAIN-2019-13 | refinedweb | 1,330 | 52.49 |
Fragmented reading: below is a brief summary of basic Python syntax. Since I have prior Java development experience, some points are skipped; to learn Python in detail, I suggest the Runoob ("rookie") tutorial.
Learning the basic grammar of python
1. Mark coding method
# -*- coding: cp-1252 -*-
2. Input and output
print("hello")   # output
3. Identifier
Identifiers consist of letters, digits, and underscores, may not start with a digit, and are case sensitive.
4. Reserved words
['False', 'None', 'True', 'and', 'as', 'assert', 'break', 'class', 'continue', 'def', 'del', 'elif', 'else', 'except', 'finally', 'for', 'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'nonlocal', 'not', 'or', 'pass', 'raise', 'return', 'try', 'while', 'with', 'yield']
5. Notes
# Single-Line Comments ''' multiline comment ''' """ multiline comment """
6. Indent
python does not apply {} to represent data, but implements related operations by indentation
7. Multiline statements
Long statements can be continued onto the next line with a backslash (\):

total = str1 + \
        str2 + \
        str3 + \
        str4
8. Number type
There are four main number types:
- int (integers)
- bool (Boolean: True/False)
- float (e.g. 1.23, 3E-2)
- complex (e.g. 1+2j, 1.1+2.2j)
9. String
In Python, strings can be delimited with single quotes (') or double quotes (") interchangeably, and with triple quotes (''' or """) for multi-line strings.
Escape characters: \n, \t.
The raw-string prefix r disables escapes: r'abc\n' keeps the backslash and n literally.
There is no separate character type in Python; a single character is just a string of length 1.
Indexing method: indices start at 0 from the front and at -1 from the back.

a = 'abctt'
a[0:-1]      # from the start up to (not including) the last character: 'abct'
a[2:]        # from index 2 to the end: 'ctt'
a[2:5:2]     # from index 2 to 5 in steps of 2: 'ct'
a + "Hello"  # string concatenation
10. Blank lines
Blank lines separate functions and classes; they are not part of the syntax but improve readability.
11. Input
input() reads a line of text from standard input.
12. Multiple statements on the same line
For multiple statements in the same line, you can use; division
import sys; print("hello")
13. For multiple statements
A compound statement (if, while, def, class, ...) ends its header line with a colon (:); the indented lines that follow form the code group.
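As a minimal sketch, the colon introduces the header and the indented block beneath it is the code group:

```python
# The colon ends the header line; indentation delimits the group
x = 7
if x > 5:
    label = "big"        # these two indented lines form one group
    x -= 5
elif x > 0:
    label = "small"
else:
    label = "non-positive"
print(label, x)          # big 2
```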
14. Output
#For python, there are mainly newline output and non newline output print('hello') print('word') #No newline output print('hello',end=" ") print('word',end=" ")
15. Import package
- Import a whole module: import sys
- Import specific names: from sys import stdin
- Import everything: from sys import *
Basic data type
1. Assignment
# Chained assignment gives several names the same value
a = b = c = 1
# Multiple assignment unpacks several values at once
a, b, c = 1, 1.4, 'root'
2. Standard data type
There are six standard data types in Python: Number, String, List, Tuple, Set, and Dictionary. Number, String, and Tuple are immutable; List, Set, and Dictionary are mutable.
There are two main ways to check a value's type.
Method 1: type(), e.g. type(1) == int.
Method 2: isinstance(), e.g. isinstance(a, int).
Difference: type() does not consider a subclass instance to be of the parent type, while isinstance() does. Note that bool is a subclass of int, so True == 1 and
False == 0.
When creating a value, an object is also created. You can delete an object by del
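A short check of the type()/isinstance() difference, using throwaway classes and the fact that bool is a subclass of int:

```python
class A:
    pass

class B(A):
    pass

b = B()
print(type(b) == A)            # False: type() ignores inheritance
print(isinstance(b, A))        # True: isinstance() follows inheritance
print(True == 1, False == 0)   # True True: bool is a subclass of int
```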
2. List
Basic operations and characters of the list
list = ['a','b','c','d'] list[0:] list[1:5:2] # Unlike String, the data in the list can be modified
Common methods:
append(). pop () and other methods
3. Tuple
Note: a tuple's slots cannot be reassigned, but a mutable element stored inside a tuple can still be modified.

tup = (1, 2, 3, 4, 5, 6)
tup[1:4]  # slicing works the same as for lists, but items cannot be reassigned

Note: strings, tuples, and lists are all sequences, so many of their operations look alike.
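A quick sketch of tuple immutability versus a mutable element inside the tuple:

```python
tup = (1, 2, [3, 4])
try:
    tup[0] = 9             # reassigning a slot fails
except TypeError as e:
    print("error:", e)
tup[2].append(5)           # mutating the contained list is allowed
print(tup)                 # (1, 2, [3, 4, 5])
```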
4. Assembly
A set is an unordered collection of unique elements; creating a set automatically removes duplicate elements.
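For example, duplicates are dropped on creation and membership tests are fast:

```python
basket = {'apple', 'orange', 'apple', 'pear'}
print(len(basket))         # 3 - the duplicate 'apple' is dropped
print('apple' in basket)   # True - fast membership test
```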
5. Dictionary
A dictionary is the counterpart of Java's Map: it stores data as key-value pairs.
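A minimal sketch of the basic dictionary operations (the names here are made up for illustration):

```python
scores = {'alice': 90, 'bob': 85}   # key -> value, like a Java Map
scores['carol'] = 77                # insert a new pair
scores['bob'] = 88                  # update an existing key
print(scores.get('dave', 0))        # 0 - default when the key is absent
print(sorted(scores.keys()))        # ['alice', 'bob', 'carol']
```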
6. Character conversion
python operators
The basic operators of python are used here. The operators here are the same as those in java
# Define number var =1 var2 = 2 #Delete related references del var del var,var2 # Convert numeric type var3= 1.0 var4= int(var3)
2. String
The functions described here are the same as those described above, so the functions related to strings are used here
#!/usr/bin/python3 a = "Hello" b = "Python" print("a + b Output results:", a + b) print("a * 2 Output results:", a * 2) print("a[1] Output results:", a[1]) print("a[1:4] Output results:", a[1:4]) if( "H" in a) : print("H In variable a in") else : print("H Not in variable a in") if( "M" not in a) : print("M Not in variable a in") else : print("M In variable a in") print (r'\n') print (R'\n')
String output
String formatting with the % operator:

print("I am %s, %d years old this year" % ('zhao', 10))
f-strings (added later, in Python 3.6) embed expressions directly in a string literal using the prefix f and braces: f'{expression}'.
>>> name = 'Runoob'
>>> f'Hello {name}'   # replace a variable
'Hello Runoob'
>>> f'{1+2}'          # use an expression
'3'
>>> w = {'name': 'Runoob', 'url': ''}
>>> f'{w["name"]}: {w["url"]}'
'Runoob:'
In addition, there are many operations such as characters and functions for strings. Because there are too many, there is not much to show here
list
There is no need to elaborate on the operation of the list here. It is mainly the same as the data, which can delete and store data. The data here is shown below
# Create list list1 = [1,2,3,4,5] # Query list list1[3] #Add element list1.append(6) # Delete list element del list1[5] # Nested list a = [1,2,3] b = [4,5,6] c = [a,b]
Common functions
tuple
The basic operation of tuple and list here is the same, but the value cannot be changed
Dictionaries
Delete a data item
def1 = {1: 'w', 2: 'e', 3: 't'}
# Delete a specific entry (the key is the int 1, not the string '1')
del def1[1]
# Empty the dictionary
def1.clear()
# Delete the dictionary itself
del def1
Related functions and methods
aggregate
# A set is created with {...} or set(); note that {} creates an empty dict, not a set
a = set('hello')   # {'h', 'e', 'l', 'o'} - duplicates removed
b = set('world')
# Set operators
a - b    # elements in a but not in b
a | b    # elements in a or in b
a & b    # elements in both a and b
a ^ b    # elements in exactly one of a and b
# Add an element
a.add('t')
# Remove an element (KeyError if absent)
a.remove('t')
Basic process control statement
Python's basic flow-control statements are if/elif/else and the for and while loops; they behave much as in Java, so they are not described in detail here.
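A compact sketch covering for, while, and the continue statement together:

```python
# for iterates over a sequence; while repeats under a condition
total = 0
for n in range(1, 6):
    if n % 2 == 0:
        continue           # skip even numbers
    total += n             # 1 + 3 + 5
i = 0
while i < 3:
    i += 1
print(total, i)            # 9 3
```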
function
- Required (positional) parameters
- Keyword arguments: passed by parameter name, so argument order does not matter
- Default parameters: given a value in the def line, used when the caller omits the argument
- Variable-length parameters: *args collects extra positional arguments into a tuple (**kwargs collects extra keyword arguments into a dict)
- Anonymous functions: defined with lambda, e.g. add = lambda arg1, arg2: arg1 + arg2
- return sends a value back to the caller
- Since Python 3.8, / in a parameter list forces the parameters before it to be positional-only.
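The parameter kinds above can be sketched in one toy function (the names greet and add_nums are invented for illustration):

```python
def greet(name, punct='!', *args):
    # name: positional; punct: default; *args: variable-length extras
    extra = ','.join(str(a) for a in args)
    return f"hello {name}{punct}{extra}"

print(greet('bob'))                    # default punct used -> hello bob!
print(greet(name='amy', punct='?'))    # keyword arguments -> hello amy?
print(greet('eve', '.', 1, 2))         # variable-length args -> hello eve.1,2

add_nums = lambda a, b: a + b          # anonymous function
print(add_nums(2, 3))                  # 5
```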
data structure
Program block
A module is simply a .py file containing Python definitions and statements; grouping code into modules keeps it organized and reusable.
- Importing a module: import module, or from module import name, or from module import *
- Modules group related functions (with return values), classes, and variables for reuse
The __name__ attribute
# __name__ is '__main__' only when the file is run as the main program
if __name__ == '__main__':
    print('The program is running by itself')
else:
    print('This module was imported from another module')
The dir() function lists the names defined inside a module.
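For example, applied to the standard math module:

```python
import math

names = dir(math)        # all names defined in the module, as a list of strings
print('sqrt' in names)   # True
print([n for n in names if n.startswith('is')])  # the is* predicates
```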
Standard modules: Python ships with a library of standard modules, such as sys. Packages structure the module namespace using dotted module names (A.B is submodule B of package A), which avoids name clashes between libraries: different authors can each provide a module with the same short name, as in NumPy or a Python graphics library.
Suppose you want to design a module (or a "package") that processes sound files and data uniformly.
There are many different audio file formats (basically distinguished by suffixes, such as:. wav,: file:.aiff,: file:.au,), so you need to have a group of increasing modules to convert between different formats.
And for these audio data, there are many different operations (such as mixing, adding echo, adding equalizer function, creating artificial stereo effect), so you also need a set of endless modules to deal with these operations.
sound/                      Top-level package
      __init__.py           Initialize the sound package
      formats/              Subpackage for file format conversions
              __init__.py
              wavread.py
              wavwrite.py
              aiffread.py
              aiffwrite.py
              auread.py
              auwrite.py
              ...
      effects/              Subpackage for sound effects
              __init__.py
              echo.py
              surround.py
              reverse.py
              ...
      filters/              Subpackage for filters
              __init__.py
              equalizer.py
              vocoder.py
              karaoke.py
              ...
When importing a package, Python will find the subdirectories contained in the package according to the directory in sys.path.
A directory is treated as a package only if it contains a file named __init__.py; this prevents directories with common names (such as string) from unintentionally shadowing modules later in the search path.
In the simplest case, __init__.py can be an empty file; it can also run package initialization code or assign the __all__ variable (described later).
Users can import only specific modules in one package at a time, such as:
#Import package data import sound.effects.echo
To import everything from a package module, use the from form, e.g. from sound.effects.echo import *.
Input and output
Python can output values in two ways:
Expression statement
print() function.
Using the write() method of the file object, the standard output file can be referenced with sys.stdout.
If you want the output to be more diverse, you can use the str.format() function to format the output value.
If you want to convert the output value into a string, you can use the repr() or str() function.
- str(): the function returns a user-friendly expression.
- repr(): produces an interpreter readable expression.
>>> s = 'Hello, Runoob'
>>> str(s)
'Hello, Runoob'
>>> repr(s)
"'Hello, Runoob'"
>>> str(1/7)
'0.14285714285714285'
>>> x = 10 * 3.25
>>> y = 200 * 200
>>> s = 'The value of x is: ' + repr(x) + ', the value of y is: ' + repr(y) + '...'
>>> print(s)
The value of x is: 32.5, the value of y is: 40000...
>>> # The repr() function can escape special characters in a string
... hello = 'hello, runoob\n'
>>> hellos = repr(hello)
>>> print(hellos)
'hello, runoob\n'
>>> # The argument to repr() can be any Python object
... repr((x, y, ('Google', 'Runoob')))
"(32.5, 40000, ('Google', 'Runoob'))"
for x in range(1, 11): ... print('{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x))
str.format() is the common way to format values into strings.
Read from keyboard
Keyboard input is read with the input() function.
Read and write files
It mainly adopts open(filename, mode), where filename represents the name of the file and mode represents the mode
'r', 'w' and 'a' open the file for reading, writing (truncating), and appending respectively.
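A self-contained sketch of the three modes, using a temporary file so nothing on disk is touched:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'foo.txt')

f = open(path, 'w')        # 'w' creates/truncates for writing
f.write('hello\nworld\n')
f.close()

f = open(path, 'a')        # 'a' appends at the end
f.write('again\n')
f.close()

f = open(path, 'r')        # 'r' opens for reading
data = f.read()
f.close()
print(data)
```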
Method of reading file object
f.read() reads the entire contents of the file:

#!/usr/bin/python3

# Open a file
f = open("/tmp/foo.txt", "r")
str = f.read()
print(str)

# Close the open file
f.close()
readline() reads a single line of data:

str = f.readline()
In addition, you can iterate through for
for line in f:
    print(line, end=" ")
f.close()
Writing
f.write(string) writes a string to the file and returns the number of characters written.
f.close() closes the file and releases its resources.
OS
The os module provides rich methods for processing files and directories. The common methods are shown in the following table:
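A few of the most common os calls, as a sketch:

```python
import os

cwd = os.getcwd()                     # current working directory
entries = os.listdir(cwd)             # names of entries in that directory
print(type(cwd).__name__)             # str
print(os.path.join(cwd, 'data.txt'))  # portable path building
```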
python errors and exceptions
try-except: if the exception raised inside try matches the type named after except, that handler runs; an unmatched exception propagates to the enclosing try statement.
try-except-else: the else block runs only when no exception occurred.
try-except-else-finally: the finally block runs in every case, whether or not an exception was raised.
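The four clauses can be sketched together in one small function (safe_div is an invented name):

```python
def safe_div(a, b):
    try:
        result = a / b
    except ZeroDivisionError:
        return 'division by zero'   # runs only on that exception
    else:
        return result               # runs only when no exception occurred
    finally:
        pass                        # runs in every case; cleanup goes here

print(safe_div(10, 2))   # 5.0
print(safe_div(10, 0))   # division by zero
```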
Throw an exception (the exception here refers to the self-defined exception)
Use raise to throw one:

x = 10
if x > 5:
    raise Exception("The value of the data cannot exceed 5")
User-defined exceptions are classes derived from Exception. Built-in exceptions arise from invalid operations, for example dividing two strings:

  File "<stdin>", line 3, in divide
TypeError: unsupported operand type(s) for /: 'str' and 'str'
object-oriented
Class definition:
class ClassName: <statement-1> . . . <statement-N>
Class object
Class objects support two operations: attribute reference and instantiation.
Attribute references use the same standard syntax as all attribute references in Python: obj.name.
After the class object is created, all names in the class namespace are valid attribute names. Therefore, if the class definition is as follows:
#!/usr/bin/python3

class MyClass:
    """A simple class instance"""
    i = 12345
    def f(self):
        return 'hello world'

# Instantiate the class
x = MyClass()

# Access the class's attributes and methods
print("Attribute i of MyClass is:", x.i)
print("Method f of MyClass outputs:", x.f())
The class has a special method (constructor) named init(). This method will be called automatically when the class is instantiated, as follows:
def __init__(self): self.data = []
Class defines the init() method, and the class instantiation operation will automatically call the init() method. Instantiate class MyClass as follows, and the corresponding init() method will be called:
x = MyClass()
Of course, the init() method can have parameters, which are passed to the instantiation operation of the class through init(). For example:
#!/usr/bin/python3

class Complex:
    def __init__(self, realpart, imagpart):
        self.r = realpart
        self.i = imagpart

x = Complex(3.0, -4.5)
print(x.r, x.i)   # Output: 3.0 -4.5