text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringlengths 9
15
⌀ | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
---|---|---|---|---|---|
I need some help with writing a program that would read a .txt file that consists of some words and then displays the longest word on the screen.
Here is the assignment description:
Write a program that reads this file and finds the longest word that contains only a single vowel (a, e, i, o, u). Output this word (there will actually be several ties for the longest word. Your program only needs to output one of these words).
I am working on the the part of the assignment where I sort out the words and display the longest word in the list. I am having a little trouble with it, how would I get the program to count the length of the words and display one of the longest words?
I noticed that someone had a question about the exact same problem but no one seemed to answer their question.
Here is what I have done so far though:
#include <cstdlib> #include <iostream> #include <fstream> void input (char*); void play(char*, char[5000], int&, int&); void wordCount (char*); void longWord (char*, int); using namespace std; int main(){ ifstream inStream; inStream.open("./words.txt"); char temp[5000]; // inStream >> len; int counter = 0; int max_word = -1; int posi=0; int length = (strlen(temp)); temp[length]= ' '; // cout<<length<<endl; for(int i=0; i<=length; i++) { if(temp[i] !=' ') { counter++; } else if(temp[i]==' ') { if(counter > max_word) { max_word = counter; posi=i; } counter = 0; } } cout <<"Longest word:"; for(int k=posi-max_word; k<posi; k++) {cout<<temp[k];} getchar(); } inStream.close( ); system("PAUSE"); return EXIT_SUCCESS; }
This post has been edited by XinJiki: 21 February 2010 - 12:11 PM | http://www.dreamincode.net/forums/topic/157290-finding-the-longest-word-program/ | CC-MAIN-2017-17 | refinedweb | 275 | 67.59 |
PyRETIS is open source and hosted on gitlab. We are happy to include more developers and we want active users. If you wish to contribute to the PyRETIS project, please read through the code guidelines and the short description on contributing given below.
The guidelines can be summarised as follows:
loggerrather than
print()in libraries.
PyRETIS follows the Python pep8 style guide (see also pep8.org) and new code should be checked with the pep8 style guide checker and pylint:
pycodestyle source_file.py pylint source_file.py
or other tools like PyChecker or pyflakes. NumPy’s imports can be a bit tricky understand so you can help pylint by doing
pylint --extension-pkg-whitelist=numpy source_file.py
There is also the
import this
which can be useful to remember.
The PyRETIS project is documented using the NumPy/SciPy documentation standard and contributors are requested to familiarise themselves with this style and use it. Documentation style can be checked with pydocstyle
pydocstyle source_file.py
We also try to avoid
print() statements in the libraries and reserve such
statements for console output from command line scripts/programs. Output, like
debug information, warnings etc. can be handled by using
a logger.
In a library, we typically just import and set up the logger
by doing:
import logging logger = logging.getLogger(__name__) # pylint: disable=C0103 logger.addHandler(logging.NullHandler())
We can then use it as follows:
# Report some information logger.info('Up and running') logger.debug('Debug info') logger.warning('This is a warning') logger.critical('Something critical happened!') logger.error('An error occurred!')
Please note that we do not add any particular handlers here - it is up to the user to define what should happen when a logging event occur.
There is a nice guide on github about contributing to open source projects. In PyRETIS we largely follow this approach and the issue tracker is used for reporting bugs and issues and requesting new features. This section on contributing is based on the description for gitlab.
Before contributing, please read the short guidelines given below for reporting bugs and issues and for requesting new features.
If you find a bug in PyRETIS, please create an issue using the following template:
Summary ------- One sentence summary of the issue: What goes wrong, what did you expect to happen Steps to reproduce ------------------ Describe how the issue can be reproduced. Expected behavior ----------------- Describe what behavior you expected instead of what actually happens. Relevant logs ------------- Add logs from the output. Output of checks ---------------- Be sure that all the tests pass before filing the issue. Possible fixes -------------- If you can, link to the line of code that might be responsible for the problem.
If you wish to fix the bug yourself, please follow the approach described for merge requests below.
If you are missing some functionality in PyRETIS you can create
a new issue in the issue tracker and
label it as a feature proposal.
If you do not have rights to add the
feature proposal label, you can ask one
of the core members of the PyRETIS project to add it for you.
You can also implement the changes you want yourself as described below. We cannot promise that we will automatically include your work in PyRETIS but we are happy to have active users and we will consider your contribution. So, when you are finished with your work please make a merge request.
The general approach for making your bug-fix or new feature available in the PyRETIS project is as follows:
Summary ------- What does this merge request do? Justification ------------- Why is this merge request needed? Description of the new code --------------------------- A short description of the new code. Are there points in the code the reviewer needs to double check? References ---------- Add references to relevant issue numbers.
After submitting the merge request the code will be reviewed [2] by (another) member of the PyRETIS team. | http://pyretis.org/developer/index.html | CC-MAIN-2018-05 | refinedweb | 647 | 66.03 |
Opened 6 years ago
Closed 5 years ago
#17976 closed Bug (fixed)
Forms BooleanField not picklable
Description
Today, I switched from 1.3 to 1.4 and since then, BooleanFields in forms are not picklable (and thus cacheable) anymore. The following doesn't work since 1.4:
import pickle from django import forms class MyForm(forms.Form): my_field = forms.BooleanField() pickle.dump(MyForm(), open("/dev/null", "w"))
Attachments (1)
Change History (7)
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
comment:3 Changed 6 years ago by
Changed 6 years ago by
Use a module-level function to restore picklability
comment:4 Changed 6 years ago by
comment:5 Changed 5 years ago by
Fixed in 9350d1d59c1a4e6a9ac246a808f55da35de0df69.
This must be backported to the 1.4.X branch, but we're in the middle of the migration to GitHub and currently don't have a 1.4 branch. Marking as Ready For Backport.
comment:6 Changed 5 years ago by
Backported in a3c8201b77002645f86c39f978fe132cb2dbab14.
Change in behavior was introduced by r17132, with r17131:
With r17132: | https://code.djangoproject.com/ticket/17976 | CC-MAIN-2017-43 | refinedweb | 179 | 75.2 |
Activity selection problem is an example of greedy algorithm.Greedy algorithms look for simple, easy-to-implement solutions to complex, multi-step problems by deciding which next step will provide the most obvious benefit.The advantage of using a greedy algorithm is that solutions to smaller sub-problems of the problem can be straightforward and easy to understand.The disadvantage is that it is entirely possible that the most optimal short-term solutions may lead to the worst long-term outcome.
Problem-
Given n activities with their start and finish times,we have to find the maximum number of activities that can be performed by a single person,assuming that a person can only work on a single activity at a time.
Algorithm-
1) Sort the activities according to their finishing time
2) Select the first activity from the sorted array and print it.
3) Do following for remaining activities in the sorted array.
a) If the start time of this activity is greater than the finish time of previously selected activity then select this activity and print it.
Example-
start[]={1,5,7,1}
finish[]={7,8,8,8}
The maximum set of activities that can be executed by a single person is {0,2} where 0,2 are the activity numbers.
start[] = {1, 3, 0, 5, 8, 5}
finish[] = {2, 4, 6, 7, 9, 9}
The maximum set of activities that can be executed by a single person is {0, 1, 3, 4}.
Code-
#include <cstdio> #include <algorithm> using namespace std; pair< int, int > a[100000]; //a[].first denotes the finish time //a[].second denotes the starting time int main() { int i, n, last; //n denotes the number of activities scanf("%d", &n); for(i = 0; i < n; i++) { scanf("%d %d",&a[i].second,&a[i].first); } //using sort function for an array of pairs sorts // it according to the first one. //sorting according to finish time sort(a, a + n); last = -1;//initialization for(i = 0; i < n; i++) { if(a[i].second>= last) //step 3 { //printing the activity number which is selected printf("%d ",i); last = a[i].first; } } return 0; }
Thanks for the implementation .. It was awesome
The index of activity gets messed up.
For example, given the data –
start[]={1,5,7,1}
finish[]={7,8,8,8}
Here, the output gives 0 3, where it should’ve been 0 2, which is because when the pair is sorted according to the given finish array, the data becomes like this-
i starting from 0, i++;
a.first[0] – 7, a.second[0] – 1 // i = 0, whereas activity task index = 0
a.first[1] – 8, a.second[1] – 1 // i = 1, whereas activity task index = 3
a.first[2] – 8, a.second[2] – 5 // i = 2, whereas activity task index = 1
a.first[3] – 8, a.second[3] – 7 // i = 3, whereas activity task index = 2
Which consequently gives out the answer 0, 3; the value of i, where it really should have been 0, 2; the value of activity task index.
That means we have to keep track of the activity task index. I have tried but failed to keep track. Can you provide a method which can do that?
Thanks! | http://www.codemarvels.com/2013/08/activity-selection-problem/ | CC-MAIN-2020-34 | refinedweb | 538 | 64 |
Multimedia Overview#
A set of APIs for working with audio, video and camera devices.
Multimedia support in Qt is provided by the Qt Multimedia module. The Qt Multimedia module provides a rich feature set that enables you to easily take advantage of a platform’s multimedia capabilities, such as media playback and the use of camera devices.
Features#
Here are some things you can do with the Qt Multimedia APIs:
-
Access raw audio devices for input and output.
-
Play low latency sound effects.
-
Play media files in playlists (such as compressed audio or video files).
-
Record audio and compress it.
-
Use a camera, including viewfinder, image capture, and movie recording
-
Decode audio media files into memory for processing.
Multimedia Components#
The Qt Multimedia APIs are categorized into three main components. More information specific to each component is available in the overview pages. You can also take a look at some recipes .
Multimedia Recipes#
For some quick recipes, see this table:
Limitations#
The Qt Multimedia APIs build upon the multimedia framework of the underlying platform. This can mean that support for various codecs, or containers will vary between machines. This support depends on what the end user has installed. See Supported Media Formats for more detail.
Changes from Previous Versions#
If you previously used Qt Multimedia in Qt 5, see Changes to Qt Multimedia for more information on what has changed, and what you might need to change when porting code to Qt 6.
Reference Documentation#
QML Types#
The QML types are accessed by using:
import QtMultimedia | https://doc-snapshots.qt.io/qtforpython-dev/overviews/multimediaoverview.html | CC-MAIN-2022-21 | refinedweb | 256 | 56.25 |
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a popular unsupervised learning method utilized in model building and machine learning algorithms. Before we go any further, we need to define what an “unsupervised” learning method is. Unsupervised learning methods are when there is no clear objective or outcome we are seeking to find. Instead, we are clustering the data together based on the similarity of observations. To help clarify, let’s take Netflix as an example. Based on previous shows you have watched in the past, Netflix will recommend shows for you to watch next. Anyone who has ever watched or been on Netflix has seen the screen below with recommendations (Yes, this image is taken directly from my Netflix account and if you have never watched Shameless before I suggest you get on that ASAP).
Because I watched ‘Shameless’, Netflix recommends several other similar shows to watch. But where is Netflix gathering those recommendations from? Considering it is trying to predict the future with what show I am going to watch next, Netflix has nothing to base the predictions or recommendations on (no clear definitive objective). Instead, Netflix looks at other users who have also watched ‘Shameless’ in the past, and looks at what those users watched in addition to ‘Shameless’. By doing so, Netflix is clustering its users together based on similarity of interests. This is exactly how unsupervised learning works. Simply clustering observations together based on similarity, hoping to make accurate conclusions based on the clusters.
Back to DBSCAN. DBSCAN is a clustering method that is used in machine learning to separate clusters of high density from clusters of low density. Given that DBSCAN is a density based clustering algorithm, it does a great job of seeking areas in the data that have a high density of observations, versus areas of the data that are not very dense with observations. DBSCAN can sort data into clusters of varying shapes as well, another strong advantage. DBSCAN works as such:
- Divides the dataset into n dimensions
- For each point in the dataset, DBSCAN forms an n dimensional shape around that data point, and then counts how many data points fall within that shape.
- DBSCAN counts this shape as a cluster. DBSCAN iteratively expands the cluster, by going through each individual point within the cluster, and counting the number of other data points nearby. Take the graphic below for an example:
Going through the aforementioned process step-by-step, DBSCAN will start by dividing the data into n dimensions. After DBSCAN has done so, it will start at a random point (in this case lets assume it was one of the red points), and it will count how many other points are nearby. DBSCAN will continue this process until no other data points are nearby, and then it will look to form a second cluster.
As you may have noticed from the graphic, there are a couple parameters and specifications that we need to give DBSCAN before it does its work. The two parameters we need to specify are as such:
What is the minimum number of data points needed to determine a single cluster?
How far away can one point be from the next point within the same cluster?
Referring back to the graphic, the epsilon is the radius given to test the distance between data points. If a point falls within the epsilon distance of another point, those two points will be in the same cluster.
Furthermore, the minimum number of points needed is set to 4 in this scenario. When going through each data point, as long as DBSCAN finds 4 points within epsilon distance of each other, a cluster is formed.
IMPORTANT: In order for a point to be considered a “core” point, it must contain the minimum number of points within epsilon distance. The point itself is included in the count. View the documentation and look at the min_samples parameter in particular.
You will also notice that the Blue point in the graphic is not contained within any cluster. DBSCAN does NOT necessarily categorize every data point, and is therefore terrific with handling outliers in the dataset. Lets examine the graphic below:
The left image depicts a more traditional clustering method, such as K-Means, that does not account for multi-dimensionality. Whereas the right image shows how DBSCAN can contort the data into different shapes and dimensions in order to find similar clusters. We also notice in the right image, that the points along the outer edge of the dataset are not classified, suggesting they are outliers amongst the data.
Advantages of DBSCAN:
- Is great at separating clusters of high density versus clusters of low density within a given dataset.
- Is great with handling outliers within the dataset.
Disadvantages of DBSCAN:
- While DBSCAN is great at separating high density clusters from low density clusters, DBSCAN struggles with clusters of similar density.
- Struggles with high dimensionality data. I know, this entire article I have stated how DBSCAN is great at contorting the data into different dimensions and shapes. However, DBSCAN can only go so far, if given data with too many dimensions, DBSCAN suffers
Below I have included how to implement DBSCAN in Python, in which afterwards I explain the metrics and evaluating your DBSCAN Model
DBSCAN Implementation in Python
1. Assigning the data as our X values
# setting up data to cluster
X = data# scale and standardizing data
X = StandardScaler().fit_transform(X)
2. Instantiating our DBSCAN Model. In the code below, epsilon = 3 and min_samples is the minimum number of points needed to constitute a cluster.
# instantiating DBSCAN
dbscan = DBSCAN(eps=3, min_samples=4)# fitting model
model = dbscan.fit(X)
3. Storing the labels formed by the DBSCAN
labels = model.labels_
4. Identifying which points make up our “core points”
import numpy as np
from sklearn import metrics# identify core samples
core_samples = np.zeros_like(labels, dtype=bool)core_samples[dbscan.core_sample_indices_] = True
print(core_samples)
5. Calculating the number of clusters
# declare the number of clusters
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters_)
6. Computing the Silhouette Score
print("Silhoette Coefficient: %0.3f" % metrics.silhouette_score(X, labels)
Metrics for Measuring DBSCAN’s Performance:
Silhouette Score: The silhouette score is calculated utilizing the mean intra- cluster distance between points, AND the mean nearest-cluster distance. For instance, a cluster with a lot of data points very close to each other (high density) AND is far away from the next nearest cluster (suggesting the cluster is very unique in comparison to the next closest), will have a strong silhouette score. A silhouette score ranges from -1 to 1, with -1 being the worst score possible and 1 being the best score. Silhouette scores of 0 suggest overlapping clusters.
Inertia: Inertia measures the internal cluster sum of squares (sum of squares is the sum of all residuals). Inertia is utilized to measure how related clusters are amongst themselves, the lower the inertia score the better. HOWEVER, it is important to note that inertia heavily relies on the assumption that the clusters are convex (of spherical shape). DBSCAN does not necessarily divide data into spherical clusters, therefore inertia is not a good metric to use for evaluating DBSCAN models (which is why I did not include inertia in the code above). Inertia is more often used in other clustering methods, such as K-means clustering. | https://elutins.medium.com/dbscan-what-is-it-when-to-use-it-how-to-use-it-8bd506293818 | CC-MAIN-2021-10 | refinedweb | 1,233 | 52.09 |
!
getting MATE + Openbox to work: gnome-panel-control?
Archived topics about LMDE 1
Forum rules
- LMDE 1 has reached end of life (EOL) on January 1st 2016 and is no longer supported:
- Instructions for upgrading to LMDE 2:
4 posts • Page 1 of 1
Re: getting MATE + Openbox to work: gnome-panel-control?
Hey Morrog, I have good news for you
I downloaded gnome-panel-control.c and replaced GNOME calls for MATE calls, saved it as mate-panel-control.c, compiled it, and it works like a charm!
Here is mate-panel-control.c:
You'll need to install this to be able to compile it:
And then compile it like this:
You can copy it to /usr/bin if you like:
And there you have it!
You can now attach ALT+F1 to main menu and ALT+F2 to run dialog.
Cheers!
Nahuel.
I downloaded gnome-panel-control.c and replaced GNOME calls for MATE calls, saved it as mate-panel-control.c, compiled it, and it works like a charm!
Here is mate-panel-control.c:
Code: Select all
#include <X11/Xlib.h>
#include <string.h>
#include <stdio.h>
#include <assert.h>
typedef enum
{
NONE,
MAIN_MENU,
RUN_DIALOG
} Action;
int main(int argc, char **argv)
{
int i;
Action a = NONE;
for (i = 1; i < argc; ++i) {
if (!strcmp(argv[i], "--help")) {
a = NONE;
break;
}
if (!strcmp(argv[i], "--main-menu")) {
a = MAIN_MENU;
break;
}
if (!strcmp(argv[i], "--run-dialog")) {
a = RUN_DIALOG;
break;
}
}
if (!a) {
printf("Usage: mate-panel-control ACTION\n\n");
printf("Actions:\n");
printf(" --help Display this help and exit\n");
printf(" --main-menu Show the main menu\n");
printf(" --run-dialog Show the run dialog\n\n");
return 0;
}
{
Display *d;
Window root;
XClientMessageEvent ce;
Atom act_atom;
Time timestamp;
d = XOpenDisplay(NULL);
if (!d) {
fprintf(stderr,
"Unable to open the X display specified by the DISPLAY "
"environment variable. Ensure you have permission to "
"connect to the display.");
return 1;
}
root = RootWindowOfScreen(DefaultScreenOfDisplay(d));
switch (a) {
case MAIN_MENU:
act_atom = XInternAtom(d, "_MATE_PANEL_ACTION_MAIN_MENU", False);
break;
case RUN_DIALOG:
act_atom = XInternAtom(d, "_MATE_PANEL_ACTION_RUN_DIALOG", False);
break;
default:
assert(0);
}
/* Generate a timestamp */
{
XEvent event;
Window win;
win = XCreateSimpleWindow(d, root, 0, 0, 1, 1, 0, 0, 0);
XSelectInput(d, win, PropertyChangeMask);
XChangeProperty(d, win, act_atom, act_atom, 8,
PropModeAppend, NULL, 0);
XWindowEvent(d, win, PropertyChangeMask, &event);
XDestroyWindow(d, win);
timestamp = event.xproperty.time;
}
ce.type = ClientMessage;
ce.window = root;
ce.message_type = XInternAtom(d, "_MATE_PANEL_ACTION", False);
ce.format = 32;
ce.data.l[0] = act_atom;
ce.data.l[1] = timestamp;
XSendEvent(d, root, False, StructureNotifyMask, (XEvent*) &ce);
XCloseDisplay(d);
}
return 0;
}
You'll need to install this to be able to compile it:
Code: Select all
sudo apt-get install libx11-dev
And then compile it like this:
Code: Select all
gcc ./mate-panel-control.c -o mate-panel-control -L/usr/X11R6/lib -lX11
You can copy it to /usr/bin if you like:
Code: Select all
sudo cp ./mate-panel-control /usr/bin
And there you have it!
You can now attach ALT+F1 to main menu and ALT+F2 to run dialog.
Cheers!
Nahuel.
Re: getting MATE + Openbox to work: gnome-panel-control?
Hi, I am using MATE and Compiz and had the same problem like Morrog, i.e. ALT+<F1> and ALT+<F2> was not working.
With your nice script, Nahuel, everything runs perfectly now, thanks a lot
With your nice script, Nahuel, everything runs perfectly now, thanks a lot
Linux Mint 17 Qiana
Linux 3.13.0-24-generic x86_64
Intel(R) Core(TM) i3-2100
Linux 3.13.0-24-generic x86_64
Intel(R) Core(TM) i3-2100
4 posts • Page 1 of 1
Who is online
Users browsing this forum: No registered users and 0 guests | https://forums.linuxmint.com/viewtopic.php?f=219&t=109474?f=219&t=109474 | CC-MAIN-2017-26 | refinedweb | 620 | 58.38 |
Netbeans Platform Node Icon Badges
The Netbeans IDE has some very cool way of letting users know when a certain file contains errors. For example, if you are editing a Java file and the file contains syntax errors, Netbeans will change the file icons with a new icon indicating errors:
You can see that the error icons propagate all the way up to the top-most parent, the project itself.
Netbeans Platform Icon Badges
As a user of the Netbeans IDE and before even knowing what the Netbeans Platform was, I always assumed that the normal state of the icon and the error state of the icon were two completely different icons handled by Netbeans. A few days ago I learned that this is not the case.
Netbeans uses the concept of icon badges to display these kind of error icons. Icon badges are basically multiple images merged together at runtime into a single
Image object.
In this article I want to quickly go over how you can apply this concept in your Netbeans Platform application when using the Nodes API.
Icon Badges With Nodes
When defining a node class in Netbeans Platform, there is a method called
getIcon which you can define yourself so that when the nodes are rendered in the application, the default node icons will be replaced with the icon of your choosing. For example:
public class MyBeanNode extends BeanNode<MyBean> { // Constructor and other stuff here... @Override public Image getIcon(int type) { return ImageUtilities.loadImage("path/to/my/icon"); } }
To apply the concept of badging, we can simply use the
mergeImages method from
ImageUtilities. To determine whether the method needs to merge an error badge icon or not will depend on how you are handling current errors for the nodes. Here is a very basic example:
public class MyBeanNode extends BeanNode<MyBean> { private final Image ERROR_BADGE = ImageUtilities.loadImage("path/to/error/badge/icon"); // ... @Override public Image getIcon(int type) { Image icon = ImageUtilities.loadImage("path/to/my/icon"); if (getBean().hasErrors()) { icon = ImageUtilities.mergeImages(icon, ERROR_BADGE, 7, 7); } return icon; } }
In the above example we are relying on a
hasErrors method in the
MyBean object, which is obtained by calling
getBean(). Keep in mind that this may or may not be the best way to handle these errors.
Now when the factory object gets called again to render the nodes, nodes that contain errors will have their original icons merged with the error badge icon. | https://aalvarez.me/posts/netbeans-platform-node-icon-badges/ | CC-MAIN-2020-16 | refinedweb | 408 | 59.43 |
One instance running per user/machine/cpu/cpu-core/network-interface/LAN ?
The solution depends on what exactly you want to limit, and what rights you have.
[dcw] - 2011-10-26 15:21:57Seems simple for unix:
set pslst [exec ps -ef | grep mysingle.tcl] set cntr 0 foreach i $pslst { if { $i == "tclsh" } { incr cntr } } if { $cntr == 1 } { puts "One instance of the script is running" } else { puts "$cntr instances of the script are running" }
MHo 2011-10-26: On windows, you could, for example, register yourself as a dde server with a fixed name. The very first thing each newly started instance should do then is to look if a dde server with name is already running (with dde services). Another method is to start a socket server on a specific port and try to connect to that port upon start. Or you could create a lock file and look for it... There are myriads of other methods all of which are more or less tricky...
dkf - 2011-10-27 13:43:21One method is to use a server socket:
set socketID 12345; # Pick something try { socket -server {apply {{ch args} { close $ch raise . }}} -myaddr localhost $socketID } trap {POSIX EADDRINUSE} {} { close [socket localhost $socketID] }OK, you might want a slightly less dumb protocol (e.g., passing a filename across) but that's just a refinement.APE - 2012-02-28 12:43:00 Same method but using catch (still using Tcl 8.5) :
set socketID 12345; # Pick something catch {socket -server { apply {{ch args} { close $ch raise . } } } -myaddr localhost $socketID } msg if {$msg == "couldn't open socket: address already in use"} { close [socket localhost $socketID] }
[JD] - 2013-01-09 02:54:32Here is my solution to making my program a single instance application. It gets the PID of the currently running instance. It then uses the 'send' command to call a function passing the PID as a parameter. The program checks to see if the PIDS match and if they do it means it is the original programming making the call. When a new instance is started, the PIDS do not match and it returns a boolean TRUE. When it gets TRUE then it exits the instance trying to start.
set appPid [pid] proc init {} { global appPid set chck [send pk2gui checkInstance $appPid] if {$chck} { exit } } proc checkInstance args { global appPid if {$args != $appPid} { return 1 } else { return 0 } } initLet me know what you think.
[Komenor] - 2013-06-02 07:27:49If you look at the Tk8.5 documentation, page "send", you will see an example to solve your problem. But, I think that to make it simple, just start your program with the following commands :
if {[tk appname newName] != newName} { exit }wdb Ingenious ... if you enclose newName in quotes:
if {[tk appname newName] != "newName"} { exit }Won't work on Windows, as it lacks send.
[dgood] - 2015-05-04 Here's my little package for just this kind of thing. It need much improvement, but it works well for my purposes so far.Edited on 2015-05-06 to be more picky about killing processes.
## # SelfService # # This package creates a mechanism for a script to figure out if it is already # running, and still sane, by using sockets to pass messages to another copy of # itself. This is most useful for embedded system scripts which are invoked # periodically by the operating system (i.e. systemd, cron, etc...). # # This only works in unix because the killOthers proc uses standard unix utils. package provide SelfService 1.0 namespace eval selfservice { variable S array set S { Port {} Server {} } proc Server {chan addr port} { fconfigure $chan -blocking 0 -buffering none fileevent $chan readable [list selfservice::Reader $chan] } # Used for both query and response proc Reader {chan} { variable S set line [gets $chan] if {$line eq "STATUS"} { puts $chan "STATUS OK" flush $chan } elseif {$line eq "STATUS OK"} { set S(Response) $line } if {[eof $chan]} { catch {close $chan} } } # Setup server socket # port can be 0, in which case the system will pick an open port # In either case, S(Port) will refelct the actual opened port number proc startup {name port} { variable S killOthers $name set S(Server) [socket -server selfservice::Server $port] set S(Port) [lindex [fconfigure $S(Server) -sockname] 2] return $S(Server) } proc shutdown {} { variable S catch {close $S(Server)} return $S(Server) } # Get the current port number proc getPort {} { variable S return $S(Port) } # Returns bool 1 if server is up and sane, 0 otherwise proc isOk {port} { variable S if {[catch {socket localhost $port} chan]} { return 0 } # Check to see if server is still sane fconfigure $chan -blocking 0 -buffering none fileevent $chan readable [list selfservice::Reader $chan] puts $chan "STATUS" flush $chan set S(Response) "" set afterId [after 5000 [list set selfservice::S(Response) "timeout"]] vwait selfservice::S(Response) # Shutdown channel after cancel $afterId catch {close $chan} if {$S(Response) eq "STATUS OK"} { return 1 } else { return 0 } } # Kills all processes which match name # Returns a list of the process ids which were killed proc killOthers {name} { # kill all processes that match pattern and are not my pid set ps [split [exec ps -eo pid,cmd | grep $name] "\n"] set ps [lrange $ps 0 end-1]; # Ignore the grep command itself set killList {} foreach p $ps { lappend killList [lindex $p 0] } set killList [lsearch -all -inline -not $killList [pid]] foreach pid $killList { catch {exec kill -9 $pid} } return $killList } }
# Example usage if {[selfservice::isOk 12345]} { puts "Server is OK, exiting" exit } else { puts "No response from server" } puts "Starting new server..." selfservice::startup $::argv0 12345 # rest of script here...If you are on Linux, you might be able to use Lock files using atomic hard links (and yes, I know, you were looking for a cross-platform solution, but it might just also work on Windows, I've just not tried it yet)
[Walter] - 2016-10-31 22:07:42I have an expect script which prompts for my password, spawns a shell, then uses 'interact' to creates a bunch of shortcut commands. I run this script when I first log on to my primary host so that any spawned local or remote connections will inherit all of my shortcuts. But since the new shell doesn't immediately exit I could accidentally rerun the command and have nested bash shortcut scripts. Note: I didn't want the script to prevent independent logins from running the same command, just prevent one single login from running the same initialization in any of its child processes.After reading the previous replies I thought that should be easy to do by setting an environment variable then testing for it. But tcl didn't like that - it blew up anytime the environment variable wasn't already set, even if I wrapped it with 'catch'. I bet there's a way to do this but I finally just wrapped my expect script in a simple bash script, something like this:
# cat passwrap.sh !/bin/bash -f export POPID="BashInit$POPID" if [ ! $POPID == "BashInit" ]; then echo "$0 is already running..." exit 1 fi # Run expect script to set shortcuts expect passout.expSurely one of you expect/tcl experts knows how to both read and write the environment variables even when the variable isn't preset. I'll keep playing with it in my spare time and post a solution if I find one first.
[Walter] - 2016-11-01 00:52:48OK, that was easy once I found how to reference global (env) variables:#!/usr/bin/expect
set SCRIPT [info script] if { [info exists ::env(PSTAT) ] } { puts "\n$SCRIPT is already running. Cannot restart in child process.\n" exit 1 } set ::env(PSTAT) "Is running" set PPassword "[email protected]@kingAsimpleMisteak" spawn /bin/bash interact { "~!Pp" { send -- "$PPassword" } "~PID" { send -- "$PID" } "~!!P" { exec kill -9 $PID ;# annihilate this process } "~!!K" { send -- "kill -9 $PID\r" ;# nuke it another way } "~psf" { send -- "ps -ef | grep "; # find a string in ps listing } "~Gg" { send -- "a random loooong command that I hate typing"; } "~Hh" { send -- "Another long string that I want to shortcut"; } "~!JJJ" { send -- "The Daily Planet: J Jonah Jameson" } }=============================Now if we rerun the script from within the spawned bash/interact it will warn us and bailout without redefining everything. This is especially helpful when the script spawns lots of commands, such as xterm connections to remote hosts, that you don't want duplicated. | http://wiki.tcl.tk/10648 | CC-MAIN-2016-50 | refinedweb | 1,395 | 65.76 |
01 September 2010 16:33 [Source: ICIS news]
LONDON (ICIS)--BASF intends to increase its polystyrene (PS) prices in September by €85/tonne (€108/tonne) from August, a company source said on Wednesday.
“The styrene contract is up by €33/tonne from August to September, so we will target €85/tonne for PS,” said the source.
Other sources said the actual increase in the September styrene contract price could be slightly lower than €33/tonne, depending on how the increase was calculated.
However, sources added it was clear that this month’s styrene contract has been settled higher than in August, and BASF intended to increase the spread between styrene and PS.
A third European September styrene barge contract was agreed on Wednesday at €992/tonne, up €30/tonne from the previous month.
An initial September styrene barge contract was agreed on 31 August between one pair of settlers at €1,033tonne, up €38/tonne from August.
A second European September styrene barge contract was agreed earlier on Wednesday at €1,006/tonne, up by €31/tonne from the previous month. The same producer then later confirmed that it had settled September at €1,006/tonne with another consumer, bringing the total number of barge settlements this month to four.
INEOS NOVA announced last week that it was targeting a €50/tonne increase for general purpose PS (GPPS) and a €65/tonne hike for September high impact PS (HIPS).
Producers said there had been strong PS demand in August and they reported good volumes for September.
Availability was affected by BASF’s planned maintenance shutdown at its site at ?xml:namespace>
The European PS market was balanced following several permanent capacity closures in 2009, as producers addressed the oversupply that mainly came about as a result of technology changes.
In 2010, the upturn in the construction sector boosted GPPS volumes as expanded PS (XPS) volumes rose. XPS is used in insulation.
PS buyers said they are not convinced that they will have to pay increases as high as €85/tonne in September, but they acknowledged that PS availability has been an issue in recent weeks.
“We might have to pay more next month, but I can’t see why [producers] should improve their margins above and beyond the styrene increase. Let’s wait and see,” said a buyer.
PS pricing in
PS is used extensively in the packaging and household sectors.
PS producers in
(€1 = $0.79)
For more on polystyrene visit ICIS chemical intelligence
For more on BASF | http://www.icis.com/Articles/2010/09/01/9389940/basf-targets-85tonne-increase-in-europe-september.html | CC-MAIN-2015-11 | refinedweb | 419 | 59.94 |
I'm looking for a way to translate hostnames to IPs (or sockaddr_in structs), while being threadsafe. gethostbyname() doesn't meet this, as it returns pointers to static data. So, while looking for an alternative, I found getaddrinfo(). My original code compiled fine on both Windows and Linux, suceeded in Linux, but fails in Windows with:
"The <executable name> file is linked to missing export ws2_32.dll:freeaddrinfo."
...and my program never executes, period, AFAIK. The following example also fails with the same error.
MSDN seems to claim this function is compatible back to Win95, and should be in ws2_32.dll. Looking at ws2_32.dll, there does not appear to be a freeaddrinfo or getaddrinfo.
What am I doing wrong, or is there some other (better?) way of connecting to say "google.com"?
Using MinGW v3.4.2 on Win98.
Example found on web, exhibits above error:
Code:
#include <stdio.h>
#include <winsock2.h>
#include <ws2tcpip.h>
int main()
{
char *hostname = "localhost";
struct addrinfo hints, *res;
struct in_addr addr;
int err;
WSADATA data;
WSAStartup(MAKEWORD(2,0), &data);
memset(&hints, 0, sizeof(hints));
hints.ai_socktype = SOCK_STREAM;
hints.ai_family = AF_INET;
if((err = getaddrinfo(hostname, NULL, &hints, &res)) != 0)
{
printf("error %d\n", err);
return 1;
}
addr.S_un = ((struct sockaddr_in *)(res->ai_addr))->sin_addr.S_un;
printf("ip address : %s\n", inet_ntoa(addr));
freeaddrinfo(res);
WSACleanup();
return 0;
} | https://cboard.cprogramming.com/windows-programming/81245-getaddrinfo-printable-thread.html | CC-MAIN-2018-05 | refinedweb | 224 | 62.54 |
I am trying to make sure I understand how %n is used, and I have the very simple code below
#include <stdio.h>
int main (void)
{
int c1, c2;
printf("This%n is fun%n\n", &c1, &c2);
}
Your program is crashing after printing
This.
MinGW uses Microsoft's Visual C++ runtime by default. MSDN says following about "%n":
Because the
%nformat is inherently insecure, it is disabled by default. If
%nis encountered in a format string, the invalid parameter handler is invoked, as described in Parameter Validation. To enable
%nsupport, see
_set_printf_count_output.
The default invalid parameter handler aborts your program. Either enable via
_set_printf_count_output(1) or compile with
-D__USE_MINGW_ANSI_STDIO=1 to use it. | https://codedump.io/share/PGV9Dq6o5J3M/1/use-of-n-more-than-once-in-a-format-string-does-not-seem-to-work | CC-MAIN-2017-13 | refinedweb | 114 | 57.98 |
The BeanShell scripting language has gained a large following since Patrick Niemeyer first started to experiment with scriptable Java[1]. Today, BeanShell is a thriving open-source project, and BeanShell scripting has been incorporated into dozens of commercial and open-source projects, such as BEA's WebLogic, the NetBeans IDE, Sun's Java Studio, OpenOffice, JEdit, and many others. The Java Community Process recently accepted BeanShell as a JSR, the first step for BeanShell to become an official Java standard.
Artima's Frank Sommers interviewed Patrick soon after the JCP vote. Patrick became involved with Oak (Java's predecessor) while working at Southwestern Bell, and gained fame in the Java community both as author of the best-selling O'Reilly book Learning Java, and as creator of BeanShell.
Frank Sommers: What is BeanShell, and what does it offer to a developer?
Pat Niemeyer: BeanShell is a scripting language for Java. At a basic level, BeanShell is a Java source interpreter. The BeanShell syntax is based on, and is a superset of, the Java syntax. You can feed it standard Java code, the kind that you'd give to the Java compiler, and BeanShell will dynamically interpret that code for you at runtime.
More important, however, is that BeanShell extends the Java language syntax into the realm of scripting languages in a natural way, allowing you to "loosen" or "scale" Java from a static to a dynamic programming style. That, in turn, allows you to use Java in all sorts of new ways. BeanShell scripts can work with "live" Java objects and APIs just as any other Java code would, as part of your application, or with a stand-alone interpreter.
By a dynamic programming style, I mean that BeanShell offers features of a traditional scripting language: You can use "loosely" typed variables, without declaring them first. You can write simple, unstructured scripts consisting of just arbitrary statements and expressions, or toss in a few methods if you like. You can supply scripted or compiled "commands" which act just like Java methods, but are loaded on demand when called by the script. You can dynamically change the classpath and reload classes in BeanShell as well as create new classes on the fly. These are examples of using Java language constructs in "scripty" ways.
Beyond that, BeanShell offers additional, highly dynamic language features, such as the ability to implement a "meta-method" to handle method invocations, variables, methods, and namespaces that can be inspected programmatically, and the ability to manipulate the call stack and perform evaluations in powerful ways.
Those latter capabilities are not there so much for day-to-day use, as they are to enable powerful BeanShell commands. BeanShell commands are methods that are a loaded in dynamically. You can mix and match all of this loose syntax with the standard, fully typed Java syntax and classes. That makes cutting and pasting or migrating code between scripts and Java very easy.
Frank Sommers: What are some useful ways BeanShell can help developers in their daily work?
Pat Niemeyer: BeanShell started as a playground for experimenting with Java APIs and "live" objects. There is something immensely satisfying about typing
new JButton(), and seeing the thing pop up right on the screen, then having a way to tweak its methods and watching what it does. I still use BeanShell that way for rapid prototyping.
When I try to remember how the Java regular expression API works, for instance, I start typing in BeanShell. When I'm curious what fonts are available in my current environment, I type a line in BeanShell. There is that prototyping aspect to it, and also there is a role for BeanShell in education. I include BeanShell with my book, Learning Java (O'Reilly & Associates), because BeanShell is an excellent companion for learning or teaching the Java language[2].
In addition, BeanShell as a scripting and extension language allows users of your Java application to customize application behavior with their own scripts. That's similar to how Emacs is extended via Lisp, for instance, with the difference that BeanShell and Java share a common syntax. Often, it's not reasonable to require a regular user of an application to understand Java and have a full blown development environment just to supply a simple macro or "
if-then-else"-type customization for that application.
James Gosling famously noted once that all configuration files eventually become programming languages. BeanShell's simplified syntax is perfect for user scripts, expressions, rules engines, and the like. BeanShell scripts can be used as a general replacement for Java property files, allowing you to not only supply simple values, but to perform more complex logic, if needed. For instance, you can invoke or supply methods, construct objects, and perform procedural setup, all with the familiar Java syntax, but using that syntax in a scripting way.
BeanShell can also do fantastic things for dynamic deployment and debugging. With a couple of lines of code you can create a server from your application, access that server with a web browser, and get a shell inside your running application. You can poke at your app's internals live, even fix things in some cases that way. There are an endless number of things you can do with a dynamic language married to Java.
Frank Sommers: Can you give us a taste of a few dynamic BeanShell features?
Pat Niemeyer: Let me mention just three examples here, but there are many others. First, BeanShell has a handy "
invoke" meta-method:
invoke( name, args ) { ... } // Or, equivalently, with all the type information... void invoke( String name, Object [] args ) { ... }
Implementing a method with those signatures in your script allows you to handle method invocations for non-existent methods:
invoke( name, args ) { print( name + " invoked!" ); } justmadeup( 1, 2, 3 ); // justmadeup invoked!
That's very useful if you don't feel like implementing all methods of a class or interface, or if you wish to implement your own command loading scheme for your scripts. You can even use this pattern to implement mappings or variable argument list schemes for BeanShell methods, by looking up other methods and dispatching invocations to them.
Next, there is the magic "
this.caller" reference that lets you refer to the namespace - the scope - of the caller of your method, to do whatever you'd like on their behalf.
callMe() { this.caller.foo = 42; } callMe(); print( foo ); // 42
That may not seem all that magical until you look at what's going on. The script that you invoked has access to the environment of the line of code that called it, enabling you to do all sorts of things on the caller's behalf. For instance, you could set or get variables, and define or look up methods. That capability is used in many standard BeanShell commands to do things such as taking into account the classes imported in the caller's namespace or, more generally, to evaluate code and leave the side-effects in the caller's scope. The general
eval() method works in this way.
Another example of a dynamic BeanShell command takes advantage of the above feature. The
importObject() command imports the methods and variables of a Java object into your scope. That's a lot like the Java 5 static import feature, but it works relative to a "live" object. For example:
map = new HashMap(); map.put("foo", "bar"); importObject( map ); print( get("foo") ); // bar
This code snippet "mixes in" the methods and variables of the map instance into your current scope, and you can then use those methods and variables as if you had scripted them yourself, or if they were built-in commands. That feature can be very convenient when you want to script part of your application, as it allows your script to "step into" the scope of one of your objects.
These are just a few interesting things to give your BeanShell scripts more leverage and help you keep them both simple and Java-like. I also think it's interesting that the features I just highlighted sort of fell naturally out of the design, and didn't require any special effort to add to the language.
Frank.
Frank
Have an opinion? Readers have already posted 1 comment about this article. Why not add yours?.
Artima provides consulting and training services to help you make the most of Scala, reactive and functional programming, enterprise systems, big data, and testing. | https://www.artima.com/articles/scripting-java-the-beanshell-jsr | CC-MAIN-2022-40 | refinedweb | 1,407 | 60.14 |
Hey everybody,
I finished lesson 10 today, fileio, and thought "Hey, let's combine all I have learned before.
So I programmed a simple menu with switchcase, and thought I would give the user the choice between writing something into a file, read a file and print the content out, append something to a file or exit.
But when I wanted to test my first function (i.e. write something to a file) I get the following errors and can't quite figure it out :-( I bet it is quite simple!!
Any help is highly apreciated :-)
This is what I got so far:This is what I got so far:Code:~ thommy$ g++ switchfileio.cpp -o littletest switchfileio.h: In function ‘void write_file()’: switchfileio.h:15: error: ‘cout’ was not declared in this scope switchfileio.h:16: error: ‘cin’ was not declared in this scope switchfileio.h:21: error: ‘ofstream’ was not declared in this scope switchfileio.h:21: error: expected `;' before ‘a_file’ switchfileio.h:22: error: ‘a_file’ was not declared in this scope switchfileio.h: In function ‘void read_file()’: switchfileio.h:28: error: ‘cout’ was not declared in this scope switchfileio.h: In function ‘void appendto_file()’: switchfileio.h:32: error: ‘cout’ was not declared in this scope :~ thommy$
File: switchfileio.cpp
File: switchfileio.hFile: switchfileio.hCode:#include <iostream> #include <fstream> #include "switchfileio.h" using namespace std; int main(){ int input; // need this one for the switch-case-menu do { // this is the menu, it works, have tried this with another lesson! cout<<"1. Write something to a file\n"; cout<<"2. Read a file and print its content\n"; cout<<"3. Append a string to an existing file\n"; cout<<"4.Exit\n"; cin>>input; switch (input) { case 1: write_file(); break; case 2: read_file(); break; case 3: appendto_file(); break; case 4: cout << "Thanks for playing with me\n"; break; default: cout<<"Error\n"; break; } } while (input == 1 || input == 2 || input == 3 || input == 4); // this is to redraw the menu when one operation is done cin.get(); }
Code:void write_file(){ // my function that is called when the user wants to write a file char string[256]; // this is the string the user can enter char filename[15]; // this string is used to determine the filename of the file cout << "Please enter your text!\n"; cin >> string; cout << "You entered: " << string <<"\n"; // show the user what he has written cout << "Please enter a filename: "; // ask for the filename cin >> filename; ofstream a_file (filename); // creates an instance of ofstream and opens a file with the content of filename a_file<<string; // write the content of "string" into the file a_file.close(); // close the file cout << "file " << filename <<" is written!\n"; // give success message to the user } void read_file(){ cout << "you just loaded\n"; } void appendto_file(){ cout << "sorry, MP is not ready yet\n"; } | http://cboard.cprogramming.com/cplusplus-programming/118341-xy-not-declared-scope-compiler-error.html | CC-MAIN-2016-30 | refinedweb | 470 | 72.76 |
Recently, I was looking for a toy dataset for my new book’s chapter (you can subscribe to the updates here ) on instance segmentation. And, I really wanted to have something like the Iris Dataset for Instance Segmentation so that I would be able to explain the model without worrying about the dataset too much. But, alas, it is not always possible to get a dataset that you are looking for.
I actually ended up looking through various sources on the internet but inadvertently found that I would need to download a huge amount of data to get anything done. Given that is not at all the right way to go about any tutorial, I thought why not create my own dataset.
In the end, it turned out to be the right decision as it was a fun task and as it provides an end to end perspective on what goes on in a real-world image detection/segmentation project.
This post is about creating your own custom dataset for Image Segmentation/Object Detection.
Though I was able to find out many datasets for Object Detection and classification per se, finding a dataset for instance segmentation was really hard. For example, Getting the images and annotations from the Open image dataset meant that I had to download all the mask images for the whole OID dataset effectively making it a 60GB download. Something I didn’t want to do for a tutorial.
So I got to create one myself with the next question being which images I would want to tag? As the annotation work is pretty manual and lackluster, I wanted to have something that would at least induce some humor for me.
So I went with tagging the Simpson’s.
In the end, I created a dataset (currently open-sourced on Kaggle) which contains 81 image segmentations each for the five Simpson’s main characters (Homer, Lisa, Bert, Marge, and Maggie). While I could have started with even downloading Simpson’s images myself from the internet, I used the existing Simpsons Characters Image Dataset from Kaggle to start with to save some time. This particular Simpsons character dataset had a lot of images, so I selected the first 80 for each of these 5 characters to manually annotate masks. And it took me a day and a half from start to finish with just around 400 images but yeah I like the end result which is shown below.
You can also make use of this dataset under the Creative Commons license. And here are the file descriptions in the dataset.
img folder: Contains images that I selected (81 per character) for annotation from the simpsons_dataset directory in Simpsons Characters Data.
instances.json : Contains annotations for files in img folder in the COCO format.
test_images folder: Contains all the other images that I didn’t annotate from the simpsons_dataset directory in Simpsons Characters Data. These could be used to test or review your final model.
I remember using VIA annotation tool to create custom datasets a while back. But in this post, I would be using Hyperlabel for my labeling tasks as it provides a really good UI interface out of the box and is free. The process is essentially pretty simple, yet I will go through the details to make it easier for any beginner. Please note that I assume that you already have got the images that you want to label in a folder. Also, the folder structure doesn’t matter much as long as all images are in it but it would really be helpful in modeling steps if you keep it a little consistent. I had my data structure as:
- SimpsonsMainCharacters - bart_simpson - image0001.jpg - image0002.jpg - homer_simpson - lisa_simpson - marge_simpson - maggie_simpson
Hyperlabel is an annotation tool that lets you annotate images with both bounding boxes(For Object Detection) as well as polygons(For instance segmentation). I found it as one of the best tools to do annotation as it is fast and provides a good UI. Also, you can get it for free for MAC and Windows. The only downside is that it is not available for Linux, but I guess that would be fine for most of us. I for one annotated my images on a Windows system and then moved the dataset to my Linux Machine. Once you install Hyperlabel, you can start with:
The term COCO(Common Objects In Context) actually refers to a dataset on the Internet that contains 330K images, 1.5 million object instances, and 80 object categories.
But in the image community, it has gotten pretty normal to call the format that the COCO dataset was prepared with also as COCO. And this COCO format is actually what lots of current image detection libraries work with. For Instance in Detectron2, which is an awesome library for Instance segmentation by Facebook, using our Simpsons COCO dataset is as simple as:
from detectron2.data.datasets import register_coco_instances register_coco_instances("simpsons_dataset", {}, "instances.json", "path/to/image/dir")
Don’t worry if you don’t understand the above code as I will get back to it in my next post where I will explain the COCO format along with creating an instance segmentation model on this dataset. Right now, just understand that COCO is a good format because it plays well with a lot of advanced libraries for such tasks and thus makes our lives a lot easier while moving between different models.
Creating your own dataset might take a lot of time but it is nonetheless a rewarding task. You help the community by providing a good resource, you also are able to understand and work on the project end to end when you create your own datasets.
For instance, I understood just by annotating all these characters that it would be hard for the model to differentiate between Lisa and Maggie as they just look so similar. Also, I would be amazed if the model is able to make a good mask around Lisa, Maggie, or Bert’s Zigzag Hair. At least for me, it took a long time to annotate them.
In my next post, I aim to explain the COCO format along with creating an instance segmentation model using Detectron2 on this dataset. So stay tuned.
If you want to know more about various Object Detection techniques, motion estimation, object tracking in video, etc., I would like to recommend this excellent course on Deep Learning in Computer Vision in the Advanced machine learning specialization . If you wish to know more about how the object detection field has evolved over the years, you can also take a look at my last post on Object detection. | https://mlwhiz.com/blog/2020/10/04/custom-dataset-instance-segmentation-scratch/ | CC-MAIN-2021-25 | refinedweb | 1,114 | 58.72 |
- OOP and Tkinter
- common practice for creating utility functions?
- Dr. Dobb's Python-URL! - weekly Python news and links (May 15)
- help with perl2python regular expressions
- py on windows Mobile 5.0?
- Finding yesterday's date with datetime
- saving file permission denied on windows
- Embedding python in package using autoconf
- keyword help in Pythonwin interpreter
- pythoncom and IDispatch
- Decrypt DES by password
- Web development with python.
- Large Dictionaries
- Two ZSI questions
- How to organise classes and modules
- please help me is sms with python
- List and order
- Making all string literals Unicode
- Unicode to DOS filenames (to call FSUM.exe)
- Compile Python
- Web framework to recommend
- Problem with Tkinter on Mac OS X
- Tabs versus Spaces in Source Code
- Windows Copy Gui
- comparing values in two sets
- help using smptd
- continue out of a loop in pdb
- Starting/launching eric3
- SQl to HTML report generator
- send an email with picture/rich text format in the body
- Python API, Initialization, Path and classes
- do/while structure needed
- count items in generator
- copying files into one
- any plans to make pprint() a builtin?
- Aggregate funuctions broken in MySQLdb?
- Question regarding checksuming of a file
- distutils and binding a script to a file extension on windows
- Cellular automata and image manipulation
- cx_freeze and matplotlib
- retain values between fun calls
- LocaWapp 09 - localhost web applications
- Iterating generator from C
- Setting up InformixDb
- PyX on Windows
- Unicode & ZSI interaction ??
- Package that imports with name of dependent package
- SystemError: ... cellobject.c:22: bad argument to internal ?
- Named regexp variables, an extension proposal.
- Is it possible to set the date/time of a directory in windows with Python? If so how?
- Sending mail with attachment...
- plot data from Excel sheet by Matplotlib, how?
- Slovenian Python User Group
- getting rid of pass
- distributing a app frozen by cx_freeze
- How to install pyTrix?
- LocaWapp 09 - localhost web applications (add thread, add "Request" object, new "locawapp_main" function, fixed files.py," ...)
- unzip zip files
- having trouble importing a module from local directory
- Tkinter: ability to delete widgets from Text and then re-add them
- Wrong args when issuing a SIGUSR1 signal
- Discussion: Python and OpenMP
- TkTable for info gathering
- listener program in Python
- where do you run database scripts/where are DBs 'located'?
- nix logon in Python
- Creating an Active Desktop Window
- State of the "Vaults of Parnassus"
- Is there any pure python webserver that can use FCGI
- index of an element in a numarray array
- compiling module from string and put into namespace
- Python Translation of C# DES Encryption
- New blog
- deleting texts between patterns
- Decorator
- matplotlib
- A Unicode problem -HELP
- python soap web services
- Threads
- Time to bundle PythonWin
- calling perl modules from python
- SSL support in SOAPpy to build a SOAP server
- FTP filename escaping
- websphere6 ND soap access
- Reg Ex help
- linking with math.so
- find all index positions
- which windows python to use?
- String concatenation performance
- _PyList_Extend advice
- scientific libraries for python
- python equivalent of the following program
- redirecting print to a a file
- pythoncode in matlab
- Python memory deallocate
- Install libraries only without the program itself
- releasing interpreter lock in custom code?
- 2 books for me
- Python and windows bandwidth statistics?
- problem with array and for loop
- Nested loops confusion
- Slow network reading?
- glob() that traverses a folder tree
- Best form when using matrices and arrays in scipy...
- create a c++ wrapper for python class?
- Calling C/C++ functions in a python script
- segmentation fault in scipy?
- design a Condition class
- can distutils windows installer invoke another distutils windows installer
- PyThreadState_SetAsyncExc, PyErr_Clear and boost::python
- syntax for -c cmd
- python sqlite3 api question
- unittest: How to fail if environment does not allow execution?
- Python module generation (in C) from XML spec
- Decoupling fields from forms in Django
- PIL ImageFont getmask bug?
- PIL thumbnails unreasonably large
- Chicago Python Users Group Thurs May 11 at 7pm
- reusing parts of a string in RE matches?
- Delivery failure notification@
- wx.ListCtrl.EditLabel
- Proposal: Base variables
- Use subprocesses in simple way...
- MySQLdb trouble
- Redirecting unittest output to a Text widget
- New tail recursion decorator
- SciPy - I need an example of use of linalg.lstsq()
- optparse and counting arguments (not options)
- Reading Soap struct type
- Matplotlib in Python vs. Matlab, which one has much better graphicalpressentation?
- problemi con POST
- elementtidy, \0 chars and parsing from a string
- Shadow Detection?
- Can Python installation be as clean as PHP?
- How to recast integer to a string
- Is Welfare Part of Capitalism?
- Progress bar in web-based ftp?
- Upgrading Class Instances Automatically on Reload
- multiline strings and proper indentation/alignment
- Calling python functions from C
- lines of code per functional point
- distutils
- Finding defining class in a decorator
- Accessing std::wstring in Python using SWIG
- import in execv after fork
- Deferred Evaluation in Recursive Expressions?
- eval and exec in an own namespace
- What to use for adding syntax for hierarcical trees, metaclasses, tokenize.py or PLY?
- Python editor recommendation.
- installing numpy
- wxPython IEHtmlWindow mystery - dependencies?
- wxPython IEHtmlWindow mystery - dependencies?
- two of pylab.py
- Embedding Python in C++ Problem
- Embedding Python
- clear memory? how?
- Why 3.0/5.0 = 0.59999...
- PythonWin's Check (any other Lint tool) ?
- do "some action" once a minute
- Import data from Excel
- Delivery failure notification@
- Problem - Serving web pages on the desktop (SimpleHTTPServer)
- Problem - Serving web pages on the desktop (SimpleHTTPServer)
- Problem - Serving web pages on the desktop (SimpleHTTPServer)
- Problem - Serving web pages on the desktop (SimpleHTTPServer)
- Is there any plan to port python to ACCESS Palm Linux Platform?
- Enumerating Regular Expressions
- Memory leak in Python
- __dict__ in class inherited from C extension module
- Multi-line lambda proposal.
- ascii to latin1
- group for pure startups...
- Dr. Dobb's Python-URL! - weekly Python news and links (May 8)
- Help on a find Next Statement
- Global utility module/package
- Econometrics in Panel data?
- logging module: add client_addr to all log records
- i don't understand this RE example from the documentation
- PyX custom x-labels
- Using StopIteration
- advanced number recognition in strings?
- Problem with iterators and inheritance
- List Ctrl
- converting to scgi
- connect file object to standard output?
- Python Graphics Library
- regular expressions, substituting and adding in one step?
- Retrieving event descriptors in Tkinter
- hyperthreading locks up sleeping threads
- PYTHONPATH vs PATH?
- A better way to split up a list
- get Windows file type
- Web framework comparison video
- List of lists of lists of lists...
- How to get a part of string which follows a particular pattern using shell script
- Modifying PyObject.ob_type
- data entry tool
- printing out elements in list
- Python's regular expression?
- Tkfont.families does not list all installed fonts
- Why list.sort() don't return the list reference instead of None?
- Python CGI: Hidden fields don't show up in submitted keys!
- New pyparsing wiki
- reading a column from a file
- dial-up from python script
- which is better, string concatentation or substitution?
- How can I do this with python ?
- Getting HTTP responses - a python linkchecking script.
- evaluation of >
- utility functions within a class?
- why _import__ only works from interactive interpreter?
- released: RPyC 2.55
- the tostring and XML methods in ElementTree
- Python CHM Doc Contains Broken Links on Linux in xCHM.
- Passing options around your program
- Numerical Python Tutorial errors
- printing list
- NumTut view of greece
- algorithmic mathematical art
- md5 from python different then md5 from command line
- Integrate docs of site-packages with Python doc index?
- python rounding problem.
- MySQLdb problem
- Splice two lists
- os.isfile() error
- the print statement
- GladeGen and initializing widgets at startup
- sort a list of files
- Python Eggs Just install in *ONE* place? Easy to uninstall?
- Class Library for Numbers now available for Windows
- python 2.5a2, gcc 4.1 and memory problems
- Your email "Server Report" has been Quarantined
- combined files together
- Designing Plug-in Systems in Python
- how to construct a binary-tree using python?
- A critic of Guido's blog on Python's lambda
- Elegent solution to replacing ' and " ?
- MakeBot - IDE for learning Python
- How to doctest if __name__ already used?
- How to get the target of a Windows shortcut file
- How to get client's IP address in the threaded case ofSimpleXMLRPCServer?
- print formate
- easy way to dump a class instance?
- to thine own SELF be true...
- ConfigParser: values with ";" and the value blank
- python script help...
- MQSeries based file transfers using the pymqi module
- [ConfigParser] value with ; and the value blank
- Drop Down Menus...
- Best IDE for Python?
- PUDGE - Colored Code Blocks / Mailing List Access
- Is this a legal / acceptable statement ?
- PUDGE - Colored Code Blocks / Mailing List Access
- how to remove 50000 elements from a 100000 list?
- Tkinter Canvas Pre-Buffer
- Active Directory Authentication
- Method acting on arguements passed
- (question) How to use python get access to google search without query quota limit
- unittest.main-workalike that runs doctests too?
- Is this a good use of __metaclass__?
- ISSPR2006_School_etc.:NEW_EXTENDED_Deadline
- SIGTERM handling
- Embedding Python: How to run compiled(*.pyc/*.pyo) files using Python C API?
- whois info from Python?
- Unable to use py2app
- using urllib with ftp?
- Why does built-in set not take keyword arguments?
- Python 2.4.2 to 2.4.3 transition issue
- CRC calculation
- Can I use python for this .. ??
- regex to exctract informations
- [HOST] - Flexible Project Hosting with Python (similar to Assembla Breakout)
- pyuno and PDF output
- cross platform libraries
- Python for Perl programmers
- Calling superclass
- Progamming python without a keyboard
- pyuno and oootools with OpenOffice 2.0
- Tuple assignment and generators?
- Unicode code has no method float() ?
- Shed Skin Python-to-C++ Compiler - Summer of Code?
- Python sample code for PLSQL REF CURSORS
- Subclassing array
- Multiple Version Install?
- HTMLParseError: EOF in middle of construct error
- refactoring question
- Python function returns:
- __getattr__ for global namespace?
- about the implement of the PyString_InternFromString
- about the implement of the PyString_InternFromString
- RegEx with multiple occurrences
- Pythoncard question
- pleac
- Problem building extension under Cygwin (ImportError: Bad address)
- ZSI usage
- Usage of single click or double click in Tix programming
- pythonic way to sort
- stripping unwanted chars from string
- Any useful benefit to a tiny OS written in Python?
- Fastest quoting
- Defining class methods outside of classes
- scope of variables
- This coding style bad practise?
- __init__.py, __path__ and packaging
- Because of multithreading semantics, this is not reliable.
- Possibly dumb question about dicts and __hash__()
- python strings
- python modules for openAFS client functionalities
- Gettings subdirectories
- Using time.sleep() in 2 threads causes lockup when hyper-threading is enabled
- Mutable String
- problem with reload(sys) (doing reload on the sys module)
- A python problem about int to long promotion just see the idle session
- NaN handling
- I have problem with the qt-module333.dll
- Strange Threads Behaviour...
- Update Tix ExFileSelectBox
- IPython team needs a student for a google "Summer of Code" project.
- __getattr__ on non-instantiated class
- Numeric Python
- May I know Which is the best programming guide for using Tix in python
- Need Plone Information
- Customize IE to make a toolbar visible
- We finally have a complete Python project
- Sorting a list of dictionaries by dictionary key
- Running compiled Python files
- Playability of a file in windows media player
- how to install multiple pythons on window
- Calling a Postgres Function using CGI written in Python
- How to encode html and xml tag datas with standard python modules?
- Zope Guru...
- audio on os x using python, mad, ao
- Seeking students for the Summer of Code
- ImportError: No module named HTMLParser
- NewB question on text manipulation
- UPGMA algorithm ???
- Types-SIG Archives.
- what is the 'host' for SMTP?
- Anaglyph 3D Stereo Imaging with PIL and numpy
- Binary File Reading : Metastock
- detect video length in seconds
- Can subprocess point to file descriptor 5?
- assignment in if
- search file for tabs
- Dispatching operations to user-defined methods
- clearerr called on NULL FILE* ?
- urllib2 not support proxy on SSL connection?
- data regex match
- What is wrong with the Python home page?
- redemo.py with Tkinter
- Check IE toolbar visible
- Python & SSL
- Wake on LAN and Shutdown for Windows and Linux
- Polling from keyboard
- xml-rpc and 64-bit ints?
- simultaneous assignment
- RFC 822 continuations
- milliseconds are not stored in the timestamp KInterbasDB + Firebird
- Python and Windows new Powershell
- ConfigParser and multiple option names
- Pamie and Python - new browser focus
- py2app, pythoncard build problems
- produce xml documents with python, and display it with xsl in a internet browser
- --version?
- Strange result with math.atan2()
- pyqt v3.* and v4.*
- change a text file to a dictionary
- string.find first before location
- blank lines representation in python
- strip newlines and blanks
- stripping blanks
- stripping
- SGML parsing tags and leeping track
- Python distutil: build libraries before modules
- request help with Pipe class in iterwrap.py
- Using elementtree: replacing nodes
- Can Python kill a child process that keeps on running?
- Packet finding and clicking...
- Dr. Dobb's Python-URL! - weekly Python news and links (May 1)
- set partitioning
- SciTE: Printing in Black & White
- Difference between threading.local and protecting data with locks
- Could not find platform independent libraries
- using ftplib
- How do I take a directory name from a given dir?
- how to do with the multi-frame GIF
- how to do with the multi-frame GIF
- how to do with the multi-frame GIF
- Numeric, vectorization
- Multi-Monitor Support
- list*list
- Setting a module package to use new-style classes
- regd efficient methods to manipulate *large* files
- noob question: "TypeError" wrong number of args
- An Atlas of Graphs with Python
- Converting floating point to string in non-scientific format
- SPE / winpdb problem
- returning none when it should be returning a list?
- How to prevent this from happening?
- How can I implement a timeout with subprocess.Popen()?
- python colorfinding
- file open "no such file"
- python and xulrunner
- ending a string with a backslash
- How to efficiently read binary files?
- From the Tutorial Wiki: suggested restructure
- Measure memory usage in Python
- Hierarchy - how?
- Converting tuple to String
- Can I collapse a Panel in wxPython?
- unable to resize mmap object
- bonobo bindings?
- basic python programing
- setting file permissions on a web server
- OpenOffice UNO export PDF help needed
- wxpython - new line in radiobutton's label?
- how not use memmove when insert a object in the list
- resume picking items from a previous list
- Why does bufsize=1 not work in subprocess.Popen ?
- Inserting in to a list.
- Can we create an_object = object() and add attribute like for a class?
- opposite function to split?
- Lists and such
- self modifying code
- How to get computer name
- Need help removing list elements.
- midi input
- Python "member of" function
- PyQt 4.0beta1 Released
- Recommendations for a PostgreSQL db adapter, please?
- Using a browser as a GUI: which Python package
- Using a browser as a GUI: which Python package
- iputils module
- Popping from the middle of a deque + deque rotation speed
- GUI slider control
- popen3 on windows
- convert a int to a list
- an error in commented code?
- How to align a text of a Listbox to the right
- Possible constant assignment operators ":=" and "::=" for Python
- Undocumented alternate form for %#f ?
- an error in commented code?
- Summer of Code mailing list
- Urllib2 / add_password method
- Compiling SpiderMonkey (libjs) on Mac OS X (for use with Python ctypes)
- wxPython, wxcombobox opening
- wxPython problem
- Using Databases in Python
- os.startfile() - one or two arguments?
- best way to determine sequence ordering?
- Add file to zip, or replace file in zip
- Non-web-based templating system
- pgasync
- Strange constructor behaviour (or not?)
- python game with curses
- time conversions [hh:mm:ss.ms <-> sec(.ms)
- Doubt with wx.ListCtrl
- Pattern_Recognition_School+Conf_2006:Deadline_Appr oaching
- TypeError: 'module' object is not callable
- print out each letter of a word
- Two ElementTree questions
- raw_input passing to fun
- append function problem?
- Why not BDWGC?
- Foriegn contents in Python Packages...
- can i set up a mysql db connection as a class ?
- Converstion
- stdin: processing characters
- inheriting type or object?
- Using Parts of PEAK
- Protocols for Python?
- What do you use __init__.py for?
- fwd: Advanced Treeview Filtering Help
- Advanced Treeview Filtering Help
- twisted and tkinter chat client
- os.system call problem
- Regular Expression help
- OOP techniques in Python
- How to align the text of a Listbox to the right
- Editing a function in-memory and in-place
- PyEval_EvalFrame
- gcc errors
- how do I make a class global?
- pytiff for windows
- ILOG Server integration
- Get all attributes of a com object
- finding IP address of computer
- writing some lines after reading goes wrong on windows?
- String Exceptions (PEP 352)
- Unpacking a list of strings
- RELEASED Python 2.5 (alpha 2)
- searching for an article on name-binding
- midipy.py on linux
- can anyone advise me
- How to align the text of a Listbox to the right
- Direct acces to db unify on unix sco
- Editing a function in-memory and in-place
- Twisted and Tkinter
- can this be done without eval/exec?
- list of functions question
- pygresql - bytea
- C API []-style access to instance objects
- begging for a tree implementation
- print names of dictionaries
- modifying iterator value.
- Speed of shutil.copy vs os.system("copy src dest") in win32
- SimpleXMLRPCServer runnning as a Windows Service using win32serviceutil
- Events in Python?
- Query regarding support for IPv6 in python
- Query : sys.excepthook exception in Python
- A defense for bracket-less code
- blob problems in pysqlite
- scipy and py2exe
- Importing modules through directory shortcuts on Windows
- Inherit from array
- wxpython warnings
- win32com short path name on 2k
- not quite 1252
- pyqt or wxpython
- KeybordInterrupts and friends
- Python UPnP on Linux?
- Type-Def-ing Python
- Nested Lists Assignment Problem
- Introspection Class/Instance Name
- Passing Exceptions Across Threads
- how to browse using urllib2 and cookeilib the correct way
- how to save python interpreter's command history
- how to browse using urllib2 and cookeilib the correct way
- REMINDER: BayPIGgies: April 26, 7:30pm (Google)
- Drawing charts and graphs.
- How to avoid using files to store intermediate results
- Building a Dynamic Library (libpython.so) for Python 2.4.3 Final
- do while loop
- help finding
- can someone explain why ..
- cgi subprocess, cannot get output
- released: RPyC 2.50A
- Plotting package?
- Multithreading and Queue
- Xah's Edu Corner: Criticism vs Constructive Criticism
- PyThreadState_SetAsyncExc and native extensions
- parallel computing
- search an entire website given the homepage URL
- Dell Support Web Site Automation.
- Tkinter: Dynamic entry widget
- How do I set the __debug__ flag in a win32com server?
- How do I open a mysql database with python
- I have problems with creating the classic game Wumpus. the file:
- MySQLdb "begin()" - | https://bytes.com/sitemap/f-292-p-48.html | CC-MAIN-2019-43 | refinedweb | 3,042 | 54.83 |
URIs rather than using memory streams whenever you can. The XAML framework can associate the same media resources. APIs from the Windows.Graphics.Imaging namespace. You might need these APIs if your app scenario involves image file format conversions, or manipulation of an image where the user can save the result as a file. The encoding APIs are also supported by the Guidelines for scaling to pixel density. Exposing basic information about UI elements.
Windows 8 behavior
For Windows 8, resources can use a resource qualifier pattern to load different resources depending on device-specific scaling. However, resources aren't automatically reloaded if the scaling factor changes while the app is running. In this case apps would have to take care of reloading resources, by handling the DpiChanged event (or the deprecated LogicalDpiChanged event) and using ResourceManager APIs to manually reload the resource that's appropriate for the new scaling factor. Starting with Windows 8.1, any resource that was originally retrieved for your app is automatically re-evaluated if the scaling factor changes while the app is running. In addition, when that resource is the image source for an Image object, then one of the source-load events (ImageOpened or ImageFailed) is fired as a result of the system's action of requesting the new resource and then applying it to the Image. The scenario where a run-time scale change might happen is if the user moves your app to a different monitor when more than one is available.
If you migrate your app code from Windows 8 to Windows 8.1 you may want to account for this behavior change, because it results in ImageOpened or ImageFailed events that happen at run-time when the scale change is handled, even in cases where the Source is set in XAML. Also, if you did have code that handled DpiChanged/LogicalDpiChanged and reset the resources, you should examine whether that code is still needed given the new Windows 8.1 automatic reload behavior.
Apps that were compiled for Windows 8 but running on Windows 8.1 continue to use the Windows 8 behavior.
Requirements
See also
- FrameworkElement
- Quickstart: Image and ImageBrush
- XAML images sample
- Optimize media resources
- BitmapSource
- FlowDirection
- Windows.Graphics.Imaging
- Source | http://msdn.microsoft.com/en-us/library/windows/apps/br242752.aspx?cs-save-lang=1&cs-lang=cpp | CC-MAIN-2014-23 | refinedweb | 374 | 53.31 |
JUnit 4.7: Per-Test rules
- |
-
-
-
-
-
-
Read later
Reading List
A note to our readers: You asked so we have developed a set of features that allow you to reduce the noise: you can get email and web notifications for topics you are interested in. Learn more about our new features.. As described in an earlier blog post about the feature:
In JUnit 3 you could also manipulate the test running process itself in various ways. One of the prices of the simplicity of JUnit 4 was the loss of this “meta-testing”. It doesn’t affect simple tests, but for more powerful tests it can be constraining. The object framework style of JUnit 3 lent itself to extension by default. The DSL style of JUnit 4 doesn’t. Last night we brought back meta-testing, but much cleaner and simpler than before.
In addition to the capability of adding rules, a number of core rules have been added:
- TemporaryFolder: Allows test to create files and folders that are guaranteed to be deleted after the test is run. This is a common need for tests that work with the filesystem and want to run in isolation.
- ExternalResource: A pattern for resources that need to be set up in advance and are guaranteed to be torn down after the test runs. This will be useful for tests that work with sockets, embedded servers, and the like.
- ErrorCollector: Allows the test to continue running after a failure and report all errors at the end of the test. Useful when a test verifies a number of independent conditions (although that may itself be a 'test smell').
- ExpectedException: Allows a test to specify expected exception types and messages in the test itself.
- Timeout: Applies the same timeout to all tests in a class.
If you'd like to see an example of a rule in action, here's a test using the TemporaryFolder and ExpectedException rules:
public class DigitalAssetManagerTest { @Rule public TemporaryFolder tempFolder = new TemporaryFolder(); @Rule public ExpectedException exception = ExpectedException.none(); @Test public void countsAssets() throws IOException { File icon = tempFolder.newFile("icon.png"); File assets = tempFolder.newFolder("assets"); createAssets(assets, 3); DigitalAssetManager dam = new DigitalAssetManager(icon, assets); assertEquals(3, dam.getAssetCount()); } private void createAssets(File assets, int numberOfAssets) throws IOException { for (int index = 0; index < numberOfAssets; index++) { File asset = new File(assets, String.format("asset-%d.mpg", index)); Assert.assertTrue("Asset couldn't be created.", asset.createNewFile()); } } @Test public void throwsIllegalArgumentExceptionIfIconIsNull() { exception.expect(IllegalArgumentException.class); exception.expectMessage("Icon is null, not a file, or doesn't exist."); new DigitalAssetManager(null, null); } }
To make rule development easier, a few base classes for rules have been added:
- Verifier: A base class for rules like ErrorCollector that can turn failing tests into passing ones if a verification check is failed.
- TestWatchman: A base class for rules that observe the running of tests without modifying the results.
Rules were called Interceptors when they made their first appearance in earlier builds of JUnit 4.7. In addition to the rules, JUnit 4.7 also includes:
- Some changes to the matchers.
- Tests that timeout now show the stack trace; this can help to diagnose the cause of the test timing out.
- Improvements to Javadoc and a few bugs fixed.
More information on these features is available in the JUnit 4.7 release notes. Hamcrest 1.2 support was listed in earlier release notes, but has been rolled back for this release.
While you're waiting for the final release, you can download the release candidate from github, browse org.junit.rules gear, fill out the survey, read about the deadpooling of Kent Beck's JUnit Max, and wait for user reactions to JUnit 4.7 on blogs, friendfeed and twitter.
Rate this Article
- Editor Review
- Chief Editor Action
Hello stranger!You need to Register an InfoQ account or Login or login to post comments. But there's so much more behind being registered.
Get the most out of the InfoQ experience.
Tell us what you think
TemporaryFolder
by
Geoffrey Wiseman
I'm interested to see if this develops into a healthy ecosystem of third-party rules.
Nice.
by
Jeff Brown
For the past two years, I have been thinking about building a similar feature for MbUnit v3 in the form of test "mixins." It hasn't happened yet but maybe in an upcoming release... :-) | https://www.infoq.com/news/2009/07/junit-4.7-rules | CC-MAIN-2018-13 | refinedweb | 724 | 56.25 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
[v9]How to set up a constraint for the full record
Hi,
I would like to apply a constraint on specific fields whatever the fileds that are modified.
For example I would like to check that field 'my_field' is checked whatever the filed that is changed :
@api.one
@api.constrains('write_date', 'create_date', 'my_field')
def my_constraints(self):
.../... Blabla raise ValdiationError
This part works well if my_field is modified, yet it's not thrown if another field is changed.
So the constraints does not apply each time you validate a record.
How is it possible to do so?
Do I need to override the write method?
regards
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/v9-how-to-set-up-a-constraint-for-the-full-record-100184 | CC-MAIN-2017-34 | refinedweb | 160 | 66.23 |
., like
weak : use a weak reference map for caching (default)
soft : use a soft reference map for caching
strong : use a Map for caching (objects not garbage collected)
none : no caching (hence uses least memory).).
The FROM clause declares query identification variables that represent iteration over objects in the database. The syntax of the FROM clause is as follows:
from_clause ::= FROM identification_variable_declaration {, {identification_variable_declaration | collection_member_declaration}}* identification_variable_declaration ::= range_variable_declaration { join | fetch_join }* range_variable_declaration ::= entity_name [AS] identification_variable join ::= join_spec join_association_path_expression [AS] identification_variable using annotations
@Entity(name="ThePerson") public class Person ...
The FROM clause also] [join_condition] *
With JPQL. The join_condition is an optional ON clause that is in addition to navigating along the relation that was specified.'
If you specify "LEFT OUTER FETCH" or "INNER FETCH" (i.e you specify FETCH) this means that you want those fields/properties fetching by this query. This doesn’t mean that DataNucleus will definitely fetch them in the same query (because sometimes it is impossible to fetch things like multi-valued fields in a single query) but that it will attempt to fetch all fields that are selected (as well as the ones that are defaulted to EAGER).?
In strict JPA you cannot join to another "root" element. That is, you define JOIN syntax to the following element along a relation from the previous element. DataNucleus supports joining to a (new) "root" element potentially without any relation. See this example
SELECT p FROM Person p LEFT OUTER JOIN Address a ON p.addressName = a.name
Here we simply chose an ON clause to join the two roots.
In strict JPA you cannot join to an embedded element class (of an embeddable). With DataNucleus you can do this, and hence form queries using fields of the embeddable (not available in most other JPA providers). See this example, where class Person has a Collection of embeddable Address objects.
SELECT p FROM Person p LEFT OUTER JOIN p.addresses a WHERE a.name = 'Home'
RDBMS : By default if you don’t specify the JOIN to some related object in the FROM clause and instead navigate through a 1-1/N-1 relation like "a.owner" then it will join using INNER JOIN. You can change this default by specifying the persistence property (to apply to all queries) or query extension datanucleus.query.jpql.navigationJoinType and set it to either "INNERJOIN" or "LEFTOUTERJOIN". You can also set the default for the filter only using the persistence property(to apply to all queries) or query extension datanucleus.query.jpql.navigationJoinTypeForFilter and set it to either "INNERJOIN" or "LEFTOUTERJOIN".)..
Some examples
SELECT p.firstName, p.lastName FROM Person p GROUP BY p.lastName SELECT p.firstName, p.lastName FROM Person p GROUP BY p.lastName HAVING COUNT(p.lastName) > 1
The ORDER BY clause allows the objects or values that are returned by the query to be ordered. The syntax of the ORDER BY clause is
orderby_clause ::= ORDER BY orderby_item {, orderby_item}* orderby_item ::= state_field_path_expression | result_variable {ASC | DESC} aggregate functions for aggregating the values of a field for all rows of the results..
[1] These functionsP; } .... }
<entity-mappings> <package>mydomain.samples.pggeometry</package> <entity class="mydomain.samples.pggeometry.SampleLineString"> <extension key="spatial-dimension" value="2"/> <extension key="spatial-srid" value="4326"/> <attributes> <id name="id"/> <basic name="name"/> <basic name="geom"> <extension key="mapping" value="no-userdata"/> <(); = em.createQuery( = em.createQuery("SELECT VALUE(p.addresses) FROM Person p WHERE KEY(p.addresses) = 'London Flat'");
Beyond this, you can also make use of the map functions and use the size of the map for example. e FROM Employee e WHERE e.salary > (SELECT avg(f.salary).
The query result (SELECT) clause allows a user to select particular fields, objects etc. The syntax of the query result clause is as follows:)
Note that if the setter property name doesn’t match the query result component name, you should use AS {alias} in the query so they are the same.
A special case, where you don’t have a result class but want to easily extract multiple columns in the form of a Tuple JPA provides a special class javax.persistence.Tuple to supply as the result class in the above call. From that you can get hold of the column aliases, and their values.
Query<PersonName> q = em.createQuery( "SELECT p.firstName, p.lastName FROM Person p WHERE p.age > 20", Tuple.class); List<Tuple> results = q.getResultList(); for (Tuple t : results) { List<TupleElement> cols = t.getElements(); for (TupleElement col : cols) { String colName = col.getAlias(); Object value = t.get(colname); } }
Query query = em.createNamedQuery("SoldOut"); List<Product> results = query.getResultList();
You can save a query as a named query like this
Query q = em.createQuery("SELECT p FROM Product p WHERE ..."); ... emf.addNamedQuery("MyQuery", q);
DataNucleus also allows you to create a query, and then save it as a "named" query directly with the query. You do this as follows
Query q = em.createQuery("SELECT p FROM Product p WHERE ..."); ((org.datanucleus.api.jpa.JPAQuery)q).saveAsNamedQuery("MyQuery");
With both methods you can thereafter access the query via
Query q = em.createNamedQuery("MyQuery");
With a JPQL query running on an RDBMS the query is compiled into SQL. Here we give a few examples of what SQL is generated. You can of course try this for yourself observing the content of the DataNucleus log, Person p WHERE :param MEMBER OF p.friends # SQL: SELECT DISTINCT P.ID FROM PERSON P WHERE EXISTS ( SELECT 1 FROM PERSON_FRIENDS P_FRIENDS, PERSON P_FRIENDS_1 WHERE P_FRIENDS.PERSON_ID = P.ID AND P_FRIENDS_1.GLOBAL_ID = P_FRIENDS.FRIEND_ID AND 101 = P_FRIENDS_1.ID)
The JPA specification defines a mode of JPQL for deleting objects from the datastore. NOTE: are case-insensitive.
Query query = em.createQuery("UPDATE Person p SET p.salary = 10000 WHERE age = 18"); int numRowsUpdated = query.executeUpdate();
In strict JPA you cannot use a subquery in the UPDATE clause. With DataNucleus JPA you can do this so, for example, you can set a field to the result of a subquery.
Query query = em.createQuery("UPDATE Person p SET p.salary = (SELECT MAX(p2.salary) FROM Person p2 WHERE age < 18) WHERE age = 18");
By default DataNucleus allows some extensions in syntax over strict JPQL (as defined by the JPA spec). To allow only strict JPQL you can do as follows
Query query = em.createQuery(...); query.setHint("datanucleus.jpql.strict", "true");
The BNF defining the JPQL query language syntax) advantages of the Static MetaModel are that it means that your queries are refactorable if you rename a field, and also that you can dynamically generate the query.. We make use of the
CriteriaBuilder to do this Static MetaModel too, like this
Path nameField = candidateRoot.get(Person_.name); crit.select(nameField);
The basic Criteria query above is fine, but you may want to define some explicit joins. To do this we need to use the Criteria API.
//); addressJoin.alias("a");
which equates to
FROM mydomain.Person p JOIN p.address a
Should we want to impose a WHERE clause filter, we use the
where method on
CriteriaQuery, using
CriteriaBuilder to build the WHERE clause.
// String-based: Predicate nameEquals = cb.equal(candidateRoot.get("name"), "First"); crit.where(nameEquals); // MetaModel-based: Predicate nameEquals = cb.equal(candidateRoot.get(Person_.name), "First"); crit.where(nameEquals);
which equates to
WHERE p.name = 'FIRST')
You can also invoke functions, so a slight variation on this clause would be
// String-based: Predicate nameUpperEquals = cb.equal(cb.upper(candidateRoot.get("name")), "FIRST");));
which equates to
WHERE p.name = 'FIRST' AND p.age = 18
//));
which equates to
WHERE p.name = 'FIRST' OR p.age = 18
Should we want to impose an ORDER clause, we use the
orderBy method on
CriteriaQuery, using
CriteriaBuilder to build the ORDERing clause.
//. This equates to
ORDER BY p.name DESC NULLS FIRST
Similarly there is a method
nullsLast if you wanted nulls to be put at the end of the list
Another common thing we would want to do is specify input parameters. We define these using the
CriteriaBuilder API. Let’s take an example of a WHERE clause!"); Subquery<Double> avgSalary = subCrit.select(cb.avg(subCandidate.get("salary"))); //);
You can make use of CASE expressions with Criteria, like this
Path<Integer> ageVar = candidate.get(Person_.age); Predicate infantAge = cb.lessThan(ageVar, 5); Predicate schoolAge = cb.greaterThanOrEqualTo(ageVar, 5).and(cb.lessThanOrEqualTo(ageVar, 18)); cb.selectCase().when(infantAge, "Infant").when(schoolAge, "Child").otherwise("Adult");
so this generates the equivalent of this JPQL
CASE WHEN (p.age < 5) THEN 'Infant' WHEN (p.age >= 5 AND p.age <= 18) THEN 'Child' ELSE 'Adult': annotation
If class X extends another class S, where S is the most derived managed class (i.e., entity or mapped superclass) extended by X, then class X_ must extend class S_, where S_ is the meta-model you would need the above); | http://www.datanucleus.org/products/accessplatform/jpa/query.html | CC-MAIN-2019-22 | refinedweb | 1,460 | 51.24 |
#include <libunwind.h>
int
unw_get_proc_name(unw_cursor_t *cp,
char *bufp,
size_t
len,
unw_word_t *offp);
The unw_get_proc_name() routine returns the name of the procedure that created the stack frame identified by argument cp. The bufp argument is a pointer to a character buffer that is at least len bytes long. This buffer is used to return the name of the procedure. The offp argument is a pointer to a word that is used to return the byte-offset of the instruction-pointer saved in the stack frame identified by cp, relative to the start of the procedure. For example, if procedure foo() starts at address 0x40003000, then invoking unw_get_proc_name() on a stack frame with an instruction-pointer value of 0x40003080 would return a value of 0x80 in the word pointed to by offp (assuming the procedure is at least 0x80 bytes long).
Note that on some platforms there is no reliable way to distinguish between procedure names and ordinary labels. Furthermore, if symbol information has been stripped from a program, procedure names may be completely unavailable or may be limited to those exported via a dynamic symbol table. In such cases, unw_get_proc_name() may return the name of a label or a preceeding (nearby) procedure. However, the offset returned through offp is always relative to the returned name, which ensures that the value (address) of the returned name plus the returned offset will always be equal to the instruction-pointer of the stack frame identified by cp.
On successful completion, unw_get_proc_name() returns 0. Otherwise the negative value of one of the error-codes below is returned.
unw_get_proc_name() is thread-safe. If cursor cp is in the local address-space, this routine is also safe to use from a signal handler.
libunwind(3), unw_get_proc_info(3)
David Mosberger-Tang
WWW:. | http://www.nongnu.org/libunwind/man/unw_get_proc_name(3).html | CC-MAIN-2015-14 | refinedweb | 295 | 50.97 |
MIDI Shield Hookup Guide
Introduction
The Sparkfun MIDI Shield allows you to add MIDI ports to your R3-compatible Arduino board.
The shield provides the standard MIDI port circuits, including 5-pin DIN connectors and an opto-isolated MIDI input. The shield also has some extra input and output devices: LEDs on D6 and D7, pushbuttons on D2, D3 and D4, and rotary potentiometers on A0 and A1.
The V1.5 revision of the shield is shown here.
This guide will show you how to put the shield together, then explore several example projects, demonstrating how you can apply MIDI to your projects.
Suggested Reading
- If you're new to MIDI, our MIDI Tutorial should help you get up to speed.
- We're going to use the 47 Effects MIDI Library as the heart of the example projects.
Assembly
Before assembling your MIDI shield, consider how you're going to use it. If you don't need buttons and potentiometers, you can leave them off. Similarly, if you only need the MIDI input or output port, you can leave the other port off. Finally, there are a couple of options for headers -- you can select headers that fit your application.
Materials
The MIDI shield kit contains the following parts.
- The MIDI Shield PCB
- 2x 5-pin DIN conectors
- 2x 10K rotary potentiometer
- 3x 12mm tactile pushbutton switches
You'll also need to select headers that fit your application.
- You can use the R3 stackable header kit, but they're a bit tall and make the buttons hard to reach.
- If you're using an Arduino board that predates R3, such as the Arduino Pro, you can use the regular stackable header kit.
- Alternately, you can use regular snappable headers, which don't stick as far above the board, leaving plenty of room to access the switches and potentiometers.
Tools
The following tools are recommended.
- A soldering iron with a fine-point tip.
- Some solder, either leaded or lead-free.
- A magnifying glass or loupe.
- A vise to hold the PCB as you work.
Building The MIDI Shield
If you're using stackable headers, they're easiest to put on at the very beginning (if you're using the snappable headers, we'll save them until the end). To install the stackable headers, put them through the board from the top side
Then flip the board over so it's supported by the bodies of the headers. Adjust the alignment until the headers stand squarely in your work surface, and solder them in place.
Next, we'll install the components on the top of the board in order from the shortest to the tallest. We'll start with the pushbuttons -- snap them through the holes, then solder them in place.
Following that, install the MIDI jacks, taking care to keep them seated to the PCB while you solder.
As the tallest component, the potentiometers go on next. Getting them into the PCB can take some effort -- you might need to nudge the tabs a little bit to get them through the board. Before you solder, double-check that the shafts are perpendicular to the surface of the board, because it takes a lot of desoldering effort to straighten them if they're crooked.
Finally, if you're using snappable headers instead of the stacking ones, install them. It's easiest if you use an Arduino board as an assembly jig. Snap the headers into suitable lengths (6, 8, 8 and 10 for an R3 board), and push them into the sockets on the Arduino. Then lay the MIDI shield PCB over them, as shown below.
Solder them from the top of the board.
In Operation
Once it's assembled, place the MIDI shield on top of your Arduino, then connect your MIDI devices to the ports.
RUN vs. PROG
There's one last thing to mention: near the MIDI input jack is a small slide switch.
By default, the shield uses the hardware serial port on the Arduino for MIDI communication -- but that port is shared with the bootloader, which is initiated when you press the "load" button in the Arduino IDE. The switch allows them to share the port politely, avoiding output contention.
If you're using the hardware serial port, set the switch to the PROG position, before you load your sketch. Once it's loaded and verified, set it back to RUN.
In the next section we'll discuss using a software serial port. One advantage of software serial is that you don't need to remember to flip the switch every time you load!
Firmware Foundations
Arduino Compatible I/O
The MIDI shield features the MIDI circuitry, plus a few extra input and output devices. It has three pushbuttons, two potentiometers, and two LEDs.
These devices are legended with their pin assignments, and can be interfaced using standard Arduino functions.
- The pushbuttons are on D2, D3, and D4. To use them, enable the corresponding pins as inputs with pullups using pinMode(<pin number>, INPUT_PULLUP); and read them with digitalRead(<pin number>). The inputs are active low -- they will normally read as a logic HIGH, going LOW while the button is pressed.
- The potentiometers are connected to A0 and A1. You can read them using analogRead(<pin number>).
- Finally, the LEDs are on D6 (green) and D7 (red). The outputs are enabled using pinMode(<pin number>, OUTPUT), and set using digitalWrite(<pin number>, <HIGH or LOW>). Like the buttons, these are active low -- writing HIGH turns the LED off.
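Putting those together, a minimal sketch that exercises one of each device might look like this (written for this guide as an illustration -- the pin numbers follow the shield's legend):

const int BTN = 2;   // D2 pushbutton (D3 and D4 work the same way)
const int POT = A0;  // A0 potentiometer (A1 is the other one)
const int LED = 7;   // D7 red LED (the green LED is D6)

void setup()
{
  pinMode(BTN, INPUT_PULLUP);  // buttons are active low
  pinMode(LED, OUTPUT);
  Serial.begin(9600);
}

void loop()
{
  // Button and LED are both active low, so this lights the LED
  // while the button is held.
  digitalWrite(LED, digitalRead(BTN));

  // Report the pot position (0..1023)
  Serial.println(analogRead(POT));
  delay(100);
}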
You can find a quick example sketch to test the buttons, pots and LEDs in the MIDI Shield GitHub Repository.
Arduino Library Install
Note: This example assumes you are using the latest version of the Arduino IDE on your desktop. If this is your first time using Arduino, please review our tutorial on installing the Arduino IDE.
We'll be using the following libraries as the basis for the following examples. We'll discuss some specifics of its application in each example. If you have not previously installed an Arduino library, please check out our installation guide.
Arduino MIDI Library
If you've read the implementation section of our MIDI tutorial, then you've seen that generating and parsing MIDI messages can be a little tricky. Thankfully, Franky at Forty Seven Effects has written a stable and flexible MIDI library for Arduino, which he has released under the MIT license.
The library handles the communication aspects of MIDI, allowing you to send and receive MIDI commands. It implements the communication layer, but it does not make any implications about how the library is applied. It is a suitable basis for almost any type of MIDI device, from simple message filters and splitters, to complex applications like synthesizers, sequencers, and drum machines.
The library is robust and flexible, with some well-designed features.
- It can use hard or soft serial ports -- you don't need to lose Serial.print() for debugging when you use it.
- It can handle MIDI in Omni mode or be set to a specific channel.
- You can enable a soft-thru feature, which merges incoming bytes with the output, when you don't have a hardware thru port available.
- It uses an object-oriented template instantiation, which allows you to declare multiple MIDI ports on the same device, as you might do in a merger or splitter.
- You can enable or disable some more esoteric features, like sending running status and implicit note off, or using a nonstandard baud rate.
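For example, a minimal sketch that declares a port on the hardware serial pins and sends a note looks like this (an illustration written for this guide, not one of the library's bundled examples):

#include <MIDI.h>

// Instantiate a MIDI port on the hardware serial pins
MIDI_CREATE_INSTANCE(HardwareSerial, Serial, MIDI);

void setup()
{
  MIDI.begin(MIDI_CHANNEL_OMNI);  // listen on all channels
}

void loop()
{
  MIDI.sendNoteOn(60, 100, 1);  // note 60 (middle C), velocity 100, channel 1
  delay(500);
  MIDI.sendNoteOff(60, 0, 1);
  delay(500);
}

Because this port shares the hardware UART with the bootloader, the RUN/PROG switch described earlier applies here, too.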
You can obtain this library through the Arduino Library Manager. Do a search for "midi" and scroll down the results to find the "MIDI Library by Forty Seven Effects." To manually install the library, download the files from the GitHub repository. Unzip them, and put the contents of the ...\src folder into your Arduino library path. On the author's PC, the library was placed in C:\Users\author\Documents\Arduino\libraries\MIDI. You can also download the repo with the link below.
There is also extensive documentation in Doxygen format, including several sample applications.
MsTimer2 Library
The examples also use the MsTimer2 library. You can obtain this library through the Arduino Library Manager. Do a search for "mstimer" to find "MsTimer2 by Javier Valencia." To manually install the library, download the files from the GitHub repository. You can also download the repo with the link below.
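MsTimer2 wraps the ATmega's Timer 2 in a simple interface: you hand it an interval in milliseconds and a callback, and it invokes the callback repeatedly. The clock generator below uses it to schedule MIDI ticks. Basic usage looks like this:

#include <MsTimer2.h>

void tick()
{
  // Called from the timer interrupt -- keep this short,
  // e.g. just set a flag for loop() to act on.
}

void setup()
{
  MsTimer2::set(20, tick);  // call tick() every 20 milliseconds
  MsTimer2::start();
}

void loop()
{
}

Since the library takes over Timer 2, PWM on pins 3 and 11 (on an Uno-class board) is unavailable while it runs.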
Example #1: Clock Generator & Receiver
The first example we'll demonstrate is synchronizing multiple devices using MIDI clock commands.
For this example, we'll be building two different variants of the clock code. The first variant is the master clock, which generates periodic MIDI timing clock (0xF8) bytes, and start (0xFA), stop (0xFC), and continue (0xFB) messages. These messages are used to transmit musical timing from one device to another, acting as a high resolution metronome.
The other end of the link listens for those messages and responds to them.
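MIDI timing clock runs at a fixed rate of 24 ticks per quarter note, so the interval between clock bytes follows directly from the tempo -- this is also why the sketches below count ticks and roll over at 24, marking quarter notes. As a quick sanity check (a standalone helper written for this guide, not part of the sketches):

// MIDI clock is 24 pulses per quarter note (PPQN).
// A quarter note lasts 60000/BPM milliseconds, so:
uint32_t tick_ms(uint32_t bpm)
{
  return 60000UL / (bpm * 24UL);  // e.g. 120 BPM -> ~20 ms per tick
}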
For this demonstration, we'll be using a pair of RedBoards, each with a MIDI shield. You can substitute either end of the link for a device that implements MIDI clock, as we'll show below.
The Firmware
Each of these boards gets loaded with a different sketch. The master clock will get loaded with the clock-gen.ino sketch.
/******************************************************************************
clock-gen.ino
Use SparkFun MIDI Shield as a MIDI clock generator.

Byron Jacquot, SparkFun Electronics
October 8, 2015

Generate MIDI clock messages at the tempo indicated by A1.
Send start/stop messages when D2 is pressed, and continue when D3 is pressed.

Resources:
This sketch has a clock receiving counterpart in clock-recv.ino.
******************************************************************************/

#include <SoftwareSerial.h>
#include <MsTimer2.h>
#include <MIDI.h>

// Pin assignments, per the MIDI Shield hardware
#define PIN_LED_PLAYING 6
#define PIN_LED_TEMPO 7
#define PIN_PLAY_INPUT 2
#define PIN_CONTINUE_INPUT 3

#define PIN_TEMPO_POT 1

// Number of consecutive low reads required to register a press (tune to taste)
#define DEBOUNCE_COUNT 50

// Soft serial port, on the shield's alternate serial pins (8 = RX, 9 = TX)
SoftwareSerial SoftSerial(8, 9);

MIDI_CREATE_INSTANCE(HardwareSerial, Serial, MIDI);
//MIDI_CREATE_INSTANCE(SoftwareSerial, SoftSerial, MIDI);

bool running;
bool send_start;
bool send_stop;
bool send_continue;
bool send_tick;

uint32_t tempo_delay;

void play_button_event()
{
  // toggle running state,
  // send corresponding responses
  running = !running;

  if(running)
  {
    send_start = true;
    digitalWrite(PIN_LED_PLAYING, LOW);
  }
  else
  {
    send_stop = true;
    digitalWrite(PIN_LED_PLAYING, HIGH);
  }
}

void cont_button_event()
{
  // ignore continue if running
  if(!running)
  {
    send_continue = true;
    running = true;
    digitalWrite(PIN_LED_PLAYING, LOW);
  }
}

void timer_callback()
{
  send_tick = true;
}

void check_pots()
{
  uint32_t pot_val;
  uint32_t calc;

  pot_val = analogRead(PIN_TEMPO_POT);

  // Result is 10 bits
  calc = (((0x3ff - pot_val) * 75)/1023) + 8;

  tempo_delay = calc; //* 5;
}

void check_buttons()
{
  uint8_t val;
  static uint16_t play_debounce = 0;
  static uint16_t cont_debounce = 0;

  // First the PLAY/STOP button
  val = digitalRead(PIN_PLAY_INPUT);

  if(val == LOW)
  {
    play_debounce++;

    if(play_debounce == DEBOUNCE_COUNT)
    {
      play_button_event();
    }
  }
  else
  {
    play_debounce = 0;
  }

  // Then the continue button
  val = digitalRead(PIN_CONTINUE_INPUT);

  if(val == LOW)
  {
    cont_debounce++;

    if(cont_debounce == DEBOUNCE_COUNT)
    {
      cont_button_event();
    }
  }
  else
  {
    cont_debounce = 0;
  }
}

void setup()
{
  // put your setup code here, to run once:

  // LED outputs
  pinMode(PIN_LED_PLAYING, OUTPUT);
  pinMode(PIN_LED_TEMPO, OUTPUT);
  digitalWrite(PIN_LED_PLAYING, HIGH);
  digitalWrite(PIN_LED_TEMPO, HIGH);

  // button inputs
  pinMode(PIN_PLAY_INPUT, INPUT_PULLUP);
  pinMode(PIN_CONTINUE_INPUT, INPUT_PULLUP);

  //  Serial.begin(9600);
  //  Serial.println("Setting up");

  //  SoftSerial.begin(31250);

  // do I need to init the soft serial port?
#if 1
  MIDI.begin(MIDI_CHANNEL_OMNI);
  MIDI.turnThruOff();
#endif

  running = false;
  send_start = false;
  send_stop = false;
  send_tick = false;

  // prime the tempo pump
  check_pots();
  //  check_timing();

  MsTimer2::set(tempo_delay, timer_callback);
  MsTimer2::start();
}

void loop()
{
  static uint32_t loops = 0;
  static uint8_t ticks = 0;
  static uint8_t prev_ticks = 0;

  bool reset_timer = false;

  // put your main code here, to run repeatedly:

  // turn the crank...
  MIDI.read();

  // Check buttons
  check_buttons();

  // process inputs
  if(send_start)
  {
    MIDI.sendRealTime(MIDI_NAMESPACE::Start);
    send_start = false;
    //    Serial.println("Starting");

    ticks = 0;

    // Next tick comes immediately...
    // it also resets the timer
    send_tick = true;
  }

  if(send_continue)
  {
    MIDI.sendRealTime(MIDI_NAMESPACE::Continue);
    send_continue = false;
    //    Serial.println("continuing");

    // Restore the LED blink counter
    ticks = prev_ticks;

    // Next tick comes immediately...
    // it also resets the timer
    send_tick = true;
  }

  if(send_stop)
  {
    MIDI.sendRealTime(MIDI_NAMESPACE::Stop);
    send_stop = false;

    prev_ticks = ticks;
    //    Serial.println("Stopping");
  }

  if(send_tick)
  {
    MIDI.sendRealTime(MIDI_NAMESPACE::Clock);
    send_tick = false;

    ticks++;
    if(ticks < 6)
    {
      digitalWrite(PIN_LED_TEMPO, LOW);
    }
    else if(ticks == 6)
    {
      digitalWrite(PIN_LED_TEMPO, HIGH);
    }
    else if(ticks >= 24)
    {
      ticks = 0;
    }

    check_pots();

    reset_timer = true;
  }

  if(reset_timer)
  {
    MsTimer2::stop();
    MsTimer2::set(tempo_delay, timer_callback);
    MsTimer2::start();
    reset_timer = false;
  }

  loops++;
}
The clock chasing board is loaded with the clock-recv.ino sketch.
/******************************************************************************
clock-recv.ino
Use SparkFun MIDI Shield as a MIDI clock receiver.

Byron Jacquot, SparkFun Electronics
October 8, 2015

Listen for clock/start/stop/continue messages on the MIDI input.

Resources:
This sketch has a clock generating counterpart in clock-gen.ino.
******************************************************************************/

#include <SoftwareSerial.h>
#include <MIDI.h>

// Pin assignments, per the MIDI Shield hardware
#define PIN_LED_PLAYING 6
#define PIN_LED_TEMPO 7

// Soft serial port, on the shield's alternate serial pins (8 = RX, 9 = TX)
SoftwareSerial SoftSerial(8, 9);

//MIDI_CREATE_INSTANCE(SoftwareSerial, SoftSerial, MIDI);
MIDI_CREATE_INSTANCE(HardwareSerial, Serial, MIDI);

void setup()
{
  // put your setup code here, to run once:

  // LED outputs
  pinMode(PIN_LED_PLAYING, OUTPUT);
  pinMode(PIN_LED_TEMPO, OUTPUT);
  digitalWrite(PIN_LED_PLAYING, HIGH);
  digitalWrite(PIN_LED_TEMPO, HIGH);

  // button inputs

  //  Serial.begin(9600);
  //  Serial.println("Setting up");

  //  SoftSerial.begin(31250);

  // do I need to init the soft serial port?
  // No - MIDI will do it.
#if 1
  MIDI.begin(MIDI_CHANNEL_OMNI);
  MIDI.turnThruOff();
#endif
}

void loop()
{
  static uint32_t loops = 0;
  static uint8_t ticks = 0;
  static uint8_t prev_ticks = 0;

  // put your main code here, to run repeatedly:

  // turn the crank...
  if(MIDI.read())
  {
    switch(MIDI.getType())
    {
      case midi::Clock :
      {
        ticks++;
        //Serial.print('.');
        //        Serial.println(ticks);

        if(ticks < 6)
        {
          digitalWrite(PIN_LED_TEMPO, LOW);
          //Serial.print('#');
        }
        else if(ticks == 6)
        {
          digitalWrite(PIN_LED_TEMPO, HIGH);
        }
        else if(ticks >= 24)
        {
          ticks = 0;
          //          Serial.print('\n');
        }
      }
      break;

      case midi::Start :
      {
        digitalWrite(PIN_LED_PLAYING, LOW);
        ticks = 0;
        //        Serial.println("Starting");
      }
      break;

      case midi::Stop :
      {
        digitalWrite(PIN_LED_PLAYING, HIGH);
        prev_ticks = ticks;
        //        Serial.println("Stopping");
      }
      break;

      case midi::Continue :
      {
        digitalWrite(PIN_LED_PLAYING, LOW);

        // Restore the LED blink counter
        ticks = prev_ticks;
        //        Serial.println("continuing");
      }
      break;

      default:
        break;
    }
  }

  loops++;
}
Controls
Before we test the system, let's review how the sketches use the extra I/O on the MIDI shield.
The clock generator uses the following controls:
- The A1 pot controls the tempo.
- The button on D2 acts as a start/stop button.
- The button on D3 is a continue button.
The LEDs on both ends of the link serve the same functions.
- D7, the red LED, blinks in time, indicating the tempo.
- D6, the green LED, is illuminated when the system is running.
Testing
Once both boards are loaded, the red (D7) LED on the generator should be blinking, and the receiver should be dark. Connect the MIDI Out of the generator to the MIDI In of the receiver, and its red LED should start blinking. The blink rates will be the same, but they may be skewed relative to each other, not yet in perfect sync.
Now, press the play button on the generator. The green LEDs on both boards should illuminate, and the red LEDs should now blink in sync with each other.
You can adjust the A1 potentiometer to speed up and slow down the tempo. The blink rate will change, and both units will stay in sync as it is adjusted.
The continue button has some special behavior. MIDI defines Start (0xFA) as instructing recipients to reset to the beginning of the song, then start playing. Continue (0xFB) omits the reset, and starts from wherever it was last stopped.
With Other Devices
We can replace either end of the chain with other devices. For purposes of demonstration, we'll be using a Willzyx x0xb0x, an analog synthesizer with an onboard sequencer. The x0xb0x is connected as a receiver.
In order for this to work, the x0xb0x needs to be configured to follow incoming clock messages -- on the x0xb0x, this is as simple as setting the mode rotary switch to PATT MIDI SYNC. It follows the tempo, starts and stops properly. However, it doesn't appear to properly obey the continue messages.
Example #2: MIDI-to-control-voltage
If you're interested in building a MIDI synthesizer, the MIDI Shield is quite capable. There is an example in the Forty Seven Effects documentation that uses the Arduino tone library as a simple MIDI instrument. We're going to build a little more ambitious system, that allows us to play the Moog Werkstatt with real keys. This is useful because the Werkstatt is usually played with an array of tiny tactile switches.
The Werkstatt is a good candidate for this project because it has a header on which it receives external control voltages. We'll be generating the following voltages:
- The pitch control voltage (CV) is a DC voltage that represents which key is currently pressed. The oscillator on the Werkstatt raises its pitch an octave for every volt present on this input; in other words, a semitone is 1/12th of a volt.
- The gate signal is an on/off indicator -- when one or more keys on the MIDI controller are held, the gate is high. When no keys are pressed, the gate is low. The Werkstatt responds to the gate by triggering the envelope generator, which in turn drives the VCF and VCA.
- There's a second analog control voltage from the converter that represents modulation wheel (continuous controller #0) position. It can be patched into other inputs on the Werkstatt, such as LFO rate or filter cutoff.
Materials
This project requires the following parts.
It also needs a MIDI keyboard controller and a MIDI cable.
Construction
Construction and testing is a little more involved that the other two projects.
First, we need a ground wire in the Werkstatt. In this case, a green wire is wrapped around one of the screw posts inside and out through a gap in the front of the chassis.
DAC Board Assembly
After that, prepare the DACs, and put them on the proto shield. The DACs come configured using the same I2C address (0x60) -- on one of the DAC breakouts, cut the traces on the pullup resistor jumper on the back of the board, and switch its address to 0x61 by switching the
ADDR jumper to VCC.
The two DACs are soldered onto strips of breakaway headers and then onto the proto shield. The following connections were made on the shield with wire:
- Their VCC pins were tied to the 5V pin.
- The SDA pins were tied to A4.
- The SCL pins were tied to A5.
- Longer wires were connected to the DAC outputs -- we selected a blue one, a white one, and a black one.
- The white wire is the output of the DAC at the default address.
- The black wire is the output of the other DAC.
- The blue wire was connected to D10.
- The green ground wire from inside the Werkstatt is tied to a
GNDpin on the proto shield.
Testing the DACs
Once the DACs are wired onto the protoshield, put the shield on the RedBoard to test them. Loading the DAC Integration Test sketch to verify that they're working. It verifies that they're working correctly and communicating on the correct addresses by generating opposing full-scale sawtooth waves.
Connections
Once you're confident that the DACs are working, you can put the MIDI shield on top of the stack, and connect the Werkstatt and MIDI controller. The MIDI controller is connected to the MIDI input on the shield. The wires from the protoshield are connected to the Werkstatt as follows:
- The white wire gets plugged into the
VCO EXPinput.
- The black wire goes into the
VCF IN.
- The blue wire is connected to the
GATE OUT.
Firmware
The firmware for this project is a little more complex, requiring ancillary *.CPP and *.H files. Get all three files from the
GitHub folder for the project, and store them in a directory named
MIDI-CV. When you open MIDI-CV.ino, it should also open notemap.h and .cpp. Compile and load the sketch, then press some keys on your MIDI controller.
This example uses a more sophisticated interface to the 47 Effects library: rather than polling the library for new messages, it installs callback routines for the relevant message categories. It's also configured to listen in Omni mode - it should respond to messages on any MIDI channel.
In Operation
With a range of 5V to work with, the design target was to have 4 octaves of control voltage, plus roughly +/-1/2 octave bend range. If multiple keys are held, the CV represents the lowest note.
The MIDI-CV sketch includes a fun bonus -- an arpeggiator! If you hold more than one controller key at a time, the converter will periodically cycle between the keys.
- Button D2 enables the arpeggiator in up mode.
- Button D3 enables the arpeggiator in Down mode.
- Button D4 changes the gate to follow the arpeggiator clock.
- Pot A1 controls the rate of arpeggiation.
- The red LED (D7) displays arpeggiator tempo.
- The green LED (D6) illuminates when the arpeggiator is enabled.
You can switch between up and down modes while the arpeggiator is running. To disable the arpeggiator, press the button for the current mode a second time.
Calibration
The convention for control voltage is that increasing the voltage by one volt will cause the pitch to raise by an octave; 1/12th of a volt (0.08333V) corresponds to a semitone.
The DACs are also referenced to VCC on the board, nominally 5V in this case. Different power supplies will change the scaling of the CV -- the test unit behaved a little differently when powered from the USB connection (yielding VCC of 5.065 V) and from a 9V wall-wart supply plugged into the barrel jack (resulting in VCC of 5.008 V). When you adjust the scaling, it's important that you're powering the system as it will be deployed!
The intonation can be adjusted at both ends of the interface described here. Trimpot VR5 in the Werkstatt allows for Volt-to-Octave adjustments. In the Sketch, the constant
DAC_CAL is also used to change the CV scaling. Since it's adjustable at both ends, it can lead to confusion if you start adjusting them at the same time.
The key to preventing that confusion is remembering the Volt/Octave convention. Independently adjust each end of the interface so that volts and octaves correspond. We calibrated the system using the following procedure:
- First the Arduino side is calibrated to produce a 1V step for an octave between note-on messages
- A DC Voltmeter was connected to the CV output of the DAC board.
- Ascending octaves were played on the keyboard.
- The
DAC_CALvalue was adjusted until those octaves resulted in 1VDC changes to the CV output.
- In this case, the default value of 6826 worked fairly well.
- The resulting voltages were 0.483V, 1.487V, 2.485V, 3.484V, and 4.485V.
- Once the CV has been adjusted to 1V/8ve, VR5 in the Werkstatt can be adjusted to volt/octave response.
- We used a frequency counter function in an oscilloscope to check the frequency, measured at the
VCO outpatch point. It was doublechecked with a digital guitar tuner.
- We started by hopping back and forth between low C and the next C up, then moving to higher octaves as the intonation got better.
- If you tune the high A to 440 Hz, working backwards to the lowest A on a 4-octave keyboard gives you 440 Hz, 220 Hz, 110 Hz and 55 Hz.
- The Werkstatt is most accurate in the lower registers, and tends to run a little flat in the upper registers.
- The tuning pot on the panel is also somewhat touchy. While you're building this interface, you might also consider adding Moog's Fine Tuning Mod.
Troubleshooting
This project was developed using a soft serial port for MIDI and the hardware serial port to print debug messages. Once it was working, the debug printing was disabled, and MIDI was reverted to the hardware serial port.
If you want to enable debug printing:
- Start by adjusting SJ1 and SJ2 for MIDI on the soft serial pins.
- Amend the declaration of the MIDI instance to create and use the soft serial port.
- Change the
VERBOSEmacro to
1
Example #3: MIDI Analyzer
Sometimes, when you're developing MIDI code, you want to be able to see the messages being sent on the line. Exactly what values are being sent? Is a byte getting lost somewhere?
If you've got an old toaster Mac sitting around, you can use MIDI Scope, or MIDI Ox on Windows systems.
This example is a MIDI sniffer along those lines. It receives MIDI messages via the library, and interprets them on the serial port.
Hardware Configuration
This sketch requires two serial ports -- one for MIDI communication, and one for printing the interpreted messages. The shield needs to be configured to use a software serial port for the MIDI communications.
The Sketch
/****************************************************************************** MIDI-sniffer.ino Use SparkFun MIDI Shield as a MIDI data analyzer. Byron Jacquot, SparkFun Electronics October 8, 2015 Reads all events arriving over MIDI, and turns them into descriptive text. If you hold the button on D2, it will switch to display the raw hex values arriving, which can be useful for viewing incomplete messages and running status. Resources: Requires that the MIDI Sheild be configured to use soft serial on pins 8 & 9, so that debug text can be printed to the hardware serial port._RAW_INPUT 2 #define PIN_POT_A0 0 #define PIN_POT_A1 1 static const uint16_t DEBOUNCE_COUNT = 50; // Need to use soft serial, so we can report what's happening // via messages on hard serial. SoftwareSerial SoftSerial(8, 9); /* Args: - type of port to use (hard/soft) - port object name - name for this midi instance */ MIDI_CREATE_INSTANCE(SoftwareSerial, SoftSerial, MIDI); // This doesn't make much sense to use with hardware serial, as it needs // hard serial to report what it's seeing... void setup() { // put your setup code here, to run once: // LED outputs Serial.begin(19200); Serial.println("Setting up"); // do I need to init the soft serial port? // No - MIDI Lib will do it. // We want to receive messages on all channels MIDI.begin(MIDI_CHANNEL_OMNI); // We also want to echo the input to the output, // so the sniffer can be dropped inline when things misbehave. MIDI.turnThruOn(); pinMode(PIN_RAW_INPUT, INPUT_PULLUP); } void loop() { static uint8_t ticks = 0; static uint8_t old_ticks = 0; // put your main code here, to run repeatedly: if(digitalRead(PIN_RAW_INPUT) == LOW) { // If you hold button D2 on the shield, we'll print // the raw hex values from the MIDI input. // // This can be useful if you need to troubleshoot issues with // running status byte input; if(SoftSerial.available() != 0) { input = SoftSerial.read(); if(input & 0x80) { Serial.println(); } Serial.print(input, HEX); Serial.print(' '); } } else { // turn the crank... 
if ( MIDI.read()) { switch (MIDI.getType()) { case midi::NoteOff : { Serial.print("NoteOff, chan: "); Serial.print(MIDI.getChannel()); Serial.print(" Note#: "); Serial.print(MIDI.getData1()); Serial.print(" Vel#: "); Serial.println(MIDI.getData2()); } break; case midi::NoteOn : { uint8_t vel; Serial.print("NoteOn, chan: "); Serial.print(MIDI.getChannel()); Serial.print(" Note#: "); Serial.print(MIDI.getData1()); Serial.print(" Vel#: "); vel = MIDI.getData2(); Serial.print(vel); if (vel == 0) { Serial.print(" *Implied off*"); } Serial.println(); } break; case midi::AfterTouchPoly : { Serial.print("PolyAT, chan: "); Serial.print(MIDI.getChannel()); Serial.print(" Note#: "); Serial.print(MIDI.getData1()); Serial.print(" AT: "); Serial.println(MIDI.getData2()); } break; case midi::ControlChange : { Serial.print("Controller, chan: "); Serial.print(MIDI.getChannel()); Serial.print(" Controller#: "); Serial.print(MIDI.getData1()); Serial.print(" Value: "); Serial.println(MIDI.getData2()); } break; case midi::ProgramChange : { Serial.print("PropChange, chan: "); Serial.print(MIDI.getChannel()); Serial.print(" program: "); Serial.println(MIDI.getData1()); } break; case midi::AfterTouchChannel : { Serial.print("ChanAT, chan: "); Serial.print(MIDI.getChannel()); Serial.print(" program: "); Serial.println(MIDI.getData1()); } break; case midi::PitchBend : { uint16_t val; Serial.print("Bend, chan: "); Serial.print(MIDI.getChannel()); // concatenate MSB,LSB // LSB is Data1 val = MIDI.getData2() << 7 | MIDI.getData1(); Serial.print(" value: 0x"); Serial.println(val, HEX); } break; case midi::SystemExclusive : { // Sysex is special. // could contain very long data... // the data bytes form the length of the message, // with data contained in array member uint16_t length; const uint8_t * data_p; Serial.print("SysEx, chan: "); Serial.print(MIDI.getChannel()); length = MIDI.getSysExArrayLength(); Serial.print(" Data: 0x"); data_p = MIDI.getSysExArray(); for (uint16_t idx = 0; idx < length; idx++) { Serial.print(data_p[idx], HEX); Serial.print(" 0x"); } Serial.println(); } break; case midi::TimeCodeQuarterFrame : { // MTC is also special... // 1 byte of data carries 3 bits of field info // and 4 bits of data (sent as MS and LS nybbles) // It takes 2 messages to send each TC field, Serial.print("TC 1/4Frame, type: "); Serial.print(MIDI.getData1() >> 4); Serial.print("Data nybble: "); Serial.println(MIDI.getData1() & 0x0f); } break; case midi::SongPosition : { // Data is the number of elapsed sixteenth notes into the song, set as // 7 seven-bit values, LSB, then MSB. Serial.print("SongPosition "); Serial.println(MIDI.getData2() << 7 | MIDI.getData1()); } break; case midi::SongSelect : { Serial.print("SongSelect "); Serial.println(MIDI.getData1()); } break; case midi::TuneRequest : { Serial.println("Tune Request"); } break; case midi::Clock : { ticks++; Serial.print("Clock "); Serial.println(ticks); } break; case midi::Start : { ticks = 0; Serial.println("Starting"); } break; case midi::Continue : { ticks = old_ticks; Serial.println("continuing"); } break; case midi::Stop : { old_ticks = ticks; Serial.println("Stopping"); } break; case midi::ActiveSensing : { Serial.println("ActiveSense"); } break; case midi::SystemReset : { Serial.println("Stopping"); } break; case midi::InvalidType : { Serial.println("Invalid Type"); } break; default: { Serial.println(); } break; } } } }
The library instantiation has a couple of specific details worth noting:
- It uses a software serial port on pins 8 and 9. This leaves the regular serial port available for displaying the output messages.
- It enables the soft-thru feature of the library, so it retransmits the messages it receives. We can insert the sniffer between two other devices without breaking the link.
To use this code:
- Load the sketch
- Connect the MIDI Shield inline, between other MIDI devices
- Start a terminal application at 19200 baud.
- Send MIDI message & observe the output on the terminal.
This sniffer has a hidden feature: if you hold down the
D2 button on the shield, it will read and display the raw bytes from the serial port, rather than the output of the MIDI library.
In Use
As a test, the sniffer was connected to a MIDI keyboard, and notes were occasionally pressed and released. In the screen capture that follows, the first few lines are shown in "cooked" format, and the next few are in "raw" format. We learn something interesting from the raw values.
The first few messages are note on and off pairs for low C on the keyboard. It turns on, it turns off.
Then the "raw mode" button is held, and the key is pressed a few more times. The first few articulations are somewhat rapid, the next few are a bit slower. The display moves to a new line for each new status byte received.
You'll notice that the rapid articulations are sent using a single status byte (0x90), followed by pairs of data bytes (0xC, 0x41, then 0xC, 0x0, etc ). The controller is using running status to reduce the number of bytes sent, and using note on with velocity of 0 to take advantage of running status. The next few lines in the trace are the same articulation of low C, just played more slowly. Each articulation gets a status byte (0x90), followed by two data bytes. Even without running status, it's still using note-on with velocity of 0, instead of note-off status bytes.
We have learned that this controller uses running status when the same message is sent in rapid succession, but sends fresh status bytes when the messages occur more slowly. The threshold for using running status seems to be somewhere at around 1/2 of a second.
Resources and Going Further
Resources
- If you need more detailed background information about MIDI, check out our MIDI Tutorial.
- The MIDI Specification is maintained by the MIDI Manufacturer's Organization.
- More information about the MCP4725 digital-to-analog converter can be found in this hookup guide
Going Further
- The WAV Trigger implements MIDI on it's serial port, allowing you to control up to 2048 velocity-sensitive samples.
- If the shield form-factor isn't right for you, we also sell the MIDI Connector all by itself.
- The latest revision of the Bleep Drum kit now includes a MIDI input
- Hairless MIDI allows you to open a virtual MIDI port over a standard serial bridge (like an FTDI adapter). | https://learn.sparkfun.com/tutorials/midi-shield-hookup-guide/all | CC-MAIN-2019-43 | refinedweb | 5,284 | 57.16 |
From the examples folder, NexLoop(nex_listen_list) inside the main loop should be correct.
Do you want to post HMI and ino files?
Attached is the code. I really wish I could just take the serial input from the nextion button events and use them to activate my relays. Then I wouldnt need to call the nextion library and i could get rid of nexLoop. I tried using the recievemessages.ino I found in one of the examples but it keeps telling me myNextion is not declared on this scope,. I have been researching it for a while and I'm getting a little frustrated. :(
I took a peek at your code:
I am not going to go through all, just point out a few
You have 4 DualState Buttons defined at the top, which you listen for their Release Event in Setup()
I am not going to do a byte count gauge_display()
but here you deviate from the library constructs
-- you didn't just declare the NexGauge? and the use setValue()
-- you didn't just declare the NexText components?
Part of what you miss here is that before sending out on serial you should be receiving any info that the Nextion has sent through first, once received is complete, then comes your turn to send
This is clearly overridden in your direct writes.
I have to wonder how fast the 1024 bit buffers (128 bytes) is exhausted in your loop.
It would seem to me, that depending on how many loops are iterated per second that
you could quickly have exceeded this buffer. A delay(5) will minimize the loop to a max of 200 iterations per second and perhaps this is too high. But you also must address receiving your incoming first.
This is taken care of for you when you use the proper infrastructure - such as was done with the
four dual states, and not done with any of the other components.
Note that sendCommand in NexHardware.cpp receives first before sending
Note that setValue, set_font_color_pco in NexGauge.cpp uses sendCommand
Note that setText in NexText also uses sendCommand
Also in your dual state button declarations, if the page changes on the HMI screen, I didn't see how you will catch the page changes, therefore either bt0 on page 2 is unavailable or the three bt1,bt2,bt3 on page 0 will be unavailable. What I do to ensure this is to prefix the .objname with the pagename in your code.
bt0 becomes page2.bt0 and bt1 becomes page0.bt1
(assuming page 0 is called page0 an page 2 is called page2)
This is only some issues you must look at.
Thanks Patrick. I may be jumping into this all too soon and getting ahead of myself.
Any idea why the receive messages wont compile? It tells me myNextion doesnt name a type. I am wondering if I have my Nextion.h setup correct or if there is something I need to change?
What is the contents of NexConfig.h?
You define your serial to the nextion here.
By default it is called nexSerial ... this can be anyname you want
But don't make the mistake of naming from everywhere
The reason for naming is so that you can see it in your code
you bounce between serial2, myNextion, nexSerial ... no good.
- the library is nexSerial ... use nexSerial and save yourself a mass headache
So I have multiple Nextion devices, I might be interested in multiple serial via a software serial
First you have to read about software serial at the Arduino site and look at the library
You will see they do good with some things, fail at others.
They suggest a different software serial, it does better at some, fails at others
So you have to learn about picking and choosing your battles.
When you have a hardware serial available - best to make use of it.
I am working on 2 automotive control touch screen projects right now using Nextion displays but only 1 per display. I can't see the need for 2 yet.
I have my nextion communicating successfully on my native serial2. Here is my nexConfig
#ifndef __NEXCONFIG_H__
#define __NEXCONFIG_H__
/**
* @addtogroup Configuration
* @{
*/
/**
* Define DEBUG_SERIAL_ENABLE to enable debug serial.
* Comment it to disable debug serial.
*/
#define DEBUG_SERIAL_ENABLE
/**
* Define dbSerial for the output of debug messages.
*/
#define dbSerial Serial
/**
* Define nexSerial for communicate with Nextion touch panel.
*/
#define nexSerial Serial2
#ifdef DEBUG_SERIAL_ENABLE
#define dbSerialPrint(a) dbSerial.print(a)
#define dbSerialPrintln(a) dbSerial.println(a)
#define dbSerialBegin(a) dbSerial.begin(a)
#else
#define dbSerialPrint(a) do{}while(0)
#define dbSerialPrintln(a) do{}while(0)
#define dbSerialBegin(a) do{}while(0)
#endif
/**
* @}
*/
#endif /* #ifndef __NEXCONFIG_H__ */
then based on your config, I would use nexSerial
nexSerial is the alias naming as defined
Serial2 is the natural naming as defined
Using in your code nexSerial and dbSerial helps you see which is being used
reverting to Serial2 and Serial can be confusing,
this is the reason why aliases are created - to avoid confusion.
If you really desired to use myNextion, then it is in this nexConfig.h
where you would swap myNextion for nexSerial ... but there may be other code
relying on the nexSerial ... so changing it, you would have to change all occurrences of
I get what you're saying and it makes perfect sense. I changed out myNextion with mySerial and still get the "Nextion does not name a type" when I try to compile my recieve message.ino I attached. I need more practice lol.
I am not the Arduino Guru, so I certainly can't solve all.
You will get a conflict using SoftwareSerial.h and Nextion.h
Here, you use one or the other, but not both for the same Nextion device
(unless you have multiple serials and multiple devices ... more advanced)
Joel Grubb
I have a fairly simple code for running my 7 inch Nextion display with my aftermarket automobile ECU. It works great. The way I have it setup is the second page is DS buttons used to operate a relay board to turn things on and off in the car. That works great also. The problem is it will only work with nexLoop in my main loop. When I insert it at the beginning of the main loop it renders my first page of display values virtually worthless. If I comment nexLoop out of the first page it works great but then I lose relay control. I am a programming beginner so please keep that in mind. Any suggestions on my error OR a workaround? Thanks. | http://support.iteadstudio.com/support/discussions/topics/11000008365 | CC-MAIN-2017-26 | refinedweb | 1,096 | 64.2 |
Let's say I need an extension method which selects required properties from the collection. This can be entity or .net collection. So I have defined such extension method:
public IQueryable<TResult> SelectDynamic<TResult>( this IQueryable<T> source, ...)
This works fine for
IQueryables. But, I also need to call this function for
IEnumerables.
And in that case, I can call it with the help of
.AsQueryable():
myEnumerable.AsQueryable() .SelectDynamic(...) .ToList();
Both work fine. And I have such question, if both work fine, in which conditions I must create two different methods for the same purpose, which one works with
IEnumerable and the other works with
IQueryable?
My method has to send query to the database in case of
Queryable.
For example, here is the source of
.Select extension method inside
System.Linq namespace:
I am repeating my main question again:
My method must send query to the database in case of
Queryable, but not when working with
IEnumerable. And for now, I am using
AsQueryable() for the enumerables. Because, I dont want to write same code for the
Enumerable. Can it have some side effects? | http://www.howtobuildsoftware.com/index.php/how-do/dMI/c-net-linq-ienumerable-iqueryable-in-which-cases-do-i-need-to-create-two-different-extension-methods-for-ienumerable-and-iqueryable | CC-MAIN-2019-39 | refinedweb | 185 | 76.32 |
Since. Finally, I have to note that both of us Vernons are conversant in Latin, which is the sort of coincidence sportscasters are prone to mis-label "ironic"... ;) Cheers, Vern On Mon, Jul 18, 2011 at 1:29 PM, kirby urner <kirby.urner at gmail.com> wrote: > Hi Vernon, > > ... not to be confused with Vern "the Watcher" Ceder. > > On Mon, Jul 18, 2011 at 8:47 AM, Vernon Cole <vernondcole at gmail.com>wrote: > > >> There is a very good reason for this: standard library code must be >> readable for people all over the world. That's why a Dutch software >> engineer wrote a language in which all the keywords and commentary are in >> English. >> >> > Yes, the Standard Library is to be Anglicized for some time to come, > maybe always, per Guido's talks. > > Of course there's nothing to stop someone from writing a translator > for the Standard Library, such that the source originals (as modified) > might be rendered in myriad other charactersets. > > Top-level names tend to be amenable to such treatment. > > This may be done down to the C family level, though I'm not suggesting > that it should be (nor are all Python implementations C family I hasten > to add, (a Jython is "C family" if the Java VM is)). > > The same is not true for 3rd party modules which, as you say, > may be written in any style. > > Learning the Latin (English) alphabet, building a vocabulary, remains > a good idea obviously, along with ASCII in the context of Unicode. > > I expect those focused in computer science will continue giving > themselves the benefit of this learning. > > I received Romanized Indonesian source code for quite awhile, until > the student moved to Japan and apparently stopped doing Python. > > I'm impressed with all the alphabets you know. > > 3rd party modules written in Cyrillic with the peppering of > Roman we know must be there, thanks to Standard Library > (untranslated) and the 33 keywords (so far), could be used > in computer science to help English speakers learn a > Cyrillic language. > > > > > >> > >> >." >> >> > Certainly the GUI needs to be intelligible yes. > > Lets just say there's a school of thought that has > no problem with a math, logic or grammar teacher > using only Chinese characters for top level names > in various exercises using Python or other > Unicode aware computer language. And no > problem with another teacher using only Hebrew > characters for top level names and so on. > > This school of though hangs out on the Python > Diversity list and self-organizes there. If you go > back in the archives, you'll find myself and a > guy named Carl doing stuff in the Python wiki > to expand the language base, including at the > source code level. With Pycon / Tehran in the > planning, we want to be in a better position to > address issues relating GeoDjango to Farsi, say. > > These exercises (mentioned above) may have > nothing to do with writing commercial applications. > These may not be programmers in training > (though some may be in commercial media, > where "programming" also has meaning (e.g. > in radio / TV)). Instead of using a calculator > or abacus to learn numeracy skills, people > have laptops and internet access. > > Having readable source code in languages > that aren't in a Roman alphabet is already > a spreading phenomenon, with many writers > happily giving up that so-called "world readability" > in favor of remaining intelligible to the girl or boy > next door. > > The syntax of URIs and domain names has > already taken this turn. 
You will have http//arabic letters// > quite frequently these days, thanks to the > Unicode basis of http (which Python now needs > to deal with, and does, as an http-aware language). > > CSS for Arabic is the kind of style concern for > which we may have insufficient literature to date. > We may have people joining Diversity who want to > develop that literature (recruiting happening). > > > > > >> > > I'm not denigrating PEP8 in any way, even though > I used some light sarcasm in my post. That was > not directed against PEP8, so much as against > the idea that the "rule book" is somehow complete, > just because we have it down that functions should > generally not start with a capital letter, and > l (lowercase L) is a terrible name for all purposes > because it's so indistinguishable from uppercase > I and the number 1 in many fonts. > > I think as people start getting a lot more experience > writing Python with different namespaces, with > non-Roman top-level names etc., that the rule > book is inevitably going to expand and that a > Book of Styles could conceivably become enormous. > > But then think of English: we acknowledge many > styles as being appropriate and don't have just > the one "book" where style is concerned (we have > so many) -- not like the dictionary, with a goal of > including every word in a finite and deliberately > exclusive set of standard words. > > I have some examples of Python source in my > blogs, using kanji as top-level names (might be > a Japanese program, as one of the kanji is for > Mt. Fuji as I recall). > > Then there's some tracking down Stallman on > a visit to Sri Lanka (awhile back) and chatter > about Python in Tamil and Sinhalese. And yes, > I am aware English is spoken in this parts as well, > as evidenced by Arthur C. Clarke's having lived > there for so long. One of our CSN chiefs has a > track record there too, another English speaker. > > > > > > > >: <> | https://mail.python.org/pipermail/edu-sig/2011-July/010406.html | CC-MAIN-2020-29 | refinedweb | 908 | 59.64 |
11 May 2012 14:16 [Source: ICIS news]
LONDON (ICIS)--European chemical stocks were down on Friday, in line with financial markets, after it was announced that the eurozone economy is set to shrink by 0.3% in 2012.
The European Commission announced on Friday, in its spring forecast for 2012–2013, that real GDP is projected to stagnate in the 27-member EU, while in the euro area it is expected to contract by 0.3% in 2012 and then increase by 1.0% in 2013.
The Commission added that following an output contraction in late 2011, the EU economy is estimated to currently be in a mild recession.
At 12:19 GMT, the ?xml:namespace>
With European indices trading lower, the Dow Jones Euro Stoxx Chemicals index was down by 0.59%, as shares in many of
Petrochemical major BASF’s shares had fallen by 0.50%, while fellow Germany-based chemical company LANXESS’s shares were trading down by 1.53%.
Shares in
Olli Rehn, the Commission's vice president for economic and monetary affairs and the euro, said a recovery is in sight but the economic situation remains fragile, and there was still a large disparity across member states.
," said Re | http://www.icis.com/Articles/2012/05/11/9559097/europe-chem-stocks-down-on-shrinking-economy-forecast-for-eurozone.html | CC-MAIN-2015-18 | refinedweb | 205 | 62.48 |
Django Logging – The Most Easy Method to Perform it!
Django Logging
All developers debug errors. The resolving of those errors takes time. It helps in most cases to know when and where the error occurred. I would say that Python’s error messages are quite helpful. But even these messages render useless when talking about projects with multiple files. That problem and many other benefits come with logging.
As Django developers, it is important that we also master logging. Backend technologies have various great tools to implement logging. Django too provides support for various logging frameworks and modules. We will implement it using the basic and one of the popular, logging module.
In this tutorial, we will learn about logging. The contents will be:
- What is Logging?
- Python Logging Module
- Logging in Django
Ready to learn. Let’s get started.
Stay updated with the latest technology trends while you're on the move - Join DataFlair's Telegram Channel
What is Logging?
Logging is a technique or medium which can let us track some events as the software executes. It is an important technique for developers. It helps them to track events. Logging is like an extra set of eyes for them. Developers are not only responsible for making software but also for maintaining them. Logging helps with that part of maintenance immensely.
It tracks every event that occurs at all times. That means instead of the long tracebacks you get much more. This time when an error occurs you can see in which the system was. This method will help you resolve errors quickly. As you will be able to tell where the error occurred.
How does it work? What’s the idea behind it?
The logging is handled by a separate program. That logging program is simply a file-writer. The logger is said to record certain events in text format. The recorded information is then saved in files. The files for this are called logs. The logs are simple files saved with log extension. They contain the logs of events occurred.
Well, this is just one of the most used implementations, as you can see. Since we are just storing the information, you can come up with more ways to deal with it.
Okay, it’s clear to us that logging is important. Is it difficult? No absolutely not buddy. Logging is not difficult at all. Anyone who knows a little bit about the setup can use this.
Being a Python programmer, you get extra features here too. We got a whole module that lets us accomplish just that. As for other languages, there are many modules, Java has Log4j and JavaScript has its own loglevel. All these are amazing tools as well. Okay, so we will be dealing with Python (my favorite Language) here.
Logging Module in Python
The logging module is a built-in Python module. It comes preinstalled with Python 3. Logging module is used to keep track of the events that occur as the program runs. That can be extended to any number of programs and thus for software, it is easy to set up and use.
Why use logging module?
As a developer, we can ask why use the logging module. All logging does is writing to a file or printing it on the console. All that can be achieved by using print statements. Then what is the need for a logging module?. That’s a valid question. The answer to that is logging is a module that gives what print cannot.
We use print function to output something on console. So, when a function occurs, we can get a print statement that the function executed. While that approach works with smaller applications, it is inefficient.
Print function becomes part of your program and if the software stops working, you won’t get the result. Also, if anything occurs and system restarts, the console is also clear. So, what will you do then? The answer to that is logging module.
The logging module is capable of:
- Multithreading execution
- Categorizing Messages via different log levels
- It’s much more flexible and configurable
- Gives a more structured information
So, there are 4 main parts of logging module. We will look at them one by one.
1. Loggers
The loggers are the objects that developers deal with. They can be understood as a function which will be invoked when they are called. We use loggers in our project files. Thus, when the function is invoked, we get a detailed report. The logger can generate multiple levels of responses. We can customize it to full extent.
2. Handlers
Handlers are the objects which emit the information. Think of them as newspaper handlers. Their main task is to transmit the information. That can be achieved by writing the info in a log file (The default behaviour). There are various handlers provided by logging module.
We can easily set-up multiple handlers for the same logger. There are SMTP Handlers too which will mail the log records to you. The handlers usually contain business logic for logging information.
3. Formatters
The formatters are the ones which format the data. You see handlers cannot send the information as it’s a Python data type. Before it can be sent to anyone, it needs to be converted.
The logs are by-default in Log Records format. That is the class pre-defined by logging framework. It gives various methods to developers to use. That format can not be directly sent over a network or written in a text file. To convert that or format that, we need formatters. There are different formatters for different handlers.
By default, the formatters will convert the Log Record into String. This behaviour can be easily changed and you can customize this as you want. This format can be based on the business logic that we write in Handlers.
4. Filters
The last piece of the logging module is filters. The filters, as the name suggests, filter the messages. Not every message we pass needs to be stored or transported. Or there can be different handlers for different messages. All that is achievable with the help of filters.
We can use filters with both loggers and handlers.
Okay, so now we have got a basic idea of the logging module. It also provides us with message levels. The message levels are defined as:
DEBUG: It is verbose system information when everything is running fine. It will tell you more precise details about the state of the system. Its severity point is 10.
INFO: The info will produce less verbose system information but is similar to debug. It generally tells an overview of what the system is executing. Its severity point is 20.
WARNING: This contains information regarding low-level problems. The problems may be ignored as they don’t cause the system to halt. Although, it is recommended that these problems shall be resolved.
Its severity point is 30.
ERROR: This message is serious. The error will contain information about the major problem that has occurred. The problem may have stopped the operation of program and needs immediate attention.
Its severity point is 40.
CRITICAL: The most critical message. This message is emitted when the problem has caused the system to stop. That means the whole application has halted due to this problem.
Its severity point is 50.
The severity point determines what priority shall be given. Suppose, we set the log level to be 30. Then the logger will log or store the information when the level is greater than or equal to 30. So, you just need to confirm what level of logging you want. We will learn more about them in the further section.
We can easily implement logging by importing the module. Since it’s a built-in module, we need not install it via pip.
Just write this in your Python file and we can use loggers.
Code:
import logging logging.warning("DataFlair Logging Tutorials")
The output generated by the code is right below. As you can see, the log is printed on the console, it has log-level, logger-name and message. root is the default logger-name. We will use these kinds of functions with our views in Django.
Explore a complete tutorial on Python Logging Module
Logging in Django
Django provides logging by using the logging module of Python. The logging module can be easily configured.
When we want to use logging in Django there are certain steps to be followed. The procedure will be as follows:
- We will configure our settings.py for various loggers, handlers, filters.
- We will include the loggers code in views or any other module of Django where necessary.
Okay, so let’s get started.
Create a new Project
We will create a new project for logging. You can then use the logging in existing projects. So, to create a new project, execute this command in the command line:
$ django-admin startproject dataflairlogs
This will start the new project. Now, we will configure settings.py of the project.
Configuring Settings
For including logging in Django, we need to configure its settings. Doing that is pretty easy. By configuring logging settings, we mean to define:
- Loggers
- Handlers
- Filters
- Formatters
Since Django works with different modules therefore, we use the dictConfig method.
There are other raw methods as well, but dictConfig is what Django’s default behavior.
Just copy this code in your settings.py
Code:
# DataFlair #Logging Information LOGGING = { 'version': 1, # Version of logging 'disable_existing_loggers': False, #disable logging # Handlers ############################################################# 'handlers': { 'file': { 'level': 'DEBUG', 'class': 'logging.FileHandler', 'filename': 'dataflair-debug.log', }, ######################################################################## 'console': { 'class': 'logging.StreamHandler', }, }, # Loggers #################################################################### 'loggers': { 'django': { 'handlers': ['file', 'console'], 'level': 'DEBUG', 'propagate': True, 'level': os.getenv('DJANGO_LOG_LEVEL', 'DEBUG') }, }, }
Understand the Code:
I know the code is a bit large but it’s very easy to grasp. We get a built-in variable LOGGING from Django. The logging’s default values come from this dictionary. Since we are configuring settings using a dictionary it’s called the dictConfig method. There are some important keys inside the LOGGING dictionary.
- version
- disable_existing_loggers
- handlers
- loggers
Let’s discuss them one by one. The version key tells the schema version. It’s important that it has value. It’s by default 1.
The next key is disable_existing_loggers. This key is to tell Django to do not disable loggers. By default, Django uses some of its own loggers. These loggers are connected with Django ORM and other inner parts of Django. This key is by default True. So, it will disable those loggers.
There are various database queries and function calls which the loggers log. Therefore, it is recommended that you don’t True this Key.
Handlers is the 3rd key. As we discussed, handlers handle the message and pass them to console, file, etc. The handlers itself is a dictionary. That dictionary-key names will be the names of the handlers. There are various handlers provided by the logging module. We are using two handlers here.
1. FileHandler: logger-name – filehandler
The FileHandler will store the logs in a file. As you can see, we have given the filename as dataflair-debug.log.
The level defines, until what level the logger shall commit logs.
Note:
Log files are generally stored with .log extension. This file can only be edited with permissions. It can only be appended.
2. StreamHandler: logger name – console
The stream handler will stream the log on console. This is not a recommended method. There is a limit of characters until the command line shows you logs. Since logs are too big to handle by command line therefore we need file-handlers.
There are more handlers like mailhandlers, AdminEmailHandler etc.
The AdminEmailHandler is a great addition to the opensource module.
Loggers are the ones that will log your server or software information. Loggers are also a dictionary type. It has a similar architecture as handlers. Although, there are different attributes and other properties.
Django also provides a set of loggers. There are various loggers like django, django.requests. and more.
Have you checked our latest Django request & response Tutorial
Now, to implement logging, you just need to follow these steps:
Start your server.
$ python manage.py runserver
The moment you hit enter there will be a series of logs and a lot of that. This happened because we set the default level to debug. All these logs are actually messages from the default loggers.
You can create your custom loggers in the consecutive files. That way you can get logs from specific files.
Here is the output of the file which is generated after logging.
These are the logs. There are a lot of logs but only warning or above levels shall be notified.
Summary
Logging is pretty important. If it’s implemented correctly, then it can save you a lot of time. Generally, logging is done after the project is complete.
Logging module is the basics of logging with Django or another python project. There are other solutions as well like, celery, travis etc. These log handlers also make it easy to handle logs. They can ease the task of searching whether the events occurred or not and what was the cause.
Any queries related to Django Logging article? Mention through comments | https://data-flair.training/blogs/django-logging/ | CC-MAIN-2019-51 | refinedweb | 2,217 | 69.58 |
Search
Hall of Fame
}
.bat files Other Languages 28-Oct-12 07:32 PM
bat files Other Languages 28-Oct-12 07:32 PM I have a .bat file that is supposed to start a console exe created with vb.net vs2010. The .bat file starts and issues the command and my console window opens but the program does The program runs fine from the IDE but when attempting to run it from the .bat file it just doesn't run. Arggggg! When I try to run the exe from the .bat file I see: WRONG! When I run the program from the IDE I see something like this: RIGHT! :) This is maddening! Any help? Thanks. Henry What is in the .bat file? Make sure you put the full path to the exe in your .bat file The way to do this is to forget about .bat files and go with PowerShell: #If your scripts don't run execute the command below
Convert .exe to .bat Lounge 28-Oct-12 07:42 PM
Hi Everyone ! Can any one share me the link of software that can convert *.exe to *.bat file or any idea convert it. . . Thanks Farrukh that isn't normally possible to do a compiled program and as such what it actually does is opaque to the user bat files are scripts of dos commands or the like and so can be read by to use to get c# from it but that still wouldn't give you a bat file what does the exe actually do? HI Pete rainbow, The .exe actually call third party application and execute data loading process, i want to edit in the .bat file and again make .exe. Before i have create a bat to .exe via convertor, now iam un able to recompile it. I have find the link of convertor .exe to .bat. But unable to download. . . http: / / / forum / index.php?showtopic = 26075 Thanks farrukh You can use Third party tool to convert exe file to bat file. Download tool fron here- • Attached File SuperExe2bat.zip Hope this will help you. HI
Bat File Login Username & Password Windows XP 28-Oct-12 08:14 PM
Bat File Login Username & Password Windows XP 28-Oct-12 08:14 PM Hi, I am having trouble with creating a login bat file. I am using Windows XP Pro. This calls a bat file that will connect to a database, but I cant get the username to work PASSWORD ) ELSE ( GOTO ERROR ) :PASSWORD IF "123" = = "%2" ( @ECHO WELCOME %USERNAME%. PAUSE> NUL CALL ZZV912G56.BAT ) ELSE ( GOTO ERROR ) :ERROR @ECHO YOU TYPED INCORRECT USERNAME / PASSWORD. @ECHO PLEASE TRY AGAIN. GOTO Cheers keywords: Windows XP, Windows XP Pro, insensitive, Login, database, Login username, GOTO ERROR description: Bat File Login Username & Password Hi, I am having trouble with creating a login bat file. I am using Windows XP Pro. This calls a bat file that will conn
Using WshShell to run a batch file ASP 28-Oct-12 08:27 PM
I am doing this: Set WShShell = Server.CreateObject("WScript.Shell") WShShell.Run "path to validate.bat" validate.bat is connecting to an AS400, sending data ( / I) and receiving a reply ( / O): zsockc / I validate.txt / O:cirout.txt SERVER PORT The problem is if I run validate.bat from the command window, it works fine, but if I run it from the ASP vbCrLf & "x'03'" & vbCrLf Set WShShell = Server.CreateObject("WScript.Shell") WShShell.Run "c: \ forms \ validate.bat" myTS.Close Set myTS = Nothing Set myFSo = Nothing Set WShShell = Nothing %> keywords: Set WShShell, CreateObject I am doing this: Set WShShell = Server.CreateObject(WScript.Shell) WShShell.Run path to validate.bat
run bat file in enterprise manager SQL Server 28-Oct-12 08:19 PM
Hi all, I'm trying to schedule a bat file in enterprise manager so that it will run every day. I have this as 2. It does not fail either. It seems like it hangs. I tried running the bat file manually (clicking on the bat file )and it works. I don't know why it would not work with the after selecting 'Operating System Command (CmdExec)' , In the command window you can type C: \ schedule .bat (This will call the batch file schedule .bat based on the schedule interval) Thanks for your reply. This is what I have done and it runs the bat file. But then step1 never exits and stays in running mode. It actually doesn't that it is running step1. Now as I said if I double click on the bat file itself it will run, accomplish its task and then it will exit. It just
what is the diff between .bat , .bak , .cmp extensions in sharepoint 2 SharePoint 28-Oct-12 07:37 PM
what are the diff between .bat , .bak , .cmp extensions in sharepoint 2010? what are the uses ? what are the limitations in sample export: stsadm -o export -url http: / / yourservername / a_site -filename c: \ a_sitebackup Where as a bat file is a Batch file Using bak file you can backup a Site Collection , Regards, .BAK: The BAK file type is primarily associated with 'Backup'. .bat : The BAT file type is primarily associated with 'Batch Processing' by Microsoft Corporation. DOS batch file. A is an ASCII file of commands that run as a program would run. If your BAT file extension association has been disabled, see the Associated Links for a possible fix. Note Import history keywords: SharePointbackup, date, Library, Site, Microsoft Corporation description: what is the diff between .bat , .bak , .cmp extensions in sharepoint 2 what are the diff between .bat , .bak , .cmp extensions in sharepoint 2010 what
config.sys and autoexec.bat VB 6.0 28-Oct-12 08:13 PM
any configuration or statement required in config.sys or autoexec.bat to run exe faster Hi, I dont think there is any special things that you a look at http: / / / ac.htm , http: / / en.wikipedia.org / wiki / AUTOEXEC.BAT and http: / / / ND_DOCS / 206.HTM for all types of details about c: \ dos \ emm386.exe ram noems dos = high device = c: \ dos \ setver.exe sample autoexec.bat @echo off c: \ dos \ smartdrv.exe set prompt$p$g set dircmd = / o set path have to change all the paths to be correct for your system. keywords: wiki AUTOEXEC.BAT, disk, VB, autoexec.bat, config.sys description: config.sys and autoexec.bat any configuration or statement required in config.sys or autoexec.bat to run exe faster 28-Oct-12 08:13 PM
how to call a ' bat' file from C# C# .NET 28-Oct-12 08:24 PM
Hi, I'm working with MS-Biztalk and I would like to call a '.bat' script as the last step of an orchestration. I'm looking for a way to call this '.bat' file in an 'expression' orchestration module or in a C# class. = = > I can do it regards, Franck System.Diagnostics.Process namespace to invoke an external executable. This should work with .bat files too hi, For this u need to import System.Diagnostics into your code behind keywords: module, class, VB, Microsoft Biztalk, Import, expression orchestration, orchestration description: how to call a ' bat' file from C# Hi, I'm working with MS-Biztalk and I would like to call a '.bat' script as the last step of an orchestration. I'
replacing a .bat file with multiple lines into a shell command VB.NET 28-Oct-12 07:34 PM
Instead of creating mulitple lines and dumping them to a .bat file and then executing the .bat file, is there a way to create a multi line string and dump it to Snippets.Replace keywords: VB.NET, SendKeys, Script, Replace, shell command, line string description: replacing a .bat file with multiple lines into a shell command Instead of creating mulitple lines and dumping them to a .bat file and then executing the .
How can i restore the Backedup (.bat) file in sharepoint 2010? SharePoint 28-Oct-12 07:37 PM
Hi, I did Scheduled Backup Successfully in Sharepoint 2010, i saved that file as .bat extension. Now i want to restore that Backedup file(.bat file) in sharepoint 2010, How can i do that one in sharepoint 2010? Explain me restoring is large, the process can take quite a while. Backup steps. 1. Create a BAT file and write the following commands in that file @ECHO OFF set PATH = C: \ Program URL = "http: / / ebizassist.com" stsadm -o backup -url %URL% -filename %BackupPath% pause 2. Run the Bat file to take the backup from 172.29.8.6 to the backup path Restore Steps 1. Create a BAT file and write the following commands in that file @ECHO OFF set PATH = C: \ Program http: / / ebizassist.com" stsadm -o restore -url %URL% -filename %RestorePath% -overwrite Pause 2. Run the Bat file to restore the ARCA.bak to the share point site You can simply use 2010, SharePoint Server 2007, SharePoint 2010, SharePoint, Office description: How can i restore the Backedup (.bat) file in sharepoint 2010 Hi, I did Scheduled Backup Successfully in Sharepoint 2010, i saved
ODBC Drivers for QuickBooks, Salesforce, SAP, MSCRM, SharePoint … Free Trial! | http://www.eggheadcafe.com/community/other/39/10449549/bat-files.aspx | CC-MAIN-2013-20 | refinedweb | 1,529 | 76.72 |
Unless you live under a rock, no matter where you looked in the past few weeks, you have probably bumped into the new Animal Crossing.
In this article, we will recreate the Animal Crossing's World Bending effect in Unity. Trust me, you will be surprised how simple it is! But we're not stopping there; we will also explore how we can take it to the next level...
The World Bending effect brings in the horizon and the skyline, allowing both the ground and sky to be visible at the same time.
This is done by what Nintendo calls the “Rolling Log” effect, in which the terrain seems to bend and roll beneath the players while they wander around.
The effect has an interesting origin...
The producer of Animal Crossing: Wild World, Katsuya Eguchi, previously told IGN: "For the Nintendo DS, it made more sense to use this effect in order to show the sky in the upper screen, as opposed to a scrolling camera."
After creating a new Universal Render Pipeline project, I went ahead and created a fun scene we can work with. It has a boat for the player to control.
One thing to note regarding the scene: since the bending happens per-vertex, the effect requires meshes with a sufficient polygon count. If your mesh's poly count is too low, it will harm the desired look.
If you need more inspiration for your scenes, Animal Crossing is not the only game using this effect. It was also featured in games like DeathSpank, Subway Surfers, and more...
I created a Lit Shader Graph named World Bending.
It has two properties: a Texture and a Vector1. The Amount Vector1 property expects tiny numbers, so I set it to a slider between .005 and .1.
The effect only offsets the Y-axis, so we split it. We then add the computed Y position to the vertex world position, and connect the result to the Vertex Position of the Master node.
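For reference, here is a minimal C# mirror of the graph's vertex math. The graph itself is visual, so treat this as a sketch of the same idea rather than the exact node network; the amount parameter stands in for the Amount property:

using UnityEngine;

public static class WorldBending
{
    // Push a world-space point down by amount * (depth distance)^2,
    // just like the vertex offset the graph computes on the GPU.
    public static Vector3 Bend(Vector3 worldPosition, Vector3 cameraPosition, float amount)
    {
        float depth = worldPosition.z - cameraPosition.z; // distance "into" the scene
        worldPosition.y -= depth * depth * amount;        // tiny amounts (.005 to .1) suit the slider
        return worldPosition;
    }
}

Having this math on the CPU will also come in handy later, when we preview where objects actually render.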
The fundamental shader graph is now ready. Let’s see our shader in action...
An advantage of the World Bending effect is that it allows us to present only a fraction of the world at any given moment. At the same time, we aren't actually limited to the surface area of a cylinder or a sphere – so we can create infinitely large worlds.
I added a moderate amount of fog by going to Window > Rendering > Lighting Settings. In this window, I ticked Fog and set the appropriate distances for my scene.
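If you prefer configuring the fog from code, the same settings map to RenderSettings. The distances below are placeholders to tune for your scene:

using UnityEngine;

public class FogSetup : MonoBehaviour
{
    private void Awake()
    {
        // Equivalent to ticking Fog in the Lighting Settings window.
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.Linear;
        RenderSettings.fogColor = Color.white;  // match it to your sky for a soft horizon
        RenderSettings.fogStartDistance = 20f;
        RenderSettings.fogEndDistance = 60f;
    }
}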
I also added post-processing effects. Just a subtle touch of vignette and depth-of-field.
In Animal Crossing, whenever a dialogue kicks in, the camera changes two parameters. The first one is obviously the distance; the other one is the angle.
I added Cinemachine to the project and created two cameras. The Main camera angle is set to 50 and has a distance of 14. On the other hand, the Zoom camera angle is set to 30 and has a distance of 10.
I created a
CamZone script that expects a Cinemachine virtual camera:
On my bridge object, I created a trigger collider, added the script to it, and passed in the Zoom virtual camera.
Now when the boat passes under the bridge, we have a nice zooming behavior.
If you want to learn more about Cinemachine and Camera Zones, check out my article about it.
You may have noticed that as the world rolls, background objects in the distance abruptly disappear instead of rolling all the way down.
This is an issue caused by the camera’s frustum culling.
The camera’s frustum is the area of the scene that is visible to the player. The Unity renderer ignores the objects outside of the frustum. In the illustration, we see that the trees inside the frustum are highlighted and rendered.
Our shader runs on the GPU and modifies the position of the objects before rendering them.
The CPU is not aware of the modifications made on the GPU. We can see this conflict if we select an object and move it – notice how the selection outline is not synchronized with the rendered object.
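Since we mirrored the bend math in C# earlier, we can draw a gizmo that previews where an object will actually appear on screen. This is only a sketch; the amount field must match your material's Amount value:

using UnityEngine;

public class BentGizmo : MonoBehaviour
{
    [SerializeField] private float amount = 0.02f; // keep in sync with the Amount property

    private void OnDrawGizmos()
    {
        Camera cam = Camera.main;
        if (cam == null) return;

        // Recompute the shader's offset on the CPU to preview
        // where this object is actually drawn.
        Vector3 p = transform.position;
        float depth = p.z - cam.transform.position.z;
        p.y -= depth * depth * amount;

        Gizmos.color = Color.cyan;
        Gizmos.DrawWireSphere(p, 0.5f);
    }
}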
As our shader curves the world, it inserts more objects into the field of view. As a result, the Unity renderer's culling step, which runs on the CPU, ignores objects that are inside the field of view but outside the frustum.
The illustration presents the same trees, both with and without the shader. It highlights the trees that are inside the frustum and rendered. The blue trees are modified by the shader – notice that some blue trees are inside the frustum but aren’t highlighted and rendered.
Our simple solution is to render with the normal frustum, but when Unity culls are objects, we use a modified frustum.
We will create a custom orthographic frustum with a larger field of view.
Notice that the blue trees are always highlighted when inside the frustum, therefore they are always rendered when we see them.
using UnityEngine; using UnityEngine.Rendering; public class BendingManagerDemo : MonoBehaviour { #region MonoBehaviour private void OnEnable () { if ( !Application.isPlaying ) return; RenderPipelineManager.beginCameraRendering += OnBeginCameraRendering; RenderPipelineManager.endCameraRendering += OnEndCameraRendering; } private void OnDisable () { RenderPipelineManager.beginCameraRendering -= OnBeginCameraRendering; RenderPipelineManager.endCameraRendering -= OnEndCameraRendering; } #endregion #region Methods private static void OnBeginCameraRendering (ScriptableRenderContext ctx, Camera cam) { cam.cullingMatrix = Matrix4x4.Ortho(-99, 99, -99, 99, 0.001f, 99) * cam.worldToCameraMatrix; } private static void OnEndCameraRendering (ScriptableRenderContext ctx, Camera cam) { cam.ResetCullingMatrix(); } #endregion }
I created a
BendingManager script:
Gone are the culling issues!
By now, we have achieved the desired look & feel. But modifying the scene is, to say the least, not a pleasant experience.
In the shader graph, I added a global multi compile boolean keyword and named its reference
ENABLE_BENDING.
I modified the shader, so the effect only takes place when the keyword is enabled.
I added an attribute and some more lines in top of the
BendingManager script.
When it awakes, I enable and disable the keyword depending on whether we are in play mode or not.
[ExecuteAlways] public class BendingManagerDemo : MonoBehaviour { #region Constants private const string BENDING_FEATURE = "ENABLE_BENDING"; #endregion #region MonoBehaviour private void Awake () { if ( Application.isPlaying ) Shader.EnableKeyword(BENDING_FEATURE); else Shader.DisableKeyword(BENDING_FEATURE); } // ...
Now when the editor is not playing, the scene is presented normally.
I modified the shader by adding a horizontal curve in addition to the depth curve. This will result in an effect that resembles a sphere in oppose to a log.
Now when we move, you can see how the world bends in both axes. It seems as we are on the edge of a planet.
Speaking of a planet... This effect has 3 parameters that we can play with to achieve a different look & feel: curvature amount, camera rotation, and camera position.
Let’s look at what happens when we change the camera rotation to 90 degrees and move it further away.
We now have a planet we can sail on.
Altering the curvature amount will change the planet’s size.
If you want to download the project we have created, you can get the Unity project from GitHub.
I would love to hear your thoughts on Twitter. Be sure to follow for updates when we post new tutorials on YouTube. | https://notslot.com/blog/2020/04/world-bending-effect/ | CC-MAIN-2020-34 | refinedweb | 1,185 | 66.03 |
-- File created: 2008-01-12 16:06:32 -- |Holds the 'Rule' and 'Coadjute' types and relevant functions which act on -- them. For user code, the important parts here are the 'rule' family of -- functions. module Coadjute.Rule( Rule(..), Coadjute, runCoadjute, getUserArgs, rule, ruleM, rule', ruleM' ) where import Control.Arrow (first) import Control.Monad.State (StateT(..), modify, gets) import Control.Monad.Trans (MonadIO, liftIO) import qualified Data.Set as Set import Coadjute.CoData import Coadjute.Task (Task(..), Source, Target) data Rule = Rule { rName :: String , rTasks :: [Task] } data RuleList = RL [Rule] addRule :: Rule -> RuleList -> RuleList addRule r (RL rs) = RL (r:rs) -- |Coadjute is the main monad you'll be working in. You can use the 'rule' -- family of functions to add rules whilst within this monad, and you can have -- a look at what arguments were given to you with 'getUserArgs'. -- -- For your convenience, a 'MonadIO' instance is provided. newtype Coadjute a = Co { unCo :: StateT (RuleList, [String]) CoData a } instance Monad Coadjute where return = Co . return (Co rs) >>= f = Co (rs >>= unCo.f) instance MonadIO Coadjute where liftIO = Co . liftIO runCoadjute :: Coadjute a -> CoData ([Rule], a) runCoadjute (Co st) = do ua <- asks coUserArgs (ret, (RL l, _)) <- runStateT st (RL [], ua) return (reverse l, ret) -- | You should use this instead of 'System.Environment.getArgs', to let -- Coadjute handle its own command line arguments first. getUserArgs :: Coadjute [String] getUserArgs = Co $ gets snd -- |A rule for building targets individually. rule :: String -> [String] -> ([Source] -> Target -> IO ()) -> [([Source],Target)] -> Coadjute () rule name args action = Co . modify . first . addRule . Rule name . map (\(d,t) -> Task name (Set.fromList args) [t] d (action d t)) -- |A rule for building multiple targets at once. ruleM :: String -> [String] -> ([Source] -> [Target] -> IO ()) -> [([Source],[Target])] -> Coadjute () ruleM name args action = Co . modify . first . addRule . Rule name . map (\(d,t) -> Task name (Set.fromList args) t d (action d t)) -- | > rule' = flip rule [] rule' :: String -> ([Source] -> Target -> IO ()) -> [([Source],Target)] -> Coadjute () rule' = flip rule [] -- | > ruleM' = flip ruleM [] ruleM' :: String -> ([Source] -> [Target] -> IO ()) -> [([Source],[Target])] -> Coadjute () ruleM' = flip ruleM [] | http://hackage.haskell.org/package/Coadjute-0.1.1/docs/src/Coadjute-Rule.html | CC-MAIN-2014-52 | refinedweb | 336 | 66.23 |
HP 17BII Financial Business Calculator
Manual
Preview of first few manual pages (at low quality). Check before download. Click to enlarge.
About
Details
Brand: Hewlett Packard
Part Number: F2234A#ABA
Here you can find all about HP 17BII Financial Business Calculator, for example manual and review. You can also write a review.
[ Report abuse or wrong photo | Share your HP 17BII Financial Business Calculator photo ][ Report abuse or wrong photo | Share your HP 17BII Financial Business Calculator photo ]
User reviews and opinions
Comments posted on are solely the views and opinions of the people posting them and do not necessarily reflect the views or opinions of us.
Documents
238 238
Dtails sur les calculs Calculs TRI%
Table des matires
File name : F_print_MP03_050426-PRINT.doc
Print data : 2005/4/27
253 253
Cash-Flow Calculations Bond Calculations Depreciation Calculations Sum and Statistics Forecasting Equations Used in (Chapter 14) Canadian Mortgages Odd-Period Calculations Advance Payments Modified Internal Rate of Return
Menu Maps
266 266
RPN: Summary About RPN About RPN on the hp 17bII+ Setting RPN Mode Where the RPN Functions Are Doing Calculations in RPN Arithmetic Topics Affected by RPN Mode Simple Arithmetic Calculations with STO and RCL Chain CalculationsNo Parentheses!
RPN: The Stack What the Stack Is Reviewing the Stack (Roll Down) Exchanging the X- and Y-Registers in the Stack ArithmeticHow the Stack Does It How ENTER Works Clearing Numbers The LAST X Register Retrieving Numbers from LAST X
Reusing Numbers Chain Calculations Exercises
276 283
RPN: Selected Examples Error Messages
List of Examples
The following list groups the examples by category. Getting Started Using Menus Using the Solver Arithmetic Calculating Simple Interest Unit Conversions Simple Interest at an Annual Rate (RPN example on page 276) General Business Calculations Percent Change Percent of Total Markup as a Percent of Cost Markup as a Percent of Price Using Shared Variables Return on Equity Currency Exchange Calculations Calculating an Exchange Rate Storing an Exchange Rate Converting between Hong Kong and U.S Dollars Time Value of Money A Car Loan A Home Mortgage A Mortgage with a Balloon Payment A Savings Account
53 159
208 214
An Individual Retirement Account Calculating a Lease Payment Present Value of a Lease with Advanced Payments and Option to Buy Displaying an Amortization Schedule for a Home Mortgage Printing an Amortization Schedule Calculations for a Loan with an Odd First Period Discounted Mortgage APR for a Loan with Fees (RPN example on page 276) Loan from the Lenders Point of View (RPN example on page 277) Loan with an Odd First Period Loan with an Odd First Period Plus Balloon Canadian Mortgage Leasing with Advance Payments A Fund with Regular Withdrawals Savings for College (RPN example on page 278) Tax-Free Account (RPN example on page 280) Taxable Retirement Account (RPN example on page 282) Insurance Policy Interest Rate Conversions Converting from a Nominal to an Effective Interest Rate Balance of a Savings Account Cash Flow Calculations Entering Cash Flows Calculating IRR and NPV of an Investment An Investment with Grouped Cash Flows An Investment with Quarterly Returns Modified IRR
MAIN menu FIN BUS SUM TIME SOLVE CURRX
BUS menu %CHG %TOTL MU%C MU%P EXIT
MU%C menu
Press to choose the BUS menu. Then press to choose the MU%C menu. Press e to return to the previous menu. Pressing times returns you to the MAIN menu. Press
e enough
@A to return to the MAIN menu directly.
When a menu has more than six labels, the label appears at the far right. Use it to switch between sets of menu labels on the same level. Example: Using Menus. Refer to the menu map for MU%C (above) along with this example. The example calculates the percent markup on cost of a crate of oranges that a grocer buys for $4.10 and sells for $4.60. Step 1. Decide which menu you want to use. The MU%C (markup as a percent of cost) menu is our destination. If its not obvious to you which menu you need, look up the topic in the subject
index and examine the menu maps in appendix C. Displaying the MU%C menu: Step 2. Step 3. Step 4. To display the MAIN menu, press @A. This step lets you start from a known location on the menu map. Press to display the BUS menu. Press to display the MU%C menu.
Using the MU%C menu: Step 5. Key in the cost and press to store 4.10 as the COST.
Step 6. Step 7.
Key in the price and press to store 4.60 as the PRICE. Press to calculate the markup as a percent of cost. The answer: .
Step 8.
To leave the MU%C menu, press e twice (once to get back to the BUS menu, and again to get to the MAIN menu) or @A (to go directly to the MAIN menu).
Calculations Using Menus
Using menus to do calculations is easy. You dont have to remember in what order to enter numbers and in what order results come back. Instead, the menus guide you, as in the previous example. All the keys you need are together in the top row. The menu keys both store numbers for the calculations and start the calculations. The MU%C menu can calculate M%C, the percent markup on cost, given COST and PRICE.
Keys: Display: Keys: Display: 4.60 Store 4.60
ee < s
Switches to TVM menu; NOM% value is still in calculator line. Stores adjusted nominal interest rate in I%YR.
Sets 12 payments per 12 e year; Begin mode.
@ 25&
Stores 84 deposit periods, $25 per deposit, and no money before the first regular deposit. Value of account in 7 years.
If the interest rate were the unknown, you would first do the TVM calculation to get I%YR (5.01). Then, in the ICNV PER menu, store 5.01 as NOM% and 12 as P for monthly compounding. Calculate EFF% (5.13). Then change P to 365 for daily compounding and calculate NOM% (5.00). This is the banks rate.
Cash Flow Calculations
The cash flow (CFLO) menu stores and analyzes cash flows (money received or paid out) of unequal (ungrouped) amounts that occur at regular intervals.* Once youve entered the cash flows into a list, you can calculate: The total amount of the cash flows. The internal rate of return (IRR%). The net present value (NPV), net uniform series (NUS), and net future value (NFV) for a specified periodic interest rate (I%). You can store many separate lists of cash flows. The maximum number depends on the amount of available calculator memory.
The CFLO menu
The CFLO menu creates cash-flow lists and performs calculations with a list of cash flows.
* You can also use CFLO with cash flows of equal amounts, but these are
usually handled more easily by the TVM menu.
7: Cash Flow Calculations
Table 7-1. CFLO Menu Labels Menu Label Description
Accesses the CALC menu to calculate TOTAL, IRR%, NPV, NUS, NFV. Allows you to insert cash flows into a list. Deletes cash flows from a list. Allows you to name a list. Allows you to switch from one list to another or create a new list. Turns the prompting for #TIMES on and off.
To see the calculator line when this menu is in the display, press I once. (This does not affect number entry.) To see this menu when the calculator line is in the display, press
The sign conventions used for cash flow calculations are the same as those used in time-value-of-money calculations. A typical series of cash flows is one of two types: Ungrouped cash flows. These occur in series of cash flows without groups of equal, consecutive flows.* Because each flow is different from the one before it, the number of times each flow occurs is one.
The name can be up to 22 characters long and include any character except: x ( ) < > : = space * But only the first three to five characters (depending on letter widths) of the name are used for a menu label. Avoid names with the same first characters, since their menu labels will look alike. Viewing the Name of the Current List. Press , then
* SUM does accept these exceptional characters in list names, but the Solver
functions SIZES and ITEM do not.
126 10: Running Total and Statistics
When you press , the SUM list that appears is the last one used. To start a new list or switch to a different one, the current list must be named or cleared. If it is named, then: 1. Press. The GET menu contains a menu label for each named list plus. 2. Press the key for the desired list. ( brings up a new, empty list.)
Clearing a SUM List and Its Name
2. If the list is named, youll see Press to remove the name. Press to retain the name with an empty list. To remove just one value at a time from a list, use.
Doing Statistical Calculations (CALC)
Once you have entered a list of numbers, you can calculate the following values. For one variable: the total, mean, median, standard deviation, range, minimum, and maximum. You can also sort the numbers in order of increasing value. For two variables: x-estimates and y-estimates (this is also called forecasting), the correlation coefficient for different types of curves (this is curve-fitting), the slope and y-intercept of the line, and summation statistics. You can also find the weighted mean and the grouped standard deviation.
10: Running Total and Statistics 127
Calculations with One Variable
The CALC menu calculates the following statistical values using one SUM list.
Table 10-2. The CALC Menu for SUM Lists Menu Key Description
Calculates the sum of the numbers in the list. Calculates the arithmetic mean (average). Calculates the median. Calculates the standard deviation.* Calculates the difference between the largest and smallest number.
Finds the smallest (minimum) number in the list. Finds the largest (maximum) number in the list. Sorts the list in ascending order. Displays a series of menus for calculations with two variables for curve fitting, estimation, weighted mean and grouped standard deviation, and summation statistics.
* The calculator finds the sample standard deviation. The formula assumes
that the list of numbers is a sampling of a larger, complete set of data. If the list is, in fact, the entire set of data, the true population standard deviation can be computed by calculating the mean of the original list, placing that value into the list, and then calculating the standard deviation.
Example: Mean, Median, and Standard Deviation. Suppose your shop had the following phone bills during the past six months:
Linear Curve Fit y y
Exponential Curve Fit
Logarithmic Curve Fit y y
Power Curve Fit
* The exponential, logarithmic, and power models are calculated using
transformations that allow the data to be fitted by standard linear regression. The equations for these transformations appear in appendix B. The logarithmic model requires positive x-values; the exponential model requires positive y-values; and the power curve requires positive x- and y-values.
10: Running Total and Statistics 133
To do curve fitting and forecasting : 1. Enter the data into two SUM lists: one for the x-values and one for the y-values. Make sure each list has the same number of items so that the items are in matched pairs. 2. From the SUM menu, press to display a menu of SUM-list names. The current list is labeled unless named otherwise. 3. Press a menu key to select a list of x-values (independent variable). 4. Select a list of y-values (dependent variable). 5. Now you see the FRCST menu. Whichever curve-fitting model was used last is named in the display. If you want to select a different model, press , and then the menu key for the model.
6. To calculate the curve-fitting results, press, , and. 7. To forecast (estimate) a value: a. b. Key in the known value and press the menu key for that variable. Press the menu key for the variable whose value you want to forecast.
Example: Curve Fitting. BJs Dahlia Garden advertises on a local radio station. For the past six weeks, the manager has kept records of the number of minutes of advertising that were purchased, and the sales for that week.
134 10: Running Total and Statistics
Number of Minutes of Radio Advertising (x-values, MINUTES)
Week 1 Week 2 Week 3 Week 4 Week 5 Week 4
Dollar Sales (y-values, SALES)
$1,400 $ 920 $1,100 $2,265 $2,890 $2,200
BJs wants to determine whether there is a linear relationship between the amount of radio advertising and the weekly sales. If a strong relationship exists, BJs wants to use the relationship to forecast sales. A graph of the data looks like this:
Acknowledging an Appointment
To acknowledge the appointment and clear the message, press any key (except @) during the beeping. Appointments not acknowledged within 20 seconds become past due. When an appointment comes due, the alarm starts beeping and the alarm annunciator ( ) is displayed, even if the calculator was off.* The message (or, if none, the time and date) is displayed.
* If the calculator is in the middle of a complex calculation when an
appointment comes due, the alarm annunciator comes on and the calculator beeps once. When the calculation is done, the alarm goes off.
The beeping can be suppressed or restricted to appointments. See Beeper
On and Off, page 36.
11: Time, Appointments, and Date Arithmetic 147
Unacknowledged Appointments
An appointment not acknowledged during its alarm becomes past due. The alarm annunciator remains on. To acknowledge a past-due appointment: 1. Press . 2. Press the menu key for the past-due appointment. 3. Press e to return to the APPT menu. The acknowledged appointment is no longer listed as past due. A repeating appointment is deactivated while it is past due and will not go off subsequently until the past-due appointment has been acknowledged.
Clearing Appointments
To cancel an appointment or to get rid of a repeating appointment, you need to clear the appointment. Clearing changes the date and time to 00/00/00, 12:00 AM, and removes the message and the repeat interval. To clear an appointment, press the menu label for that appointment and press @c To clear all ten appointments, display the APPT menu (the menu with , etc.) and press @c. Example: Clearing and Setting an Appointment. Today is Sunday, April 20, 2003. You want to set appointment #4 to go off every Tuesday at 2:15 p.m. to remind you of a staff meeting. Assume 12-hour time format and month/day/year date format.
Displays setting for
148 11: Time, Appointments, and Date Arithmetic
appointment #4. Clears appt. #4. Displays RPT menu. Sets repeat interval. Stores appt. time and supplies current date. Sets appt. time to PM. Stores appt. date. Enters message: staff.
Returns to APPT menu Appt. 4 is set.
Date Arithmetic (CALC)
The CALC menu performs date arithmetic: Determines the day of the week for any date. Determines the number of days between dates using one of three calendarsactual, 365-day, or 360-day. Adds or subtracts days from a date to determine a new date. The calendar for date arithmetic runs from October 15, 1582 to December 31, 9999. To display the CALC menu, press , then.
11: Time, Appointments, and Date Arithmetic 149
Table 11-4. CALC Menu Labels for Date Arithmetic Menu Label Description
Stores or calculates a date. Also displays the day of the week. If you omit the year, the calculator uses the current year. Stores or calculates the number of actual days between DATE1 and DATE2 , recognizing leap years. Calculates the number of days between DATE1 and DATE2 using the 360-day calendar (30-day months). Calculates the number of days between DATE1 and DATE2, using the 365-day calendar, ignoring leap years. A shortcut: recalls the current date, which can then be stored in DATE1 or DATE2.
The calculator retains the values for the TIME CALC variables DATE1, DATE2, DAYS until you clear them by pressing @c while the CALC menu is displayed. To see what value is currently stored in a variable, press label.
Determining the Day of the Week for Any Date
To find the day of the week for any date, key in the date and press or.
Calculating the Number of Days between Dates
To calculate the number of days between two dates: 1. Key in the first date (for todays date, use ) and press.
150 11: Time, Appointments, and Date Arithmetic
2. Key in the second date and press. 3. Press , , or to calculate the number of days using that calendar. Example: Calculating the Number of Days between Two Dates. Find the number of days between April 20, 2003 and August 2, 2040, using both the actual calendar and the 365-day calendar. Assume the date format is month/day/year.
4.202003
Displays CALC menu. Stores Apr. 20, 2003 as first date and displays its day of the week. Stores Aug. 2, 2040 as second date. Calculates actual number of intervening days.
8.022040
Calculates number of intervening days by a 365-day calendar.
Calculating Past or Future Dates
To calculate a date a specified number of days from another date: 1. Key in the known date (for todays date, use ) and press. 2. Key in the number of days. This number should be negative if the unknown date precedes the known date. Press.
11: Time, Appointments, and Date Arithmetic 151
3. Press. This calculation always uses the actual calendar. Example: Determining a Future Date. On February 9, 2003, you purchase a 120-day option on a piece of land. Determine the expiration date. Assume the date format is month/day/year.
2.092003
Displays CALC menu. Stores Feb. 9, 2003.
Stores number of days into the future. Calculates expiration date (DATE2).
152 11: Time, Appointments, and Date Arithmetic
The Equation Solver
The Equation Solver (the SOLVE menu) stores equations that you enter and creates menus for them. You can then use those menus to do calculations. Enter Solver equations in algebraic form regardless of the calculation mode (ALG or RPN). The Solver can store many equationsthe number and length of equations is limited only by the amount of memory available. The equations are stored in a list.
DELETE
Solver Example: Sales Forecasts
Suppose part of your job includes making sales forecasts, and that these forecasts are revised based on new information. For instance, A change in the price of the product will affect sales by a forecasted percentage, A%. A change in sales-force training will affect sales by a forecasted percentage, B%. A competitors new product will affect sales by a forecasted percentage, C%.
SIZEC(CFLO-listname) SIZES(SUM-listname) SPFV(i%:n)
SPPV(i%:n)
SQ(x) SQRT(x) #T(CFLO-listname:flow#) TRN(x:y)
USFV(i%:n)
USPV(i%:n)
12: The Equation Solver 171
Example Using a Solver Function (USPV): Calculations for a Loan with an Odd First Period. Suppose an auto purchase is financed with a $6,000 loan at 13.5% annual interest. There are 36 monthly payments starting in one month and five days. What is the payment amount? Use the following formula when the time until the first payment is more than one month but less than two months. Interest for this odd (non-integer) period is calculated by multiplying the monthly interest by the number of days and dividing by 30. The formula for this loan is:
N ANNI + ANNI DAYS 1200 PV 1 + + PMT ANNI 1200
= 0
where:
ANNI the annual percentage interest rate. N the number of payment periods. DAYS the number of leftover, odd days (an integer from 0 through 30). PV the amount of the loan. PMT the monthly payment.
The formula can be rearranged and simplified using USPV, the Solver function for returning the present value of a uniform series of payments: The keystrokes are: PV
*( 1 + ANNI / 1200 * DAYS / 30 ) + PMT * USPV ( ANNI / 12:N )= 0
172 12: The Equation Solver
Keys: @]
(type in equation as shown above)
Displays SOLVE menu and bottom of Solver list. Displays ALPHA menu. Remember that the colon is located after.
(Press ) Enters equation, verifies it, and creates menu. Stores loan amount in
6000 13.5
Stores annual percent interest in ANNI. Stores number of odd days in
Stores number of payments in
Calculates monthly PMT of $203.99.
12: The Equation Solver 173
Conditional Expressions with IF
Equations can include conditional expressions using the function IF. The syntax of the IF function is: IFconditional expression algebraic expression algebraic expression
@c e
v1000000 * v12 %/
12 1000000
v- 3 %=
Loan with an Odd (Partial) First Period
The TVM menu deals with financial transactions in which each payment period is the same length. However, situations exist in which the first payment period is not the same length as the remaining periods. This first period is sometimes called an odd or partial first period. The following Solver equation calculates N, I%, PV, PMT, or FV for transactions involving an odd first period, using simple interest for the odd period. The formula is valid for 0 to 59 days from inception to
14: Additional Examples 195
first payment, and a 30-day month is assumed.* A Solver Equation for Odd-Period Calculations: (For the character, press .)
PV = the loan amount. I% = the periodic interest rate. DAYS = the actual number of days until the first payment is made. PMT = the periodic payment. N = the total number of payment periods. FV = the balloon payment. A balloon payment occurs at the end of the last (Nth) period and is in addition to any periodic payment.
The following examples assume that you have entered the equation named ODD, above, into the Solver. For instructions on entering Solver equations, see Solving Your Own Equations, on page 29. Example: Loan with an Odd First Period. A 36-month loan for $4,500 has an annual interest rate of 15%. If the first payment is made in 46 days, what is the monthly payment amount? Select equation ODD in the Solver.
Creates menu. 36 payment periods. Stores loan amount. Stores periodic, monthly
v15 / 12
* You do not need to specify Begin or End mode. If the number of days until the
first payment is less than 30, Begin mode is assumed. If the number of days until the first payment is between 30 and 59, inclusive, End mode is assumed.
196 14: Additional Examples
interest rate. Stores days until first payment. No balloon payment. Calculates payment.
Example: Loan with an Odd First Period Plus Balloon. A $10,000 loan has 24 monthly payments of $400, plus a balloon payment of $3,000 at the end of the 24th month. If the payments begin in 8 days, what annual interest rate is being charged? Select equation ODD.
v8.175 - 28 % v
0 3000
Modified Internal Rate of Return
When there is more than one sign change (positive to negative or negative to positive) in a series of cash flows, there is a potential for more than one IRR%. For example, the cash-flow sequence in the following example has three sign changes and hence up to three potential internal rates of return. (This particular example has three positive real answers: 1.86, 14.35, and 29.02% monthly.) The Modified Internal Rate of Return (MIRR) procedure is an alternative that can be used when your cash-flow situation has multiple sign changes. The procedure eliminates the sign change problem by utilizing reinvestment and borrowing rates that you specify. Negative cash flows are discounted at a safe rate that reflects the return on an investment in
14: Additional Examples 209
a liquid account. The figure generally used is a short-term security (T-bill) or bank passbook rate. Positive cash flows are reinvested at a reinvestment rate that reflects the return on an investment of comparable risk. An average return rate on recent market investments might be used. 1. In the CFLO menu, calculate the present value of the negative cash flows (NPV) at the safe rate and store the result in register 0. Enter zero for any cash flow that is positive. 2. Calculate the future value of the positive cash flows (NFV) at the reinvestment rate and store the result in register 1. Enter zero for any cash flow that is negative. 3. In the TVM menu, store the total number of periods in N, the NPV result in PV, and the NFV result in FV. 4. Press to calculate the periodic interest rate. This is the modified internal rate of return, MIRR. Example: Modified IRR. An investor has an investment opportunity with the following cash flows:
Group No. of Months (FLOW no.) (#TIMES)
Cash Flow, $
180,000 100,000 100,200,000
Calculate the MIRR using a safe rate of 8% and a reinvestment (risk) rate of 13%.
Displays current cash-flow list. Clears current list or gets a
210 14: Additional Examples
new one. Stores initial cash flow,
180000 &
FLOW(0).
Stores FLOW(1) as zero since the flow amount is positive.
5I 100000 &
Stores 5 for #TIMES(1). Stores FLOW(2). Stores FLOW(2) 5 times. You can skip FLOW(3) and FLOW(4) because they are equal to zero for this part.
Stores monthly safe interest rate. Calculates NPV of negative cash flows. Stores NPV in register 0. Returns to CFLO menu. Clears list. Stores zero as FLOW(0). (Skip negative flows; store positive flows.)
Watch for this symbol in the margin earlier in the manual. It identifies keystrokes that are shown in ALG mode and must be performed differently in RPN mode. Appendixes D, E, and F explain how to use your calculator in RPN mode. The mode affects only arithmetic calculationsall other operations, including the Solver, work the same in RPN and ALG modes.
Setting RPN Mode
The calculator operates in either RPN (Reverse Polish Notation) or ALG (Algebraic) mode. This mode determines the operating logic used for arithmetic calculations.
To select RPN mode: Press
The calculator responds by displaying . This mode remains until you change it. The display shows the X register from the stack.
To select ALG mode: Press .
@>. The calculator displays
262 D: RPN: Summary
Where the RPN Functions Are
Function Name
ENTER LASTX R R X<>Y CHS
Definition
Enters and separates one number from the next. Recalls last number in X-register. Rolls down stack contents. Rolls up stack contents. X-register exchanges with Y-register. Changes sign.
Key to Use
= @L ~ (same as () [ (except in lists) x (same as )) &
D: RPN: Summary 263
Using INPUT for ENTER and for R. Except in CFLO and SUM lists, the I key also performs the E function and the ] key also performs the ~ function.
In lists: I stores numbers. Use = to enter numbers into the stack during arithmetic calculations. In lists: [ and ] move through lists. Use ~ to roll through stack contents.
Doing Calculations in RPN
Arithmetic Topics Affected by RPN Mode
This discussion of arithmetic using RPN replaces those parts of chapter 2 that are affected by RPN mode. These operations are affected by RPN mode: Two-number arithmetic (+, *, -, /, u). The percent function (%). The LAST X function (@L). See appendix E. RPN mode does not affect the MATH menu, recalling and storing numbers, arithmetic done inside registers, scientific notation, numeric precision, or the range of numbers available on the calculator, all of which are covered in chapter 2.
Simple Arithmetic
Here are some examples of simple arithmetic. Notice that
E separates numbers that you key in. The operator (+, -, etc.) completes the calculation. One-number functions (such as v) work the same in ALG and RPN
modes.
264 D: RPN: Summary
To select RPN mode, press @>.
To Calculate:
12 + x 3 12
276 F: RPN: Selected Examples
adjust the mortgage amount to reflect the points paid (PV = $60,000 2%). All other values remain the same (term is 30 years; no future value).
30 @ 11.0
R 2 %-
Example: Loan from the Lenders Point of View. A $1,000,000 10-year, 12% (annual interest) interest-only loan has an origination fee of 3 points. What is the yield to the lender? Assume that monthly payments of interest are made. (Before figuring the yield, you must calculate the monthly PMT = (loan x 12%) 12 mos.) When calculating the I%YR, the FV (a balloon payment) is the entire loan amount, or $1,000,000, while the PV is the loan amount minus the points.
F: RPN: Selected Examples 277
10 @ 1000000 E 12 % 12 /
Stores total number of payments. Calculates annual interest on $1,000,000. Calculates, then stores, monthly payment. Stores entire loan amount as balloon payment. Calculates, then stores, amount borrowed (total points). Calculates APRthe yield to lender.
1000000 3%-&
Example: Savings for College. Your daughter will be going to college in 12 years and you are starting a fund for her education. She will need $15,000 at the beginning of each year for four years. The fund earns 9% annually, compounded monthly. You plan to make monthly deposits, starting at the end of the current month. How much should you deposit each month to meet her educational expenses?
See figures 14-1 and 14-2 (chapter 14) for the cash-flow diagrams. Remember to press the = key for E while working in a list. (Pressing I will add data to the list, not perform an ENTER.)
278 F: RPN: Selected Examples
Step 1: Set up a CFLO list.
Sets initial cash flow, FLOW(0), to zero. Stores zero in FLOW(1) and prompts for the number of times it occurs.
12 E 12 * 1 - I
For E, press =, not I. Stores 143 (for 11 years, 11 months) in #TIMES(1) for FLOW(1).
15000 I
Stores amount of first withdrawal, at end of 12th year.
Stores cash flows of zero.. for the next 11 months. Stores second withdrawal, for sophomore year.
11 I 15000 II
F: RPN: Selected Examples 279
0 I 11 I 15000 II 0 I 11 I 15000 II
Stores cash flows of zero for the next 11 months. Stores third withdrawal, for junior year. Stores cash flows of zero for the next 11 months. Stores fourth withdrawal, for senior year.
Done entering cash flows;
gets CALC menu.
Step 2: Calculate NUS for the monthly deposit. Then calculate net present value.
298 Index
Iteration in Solver, 17983, 240, 24246
LN, 170 LNP1, 170 Loan amortizing, 7783 APR for, with fees, 193 LOG, 170 Logarithmic model, 130, 132, 133 Logarithms, 42, 170 Logical operators, 174 Low memory, 227 Low power, 224 and printing, 184 annunciator, 184
, 115 , 132 , 186 , 42 , 42
L, 44 in RPN, 273
L, 170 Language, setting, 224 Large number available, 47 in a list, 128 Large numbers, keying in and displaying, 47 Last result, copying, 44 LAST X register, RPN, 273 Leasing, 7477, 199200 LEFT-RIGHT, interpreting, 24246 Letter keys, 30 Linear estimation, 121, 13234 Linear model, 130, 133 Linear regression, 121 List. See CFLO list; SUM list; Solver list List, RPN, 264 rolling the stack, 269
, 132 , 109 , 49, 53 , 52, 128 , 128 , 128 , 128 , 132
in appointment setting menu, 145 in printer men, 186 , 143
, 56 key, 25
@A, 2226 @M, 37
Index 299
MAIN menu, 19 Manual, organization of, 16 Markup on cost, 49, 52 on price, 49, 52 Math in equations, 165, 167 MATH menu, 42, 260 MAX, 170 Mean, 251 calculating, 12830 weighted, 13839 Median, 251 calculating, 12830 Memory. See also Continuous Memory freeing, 227 insufficient, 227 losing, 229 using and reusing, 37 Menu labels, 19 maps, 25, 25460 Menus calculations with, 2728 changing, 25, 28 exiting, 28 names of, 161 printing values stored in, 18688 sharing variables, 53 Messages for appointments, 147 Messages, error, 283 MIN, Solver, 170
MOD, 170 Mode of payments (Begin and End), 64 Models, curve-fitting, 132, 133 Modes
, 36, 26162, 265 , 36, 261, 262
@>, 185 beeper, 36 double-space printing, 36, 185 menu map, 260 printer ac adapter, 36
Modified IRR, 20912, 253 Month/day/year format, 14344 Mortgage, 68, 69. See also Loan calculations, 6771, 7780 discounted or premium, 191 Moving average, 21719 MU%C, 50 equation, 247 MU%P, 50 equation, 247 Multiple equations, linking, 178 Multiplication in arithmetic, 21, 3840 in equations, 165
, 56 , 78
300 Index
in CFLO list, 9899 in SUM list, 126
equation, 249 Noise Declaration, 237 Nominal interest rate, 8487, 100 Non-integer period, 172 NOT, 174 Notes, discounted, 21617 NPV calculating, 100101 equation, 100, 248 Number lists. See CFLO list; SUM list; Solver list of days between dates, 14951 of decimal points, 47 of payments, in TVM, 62 range, 48 Numbers. See also Value entering, RPN, 264, 271 with exponents, 47 Numerical solutions, 17981 NUS, 100, 249
, 101 , 101 , 101 , 157 , 56 , 56 , 56 , 42 , 8586 @ , 63
N, non-integer, 63, 72 Names of equations, 161 of lists, clearing, 99 of variables, 166 Negative numbers in arithmetic calculations, 22 in cash-flow calculations, 9294 in TVM calculations, 64 Neighbors in Solver, 243 Nested IF function, in the Solver, 175 Net future value, 91, 101 Net present value, 91, 101 Net uniform series, 91, 101 NFV calculating, 91, 101
The gift that keeps on giving for Dads and Grads! The perfect gift is one that is memorable, fun, practical and that keeps giving for years to come. Too difficult to find for that Grad or Dad? See why HP calculators might be exactly what you are looking for.
Learn more
Volume 5 June 2008 Welcome to the fifth edition of the HP Solve newsletter. Learn calculation concepts, get advice to help you succeed in the office or the classroom, and be the first to find out about new HP calculating solutions and special offers. Featured Calculator
Your articles
Lets extrapolate the population growth in the US! Build upon what you have learned from the HP calculator e-newsletter in this month's exercise. With sequences and statistics and a little help from your HP calculator you can learn to extrapolate the US population growth.
The HP 12c: the industry standard for finance professionals and the worlds first horizontal financial calculator This oldie-but-goodie calculator revolutionized calculating and became a staple to millions of users. Read the stories of the lives the HP 12c has touched and learn how to get your own.
HP 17bII+ Financial Calculator Trust the HP 17bII+ financial calculator with even your heaviest work loads. This handy calculator is preferred in business, real estate and other financial endeavors and happens to be the HP calculator of the month!
HP calculator sales promotions buy in bulk and save! A new promotion was recently launched to create special discounts and savings for financial institutions using HP calculators. Get the promotion details and sneak-peaks into upcoming promotions!
Special Newsletter Discount As a reward for all the loyal readers out there, you can now save 10% on the HP 17bII+ Financial Calculator! Stay tuned next month for more great savings!
HP Calculator Blog Check out Wing Kin Cheung's blog, "The Calculating World with Wing and You."
View blog
The Calculator Club Join the Calculator Club and take advantage of: The Technical Corner Learn how your HP calculator finds the square root of a number in this months technical corner. RPN Tip #5 This months RPN tip teaches you to calculate the number of attendees at a sporting event and can be translated to other like mathematical equations. Learn more with this months RPN tip. Calculator games & Aplets PC/Mac screensavers & backgrounds HP Calculator fonts Custom calculator pouches HP Calculator forum
Register now
Upcoming HP Calculator events Event ASEE NECC AP Annual WIPTE Educause Dates - Location June 22-25 - Pittsburgh, PA June 29-July 2 - San Antonio, TX July 16-20 - Seattle, WA October 15-16 - Westville, IN October 28-31 - Orlando, FL
The gift that keeps on giving for Dads and Grads!
Article Next Are you looking for the perfect gift for the Dad or Grad in your life? Finding the right gift can be very difficult. We know you want to give something fun, but also something memorable, practical and something that will keep on giving for years to come. That perfect gift they will remember, it came from you! HP calculating solutions might be the perfect gift for your special Dad or Grad. HP recommends our most powerful calculators: HP 17bII+ Financial Calculator HP 35s Scientific Calculator HP 50g Graphing Calculator
Whether your special family member is or aspires to be a real estate tycoon, financial markets wizard, Nobel Prize winning scientist, world renowned mathematician, or professor emeritus in engineering at MIT, HP calculating solutions is the tool of choice for these intelligent leaders and many thought leaders around the world. Give them the gift that speaks to them, supports them, and enables them to be efficient, effective, and successful. Celebrate their achievements with HP calculating solutions. Save 10% on an HP 17bII+ Financial Calculator right now. Ideal for real estate, finance and business professionals, the HP 17bII+ features a simple keyboard, 250 menu-driven functions, RPN1 and algebraic. Click here for more information about the HP Calculators.
Feature calculator of the month: HP 17bII+ Financial Calculator
Previous Article Next New! HP 17bII+ Financial Calculator industrial design (silver). The simple-to-use HP 17bII+ Financial Calculator is built for the heaviest workloads of real estate, finance, and business professionals WW. Quickly calculate loan payments, interest rates and conversions, standard deviation, percent, TVM, NPV, IRR, cash flows, bonds and more. Features 28KB of user memory, over 250 menu-driven functions, RPN1 2
and algebraic entry-system logic, clock, calendar, HP solve application2, menus prompts and messages. Permitted for use on Certified Financial Planner (CFP) Certification Exam3 Fun facts about the new HP 17bII+ Financial Calculator (silver): Born: January 8, 2008 The 17bII+ (classic) replaced the 17BII, which was introduced in January of 1990 and it replaced the original HP17B, which was released on January 4th, 1988. The current HP17BII+ keyboard represents a return to the double-wide INPUT key which was present in the original HP17B (Jan 1988) and the HP17BII (Jan 1990). HP has used the model number 10 five times for scientific, printing, and business machines the original HP-10, HP-10C, HP-10B, HP-10BII, & HP10s. The model number 17 has been used for the "same" business machine with variations HP17B, HP17BII, HP17BII+ (gold) & HP17BII+ (silver).
Click here for more information about the HP 17bII+ Financial Calculator.
1. Reverse Polish Notation (RPN) is an efficient data-entry system that can significantly reduce keystrokes. More information is available at HP.com. 2. HP Solve is a time-saving application that allows you to solve for any variable without rewriting your equation. 3. CFP is a registered trademark of the Certified Financial Planner Board of Standards, Inc.
RPN Tip #5
Previous Article Next You are assigned the task of counting the people who attend a high school foot ball game by standing at the door. You dont have a mechanical push button counter, but you have your trusty RPN calculator. You prepare the machine to act as an electronic counter by pressing 1, ENTER, ENTER, ENTER, CHS, +. In place of CHS + you may save a keystroke if your machine has a correction key (such as the HP35s) by pressing it ( ). This also allows you to start a new count at any time. In place of CHS + you may also save a keystroke if you press zero. This method, however, will not terminate digit entry so the next keystroke should be your first count. You now have zero in the display (X register) and 1s in the Y, Z, & T registers. Every time you press the + key you will add 1 to the X register. This counter action occurs because of the RPN feature of replicating the T register every time the stack drops (as when you perform an addition). If you need to subtract a count press 1, - or -, CHS. Click here to learn more about RPN.
Lets extrapolate the population growth in the US!
Previous Article Next The previous activity looked at population data (overall and Hispanic) for both Texas and the United States. For this activity, we will focus on the Hispanic and overall population for the US. Examine the data. Does the US Hispanic population appear to be roughly linear? What about the total US? This activity will tie together two previous ideas you have worked with: sequences and statistics. First, you will explore the data using the ideas of common difference and common ratio from your work with sequences. You will then use the statistics aplet to see how accurate your sequence representations are by making predictions. You will also work with an aplet that has to be put on your calculator. Year US Hispanic Population 9,589,216 14,608,673 22,354,059 35,305,818 US Total Population 203,210,158 226,545,805 248,709,873 281,421,906
Exercise 1: Use the Sequence aplet to enter information about the US Total Population. 1. 2. What should U1(1) and U?
Now look at the US Hispanic Population. 5. 6. What should U2(1) and U.
Exercise 2: Lets now examine the data using the aplet TXY2K. Click here to download the TXY2K applet. Be sure to save the TXY2K applet to your desktop. Then connect your 39gs to your computer and be sure to 4
download the applet to calculator as a folder. Before you can answer any questions, you have to complete the downloading process of the aplet on your calculator. Once you have the aplet, make sure you can see the data for this problem in the NUM view. 1. Use the data for the total US population and find a model to fit the data. Which model did you choose? Why? Explain any similarities and differences between the model found using statistics and the formula you used in the sequence aplet. Use the data for the US Hispanic population and find a model to fit the data. Which model did you choose? Why? Explain any similarities and differences between the model found using statistics and the formula you used in the sequence aplet.
Exercise 3: In this exercise, rather than exploring the Hispanic population separate from the total US population, you will look at the ratio of the two populations. 1. To look at the US Hispanic population compared to the total US population, graph the year and the ratio of Hispanic to total. Describe the pattern you see. Use the statistics aplet to develop a model for the ratio of Hispanic to total. Use this model to predict when the Hispanic population will be at least 50% of the US population. Go back to your sequence aplet and look at the tables you created for the Hispanic population and total US population. According to this table, when is the Hispanic population at lease 50% of the total US population? Does this match your answer from the statistics model? Explain any differences that may occur.
Teacher Notes The following TEKS can be found in this activity. The TEKS are listed in grade-level order, not the order in which they appear in the activity. Also, some of the TEKS statements are abbreviated or paraphrased here. 6.2D: estimate and round to approximate reasonable results and to solve problems where exact answers are not required 6.3A: use ratios to describe proportional situations 6.3C: use ratios to make predictions in proportional situations 6.7: use coordinate geometry to identify location in two dimensions 7.5B: formulate problem situations when given a simple equation and formulate an equation when given a problem situation 7.7A: locate and name points on a coordinate plane using ordered pairs of integers 8.3A: compare and contrast proportional and non-proportional linear relationships 8.4: make connections among various representations of a numerical relationship 8.5A: predict, find, and justify solutions to application problems using appropriate tables, graphs, and algebraic equations Table 1: TEKS Covered, by Activity TEKS Covered 6.2D, 6.3A, 6.3C, 7.5B, 8.3A, 8.4, 8.5A 6.7, 7.5B, 7.7A, 8.4, 8.5A 6.2D, 6.3A, 6.3C, 6.7, 7.5B, 7.7A, 8.3A, 8.4, 8.5A
Activity Exercise 1 Exercise 2 Exercise 3 Answer Key 5
Exercise 1: 1. 2. U1(1) = 203,210,158 and U1(2) = 226,545,805 Students should realize they are looking at a common difference situation. Even though the differences are not exact, you are adding close to the same amount each time. Answers will vary. Students may just guess or base it on the approximate difference in the population in 1970 and 1980. Some student may add all the differences and divide by three to get the mean. For the answers shown here, we are using a common difference of 13,000,000
Below is the table based on the symbolic input shown in the previous problem.
There are two main differences. The values in the table appear in scientific notation and students will not be able to see the precise numbers. However, they should be able to see enough of the values to notice that these numbers are different than what was provided in the problem. 5. 6. U2(1) = 9,589,216 and U2(2) = 14,608,673 Students should realize that they are not dealing with a common difference in this problem. While they may not be able to immediately recognize a common ratio, it should be clear that if it is not a common difference, they should be checking for a common ratio. As with the common difference with the previous data, student answers will vary for the common ratio. They can use a guess and check method, choose only one of the ratios, or average the three. For the answers shown here, a common ratio of.15 is used.
Exercise 2: 1. A linear model since we used a common difference to set up an arithmetic sequence.
A linear model best represents an arithmetic sequence. However, the common difference used is not the same as the slope of our statistical model. An exponential model because we used a common ratio to define our sequence. An exponent model best represents a geometric sequence. However, the statistical fit is not the same as our sequence definition.
Exercise 3: In this exercise, rather than exploring the Hispanic population separate from the total US population, you will look at the ratio of the two populations. 1. The pattern appears to be exponential.
According to the statistical model, the US Hispanic population will be at least 50% between 2040 and 2050. A better approximation is between 2042 and 2043.
According to the table, the population will reach 50% sometime between 70 and 80 years past the original starting point of 1970, so between 2040 and 2050. 4. The data for the ratio in the table of sequences indicates that the choice of an exponential model is best. Differences in prediction values occur because of our choice of model. This is why predictions for population and other things will vary in different newspaper or magazine articles. When different mathematical models are used, different results can occur.
Click here for more information about the HP 39gs Graphing Calculator.
The industry standard for finance professionals and the worlds first horizontal financial calculator
Previous Article Next Some things just dont go out of style the original HP 12c hits 27 years in 2008. HP introduced a $150 pocket-sized device that revolutionized financial calculations for people on the move. Its innovative design and breakthrough Reverse Polish Notation (RPN) entry forever changed the way students and professionals reach their goals. Instantly recognized for its unique horizontal layout, the HP 12c financial calculator has sold in the millions to investors, real estate professionals, accountants, loan officers, business students and teachers. It is an iconic consumer electronics product that is still sold virtually unchanged under its original name and model number 27 years after it was introduced. The HP 12c thats sold today acts and looks just like it did when it was first snapped up by thousands of financial and real estate students and professionals at its worldwide debut. Few other products of industrial design have achieved such a classic distinction. HP still offers the HP 12c and it is still one of our most popular financial calculators today. It has touched many lives and you can see the story winners here: Tales of the AMAZING 12C story contest The HP 12c is approved for many courses and exams including the GARP FRM, CFP, CFA, AFP, CIIA, and MFA. Easily calculate loan payments, interest rates and conversions, standard deviation, percent, TVM, NPV, IRR, cash flows, bonds and more. Now you can solve all your financial calculations with the industry standard financial calculator. HP provides FREE calculator training specific to our calculator models. Check out the links below: Click here for FREE HP 12c CBT training Click here for FREE HP 12c intro to finance video training Click here for FREE HP 12c learning modules
HP calculator sales promotions buy in bulk and save!
Previous Article Next In May, HP officially launched our first external calculator sales promotion of the year. This program is aimed at providing additional discounts and benefits for financial institutions throughout the Western United States. The program, entitled Banking and Insurance Promotion, was launched through a direct mail campaign targeting specific financial institutions. With this promotion we are looking to create awareness of our high-end financial calculator line by informing customers of the many benefits of our calculators. We hope to illustrate how our 17bII+ financial calculator is a finance-savvy tool that can help people easily answer some of the toughest financial questions, such as: What will my new ARM payment be? Should I choose a car loan or a lease? Which APR will cost me the least? How much life insurance do I need? What will my savings be worth in 40 years? 8
In the upcoming months we expect to launch additional promotions aimed at our business customers throughout the world. These promotions will provide extra discounts, incentives, training, and programs for our biggest and best customers. Through these promotions our customers will receive extra discounts off our top-selling calculating solutions the larger the purchase the bigger the discount. We also plan to incorporate a buy 10 get one free promotion. This program will greatly reward our customers that purchase our calculators in bulk. Also, unlike the previous promotions this one will use a two-pronged approach by providing incentives to our customers and internal HP sales folks. It is clear our calculators are the perfect productivity tool for anyone involved in the financial industry. But, our calculators also make the perfect gift for your employees and your customers. So, next time you are thinking of a great gift consider purchasing HP calculators in bulk and at a discount for your customers and employees! For more information or to obtain a quote on a bulk purchase please go to the HP Calculator homepage, join the calculator club, and click on the link for HP discounts & special offers. On this page you will find the link to request discounted pricing for bulk purchases.
The Technical Corner
Previous Article Next Ever wonder how your HP Calculator finds the square root of a number? HP calculators stores numbers as a set of 3 items: a sign, + or -, a 12 digit number called mantissa always greater or equal to 100000000000 and an exponent. For example, 34.432 is stored as +, 344320000000, 1. To extract a square root of a number, the calculator makes use of the following mathematical fact:
n2 + n n2 result 0.5 = 0.5 * n + 2 = 2 result =1
Before calculating the square root, the calculator prepares the number by looking at the exponent. Mathematically, the exponent of the square root of a number is of the exponent of the original number (1e2=100 is the square root of 1e4=10000). But, there is an issue if the original exponent is odd, in this case, you can not divide it by 2, so if the exponent is even, the exponent is first decremented and the mantissa multiplied by 10 (which yield the same number). The number is then divided by 2 (because the formula equates the sum with n2/2). Then comes the real work, calculating the square root digits. To do that, the calculator successively removes result +0.5 from the original number with result incrementing from 0 to 9 until it can not do it anymore. This gives us n, an approximation of the square root of n2/2. Then the calculator repeats the process for the next digit until it has done it for all digits of the mantissa. Let us look at the algorithm in a pseudo computer language and try to apply it to the number 2 (sign:+ mantissa:200000000000 exponent:0) // calculate exponent of the result and modify the mantissa as needed. 9
if exponent is odd then mantissa= mantissa/10 and exponent= exponent-1 exponent= exponent/2 // divide the mantissa by 2 as per the algorithm mantissa= mantissa/2 // initialize increment for the first digit. // note that increment/2 is 0.5*increment! increment= 100000000000 result= 0 for i=12 to 0 while decrement i each time { while mantissa >= result+increment/2 repeat {
// for each of the 12 digits // see if we can remove result+0.5 // from mantissa // while we can do it, do it! // number = number // result+increment*0.5 // increment result // next digit! // next increment
mantissa= mantissa-result-increment/2
result= result + increment } mantissa= mantissa*10 increment= increment/10
// and we are done. exponent is the result exponent. // and result if the mantissa of the result! The following table shows the content of the various variables during the execution of the algorithm and show how the square root of 2 appears during the process. i mantissa 100,000,000,000 200,000,000,000 595,000,000,000 302,000,000,000 191,800,000,000 503,795,000,000 795,315,500,000 882,088,750,000 335,606,320,000 527,636,078,000 103,372,009,355 437,705,999,155 result 100,000,000,000 140,000,000,000 141,000,000,000 141,400,000,000 141,420,000,000 141,421,000,000 141,421,300,000 141,421,350,000 141,421,356,000 141,421,356,200 141,421,356,230 141,421,356,237 increment 100,000,000,000 10,000,000,000 1,000,000,000 100,000,000 10,000,000 1,000,000 100,000 10,000 1,10 1
Special Newsletter Discount
Previous Article Save 10% on an HP 17bII+ Financial Calculator right now. Ideal for real estate, finance and business professionals, the HP 17bII+ features a simple keyboard, 250 menu-driven functions, RPN1 and algebraic 10.
Travelmate-C100 LE32A41B CW29M64N HPD-10 PV-DC252D MEX-DV900 Bipac 5200 SR-28NMB KD-400Z Black Evo4 DEQ-P7000 Lycosa F800 F801 Slide W 83 ML-1451N DT-560 MP-C941 SV-MP805V Cyclecomputing A8 Vulcan 1700 Stories TX-P37x20E M200 C Citadel XR1 Imedia 2579 KX-TGA731EX 702NK HT502 EW1248W Macrozoom DPF-V1000N HT-X720G OH120-180 Pspaa FFH-DV25AD FW-M355-22 Roland DD-6 MS-50 DI-32Q82 SC-PT673 Gigaset 2015 DI810-2 PM-730C HD081GJ Review AM-957B HTR-5930 Kxtg2512UA SL 65 Audio EQ-2 2 4GHZ Nokia 6061 RS2625 Toyota SP10 Abit SL6 Zanussi T633 SGH-J208 Updatecd3 8 450s 480S Edition Citation A NF-M2-nview CDP-XB630 TG100 5470C 5 5 Servers DSC-W370 G CTY 105 DMT-8VL AM100 W2261VG Massage MAT - 2000 AD2022 SGH-Z540 Processor NAS-C5E PMA-1500RII Dtxtreme III Strike C-500 Zoom MS-192A LE22C450e1W CQ-VA707 MAC 300 LV-7215 Serie 7 Binatone X350 MR314 Tpmc-8X Creator 4955 UE-40C8700XS Abacus Singer 4525 Cmx-1242 CCD-TR417E Convertible 2002 DVD142TV CDE30 | http://www.ps2netdrivers.net/manual/hp.17bii..financial.business.calculator/ | CC-MAIN-2015-11 | refinedweb | 9,414 | 64.1 |
robust mqtt publish freezes randomly on Nb-IoT
- randomrnti last edited by
I've been having an intermittent problem that will occur randomly within 10 minutes to a couple hours where it will freeze. I've troubleshooted it to getting stuck on simple mqtt's socket write of the payload:
def publish(self, topic, msg, retain=False, qos=0): .... self.sock.write(msg)
I've observed that the pycom will keep sending publishes even though the payload wouldn't make it to the broker. It would continue to do this (34 publishes) until it eventually freezes on the socket write. I'm not getting any error messages and it appears to stay stuck for 7 minutes before it's able to send payloads again. I can reboot the device and it will connect broker first time and successfully send the payloads. I've altered the sleep time to 5 seconds but I still experience the same issue. Does anyone have some insight as I'd like some method to be able to catch this issue and attempt a disconnect and reconnect to the broker. Below is the code I'm using:
lte = LTE() def connectNbIoT(): try: print("Connecting to NB-IoT...") pycom.rgbled(0x7f7f00) # yellow if lte.isconnected(): print('Disconnecting...') lte.disconnect() time.sleep(1.5) if lte.isattached(): print('Dettaching...') lte.dettach() time.sleep(1.5) while not lte.isattached(): print('Attaching...') pycom.rgbled(0xff7f00) # yellow lte.attach() time.sleep(20) while not lte.isconnected(): print('Connecting...') pycom.rgbled(0x7f007f) # purple lte.connect() time.sleep(2.5) print("Connected to NB-IoT") pycom.rgbled(0x00007f) # blue except: print("Could not connect to Nb-IoT") return False return True def checkNbIoTConnection(): if (lte.isconnected()): return connectNbIoT() while True: if connectNbIoT(): break client = MQTTClient("123456789", "<url>", port=1883, keepalive=180) def connectMQTT(): try: pycom.rgbled(0x00007f) # blue print("Connecting to MQTT broker...") client.connect() print("Connected to MQTT broker...") pycom.rgbled(0x007f00) # green return True except OSError as e: print("MQTT error occurred: %r" % e) return False while True: if connectMQTT(): break topic = "devices/Pysense.12345" payload = {} while True: checkNbIoTConnection() while True: try: pitch = acc.pitch() roll = acc.roll() temperature = si.temperature() humidity = si.humidity() battery_voltage = py.read_battery_voltage() battery_level = int(((battery_voltage - PYSENSE_MIN_VOLTAGE) / (PYSENSE_MAX_VOLTAGE - PYSENSE_MIN_VOLTAGE)) * 100) if (battery_level > 100): battery_level = 100 elif (battery_level < 0): battery_level = 0 except OSError as e: print("error with bus") continue break payload["data"] = {"roll": roll, "pitch": pitch, "humidity": humidity, "temperature": temperature, "batteryLevel": battery_level, "batteryVoltage": battery_voltage} print("Sending " + json.dumps(payload)) try: client.publish(topic=topic, msg=json.dumps(payload)) except: print("MQTT Publish exception") connectMQTT() print("Finish Send") time.sleep(0.2)
- randomrnti last edited by
I've done some further troubleshooting and tried the following methods to get it to send consistently:
Change QoS to 1. I found that I still encountered the same issue before where it got stuck after a random amount of time. This also delayed each additional message by about 1 - 5 seconds so I removed the time.sleep(0.2)
Wait for ping message after every 10 publishes. I found that this also ended up getting stuck as well. Again I removed the sleep cause the delay was between 1-5 seconds on the ping.
if (count == 10): client.ping() client.wait_msg() count = 0 count += 1
Ping message after every publish. This didn't suffer the problem of freezing during the hour that I tried. I did find however that the duration for it to get a successful ping took longer as time went on. From about 1-3 seconds average to about 7-10 seconds average after about an hour.
Connect and disconnect the client after every publish. This seems to working the best, it hasn't frozen within the 30 minutes I've been testing it and each message is taking 3-5 seconds to send to the broker.
If anyone has any further suggestions it would be appreciated. | https://forum.pycom.io/topic/4418/robust-mqtt-publish-freezes-randomly-on-nb-iot/?page=1 | CC-MAIN-2019-51 | refinedweb | 650 | 50.12 |
JSTL Custom Tags
like the code is provided for Struts and other applications, step by step code for JSTL custom Tags creation and their usage in a JSP page should be provided
in the same way for log files also
like a log file should be there and when we give userid
Code
Free download
jsp with jstl
i have the folowing error while trying to print prime number using jsp array and jstl foreach loop.
<%
int[] prime=new int[20];
%>
<c:forEach
<%
prime[i]= i+1;
%>
WHILE I AM COMPILING THE FOLLOWING
Java :
util packages in java
util packages in java write a java program to display present date and after 25days what will be the date?
import java.util.*;
import java.text.*;
class FindDate{
public static void main(String[] args
Defining and using custom tags - JSP-Servlet
an example code with explanation. Hi Friend,
A custom tag is a user... of custom tags.
For more information,visit the following links:
http...Defining and using custom tags Hi Sir,
I want to know | http://www.roseindia.net/tutorialhelp/allcomments/1109 | CC-MAIN-2014-52 | refinedweb | 174 | 64.75 |
If you have a few years of experience in Computer Science or research, and you’re interested in sharing that experience with the community, have a look at our Contribution Guidelines.
F-1 Score for Multi-Class Classification
Last modified: October 19, 2020
1. Introduction
In this tutorial, we’ll talk about how to calculate the F-1 score in a multi-class classification problem. Unlike binary classification, multi-class classification generates an F-1 score for each class separately.
We’ll also explain how to compute an averaged F-1 score per classifier in Python, in case a single score is desired.
2. F-1 Score
F-1 score is one of the common measures to rate how successful a classifier is. It’s the harmonic mean of two other metrics, namely: precision and recall. In a binary classification problem, the formula is:
The F-1 Score metric is preferable when:
- We have imbalanced class distribution
- We’re looking for a balanced measure between precision and recall (Type I and Type II errors)
As the F-1 score is more sensitive to data distribution, it’s a suitable measure for classification problems on imbalanced datasets.
3. Multi-Class F-1 Score Calculation
For a multi-class classification problem, we don’t calculate an overall F-1 score. Instead, we calculate the F-1 score per class in a one-vs-rest manner. In this approach, we rate each class’s success separately, as if there are distinct classifiers for each class.
As an illustration, let’s consider the confusion matrix below with a total of 127 samples:
Now let’s calculate the F-1 score for the first class, which is class a. We first need to calculate the precision and recall values:
Then, we apply the formula for class a:
Similarly, we first calculate the precision and recall values for the other classes:
The calculations then lead to per-class F-1 scores for each class:
4. Implementation
In the Python sci-kit learn library, we can use the F-1 score function to calculate the per class scores of a multi-class classification problem.
We need to set the average parameter to None to output the per class scores.
For instance, let’s assume we have a series of real y values (y_true) and predicted y values (y_pred). Then, let’s output the per class F-1 score:
from sklearn.metrics import f1_score f1_score(y_true, y_pred, average=None)
In our case, the computed output is:
array([0.62111801, 0.33333333, 0.26666667, 0.13333333])
On the other hand, if we want to assess a single F-1 score for easier comparison, we can use the other averaging methods. To do so, we set the average parameter.
Here we’ll examine three common averaging methods.
The first method, micro calculates positive and negative values globally:
f1_score(y_true, y_pred, average='micro')
In our example, we get the output:
0.49606299212598426
Another averaging method, macro, take the average of each class’s F-1 score:
f1_score(y_true, y_pred, average='macro')
gives the output:
0.33861283643892337
Note that the macro method treats all classes as equal, independent of the sample sizes.
As expected, the micro average is higher than the macro average since the F-1 score of the majority class (class a) is the highest.
The third parameter we’ll consider in this tutorial is weighted. The class F-1 scores are averaged by using the number of instances in a class as weights:
f1_score(y_true, y_pred, average='weighted')
generates the output:
0.5728142677817446
In our case, the weighted average gives the highest F-1 score.
We need to select whether to use averaging or not based on the problem at hand.
5. Conclusion
In this tutorial, we’ve covered how to calculate the F-1 score in a multi-class classification problem.
Firstly, we described the one-vs-rest approach to calculate per class F-1 scores.
Also, we’ve covered three ways of calculating a single average score in Python.
If you have a few years of experience in Computer Science or research, and you’re interested in sharing that experience with the community, have a look at our Contribution Guidelines. | https://www.baeldung.com/cs/multi-class-f1-score | CC-MAIN-2022-27 | refinedweb | 703 | 51.99 |
import "golang.org/x/build/kubernetes/gke"
Package gke contains code for interacting with Google Container Engine (GKE), the hosted version of Kubernetes on Google Cloud Platform.
The API is not subject to the Go 1 compatibility promise and may change at any time. Users should vendor this package and deal with API changes.
func NewClient(ctx context.Context, clusterName string, opts ...ClientOpt) (*kubernetes.Client, error)
NewClient returns an Kubernetes client to a GKE cluster.
ClientOpt represents an option that can be passed to the Client function.
OptProject returns an option setting the GCE Project ID to projectName. This is the named project ID, not the numeric ID. If unspecified, the current active project ID is used, if the program is running on a GCE intance.
func OptTokenSource(ts oauth2.TokenSource) ClientOpt
OptTokenSource sets the oauth2 token source for making authenticated requests to the GKE API. If unset, the default token source is used ().
OptZone specifies the GCP zone the cluster is located in. This is necessary if and only if there are multiple GKE clusters with the same name in different zones.
Package gke imports 12 packages (graph) and is imported by 1 packages. Updated 2017-10-15. Refresh now. Tools for package owners. | https://godoc.org/golang.org/x/build/kubernetes/gke | CC-MAIN-2017-43 | refinedweb | 205 | 59.9 |
Improving library documentation
From HaskellWiki
(Difference between revisions)
Revision as of 14:43, 26 November 2006
If you find standard library documentation lacking in any way, please log it here. At the minimum record what library/module/function isn't properly documented. Please also suggest how to improve the documentation, in terms of examples, explanations and so on.
Example:
package base Data.List unfoldr
An example would be useful. Perhaps:
-- A simple use of unfoldr: -- -- > unfoldr (\b -> if b == 0 then Nothing else Just (b, b-1)) 10 -- > [10,9,8,7,6,5,4,3,2,1] --
dons 00:31, 26 November 2006 (UTC)
Tag your submission with your name by using 4 ~ characters, which will be expanded to your name and the date.
If you'd like, you can directly submit your suggestion as a darcs patch via the bug tracking system.
Please add your comments under the appropriate package:
1 base
package base Data.Array.IO and Data.Array.MArray descriptions
An example would be usefull. Arrays can be very difficult when you see them the very first time ever with the assumption that you want to try them right now and that Haskell is a relatively new to you. Maybe something like this could be added into the descriptions of the array-modules.
module Main where import Data.Array.IO -- Replace each element with 1/i, where i is the index starting from 1. -- Loop uses array reading and writing. loop :: IOUArray Int Double -> Int -> Int -> IO () loop arr i aLast | i > aLast = return () | otherwise = do val <- readArray arr i writeArray arr i (val / (1+fromIntegral i)) loop arr (i+1) aLast
main = do arr <- newArray (0,9) 1.0 -- initialise an array with 10 doubles. loop arr 0 9 -- selfmade loop over elements arr <- mapArray (+1) arr -- a map over elements elems <- getElems arr putStrLn $ "Array elements: " ++ (show elems)
Isto 14:43, 26 November 2006 (UTC) | https://wiki.haskell.org/index.php?title=Improving_library_documentation&diff=8680&oldid=8674 | CC-MAIN-2017-22 | refinedweb | 321 | 63.49 |
Find length of longest subsequence of one string which is substring of another string
Given two strings X and Y. The task is to find the length of the longest subsequence of string X which is a substring in sequence Y.
Examples:
Input : X = "ABCD", Y = "BACDBDCD" Output : 3 "ACD" is longest subsequence of X which is substring of Y. Input : X = "A", Y = "A" Output : 1
PREREQUISITES: Longest common subsequence problem will help you understand this problem in a snap 🙂
Method 1 (Brute Force):
Use brute force to find all the subsequences of X and for each subsequence check whether it is a substring of Y or not. If it is a substring of Y, maintain a maximum length variable and compare its length.
Time Complexity: O(2^n)
Method 2: (Recursion):
Let n be the length of X and m be the length of Y. We will make a recursive function as follows with 4 arguments and return type is int as we will get the length of maximum possible subsequence of X which is Substring of Y. We will take judgement for further process based on the last char in strings by using n-1 and m-1.
int maxSubsequenceSubstring(string &X,string &Y,int n,int m) { .... return ans; }
For recursion we need 2 things, we will make 1st is the base case and 2nd is calls on smaller input (for that we will see the choice diagram).
BASE CASE:
By seeing the argument of function we can see only 2 arguments that will change while recursion calls, i.e. lengths of both strings. So for the base case think of the smallest input we can give. You’ll see the smallest input is 0, i.e. empty lengths. hence when n == 0 or m == 0 is the base case. n and m can’t be less than zero. Now we know the condition and we need to return the length of subsequence as per the question. if the length is 0, then means one of the string is empty, and there is no common subsequence possible, so we have to return 0 for the same.
int maxSubsequenceSubstring(string &X,string &Y,int n,int m) { if (n==0 || m==0) return 0; .... return ans; }
CHILDREN CALLS:
Choice Diagram
We can see how to make calls by seeing if the last char of both strings are the same or not. We can see how this is slightly different from the LCS (Longest Common Subsequence) question.
int maxSubsequenceSubstring(string &X,string &Y,int n,int m) { // Base Case if (n==0 || m==0) return 0; // Calls on smaller inputs // if the last char of both strings are equal if(X[n-1] == Y[m-1]) { return 1 + maxSubsequenceSubstring(X,Y,n-1,m-1); } // if the last char of both strings are not equal else { return maxSubsequenceSubstring(X,Y,n-1,m); } }
Now here is the main crux of the question, we can see we are calling for X[0..n] and Y[0..m], in our recursion function it will return the answer of maximum length of the subsequence of X, and Substring of Y (and the ending char of that substring of Y ends at length m). This is very important as we want to find all intermediary substrings also. hence we need to use a for loop where we will call the above function for all the lengths from 0 to m of Y, and return the maximum of the answer there. Here is the final code in C++ for the same.
C++
Java
Python3
C#
Javascript
Length for maximum possible Subsequence of string X which is Substring of Y -> 3
Time Complexity: O(n*m) (For every call in recursion function we are decreasing n, hence we will reach base case exactly after n calls, and we are using for loop for m times for the different lengths of string Y).
Space Complexity: O(n) (For recursion calls we are using stacks for each call).
Method 3: (Dynamic Programming):
Submethod 1: Memoization
NOTE: Must see the above recursion solution to understand its optimization as Memoization
From the above recursion solution, there are multiple calls and further, we are using recursion in for loop, there is a high probability we have already solved the answer for a call. hence to optimize our recursion solution we will use? (See we have only 2 arguments that vary throughout the calls, hence the dimension of the array is 2 and the size is
because we need to store answers for all possible calls from 0..n and 0..m). Hence it is a 2D array.
we can use vector for the same or dynamic allocation of the array.
// initialise a vector like this vector<vector<int>> dp(n+1,vector<int>(m+1,-1)); // or Dynamic allocation int **dp = new int*[n+1]; for(int i = 0;i<=n;i++) { dp[i] = new int[m+1]; for(int j = 0;j<=m;j++) { dp[i][j] = -1; } }
By initializing the 2D vector we will use this array as the 5th argument in recursion and store our answer. also, we have filled it with -1, which means we have not solved this call hence use traditional recursion for it, if dp[n][m] at any call is not -1, means we have already solved the call, hence use the answer of dp[n][m].
// In recursion calls we will check for if we have solved the answer for the call or not if(dp[n][m] != -1) return dp[n][m]; // Else we will store the result and return that back from this call if(X[n-1] == Y[m-1]) return dp[n][m] = 1 + maxSubsequenceSubstring(X,Y,n-1,m-1,dp); else return dp[n][m] = maxSubsequenceSubstring(X,Y,n-1,m,dp);
Code for memoization is here:
C++
Java
Python3
Javascript
Length for maximum possible Subsequence of string X which is Substring of Y -> 3
Time Complexity: O(n*m) (It will be definitely better than the recursion solution, the worst case is possible only possible when none of the char of string X is there in String Y.)
Space Complexity: O(n*m + n) (the Size of Dp array and stack call size of recursion)
Submethod 2: Tabulation
Let n be length of X and m be length of Y. Create a 2D array ‘dp[][]’ of m + 1 rows and n + 1 columns. Value dp[i][j] is maximum length of subsequence of X[0….j] which is substring of Y[0….i]. Now for each cell of dp[][] fill value as :
for (i = 1 to m) for (j = 1 to n) if (x[i-1] == y[j - 1]) dp[i][j] = dp[i-1][j-1] + 1; else dp[i][j] = dp[i][j-1];
And finally, the length of the longest subsequence of x which is substring of y is max(dp[i][n]) where 1 <= i <= m.
Below is implementation this approach:
C/C++
// C++ program to find maximum length of // subsequence of a string X such it is // substring in another string Y. #include <bits/stdc++.h> #define MAX 1000 using namespace std; // Return the maximum size of substring of // X which is substring in Y. int maxSubsequenceSubstring(char x[], char y[], int n, int m) { int dp[MAX][MAX]; // Initialize the dp[][] to 0. for (int i = 0; i <= m; i++) for (int j = 0; j <= n; j++) dp[i][j] = 0; // Calculating value for each element. for (int i = 1; i <= m; i++) { for (int j = 1; j <= n; j++) { // If alphabet of string X and Y are // equal make dp[i][j] = 1 + dp[i-1][j-1] if (x[j - 1] == y[i - 1]) dp[i][j] = 1 + dp[i - 1][j - 1]; // Else copy the previous value in the // row i.e dp[i-1][j-1] else dp[i][j] = dp[i][j - 1]; } } // Finding the maximum length. int ans = 0; for (int i = 1; i <= m; i++) ans = max(ans, dp[i][n]); return ans; } // Driver Program int main() { char x[] = "ABCD"; char y[] = "BACDBDCD"; int n = strlen(x), m = strlen(y); cout << maxSubsequenceSubstring(x, y, n, m); return 0; }
Time Complexity: O(n*m) (Time required to fill the Dp array)
Space Complexity: O(n*m + n) (the Size of Dp array)
C++
Java
Python3
C#
PHP
Javascript
3 share more information about the topic discussed above. | https://www.geeksforgeeks.org/find-length-longest-subsequence-one-string-substring-another-string/?ref=rp | CC-MAIN-2022-27 | refinedweb | 1,416 | 63.63 |
0
Hi all :)
Recently i got a an assignment to be done which need to deal with c++ pointer..
This is the question :
Write a function that accepts a pointer to a C-string as its argument. The function should return the character that appears most frequently in the string. Demonstrate the function in a complete program.
So i've tried written my codes this way :
#include <iostream> using namespace std; char large(char *p,int a) { int large=0,x=1,b; char r; for (int i=0;i<a;i++) { for(int j=0;j<a;j++) { if(*p+j==*p+i && i!=j) { x++; } if (x>large) { large=x; b=i; } x=0; } r=*(p+b); return r; } } int main() { char test[100]; cout<<" Enter a word :"; cin.get(test,100); cout<<endl; char *p; p=test; int a=strlen(test); char ans= large(p,a); cout<<ans<<endl; system ("pause"); return 0; }
I dont know where i got it wrong, but the output is not correct..
Help me please..this need to be passed by this sunday..
Thank You :) | https://www.daniweb.com/programming/software-development/threads/323077/how-to-solve-this-pointer-problem | CC-MAIN-2017-22 | refinedweb | 183 | 80.82 |
Question :
I have a multi-line string defined like this:
foo = """ this is a multi-line string. """
This string we used as test-input for a parser I am writing. The parser-function receives a
file-object as input and iterates over it. It does also call the
next() method directly to skip lines, so I really need an iterator as input, not an iterable.
I need an iterator that iterates over the individual lines of that string like a
file-object would over the lines of a text-file. I could of course do it like this:
lineiterator = iter(foo.splitlines())
Is there a more direct way of doing this? In this scenario the string has to traversed once for the splitting, and then again by the parser. It doesn’t matter in my test-case, since the string is very short there, I am just asking out of curiosity. Python has so many useful and efficient built-ins for such stuff, but I could find nothing that suits this need.
Answer #1:
Here are three possibilities:
foo = """ this is a multi-line string. """ def f1(foo=foo): return iter(foo.splitlines()) def f2(foo=foo): retval = '' for char in foo: retval += char if not char == 'n' else '' if char == 'n': yield retval retval = '' if retval: yield retval def f3(foo=foo): prevnl = -1 while True: nextnl = foo.find('n', prevnl + 1) if nextnl < 0: break yield foo[prevnl + 1:nextnl] prevnl = nextnl if __name__ == '__main__': for f in f1, f2, f3: print list(f())
Running this as the main script confirms the three functions are equivalent. With
timeit (and a
* 100 for
foo to get substantial strings for more precise measurement):
$ python -mtimeit -s'import asp' 'list(asp.f3())' 1000 loops, best of 3: 370 usec per loop $ python -mtimeit -s'import asp' 'list(asp.f2())' 1000 loops, best of 3: 1.36 msec per loop $ python -mtimeit -s'import asp' 'list(asp.f1())' 10000 loops, best of 3: 61.5 usec per loop
Note we need the
list() call to ensure the iterators are traversed, not just built.
IOW, the naive implementation is so much faster it isn’t even funny: 6 times faster than my attempt with
find calls, which in turn is 4 times faster than a lower-level approach.
Lessons to retain: measurement is always a good thing (but must be accurate); string methods like
splitlines are implemented in very fast ways; putting strings together by programming at a very low level (esp. by loops of
+= of very small pieces) can be quite slow.
Edit: added @Jacob’s proposal, slightly modified to give the same results as the others (trailing blanks on a line are kept), i.e.:
from cStringIO import StringIO def f4(foo=foo): stri = StringIO(foo) while True: nl = stri.readline() if nl != '': yield nl.strip('n') else: raise StopIteration
Measuring gives:
$ python -mtimeit -s'import asp' 'list(asp.f4())' 1000 loops, best of 3: 406 usec per loop
not quite as good as the
.find based approach — still, worth keeping in mind because it might be less prone to small off-by-one bugs (any loop where you see occurrences of +1 and -1, like my
f3 above, should automatically trigger off-by-one suspicions — and so should many loops which lack such tweaks and should have them — though I believe my code is also right since I was able to check its output with other functions’).
But the split-based approach still rules.
An aside: possibly better style for
f4 would be:
from cStringIO import StringIO def f4(foo=foo): stri = StringIO(foo) while True: nl = stri.readline() if nl == '': break yield nl.strip('n')
at least, it’s a bit less verbose. The need to strip trailing
ns unfortunately prohibits the clearer and faster replacement of the
while loop with
return iter(stri) (the
iter part whereof is redundant in modern versions of Python, I believe since 2.3 or 2.4, but it’s also innocuous). Maybe worth trying, also:
return itertools.imap(lambda s: s.strip('n'), stri)
or variations thereof — but I’m stopping here since it’s pretty much a theoretical exercise wrt the
strip based, simplest and fastest, one.
Answer #2:
I’m not sure what you mean by “then again by the parser”. After the splitting has been done, there’s no further traversal of the string, only a traversal of the list of split strings. This will probably actually be the fastest way to accomplish this, so long as the size of your string isn’t absolutely huge. The fact that python uses immutable strings means that you must always create a new string, so this has to be done at some point anyway.
If your string is very large, the disadvantage is in memory usage: you’ll have the original string and a list of split strings in memory at the same time, doubling the memory required. An iterator approach can save you this, building a string as needed, though it still pays the “splitting” penalty. However, if your string is that large, you generally want to avoid even the unsplit string being in memory. It would be better just to read the string from a file, which already allows you to iterate through it as lines.
However if you do have a huge string in memory already, one approach would be to use StringIO, which presents a file-like interface to a string, including allowing iterating by line (internally using .find to find the next newline). You then get:
import StringIO s = StringIO.StringIO(myString) for line in s: do_something_with(line)
Answer #3:
If I read
Modules/cStringIO.c correctly, this should be quite efficient (although somewhat verbose):
from cStringIO import StringIO def iterbuf(buf): stri = StringIO(buf) while True: nl = stri.readline() if nl != '': yield nl.strip() else: raise StopIteration
Answer #4:
Regex-based searching is sometimes faster than generator approach:
RRR = re.compile(r'(.*)n') def f4(arg): return (i.group(1) for i in RRR.finditer(arg))
Answer #5:
I suppose you could roll your own:
def parse(string): retval = '' for char in string: retval += char if not char == 'n' else '' if char == 'n': yield retval retval = '' if retval: yield retval
I’m not sure how efficient this implementation is, but that will only iterate over your string once.
Mmm, generators.
Edit:
Of course you’ll also want to add in whatever type of parsing actions you want to take, but that’s pretty simple.
Answer #6:
You can iterate over “a file”, which produces lines, including the trailing newline character. To make a “virtual file” out of a string, you can use
StringIO:
import io # for Py2.7 that would be import cStringIO as io for line in io.StringIO(foo): print(repr(line)) | https://discuss.dizzycoding.com/iterate-over-the-lines-of-a-string/ | CC-MAIN-2022-33 | refinedweb | 1,139 | 71.14 |
Now that we've learned how to set up and run a Spring Boot app using Eclipse IDE and CLI, we'll see how to build a microservices application using Spring Boot.
Let me show how we can create Microservices Application for Top Sports Brands using Spring Boot and Netflix Eureka Server in detail. Before creating the application, let me tell you what are the challenges with Microservices Architecture.
Spring Boot enables building production-ready applications quickly and provides non-functional features:
So, let us see the challenges with microservices architecture.
While developing a number of smaller microservices might look easy, there is a number of inherent complexities that are associated with microservices architectures. Let's look at some of the challenges:
In this Spring Boot microservices example, we will be creating Top Sports Brands' application, which will have three services:
Let us see which of the following tools required to create this Spring Boot microservices example application.
If you facing any difficulty in installing and running the above tools, please refer to this blog.
To begin with, create a
EurekaServer Spring Starter Project in Eclipse IDE. Click on Spring Starter Project and click on Next.
Name your Spring Starter Project as EurekaServer and other Information will be filledautomatically.
Note: Make sure your Internet is connected otherwise it will show an error.
Now, modify
EurekaServer/src/main/resources/application.properties file to add a port number and disable registration.
Open
EurekaServer/src/main/java/com/example/EurekaServiceApplication.java and add
@EnableEurekaServer above
@SpringBootApplication.
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer; @EnableEurekaServer @SpringBootApplication
This annotation will configure a registry that will allow other applications to communicate.
To start the Application: Right Click on the
Project -> Run As -> Click on " Spring Boot App "
Now open. Here Spring Eureka Server will open and will show no service will be running.
Again create a new project. Use
Item-catalog-service for the artifact name and click on Next.
Add the following dependencies:
Click on Finish.
Now, create an entity, to
ItemCatalogServiceApplication.java . The code below assumes you're putting all classes in the same file.
If you're using an editor that doesn't auto-import classes, here's the list of imports needed at the top of
ItemCatalogServiceApplication.java.
Add an application name in
item-catalog-service/src/main/resources/application.properties file to display in the Eureka service, and set the port to 8088.
Now, Create the Cloud Properties file.
Click on File -> New -> Other -> File and add the/
Now, to start the Application:
Right Click on Project -> Run As -> Click on " Spring Boot App "
Note: In case of error try this step: Right Click on the Project -> Run As -> Click on "Maven Build."
Now open. Here you will see Item Catalog service will be running.
You will see the list of items from the catalog service.
Now let us move forward and create the Edge Service.
It is similar to the standalone Item service created in Bootiful Development with Spring Boot and Angular. However, it will have fallback capabilities which prevent the client from receiving an HTTP error when the service is not available.
Again create a new project. Use
edge-service for the artifact name:
Click on Finish.
Since the
item-catalog-service is running on port 8088, you'll need to configure this application to run on a different port. Modify
edge-service/src/main/resources/application.properties to set the port to 8089 and set an application name.
Now, Create the Cloud Properties file.
Click on File -> New -> Other -> File and add/
To enable Feign, Hystrix, and registration with the Eureka server, add the appropriate annotations to
EdgeServiceApplication.java:
Create a
Item DTO (Data Transfer Object) in this same file. Lombok's will generate a methods, getters, setters, and appropriate constructors.
Create a
ItemClient interface that uses Feign to communicate to the
Item-catalog-service.
Create a
RestController below the
ItemClient that will filter out less-than-top brands and shows a
/top-brands endpoint.
Start the
edge-service application with Maven or your IDE and verify it registers successfully with the Eureka server.
Now invoke localhost:8089/top-brands, you will see the list of top brands from the catalog service.
Note: If you shut down the
item-catalog-service application, you'll get a 500 internal server error.
To fix this, you can use Hystrix to create a fallback method and tell the
goodItems() method to use it.
Restart the
edge-service and you should see an empty list returned.
Start the
item-catalog-service again and this list should eventually return the full list of top brands names.
If you want the source code for this application, please leave your comment in comment section.
Thank you !
spring-boot microservices web-development
Learn about the benefits of microservice architecture and how to implement microservices with Spring Boot and Spring Cloud. What are Microservices?”.
Build your eCommerce project by hiring our expert eCommerce Website developers. Our Dedicated Web Designers develop powerful & robust website in a short span of time.
In this article, you'll learn how you can build a reactive microservices architecture using Spring Cloud Gateway, Spring Boot, and Spring WebFlux. | https://morioh.com/p/ab77bd0e5134 | CC-MAIN-2020-29 | refinedweb | 866 | 56.45 |
The Q3WidgetStack class provides a stack of widgets of which only the top widget is user-visible. More...
#include <Q3WidgetStack>
This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.
Inherits Q3Frame..
Constructs an empty widget stack.
The parent, name and f arguments are passed to the Q3Frame constructor.
Destroys the object and frees any allocated resources.().
Reimplemented from QObject::childEvent().
Reimplemented from QObject::event().
Reimplemented from Q3Frame::frameChanged().
Returns the ID of the widget. Returns -1 if widget is 0 or is not being managed by this widget stack.
See also widget() and addWidget().
Reimplemented from QWidget::minimumSizeHint().
Raises the widget with ID id to the top of the widget stack.
See also visibleWidget().
This is an overloaded function.
Raises widget w to the top of the widget stack.
Removes widget w from this stack of widgets. Does not delete w. If w is the currently visible widget, no other widget is substituted.
See also visibleWidget() and raiseWidget().
Reimplemented from QWidget::resizeEvent().
Fixes up the children's geometries.
Reimplemented from QWidget::setVisible().
Reimplemented from QWidget::sizeHint().
Returns the currently visible widget (the one at the top of the stack), or 0 if nothing is currently being shown.
See also aboutToShow(), id(), and raiseWidget().
Returns the widget with ID id. Returns 0 if this widget stack does not manage a widget with ID id.
See also id() and addWidget(). | https://doc.qt.io/archives/4.6/q3widgetstack.html | CC-MAIN-2021-21 | refinedweb | 255 | 63.66 |
tonix (Antonio Nati) wrote:
At 15.31 16/03/2005, you wrote:
Good morning,
I recently installed chkuser in response to a SpamCop listing. I have a user getting addresses rejected that we know exist. The addresses are in valias and work fine when I send a message. The user in question is recently getting rejections. Here is a sample of the qmail-smtp log.
2005-03-15 17:06:06.731444500 CHKUSER rejected rcpt: from <[EMAIL PROTECTED]::> remote <SUPPORT4:wls-41-226-196-65.tls.net:65.196.226.41> rcpt <[EMAIL PROTECTED]> : not existing recipient
2005-03-16 08:37:28.526532500 CHKUSER accepted rcpt: from <[EMAIL PROTECTED]::> remote <[192.168.1.101]:64-184-8-148.bb.hrtc.net:64.184.8.148> rcpt <[EMAIL PROTECTED]> : found existing recipient
You should enable CHKUSER_RCPT_FORMAT and see if there are any strange characters (invisible in log) that make the address unusable (you have rcpt not existing when you could have INVALID FORMAT)
I can certainly do that. But I am confused, if I did't enable CHKUSER_RCPT_FORMAT shouldn't the address work as it did before? Or is there some level of format checking going on by default?
You could also modify chkuser.c this way, in order to track better the rejected recipient... The following change display complete address length, so you may check if the address length corresponds to what you read:
If I get an INVALID_FORMAT I will do so.
static void chkuser_commonlog (char *sender, char *rcpt, char *title, char *description) {
char str[30]; sprintf (str, "%d", strlen (rcpt));
substdio_puts (subfderr, "CHKUSER "); substdio_puts (subfderr, title); substdio_puts (subfderr, ": from <"); substdio_puts (subfderr, sender); substdio_puts (subfderr, ":" ); if (remoteinfo) { substdio_puts (subfderr, remoteinfo); } substdio_puts (subfderr, ":" ); #if defined CHKUSER_IDENTIFY_REMOTE_VARIABLE if (identify_remote) substdio_puts (subfderr, identify_remote); #endif substdio_puts (subfderr, "> remote <"); if (fakehelo) substdio_puts (subfderr, fakehelo); substdio_puts (subfderr, ":" ); if (remotehost) substdio_puts (subfderr, remotehost); substdio_puts (subfderr, ":" ); if (remoteip) substdio_puts (subfderr, remoteip); substdio_puts (subfderr, "> rcpt <"); substdio_puts (subfderr, rcpt); substdio_puts (subfderr, ":" ); substdio_puts (subfderr, str); substdio_puts (subfderr, "> : "); substdio_puts (subfderr, description); substdio_puts (subfderr, "\n"); substdio_flush (subfderr);
I'm not sure just why this is happening, I do not have CHKUSER_RCPT_FORMAT defined, in fact the only changes I made to the chkuser_settings.h was to uncomment CHKUSER_ALWAYS_ON and set the CHKUSER_MBXQUOTA to "90" in my qmail-smtpd run script.
I had the user send me the message in question and I noticed that the addresses had single qoutes in them,
> > '[EMAIL PROTECTED]' <mailto:[EMAIL PROTECTED]>. > > '[EMAIL PROTECTED]' <mailto:[EMAIL PROTECTED]>.
I would suspect that was the issue except that this address book worked prior to installing chkuser, and the qmail-smtpd log shows the address correctly when it is rejected.
chkuser uses and logs exactly what receives from qmail-smtpd.
Thanks,
DAve
-- Dave Goodrich Systems Administrator Get rid of Unwanted Emails...get TLS Spam Blocker! | https://www.mail-archive.com/[email protected]/msg20869.html | CC-MAIN-2018-30 | refinedweb | 464 | 53.21 |
Error in importing xml.parsers.expat
Hi
When I try to import xml.parsers.expat, it gives me the following error. (also when I want to read a plist). Can you show an expample of reading a plist data file?
import xml.parsers.expat
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/var/mobile/Applications/7FEFDAEF-0307-4187-92A2-C5B6C77BF2B5/Pythonista.app/pylib/xml/parsers/expat.py", line 4, in <module>
from pyexpat import *
ImportError: No module named pyexpat
Sorry about this. I hope that I can fix those problems with the next update.
You can use the somewhat outdated and slow <a href="">xmllib</a> for parsing xml (and xml plists).
However, if your goal is to read system plists of iOS, neither that, nor plistlib/expat would help you much – those plists are usually stored in binary format and I don't know of any Python library that can read those.
No. I don't want to use binary plists. I Actually I had re-formatted a free English-English dictionary with about 118000 entires to Plist format ('<'key> word '<'key> '<'string>definition'<'string>) and was using it in an app I wrote in Codea. I was going to reconstruct that app in Pythonista too because Pythonista allows me to use my app on my iPod touch too.
So you say for the time being I can not use readPlist function, yes?
If so I will wait for the update.
And after all questions I have to thank you for your great app and patience you have with newbies like me.
Thanks michael, but actually I just was using plistlib.readPlist (from your link) when I encountered that error. I guess it is because, from description of plistlib.readPlist in your link, "XML data is parsed using the Expat parser from xml.parsers.expat" when you use plistlib.readPlist.
Thanks anyway.
- pudquick51
@omz - There is a binary plist library out there - several. Here's one: Pure python, BSD license.
Unfortunately it doesn't help in this situation because NoneNone is working with an XML plist (plus biplist depends on plistlib for certain things).
Here's a plistlib-free alternative, intended for python 2.4/2.5 - prior to plistlib being available on all OSes:
It's MIT licensed. Not sure if it'll solve your problem as it also makes use of the xml module - may end up running into the same parsing issue. Try it and see :)
hi
a question: i can easily read mu plist using plistlib of python on my PC. it is a huge plist containing about 118000 entries and their definitions.
i am new to python. is there a solution (a routine) by which i can read the plist on PC and write it as an easily-read-by-python format?
it doesnt't matter if it takes time (30 mins for example) because after that i have a file which can be easily used iny pythonista.
thnx
- pudquick51
Sure. The root object for most plists is a dict. The json format lends itself well to this. Just load the plist on the PC, which should get you a dict. Then you can do:
Saving a dict/list to a json file:
import json
f = open("dict.json","w")
json.dump(some_dict, f)
f.close()
Reading it:
import json
f = open("dict.json")
read_dict = json.load(f)
f.close()
Yes, JSON is generally a very good format for this kind of thing. If you want the maximum speed, don't care about compatibility with non-Python code and you can trust the data, you can also use the <a href="">marshal</a> module which is very fast for serializing core Python types.
thanks pudquick,OMZ
actually i converted my plist to JSON and imported it into my iDevice. it works pretty well.
thanks | https://forum.omz-software.com/topic/27/error-in-importing-xml-parsers-expat/9 | CC-MAIN-2020-10 | refinedweb | 644 | 75 |
I have a script that I have to interpret. I'm really new to shell scripting, however I have tried to explain the lines to the best of my ability. Can you please look at the script and let me know if I am on the right track (also point out if I'm off track).
Thanks in advance....
[B]#!/bin/sh[/B] [I]In these lines, #!/bin/sh is used as the first line of a script to invoke the named [I]shell[/I].[/I] [B] # adduser - Adds a new user to the system, including building their # home directory, copying in default config data, etc. # For a standard Unix/Linux system[/B] [I]These are comment lines. Anything given on the rest of the line is passed [I]as a single argument [/I]to the named [I]shell[/I]. # is used in shell scripts as the comment character. The script typically ignores all text that follows on the same line.[/I] [B]&2[/B] [I]If you are not signed in as the root, then the script will exit with the error message, “Error: You must be root to run this command."[/I] [B] exit 1[/B] [I]The exit is self explanatory. The 1 represents a failure error code.[/I] [B] fi[/B] [I]fi is an end statement. If the person is not a root account holder then the script will stop. It will not go any further than step above.[/I] [B]echo "Add new user account to $(hostname)" echo -n "login: " ; read login [/B] [I]This statement displays the string between the quotes $(hostname). This information is defined in the .profile file. [/I] [B]# Adjust '5000' to match the top end of your user account namespace # because some system accounts have uid's like 65535 and similar.[/B] [I]These are comment lines. # is used in shell scripts as the comment character. The script typically ignores all text that follows on the same line.[/I] [B]> $gfile mkdir $homedir[/B] [I]This line then creates a new directory for the user as defined above.[/I] [B] cp -R /etc/skel/.[a-zA-Z]* $homedir[/B] [I]This line copies the each source file into the directory (retaining the same name).[/I] [B] chmod 755 $homedir[/B] [I]This line states the permissions of the users on the directory. 755 in this case would mean that: 7 - Owner can read, write and execute 5 - The group can read and execute, but not write, 5 - Everyone else can read and execute, but not write.[/I] [B]find $homedir -print | xargs chown ${login}:${login}[/B] [I]In this line, 'find' looks in a directory, and locates every file and directory within it and every subdirectory. It then has the option of doing something with each thing that it finds. In this case, it simply prints the name of whatever it finds '-print'. Each name is piped into xargs, which runs a command on each name it receives.[/I] [B]# Setting an initial password passwd $login[/B] [I]These line sets the initial value (password - created by administrator), then allows the user to change the created password at their first login.[/I] [B]exit 0[/B] [I]This line exits the script. The “0” represents a success (or no-error) code to return to the calling program.[/I]
Again thanks for the assistance. | https://www.daniweb.com/programming/software-development/threads/70682/interpreting-a-script-line-by-line | CC-MAIN-2018-05 | refinedweb | 564 | 74.19 |
I’m working on a fun new project called Fret War these days as a way to merge my love for playing guitar with my love of writing software. The concept is simple: Guitarists learn to play a difficult piece of music based on a theme, players and fans rate the quality of their submissions. In order for Fret War to work though, I needed to create a rating system that fought the bimodal trend of most other 1-5 rating systems out there using some different statistics.
In this blog post I’d like to lay out the mathematics and theories I’m using to create a rating system that combats the “1 or 5” tendency. I’ll have code you can use in your own system and encourage comments on the method in order to improve it.
The Competitive Blogging Concept
To understand why this rating system might work (notice I said “might”) you kind of need to understand the overall concept of what I’m calling “competitive blogging”. Competitive blogging for me is where you create a blogging environment where people are doing their posts not to just post, but to compete in some niche competition. In fact, they might not even realize they are “blogging” and instead they’re simply playing a game.
It’s not a terribly original idea, since lots of other sites have sort of done something similar, but not quite the same. If you take CSS Zen Garden and CrossFit you can see almost competitive blogs. They put up some sort of challenge, and people who visit the site post their renditions of it. What’s missing is an overt competitive system with ratings for submissions.
In the case of CrossFit you can see the already competitive nature in the comments:
My technique on the snatch not so great…
People desperately want to compete on CrossFit, but the site doesn't provide a direct way for them to do it. In fact, it'd be difficult because players would be uploading videos of themselves lifting weights for review. Eh, it might work but that's a seriously narrow audience.
In the case of Fret War we have the perfect setup:
- Guitarists are highly competitive.
- Music is easily distributed and posted to the internet.
- Players and Fans love listening to guitarists try to be badass.
You could possibly find other genres with a similar mix. I’ve already started work on a DJ version, and I’m looking for others.
However, the one thing that binds this whole concept together and makes it potentially work is the rating system. There is no game without a solid rating system that is clearly open to everyone for inspection.
Why 1-5 Is Bimodal
It’s a suspected or potentially known fact that sites with a 1-5 rating system end up being “bimodal”. Bimodal means that you have lots of votes for around 1, and lots of votes for around 5. If you produce a histogram of these votes it’d look like this:
In R you can simulate something like this and get a summary with this code:
```
> bimod
> hist(bimod, freq=FALSE)
> summary(bimod)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  1.707   2.028   2.483   2.504   3.005   3.242
>
```
Notice though that while in the graph above we have lots of votes near 1 and lots of votes near 5, when we do a summary we get a mean of 2.5, which is actually misleading.
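If R isn't your thing, here's the same kind of simulation as a quick Python sketch — the two humps and their spreads are numbers I made up purely to show the shape of the problem:

```python
import random

# Two camps of voters: one clustered around "hated it", one around "loved it".
votes = ([random.gauss(1.5, 0.5) for _ in range(100)] +
         [random.gauss(4.5, 0.5) for _ in range(100)])

mean = sum(votes) / len(votes)
print("mean: %.2f" % mean)  # lands near 3.0, a score almost nobody actually gave
```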
Now, lots of people have pointed this out, but what they haven’t really said is why it ends up this way. The reason comes down to the average person’s inability to evaluate complex qualitative things in an arbitrary 1-5 scale.
Non-experts Always Rate Like/Dislike
My hypothesis is that without some form of Rubric, non-experts will use a 1-5 system as if it’s a 1/0 system for “like/dislike”. The variability around 1 or 5 comes from people saying how much they liked or disliked, and isn’t any kind of useful information different from what you’d get using the standard deviation around the mean of a logistic summary.
Yes, that’s a lot of words you probably don’t know so I’ll explain.
- When people say 1 or 2, they are saying "I really hated it." or "I kinda hated it."
- When people say 4 or 5, they are saying "I kinda liked it." or "I really liked it."
- People who say 3 are undecided (more on that later).
- The logistic model of statistics uses degrees or percentages between 0 and 1 based on boolean choices.
- Logistic models show that you get almost the same information from many boolean votes as you do from complex 1-5 or open-ended voting systems.
- With a logistic model, you can simply ask "did you like it" and give a check box.
- From multiple user votes, you'll get means and standard deviations between 0 and 1 which you can use to determine liked/disliked and whether sorta/really.
- However, without a simple "survey" or rubric to guide the non-expert, they'll have a hard time making a good evaluation.
- To get the best results, combine boolean choices with 1-5 ratings (linear model) but influence the user's choices with user interface changes.
By assuming that users will need some help guiding their evaluation, and providing them with a micro survey that features "like/dislike" as well as "overall ratings", I can then gather up some simple statistics which make the rating very robust and meaningful. You'll see that what I'm getting out of the Fret War ratings is actually why they liked or disliked a particular submission, while also helping them pick a better 1-5 rating.
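To make the logistic idea concrete, here's a tiny sketch (the votes are made up) of how plain like/dislike booleans already give you a usable mean and spread:

```python
from math import sqrt

likes = [1, 1, 0, 1, 1, 0, 1]  # 1 = liked, 0 = disliked

n = float(len(likes))
mean = sum(likes) / n                                     # fraction who liked it
sd = sqrt(sum((x - mean) ** 2 for x in likes) / (n - 1))  # how split the vote was

print("%.2f liked it, sd %.2f" % (mean, sd))
```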
An Indirect Rubric For Users
When you rate a Fret War submission you see this (with the Overall Rating pulled down):
These five "qualities" of Accuracy, Speed, Interpretation, Uniqueness, and Tone are actually things that guitarists care about, and experts would use to rate a player's abilities. Tone in particular is a very guitarist-specific quality. Notice also that there's a 1-5 overall rating, matching the five qualitative ratings presented. The goal is to get people to make the same rating an expert would make by presenting them with an indirect rubric to use as the basis of their 1-5 vote.
What I’m doing here is subverting the way to do a “correct” survey by purposefully influencing the commenter’s viewpoint. In a real survey I wouldn’t present these two pieces of information together since one would influence the other. In this case, I want to influence their rating so I present the qualities they should rate in a way that then gets them to pick a 1-5 that’s similar.
In other words, my hypothesis is that their overall rating will be closer to the number of check boxes they check off, and that by doing this I’ll get a more normally distributed overall rating instead of a bimodal one.
The Math And Code
The only downside to this is you now need some slightly complex math to handle the summary statistics, and that math needs to be a rolling calculation. The last thing you want is to have to troll through a table in the database adding up votes. You want to take each vote and use information collected so far to quickly recalculate the new summary.
The first thing you need is a separate table that contains the statistics for any object in your database:
```sql
CREATE TABLE statistic (
    other_type TEXT,
    other_id INTEGER,
    name TEXT,
    sum REAL,
    sumsq REAL,
    n INTEGER,
    min REAL,
    max REAL,
    mean REAL,
    sd REAL,
    PRIMARY KEY (other_type, other_id, name)
);
```
In this table, we use "other_type" and "other_id" as a sort of polymorphic relation. The "name" is what the name of the statistic is, like "accuracy" or "tone". The other numbers are used in doing the rolling calculations and later pulling up the values of "mean" and "sd" (standard deviation).
With that table in place, and some functions to get and update its rows, you need only this tiny bit of Python to get a rolling "sample" method:
```python
from math import sqrt

def sample(other_type, other_id, name, value):
    stat = get(other_type, other_id, name)

    if not stat:
        create(other_type, other_id, name)
        stat = get(other_type, other_id, name)

    stat.sum += value
    stat.sumsq += value * value

    if stat.n == 0:
        stat.min = value
        stat.max = value
    else:
        if stat.min > value: stat.min = value
        if stat.max < value: stat.max = value

    stat.n += 1.0

    try:
        stat.mean = stat.sum / stat.n
    except ZeroDivisionError:
        stat.mean = 0.0

    try:
        # sqrt( ((s).sumsq - ((s).sum * (s).sum / (s).n)) / ((s).n - 1) )
        stat.sd = sqrt((stat.sumsq - (stat.sum * stat.sum / stat.n)) / (stat.n - 1))
    except ZeroDivisionError:
        stat.sd = 0.0

    update(stat)
```
The super magic in this calculation is in the line where we set the
stat.sd value. That math is basically the normal calculation for standard deviation, but turned on its head with some algebra so that we don’t need to look at all the records over and over. In fact, I’ve been using this code so long that I just sort of trust it and only validate it against R periodically.
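For reference, the algebra is just the textbook sample variance expanded so it only depends on the running sums:

$$
s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2
    = \frac{\sum_i x_i^2 - \left(\sum_i x_i\right)^2/n}{n-1}
$$

which is why keeping sum, sumsq, and n up to date on every vote is all the state the table needs.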
You would use the above code like this:
```
>>> from app.model import ratings
>>> ratings.sample("submission", 0, "overall_rating", 1)
>>> ratings.sample("submission", 0, "overall_rating", 2)
>>> ratings.sample("submission", 0, "overall_rating", 3)
>>> ratings.sample("submission", 0, "overall_rating", 5)
>>> ratings.sample("submission", 0, "overall_rating", 5)
>>> stat = ratings.get("submission", 0, "overall_rating")
>>> stat.n
5
>>> stat.sd
1.7888543819998315
>>> stat.mean
3.2000000000000002
>>> stat.sum
16.0
>>> stat.min
1.0
>>> stat.max
5.0
>>>
```
Which if you did in R comes out to:
\> summary (c (1,2,3,5,5)) Min. 1st Qu. Median Mean 3rd Qu. Max. 1.0 2.0 3.0 3.2 5.0 5.0 \> sd (c (1,2,3,5,5)) [1] 1.788854 \>
That’s pretty close apart from a few rounding errors as you get further out.
The beauty of this code is that you can keep track of as many varieties of statistics with just a few database accesses, and you can also “roll up” these statistics.
Mean of Mean Theory
How we use this on Fret War is that, when you vote on your submission we do a sample of each of your qualitative boolean choices, and your overall rating. We then also roll this up by taking the “mean of mean” and “mean of sd” for all submissions to produce the overall round summary.
In our code we’re kind of cheating, or being “practical” by using a standard model to analyze what’s really a logistic model. We just use the same mean/sd calculations for binary data as we do for 1-5 data. This makes real statisticians cringe, but for practical purposes, it’s good enough.
One useful theory though is that if you take a mean of a summary statistic (like mean or standard deviation) then that summary will be normally distributed no matter what form the original data takes.
It’s kind of like doing a meta-mean or meta-sd, and it says that, even if your data is totally weird and not normal, you can assume that the meta-version will be normal.
In this way I’m cheating since I get each submission’s rating mean and standard deviation, which is really logistic in shape, and then just turn them into a normal distribution by meta-summarizing all of them.
In practice this isn’t terribly useful, but in Fret War it’s very important because we use it to determine rankings and analyze the trend of the round. For example, we can see that a particular fan’s rating is probably a troll if they are consistently 1 standard deviation away from everyone else in the round. Simply keep the meta-mean for all submissions in a round, and then if Joe rates every submission at less than
(meta_mean - meta_sd) then he’s trolling.
This is the plan to make these measurements robust. By knowing the meta-mean and meta-sd of the round, we can evaluate outliers and potentially throw them out, and possibly even do it in an automated fashion.
Standard Deviation And “Sorta” vs. “Really”
Alright, that’s a hell of a lot of math and information, and sadly guitarists and fans are not known for their math prowess. That means we needed a way to describe these statistics to people in a meaningful way.
Here’s what all the ratings displays look like on Fret War:
Which is kind of funny, but when people look at it they find it makes total sense. How do we determine these? Here’s the code:
def mean*sd*as\_english (mean, sd): level = "" if mean < 0.1: level = "Sucks" elif mean < 0.2: level = "Mediocre" elif mean < 0.5: level = "Not Bad" elif mean < 0.7: level = "Awesome" elif mean <= 1.0: level = "Kicks Ass" else: level = "ERROR: %f" % mean if sd 0.5: level = "Sorta" + level return level
This function is only used on the logistic descriptors (Accuracy, Speed, etc.) which should be between 0 and 1. The levels and names are pretty much just guessed at, but seem reasonable.
What’s very fun though is the use of standard deviation (
sd) to determine “Sorta” vs. “Really”. The standard deviation is basically a measure of how “wide” your distribution is around the mean. A smaller
sd (tighter) means that most people rated it consistently at that level. A larger
sd (wider) means that people weren’t so consistent.
For example, if two players both have an Accuracy
mean of 0.8, but Joe’s
sd is 0.1 and Mary’s is 0.8 then you can determine the following:
- Joe was seen as more consistently accurate than Mary.
- Mary was still just as accurate, but enough people voted the other
way that it spread her distribution out.
- I can use Joe’s
sdto say he was “Really” awesome, as a way of
denoting consistency in the voting.
- Consquently, Mary’s
sdsays she was “Sorta” awesome because enough
people thought she wasn’t.
- Mary also may have gotten more votes than Joe, and actually people
who thought she was accurate probably ranked her as more accurate
than Joe.
- There’s probably something else going on with Mary’s submission
that’s confounding her accuracy rating. Maybe she picked a Rhythm
that some people just don’t like or can’t hear well.
With that in place, it’s very simple to present to the user what’s actually a very complex statistical model of their playing, but in a way they understand.
Cowbells
If you look at the Winnars page you can see we have a rating called “Cowbells” which seems really weird. Here’s a screenshot of it:
To make things fun I decided that we’d have what seems like a fairly arbitrary huge ass number to show your ranking compared to someone else. That page is showing the winnars sorted by their mean (DESC) then their standard deviation (ASC) so that higher means with lower sd are at the top.
The Cowbells is meant to be funny and keep people guessing, but it’s simply the following:
\> mean (c (0,0,0,0,0,1)) \* 1000 [1] 166.6667 \> mean (c (1,1,1,1,1,5)) \* 1000 [1] 1666.667 \> mean (c (0.5,0.5,0.5,0.5,0.5,3)) \* 1000 [1] 916.6667 \>
Yep, just the mean of all the qualitative ratings and the overall rating combined times 1000. Why 1000? Then you get to see 666 when you’re a top perfect player, and that’s so metal.
Robustness And Gaming
Obviously anything can be gamed, and this is no different. It’s trivial for a bunch of trolls to go on Fret War and consistently rate one way or another, as demonstrated by the Mountain Men’s Three Wolf Tee on Amazon.
If a bunch of people want a particular player to suck, well that’s what they’ll do. They do it to American Idol and they’ll do it on Fret War.
What this set of measurements gives us though is the ability to detect the gaming, and it also sets the bar a little higher. It’s not just a 1-5 but instead several check boxes and a required comment of 20 characters. We can also decided after a round if we want to throw out outlier votes, and in fact a simple query will show us all the possible gamers.
But, like I said, anything can be gamed, even this.
Current Flaws
Currently there are two really obvious flaws which we’re fixing.
The first is that the method of getting and setting a new statistic has a race condition. That was fine when it was just a few people hacking on it, but pretty soon we’ll need to serialize the summary calculation code. In our case we’ll just delay all posted comments and ratings and send them through a Lamson server. Lamson will then do the calculations on the posts in order after spam filtering and other quality control.
If you were to use this code in your own site, you could do something similar by having a secondary table that stored the periodic votes. Just make a table with the parameters to
sample and then have something run every 5 minutes or so to roll them up and clear the table. This is sort of a compromise between running this calculation on all table rows each time, and having the race condition.
Another flaw, which might not be such a big deal, is progressive rounding errors. You can already see a small rounding error above with just a few samples. As the number of samples goes up we’ll see rounding errors increase for later samples.
We’ll be fighting that by simply running one mass calculation at the end of a round to determine the real winners.
Future Development
Currently Fret War is in beta so we’ll definitely have problems with this code. I’ll hopefully be tweaking most of the displays and measurements over the next few months and working on ways to keep it sane.
If you want to help out, try voting on a submission and then shoot us feed back in the round’s comments so we can improve it.
Also, if you have feedback on this method then feel free to email me and discuss it.
There’s a good chance the site will crash if this blog post hits the nerd sites, so just ignore Fret War until it’s stable. | https://zedshaw.com/archive/rubrics-and-the-bimodality-of-1-5-ratings/ | CC-MAIN-2017-04 | refinedweb | 3,187 | 70.53 |
ICANN Backflips Again 94
Posted by Unknown Lamer
from the dance-for-me dept.
from the dance-for-me dept.."
Oh well, it was fun while it lasted. (Score:1, Interesting)
Goodbye internet!
It was fun while it lasted. I will happily await the arrival of your replacement, just as BBS/Gopher/eWorld/AOL had its time apparently you've had yours as well. It's truly sad to see your demise brought about by a bunch of fucking monkeys who couldn't care less about your well being, so long as they're making more money off stupid shit nobody wants or needs.
Who's up for a nice cryptographically secure distributed DNS system that runs over IPv6?
-AC
Re: (Score:3)
DNS is not the Internet.
Re:Oh well, it was fun while it lasted. (Score:5, Interesting)
DNS is not the Internet.
Unless you use this. [analogbit.com]
Re: (Score:2)
As part of the requirements for the new TLDs, DNS must support DNSSEC and IPv6 from day 1. It ups the standard of DNS quality across the entire Internet, and puts pressure on the TLDs which aren't up to date yet.
Re: (Score:2)
When doing a backflip, you end up in the same place, generally in the same spot (if you have good technique). If they did this with a flip, it would have been an Arabian, not a backflip.
HAND.
ICANN Backflips (Score:3, Funny)
What do they want, a medal?
Re: (Score:3)
What do they want, a medal?
Nah, it's just... being for the benefit of Mr. Kite.
Dumb idea (Score:5, Informative)
ICANN apparently can't solve one of the most basic object oriented programming problems: Namespace organization and integrity.
There's only a couple of organizational schemes that make sense; Geographical, topical, and organizational. Of those, the third was the first used: Separating domains on the basis of their function; educational, commercial, non-commercial, and governmental. Then we tried to launch geographical, which meant that agents within the system would need to register on both basis; You'd have, for example, usairforce.gov, and airforce.us. But then ICANN botched big-time; they tried to organize based on... er, nothing. Rather than a couple hundred nodes on the root, you now have effectively an infinite number of roots.
The results were predictable: Complete and total chaos as everyone tried to register every possible permutation of trademarks, organization names, governments -- and although the cost of running a gTLD was in the tens to hundreds of thousands of dollars (which, itself, seems rather retarded; Why does adding a name to a file containing a list cost a hundred grand?) -- there are literally hundreds of thousands of organizations and individuals with the desire and cash to do so.
And they all threw their money at the problem at the same time. Now they're stuck because there's hundreds of millions sunk into the program, and they can't go back on the process. It's a bureaucratic cluster-fuck beyond even what our most inept governmental organizations can do.
At this point, the entire DNS system should be scrapped and start over from scratch. But that won't happen for years and years. Eventually though, it'll have to happen... when it does, I hope they pick one organizational scheme and stick to it.
Re: (Score:2)
At this point, the entire DNS system should be scrapped and start over from scratch. But that won't happen for years and years. Eventually though, it'll have to happen... when it does, I hope they pick one organizational scheme and stick to it.
Maybe something that roots in
.bit or .p2p?
Re: (Score:2)
Thanks for bringing those up, I had forgotten about them. I would have modded you up, but
.bit and .p2p don't really solve the namespace problem, they just allow a different organisation (or none) to control DNS. Anyway, really cool. Here's a link to .bit, [dot-bit.org] . The .p2p site seems to be down.
Re: (Score:1)
But that won't happen for years and years
You accidentally misspelled "that won't happen EVER".
Re: (Score:2)
Because, else, you would have bots registering domains for squatting, at the cheapest price offered. Since it is a race to the bottom without some sort of oversight, this is the way that keeps you from having to register myawesomecompany93282.com
Re: (Score:2)
Exactly.
.com domainname without hitting on one of those placeholder ad-ridden pages that offer the domain for sale?
So now only the registrars itself are able to effectively register for free, due to gaps in the regulation.
Ever noticed that it's practically impossible to enter a 3-4 character
Re: (Score:2)
We're talking about gTLDs, so wouldn't it be com.myawesomecompany93282?
(On a related note: What is the point, anyway, of registering a gTLD unless you're going to run an absolutely massive number of domains inside it? I can sort of see this happening for MS, Apple and Google; maybe Facebook. Nobody else, really.)
Re: (Score:2)
Well if you register the right one it could be a malware guy's wet dream. lets say you register
.cpm or .cim, many people can slip up and mistype that and more importantly how many Joe Average are gonna notice that single letter typo in a link?
So then you have Microsoft.cpm, Adobe.cpm, Amazon.cpm, see where this is going? It'd also be a good way to extort some cash out of legit businesses, such as "hey I bet you don't want us using Washingtonstate.cum for college porn do ya? Well we'll be happy to sell the
Re: (Score:1)
ICANN apparently can't solve one of the most basic object oriented programming problems: Namespace organization and integrity.
However, they have solved wonderfully well one of the most basic business problems: making a profit.
Re: (Score:2)
There's only one root. A lot of domains under that root.
However, they should just have made this open and cheap from the get go. Trademarks and other such things could have been limited to the ccTLDs only.
Re: (Score:1)
Great. Things "worth a lot of money" that one entity practically can create from thin air.
I wonder that that will do to currency stability.
Re: (Score:2)
On hindsight, that was not the wisest decision. It would have been better to ONLY use the geographical system. That would have meant no com, net and org.
What about things like linux.org, you might ask? Well, when I look at the whois page, I see an ameri
.assclown (Score:4, Funny)
I only hope that ICANN was able to register
.assclown for themselves. Anyone else getting it would be unfair.
Re: (Score:3)
I only hope that ICANN was able to register
.assclown for themselves. Anyone else getting it would be unfair.
Not so fast. Several politicians and corporate C?O's have a vested interest in that TLD.
Re: (Score:2)
It's ~200k, and what would happen is that they'd deny your application for violating the rules and keep the money (yes, the money is a fee for the review, not a payment for keeping the gTLD).
This is getting stupid. (Score:4, Insightful)?
People keep suggesting decentralised DNS, but I'm not convinced it's a workable solution. If there's no central authority controlling the DNS, there's nobody who can give your domain back when someone breaks into your system and steals it, or when you accidentally lose your crypto keys.
Re: (Score:2)
Re: (Score:2)
That's what I'm guessing the fear is, that some grandmother will go to coke.coke and get a virus and switch to pepsi fo
Re: (Score:1, Funny)
People keep suggesting decentralised DNS, but I'm not convinced it's a workable solution.
DNS isn't strictly required to access websites on the web, except for its use in the host header which helps apache pick which virtual host to serve up to you.
HOW TO MAKE ICANN IRRELEVENT:
1. Google (or Bing, or both) begins by indexing the current system (they most likely already have)
2. Google tweakes their engine so that people can go to the google homepage ( for example - out of many, which could easily be saved as a favourite in any browser), enter their search, and google
Re: (Score:1)
you just know that it is because you trust the identifyer (mybank.com)
just as if you contacted your bank (or pinged mybank.com before abandoning the DNS) you would be able to find out a trustworthy IP address
Re: (Score:2)
You know you can trust mybank.com because it's a memorable name and you've used it before. And once you know mybank.com, you know that subdomains like accounts.mybank.com or invest.mybank.com or whatever also belong to your bank.
It's not so easy to remember your bank's IP address, and it's impossible to tell whether an IP address you haven't seen before belongs to them or not.
Re: (Score:1)
how many people remember all their friends phone numbers? answer is they don't. they add contacts in their phones
but hey if you don't mind relying on asscann or have a better idea, be my guest. i'm certainly not holding a gun
Re: (Score:2)
So when my bank moves their web servers from one data center to another for whatever reason, every single customer has to be informed to update their shortcuts.
And when someone else gets assigned the old IP addresses and also puts up a copy of the bank web site on them it's the customers fault for clicking on an old link, right?
When I change my phone service provider I move my phone number along with me, precisely so that the people I know don't have to update their address books. But you want a bank to inf
Re: (Score:1)
When I change my phone service provider I move my phone number along with me
but not automatically. you have to contact your phone service provider so they can switch it over in their system (unless you're talking about a mobile)
banks contact their customers all the time for all sorts of reasons (change to policy, bank statements, offers, etc), so mass mailing every customer regarding a change of URL would not be a big deal. there is also email, sms and various other ways to contact customers to ensure they don't mistakenly use the old IP address. also, i doubt an IP address pre
Re: (Score:3)
This idea has many problems:
You can't change your ISP, or renumber your network, or move your website to a different server on your network, or switch to IPv6, without making all existing links to your site invalid.
A link can only point to a specific IP, not to a website that has multiple redundant servers with different public IPs, or a website with both IPv4 and IPv6 support.
Anyone can create a site like (for example) [203.0.113.135] , and no user can distinguish it from the 'real' westp
Re: (Score:1)
Re: (Score:2)
spelling is more important than capitalization of web forums
If you're going to be writing anything more than one sentence long, capitals are very helpful to the reader. It's just a matter of convention that helps break up the text. Personally, I tend to skip over posts that don't use caps as I find them annoying to read.
Re: (Score:1)
I find them annoying to read
dammit you stumbled on the real reason why i do it
:)
Re: (Score:1)
if you want to pay me to be professional on slashdot, i'll start capitalizing sentences
after all, its pretty hard for someone to be considered professional if they aren't paid
Re: (Score:2)
If you're not going to talk the tiny effort to press the shift key, how valuable can your contribution be? I'm lazy, but gees, too lazy to press one damned key is absurd.
I have to agree with the other respondent to your comment. You're disrespectful of those reading your comments.
As to "smelling mistaks", everyone makes mistakes, but not capitalizing is a deliberate act of willful negligence and shows an awful lot of immaturity. Grow the fuck up, boy.
Re: (Score:1)
how valuable can your contribution be
if you value contribution based on capitalization, then how much can i possibly give a rats about how much you value my contribution?
signed: willfully negligent disrespectful immature boy
Re: (Score:2)
> dead links wouldn't be a huge problem because the only links you would need to maintain are with the major search engines anyway
Seriously? The great innovation of hypertext was that websites can link to each other, and it's used all the time. Slashdot itself is a pretty good example. Almost every site, not just search engines and Facebook, has links to other domains on it.
If you remove the ability for websites to link to one another reliably, you kill the web. I am not exaggerating.
Re: (Score:1)
If you remove the ability for websites to link to one another reliably, you kill the web
based on your assumption, the web would already be dead (it would never have come "alive" in the first place), because DNS doesn't enforce the integrity or reliability of third party hyperlinks, which is why the web is already full of dead links
there are many cases where URLs are stable within the DNS, but its not really because of the DNS itself
many external links from websites need to be regularly checked and updated, so while removing DNS from the equation wouldn't make that better, it wouldn't ma
Re: (Score:1)
Re: (Score:2)
One doesn't need to be ICANN to think your idea is stupid. One just needs to be sane.
Re: (Score:1)
Re: (Score:2)
Not factually incorrect, just fucking stupid. Your idea takes power away from a non-profit oriented organisation and concentrates it in the hands of several profit oriented corporations. Then, for bonus points, it introduces several inherent security vulnerabilities, a gigantic namespace collision fault, and a violation of the HTML specification. It also relies on people being able to remember the IP addresses of every site they ever visit, or to rely on a profit oriented search engine to remember it for
Re: (Score:1)
i could pick apart your bullshit piece by piece, but i fear i may cause you to have an aneurysm
Re: (Score:1)
concentrates it in the hands of several profit oriented corporations
last time i checked profit driven competition was kind of the back bone of western economies, so privatisation of a simple internet addressing scheme won't cause total chaos and anarchy. also, google and bing are already the primary internet addressing system for most internet users anyway. to by far most users, nothing would change.
a gigantic namespace collision fault
really!? this almost made me laugh. unfortunately your bullshit was kinda overshadowed by the fact that IP addresses and virtua
Re: (Score:1)
all google would be doing is constructing valid http requests based on an index of IP addresses and virtual host names
do you even know what a http request looks like? try websniffer.net
the bit about eventually changing the URL scheme was actually a proposal to eventually revise the HTML spec. it is unlikely that browser vendors (except maybe microsoft) would ever support non-standard URL schemes
and the "inherent securi
Re: (Score:1)
websniffer.net
obviously i mean web-sniffer.net
perfect example of the fallability of DNS right there
Digital archery? (Score:1)
I read about how "digital archery" was supposed to work. If I had read it out of context, I'd have assumed it was some sort of parody or April Fool's joke.
Backflips? (Score:3)
The use of "backflips" suggests they've done something wrong. Yet the summary seems to say that there were complaints about the application process, and that ICANN has responded to those complaints by improving the process -- or at least altering it so as to remove the parts that were being complained about. In fact it doesn't even have anything negative to say about the news itself, other than the headline.
They actually listened to criticism and removed the cause of it. What more do people want of them?
(Other than coming to their senses and aborting the whole thing, of course.)
Re: (Score:2)
Clowns in a demollition derby? What next? (Score:2)
I think they need to change their name from ICANN to ICANNOT, ASAP!
Re: (Score:2)
I checked, and your link doesn't appear to contain any truth, just some blowhard whining about his apparently 100% legal site being shut down by no less than three registrars (which, incidentally, is a sign his site is not 100% legal). Perhaps you should check your sources next time.
Why we need new TLDs again? (Score:4, Insightful)
What was the official reason from ICANN for new TLDs again?
.net, .com, .de, .org anyway to secure it's trademark. For example Disney: all TLDs redirect to the domain go.com with is registered with Disney Enterprises Inc., except .gov. So the only clasification that survived is .gov, all the others are basically the same.
The current scheme don't make sense anymore anyhow, a company have to register
After the introduction of cTLDs, there was no purpose for the ICANN anymore, other then to ensure that each country gets one cTLD. With a cTLD each country can make their own DNS sub-tree, like
.co.uk. So there would be no issue what-so-ever with the long discussed berlin domain: just make .berlin.de, .munich.de, etc. and if a US company wishes they can get also their own domain: pepse.us.
Mark my prediction: there will be a time in the near future where the meaning of a TLD is gone and you can choose your TLD freely. That will be the final money grab of ICANN.
Firefox already got rid of the protocol part of the URL (the [.] So why we not just get rid of the TLD part? (It's already in firefox, for [slashdot.com] I can just enter "slashdot" in the URL bar).
Re: (Score:2)
You were with me until your stupid protocol argument. DNS and TCP/IP in general are used for many more things than just HTTP requests.
Re: (Score:2)
Complain to the Firefox developers.
Sorry, you think I approve to get rid of the protocol part? Maybe I was not clear, I just stated that Firefox already not showing the protocol part in the URL bar. I am not agree to that, the very first thing was I changed it to show [http] again.
The real reason (Score:1)
The real reason ICANN is doing all applications simultaneously, is so that the folks in the later batches won't have an opportunity to ask for their money back when they realize that a gTLD is completely worthless.
Here's what's going to happen: Somebody reigsters the gTLD "apple", and sets up his website at [apple] and his email somebody@apple. Then he finds out he gets no web traffic, because people don't type "http://" into their browsers. They just type "apple" and get the top search engine hit (ap
Re: (Score:3)
There is no A or MX record for "com", "net","org","us","uk","info","museum","biz","mobi"... I'm going to say that none of them work that way. If anyone thought they would work that way, that's their own damn fault.
Re: (Score:2)
HAH! I just didn't try enough weird country codes. I stand corrected.
Why so much to run a tld? (Score:2)
Why does it cost hundreds of thousands to run a tld? Is most of that just labor/marketing costs? I would assume it would just be a matter of setting up a few replicating bind servers and a basic api for buying/adding domains that could be distributed to domain brokers (GoDaddy, Moniker, etc).
Maybe there is more involved that I think there should be?
I'm just curious where hundreds of thousands go to launch and run a tld.
Re: (Score:2)
You forgot the executives golden parachute I think, that's a lot of money right there in order to have the "proper" person running the company. | http://tech.slashdot.org/story/12/07/31/012250/icann-backflips-again | CC-MAIN-2014-41 | refinedweb | 3,485 | 71.75 |
Launching OddPatch 0.1 Beta
Tuesday, 19. September 2006, 01:30:00
Launching in browser.js tomorrow or so: OddPatch beta, solving many of the compatibility issues with the service formerly known as OddPost, now Yahoo!Mail beta.
It's a complex patch sorting out a daunting number of issues on both sides. When things get this complex, it's rarely only "their fault" or "our fault" - the testing has uncovered several bugs in Opera and several mistakes in their JavaScript. Here's a walk-through of the entire patch:
// browser sniffing workaround - walking in through the back door if( location.href.indexOf( '/dc/system_requirements?browser=blocked' ) >-1){ location.href='/dc/launch?sysreq=ignore'; }
Y!Mail has three modes for browsers: supported, possibly working, or blocked. Opera is blocked, but luckily there is a backdoor that will bypass the sniffing.
The irony here is that when they block us, they are making their work on Opera-compatibility much harder than necessary. If we get access, we'll do our best to make things work: test, find bugs, even decide to support things we haven't supported previously (we'll have selectSingleNode soon because Y!Mail uses it heavily, and their code was also a very important reason why DOM2 Style support was prioritised for Opera 9.0!). Blocking us makes it much harder for us to make their life simpler.
if( top.location.href.indexOf('/dc/launch')>-1 ){ // Gecko compatibility library uses defineGetter and defineSetter. We need to fake them. //* Patch below is required but causes trouble.. Object.prototype.__defineGetter__= function(){} Object.prototype.__defineSetter__= function(){}
This is known as "fake it until you make it". We won't have getters and setters anytime soon, but things will work anyway if we pretend we do.
// IEism called loadXML, basically a DOMParser / DOMLS equivalent // must handle XML fragments without root element! Element.prototype.loadXML=function(s){ try{ var d=new DOMParser().parseFromString(s, 'text/xml');
This, I think, is a bit of the strange world IE lets you into if you put an XML tag in a page. That tag takes on a life of its own and starts behaving in some contexts like a document, it aquires several methods and properties - and though I at first thought I could simply fake it with a DOMParser I had to think again because...
}catch(e){ // DOMParser could not parse fragment, probably because of missing single root element. Workaround time.. var d=document.implementation.createDocument('', this.tagName, null), el=d.createElement('el'); //?? why did I use this.tagName there? el.innerHTML=s; for(var i=0 ; i<el.childNodes.length;i++){ d.appendChild(el.childNodes[i].cloneNode(true)); } }
Yes, they are not always playing with well-formed XML fragments. Oh well, we'll pull out good-old-tagsoup-parsing .innerHTML and eat their strings anyway. Then we move on and fill in some other required bits and pieces of IE's XML DOM. Right now I'm not sure if all the stuff in this block is required, but there it is.. I'll be the first to admit that both code and comments are evidence of the somewhat chaotic process of late-night patching..
// faking IE-style XML element DOM - separate documents with documentElement within the main doc's DOM this.documentElement=d.documentElement||d.firstChild; //?? firstChild is probably leftover from earlier versions using documentFragment? this.XMLDocument=d; // address book loading checks .parseError.errorCode this.XMLDocument.parseError={ 'errorCode':0 }; return d; }
And then is a peculiar mystery, who would ever need a function called isSameNode when == would presumably do the job?
// some method called isSameNode is called. Not sure where it comes from but simple enough to fake.. Element.prototype.isSameNode = function(n){ return n===this; }
Here we go, more delicacies from IE's internals: the handy XPath method selectSingleNode. I have quite some reservations against loadXML and the other XML DOM stuff above, but selectSingleNode should be written into a standard as soon as possible because document.evaluate needs too many arguments for lazy JS coders and the returned object is too fiddly too.
// selectSingleNode support var realSelectSingleNode=function( expr, resolver ){ var result=(this.ownerDocument?this.ownerDocument:this).evaluate( expr+'[1]', this, resolver, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,null ); return ( result && result.snapshotLength ) ? result.snapshotItem(0) : null; } Node.prototype.selectSingleNode = function (expr, resolver) { if (!resolver) if (this.nodeType == Node.DOCUMENT_NODE){ resolver = document.createNSResolver (this.documentElement); }else if(this.nodeType == Node.ELEMENT_NODE && this.ownerDocument && this.ownerDocument.documentElement ){ resolver = document.createNSResolver (this.ownerDocument.documentElement); }else{ resolver = document.createNSResolver (this); } return realSelectSingleNode.apply (this, [expr, resolver]); }
Now, up till now the patching has been quite ordered. A patch is a bit of a kludge anyway, but so far it's been a nice kludge. Here come the problems that were too ugly for a nice kludge. Certain issues just required search and replace operations on the script source code, nothing else to do about them - and because Yahoo source is compressed and variable names random, search and replace must take that into account and go for seriously complicated and ugly regular expressions.
opera.addEventListener('BeforeScript', function(e){ // This is the riskiest patch // Fixing typo: missing ' after attribute value e.element.text=e.element.text.replace( /\):\(">"\)\),/, "):(\"'>\"))," ); e.element.text=e.element.text.replace( /" );
Yep, twice in their source code they say things like <tag class="foo> omitting the closing quote. That caused Opera to parse it as a text node instead of an element, meaning source code would appear here and there in the interface. Oops.
// WebForms2 problem: button attribute "action" is a URL in WF2 e.element.text=e.element.text.replace( /\.(action\b)/g, ".js$1" );
Specs and implementations collide again: if you set input.action to a value like 'markAsSpam' the WF2 spec means it will be resolved as a URL, so when the script reads it again it will see '' which is not at all what it expected.
// send button not working - attribute nodes must be in the document they will be used e.element.text=e.element.text.replace( /(\w)\.setAttributeNode\((\w)\)/, "$1.setAttributeNode($1.ownerDocument.importNode($2, true))" );
Hm, is Firefox sloppy with exceptions on cross-document node usage again?
// workaround for getting the documentElement.xml markup e.element.text=e.element.text.replace( /(([\w\.]*)documentElement).xml/g, "(document.implementation.createLSSerializer()).writeToString($1)" );
IE's XML DOM rides again. Elements over there have an .xml property which is basically the equivalent of .innerHTML for an HTML element, showing the inner serialised markup.
I didn't take a long and hard look at how Y!Mail used it but some of that seemed very weird. I had the impression that they read .xml of the contacts list only to pass it around as a string and use loadXML later on. Why would they serialize markup just to parse it right into a DOM tree again? Oh well, there is probably some complex reason..
// for...in on objects run into our faked __defineGetter__ and __defineSetter__ // we try to add an exception to any for...in loops e.element.text=e.element.text.replace( /(for\((var |)(\w*) in \w*\)\{)/g, "$1if($3.match(/^__define/))continue;" );
It turns out "fake it until you make it" wasn't such a good idea after all. The site called our bluff with code like
function foo(obj){ for( p in obj )return false; return true } var bar = {}; if(!foo(bar)) return;
and what exactly they meant by that I don't know either, except to check that a newly created object really REALLY had no properties. Huh?
// To: / CC: autocomplete fails // we support IE's TEXTAREA.createTextRange but unfortunately not its boundingLeft property. Improving object detection.. if(e.element.src&&e.element.src.match(/ac\.js$/))e.element.text=e.element.text.replace( /if \( editCtrl\.createTextRange \)/, "if ( editCtrl.createTextRange && editCtrl.createTextRange().boundingLeft )" );
This is a typical trap of piecemeal implementation of something: we support whatever of IE's stuff was deemed important to get some plaintext formatting JavaScript to work with 8.x. .boundingLeft wasn't on the list back then. Sorry. Look a bit harder when you look for something.
// Preferences not read correctly from XML attributes // IE has an attribute node .text property. .nodeValue will work in Opera.. e.element.text=e.element.text.replace( /\.selectNodes\((\w*)\);\s*\}return\((\w*)\.length\)\?(\w*)\[0\]\.text:/g, ".selectNodes($1);}return($2.length)?$2[0].nodeValue:" );
IE again. text alias nodeValue, enough said.
// We throw an unwanted exception if both arguments to insertBefore are the same node e.element.text=e.element.text.replace( /var (\w*)=(\w*)\?(\w*)\.nextSibling:(\w*)\.firstChild;\s*(\w*)\.insertBefore\((\w*),(\w*)\);/, "var $1=$2?$2.nextSibling:$4.firstChild;if($6!=$1) $4.insertBefore($6,$1);" );
Now this is plainly a bug. The DOM spec says we should throw an error if the node you insert is a parent of the reference child, but we also did so if the inserted node was the reference child itself.
The code doesn't make sense, mind you... Why do you want to replace an element with itself?
// Opera 9.00 and 9.01 has a bug that means createContextualFragment on table elements is unreliable // easily the worst patch.. but then it works around a really tricky bug.. if( navigator.userAgent.indexOf('9.01')>-1 || navigator.userAgent.indexOf('9.00')>-1 ){ // UA detection to target specific bug in specific version is OK e.element.text=e.element.text.replace( /(\b(\w*)\.selectNodeContents\((\w*)\);\s*var (\w*)=(\w*).createContextualFragment\((\w*)\))/, "if($3.tagName=='TBODY'||$3.tagName=='TR'){ $2.createContextualFragment=function(s){var n=s.match(/<(\\w*)/)[1]; var e=document.createElement('div');e.innerHTML='<table><tbody>'+s+'</tbody></table>';return e.getElementsByTagName(n)[0]; } }$1" );
Just as ugly as it looks, just an attempt to make the code work in 9.01.
Imagine if Yahoo!Mail blocked us until they one day decided that it was necessarily to start working on Opera compatibility? If we never had gotten to test their system, they would probably have to add such an ugly workaround to their application to get around this bug. Developers everywhere: please, don't sniff, just leave more of the burden of compatibility on the UA's table (and listen to feedback!).
Hey, we're done with the replacements! It wasn't pretty, and I look forward to deleting one by one while things are fixed on either side. That will also give us a nice performance lift. Their scripts are huge. At some point I did some profiling of the above replace calls and found that up to half of the time it took Opera to load Y!Mail was spent applying the above patches.
} }, false) // No scrollbars appear for message list.. // uses an "overflow" CSS property to control scrollbars. e.overflow="-moz-scrollbars-vertical", and some odd clipping as well.. document.addEventListener( 'load', function(){ setTimeout( function(){ var divs=document.getElementsByTagName('div');for(var i=0,div;div=divs[i];i++)if(div.className&&div.className.indexOf('fakeScrollBar')>-1){div.style.overflow='auto';div.style.clip='auto';}},500);}, false );
Yes. That problem. It uses some CSS I still haven't fully understood.. I think they wanted to show an element with a scroll bar but clip the whole element away so only the scroll bar would be visible.
// redraw problem hides To: field in compose screen document.addEventListener( 'load', function(){ if(top.document.frames['newmessage']){ setTimeout( function(){try{top.document.frames['newmessage'].document.body.className+=' ';}catch(e){}},1000); }}, true);
This is another tricky one, it has to do with timing and I still haven't quite captured the sequence of events. Basically the "To" and "Subject" fields in the compose screen disappear until you click "Show BCC".
// sluggish performance due to unintended event capture (function(ael){ window.addEventListener = function(type, func, capture){ ael.call(window, type, func, false); } })(window.addEventListener);
..and just to top if off, they had to capture events by mistake. Of course. That's from the curriculum of "How to code Opera-incompatible websites 101".
opera.postError( 'Yahoo mail patched' ); } }
Yippee! We did it!
Now, this patch is (repeat after me) in beta! There are problems that I'm aware of but haven't fixed, and there are problems that I'm not aware of and haven't fixed. And while I was working on this stuff, code changes in Yahoo mail would break things again every few days (and even break differently on the U.S. and the U.K. sites!). So, hurry up and try it while you have a chance! I'll try to keep the patch maintained, and we sure hope that we'll get all the issues sorted out from either side for a fast, friendly, responsive experience - somewhere in the future..
olli # 18. September 2006, 23:49
hallvors # 18. September 2006, 23:53
hallvors # 18. September 2006, 23:55
xErath # 19. September 2006, 00:59
P.S.: Don't forget to maintain browser.js for Opera 8. Not this patch though.. but all others.
rseiler # 19. September 2006, 01:57
But why would they close off? Their developer site gives Opera 9.0 under XP an overall "A" grade for compatibility with Yahoo sites (though not this one), so shouldn't they have some interest now? Maybe in the old days I would have expected that attitude, but not today.
It's also interesting that Yahoo itself was willing to do all the heavy lifting (probably) necessary to get Firefox/Mozilla/Netscape going.
Andrew Gregory # 19. September 2006, 03:34
Thankfully, from the looks of things, Gmail was a whole lot simpler! Very impressive Mr Steen
robodesign # 19. September 2006, 08:20
Impressive work.
Yet, it's really bad Yahoo doesn't fix their site. Most performance penalties are in the regex.
non-troppo # 19. September 2006, 10:37
FataL # 19. September 2006, 13:19
But you guys really need better handling of tag soup. Some developers will never learn (bother) to write valid code.
hallvors # 20. September 2006, 12:58
rseiler: we have a good relationship with Yahoo, and they do fix things for us - but we also have to live with being at the bottom of the priorities list for something like Yahoo mail beta. It's descending from a system that was written to be IE-only after all, now they have been tweaking it for Gecko compatibility and I guess Safari is next on their list. It is the vicious cycle of low usage share causing incompatible services which again prevents users from switching to Opera. browser.js can break that cycle, and when something is as big as Y!Mail it can make a real difference.
FataL # 20. September 2006, 13:28
xErath # 25. September 2006, 03:01
Originally posted by hallvors:
look what I've found
grimace # 15. October 2006, 02:27
One question, Hallvors - does this JS patch (it looks identical to the Yahoo Mail Beta patch in browser.js) supposed to allow pane resizing? Or is that something that still needs work?
xErath # 8. November 2006, 06:21
Not yet. It's a known issue, and there's a patch for it.
Basically, they try to manipulate a stylesheet in another domain, which fires the expected security exception.
FataL # 21. December 2006, 21:08
Some interesting answers from Yahoo guy under rckenned nick name:
Originally posted by rseiler:
Originally posted by rckenned:
Originally posted by FataL:
Originally posted by rckenned:So, let's hope some things was fixed! | http://my.opera.com/hallvors/blog/show.dml/471328 | crawl-002 | refinedweb | 2,576 | 50.94 |
This module lets you define your own instances of
Zoom and
Magnify.
The warning from Lens.Micro.Internal applies to this module as well. Don't export functions that have
Zoom or
Magnify in their type signatures. If you absolutely need to define an instance (e.g. for internal use), only do it for your own types, because otherwise I might add an instance to one of the microlens packages later and if our instances are different it might lead to subtle bugs.
Synopsis
- type family Zoomed (m :: * -> *) :: * -> * -> *
- class (MonadState s m, MonadState t n) => Zoom m n s t | m -> s, n -> t, m t -> n, n s -> m where
- type family Magnified (m :: * -> *) :: * -> * -> *
- class (MonadReader b m, MonadReader a n) => Magnify m n b a | m -> b, n -> a, m a -> n, n b -> m where
- newtype Focusing m s a = Focusing {
- unfocusing :: m (s, a)
- newtype FocusingWith w m s a = FocusingWith {
- unfocusingWith :: m (s, a, w)
- newtype FocusingPlus w k s a = FocusingPlus {
- unfocusingPlus :: k (s, w) a
- newtype FocusingOn f k s a = FocusingOn {
- unfocusingOn :: k (f s) a
- newtype FocusingMay k s a = FocusingMay {
- unfocusingMay :: k (May s) a
- newtype FocusingErr e k s a = FocusingErr {
- unfocusingErr :: k (Err e s) a
- newtype Effect m r a = Effect {
- newtype EffectRWS w st m s a = EffectRWS {
- getEffectRWS :: st -> m (s, st, w)
- newtype May a = May {
- newtype Err e a = Err {
Classes
type family Zoomed (m :: * -> *) :: * -> * -> * Source #
Instances
class (MonadState s m, MonadState t n) => Zoom m n s t | m -> s, n -> t, m t -> n, n s -> m where Source #
zoom :: LensLike' (Zoomed m c) t s -> m c -> n c infixr 2 Source #
When you're in a state monad, this function lets you operate on a part of your state. For instance, if your state was a record containing a
position field, after zooming
position would become your whole state (and when you modify it, the bigger structure would be modified as well).
(Your
State /
StateT or
RWS /
RWST can be anywhere in the stack, but you can't use
zoom with arbitrary
MonadState because it doesn't provide any methods to change the type of the state. See this issue for details.)
For the sake of the example, let's define some types first:
data Position = Position { _x, _y :: Int } data Player = Player { _position :: Position, ... } data Game = Game { _player :: Player, _obstacles :: [Position], ... } concat <$> mapM makeLenses [''Position, ''Player, ''Game]
Now, here's an action that moves the player north-east:
moveNE ::
StateGame () moveNE = do player.position.x
+=1 player.position.y
+=1
With
zoom, you can use
player.position to focus just on a part of the state:
moveNE ::
StateGame () moveNE = do
zoom(player.position) $ do x
+=1 y
+=1
You can just as well use it for retrieving things out of the state:
getCoords ::
StateGame (Int, Int) getCoords =
zoom(player.position) ((,)
<$>
usex
<*>
usey)
Or more explicitly:
getCoords =
zoom(player.position) $ do x' <-
usex y' <-
usey return (x', y')
When you pass a traversal to
zoom, it'll work as a loop. For instance, here we move all obstacles:
moveObstaclesNE ::
StateGame () moveObstaclesNE = do
zoom(obstacles.
each) $ do x
+=1 y
+=1
If the action returns a result, all results would be combined with
moveObstaclesNE returns a list of old coordinates of obstacles in addition to moving them:
<>– the same way they're combined when
^.is passed a traversal. In this example,
moveObstaclesNE = do xys <-
zoom(obstacles.
each) $ do -- Get old coordinates. x' <-
usex y' <-
usey -- Update them. x
.=x' + 1 y
.=y' + 1 -- Return a single-element list with old coordinates. return [(x', y')] ...
Finally, you might need to write your own instances of
Zoom if you use
newtyped transformers in your monad stack. This can be done as follows:
import Lens.Micro.Mtl.Internal type instance
Zoomed(MyStateT s m) =
Zoomed(StateT s m) instance Monad m =>
Zoom(MyStateT s m) (MyStateT t m) s t where
zooml (MyStateT m) = MyStateT (
zooml m)
Instances
type family Magnified (m :: * -> *) :: * -> * -> * Source #
Instances
class (MonadReader b m, MonadReader a n) => Magnify m n b a | m -> b, n -> a, m a -> n, n b -> m where Source #
magnify :: LensLike' (Magnified m c) a b -> m c -> n c infixr 2 Source #
This is an equivalent of
local which lets you apply a getter to your environment instead of merely applying a function (and it also lets you change the type of the environment).
local:: (r -> r) ->
Readerr a ->
Readerr a
magnify:: Getter r x ->
Readerx a ->
Readerr a
magnify works with
Reader /
ReaderT,
RWS /
RWST, and
(->).
Here's an example of
magnify being used to work with a part of a bigger config. First, the types:
data URL = URL { _protocol :: Maybe String, _path :: String } data Config = Config { _base :: URL, ... } makeLenses ''URL makeLenses ''Config
Now, let's define a function which returns the base url:
getBase ::
ReaderConfig String getBase = do protocol <-
fromMaybe"
<$>
view(base.protocol) path <-
view(base.path) return (protocol ++ path)
With
magnify, we can factor out
base:
getBase =
magnifybase $ do protocol <-
fromMaybe"
<$>
viewprotocol path <-
viewpath return (protocol ++ path)
This concludes the example.
Finally, you should know writing instances of
Magnify for your own types can be done as follows:
import Lens.Micro.Mtl.Internal type instance
Magnified(MyReaderT r m) =
Magnified(ReaderT r m) instance Monad m =>
Magnify(MyReaderT r m) (MyReaderT t m) r t where
magnifyl (MyReaderT m) = MyReaderT (
magnifyl m)
Instances
Focusing (used for
Zoom)
Effect (used for
Magnify)
newtype EffectRWS w st m s a Source # | https://hackage.haskell.org/package/microlens-mtl-0.2.0.1/docs/Lens-Micro-Mtl-Internal.html | CC-MAIN-2022-21 | refinedweb | 919 | 57.2 |
Each Answer to this Q is separated by one/two green lines.
I want to ask what the
with_metaclass() call means in the definition of a class.
E.g.:
class Foo(with_metaclass(Cls1, Cls2)):
- Is it a special case where a class inherits from a metaclass?
- Is the new class a metaclass, too?
with_metaclass() is a utility class factory function provided by the
six library to make it easier to develop code for both Python 2 and 3.
It uses a little sleight of hand (see below) with a temporary metaclass, to attach a metaclass to a regular class in a way that’s cross-compatible with both Python 2 and Python 3. and c) when you subclass from a base class with a metaclass, creating the actual subclass object is delegated to the metaclass. It effectively creates a new, temporary base class with a temporary
metaclass metaclass that, when used to create the subclass swaps out the temporary base class and metaclass combo with the metaclass of your choice:(type): def __new__(cls, name, this_bases, d): return meta(name, bases, d) @classmethod def __prepare__(cls, name, this_bases): return meta.__prepare__(name, bases) return type.__new__(metaclass, 'temporary_class', (), {})
Breaking the above down:
type.__new__(metaclass, 'temporary_class', (), {})uses the
metaclassmetaclass to create a new class object named
temporary_classthat is entirely empty otherwise.
type.__new__(metaclass, ...)is used instead of
metaclass(...)to avoid using the special
metaclass.__new__()implementation that is needed for the slight of hand in a next step to work.
- In Python 3 only, when
temporary_classis used as a base class, Python first calls
metaclass.__prepare__()(passing in the derived class name,
(temporary_class,)as the
this_basesargument. The intended metaclass
metais then used to call
meta.__prepare__(), ignoring
this_basesand passing in the
basesargument.
- next, after using the return value of
metaclass.__prepare__()as the base namespace for the class attributes (or just using a plain dictionary when on Python 2), Python calls
metaclass.__new__()to create the actual class. This is again passed
(temporary_class,)as the
this_basestuple, but the code above ignores this and uses
basesinstead, calling on
meta(name, bases, d)to create the new derived class.
As a result, using
with_metaclass() gives you a new class object with no additional base classes:
>>> class FooMeta(type): pass ... >>> with_metaclass(FooMeta) # returns a temporary_class object <class '__main__.temporary_class'> >>> type(with_metaclass(FooMeta)) # which has a custom metaclass <class '__main__.metaclass'> >>> class Foo(with_metaclass(FooMeta)): pass ... >>> Foo.__mro__ # no extra base classes (<class '__main__.Foo'>, <type 'object'>) >>> type(Foo) # correct metaclass <class '__main__.FooMeta'>.
| https://techstalking.com/programming/python/python-metaclass-understanding-the-with_metaclass/ | CC-MAIN-2022-40 | refinedweb | 421 | 57.77 |
9 stable releases
Uses old Rust 2015
27 downloads per month
38KB
592 lines
nestools
A set of Rust tools used to help make NES games. The documentation is generated from the code, and can be found on the docs.rs page. Releases are in the github releases.
lib.rs:
This is a set of relatively simple tools used to assist with the building of NES games. Currently, its only functionality is in managing sprite sheets.
All binaries are individually described in their own binary modules. The list of binary modules is namespaced for convenience.
To download releases, you can either use a standard
cargo install, or you can visit the
GitHub releases page. I'll do my best to
support a standard set of targets, but I can't make strong guarantees, as I do all of my
development on my Linux machine.
Dependencies
~1.3–1.9MB
~44K SLoC | https://lib.rs/crates/nestools | CC-MAIN-2021-21 | refinedweb | 151 | 65.32 |
Unreal Engine 4 Tutorial: Painting With Render Targets
In this Unreal Engine 4 tutorial, you will learn how to paint various textures onto meshes using materials and render targets.
A render target is basically a texture that you can write to at runtime. On the engine side of things, they store information such as base color, normals and ambient occlusion.
On the user side of things, render targets were mainly used as a sort of secondary camera. You could point a scene capture at something and store the image to a render target. You could then display the render target on a mesh to simulate something like a security camera.
With the release of 4.13, Epic introduced the ability to draw materials directly to render targets using Blueprints. This feature allowed for advanced effects such as fluid simulations and snow deformation. Sounds pretty exciting, right? But before you get into such advanced effects, it’s always best to start simple. And what’s more simple than just painting onto a render target?
In this tutorial, you will learn how to:
- Dynamically create a render target using Blueprints
- Display a render target on a mesh
- Paint a texture onto a render target
- Change the brush size and texture during gameplay
- Part 1: Painting With Render Targets (you are here!)
- Part 2: Deformable Snow
- Part 3: Interactive Grass
Getting Started
Start by downloading the materials for this tutorial (you can find a link at the top or bottom of this tutorial). Unzip it and navigate to CanvasPainterStarter and open CanvasPainter.uproject. If you press Play, you will see the following:
The square in the middle (the canvas) is what you will be painting on. The UI elements on the left will be the texture you want to paint and its size.
To start, let’s go over the method you will use to paint.
Painting Method
The first thing you will need is a render target to act as the canvas. To determine where to paint on the render target, you do a line trace going forward from the camera. If the line hits the canvas, you can get the hit location in UV space.
For example, if the canvas is perfectly UV mapped, a hit in the center will return a value of (0.5, 0.5). If it hits the bottom-right corner, you will get a value of (1, 1). You can then use some simple math to calculate the draw location.
But why get the location in UV space? Why not use the actual world space location? Using world space, you would first need to calculate the hit’s location relative to the plane. You would also need to take into account the plane’s rotation and scale.
Using UV space, you don’t need to do any of these calculations. On a perfectly UV mapped plane, a hit in the middle will always return (0.5, 0.5), regardless of the plane’s location and rotation.
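If you prefer to see the flow as code, here is a rough C++ sketch of the trace-and-UV step (the tutorial itself builds this in Blueprints). The function name, trace distance and collision channel are illustrative choices; UGameplayStatics::FindCollisionUV is the engine function behind the Find Collision UV Blueprint node, and it only returns valid UVs when Support UV From Hit Results is enabled under Project Settings\Physics.

```cpp
#include "Kismet/GameplayStatics.h"

bool GetCanvasHitUV(UWorld* World, const FVector& CameraLocation,
                    const FVector& CameraForward, FVector2D& OutUV)
{
    const FVector TraceEnd = CameraLocation + CameraForward * 5000.0f;

    FCollisionQueryParams Params;
    Params.bTraceComplex = true;    // UV lookup needs complex (per-triangle) hits
    Params.bReturnFaceIndex = true; // FindCollisionUV reads the hit face index

    FHitResult Hit;
    if (!World->LineTraceSingleByChannel(Hit, CameraLocation, TraceEnd,
                                         ECC_Visibility, Params))
    {
        return false;
    }

    // Convert the hit to UV space on the canvas mesh (UV channel 0).
    return UGameplayStatics::FindCollisionUV(Hit, 0, OutUV);
}
```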
First, you will create the material that will display the render target.
Creating the Canvas Material
Navigate to the Materials folder and then open M_Canvas.
For this tutorial, you will create render targets dynamically in Blueprints. This means you will need to set up a texture as a parameter so you can pass in the render target. To do this, create a TextureSampleParameter2D and name it RenderTarget. Afterwards, connect it to BaseColor.
Don’t worry about setting the texture here — you will do this next in Blueprints. Click Apply and then close M_Canvas.
The next step is to create the render target and then use it in the canvas material.
Creating the Render Target
There are two ways to create render targets. The first is to create them in the editor by clicking Add New\Materials & Textures\Render Target. This will allow you to easily reference the same render target across multiple actors. However, if you wanted to have multiple canvases, you would have to manually create a render target for each canvas.
A better way to do this is to create render targets using Blueprints. The advantage to this is that you only create render targets as needed and they do not bloat your project files.
First, you will need to create the render target and store it as a variable for later use. Go to the Blueprints folder and open BP_Canvas. Locate Event BeginPlay and add the highlighted nodes.
Set Width and Height to 1024. This will set the resolution of the render target to 1024×1024. Higher values will increase image quality but at the cost of more video memory.
Next is the Clear Render Target 2D node. You can use this node to set the color of your render target. Set Clear Color to (0.07, 0.13, 0.06). This will fill the entire render target with a greenish color.
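For reference, a rough C++ equivalent of this part of the graph might look like the sketch below. This is an illustration only: it assumes the engine's UKismetRenderingLibrary helpers and a hypothetical actor class ABP_Canvas with a UTextureRenderTarget2D* member named RenderTarget.

#include "Kismet/KismetRenderingLibrary.h"

void ABP_Canvas::BeginPlay()
{
    Super::BeginPlay();

    // Create a 1024x1024 render target; higher resolutions look sharper
    // but cost more video memory.
    RenderTarget = UKismetRenderingLibrary::CreateRenderTarget2D(this, 1024, 1024);

    // Fill the entire target with the greenish clear color.
    UKismetRenderingLibrary::ClearRenderTarget2D(
        this, RenderTarget, FLinearColor(0.07f, 0.13f, 0.06f));
}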
Now you need to display the render target on the canvas mesh.
Displaying the Render Target
At the moment, the canvas mesh is using its default material. To display the render target, you need to create a dynamic instance of M_Canvas and supply the render target. Then, you need to apply the dynamic material instance to the canvas mesh. To do this, add the highlighted nodes:
First, go to the Create Dynamic Material Instance node and set Parent to M_Canvas. This will create a dynamic instance of M_Canvas.
Next, go to the Set Texture Parameter Value node and set Parameter Name to RenderTarget. This will pass in the render target to the texture parameter you created before.
Now the canvas mesh will display the render target. Click Compile and then go back to the main editor. Press Play to see the canvas change colors.
Now that you have your canvas, you need to create a material to act as your brush.
Creating the Brush Material
Navigate to the Materials folder. Create a material named M_Brush and then open it. First, set the Blend Mode to Translucent. This will allow you to use textures with transparency.
Just like the canvas material, you will also set the texture for the brush in Blueprints. Create a TextureSampleParameter2D and name it BrushTexture. Connect it like so:
Click Apply and then close M_Brush.
The next thing to do is to create a dynamic instance of the brush material so you can change the brush texture. Open BP_Canvas and then add the highlighted nodes.
Next, go to the Create Dynamic Material Instance node and set Parent to M_Brush.
With the brush material complete, you now need a function to draw the brush onto the render target.
Drawing the Brush to the Render Target
Create a new function and name it DrawBrush. First, you will need parameters for which texture to use, brush size and draw location. Create the following inputs:
- BrushTexture: Set type to Texture 2D
- BrushSize: Set type to float
- DrawLocation: Set type to Vector 2D
Before you draw the brush, you need to set its texture. To do this, create the setup below. Make sure to set Parameter Name to BrushTexture.
Now you need to draw to the render target. To do this, create the highlighted nodes:
Begin Draw Canvas to Render Target will let the engine know you want to start drawing to the specified render target. Draw Material will then allow you to draw a material at the specified location, size and rotation.
Calculating the draw location is a two-step process. First, you need to scale DrawLocation to fit into the render target’s resolution. To do this, multiply DrawLocation with Size.
By default, the engine will draw materials using the top-left as the origin. This will lead to the brush texture not being centered on where you want to draw. To fix this, you need to divide BrushSize by 2 and then subtract the result from the previous step.
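In code form, the two steps amount to something like this sketch (DrawLocation is the 0-1 UV hit; RenderTargetSize and BrushSize are the values discussed above; the variable names are illustrative):

// Step 1: scale the UV hit into pixel coordinates.
FVector2D PixelPos = DrawLocation * RenderTargetSize;

// Step 2: shift by half the brush size so the texture is centered.
FVector2D DrawPos = PixelPos - FVector2D(BrushSize * 0.5f, BrushSize * 0.5f);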
Afterwards, connect everything like so:
Finally, you need to tell the engine you want to stop drawing to the render target. Add an End Draw Canvas to Render Target node and connect it like so:
Now whenever DrawBrush executes, it will first set the texture for BrushMaterial to the supplied texture. Afterwards, it will draw BrushMaterial to RenderTarget using the supplied position and size.
That’s it for the drawing function. Click Compile and then close BP_Canvas. The next step is to perform a line trace from the camera and then paint the canvas if there was a hit.
Line Tracing From the Camera
Before you paint on the canvas, you will need to specify the brush texture and size. Go to the Blueprints folder and open BP_Player. Afterwards, set the BrushTexture variable to T_Brush_01 and BrushSize to 500. This will set the brush to a monkey image with a size of 500×500 pixels.
Next, you need to do the line trace. Locate InputAxis Paint and create the following setup:
This will perform a line trace going forward from the camera as long as the player is holding down the key binding for Paint (in this case, left-click).
Now you need to check if the line trace hit the canvas. Add the highlighted nodes:
Now if the line trace hits the canvas, the DrawBrush function will execute using the supplied brush variables and UV location.
Before the Find Collision UV node will work, you will need to change two settings. First, go to the LineTraceByChannel node and enable Trace Complex.
Second, go to Edit\Project Settings and then Engine\Physics. Enable Support UV From Hit Results and then restart your project.
Once you have restarted, press Play and left-click to paint onto the canvas.
You can even create multiple canvases and paint on each one separately. This is possible because each canvas dynamically creates its own render target.
In the next section, you will implement functionality so the player can change the brush size.
Changing Brush Size
Open BP_Player and locate the InputAxis ChangeBrushSize node. This axis mapping is set to use the mouse wheel. To change brush size, all you need to do is change the value of BrushSize depending on the Axis Value. To do this, create the following setup:
This will add or subtract from BrushSize every time the player uses the mouse wheel. The first multiply determines how fast to add or subtract. As a safeguard, a Clamp (float) node ensures the brush size does not go below 0 or above 1,000.
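The same logic in code is essentially a one-liner (sketch; Speed stands for whatever multiplier you picked for the first multiply node):

// Mouse wheel up/down adjusts the size; the clamp keeps it between 0 and 1000.
BrushSize = FMath::Clamp(BrushSize + AxisValue * Speed, 0.0f, 1000.0f);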
Click Compile and then go back to the main editor. Use the mouse wheel to change the brush size while you paint.
In the final section, you will create functionality to let the player change the brush texture.
Changing the Brush Texture
First, you will need an array to hold textures the player can use. Open BP_Player and then create an array variable. Set the type to Texture 2D and name it Textures.
Afterwards, create three elements in Textures. Set each of them to:
- T_Brush_01
- T_Brush_02
- T_Brush_03
These are the textures the player will be able to paint. To add more textures, simply add them to this array.
Next, you need a variable to hold the current index in the array. Create an integer variable and name it CurrentTextureIndex.
Next, you need a way to cycle through the textures. For this tutorial, I have set up an action mapping called NextTexture set to right-click. Whenever the player presses this button, it should change to the next texture. To do this, locate the InputAction NextTexture node and create the following setup:
This will increment CurrentTextureIndex every time the player presses right-click. If the index reaches the end of the array, it will reset back to 0. Finally, BrushTexture is set to the appropriate texture.
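As a sketch, the equivalent code is:

// Advance to the next texture, wrapping back to index 0 at the end of the array.
CurrentTextureIndex = (CurrentTextureIndex + 1) % Textures.Num();
BrushTexture = Textures[CurrentTextureIndex];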
Click Compile and then close BP_Player. Press Play and press right-click to cycle between the textures.
Where to Go From Here?
You can download the completed project using the link at the top or bottom of this tutorial.
Render targets are extremely powerful and what you’ve learnt in this tutorial is only scratching the surface. If you’d like to learn more about what render targets can do, check out Content-Driven Multipass Rendering in UE4. In the video, you will see examples of flow map painting, volume painting, fluid simulation and more.
Also check out the live training for Blueprint Drawing to Render Targets to learn how to create a height map painter using render targets.
If there are any effects you'd like me to cover, let me know in the comments below!
If you like to read instructions then keep reading. If you prefer to watch the video, check below to learn how to set up Geckodriver and use it with the Selenium 3 WebDriver.
Selenium 3 WebDriver for Java
Go to the Selenium HQ website and download the Selenium Server and the Selenium Java WebDriver client bindings.
Download Firefox Geckodriver
You can download Geckodriver from its GitHub releases page.
The driver ships as a zip file; after downloading it, extract it into a folder.
The path to the folder containing geckodriver.exe needs to be on the system PATH.
To add the driver to the PATH, follow the steps below. You can skip this step if you want, but having the driver in the system variables makes it available from anywhere.
Set Environment Variables for Windows (Optional)
1. Click the Windows Start button.
2. Locate the Computer entry.
3. Right-click on it.
4. Choose Properties.
5. When the System window opens, check the left-hand sidebar.
6. Choose Advanced system settings.
7. This opens the "System Properties" dialog box.
8. Click on the "Environment Variables" button.
9. Copy the path to geckodriver.exe.
10. Now find Path under "System variables".
11. Paste the copied geckodriver path into it.
12. Save the settings by clicking the OK button.
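To verify the setup, open a new Command Prompt and ask Windows where it finds the driver (the path shown here is just an example):

C:\> where geckodriver
C:\tools\geckodriver\geckodriver.exe

If the command prints a path, the PATH change has taken effect.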
Setup Eclipse Project
Now we need to set up the Eclipse project. Follow the instructions below to set up Eclipse for the Selenium 3 Java WebDriver.
1. Open Eclipse IDE.
2. Go to File > New > Java Project.
3. Name your project, project folder and choose the Java version as 1.8.
4. Click next.
5. In this dialog box choose the build tab.
6. Find the directory where you have downloaded the selenium server.
7. Add the selenium server to the path.
8. Add the Selenium Java webdriver binding to the path.
9. Click finish to create the project.
10. After your project is created you have to create a class.
11. Go to File > New > Class. Name it “Demo”
12. Add the following code:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Demo {
    public static void main(String[] args) {
        System.setProperty("webdriver.gecko.driver", "\\path to\\geckodriver.exe");
        WebDriver driver = new FirefoxDriver();
        driver.get(""); // URL omitted in the original; the output below implies a site such as http://www.google.com
        System.out.println("Website Name : " + driver.getTitle());
        driver.quit();
    }
}
Now run the program. You'll see a Firefox browser instance open, and the console will print the website name (google, in this example). This is just a small example, and you can now run more complex programs depending on your needs. I hope this article helps you work with the new Geckodriver and the new Selenium 3 WebDriver for Java. If you have any specific question then feel free to ask here. Do share the above video and the article with your friends on social media. 🙂
See Part I, Part II, Part III and Part IV of this installment to learn more about Python functions.
Python functions with multiple arguments and a return statement

Both versions of the greet Python function defined above were fairly straightforward in terms of the functionality they perform. One more thing functions can do is return a value to the calling statement, using the keyword return. Consider a function that takes several parameters, performs some mathematical calculation on them and returns the output. For example:
# Function with two parameters 'a' and 'b'
def add(a, b):
    """Computes the addition and returns the result.
    It does not implement the print statement."""
    result = a + b  # Computes addition
    return result  # Returns the result variable
This user-defined function add takes two parameters, a and b, sums them together, assigns the output to a variable result and ultimately returns that variable to the calling statement, as shown below:
# Calling the add function
x = 5
y = 6
print(f'The addition of {x} and {y} is {add(x, y)}.')
We call the function add with two arguments x and y (as the function definition has two parameters), initialized with 5 and 6 respectively, and the addition returned by the function gets printed via the print() statement:

The addition of 5 and 6 is 11.
Similarly, Python functions can also return multiple values based on the implementation. The following function demonstrates the same.
# Function definition
def upper_lower(x):
    """Returns the upper and lower version of the string.
    The value must be a string, else it will result in an error.
    This function does not implement any error handling mechanism."""
    upper = x.upper()  # Convert x to upper string
    lower = x.lower()  # Convert x to lower string
    return upper, lower  # Return both variables upper and lower
The above upper_lower function takes one argument x (a string) and converts it to its upper- and lowercase versions. Let us call it and see the output.
Note: The function upper_lower implicitly assumes a string parameter. Providing an integer or float value as an argument while calling will result in an error.
# Calling the function
upper, lower = upper_lower(‘Python’)
# Printing output
print(upper)
PYTHON
print(lower)
python
Here, the call to the upper_lower function has been assigned to two variables, upper and lower, as the function returns two values, which are unpacked into each variable respectively; the same can be verified in the output shown above.
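As a side note (not part of the original listing), assigning the call to a single variable shows that the two values actually come back as one tuple:

# Illustration: the two return values arrive as a tuple
result = upper_lower('Python')
print(result)
('PYTHON', 'python')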
In the next installment, the author will discuss Python functions with default arguments.
Simplified Exception Identification in Python
July 28th, 2002 by Jean-Francois Touchette in
One of the features that makes Python a great programming language is exceptions for error handling. Exceptions are convenient in many ways for handling errors and special conditions in a program. But, if several kinds of exceptions occur in short sections of code and in several parts of a program, it quickly becomes tedious and error-causing to recode the "except:" chunks of the code. Error recovery, especially, is a place where you want to have well-tested, clear and simple chunks of code. This article suggests an approach that helps users achieve this goal.
A second factor that makes Python so flexible is it does not reinvent the wheel when its API interfaces with the operating system. Any C or C++ programmer familiar with standard *NIX system calls and libraries can leverage his/her previous knowledge when moving into Python application development. On the other hand, the Python socket module generates exceptions that differ from one platform to another. As an example, the Connection refused error exception is assigned the number 111 under Linux, but it is 10061 in another popular operating system. Again, it quickly becomes boring to code multiple except: clauses for use on many different platforms.
Always searching for a better and easier way to do things, let's look now at how we can identify and categorize exceptions in Python in order to simplify error recovery. In addition, let's try to do this in a way that can be applied to multiple operating systems.
There is more than one generic way to discover the identity of an exception. First, you need to code a catch-all except: clause, like this:
try:
    ...some statements here...
except:
    ...exception handling...
In the exception handling code, we want to have as few lines of code as possible. Also, it is highly desirable to funnel both normal exceptions (disconnect, connection refused) alongside the weirder ones (Attribute error!), running both through a single execution path. You will, of course, need more statements to do the precise action required by the condition. But, if you can do the first four or five steps in a generic way, it will making testing things later as easier task.
In our example, Python offers two ways to access the exception information. For both, the Python script first must have import sys before the try: .. except: portion of the code. With the first method, the function sys.exc_type gives the name of the exception, and sys.exc_value gives more details about the exception. For example, in a NameError exception the sys.exc_value command might return "There is no variable named 'x'", when x was referenced without first having been assigned a value. This method, however, is not thread-safe. As a result, it is not that useful, because most network applications are multithreaded.
The second way to access exception information is with sys.exc_info(). This function is thread-safe and also is more flexible, although it might look intimidating at first. If you run the following code:
import sys

try:
    x = x + 1
except:
    print sys.exc_info()
You will see this message:
(<class exceptions.NameError at 007C5B2C>, <exceptions.NameError instance at 007F5E3C>, <traceback object at 007F5E10>)
How's that for cryptic! But, with a few lines of code we can unravel this into rather useful chunks of information. Suppose that you run the following code instead:
import sys
import traceback

def formatExceptionInfo(maxTBlevel=5):
    cla, exc, trbk = sys.exc_info()
    excName = cla.__name__
    try:
        excArgs = exc.__dict__["args"]
    except KeyError:
        excArgs = "<no args>"
    excTb = traceback.format_tb(trbk, maxTBlevel)
    return (excName, excArgs, excTb)

try:
    x = x + 1
except:
    print formatExceptionInfo()
This will display:
('NameError', ("There is no variable named 'x'",), [' File "<stdin>", line 14, in ?\n'])
The function formatExceptionInfo() takes the three-element tuple returned by sys.exc_info() and transforms each element into a more convenient form, a string. cla.__name__ gives the name of the exception class, while exc.__dict__["args"] gives other details about the exception. In the case of socket exceptions, these details will be in a two-element tuple, like ("error", (32, 'Broken pipe')). Lastly, traceback.format_tb() formats the traceback information into a string. The optional argument (maxTBlevel in the sample code) allows users to control the depth of the traceback that will be formatted. The traceback information is not essential to identify or categorize exceptions, but if you want to log all the spurious unknown exceptions your program encounters, it is useful to write that traceback string in the log.
With the first two elements--the name of the exception class and the exception details--we can now try to identify the exception and reduce it to a well-known one from a set of previously recognized exception patterns.
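For illustration (this helper is not from the article), the name and arguments can be matched against a small table of known patterns, which also smooths over the per-platform differences -- such as the connection-refused error numbers -- mentioned earlier:

# Sketch: categorize an exception from (excName, excArgs).
# The error numbers below are examples; extend the table per platform.
KNOWN_ERRORS = {
    111:   "CONNECTION_REFUSED",   # Linux
    10061: "CONNECTION_REFUSED",   # another popular operating system
    32:    "BROKEN_PIPE",
}

def categorizeException(excName, excArgs):
    if excName == "error":          # socket exceptions report this name
        for arg in excArgs:
            code = arg
            if isinstance(arg, tuple):
                code = arg[0]
            if code in KNOWN_ERRORS:
                return KNOWN_ERRORS[code]
        return "UNKNOWN_SOCKET_ERROR"
    return "UNKNOWN"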
Re: Simplified Exception Identification in Python
On May 31st, 2004 Anonymous says:
[to author:] Dude, ever hear of errno?

errno is not thread safe
On September 27th, 2005 JFT (not verified) says:

Re: Simplified Exception Identification in Python
On October 13th, 2003 Anonymous says:
The same as the first comment, thanks, exactly what I needed to know, I was having trouble getting the trace back from errors in a threaded program

Re: Simplified Exception Identification in Python
On September 12th, 2003 Anonymous says:
Thanks man, this was exactly the code I was looking for.
I'm working on a TGA writer for a ray-tracer, but I can't get it to work. Can anyone see what is wrong with it?
The written file is weird, as one can see from the output. Why, for example, are there values in the written file that were not in the object targa?
Code:
#include <fstream.h>
#include <iostream.h>
typedef unsigned char UCHAR;
typedef signed char SCHAR;
typedef unsigned short int USINT;
class TARGA
{
public:
TARGA();
UCHAR idfield_no; // no. of chars in id field.
UCHAR colormap_type; // 1 = map 0 = no map
UCHAR image_type; // 1 = color mapped, 2 = rgb, 3 = b&w. All uncompressed
USINT colormap_origin; // index of first entry (lo-hi)
USINT colormap_length; // no. of color map entries (lo-hi)
UCHAR colormap_bit; // 16 for 16bit, 24 for 24bit...
USINT x_origin; // lower left corner (lo-hi)
USINT y_origin; // lower left corner (lo-hi)
USINT width; // width of image (lo-hi)
USINT height; // height of image (lo-hi)
UCHAR pixel_bit; // 16 for 16bit, 24 for 24bit...
UCHAR img_descriptor; // Wierd bit-thingy (ignore?)
//UCHAR* idfield; // Usually omitted (idfield_no=0)
//UCHAR* colormap; // Omitted if truecolor
UCHAR* image_data; // data stored differently depending on bitrate and colormap etc.
// The footer is ignored
};
TARGA::TARGA()
{
idfield_no = 0;
colormap_type = 0;
image_type = 2; // truecolor
colormap_origin = 0;
colormap_length = 0;
colormap_bit = 24; // 24 bit
x_origin = 0;
y_origin = 0;
width = 2; // width
height = 2; // height
pixel_bit = 24; // 24 bit
img_descriptor = 0;
//idfield; omitted!
//colormap; omitted!
image_data = new UCHAR[width*height*3];
for (int n=0; n<width*height*3; n++)
{
image_data[n]= 128; // should make all pixels gray
}
}
main(int argc, char* argv)
{
fstream file;
TARGA targa;
char outfile[] = "outfile.tga";
file.open(outfile, ios::out); // should it be "ios::binary"?
if (!file)
{
cout << "Error in opening " << outfile << " for writing" << endl;
return 1;
}
file.write((UCHAR *)&targa,sizeof(targa));
file.close();
file.open(outfile, ios::in);
if (!file)
{
cout << "Error in opening " << outfile << " for reading" << endl;
return 1;
}
char c;
int n=0;
while (true)
{
file.get(c);
cout << n << ": " << int(c) << endl;
if (file.eof()) { break; }
n++;
}
//file.read((UCHAR *)&targa,sizeof(targa));
file.close();
return 0;
}
Some bytes of the file seem to be correct while others are not. For example, the fourth byte is "-52" (or something) when it should be "0" (colormap origin). Or is that because it is a USINT?
Any suggestions? | https://cboard.cprogramming.com/cplusplus-programming/12412-need-help-tga-writer-printable-thread.html | CC-MAIN-2017-34 | refinedweb | 418 | 67.04 |
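For what it's worth, a likely culprit and a hedged sketch of a fix (not a verified solution): file.write((UCHAR *)&targa, sizeof(targa)) dumps the raw memory layout of the object, which includes compiler padding between the three leading UCHARs and the first USINT (in a debug build that padding is filled with 0xCC, which prints as -52) as well as the image_data pointer itself; the file is also opened in text mode. Writing each field explicitly with ios::binary avoids all three problems:

// Sketch: write the 18-byte TGA header field by field in binary mode,
// then the pixel data -- never the raw struct or the pointer member.
void write_tga(const TARGA& t, const char* name)
{
    fstream file(name, ios::out | ios::binary);
    file.write((const char*)&t.idfield_no, 1);
    file.write((const char*)&t.colormap_type, 1);
    file.write((const char*)&t.image_type, 1);
    file.write((const char*)&t.colormap_origin, 2); // lo-hi on x86
    file.write((const char*)&t.colormap_length, 2);
    file.write((const char*)&t.colormap_bit, 1);
    file.write((const char*)&t.x_origin, 2);
    file.write((const char*)&t.y_origin, 2);
    file.write((const char*)&t.width, 2);
    file.write((const char*)&t.height, 2);
    file.write((const char*)&t.pixel_bit, 1);
    file.write((const char*)&t.img_descriptor, 1);
    file.write((const char*)t.image_data, t.width * t.height * 3); // pixel data
}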
Submitted solutions are automatically compiled and run under Linux test system. You must keep strictly the input output specification. In particular your program must not printout any additional messages like 'input the number please', which are not specified in the problem formulation.
Submitted solutions are compiled according to the user's selection made during the submission process.
Source code length is restricted to 50 000 bytes in one file. Additional restrictions might be provided for particular problems. Using external libraries or files is prohibited.
Execution time is restricted for each problem and for each test case separately. In each case the restriction is no stricter than 1s. Solutions are tested on identical PIII 733 MHz processors.
Accessible memory is at least 64 MB.
Task: write a program that sums two integers. The input contains two integers A and B; the output is the sum of these integers.
Input example:
2 3

Output example:
5
#include <stdio.h>

int main()
{
    int a, b;
    scanf("%d %d", &a, &b);
    printf("%d\n", a + b);
    return 0;
}
program ex;
var a, b: integer;
begin
    read(a,b);
    writeln(a+b);
end.
public class Main
{
public static void main (String[] args) throws java.lang.Exception
{
java.io.BufferedReader r = new java.io.BufferedReader
(new java.io.InputStreamReader (System.in));
int a=Integer.parseInt(r.readLine()),
b=Integer.parseInt(r.readLine());
System.out.println(a+b);
}
}
To submit a solution choose problem from list of problems and press button 'Submit' at the top of the problem description. You can submit multiple solutions to each problem. Score for the problem is equal to the score of the best submitted solution.
To see the Statistic for problem choose problem from list of problems and press button 'All submissions' at the top of the problem description. To view the status, hover over the check box, cross or warning icon in the result column.
At present the following status codes are available:
You could use symbolic constant ONLINE_JUDGE for debugging purpose. In C you can use:
#ifndef ONLINE_JUDGE
freopen("input.txt","r",stdin);
freopen("output.txt","w",stdout);
#endif
Input data are taken from input.txt, and output data are printed to output.txt only outside the CodeChef system.
Please note that newline characters can differ between Linux and other operating systems. This can be a source of nasty bugs.
If you use a compiler in which a long long type does not exist (like MS Visual C++), then to assure proper compilation you can create a symbolic constant:
#define __int64 long long
In this case do not forget about proper output format:
%lld instead of %I64d in the case of using scanf/printf functions.
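Putting the two hints together, a sketch of the A+B example using 64-bit integers (for illustration only):

#include <stdio.h>

#define __int64 long long

int main() {
    __int64 a, b;
    scanf("%lld %lld", &a, &b);  /* use %I64d here on MS Visual C++ */
    printf("%lld\n", a + b);
    return 0;
}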
NAME
readdir_r - read a directory

SYNOPSIS
#include <dirent.h>

int readdir_r(DIR *dirp, struct dirent *entry, struct dirent **result);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

readdir_r():
_POSIX_C_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE

DESCRIPTION
This function is deprecated; use readdir(3) instead.

RETURN VALUE
The readdir_r() function returns 0 on success. On error, it returns a positive error number (listed under ERRORS). If the end of the directory stream is reached, readdir_r() returns 0, and returns NULL in *result.

ERRORS
ENAMETOOLONG
A directory entry whose name was too long to be read was encountered.
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
POSIX.1-2001, POSIX.1-2008.
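EXAMPLE
The following sketch (an illustration, not part of the original page) shows the call pattern described under RETURN VALUE; it lists the current directory:

#include <dirent.h>
#include <stdio.h>

int
main(void)
{
    DIR *dirp = opendir(".");
    if (dirp == NULL) {
        perror("opendir");
        return 1;
    }

    struct dirent entry;
    struct dirent *result;

    /* A return of 0 with *result == NULL signals end of stream. */
    while (readdir_r(dirp, &entry, &result) == 0 && result != NULL)
        printf("%s\n", entry.d_name);

    closedir(dirp);
    return 0;
}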
SEE ALSO
readdir(3)

COLOPHON
This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at https://www.kernel.org/doc/man-pages/.
Feature #13805
Make refinement scoping to be like that of constants
Description
Refinements are currently lexically scoped, which makes their use burdensome when one wants to apply a refinement to an entire project, requiring boiler plate at the top of every project file. I propose that there ought to be a method of applying refinements to an entire namespace (module), similar to how constants are scoped.
For example, here's a trivial example of how one is able to scope a new RuntimeError class inside a module.
# a.rb
require 'b'

module MyProject
  class RuntimeError < ::RuntimeError; end
end

# b.rb
module MyProject
  class SomeClass
    def initialize
      raise RuntimeError, "Application error occurred" # Raises MyProject::RuntimeError exception
    end
  end
end
Could refinements be scoped in a similar way, without the requirement for the using clause in every file? For example:
# a.rb
require 'b'

module MyProject
  refine! String do
    def exclaim
      self + "!!!"
    end
  end
end

# b.rb
module MyProject
  class SomeClass
    def initialize
      puts "Hello".exclaim # Outputs "Hello!!!"
    end
  end
end
I believe this would be an intuitive means of making refinements global to a project, which seems to be a highly desired feature, and something that needs to happen to mostly eliminate monkey patching. Of course, using the refine name would clash with the current behaviour of refine and break backwards compatibility, so I'd propose introducing refine! as an alternative to invoke this proposed new behaviour.
Apologies if this has already been proposed/discussed. I did search for similar proposals, but couldn't find anything. I'm interested to hear what some of the potential pitfalls of this would be.
Related issues

Related to Feature #13109: `using` in refinements is required to be physically placed before the refined method call
History
#1
Updated by wardrop (Tom Wardrop) 3 months ago
- Subject changed from Make refinement scope like that of constants to Make refinement scoping to be like that of constants
#2
[ruby-core:82346]
Updated by jeremyevans0 (Jeremy Evans) 3 months ago

You probably want to read the very long issue that introduced refinements (#4085), which contains the reasoning.
#3
[ruby-core:82355]
Updated by shevegen (Robert A. Heiler) 3 months ago
I do not think that issue #4085 necessarily has to be the "one and only one true refinement", in
particular if we keep in mind that towards ruby 3.x, even syntax changes could happen if they
may make sense (and matz would approve).
On a general side note, I wonder whether I am the only one who dislikes the syntax in regards to
refinements; that includes both the current form but also the "refine!" variant. For some reason,
the syntax does not seem to fit into ruby code I write but perhaps that is just me. It's weird
because I am in full agreement that refinements are good to be had, but the syntax is ... strange.
Especially the "using" clause, so in one way or another, although I dislike the "refine!" variant
used by Tom above, I actually understand his syntax proposal a bit. Then again perhaps I
misunderstood it.
Of course, using the refine name would clash with the current behaviour of refine and break
backwards compatibility, so I'd propose introducing refine! as an alternative to invoke this
proposed new behaviour.
Well, in the worst case, you could target ruby 3.x - matz said at a conference that the major
reason why he will not eliminate (too much?) existing functionality is to stay backwards
compatible in the 2.x branch, whereas the 3.x could possibly include such changes. So I think
you could actually propose any alternative syntax if it were to target 3.x (which is destined
loosely towards the year ~2020, give or take).
#4
[ruby-core:82371]
Updated by wardrop (Tom Wardrop) 3 months ago
jeremyevans0 (Jeremy Evans) wrote:
You probably want to read the very long issue that introduced refinements (#4085), which contains the reasoning.
I thought there'd be one of these long discussions floating around, thanks for the links. Didn't find a lot of discussion on why refinements weren't implemented as module-scoped. The discussion pretty much started off as being lexically/file scoped. Either way, ~4 years on and with refinements now in the wild for some time, it's definitely something worth raising again in my opinion.
#5
Updated by duerst (Martin Dürst) 2 months ago
- Related to Feature #13109: `using` in refinements is required to be physically placed before the refined method call added
Forum:You bastards huffed GRUESLAYER!!!!
From Uncyclopedia, the content-free encyclopedia
Note: This topic has been unedited for 1546 days. It is considered archived - the discussion is over. Do not add to unless it really needs a response.
Fuck you. Fuck all of you! --Scofield & 1337 22:24, June 29, 2012 (UTC)
- Find a cached version, and re-make it. →A (Ruins) 01:02, 30 June 2012
- No don't leave it dead. It's an uncyclopedia game and is therefore crap. ~Sir Frosty (Talk to me!)
02:15, June 30, 2012 (UTC)
- See? This is the kind of contemptuous attitude towards the Game namespace that really hurts me. I for one loved Grueslayer to the extent that I nominated it for vfh! And you heartless wretches disposed it off. If you're going to huff every text adventure we've made on the grounds of "we already have Zork", maybe we should have listened to Skullthumper after all. Fuck your "war on the Game namespace". Seriously. --Scofield & 1337 10:50, June 30, 2012 (UTC)
- Skully had the right idea. RIP SKULLY. ~Sir Frosty (Talk to me!)
12:02, June 30, 2012 (UTC)
- I feel bad knowing that I was the one who Nom'd it. But it WAS just an humour-less zork knockoff. - ENTER CITADEL T)alk C)untributions B)an 20:03, June 30, 2012 (UTC)
- Text adventures can be fun when done right....I used to play lots of them in the 80s, and not just Zork. Suspended for one, Mission Impossible for another...ah the memories. The graphics were all in the player's head. -- Simsilikesims(♀GUN) Talk here. 22:05, June 30, 2012 (UTC) | http://uncyclopedia.wikia.com/wiki/Forum:You_bastards_huffed_GRUESLAYER!!!!?t=20120701162606 | CC-MAIN-2016-40 | refinedweb | 280 | 78.35 |
Key Takeaways
- This article attempts to answer the question "Should Windows Communication Foundation (WCF) Hosting be Supported in .NET Core?"
- Arguments for yes include: many engineers are happy with WCF as a programming model, and don't want the (opportunity cost) expense of migrating from this when moving to .NET Core; and
- The benefits of WCF, such as the enforced contract-based programming model, flexibility of end points and bindings, security, etc., require very little plumbing code
- Arguments for no include: WCF is potentially redundant with ASP.NET WebAPI and REST, and WCF often is implemented in a very complicated fashion; and
- Creating server-side WCF is not worth the investment when things like SignalR and tooling should take precedence
Should Windows Communication Foundation (WCF) Hosting be Supported in .NET Core? To a lot of people this seems like a strange question; the answer is obviously... yes? no? Well actually it is quite contentious with people on both sides of the issue fiercely arguing for their position.
The arguments have become so heated, that in January Barry Dorrans, a member of the WCF for .NET Core project, had to suspend all debate on the issue. We’ll try to unpack the debate, starting with the proponents.
Jan Johansson started a thread in 2016 asking people to list the features of WCF they use and want in .NET Core. As you can see from the consolidated list below, developers actually are taking advantage of many of WCF’s advanced features.
- A transactional queuing programming model
- Behaviors
- Bindings
- Context
- Contract based programming model
- Contract Behaviors
- Contract-based Coding Model
- Discovery
- Durable services
- Endpoint Behaviors
- Extensibility
- Extensibility model
- Headers
- Instance Management
- Interception-based Pipeline
- Message inspectors for instrumentation and access to SOAP headers
- Metadata Exchange
- MEX endpoint and MEX framework
- Named pipes
- Net.Pipe Binding
- Net.TCP Binding
- NetTcp (half-duplex)
- Operation Behaviors
- OperationContext
- Ordered Messages
- Queued Services
- Reliability
- Security
- Self-hosting using ServiceModel.ServiceHost
- Service Behaviors
- System.Transactions
- Throttling
Note that these are all WS-* or WCF specific features. For example, when they say “reliability” they actually mean WCF’s support for the WS-ReliableMessaging standard.
WCF Today
Many people have the impression that WCF is only used for legacy code. But actually, there are many companies that are actively investing in WCF based software. Here are some examples,
Oliver C. Lanz writes,
A specific business case I have, is that we have an investment management system with several million lines of code for which the server side is currently hosted on premise on MS Windows Servers at about 200 insurance companies and banks worldwide. We have about 100+ service types, of which in the largest installations there are 700 to 800 concurrent service instances at play. Our product is driving important parts of the core businesses.
The expenditure on IT is huge for our customers. This is also the place where we are looking to make major improvements over the coming years. A part of that is to find alternative hosting environments. A favorable choice would be Windows Nano, or the .NET Core on Linux. For being able to adopt .NET Core (or Windows Nano) we are missing the WCF server side.
Since we are very happy with WCF as a programming model, there is no incentive to rewrite our applications other than that WCF server side is unavailable in our future hosting environments. The list of particular features that we use is long, but to start .NET Core adoption, these are the important ones:
[...]
Yes. We would continue building WCF services also on .NET Core.
Niplar writes,
Over the past six years, the solutions I have been involved in (in the .net area), we have always relied on WCF to handle the service layer (behind the firewall & intranet) of the solutions. Nothing out there gives us the same level of flexibility and support for different communication channels like WCF does.
I have been holding back moving to .net core for production environments for the lack of WCF support specifically. Not supporting WCF server on .net core creates a need to rewrite a lot of infrastructure code; which will probably end up mimicking the WCF programming model anyway.
The biggest solution I worked on was used by over 300 health care institutes, rewriting the server layers and functionalities is a big investment to say the least, not to mention high risk.
In fact, in that same solution we were looking at a way to unify the programming model between server and embedded devices (Linux) for new products. Supporting WCF services on .net core (not just clients) could've been a really big help and cost saver as there would be no need to have two development teams; but instead have a larger singularly focused team.
Fernando Ferreira Diniz de Moraes writes,
My scenario is pretty much similar to [Oliver C. Lanz], except that my business case is Point of Sales. Like him, we have our application deployed to numerous stores worldwide. We are also looking for alternate ways of hosting our application in order to reduce infrastructure costs. As [Jan Johansson] said, agnostic deploying WCF services would be great and would give us a huge flexibility.
WCF plays a major role in our application: it is based on a plug-in architecture where which plug-in is basically a WCF Service, so communication between plug-ins are actually WCF calls. Changing this aspect would mean have to rewrite/rethink a lot of infrastructure code.
In our case, self-hosting using instances of ServiceHost and the Contract based programming model is crucial. Our plan is not only migrate existing services, but also create new services.
Websitewill adds,
I have done a lot of projects that leverage the many aspects of WCF. Most of what I leverage in WCF (pretty much everything) is currently missing from .NET Core. Much of this missing capability would require various 3rd party libraries (or extensive custom code) to fill in the gaps and it's just not worth that level of investment when WCF already works, brilliantly. WCF is, above all else, a great productivity enhancer for my teams.
The biggest piece missing for me, currently, is the extensibility model that WCF provides.
Most of my projects leverage the capability to completely decouple cross-cutting concerns from my other components (which are light-weight WCF services). WCF provides a fantastic mechanism for doing this without the need of 3rd party Aspect Oriented Programming libraries. Developers don't know (or even care) and can focus solely on delivering the business value of the features they are concerned with.
We also leverage many other aspects of WCF such as:
named pipes, transactional queuing programming model, enforcement (not encouragement) of interface-based design, security capabilities, process isolation, error masking, I could go on.
Without WCF (or equivalent capabilities) in .NET core, I would lose way too much productivity and cannot justify the switch. It would be great to have all of these powerful capabilities in a platform that can be used in any type of environment. Imagine, productivity of WCF plus cost-savings of cheaper hosting environments. That is a tremendous business value.
Pavel Dvorak goes so far as to claim WCF is why they are using .NET,
Absolutely support the idea of having server-side "WCF" in .NET Core. We just finished another fairly large almost entirely server-side processing system ("the user experience is that there isn't any") Initially we went through a lot of pressure not to use Microsoft/ .NET mostly due to the relative advantages of other (open source) stacks when doing "microservices-based" solutions just as traditional web services. But the benefits of WCF, such as the enforced contract-based programming model, the availability specifically of Named Pipes binding, flexibility of end points and bindings (yes, declarative approach/configurability can be an advantage), security, extensibility for utilities such as logging, were really key when the system grew and required scale and performance as well as maintainability and having really very little plumbing code. The obvious next step is proper containerization (we have been explicitly waiting for Nano Server) and in general being able to port the system to the next generation of runtime platform(s) without losing any of the current qualities.
You can see more examples of developers endorsing the use of WCF in development in the thread titled Server side WCF #1200.
Why was WCF hosting not originally included?
As always, one answer is that it is simply an issue of manpower. There are only so many developers and they can’t get to everything. Ron Cain of Microsoft explains,
For the record, the "missing" features of the full .NET framework's WCF were not deliberately excluded from .NET Core WCF. Rather, the initial goal was to support all the existing Windows Store WCF API's on NET Core (which are all client-facing) before tackling other mission-critical features of WCF. It might help to know that much of the work of porting WCF features involves re-implementing OS-level libraries that WCF depends on (e.g. socket layer, cryptography, etc.) to allow it to work cross-platform. So, lighting up WCF features usually involves replacing OS-level libraries for each platform first. It might help to think that the "W" in WCF is no longer a given in NET Core.
This is one reason why it is so valuable to hear from you which features matter most, because it lets us investigate more deeply questions like "Which libraries are required to do feature X on Linux? OS X?, etc.". Please keep those suggestions and specific scenarios coming!
Why not just use REST?
The most common criticism about WCF is that its redundant with ASP.NET WebAPI and REST. To that Jörg Lang responds,
Messaging between services is important and when doing enterprise backends, REST is not going to work. It lacks too many things like others have mentioned. So certainly Transactions, Queues Messaging, Named Pipes, Extensibility should be supported by .Net Core
So, .NET Core has to provide those things in one way or another. If it is called WCF I don't care. Maybe it would be the opportunity to fix some of the weaknesses of WCF like the overly complicated configuration story and replace it with a convention based approach.
Also you should/must support other Messaging frameworks beside MSMQ or Service Bus. In general support of AMQP would be nice, including the various messaging patterns.
Agustin M Rodriguez echoes that thought,
WCF has been instrumental in delivering quality services that scale and deliver reliability by leveraging transactions across the wire (System.Transactions). Without support of WCF on .NET Core we would lose many of the "Free" benefits we get thru the extensive WCF interceptor chain, including logging, behaviors, and context flow.
For Scott Hurlbert the allure of WCF is that it doesn’t have to use HTTP at all,
These two lines of code are just a sample but what's interesting here is that the Binding and the Endpoint schema are components:
Binding binding = new BasicHttpBinding();
IMySimplestService proxy = ChannelFactory<IMySimplestService>.CreateChannel(
    binding, new EndpointAddress( "" ) );
proxy.Echo("Hello");
This is interesting because it means that by swapping those out WCF can communicate over Http/https, UDP, TCP-IP, MSMQ, NetPipes and a bunch of other protocols and endpoint schemes.
I understand that in the recent web many have forgotten about anything other than HTTP, but if your app grows you may find yourself wishing you had the ability to use the exact same code, but pointing to a queue'd endpoint. Sadly, most other frameworks will leave you re-implementing your system for queueing rather than just repointing it.
Not to mention transactions (also fallen out of favor - UNTIL they are essential) and various forms of authentication and encryption and on and on.
Catalin Pop is likewise focused on how WCF is an abstraction over various protocols,
Yes, there are a gazillion frameworks, large and small, more “enterprisey” or more 1337 h4x0r oriented, with different options and patterns, some rigid and some flexible ... and ... this is exactly the problem ... there's no base line.
WCF should help to get rid of all these disparate options and provide an unified and abstracted communication framework that is a base line for .Net. development.
You want binary protocolX instead of http, sure, you plug that in. You want Oauth2 instead of windows auth, sure, configure that one, you want to use NServiceBus plug in an NServiceBus binding. You want to somehow transfer contexts like a Transaction scope over the wire, you can plug that in too. The application programming model stays the same.
RPC in itself isn't a complicated thing from an .Net application point of view, you call a method or receive a call. What makes it complicated is the multitude of ways and frameworks, with security, format, feature and other options that you can implement it in.
This is where WCF should come in, and that's the point of WCF.
“WCF is too complicated”
A common criticism of WCF is that it is too complicated. And to be fair, it often is implemented in a very complicated fashion. But as Scott Hurlbert points out, it doesn’t have to be that way. Below is the code necessary for a complete WCF client and server.
This is an ENTIRE Wcf service:
[ServiceBehavior]
public class MySimplestService : IMySimplestService
{
    public string Echo( string pInput )
    {
        return $"I heard you say: {pInput}";
    }
}
Here is the interface:
[ServiceContract]
public interface IMySimplestService
{
    [OperationContract]
    string Echo( string pInput );
}
Well, maybe hosting the service is hard. Nope. Here is the service hosted and called. This is the code in it's entirety:
var myServ = InProcFactory.CreateInstance<MySimplestService, IMySimplestService>();
Console.WriteLine( myServ.Echo("Hello World") );
There's just nothing that is enterprise quality and as simple.
I think WCF has gotten a strangely bad rap because people implemented it in unbelievably complicated ways. Believe it or not, for .net on both sides of the wire, this is all that's needed. With the ServiceModelEx framework from iDesign you can even build a proxy on the fly:
Binding binding = new BasicHttpBinding();
IMySimplestService proxy = ChannelFactory<IMySimplestService>.CreateChannel(
    binding, new EndpointAddress( "" ) );
proxy.Echo("Hello");
Seriously, that's four lines of code (out of a total 17 with the structural) to create an enterprise service, it's proxy and make the service call. That's not legacy, that's productivity.
Note that even this is actually more than you need. If you are using a proxy generator, the service class can act as its own service contract without the need for a separate interface.
The opportunity cost argument against WCF
Blake Niemyjski sums up the real argument against WCF: opportunity cost and bad experiences in the past,
I don't think server side WCF is worth the investment when things like SignalR and tooling should take precedence. I've always had such a hard time with WCF client Config that I despise it :( please correct me if I'm wrong... But why wouldn't web API be the way to go... It's lightweight and gives you everything you'd need to build apps that can work from any platform with a simple http call...
Leave security, transactions, pub/sub up to the implementer...
Several other developers echo the that “WCF and VB.NET” are not worth investing in. (Equating WCF with VB.NET seems to be a common theme in these debates.) However, Catalin Pop takes issue with this stance,
[Leave security, transactions, pub/sub up to the implementer...] is the worst piece of advice anyone can give. Security should never be left to the implementer, it's the source of most atrocious security bugs ever .... There are really few security experts or developers capable of implementing security details properly, this is a job for less than 1% of the programmers out there.
Where does Server-Side WCF stand now?
Back in June of last year Jeffrey T. Fritz wrote,
There very much is a WCF team, and we are working on features to ensure compatibility with Windows 10, Windows Server, and Windows Containers.
We had a very successful presentation at Build where we showed the first of our Windows Containers. Work is continuing in that space, and we plan to continue in these investments.
WCF for .NET Core is very much on our radar, and we are working with the ASP.NET Core to prioritize this work. We will share more details when they are available to broadcast publicly.
While some of the low-level libraries needed by WCF have been ported to .NET Core and/or added to .NET Standard, no real progress has been made on WCF hosting itself. In March a separate thread titled Call for Decision: Server-Side WCF was created and tagged with the .NET 3.0 milestone.
More recently Immo Landwerth wrote, “We won’t bring [...] WCF hosting in the .NET Core 3.0 time frame but depending on customer feedback we’ll do it later.”
The source code for WCF is available on GitHub under the MIT open source license. So in theory a community sponsored version of WCF for .NET Core is feasible.
Community comments

What about workflow?
by Donald Kemper

WCF is used to host WF, and I don't believe WF was ported to Core either.
Microsoft Plays Linux Games at Work
As it turned out, he had unpacked the tarball (I had to explain what a tarball was) on the CD by double-clicking its package icon in gmc and then double-clicking the install icon that came up. He had absolutely no idea where the game had been installed, and didn't know how to search for it..
I had him pull up a terminal window and run `sh install` (since he had a 4.5 GB drive containing only a fresh install of RH6, he wasn't too concerned with finding his previous installation just yet), and as the graphical install smoothly copied the files into their proper place, we chatted amiably.
Me: "So what kind of system are you using for this?"
Him: "It's a... [pause to read label on the case] HP Vectra."
Me: "Umm, what processor does it use?"
Him: "It's a Pentium III, uh... 450 MHz?"
Me: "Yes, PIIIs do come in 450 MHz."
Eventually, the installation finished. I encouraged him to grab the patch from our website, and he thanked me and hung up.
Ordinarily, I am very respectful to newbies. I don't even laugh at them behind their backs--especially if they have been looking through man pages and reference books trying to figure things out. This time I almost peed my pants.
Note from RM: Yes, we verified the story. All parties are real.
What I want to know is.... (Score:2)
i can see it... (Score:2)
--Siva
Keyboard not found.
Figures. (Score:2)
I'm going to whack anyone who's asking why they're using clueless newbies. Microsoft is doing it for two reasons; one, a clueless newbie is the typical Microsoft customer. Two, a clueless newbie will easily get frustrated and say that Linux sucks, giving Microsoft more FUD ammunition. Both of those points should be obvious.
Microsoft is loading for bear with this. They're going to put all these total idiots on overpowered machines. They're going to have them use Linux for a few weeks. Then Windows 2000 for a few weeks. Release the 'study' as 'fact' and genuine 'scientific research' in their battle against all unixes.
Even Linus says Linux isn't ready or meant for the desktop. *sigh* Oh well. More Microsoft FUD on the way.. excuse me while I put on my PR Flak Jacket.
-RISCy Business | Rabid unix guy, networking guru
Former microserf (Score:5)
--Shoeboy
Hmm (Score:2)
BTW
And you know what ? They would be right.
Linux is NOT ready for the market MS owns now and won't be for some time ( and don't forget,it is unlikely that MS will be waiting for them to catch up.)
Bigger deal than we realize (Score:5)
While all the gnome, redhat, etc people involved can pat themselves on the back, this does point out some things that are really small that *NEED* to be done... off the top of my head I can think of:
1. Autorun.
2. a dummy-fied RPM/DEB/any other kind of package installer/viewer/uninstaller that can be used cross-distribution and cross-version with similar functionality to the dreaded "add/remove programs" control panel
3. less jargon.
We're getting there. While things may be in a state now where linux+gnome/kde+icewm/enlightenemnt/* may be "mom friendly". It's certainly not friendly to someone who's going to be installing hundreds of programs cluelessly every day -- like your average computer using teenager.
-Chris
Installing... (Score:2)
This isn't surprising at all. (Score:4)
Developers? No way! Once they got a hold of Linux they'd never go back to Windows.
Marketing types? Would you even try to sell Windows after using Linux?
Sales? See marketing.
FUD slingers? Nope. They couldn't even do their job anymore.
So who else do you hire other than someone expendable? Someone with absolutely no knowledge whatsoever? They'll probably poke his eyes out and sew his mouth shut after they're done with him.
Re:What are they doing playing games anyway? (Score:2)
Games are installed by users. Only.
-Chris
Re:Figures. (Score:4)
Id say: Leave a clueless user (tm) who has no idea about this whole computing thingie and whos not even willing to read any sort of documentation alone with a blank harddisk and a W98 install CD and guess what he will achieve? Yeah nothing. Right.
Windows has nothing to do with intuition, its only got to do with being used to it since years. Anyone whos grown up on Linux, will consider this an "intuitive" install:
$ tar zxvf tarball.tar.gz
$ cd tarball
$ ./configure
$ make
$ su -c "make install"
Get the point?
-martin
The last Linux Frontier- Games (Score:2)
Perhaps not all that it seems (Score:4)
He may have been clueless or he may have just been acting that way.
Many people would just put the cd in the drive and *expect* an auto-install to start. If nothing happens, then they'll double click on some likely looking filenames in gmc/whatever.
Game installation now is a complete no-brainer compared to the bad old days when you had to run install programs from dos, make custom boot disks, maybe find a working video driver, yadda, yadda.
Win9[58] as a gaming environment is pretty good - most of the time you don't have to worry about stuff.
As for the 'newbie' not knowing what his pc is: chances are he was given a blank pc and a stack of CDs and told to install them and see how easy it is and if the platform is sensible for a *real* newbie, i.e. the 'foot pedal, cup-holder and monitor-stand' brigade.
dave
Re:Figures. (Score:4)
But take a deep breath. Microsoft is in the operating system business. I'm sure they've got legions of people doing "usablity studies" on MacOS 9, BeOS, OS/2 5, Solaris 7 and so on. Eventually reports get written, MS finds a few new features to steal, some contractors get easy money and everyone is happy.
Also, don't forget these guys are paranoid as hell. Why should they believe either Linus or the trade press when they say "Linux is not ready for the desktop", when they can afford their own usablity lab to make that determination for them.
Re:Bigger deal than we realize (Score:2)
--
Jeremy Katz
Re:Installing... (Score:2)
People should know and never forget that EVERY SINGLE TIME security and/or stability collide with simplicity and slickness Microsoft opts for the latter. Yes, that means they get "cool stuff" like Autorun. They also get DLL hell and corrupted Registry information.
If you want to compare Linux's ease of use with Microsoft Windows' ease of use, you would have to have a Linux setup in which every user was UID 0, because that's the fair comparison: the guy in front of the terminal can do _anything_. Many of us find that an unacceptable practice.
Love to see the results (Score:2)
It will be, as you said, for a *nix vs. W2k study and there would be nothing better than releasing the rebuttal before the conjured up findings. So...
All you Linux techies get your facts straight on each of these calls, document them and be prepared for another X vs. M$ fiasco.
Relevance? (Score:2)
Furthermore, until you know what group the Linux newbie was from, you'll have a hard time finding the meaning of this (non-)event. And even if you did know, all you could deduce is that some pointy haired manager somewhere got a pawn of his to write him a "report" about Linux.
In short, if you think this is news, you need your head examined. MS has lots of Linux users on campus, lots of independent groups with their own motivations and interests, lots of competitors and lots of clueless contractors.
Move along, now.
Re:Bigger deal than we realize (Score:3)
The one thing Microsoft has done right in that regard is the Office2K installer; it sets everything up for autorepairs (even cooler than the Mandrake self-update, in some regards). We need an installer that's an imitation of it (but better!).
Off-topic, I know, but only a little bit.
Re:Bigger deal than we realize (Score:3)
Keep 'em. Just associate a new looonngger version (perhaps to give a bit of context) with grammatical (not cryptic) switches.
Once people learn the long version they can still pick up the short one later.
The first one to create a very easy Linux will cash in, just watch.
Re:Usability study my foot... (Score:2)
Re: former Microserf (Score:2)
Re:Installing... (Score:2)
You can think about it this way... the install functions that actually do the install are not really part of Windows proper - but are usually components of a third party program called Install Shield. By the way, there is a version of Install Shield that will run on any operating system that works with JARs and has Java installed; it looks and works great in Linux!
NEXT QUESTION?
Re:What are they doing playing games anyway? (Score:2)
Yes, they are. Games on Windows are in VERY good shape. Two clicks from CD/Net to action. Most are quite stable. I'm just starting Linux gaming, but already I know it is WAAAY behind. When terms like "Good luck, not for the faint of heart" comes up in the help file, well, it's time to buy more caffeine.
Thus, an independent study shows, in 1999 Win98 is a far superior gaming platform. The 1999 part will most likely disappear after 2 years or so. Cynical and Microsoft = Effect and Cause.
Re: former Microserf (Score:2)
March of 99 was when I left MS. I have no idea what the current situation wrt linux is.
--Shoeboy
Re:Bigger deal than we realize (Score:4)
there is this trend to hide the difference between data and programs, but it's absolutely WRONG. all it achieves is to blur the difference in such a way that you can no longer use your computer SAFELY without actively thinking about safety every five minutes. installing or running a program is supposed to take an actual (even if fairly minimal) effort, if only because it can do "very bad things" (tm) to your computer.
Re:Installing... (Score:3)
Wait a minute. Have you ever noticed how elegant the package managers are in Linux distributions? You can update a lot of software by just giving one simple command. And if you don't like typing, do it with your mouse. There's no need to answer a lot of questions, shut down other programs or reboot.
I use NT at work and Linux at home. If I update Netscape on both, my RedHat box does it in no time with a simple command: "rpm -Uvh nets*.rpm". On NT I have to shut down other software, double-click on the icon, answer a few sets of questions, wait much longer, reboot the computer and pray that it boots and doesn't give me a BSOD.
I used to pretty much trust NT until a simple installation of a sound card and its driver resulted in constant boot-time BSODs that required a reinstall from scratch.
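For reference, the rest of the rpm repertoire is similarly terse (package name and version made up for illustration):
$ rpm -Uvh netscape-4.7-1.i386.rpm   # upgrade in place, or install if absent
$ rpm -qa | grep netscape            # list what's installed
$ rpm -ql netscape                   # see exactly which files it owns
$ rpm -e netscape                    # remove it again -- still no reboot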
Hello People!! You don't get it! (Score:5)
According to the fellow in question, they were performing a "useability" study. That means just that: useability. How easy is Linux for people who are not already accustomed to it to use?
So, why are they having people do studies on Linux? It's competition, and anyone who wants to compete will take a gander at the competition.
Why are they using "newbies"? Think about this. What good would it be to do a "useability" study on WordPerfect 3.1 using people who have already memorized all the fkey combos, or who know to look for fkey combos? NONE! Why? These people have already adjusted to the environment, and so any reports they have on how "useable" that environment is are SKEWED. People who don't know to read the manual, and don't know much about Linux (or even computers, for that matter) are PERFECT for a true "usability" study. They allow a clearer look at how obvious and easy it is to do what you want to do. The question of useability attempts to answer the question: what do I have to learn in order to use this? Do you have to learn to install software in at least 5-6 steps (gunzip, untar, cd, configure, make, make install)?
In this case, the answer is a resounding NO. Linux is complicated. Many if not most applications are distributed primarily in source-code format, which requires compiling, which requires installation of all the development libraries and toolkits, which requires keeping up with the most recent versions of these same libraries, which involves visiting ftp sites, which involves knowing about ftp-commands... and if not that, it requires discovery of rpm and its man page, which requires discovery of man pages (not exactly the first thing that comes to mind when presented with a command prompt for most people), or it requires the discovery of gnorpm (not advertised as much as it should be), which requires knowing why you need to be root for some things, but don't want to be for most things. Even just typing "help" provides you with a bewildering list of commands and a fairly cryptic set of symbols describing their use - BUT NOT WHAT THEY DO! (please, is anyone so deluded as to argue that any OS whose built-in help offers "trap [arg] [signal_spec]" is easy to use?)
Suffice to say, to use Linux pretty much at all, you need to know A LOT about how it works, how computers work, how unixes work - some mixture thereof - to get ANYWHERE.
And why would they want to find out how "useable" Linux is from someone who already knows all about how to use and configure it? They don't. Because that information would be WRONG. At least, it would be in all areas that they care about.
Yes, it's funny. No, I don't know why. But it's newbies because that's the only kind of "useable" that counts for the mass market. "Useable" means "really fricking obvious" in the mass market. What's obvious to you and me is quite often nowhere near obvious to anyone else. Microsoft may be all about FUD, but that's not what it's doing here...at least, not yet.
Origin of GREP (Score:2)
[from the qed/ed editor idiom g/re/p, where re stands for a regular expression, to
Globally search for the Regular Expression and Print the lines containing matches to
it, via Unix grep(1)]
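For the curious, the idiom still works; a minimal demonstration with a made-up file (ed reads its commands from standard input, and -s suppresses the byte counts):
$ printf 'alpha\nbeta\nalphabet\n' > demo.txt
$ ed -s demo.txt <<'EOF'
g/alpha/p
q
EOF
alpha
alphabet
$ grep alpha demo.txt    # the standalone descendant of g/re/p
alpha
alphabet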
Don't be too hard on the guy (Score:2)
Linux users, of course, don't have it so easily. You have to deal with downloading libraries, extracting stuff from tarballs, fooling with gcc, and so on. Reading documentation is a necessary part of the Linux culture, and Linux users have accepted the complications as part of the package.
Unfortunately, we live in a point-and-click, plug-and-play world, and most people have different expectations of what computers should be like. Most computer users are not programmers, and they don't want to learn how to do anything complicated. The doctors and lawyers and teachers and car mechanics and whoever else demand SIMPLICITY, so they can continue to go about their daily lives.
You can't even expect programmers to part with this "I don't wanna fool with it" mentality. I got my first exposure to UNIX in a CS course, and I got my first Linux CD from a CS professor. However, it's possible to go through college, major in CS, and graduate without having touched UNIX at all. I've taken courses at three colleges, and only one of them used UNIX in CS courses.
I guess my point is that the guy calling tech support represents most of the computer using public and that Windows 95 and the Mac OS set the standard for ease of use. People are beginning to demand that from Linux, and if it can't deliver, it's the fault of the OS, and not the user.
Take care,
Steve
DISCLAIMER: I am NOT a supporter of Microsoft and am writing this from my Linux workstation.
Have you Beta Tested Win2k? (Score:3)
Three whole bloody hours.
I don't care what you say, even with all the dicking around it does not take that long to install any other OS, Linux included. The fact that W2k asks about one question and then goes "Please wait" is good, but jeez, I reckon they could warn you that you may as well go and watch "The Matrix" while you wait.
Easy installation is one thing, but trying to detect over a thousand devices over a two hour period is another.
Read the story again. (Score:3)
Re:exactly (Score:2)
Viable Desktop Environment... (Score:4)
So, Microsoft's been touted for years for hiring smart cookies. Even with the degradation of its standards and practices, and the complacency of being the largest corporation with an enviable bottom line, it's not easy to walk in and get a Microsoft job.
I still expect that the guy who called up wasn't an idiot. Sure, hadn't yet looked at the machine that Bill bought him, sure, hadn't used Linux before very much. Isn't that the perfect useability test case ? And given that... how did Linux perform? The out-of-box experience seems to have failed.
I was on the team when Windows 95 was still called Windows 93, before it even grew the codename Chicago. At that time, the general manager of the desktop Windows Business Unit, Brad Silverberg, coined a mantra of the ideal in useability. He said that his [nontechnical] mother should be able to use Windows. Personally, I think we failed at reaching that ideal, but we made the right evolutionary step from Windows 3.1.
Now, how well can your mom use Linux?
What can we learn from this, boys and girls? (Score:2)
On another note: regarding the kick-arse computer this clueless user had, it's pitiful, but another painful truth is that the newbies, for some reason, always get the best boxen. I'm a developer for Intel (yes, spawn of Satan), and I work on a Pentium Pro 200. Meanwhile, the marketing guy next to me who has to call tech support and pay someone to come out and install his shareware for him, has a PIII-550 with gobs of RAM and a monitor the size of my cubicle wall. Go figure. And you'd think Intel wouldn't have any problem supplying the cool chips to its employees...heh, yeah, right. I'm sure Microsoft is much the same way...
Re:Perhaps not all that it seems (Score:2)
I expect once features like this become more common in Linux, game vendors will want to take advantage of them.
using clueless newbies for usability is correct! (Score:5)
In this one example they seem to be looking at games. A game that can't be installed easily by a 10-year-old with 6 months' experience pointing and clicking on things is not a market threat to them. That's all they care about. The fact that it's "obvious" to you or me or anyone else how to install it does not matter. That's not the target market.
Put it this way: Have you ever been asked to do QA on or write docs for code that you've written? For real end users I mean here, not man pages or READMEs or comments in the Makefile. I have and I've seen the results of these attempts many times. IT DOES NOT WORK. You are too close to it. You don't know to explain the parts that the end user will find confusing because it's not confusing to you. You don't know to test a part of the program in a way you didn't think of because... well, you didn't think of it.
Same goes for usability. You bring in the intelligent but ignorant. If they can't make it work it doesn't work - because they are the customers. After your ignorant pawn has done this for a while they lose their usefulness because they also know it too well and are too close to it. And LO! a tech support rep is created! Been there, done that. Eventually the smart ones understand it too well and become terrible tech support reps because they can't explain it to the end user in tiny words that they understand.
MICROS~1, and any other company that actually delivers products to "normal people", understands this early on or they go out of business. This is often a blind spot for OSS advocates but MICROS~1 has always understood it quite well. Technology, Quality, Stability - these may be their blind spot but this isn't.
garyr
Re:Hello People!! _You_ don't get it! (Score:3)
This is actually an idea that I've stolen from Jurassic Park, but I think that there is a real (and bad!) movement in the US to make everything brain-dead. We try to minimize the amount of knowledge that you need to start doing something, at the expense of how well or fast it can be done.
Oh well, I'm probably the only one who thinks this.
Prolly get moderated to flamebait, but oh well... (Score:2)
If there was one thing I could tell MS... (Score:5)
Use tcp_wrappers. The security benefits of tcp_wrappers have been proven by Wietse Venema; the rest comes only from my own meandering experience.
Run.
If you're going to be paranoid and deny telnet and ftp in favor of SSH, don't send your mail passwords plaintext with POP3.
Maybe Linux will take over the desktop, maybe it won't. Maybe InstallShield for tarballs will be created; maybe it won't. Either way, your Mindcraft scores are half chance -- and so are everyone else's.
Be kind to your root partition. You'll miss it when it's gone.
If you don't know which direction your favorite window manager will go, don't worry. A lot of the greatest programmers I know had no idea what they were doing at version 2.2
Each day, activate a compiler flag that warns you.
Do not read Slate Magazine -- It will only make you feel ugly.
Accept certain truths as inevitable: USB support is dodgy, "stable" kernels will crash, and you too will lose your CHANGELOG -- at which point you will fantasize that when you were at version 2.2.x, USB suited your purposes, kernels never crashed, and people read their source code.
Read your source code. Source code is a form of nostalgia... it lets you pick it up, parse through the comments, and audit it to make better code in the future.
But trust me on the tcp_wrappers.
/* thanks to Baz Luhrmann */
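And for the uninitiated, the classic tcp_wrappers setup really is just two files; the daemon names and addresses below are illustrative, not a recipe for your site:
/etc/hosts.deny (deny everything by default):
  ALL: ALL
/etc/hosts.allow (then open only what you need):
  in.telnetd, in.ftpd: 192.168.1.
  sshd: .trusted.example.com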
Re:Bigger deal than we realize (Score:5)
DON'T YOU DARE DUMB DOWN THE TERMINOLOGY!
Those are the words; new users have to learn them. Whenever you start something new, there's a learning curve. New terminology is part of it. UNIX is designed according to a fundamentally different philosophy. If you want the M$-suck way, then stick with M$. (I really hate the concept behind fvwm95. The best way I've heard it described is as "methadone for windows users".)
Hell, even if you replaced "tarball" with "thingy" and "keyboard" with "doodad" you'd still have people saying "Ehhhh! That's too hard! `Thingy?' Why don't you just speak English!". ("tarball" is a perfect word. It's a ball of files and it ends in ".tar" what's so wrong with that?)
I've said it many times before (and people don't like it), but I don't think "Linux For The Masses" is a good idea. Does the person who thinks "me too!" is being insightful really need (or should even have) a UNIX box sitting in their home? One of the major things that makes Linux great is the community, and the fact that the community as a whole doesn't just whine and complain, but is actually useful. Why is this? There's an entrance fee to be paid to get into the community, and you pay it by critically thinking.
The Unwashed would do nothing but drown out the original community members with, "This is too hard!" (Don't believe me? Right now there's a Visa radio commercial running talking about online shopping that says, "Clicking is hard work." (And no, they don't say it in jest.)) These are the people I'm talking about. Give them a Dreamcast-esque device with email, a web browser, and a word processor; that's all they need and really want. (I'm distinguishing between the "ignorant" and those that don't even try. It's the second group I don't think Linux should be marketed to.)
Personally I think the people that spout that are either...
People ask now, "Yeah but can my mom use it?", but a better question would be, "Yeah but does my mom need this?" (You don't see particle accelerators being sold at Sears now do you?)
</rant>
This is probably going to get moderated down as "flamebait" because it's a rant (and I don't deny that there's a viable reason for moderating it down), but Mr. Moderator, before you hit "Moderate" just think about what I'm saying. Which is basically this, "Linux is more OS than most people can handle. Sure they need something, just not this."
This brings up some nasty issues (Score:4)
The newbie doesn't understand mounting. That's step one. You can't even *read* the README on the CD until you do that. When you explain mounting, they usually say something like, "that's pretty stupid."
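For the record, the missing incantation is only a few lines; the mount point varies by distribution, /mnt/cdrom is just a common default:
$ mount /dev/cdrom /mnt/cdrom    # attach the disc (often needs root)
$ less /mnt/cdrom/README
$ umount /mnt/cdrom              # detach before ejecting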
But that's not so much of a problem. The biggest problem, as I see it, is the variable filesystem structure among distributions. There's tons of work being done on useability, etc, but it is pretty much in the context of one distribution at a time (SuSE installs KDE in one place, other distributions put it somewhere else).
What is needed is an agreement on a filesystem structure, first and foremost. There was work being done on that, but where is it now??
How come I haven't heard a thing about it in *months*? I've heard so much news about new releases of XX distrib, XX desktop, etc. LSB? nothing.
I think that the importance of GUI install work should be downgraded to make room for this. When a developer can release a package and not have to supply different binaries for different distributions, we'll all be happier.
Ok, for small packages,
The worst thing is that this is an aspect of open-source that "low-ranking" people like me and many others cannot really make an impact in. This has to be done by Red Hat, Caldera, SuSE, Debian, Corel, among others.
from another former microserf (Score:3)
This is a little bit off topic, but here goes...
For just everyday use (and not this usability study) - Microsoft doesn't really care what OS their employees use as long as they can perform their job functions. You can get quite a bit of functionality on their network with a Linux machine, especially with Samba.
The only thorn in its side that I heard of was that Linux didn't have an equivalent to the MS Proxy Client. I'm not sure if that has changed these days (someone chime in here?). So you couldn't really access the internet, except when web browsing.
But you could still use Netscape of course, and anything else provided it was not illegal software. Microsoft certainly endorses their own products, but if an employee feels more comfortable using Linux, and is still productive, then Microsoft doesn't care.
Just don't try calling their support desk for help
Re:Bigger deal than we realize (Score:5)
I don't CARE what the difference between 10W30 and 10W40 motor oil is. I don't care what my "CV joint" is. I don't have to know the difference between shocks and struts to drive my car. I never want to have to do more than put gas and windshield wiper fluid in my car in order to drive it. When I use my car, I want to get in, turn the key and go somewhere. Yes, I *do* have to know about the steering wheel, turn signal, gas and brake pedals, but I don't have to know anything technical about the vehicle to use it properly.
That's how computers *HAVE* to be. Slashdotters and geeks in general have to get over this elitist view that newbies (and the general public) must learn how to do X, Y, and Z just to get their brand spanking new Linux install to a usable level and then do A, B, and C to get Whizz-Bang New Game(tm) working. Linux CAN fulfil this role, much better than Windows (or BeOS at this point). From a technical point of view, it's better than MacOS, so us geeks like it, but that has to be the target for a UI. Not just a GUI... the ENTIRE USER INTERFACE. Macs don't have ejects on the floppy drives for a purpose; it simplifies things. Yeah, us PC geeks get pissed off when we can't get our disk out when we want to, but that's life.
In order to win in the Real World, you have to cater to the masses -- NOT MAKE THE MASSES CATER TO YOU. Granted, many companies have made the public bend over backwards in the past (utilities come to mind real fast), but if it isn't easy to use, do what users need, or doesn't work, then they will move on to something else.
--
Here's the back-on-topic part:
Linux MAY be more than Mom needs right now, but she certainly doesn't need $400 worth of Microsoft OS and programs just to email Junior, surf the web, and type up her resume. As in the Dvorak article yesterday, when PCs get to be sub-$300 items, the OS and basic set of utilities and programs better clock in at free or darn close or it will completely screw the vendor. If we want Linux to be on there instead of WindowsCE, we better get a UI For Dummies on there. And fast.
-Chris
Re:Have you Beta Tested Win2k? (Score:2)
Well, it can take longer if you do an FTP install. However, it blows people away when you tell them that you can install 5.5GB of software from a single floppy disk.
Linux at Microsoft (Score:2)
Re:Origin of GREP - that guy got it right (Score:2)
Correct. The other answers aren't correct (especially not the one that mentioned "Gnu", given that grep existed long before the GNU project ever existed...).
Re:Origin of GREP - that guy got it right (Score:2)
Well, the other guy who said Global Regular Expression Print was right as well.
Usability study of what? (Score:2)
They get some guys who pretend to be clueless, make some calls and some mail list posts and see what kind of response they get. They can then tally up all of the RTFM responses, the support engineers who "almost peed my pants" with laughter (and then promptly posted to a Linux advocacy board), and compare those with the quality responses.
Unlike some hypothetical desktop-battle, this information can be effectively used by Microsoft in FUD tactics. "Our informal studies show that if you aren't proficient in Unix, the Linux tech support companies will just ignore you or laugh at you." This can go a long way in scaring managers that are (rightfully) worried about the skill gap of their staff when it comes to Linux.
Re:Prolly get moderated to flamebait, but oh well. (Score:2)
It seems to me (and many others) that this is a wakeup call regarding how easy Linux (or any UNIX, for that matter) really is to a new user.
I want to see Linux flourish as a desktop environment. In many ways it has surpassed Windows. However, the battle isn't over yet. There are many issues that still have to be solved. The problem is that us "advanced users" don't think about them enough.
I have Beta Tested Win2k. (Score:3)
Or I tried to.
Easy to install? Not really. RC1 took two hour-long installs to actually get to the point where it would boot. When it did boot, the high-end video card in the machine (An Elsa GLoria) didn't work in anything other than 640x480 in 16 vibrant colors. There were no drivers for the card on Elsa's site, but I managed to convince the NT 4 drivers to work, after a fashion.
Easy to use? I guess, if you don't mind waiting. The default install came with all the needless crap turned on -- menus that fade into existence, all the ugly extras in Explorer that seemed to chew up CPU time rather than offer any useful features, even a cute little shadow under the mouse pointer that probably took up its fair share of processor cycles. Even after turning all the cruft off, the machine still ran like a three-legged horse. It felt _much_ slower than NT 4, which really is saying something. The video was unbearably slow, though using NT 4 drivers might have been the cause of that. Programs loaded slower. The machine took longer to boot and longer to shut down (which needed to be done often, even more so than NT 4). Although it had a later version of DirectX than NT 4 SP3 did, none of the games I tried to use worked with it. (A few of them did in NT 4.) Internet Explorer kept forgetting that I'd told it to use a proxy server. Eventually I got sick of it and reinstalled NT 4.
The SP2 CDs just arrived yesterday. I'm very afraid.
- A.P.
--
"One World, one Web, one Program" - Microsoft promotional ad
Re:using clueless newbies for usability is correct (Score:2)
I agree, a newbie is the best possible person to use for usability studies. That is actually one thing MS got right: pop in the CD, wait, press the install button, fill in a few fields, (usually) reboot and off you go. Annoying for the advanced user, but extremely easy for the newbie (this is after my recent experiences of installing both NT and 95). However, for anything more complicated (e.g. downloaded drivers), their model just doesn't work as well, nor for installing the OS.
Poor Taste (Score:2)
Re:Have you Beta Tested Win2k? (Score:2)
Re:Viable Desktop Environment... (Score:2)
She manages quite well, thank you very much. Seriously, I set up a dual-boot for her a couple of months ago since she was curious about it due to all the recent hype. Now she uses Linux for some things and seems quite pleased that it never crashes. Surprised, even. She said one of the things she liked about it the most was the feeling of being "in control".
I think a lot of non-geek people will gladly learn a few more commands and what-not in order to have a PC that doesn't BSOD on them every time they turn around. Not all of them, but a lot of them.
Re:Hello People!! _You_ don't get it! (Score:3)
For a first-time experience, for the 'usability tests' that we're talking about, if you're looking to change something in the way your computer behaves, Windows gives you some clues from the get-go that the command line simply doesn't: try the big friendly button with the bouncing arrow saying "Start Here."
Inside there, a clearly-marked "Settings" menu, and inside there, Control Panels, Printers, so forth. Clearly-labeled hierarchical menus that lead you to figuring out where you're going, even if you're not sure. Once you get "Control Panels" open, there's a mess of nicely labeled icons to poke at and try to figure things out. From context, you know that everything in there is going to change some setting on the computer, and can poke around inside that context for a while until you get what you want.
The original poster's 'easier' alternative, the command prompt, offers no such context. Finding out that 'ntsysv' can change around certain settings is nice to know, but offers no context in finding out how to change _other_ settings -- commands starting with 'nt'? command ending in 'sysv'? Nope. No rhyme or reason. Or clues. Or hints. Or help. You just scratch around until you stumble across what you're looking for, and slowly accumulate knowledge. Maybe you find 'man,' maybe you figure out how the GNU info viewer works, maybe you have Gnome or KDE installed and can use THEIR visual context.
But if you plunk down newbies to a Win32 desktop and to a command-line Unix environment and just say 'go' with no further instructions, I think you know where my money'd be.
Now, me, I'm a command-line junkie. I _love_ it. I live at the bash or tcsh prompt all day, even on Windows boxes I administer (Cygwin's getting pretty cool these days). But to say that command line is '_so_ much easier' is simply wrong, and so I, too, hope the original poster was being tongue-in-cheek.
--
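For what it's worth, the command line does have breadcrumbs; they're just not advertised (ntsysv being the example from the post above):
$ apropos runlevel    # or equivalently: man -k runlevel
$ whatis ntsysv       # one-line summary, if the whatis database has been built
$ man ntsysv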
Innapropriate and pointless.. (Score:3)
Personally, I don't really think this was a good thing to publicize: a tech support call that outlines several of the weaknesses of Linux and its various desktop systems. It should never have been made public. It gives Linux (and Loki) a bad image.
Employee: I can't get Oracle for Linux installed.
Boss: Call tech support.
Employee: But what if I ask something dumb and my call shows up on Slashdot and I have to quit my job and become a hermit?
Boss: Hmm.. good point. Let's use Solaris.
Re:from another former microserf (Score:3)
Re:Figures. (Score:3)
...but perhaps also an irritating one; much nicer might be:
$ install_source tarball.tar.gz
where install_source is a script to do all of the above (a sketch follows at the end of this post).
Yeah, I do that stuff by hand, but that's at least in part because I read the README and/or INSTALL first - which raises the point that it's not necessarily as simple as you describe.
Then again, a fair number of Windows programs installed with those Wonderful User-Friendly GUI Auto-Install Tools pester me with a bunch of questions about what directory I want to install the program in, blah blah blah, although at least there it offers me a default that's usually what I end up picking anyway.
Some of them also offer me different types of installs - Basic, Full, or Custom, and stuff such as that.
So not all Windows software is trivial to install, either, even with autoplay, etc.. (That stuff might, to some extent, be the equivalent of the details in a README or INSTALL file.)
So:
A better installation process than either the traditional UNIX one or the one I've seen with some Windows applications might be interesting. Is there any OS out there that's done it "right" or, at least, closer to "right"? Have some Windows applications managed to avoid an installation process that asks you lots of questions to which the answers might not be obvious? (Applications, that is, that aren't so simple as not to have to ask you questions. Or is that, perhaps, the way to simplify the installation process - Keep It [i.e., the product] Simple, Stupid?)
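Here is a minimal install_source sketch as floated above, assuming a conventional autoconf tarball whose top-level directory matches its name, with no real error handling beyond stopping on failure:
#!/bin/sh
set -e
tarball="$1"
dir=`basename "$tarball" .tar.gz`
tar zxvf "$tarball"
cd "$dir"
./configure
make
su -c "make install"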
Linux, The Masses, and Information Appliances (Score:2)
I agree. Personally I don't see anything close to the Holy Grail of an "Information Appliance" yet. (Which is basically what we're talking about here.) Everything seems to be cheap PCs, or something that doesn't even come close to the versatility one needs. There doesn't seem to be a happy middle yet. You don't necessarily need majorly upgradeable hardware. (These things should be cheap enough for you to not even really have to think about just going out and buying a new one.)
There needs to be the right kind of software too.
I agree it needs to be inexpensive, but by the same token it doesn't really need to be "full featured" either. Just all the common stuff. (Gimp is a good example of this. It's powerful enough for you to use without feeling tied and gagged (i.e. Paintbrush or xpaint) but it's not something the CIA would use to doctor photos either.) And of course the interface needs to be clean. Personally I like the PalmOS UI. (Except for the dedicated writing area. I like WinCE's way better.)
Of course any IA is doomed to fail if you can't share files with the rest of the world, but again this is a software issue.
Bad, bad, bad, bad Loki (Score:3)
I'm even more disgusted with the way the contents of a private support call were made public.
Please join me in boycotting Loki. I'm not about to trust these clowns with a tech support call, much less my credit card number.
chris
Re:You've got some problems with your story there (Score:2)
Just out of curiosity, what are the circumstances under which you wouldn't just hit the "Next" button on the install "wizard" at that point, and would, instead, specify a place to install it other than the default? I've always found that particular part of the installation process for Windows to be an irritant, but there're presumably users for whom it's a necessity (installing on a file server? Or something else?).
(The stuff that offers you various types of installations, including "Custom", is also sometimes a pain; I seem to remember not always getting a good idea from it of what the consequences of choosing different types of installations, or of choosing to or not to install some particular piece in a custom install, would be, other than "it'll take up this much disk space" - but I think I've seen the same thing installing, say, various UNIX-flavored OSes, so that's not unique to Windows.)
The debaters here have largely talked about software installation on Linux (as a proxy for UNIX-flavored systems in general, although others may do things differently) and Windows; how does, say, the MacOS software installation process differ?
Starting with Civ? (Score:3)
Re:Hello People!! _You_ don't get it! (Score:2)
Re:Have you Beta Tested Win2k? (Score:2)
Nothing beats NFS installs though. At my last school we had a 100MBit network. I stuck Slack on a box and everybody installed from that. The teacher's jaw dropped, being used to the 10 MBit network at the university.
Yes, everybody in the class was a Slacker (the teacher, too). I had a SuSE system for myself, though. I installed PHP and Apache on a Slackware box before, and it just took up a lot of time. SuSE can just make it work at installation.
Don't get me started on Win installs. I buy all "name-brand" components and still it crashes over and over on install. I think it averages about 5 crashes during the install alone, in my experience.
Then, I have to, ugh, install all of those damn drivers and then apps. It takes about a week (a conservative estimate) to get to where a big distrib like SuSE or Debian gets in a day (including apps).
Anybody who thinks Linux is hard to install is living in a dream world. It's after the install where everybody should be working on useability now.
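For anyone wanting to replicate the classroom trick, a rough sketch (server path, hostname and subnet are made up, and the daemon-restart step varies by distribution):
# On the server, export the install tree read-only via /etc/exports:
#   /install  192.168.0.0/255.255.255.0(ro)
$ exportfs -a
# On each client (or from an installer that accepts NFS sources):
$ mount -t nfs server:/install /mnt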
Warning: I like subclauses. (Score:2)
$ tar zxvf tarball.tar.gz
$ cd tarball
$ ./configure
$ make
$ su -c "make install"
The average luser, told quote - what are you, retarded? - unquote, will say "No, *that's* retarded. The mouse has been around for 15 years. Why can't I just poke the file with it?" And so Corel tries to make it happen, while RHAT makes its manual a hundred pages longer.
Now, before you get all worked up and start goin' root-this and null-that, you should know that I *want* to use a *nix, because I know what's good about all of 'em: they work right. That's why I don't use WinAnything, even though I "should," being a luser. I've got a relatively useless iMac staring at me right now that I want to run YDL on once the G4 arrives, because I figure a Linux OS will make better use of the little girl's limited memory while doing silly crap like slashdotting and visiting alt.fan.traci-lords. For me (non-sysadmin, non-dev), that's about all Linux offers right now.
But to do work, I need two things (aside from Photoshop; don't get me started about GIMP (argh!)): solid uptime (which *nix has) and a GUI for every occasion. Mac OS 8.6, though it chews through memory like mad, almost has the second. There are Windows-style pop-up menus down in the left corner, GNOME-type open-app buttons in the right, a bunch of specialized 3rd-party menu-bar pulldowns and "virtual desktops," and, as a plus, the native scripting language is ridiculously simple (and works with almost every application). I use all of these, all the time.
What I want is a command line. I know how much faster it is to grep than it is to use the Finder, and I'd like to be able to do more while keeping all my windows open (script-on-the-fly, for example). A real "EasyLinux" would be perfect. But there isn't one. Not even close. I don't wanna *make* nothin' but pretty pictures and invoices.
And unfortunately for The Movement (which I truly do love), Mac OSX Client, a prettied-up and expensive BSD, probably will be exactly what I (and most everyone I know, almost uniformly Mac- and Be-using graphics/publishing/music pros) need. We don't like that it's not really Free(TM) and isn't particularly anti-MS, but so what? Got work to do (if I ever shut up).
And if Linux can't even convince a super-luser like me, it's valid to ask what my mom (whose computer is so damn nice I pray for her death daily) and some doofus trying to play CivIII and listen to Korn mp3s are going to think of it. The word "retarded" springs to mind.
Re:Figures. (Score:5)
My father has spent some 20 years working with computers, most of them in a DOS environment. Recently he had to adapt to Win95 and I was trying to teach him the basics. Now my father can issue 'arcane' commands like copy and mkdir and fdisk, and he has even mastered wildcards and such. He can program, and he can compile his own programs. Yet, it took him some thirty minutes to grasp the idea that "when I drag a file on another directory, the file is not moved, not copied, instead just a shortcut is created". After some frustration, he realized it'd be quicker to do it through the prompt, and he never used Windows Explorer again since. Then I had to explain about shortcuts on the menus and desktop... which eventually led to the question "can't I just add the damn directory to the $PATH??" Great fun!
An intuitive interface is an interface that provides you with an easy-to-grasp expectation as to what will happen when you do some action, and that fulfils that expectation. Well, I never really understood how that applies to Microsoft's interface. It hardly ever manages to do what I expect to happen.
It is natural with users of an interface to get comfortable with it over time. But intuitiveness does not refer to that. It refers to making users comfortable with the interface without prior experience and habitual familiarity with it.
Nick Moraitakis
Re:Bad, bad, bad, bad Loki (Score:4)
Um, this wasn't exactly a Loki press release. This was an individual employee, not speaking for the company, who I doubt even asked his supervisor if he could spread the story.
A few people here scream "boycott!" at the drop of a hat (which is often red, incidentally).
It doesn't solve anything.
You boycott a company that hires child/slave labour in foreign countries. You boycott companies that destroy the environment or personal freedoms. You don't boycott trivial stuff like this. Few people will listen to you.
Instead, a Linux company does the same thing and suddenly it's "Microsoft is hiring idiots and trying to spread FUD, blah blah blah".
Actually, most of the talk on this story is about useability issues with Linux. To top it all off, I see lots of agreement that Linux does lack useability in many areas.
Re:Bigger deal than we realize (Score:2)
That's the fucking point. There are people out there for whom even having to know about the steering wheel is "too technical". And their number will only grow if this "dumbing down" philosophy goes on.
Maybe we should remember the old adage that "there's not such a thing as a free lunch"? If you want to enjoy all the wonderful things that a computer can do for you, you'll have to make a little effort to learn about it. I'm not talking about writing your own device drivers; I'm just talking about knowing that you cannot drive your car telepathically.
Re:Bad, bad, bad, bad Loki (Score:4)
Like hell, they didn't. They said that he was a male Microsoft employee that worked on a project which was evaluating Linux. He had a 450MHz PIII with a Loki game installed on it. Believe me, he would not be that hard to track down.
Since this was never a "real" tech support call, is any "real" privacy being violated?
Uhm, it looked pretty damn "real" to me. What, are you saying that this tech just made up this story?
Are you also going to boycott all Microsoft products because they are trying to fake a study on Linux usability?
Who said anything about faking a story on Linux usability? Corporations test out competitors' products all the time, to help them improve their own products. The only thing we know about this was that this was a usability test. Anything beyond that is pure speculation.
Re:Bigger deal than we realize (Score:2)
If it found dpkg it converted itself into a deb and installed itself.
The same with rpm. If it found nothing it just installed as a TGZ.
If you want it, email me and I'll dig it out. It's just a bundle of shell scripts.
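Something similar can be sketched with alien doing the format conversion; the package name is hypothetical, and the original script apparently rolled its own conversion:
#!/bin/sh
if command -v dpkg >/dev/null 2>&1; then
    alien --to-deb mypkg.tgz && dpkg -i mypkg*.deb
elif command -v rpm >/dev/null 2>&1; then
    alien --to-rpm mypkg.tgz && rpm -Uvh mypkg*.rpm
else
    tar xzf mypkg.tgz -C /usr/local    # plain tarball fallback
fi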
Re:Bad, bad, bad, bad Loki (Score:2)
I agree with you that my use of the word "boycott" was a little too strong. I'm serious when I say that I have doubts about Loki's concern for customers' privacy but I will not judge them until I see an official Loki press release on this. If they publicly apologize, then I will know that this tech support person screwed up and that this is not the way Loki treats their customers. Otherwise, I can assure you that I won't buy any of their products.
chris
Re:Bigger deal than we realize (Score:2)
I'm guessing that quite a few folks feel like me, that is to say, if it becomes reduced to a toy for the lowest common denominator, then I will seek a new platform. I always liked FreeBSD.
Linux didn't get to be even this grown up by pandering to every trivial fashion to come along. Frankly, if I wanted a brain-damaged box, hobbled by trying to be overly friendly, I would have bought another Mac...
Re:Hello People!! _You_ don't get it! (Score:2)
For the kinds of operations that average users spend most of their time doing - moving files around, cutting and pasting bits of documents - Windows has plenty of shortcuts that even occasional or inexperienced users quickly become accustomed to, but at no stage is the user thrown in at the deep end and expected to pick them up straight away. Result: for an average user, Windows is easier to learn and easier to use.
"Designed by idiots for idiots" should be the mantra of anyone who seriously wants to take on the mass market.
Re:You've got some problems with your story there (Score:2)
In Linux, if I have multiple drives or partitions, I mount them accordingly, and netscape goes where it should by default! It doesn't need to know or care what is happening on the FS or hardware level.
Also, that "Last Known Good" hardware choice for NT has NEVER worked. I support 60 NT boxen, and 5 NT servers, and I have NEVER been able to get that to work. Even hardware profiles being created don't help, for some reason. And guess what Tech Support is gonna tell you to do?
And hopefully no one is enough of an idiot to overwrite their old kernel without first booting to the new one...
Descent from Paranoia (Score:4)
This is NOT a competition against Microsoft. Don't use Linux as your private banner for a campaign against Microsoft - or any other competitor. Those of you who do are working directly against Linux. I refer you to the Crusades and the Spanish Inquisition. Both done in the name of Christianity. Both waged against an enemy that any convert could see was evil.
Microsoft is big enough that if the Linux following tries to make sure Microsoft can never outdo Linux *by observing* Microsoft at the microscopic level, then Linux will turn into a Windows parasite.
I would suggest that Netscape's biggest undoing came not from Microsoft, but from Netscape. They got too obsessed with 'beating down Microsoft', and less and less focused on 'making a better Netscape'.
By Netscape 4.5, Microsoft didn't really have much to compete with.
I realise people are going to jump up here and tell me how the court case helped thus and something else did that.
But do we have a great web-browser? No.
Microsoft play a game, a competition. Linux has no need to enter the bullring. Remember what makes Linux what it is is people developing Linux for users, developing Linux for sys admins, developing Linux for deployment. Don't turn this into Linux development for comparison charts.
Oliver
Dangers in dumbing down (was Re:Bigger deal than we realize) (Score:3)
One of the reasons I use Linux is that I'm relatively immune from the viruses and other nasties which affect the Windows world. An important reason why I'm immune is that foreign software, if it is to do anything serious to the system, must be installed as root, and anything which I install as root I have the source code to and have usually compiled myself. I usually haven't crawled through that code to check for hidden nastiness, but I could if I wanted to.
Recently I've had packages (mostly Java ones) which come with MS-Windows-like graphical point and click installers. To install these you've fundamentally two choices: to install them in the user space of some user, as that user, in which case you've immediately got problems with other users using the same software; or to install them as root, without being able to do 'make -n install' first to see what the hell they're up to.
This is what you're asking for if you ask for a point-and-click RPM installer. It would (in the general case) have to run as root, because otherwise it couldn't install into privileged parts of the filesystem; and before you know where you are you would have masses of hostile variants of well-known RPMs installing trojans and trap-doors and worms all over the shop.
Now, of course, we elite users wouldn't be taken in by these, but the newbies and journos and other less elite and refined Linux users would be, and they would not be impressed. And then the media would be full of stories about how insecure and risky Linux was and we'd lose all the ground we've gained over the past years.
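To be fair, there are partial mitigations even now, if you know to look for them (package name hypothetical):
$ make -n install | less               # dry run: see what 'make install' would do
$ rpm -qlp foo-1.0.i386.rpm            # list the files a binary RPM would install
$ rpm -qp --scripts foo-1.0.i386.rpm   # show its pre/post-install scripts
$ rpm -K foo-1.0.i386.rpm              # verify its PGP signature, if it has one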
There really is a significant engineering trade off to be made here. Microsoft (and Apple before them) know this perfectly well and have made a conscious choice to go for ease-of-(unskilled)-use over security and stability. And Microsoft are now moving from that extreme ease-of-use position towards a still easy to use but more secure position by using installers that look at digital signatures and so on before installing a package.
Remember, we (the Linux community) are not competing with a bunch of incompetent morons here. We're competing with an extremely slick and professional marketing organisation, who hire very capable software engineers. We've got where we are because we occupy an ecological niche that Microsoft hasn't yet occupied: something with better security, but a bit harder to learn. I don't believe we can compete with Microsoft in their core market, because they are already established there and they are very good at what they do.
If we erode the things which make our product distinct from theirs we risk losing our market share, with (in my opinion) little prospect of taking theirs.
What we need to do is not change our product (at least, not radically) but to educate the marketplace to see its benefits. Our message must be 'Yes, Linux may be a bit harder to learn, but the improved stability and reliability are worth it'. Instead of going out to capture their market, we need to bring (some of) their market to us.
Standard development machine... (Score:2)
--
Linux usability and its implications (Score:2)
It's not FUD. Linux is not usable by most of the world's population, and it was never intended to be. Linux remains a technical, enthusiasts' OS; to use it to do almost anything at all requires a vast store of knowledge and familiarity with the functioning of the OS and programs. Think of all the concepts we understand and take for granted: filesystems and mounting, users and permissions, processes, paths, packages.
This lends itself to more of a learning cliff than curve. Most of the world's population doesn't even want to know what a filesystem is. They just want to be able to press a button to send email to Jimmy. If they're going to use Linux as a desktop OS, they need to be abstracted from all the internals of the machine.
Linux, even pre-installed with KDE/Gnome, is nowhere close to this. I would never recommend Linux to a non-technical enthusiast in a million years. If you had to give OS support to your clueless grandmother/uncle/neighbor, which would you rather they use--Linux or 98?
The important question is: do we really want these people using linux, in any form? It's not as easy a question as it might sound. On one hand, pretty much all Linux users dislike Micro$oft. We're all happy to see a proprietary, closed, inferior OS get trashed by Linux. The rapid expansion and public hype has also benefited the Linux community immensely. A couple years ago you never would have seen useful things like QT, XFS, and Darwin open-sourced, major games on Linux, or graphics companies releasing Linux drivers... Such benefits will continue to flow as more people and hence desktop applications support linux.
But there are also dangers if this increasing popularization of Linux were to occur, more than just the irritation of having users that don't understand what a tarball is. The reason most people I know use Linux is because it's so complex. What first attracted me to Linux was its complexity, its power, and the ability to manipulate, control, and monitor the OS on a low level.
The problem is that while ease-of-use/idiotproofing and power can coexist, it's a difficult and unstable situation. As it stands now, most programs cater to advanced users -- text-based configuration, command-line switches, and the like.
Programs that might otherwise be ported to Linux as it is now, with full functionality, could work only in user-friendly mode. Hard-core Linux users could face the unpleasant choice of either 1) continuing as they do now, compiling software, not using insecure features, and being unable to use most software out there, or 2) having to deal with many of the annoyances of Windows, except on Linux.
So think through the issue carefully before espousing Linux as the OS of the masses... do we really want Linux to be an OS usable by those who have no idea how it works? Or do we want to keep it an OS for technophiles, one that chooses power, flexibility, and security over ease-of-use and simplicity? I know why I use Linux; the choice is clear to me. There are enough tech-loving people around to make linux a viable, well-supported choice without opening it the masses.
Ali Soleimani
Caltech math/physics undergrad
[email protected]
Re:What are they doing playing games anyway? (Score:2)
Re:Bigger deal than we realize (Score:3)
Coaxial, you are the voice of reason. Thank you for speaking in defence of Linux. When Linus began Linux, he set out to build a free Unix-like operating system, not a contestant for dominating OS of the Universe. He wanted Unix, but now so many people who want an alternative to MS have attached their allegiance to Linux. However, they don't want a Unix-like OS, they want an OS for everyone.
Unix isn't about dumbing down, it is about empowerment through knowledge. The more I learn about Unix, the more I discover I can do. Most of the things I am thinking of here cannot be expressed efficiently in a GUI (OK I can think of a way to make a GUI environment for shell-scripting, but it doesn't speed things). Unix isn't about catering to the consumer.
Essentially, to cater for the consumer, you would have to remove most of the things that make Linux so great. The multi-user environment is fundamental to Linux security, both from the outside world and poor-quality local code, but logging in and maintaining user accounts gets in the way of the consumer, so throw it away! Then we don't need to worry about file permissions and ownership, after all they just get in the way of the consumer, so throw them away! The consumer finds it inconvenient and difficult to build from source, so throw away the development tools, and open source! Eventually you end up with a free Win9x.
I'm not saying that consumers shouldn't be allowed to use Linux. I'm saying that consumer interests should not be allowed to damage Linux. If you manage to create a Linux distribution that caters for the consumer without damaging the source tree by removing things, just by setting things up so that they are easier, it will likely get Linux publicity. It will give the wider populace the impression that Linux is a low-end OS that doesn't have powerful (OS) features, and, with masses of people who find clicking difficult using it, probably low stability as well.
Let consumers have Linux. But don't prevent anyone from having a free Unix-like operating system. Don't let one of the brightest public displays of open source go out.
Re:Figures. (Score:2)
Re:Bigger deal than we realize (Score:2)
Computers should be as easy as cars? Well, do you have to have a driver's license for a computer? Do you have to take a driver's test? Nope. You just have to learn a few things about "driving" the computer.
Cars are really not that simple. You have to know about all the traffic rules. For some people that's really tough. However, you learn them by reading, not by just driving around hitting people and then complaining how this thing crashed
:-)
You really don't have to change engines or gas in your car. You don't have to change network cards and hard disks either. Where's the difference?
But then again, if you want to use the computer easily, I have one for you. It's called Barney.
:-)
I understand MS's position (Score:2)
Disclaimer: I do not hate Linux. I think it's a good OS depending on what you want to do. I just feel like MS is a better choice for a newbie, and for me.
Re:Document Reading. (Score:2)
What's so special about this event? (Score:2)
From the point of view of "Microsoft spying on Linux" it is not exactly breathtaking news that Microsoft might be playing around and doing tests on competitor products. I would bet my left hand that it's not the first or the last time they do this, and it's not even worth noting as a practice in this industry.
I can hardly consider this article "news".. I would more easily classify it as "gossip", given that I also have my doubts as to whether it is an admirable practice to publicize the contents of private support service calls.
Perhaps this article has given rise to an interesting discussion over UI issues... but nevertheless this could have been achieved without introducing tabloid-style gossip headlines such as "Microsoft plays Linux games at work". What's next? "Microsoft runs out of sanitary paper?" or "Bill's X-lover reveals true nature of company name?"
Nick Moraitakis
Re:Bigger deal than we realize (Score:2)
I don't CARE what the difference between glibc 1 and 2 is. I don't care what "CVS" is. I don't have to know the difference between kernel 2.2.6 and kernel 2.2.10 to run my desktop. (Faster and stabler than Windows, but that's another post). When I use my desktop, I never want to have to do more than turn the computer on and change my screen-saver. When I use my desktop, I want to doubleclick on something and start writing my reports. Yes, I *do* have to know about the mouse, both buttons, and my username and password, but I don't have to know anything technical about the O/S to use my desktop properly.
Am I missing something here, or is this already the case? RedHat 6.0 will boot from my CD and do everything except note that my mouse is an intellimouse on its own*; if they included AbiWord (for example) in their install, I'd never have to do anything else.
Nonetheless, I happen to know the differences between shocks and struts, and the grades of gasoline, and what the difference between 10W30 and 10W40 motor oil is, because they make me a more effective user of my automobile. I also happen to be an inveterate tinkerer, so I also know how to rebuild an engine from scratch, if you'll send me the plans. Heck, I'm building my own car right now, from parts and plans other people supply; I'll be selling the plans into the public domain once I finish.
As it stands, most people are ignoring the vast majority of things they can do with 'computers' (in most cases, with their favored productivity app(s)) because they are either unwilling or unable to learn about the rest. I am an elitist; I don't necessarily believe that the Linux Command-Line Way is the Right Way, but there is a learning curve, especially for Windows**. (I went from an Apple IIGS to a Win3.1 PC box, and I was lost as all get-out until I sat down and read the manual cover-to-cover. The Apple IIGS was a triumph -- a totally general information appliance. Only if you stuck the GS/OS or Finder disks in specifically did you even realize there was an operating system there at all.)
However, a lot of the general resistance to dumbing-down comes from (geek) culture rather than technical issues***. It goes against what's valued (especially in the gift culture that's developing these applications and operating systems about which you're pontificating) to dumb things down, though not to make things better. In many cases, the two are diametrically opposed. (Also, the less-choice-is-simpler camp attacks the libertarian in every geek.) Our gift culture values displays of competence, pretty much. Social standing is generally taken in terms of competence (up to wizardry in areas); it's no surprise that Joe WinBloze user is universally reviled. No one objects to mpg123 front-ends, but using a front-end in this case actually gains you a significant amount of functionality; and the command line is still there if you ever (cronjob alarm clock, anyone?) need to script it. So there's a certain prestige attached to the mpg123 front-end builder.
I personally used fvwm95 until I got my hands on KDE, so I don't find it particularly disturbing. It also means that I personally know that it's possible to move from the wrapper to the real thing (so to speak -- not to be dissing fvwm95). On the other hand, while WinLinux is technically impressive (slurping h/w settings out of the windows registry and translating that into Linux drivers & setup is decidedly non-trivial), not many people seem to want it.
Elegant (in the classic sense) designs are always easy to use, and that's what we should be aiming towards. The opposition stems from the dumbing-down process not generating an elegant design. If you start from elegance, ease of use follows; if you start from dumbing down, it usually doesn't.
Intuition is wrong enough frequently that it's not always a good design...
-_Quinn
* Admittedly, the defaults aren't particularly brilliant, but that's life.
** The major advance of Windows 3.1 was the GUI for applications, not for the system. This is where Win9x went horribly wrong. It's a given that the computer won't always be right, so treating cases where it isn't as exceptions is a Bad Idea and tends to break things. (That is to say, I
I know it's admitting to being horribly inadequate, but I don't know TeX, and find that WordPerfect 8 and/or AbiWord satisfy my writing urges completely (except for WP8's broken postscript generator). I know I'm not a Real Coder because I like using jcc over emacs or pico. (Though I do keep an xterm handy for doing the makes, because my Makefiles would give makemake the shaking fits.)
*** IE, we're not building a homogeneous system here; interoperability is going to be a bitch. IE, mounting is the Right Thing to do; is automounting? (For the "Why should I have to 'mount' CD-ROMs when they 'just work' in Windows?" questions.) It is the Right Thing to distribute applications as source; should we adjust the LSB so that there's a user-writable subset of directories for scripted installs? Should we ship our distro with '.rpm' as a MIME-type for netscape so we can 'just download' a binary-form application?
Re:using clueless newbies for usability is correct (Score:2)
1) HELP is not cool (kewl/c00l);-) ("If you can't find it out by reading the manual or the README, thou art not worthy")
2) HELP-files are really hard to write. Big companies have people working for them doing nothing but making manuals, and these professionals are the people that make those incredibly clear, easy to read, VCR-manuals!!!
Truth is (IMHO) that if you guys want to give Linux a place on the desktop, you will have to cope with (l)users.....
What Linux probably needs is a "TESTGROUP" of some kind. Maybe just a bunch of (ex-)WIN users that are genuinely interested in Linux, but just can't get it working.
Truth is, I am such a person. I'd love to try and get Linux on my machine, but it's just too damn hard... Well, maybe I'm just a lazy bastard....
Bauke
_
Light travels faster than sound. That is why some people appear bright until you hear them speak.
Cars/Computers: BAD analogy (Score:3)
If you must have an analogy, I suggest computers to vehicles. Maybe you need a car, and I need a dump truck, and someone else would need a jet.
If you must have an OS for the masses, then why don't you write one? Linux was written to be a powerful, Unix-like OS. That's why Linus started on it in the first place. I don't see why it has to meet the needs of every single person. If you want a free home-user OS, there's plenty of GPL'd code you could use to start building one.
Linux Passes, Civ:CTP fails (Score:3)
I think in this case Linux actually passed extremely well. This guy has done a complete install of Linux, obviously got the desktop working to the extent of browsing the filesystem and CD in the GUI.
It's CIV:CTP that's failed if anything. I bet if their install had been called setup instead there would have been no problem.
I've seen plenty of software that's difficult to install under Win, and most that's impossible to uninstall. Quite often with NT some stuff can't be installed without administrator perms because obviously all DLL's have to go in the system directory.
I used to use Linux as a Microsoft Employee (Score:2)
Mounting disks (Score:2)
Somebody (if I ever get the time I'll look into it myself) needs to modify autofs so that a mountpoint can be specified, and will show up in the automount directory, but won't be mounted until you actually try to enter that directory and retrieve the file list. That way, under [Gnome|KDE|Enlightenment|*], there will be an icon for drooling user boy to click on. This would go a long way to making Linux easier to use.
Now, re: Loki's install. The biggest problem with the install script that I had was that it was not tagged as executable on the CD. That means that:
Now, I agree that autostart on programs is a BAD IDEA. I turn it off on my W9bluescreen systems, but it does make it easier for luserboy to run his stuff. I have a friend who has a five year old. She can play her games on the computer, since they all autostart. She finds the CD, puts it in the drive, and there is her game. There is no way my friend will leave that system in Linux and have to deal with a cranky kid! But, autostart should be an option (perhaps a daemon that is launched (or not) on a per user basis when you log in AT THE CONSOLE.
Lastly, the LSB needs to push some sort of standardized installer, be it RPM, DEB, or supercilfragoulishexpealidous. But it needs to be standardized, it needs to be able to set up any and all desktop managers (i.e. the various GUIs need to settle on a standard, common way to set file associations, icons, etc.), and it needs to work both as a GUI and CLI app.
Make no mistake: if we do not make Linux desktop ready, Bill will bide his time, gather his strength, and when he is ready, destroy us. In his world view, "There can be only one."
Does my girlfriend's mother count? (Score:2)
It's running RedHat Linux 6.0, updated with KDE 1.1.2, and heavily tweaked by me to make it easier to use. Sure, it took me two days to configure the machine so that I was happy with it - but after I was done the result was really very good. With constantly improving distributions and apps, I expect that next time it won't take me two days to get even better results...
Linux just works. Making it work the way you want it to is getting easier every day - and that includes making it user friendly.
The problem with most usability tests, is people are testing users who already have Windows or Mac experience, and therefore expect certain things from their computers. If Windows users hate using Macs and vice versa, is it fair to expect either to like Linux? A valid test would involve a well configured machine and a complete newbie. I'm getting the distinct impression that Linux can do a pretty good job in that scenario.
IE, Microsoft are stupid... (Score:2)
If Microsoft needs to do hands on tests to figure that linux is unfriendly to newbies who can't rtfm and are trying to manage their own machines, they really are as stupid as many people here seem to think.
The problem for MS is that a large portion of their constumers are not in this situation at all. I probably could not get many of my non-geek relations to move to Linux today, because they manage their own machines to some extent and don't have the enough interest to learn something new and complex.
My parents (and my surfin grandma), however, never do any management or installation on their own windows machinse anyways, so if it wasn't for the MS-Office thing I could move them straight over any day now (if they were still living in the same country as me that is). All they have to do is learn to click on an icon in KDE instead of Windows 98.
As Linux gets more and more simple and the average knowledge of computer users increases, the middle group is shrinking.
-
Here's your new Turing Machine, Mom. (Score:3)
That's the drill: put a Turing Machine on your mother's desk, but hamstring it first, so it won't do anything that would make her need to ask a question or two.
That's fine for an appliance, but let's not call the result a computer.
Ease of learning vs Ease of Use (Score:3)
Windows is much easier to learn than Linux. Sit pretty much anyone down who knows how to mouse, and (for example, an experienced Mac user) and they'll probably be able to get a lot of things done. The reason is that the GUI provides a lot of context for you- look at an empty screen, and there's a big start button that will lead you to almost everything that's useful on the computer with nice hierarchical labeling.
This does NOT mean that Windows is perfect in this regard- knowing to move a little box with a wire sticking out of it to make an arrow on the screen point at something is a new concept for a LOT of people. But it's possible for a reasonably computer literate person to use without reading any documentation.
It is not possible to find most of the useful things on your average Linux box by pointing and clicking. Yes, it *can* be set up this way for "normal" end-user tasks if someone knows what they are doing comes along and puts all of the right things in (for example) the KDE menuing system. But putting anything new onto the machine (or doing serious reconfiguration work) requires a lot more knowledge than you're likely to get by pointing and clicking. Even finding the right docs can be a real challenge.
But this is all about the first time you use a system. What about the 100th time? If you're a patient user and have taken the time to learn what to do, the problem changes entirely from "how do I find things" to "how do I get to what I need efficiently". IF you know Linux, it's very efficient to get around in. The command namespace is flat- there is no hierarchical set of menus to click through to get to what you need, so every command is at your fingertips if it's in your brain. Most things can be automated with scripting if you know what you're doing, and if typing three keystrokes to get your favorite text editor open (vi) is too much, you can alias it down to two.
My point is that "usability" is not a simple scale with things that are "usable" and things that aren't. A lot of you who love Linux today (including me) would hate some of the changes that would be required to make it more friendly for newbies, because it would sacrifice one kind of usability for another. And no, you can't always have it both ways... some of the properties of Linux that make it so powerful (customization) also fundamentally decrease the newbie-friendliness.
Re:Linux usability, and its implications (Score:2)
Actually, Linux might be able to have it both ways, by providing an ease-of-use layer (aka straightjacket) on top of the naked power of the basic system. I just urge those building the EOU layer not to introduce incompatibilities with the base system, so that software not explicitly invoking EOU features will remain portable.
And don't saddle us with a Macro Virus Support System under the title of "integration".
Re:i can see it... (Score:3)
Or VCR's. I still can't get mine to work;P
I'm a recovering Windows user, and for the most part using RH6 is at least as easy as the first time I tried to use Windows 95.
The fundemental problem with computers exists most often between the chair and keyboard. Hoew much easier is it supposed to be exactly when Microsoft even takes care to put a help entry two or three spots up the bottom from the start menu? The guy in this story saw the words README and didn't even think to check there first.
Don't get me wrong though...It gives me hope that Red Hat may be onto something basing their business off of supporting people. They'll always need it.
This simply isn't right.. (Score:3)
I don't care if they changed the name or not.. It was tacky, and makes Loki look bad..
"Hey guys!! I work at a sex hotline, and I just got a call from ".
For all the complaining about privacy, apperently it doesn't apply to Microsoft and their employees..
Computer Difficulty (Score:3)
The car analogy works in other ways, too.
When I did tech support for an ISP, it amazed me how often people moaned "Oh, this is SO hard" over the phone. I would tell them "click here... click there... click 'ok'" and get "Ohhh... this is sooo hard. How do you learn all this?" But I'm a "computer person" and they're not - why should I be amazed?
When's the last time you heard a reporter on TV moan or joke about the complexities of cars? "Yes Corky... I know what you mean. Last night I went for a little drive and there was a light blinking on the dash. By the time I figured out I needed something called 'gas', I THEN had to figure out what 'octane' to buy! Those cars are sure difficult" (group chuckle).
I'm sure this kind of car conversation wouldn't be as out of place if it were 70 or 80 years earlier. But these days, its ludicrous. Furthermore, no respecting "intelligent" public figure would repeat such absurdity. Cars are old hat. EVERYONE knows how to operate them. If they break, most people shrug and hire someone to fix them. When we're "car newbies", we take Driver's Ed. classes to get the basics. Then we build on the basic knowledge with experience. Its all very simple.
Welcome to the "computer generation". Pundits used to love talking about how computers will be in everyone's life during the 80s. We're there now. And how does popular culture refer to computers? "Ohhh... they're so HARD!"
Hobbiests are going to enjoy the ins and outs of their chosen interest. They'll tweak and tinker. And they'll smirk at those who don't have their understanding. Even if that hobby involves what others see as simple tools. But that works well for the hobbiest - they can make their interest their profession. Provide the casual user with a simplified interface so they can use their tool. Then take over from them if their tool breaks. It works for cars; it'll work for computers.
What we don't need is the continued absurdity that, in this day and age, computers are "too hard" fostered on us by popular culture.
Re:Bigger deal than we realize (Score:4)
It was this thinking that allowed for the rise of Microsoft. Or maybe it was the huge licensing fees. From the GNU Manifesto Once GNU is written, everyone will be able to obtain good system software free, just like air. Following that, shouldn't it be just as easy to use?
I'm not saying that consumers shouldn't be allowed to use Linux. I'm saying that consumer interests should not be allowed to damage Linux.
I don't see how they can, other than invading newsgroups and flooding newbie questions. But, when this happens, paying for Service comes into play. Regardless, the whole thing is based on choice, even if a new super-easy GUI distro comes out, you don't have to use it. Just because there are more layers on top doesn't mean you have to use them.
Change is always difficult to deal with. What I see in this post (and the others like it) is akin to a father watching his daughter go out on her first date. "Touch her and die!" You may shout, but if you had listened to that advice, she would never have existed. Trust that you raised her well and gave her the tools to deal with unwanted advances. | https://slashdot.org/story/99/09/23/1452234/microsoft-plays-linux-games-at-work | CC-MAIN-2018-22 | refinedweb | 14,608 | 71.95 |
CAPITAL BUDGETING (25 marks)
SeaGate Manufacturing is considering the replacement of an existing machine. The new machine costs R1,2 million and requires installation costs of R150 000. The existing machine can be sold today for R185 000 before taxes. It is 2 years old, cost R800 000 new, and has a remaining useful life of 5 years. It is being depreciated under the straight line method over an economic life of 5 years. If held until the end of 5 years, the machine’s market value would be R0. Over its 5-year life, the new machine should reduce operating costs by R350 000 per year. The new machine will be depreciated under the straight line method over an economic life of 5 years. After 5 years the new machine can be sold for an estimated R200 000 net of removal and cleanup costs. An increased investment in net working capital of R25 000 will be needed to support operations if the new machine is acquired. Assume that the firm has adequate operating income against which to deduct any losses experienced on the sale of the existing machine. The firm has a 10 percent cost of capital and is subject to a 40 percent tax rate.
REQUIRED
(a) Develop the relevant cash flows to analyze the proposed replacement. (15)
(b) Determine the payback, net present value (NPV), the internal rate of return (IRR)
and the profitability index (PI) of the proposal. (8)
(c) Make a recommendation and justify your answer. (2)
WEIGHTED AVERAGE COST OF CAPITAL (WACC) (18 marks)
Xylink Corporation’s current sales are R2 000 000 per year, consisting of fixed costs of R500 000 and variable costs of 35% of sales. The company aims at increasing its sales and intends expanding its production capacity by investing R900 000 in a new plant and machinery.
It is company policy to finance 60% of its total financing needs with equity and 40% with debt. At present 50 000 shares have been issued and trade at R50 each, while the company pays R200 000 interest per year on debentures with a yield of 12%. The company’s investment banker also
provided the following information:
- the average return on equity is 8%.
- flotation costs will be 10%.
- the dividend growth rate is expected to be 6%.
- the dividend cover is maintained at 4 times.
- FUN4U’s marginal tax rate is 40%.
REQUIRED
(a) Calculate Xylink’s current retained earnings. (6)
(b) Identify the forms and amounts of new financing required for the proposed investment. (4)
(c) Calculate the weighted average cost of capital (WACC) that should be used in the evaluation of the new investment. (6)
(d) Indicate how many new ordinary shares should be issued. (2)
NOTE: Expecting the solution as soon as possible.. | https://www.coursehero.com/tutors-problems/Finance/498020-CAPITAL-BUDGETING-25-marks-SeaGate-Manufacturing-is-considering/ | CC-MAIN-2018-51 | refinedweb | 465 | 63.9 |
Have you ever peeked into one of those bazillion .el files in your Emacs installation's lisp folder and wondered what it meant? Or have you ever looked at a GIMP script .scm file and scratched your head over all the parentheses? Lisp is one of the oldest programming languages still in common use, and Scheme is a streamlined dialect of Lisp. Many universities use Scheme as the language to introduce students to the Computer Science curriculum, and some of their teaching methods are based on the assumption that Scheme is the one language they can count on their students knowing. Even so, many active programmers and system administrators are unfamiliar with Scheme. This article will get you on your way to adding this tool to your developer or sysadmin toolkit.
One of the first questions people have about Lisp-family languages such as Scheme is "What's up with all the parentheses?" Scheme programs are just sequences of lists, and lists are delimited by parentheses. After all, LISP itself is an acronym for LISt Processor. Whereas other programs use a variety of punctuation for syntax, Scheme's syntax is trivial -- everything is expressed as lists -- and this simplicity manifests itself in lists of lists of lists, all showing up as nested parentheses. In C you might see something like
{
z = x;
}
where you see syntax elements (), {}, and ;. A similar behavior in Scheme would be written as
The entire expression is a list with three components: the operator
if and two more lists. The first of these two is a list with operator
= and two variables; the second is the operator
set! and two variables. There's no need to learn when to use braces or when the semicolon is required.
List manipulation
Where do lists come from and how are they taken apart for interpretation? There are three mystically named procedures,
cons,
car, and
cdr.
cons is an abbreviation for "construct" and constructs a pair from two elements. If the second of the two elements is a list, the resulting pair is a list. (There is also a non-list pair, but that is a topic for a longer article.)
car returns the first element of a list, and
cdr (pronounced could-er) returns the rest of the list, which might be empty, denoted
(). The two operators
car and
cdr get their names from a couple of registers used on the very early computer that Lisp was first written for.
Normally, Scheme tries to evaluate any symbols it encounters in a list. For example, if presented with the list
(a b c), Scheme will try to evaluate a, b, and c. To prevent this evaluation and tell Scheme to just use the list literally, a single quote mark precedes the symbol or a list, as
'(a b c). The
quote operator can also be used, as
(quote (a b c)).
If
ll is the list
(a b c), evaluating
(cons 'd ll) produces the list
(d a b c). Each of the elements could itself be a list. Now let's go the opposite direction and extract elements of the list.
(car ll) returns the value
a and
(cdr ll) returns the list
(b c). How would you get the value b? A good guess would be
(car (cdr ll)). But this is such a common task that Scheme provides a suite of abbreviations. The one-step way to get b is
(cadr ll).
We have been flirting with the question of how Scheme processes lists. A complete discussion takes quite a bit of space, but a beginner-level explanation is that Scheme looks at the first item in the list and evaluates it first, then decides what to do with the rest of the list. In a simple case like
(+ 13 aa), Scheme sees that
+ is the addition operator, and then proceeds to evaluate 13 and
aa. 13 just evaluates to the number 13, and if
aa evaluates to a number, the two are added. If
aa does not evaluate to a number, Scheme reports an error and aborts.
Some operators, called forms, do not always result in all of the following elements being evaluated. For instance, if the first item in the list is
or, following elements are evaluated in sequence only until one evaluates to
#t (Scheme's representation of true). For example, in
the display does not happen if x is a number.
Defining procedures
We need to take a deeper look at one special operator,
define, Scheme's operator for defining new procedures. The easiest way to grasp this operator is with the aid of a simple example. Let's define a procedure to compute the cube of a number. We'd like to end up with a procedure that will return the cube of, say, 32 when we say
(cube 32). Most Scheme programmers will define this procedure as:
(* x x x))
Simply start with the operator
define, write a pattern for how the procedure will be called, replacing a symbol for the parameter, and then an expression or a sequence of expressions for what the procedure does. The procedure's return value is just the value of the last expression evaluated. This example is trivial, but it contains the required elements.
Many procedures require local variables to be effective. Scheme offers two operators for initializing local variables,
let and
let*. The structure of both is similar:
(expr1)
(expr2)
...)
where any number of variables can be set to any number of values, and any number of expressions can be evaluated. The values may themselves be expressions. The difference between
let and
let* is that in
let expressions, all the value expressions must not contain any of the variables defined in the
let, whereas in the
let*, the values may be expressions that use variables defined above.
The method described above is syntactic sugar for a method that is both more flexible and, arguably, more expressive of what's happening under the covers. You can write this:
(lambda (x) (* x x x)))
The procedure
lambda is an operator that produces an unnamed procedure. Procedural programming, which is well supported by Scheme and Lisp, makes heavy use of such procedures. The object returned by
lambdais the procedure, and the
define simply assigns a name to an otherwise anonymous procedure. If this seems a bit bizarre to you, don't get sidetracked by it; you can write a lot of effective Scheme code without ever typing l-a-m-b-d-a.
If you're an experienced Scheme programmer, you're probably sputtering something to the effect that there's a lot more to defining procedures than this. That's true, but the number of options and variations is beyond the scope of this introduction. For all the gory details, turn to the references in the sidebar.
Key differences between Scheme and Lisp
Comparing GIMP script-fu files and Emacs elisp files quickly raises a lot of questions about differences between the two. The first difference to get used to is the difference between
define in Scheme and
defun in Lisp. In the latter, the pattern is
(...))
It's an easy enough difference to remember: Scheme uses a pattern that looks like how a Scheme procedure is called, and Lisp uses a pattern that looks like a lambda definition.
The other key thing to remember is that it is customary to define tests in Scheme with a question mark at the end, such as
list?, whereas in Lisp it is common to put a "p (for "predicate") at the end, such as
listp.
Beyond these two differences, it's mostly a matter of learning procedure names by that old tried-and-true method: start writing some code while you have a browser opened onto a good reference.
A simple practical example
Ready to see Scheme tackle some real work? I spend my workdays writing C++ code, and I got tired of always having to type in the same boilerplate information into the .cpp and .h files I create for every C++ class I create, so I called on Scheme to do that task for me. Now, by entering the shell command
NewClass MyClass, I get two files, myclass.h and myclass.cpp, both set up with just the right minimum of C++ content so I can go right to work on coding. You should be able to follow along now.
#! /usr/bin/guile -s
!#
;; Note the unusual shebang lines above.
;;
;; NewClass
;; Usage: NewClass ClassName
;; Produces classname.h, classname.cpp (all lowercase)
;; for C++ class ClassName (case preserved)
; Create header file containing class declaration
(define (make-cpp-decl className)
(let* ((fileName (format #f "~a.h" (string-downcase className)))
(outf (open-file fileName "w"))
(headerTag (format #f "~a_H" (string-upcase className))))
(display (format #f "#ifndef ~a\n" headerTag) outf)
(display (format #f "#define ~a\n\n" headerTag) outf)
(display (format #f "class ~a\n" className) outf)
(display "{\n" outf)
(display "public:\n\n" outf)
(display "protected:\n\n" outf)
(display "private:\n\n" outf)
(display "};\n\n" outf)
(display (format #f "#endif // ~a\n" headerTag) outf)))
; Create source file containing class definition
(define (make-cpp-def className)
(let* ((prefix (string-downcase className))
(srcName (format #f "~a.cpp" prefix))
(hdrName (format #f "~a.h" prefix))
(outf (open-file srcName "w")))
(display (format #f "#include \"~a\"\n" hdrName) outf)))
; Utility procedure for invoking both action procedures
(define (make-cpp-class className)
(begin
(make-cpp-decl className)
(make-cpp-def className)))
; Get second item on command line and invoke the utility procedure on it.
(make-cpp-class (cadr (command-line)))
Summary
This should be enough to get you started. If you're a GIMP or Emacs user, play around with a few scripts. If you're not, spend a little time with
guile and experiment. Pick a common task that you'd like to automate and see if you can implement it in Scheme. Before long, you'll want to hunker down with one of the resources given above to expand your knowledge. Have fun! | https://www.linux.com/news/its-time-learn-scheme | CC-MAIN-2016-30 | refinedweb | 1,674 | 61.87 |
Find Questions & Answers
Can't find what you're looking for? Visit the Questions & Answers page!
Dear all,
I try to do a File2File mapping. Both systems are SFTP connections to ERP filesystems.
PI 7.5 Java only.
From one source file, I would like to create two target files (multimapping).
First error I got was "InterfaceDetermination did not yield any actual interface" which I could solve by removing the SWK value for sender interface in ICO.
But now the mapping failed for first message, because :
"
This is caused, because the message has now the namespace of the mapping SWK and not from sender interface SWK. If I change the namespace in mapping test tool, the message get mapped without errors.
I already tried to remove namespace from sender interface message type and also changed the interface pattern in sender interface to Stateless XI30 Compatible.
Any ideas how get the sender interface namespace in the message again?
regards
Chris
Found the problem.
I am using content conversion in sender file adapter and I maintained the wrong namespace there. | https://answers.sap.com/questions/144695/multimapping-failed-due-namespace-issue.html | CC-MAIN-2018-39 | refinedweb | 178 | 74.69 |
How to Construct a Function in C Programming
In C programming, all functions are dubbed with a name, which must be unique; no two functions can have the same name, nor can a function have the same name as a keyword.
The name is followed by parentheses, which are then followed by a set of curly brackets. So at its simplest construction, a function looks like this:
type function() { }
In the preceding line, type defines the value returned or generated by a function. Options for type include all the standard C variable types — char, int, float, double — and also void for cheap functions that don’t return anything.
function is the function’s name. It’s followed by a pair of parentheses, which can, optionally, contain values passed to the function. These values are called arguments. Not every function features arguments. Then come the curly brackets and any statements that help the function do its thing.
Functions that return a value must use the return keyword. The return statement either ends the function directly or passes a value back to the statement that called the function. For example:
return;
This statement ends a function and does not pass on a value. Any statements in the function after return are ignored.
return(something);
This statement passes the value of the something variable back to the statement that called the function. The something must be of the same variable type as the function, an int, the float, and so on.
Functions that don’t return values are declared of the void type. Those functions end with the last statement held in the curly brackets; a return statement isn’t required.
One more important thing! Functions must be prototyped in your code. That’s so that the compiler understands the function and sees to it that you use it properly. The prototype describes the value returned and any values sent to the function. The prototype can appear as a statement at the top of your source code. Basic Function; No Return shows an example at Line 3.
BASIC FUNCTION; NO RETURN
#include <stdio.h> void prompt(); /* function prototype */ int main() { int loop; char input[32]; loop=0; while(loop<5) { prompt(); fgets(input,31,stdin); loop=loop+1; } return(0); } /* Display prompt */ void prompt() { printf("C:\DOS> "); }
Exercise 1: Use the source code from Basic Function; No Return to create a new project, ex1001. Build and run.
The program displays a prompt five times, allowing you to type various commands. Of course, nothing happens when you type, although you can program those actions later, if you like. Here’s how this program works in regard to creating a function:
Line 3 lists the function prototype. It’s essentially a copy of the first line of the function (from Line 22), but ending with a semicolon. It can also be written like this:
void prompt(void);
Because the function doesn’t require any arguments (the items in parentheses), you can use the void keyword in there as well.
Line 13 accesses the function. The function is called as its own statement. It doesn’t require any arguments or return any values, and it appears on a line by itself, as shown in the Listing. When the program encounters that statement, program execution jumps up to the function. The function’s statements are executed, and then control returns to the next line in the code after the function was called.
Lines 22 through 25 define the function itself. The function type is specified on Line 22, followed by the function name, and then the parentheses. As with the prototype, you can specify void in the parentheses because no argument is passed to the function.
The function’s sole statement is held between curly brackets. The prompt() function merely outputs a prompt by using the printf() function, which makes it seem like the function isn’t necessary, but many examples of one-line functions can be found in lots of programs.
Exercise 2: Modify the source code from Basic Function; No Return so that the while loop appears in its own function. (Copy Lines 7 through 16 into a new function.) Name that function busy() and have the main() function call it.
C has no limit on what you can do in a function. Any statements you can stuff into the main() function can go into any function. Indeed, main() is simply another function in your program, albeit the program’s chief function.
When declaring an int or char function type, you can also specify signed, unsigned, long, and short, as appropriate.
The main() function has arguments, so don’t be tempted to edit its empty parentheses and stick the word void in there. In other words, this construct is wrong:
int main(void)
The main() function in C has two arguments. It’s possible to avoid Listing them when you’re not going to use them, by keeping parentheses empty.
Other programming languages may refer to a function as a subroutine or procedure. | https://www.dummies.com/programming/c/how-to-construct-a-function-in-c-programming/ | CC-MAIN-2019-51 | refinedweb | 838 | 72.66 |
:-)
IRC, freenode, #hurd, 2013-10-21
<braunr> mhmm, there is a problem with thread destruction
IRC, freenode, #hurd, 2014-01-30
<sjbalaji> can any one exmplain me hello translator ? I am new to hurd <teythoon> sjbalaji: sure, what do you want to know ? <teythoon> how to use it ? <sjbalaji> No I mean what is the main reason of that translator. I am familiar with Linux. <gnu_srs> sjbalaji: start with: <sjbalaji> I ran that example but I am still clueless about the actual reason behind the translators and this simple hello world translator example. <teythoon> sjbalaji: the Hurd is a multiserver os, almost all functionality lives in userspace servers called 'translators' <teythoon> sjbalaji: the Hurd uses the file system as namespace to lookup these servers <teythoon> e.g. /servers/socket/1 is /hurd/pflocal and handles pipes and unix socket communication <sjbalaji> I can see from the example that a hello file is associated with a /hurd/hello translator <teythoon> yes <teythoon> think of translators like FUSE-based filesystems, only more general <teythoon> b/c translators are not restricted to provide filesystem-like services <sjbalaji> So this example hello translator just adds hello world in the associated file, am I correct ? <teythoon> it's not adding stuff to a file <teythoon> say you did settrans -ac /tmp/hi /hurd/hello, if you do cat /tmp/hi, cat does some rpc calls to the started /hurd/hello program that returns 'hello world' as the file contents <teythoon> in a sense /hurd/hello is a 'filesystem' that provides a single file <sjbalaji> So is it like hello is the mount moint for that hello server ? <teythoon> sjbalaji: yes, kind of that, but in a more general sense <sjbalaji> teythoon: How can I see the different servers that are running in the system ? I tried top in the terminal but it returned cannot find /proc/version <teythoon> sjbalaji: so it seems your procfs is not running, try as root: settrans /proc /hurd/procfs -c <sjbalaji> teythoon: But how does one differentiate between a server and a normal process ? <teythoon> one does not <teythoon> for a rule of thumb: anything from /hurd is a translator <teythoon> you can view a nodes passive translator record using showtrans, e.g. showtrans /dev/hd0 <sjbalaji> Is there something like a man page for translators ? Like how to work with them or to figure out what services are offered by them ? <teythoon> well, there is --help <teythoon> also, go to /dev and /servers and look around using showtrans or fsysopts <sjbalaji> teythoon: What is the difference between a nodes active and passive translator ? <teythoon> a passive translator record is stored in the file system for the node <teythoon> if the node is accessed, and no translator is currently running, it is started on demand <teythoon> we call a running translator an active one <sjbalaji> So the hello translator in the example is a passive one ? <teythoon> if you used settrans foo /hurd/hello, a node foo is created with an passive translator record <teythoon> if you used settrans -a foo /hurd/hello, the translator is started immediately <sjbalaji> teythoon: What do you mean by a passive translator record ? <teythoon> sjbalaji: it's an argv-vector encoded in the filesystem (currently, only ext2 supports this) <teythoon> in ext2, it is stored in a block and a os-specific field in the inode points to that block <sjbalaji> teythoon: I can't understand the logic behind that :( <teythoon> this way, the servers are started on demand <sjbalaji> But once they are invoked they are always online after that. 
<teythoon> yes <sjbalaji> I thought that the server goes down once its used <gnu_srs> teythoon: shouldn't the passive ones time out if unused? <teythoon> yes, that's how it was intented to be, but that has been patched-out in debian/hurd <gnu_srs> reason? <teythoon> i don't know the details, but there is a race condition
(
libports_stability.patch.)
IRC, freenode, #hurd, 2014-01-31
<sjbalaji> How can I see the complete control flow when I run the hello translator example ?
IRC, freenode, #archhurd, 2014-02-05
<CalimeroTeknik> plus I discussed quickly that idea with Richard Stallman and he told me translators had a conception flaw that would forbid such a system to be usable
IRC, freenode, #archhurd, 2014-02-06
<antrik_> CalimeroTeknik: the "conceptal problem" rms told you about was probably the simple issue that translators are always followed, even if they are run by another user <antrik> CalimeroTeknik: the conceptal problem is only in that the original designers believed that would be safe, which it isn't. changing that default policy (to be more like FUSE) wouldn't do much harm to the Hurd's features, and it should be easy to do <antrik> it's just nobody bothered so far, because it's not a big deal for typical use cases <antrik> rms isn't really in touch with Hurd development. he was made to believe it was a fundamental issue by a former Hurd developer who got carried away; and he didn't know enough to realise that it's really not a big deal at all
Candidates for Google Summer of Code Project Ideas
Extend
ls et al. for Translators
IRC, freenode, #hurd, 2014-02-08
<youpi> heh <youpi> I was wondering what that incoming/ directory was in my home <youpi> ls gave me hundreds of packages <youpi> then I remembered I had /hurd/httpfs on it :) <cluck> if only there were an easy and automated way to make ls and file managers (like dired!) aware of links, mounts and translators :) <youpi> cluck: what do you mean by "awaree"? <cluck> someting like: lrwxrwxrwx 1 foo foo 31 Aug 21 18:01 my_translator-23.0 -> ../some/fakefs /some_parameters* <cluck> (yes, i realize it goes against some security practices but maybe there could be a distinction like with soft/hard links that made it opaque for some use cases)
Passive Translators
IRC, freenode, #hurd, 2014-02-12
<braunr> well don't expect rsync to save passive translator records .. <braunr> i recommend you save either the entire disk image or the partition <gg0> should i expect it from tar/cp ? <braunr> no <braunr> i'm not even sure dumpe2fs does it <braunr> the only reliable way is to save the block device <azeem> might be a worthwhile GSOC <azeem> "implement Hurd specific features in GNU utilities" <azeem> there were some patches floating around for some things in the past IIRC <antrik> azeem: the plan for supporting Hurd features in FS utilities was exposing them as xattrs, like Roland's Linux patch did... cascardos once did some work on this, but don't remember how far he got
<antrik> you are right though that it would make for a good GSoC project... <antrik> of course, *some* utilities would still benefit from explicit Hurd support -- most notably ls <azeem> IIRC there were also ls patches at one point <antrik> can't recall that... but maybe it was befor my time ;-) | https://www.gnu.org/software/hurd/hurd/translator/discussion.html | CC-MAIN-2014-10 | refinedweb | 1,168 | 55.41 |
This article refers to the following Microsoft .NET Framework Class Library namespace:
- System.Threading
IN THIS TASK
Summary
This step-by-step article shows you how to submit a method to the thread pool for execution.
In the .NET environment, each process has a thread pool that you can use to run methods asynchronously.
The following list outlines the recommended hardware, software, network infrastructure, and service packs that are required:
This article assumes that you are familiar with the following topics:
In the .NET environment, each process has a thread pool that you can use to run methods asynchronously.
Requirements
The following list outlines the recommended hardware, software, network infrastructure, and service packs that are required:
- Microsoft Visual Studio .NET or Microsoft Visual Studio 2005
- The Visual C# programming language
Create a Visual C# Application that Uses the Thread Pool
- Start Microsoft Visual Studio .NET or Microsoft Visual Studio 2005.
- Create a new Visual C# Windows Application project named PoolDemo.
- Use the Toolbox to add a Button control to the form. The default name for the Button control is button1.
- Right-click the form, and then click View Code.
- Paste the following using directive after the existing using directives, but before the declaration of the PoolDemo namespace:
using System.Threading;
- Switch back to Design view, and then double-click button1. Paste the following code in the button1_Click event handler:
private void button1_Click(object sender, System.EventArgs e)
{
WaitCallback wcb = new WaitCallback(GetSysDirSize);
try
{
ThreadPool.QueueUserWorkItem(wcb);
MessageBox.Show("The work item has been placed on the queue");
}
catch (Exception ex)
{
MessageBox.Show("Error: " + ex.Message);
}
}
- Paste the following code within the body of the Form1 class. The GetSysDirSize method calculates the total number of bytes that are stored in the system directory. GetSysDirSize calls another method named DirSize to perform the calculation.
NOTE: This task might take some time to run.
private void GetSysDirSize(object state)
{
long total_length = DirSize(Environment.SystemDirectory);
this.Text = total_length.ToString();
}
private long DirSize(string path)
{
long sz = 0;
System.IO.DirectoryInfo d = new System.IO.DirectoryInfo(path);
// List files.
foreach(System.IO.FileInfo f in d.GetFiles())
{
sz += f.Length;
}
// Recurse into directories.
foreach(System.IO.DirectoryInfo dx in d.GetDirectories())
{
sz += DirSize(dx.FullName);
}
return sz;
}
Test the Sample
- Press CTRL+F5 to run the application.
- When the form appears, click the button. When the The work item has been placed on the queue message box appears, click OK to dismiss the message box and return to the main form. After a short delay, the total file size in the system directory is displayed in the caption of the form. The length of the delay depends on the speed of your computer and the number of files in the system directory. The calculation of file sizes takes place on a thread in the thread pool.
Propriedades
ID do Artigo: 315460 - Última Revisão: 11 de jul de 2008 - Revisão: 1 | https://support.microsoft.com/pt-br/help/315460/how-to-submit-a-work-item-to-the-thread-pool-by-using-visual-c | CC-MAIN-2017-47 | refinedweb | 485 | 59.6 |
Expose an on-premises WCF service to a web application in the cloud by using Azure Relay
This article shows how to build a hybrid cloud application with Microsoft Azure and Visual Studio. You create an application that uses multiple Azure resources in the cloud. This tutorial helps you learn:
- How to create or adapt an existing web service for consumption by a web solution.
- How to use the Azure Windows Communication Foundation (WCF) Relay service to share data between an Azure application and a web service hosted elsewhere.
You do the following tasks in this tutorial:
- Install prerequisites for this tutorial.
- Review the scenario.
- Create a namespace.
- Create an on-premises server.
- Create an ASP .NET application.
- Run the app locally.
- Deploy the web app to Azure.
- Run the app on Azure.
Prerequisites
To complete this tutorial, you need the following prerequisites:
- An Azure subscription. If you don't have one, create a free account before you begin.
- Visual Studio 2015 or later. The examples in this tutorial use Visual Studio 2019.
- Azure SDK for .NET. Install it from the SDK downloads page.
How Azure Relay helps with hybrid solutions
Business solutions are typically composed of a combination of custom code and existing functionality. Custom code tackles new and unique business requirements. Solutions and systems that are already in place provide existing functionality.
Solution architects are starting to use the cloud for easier handling of scale requirements and lower operational costs. In doing so, they find that existing service assets they'd like to use as building blocks for their solutions are inside the corporate firewall and out of easy reach by the cloud solution. Many internal services aren't built or hosted in a way that they can be easily exposed at the corporate network edge.
Azure Relay takes existing WCF web services and makes those services securely accessible to solutions that are outside the corporate perimeter without requiring intrusive changes to the corporate network infrastructure. Such relay services are still hosted inside their existing environment, but they delegate listening for incoming sessions and requests to the cloud-hosted relay service. Azure Relay also protects those services from unauthorized access by using Shared Access Signature (SAS) authentication.
Review the scenario
In this tutorial, you create an ASP.NET website that enables you to see a list of products on the product inventory page.
The tutorial assumes that you have product information in an existing on-premises system, and uses Azure Relay to reach into that system. A web service that runs in a simple console application simulates this situation. It contains an in-memory set of products. You can run this console application on your own computer and deploy the web role into Azure. By doing so, you'll see how the web role running in the Azure datacenter calls into your computer. This call happens even though your computer will almost certainly be behind at least one firewall and a network address translation (NAT) layer.
Set up the development environment
Before you can begin developing Azure applications, download the tools and set up your development environment:
- Install the Azure SDK for .NET from the SDK downloads page.
- In the .NET column, choose the version of Visual Studio you're using. This tutorial uses Visual Studio 2019.
- When prompted to run or save the installer, select Run.
- In the Web Platform Installer dialog box, select Install and continue with the installation.
Once the installation is finished, you have everything necessary to start to develop the app. The SDK includes tools that let you easily develop Azure applications in Visual Studio.
Create a namespace
The first step is to create a namespace, and to obtain a Shared Access Signature (SAS) key. A namespace provides an application boundary for each application exposed through the relay service. An SAS key is automatically generated by the system when a service namespace is created. The combination of service namespace and SAS key provides the credentials for Azure to authenticate access to an application.
Select Create a resource. Then, select Integration > Relay. If you don't see Relay in the list, select See All in the top-right corner.
Select Create, and enter a namespace name in the Name field. Azure portal checks to see if the name is available.
Choose an Azure subscription in which to create the namespace.
For Resource group, choose an existing resource group in which to place the namespace, or create a new one.
Select the country or region in which your namespace should be hosted.
Select Create. The Azure portal creates your namespace and enables it. After a few minutes, the system provisions resources for your account.
Get management credentials
Select All resources, and then choose the newly created namespace name.
Select Shared access policies.
Under Shared access policies, select RootManageSharedAccessKey.
Under SAS Policy: RootManageSharedAccessKey, select the Copy button next to Primary Connection String. This action copies the connection string to your clipboard for later use. Paste this value into Notepad or some other temporary location.
Repeat the preceding step to copy and paste the value of Primary key to a temporary location for later use.
Create an on-premises server
First, you build a simulated on-premises product catalog system. This project is a Visual Studio console application, and uses the Azure Service Bus NuGet package to include the Service Bus libraries and configuration settings.
Start Microsoft Visual Studio as an administrator. To do so, right-click the Visual Studio program icon, and select Run as administrator.
In Visual Studio, select Create a new project.
In Create a new project, select Console App (.NET Framework) for C# and select Next.
Name the project ProductsServer and select Create.
In Solution Explorer, right-click the ProductsServer project, then select Manage NuGet Packages.
Select Browse, then search for and choose WindowsAzure.ServiceBus. Select Install, and accept the terms of use.
The required client assemblies are now referenced.
Add a new class for your product contract. In Solution Explorer, right-click the ProductsServer project and select Add > Class.
In Name, enter the name ProductsContract.cs and select Add.
Make the following code changes to your solution: App.config to open the file in the Visual Studio editor. At the bottom of the
<system.ServiceModel>element, but still within
<system.ServiceModel>, add the following XML code. Be sure to replace
yourServiceNamespacewith the name of your namespace, and
yourKeywith the SAS key you retrieved earlier from the portal:
<system.serviceModel> ... >
Note
The error caused by
transportClientEndpointBehavioris just a warning and isn't a blocking issue for this example.
Still in App.config, in the
<appSettings>element, replace the connection string value with the connection string you previously obtained from the portal.
<appSettings> <!-- Service Bus specific app settings for messaging connections --> <add key="Microsoft.ServiceBus.ConnectionString" value="Endpoint=sb://yourNamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=yourKey"/> </appSettings>
Select Ctrl+Shift+B or select Build > Build Solution to build the application and verify the accuracy of your work so far.
Create an ASP.NET application
In this section, you build a simple ASP.NET application that displays data retrieved from your product service.
Create the project
Ensure that Visual Studio is running as administrator.
In Visual Studio, select Create a new project.
In Create a new project, select ASP.NET Web Application (.NET Framework) for C# and select Next.
Name the project ProductsPortal and select Create.
In Create a new ASP.NET Web Application, choose MVC and select Change under Authentication.
In Change Authentication, choose No Authentication then select OK. For this tutorial, you're deploying an app that doesn't need a user to sign in.
Back in Create a new ASP.NET Web Application, select Create to create the MVC app.
Configure Azure resources for a new web app. Follow the steps in Publish your web app. Then, return to this tutorial and continue to the next step.
In Solution Explorer, right-click Models and then select Add > Class.
Name the class Product.cs, then select Add.
Modify the web application
In the Product.cs file in Visual Studio, replace the existing namespace definition with the following code:
// Declare properties for the products inventory. namespace ProductsWeb.Models { public class Product { public string Id { get; set; } public string Name { get; set; } public string Quantity { get; set; } } }
In Solution Explorer, expand Controllers, then double-click HomeController.cs to open the file in Visual Studio.
In HomeController.cs,, then double-click _Layout.cshtml to open the file in the Visual Studio editor.
Change all occurrences of
My ASP.NET Applicationto Northwind Traders Products.
Remove the
About, and
Contactlinks. In the following example, delete the highlighted code.
In Solution Explorer, expand Views > Home, then double-click Index.cshtml to open the file select Ctrl+Shift+B to build the project.
Run the app locally
Run the application to verify that it works.
- Ensure that ProductsPortal is the active project. Right-click the project name in Solution Explorer and select Set As Startup Project.
- In Visual Studio, select F5.
Your application should appear, running in a browser.
Put the pieces together
The next step is to hook up the on-premises products server with the ASP.NET application.
If it isn't already open, in Visual Studio, open the ProductsPortal project you created in the Create an ASP.NET application section.
Similar to the step in the Create an on-premises server section, add the NuGet package to the project references. In Solution Explorer, right-click the ProductsPortal project, then select Manage NuGet Packages.
Search for WindowsAzure.ServiceBus and select the WindowsAzure.ServiceBus item. Then finish the installation and close this dialog box.
In Solution Explorer, right-click the ProductsPortal project, then select Add > Existing Item.
Navigate to the ProductsContract.cs file from the ProductsServer console project. Highlight ProductsContract.cs. Select the down arrow next to Add, then choose Add as Link.
Now open the HomeController.cs file in the Visual Studio editor and replace the namespace definition with the following code. Be sure to replace
yourServiceNamespacewith the name of your service namespace, and
yourKeywith your SAS key. This code lets access signature the ProductsPortal solution. Make sure to right-click the solution, not the project. Select Add > Existing Project.
Navigate to the ProductsServer project, then double-click the ProductsServer.csproj solution file to add it.
ProductsServer must be running to display the data on ProductsPortal. In Solution Explorer, right-click the ProductsPortal solution and select Properties to display Property Pages.
Select Common Properties > Startup Project and choose Multiple startup projects. Ensure that ProductsServer and ProductsPortal appear, in that order, and that the Action for both is Start.
Select Common Properties > Project Dependencies on the left side.
For Projects, choose ProductsPortal. Ensure that ProductsServer is selected.
For Projects, choose ProductsServer. Ensure that ProductsPortal isn't selected, and then select OK to save your changes.
Run the project locally
To test the application locally, in Visual Studio select F5. The on-premises server, ProductsServer, should start first, then the ProductsPortal application should start in a browser window. This time, you see that the product inventory lists data retrieved from the product service on-premises system.
Select Refresh on the ProductsPortal page. Each time you refresh the page, you see the server app display a message when
GetProducts() from ProductsServer is called.
Close both applications before proceeding to the next section.
Deploy the ProductsPortal project to an Azure web app
The next step is to republish the Azure Web app ProductsPortal front end:
In Solution Explorer, right-click the ProductsPortal project, and select Publish. On the Publish page, select Publish.
Note
You may see an error message in the browser window when the ProductsPortal web project is automatically launched after the deployment. This is expected, and occurs because the ProductsServer application isn't running yet.
Copy the URL of the deployed web app. You'll need the URL later. You can also get this URL from the Azure App Service Activity window in Visual Studio:
Close the browser window to stop the running application.
Before running the application in the cloud, you must ensure that ProductsPortal is launched from within Visual Studio as a web app.
In Visual Studio, right-click the ProductsPortal project and select Properties.
Select Web. Under Start Action, choose Start URL. Enter the URL for your previously deployed web app, in this example,.
Select File > Save All.
Select Build > Rebuild Solution.
Run the application
Select F5 to build and run the application. The on-premises server, which is the ProductsServer console application, should start first, then the ProductsPortal application should start in a browser window, as shown here:
The product inventory lists data retrieved from the product service on-premises system, and displays that data in the web app. Check the URL to make sure that ProductsPortal is running in the cloud, as an Azure web app.
Important
The ProductsServer console application must be running and able to serve the data to the ProductsPortal application. If the browser displays an error, wait a few more seconds for ProductsServer to load and display the following message, then refresh the browser.
In the browser, refresh the ProductsPortal page. Each time you refresh the page, you see the server app display a message when
GetProducts() from ProductsServer is called.
Next steps
Advance to the following tutorial: | https://docs.microsoft.com/en-us/azure/azure-relay/service-bus-dotnet-hybrid-app-using-service-bus-relay | CC-MAIN-2021-49 | refinedweb | 2,218 | 59.8 |
- Context Actions
- Highlighting paired items
Code completion
In C++ files, you can use automatic and basic (Ctrl+Space) completion when writing your code. For example, you can quickly add enum members taken from a different namespace:
When you call a method, you can always type a dot (
.) or an arrow (
->)
and get all available methods in the completion list.
The methods that do not match
./
-> are displayed in grey. If you choose such method, the
./
-> will be corrected automatically:
Generative completion suggestion are also available. For example, when you call a member function declaration on the object...
ReSharper generates the following function:
Syntax highlighting | https://www.jetbrains.com/help/resharper/2016.2/Coding_Assistance_in_CPP.html | CC-MAIN-2022-05 | refinedweb | 104 | 59.9 |
I'm having a strange problem with typescript interfaces. Because I'm using mongoose models I need to define one, but for some reason it's not recognising things that I have explicitly imported. This part works fine:
export interface ITrip extends mongoose.Document {
//
}
export var TripSchema = new mongoose.Schema({
//
});
export var Trip = mongoose.model<ITrip>('Trip', TripSchema);
import {Trip, ITrip} from '../trips/trip.model';
export interface IFeed extends mongoose.Document {
lastSnapshot: {
trips: [Trip]
}
}
feed.ts(12,13): error TS2304: Cannot find name 'Trip'.
trips: [Trip]
var a = new Trip({});
Trip isn't a type, it's a variable, so you can do this:
let t = Trip; let t2 = new Trip({});
But you can't do this:
let t: Trip;
You should change it to
typeof Trip:
export interface IFeed extends mongoose.Document { lastSnapshot: { trips: [typeof Trip] } }
Also, if you want
IFeed.lastSnapshot.trips to be an array, then it should be:
trips: typeof Trip[]
What you declared is a tuple of one item.
With an object the assignment is always the same (both js and ts):
let o = { key: "value" }
But when declaring types in typescript then you're not dealing with values:
interface A { key: string; } let o: A = { key: "value" }
In the mongoose documentation they are using only javascript so all of their examples don't include the type declarations. | https://codedump.io/share/vk0MV5hpL0TK/1/use-imports-inside-interface-definition | CC-MAIN-2018-22 | refinedweb | 223 | 64.2 |
(For more resources related to this topic, see here.)
A health check is a runtime test for our application. We are going to create a health check that tests the creation of new contacts using the Jersey client.
The health check results are accessible through the admin port of our application, which by default is 8081.
How to do it…
To add a health check perform the following steps:
- Create a new package called com.dwbook.phonebook.health and a class named NewContactHealthCheck in it:
import javax.ws.rs.core.MediaType; import com.codahale.metrics.health.HealthCheck; import com.dwbook.phonebook.representations.Contact; import com.sun.jersey.api.client.*; public class NewContactHealthCheck extends HealthCheck { private final Client client; public NewContactHealthCheck(Client client) { super(); this.client = client; } @Override protected Result check() throws Exception { WebResource contactResource = client .resource(""); ClientResponse response = contactResource.type( MediaType.APPLICATION_JSON).post( ClientResponse.class, new Contact(0, "Health Check First Name", "Health Check Last Name", "00000000")); if (response.getStatus() == 201) { return Result.healthy(); } else { return Result.unhealthy("New Contact cannot be created!"); } } }
- Register the health check with the Dropwizard environment by using the HealthCheckRegistry#register() method within the #run() method of the App class. You will first need to import com.dwbook.phonebook.health.NewContactHealthCheck. The HealthCheckRegistry can be accessed using the Environment#healthChecks() method:
// Add health checks e.healthChecks().register ("New Contact health check", new NewContactHealthCheck(client));
- After building and starting your application, navigate with your browser to:
The results of the defined health checks are presented in the JSON format. In case the custom health check we just created or any other health check fails, it will be flagged as "healthy": false, letting you know that your application faces runtime problems.
How it works…
We used exactly the same code used by our client class in order to create a health check; that is, a runtime test that confirms that the new contacts can be created by performing HTTP POST requests to the appropriate endpoint of the ContactResource class. This health check gives us the required confidence that our web service is functional.
All we need for the creation of a health check is a class that extends HealthCheck and implements the #check() method. In the class's constructor, we call the parent class's constructor specifying the name of our check—the one that will be used to identify our health check.
In the #check() method, we literally implement a check. We check that everything is as it should be. If so, we return Result.healthy(), else we return Result.unhealthy(), indicating that something is going wrong.
Summary
This article showed what a health check is and demonstrated how to add a health check. The health check we created tested the creation of new contacts using the Jersey client.
Resources for Article:
Further resources on this subject:
- RESTful Web Services – Server-Sent Events (SSE) [Article]
- Connecting to a web service (Should know) [Article]
- Web Services and Forms [Article] | https://www.packtpub.com/books/content/adding-health-checks | CC-MAIN-2016-44 | refinedweb | 492 | 57.77 |
Hi,
I'm trying to import the groovy-json-2.0.1 library in a script unit.
I added the jar file in <project_path>/Web Content/WEB-INF/lib as suggested in a previous topic of this forum but after importing it in script unit with the instruction import groovy.json, I have the following error in log file unable to resolve class groovy.json
Hi,
you should try to import all the classes of the library
import groovy.json.*;
Alternatively, if the Groovy script will use specific classes you can import only the needed ones. For example:
import groovy.json.<class1>;
import groovy.json.<class2>;
where <class1> and <class2> are the names of a Java classes.
Has anyone found a solution to this? I have the same error for the groovy.sql.Sql class. The groovy-all jar file is in my <project>/WEB-INF/lib folder, but if I try to use the import in a template file class for a custom unit the "unable to resolve ..." error message is given upon generation of the project.
Interesting though I can add import statements for java.sql.Connection, etc. What is the difference for the class loader for script units versus unit layout templates using scriptlets> | https://my.webratio.com/forum/question-details/importing-jar-library;jsessionid=A11F6B6BA88DA51C0C68FA0926D7E36B?nav=43&link=oln15x.redirect&kcond1x.att11=558 | CC-MAIN-2021-17 | refinedweb | 207 | 67.55 |
Given a string, write a function that will find the second most frequent character
Example
INPUT :
s = “aabbbc”
OUTPUT :
‘a’ is the second most frequent character
Time Complexity : O(n)
Algorithm
1. Scan the input string and construct a character count array from input string
ie, In the above example,
count of a is 2, so count[‘a’] = 2
count of b is 3, so count[‘b’] = 3
count of c is 1, so count[‘c’] = 1
2. Now, find the second largest value in count array
C++ Program
#include <bits/stdc++.h> #define NO_OF_CHARS 256 using namespace std; void secondFreqChar(string s) { int count[NO_OF_CHARS] = {}; for(int i=0; i<s.size(); i++) { count[s[i]]++; //increment the count of each character by using ASCII of character as key } //Finding the second largest number in count array int first = 0, second =0; for(int i=0; i < NO_OF_CHARS; i++) { //If the current char count is less than first, then change both variables if(count[i] > count[first]) // { second = first; first = i; } //If it is inbetween first and second else if(count[i] > count[second] && count[i] != count[first]) { second = i; } } if (second != '\0') { cout<<"second most frequent character is "<<char(second)<<endl; } else//if there is no second frequent character { cout<<"there is no second most frequent character"<<endl; } } int main() { string s = "tut"; cout<<"Input string is "<<s<<endl; secondFreqChar(s); } | https://www.tutorialcup.com/interview/string/second-frequent-character-2.htm | CC-MAIN-2020-10 | refinedweb | 233 | 50.91 |
Learn the Basics of Cookiecutter by Creating a Cookiecutter¶
The easiest way to understand what Cookiecutter does is to create a simple one and see how it works.
Cookiecutter takes a source directory tree and copies it into your new
project. It replaces all the names that it finds surrounded by templating
{{ and
}} with names that it finds in the file
cookiecutter.json. That’s basically it. [1]
The replaced names can be file names, directory names, and strings inside files.
With Cookiecutter, you can easily bootstrap a new project from a standard form, which means you skip all the usual mistakes when starting a new project.
Before you can do anything in this example, you must have Python installed on
your machine. Go to the Python Website and follow
the instructions there. This includes the
pip installer tool. Now run:
$ pip install cookiecutter
Your First Cookiecutter¶
To get started, create a directory somewhere on your computer. The name of
this directory will be the name of your Cookiecutter template, but it doesn’t
constrain anything else—the generated project doesn’t need to use the
template name, for example. Our project will be called
HelloCookieCutter1:
$ mkdir HelloCookieCutter1 $ cd HelloCookieCutter1
Inside this directory, we create the directory tree to be copied into the generated project. We want to generate a name for this directory, so we put the directory name in templating tags:
$ mkdir {{cookiecutter.directory_name}} $ cd {{cookiecutter.directory_name}}
Anything inside templating tags can be placed inside a namespace. Here, by
putting
directory_name inside the
cookiecutter namespace,
cookiecutter.directory_name will be looked up from the
cookiecutter.json
file as the project is generated by Cookiecutter.
Now we are inside the directory tree that will be copied. For the simplest
possible Cookiecutter template, we’ll just include a single file. Again, we
want the file name to be looked up from
cookiecutter.json, so we name it
appropriately:
$ touch {{cookiecutter.file_name}}.py
(
touch creates an empty file; you can just open it up in your editor). Now
edit the file so it contains:
print("Hello, {{cookiecutter.greeting_recipient}}!")
To finish, we create the
cookiecutter.json file itself, so that
Cookiecutter can look up all our templated items. This file goes in our
HelloCookieCutter1 directory, and contains all the names we’ve used:
{ "directory_name": "Hello", "file_name": "Howdy", "greeting_recipient": "Julie" }
Now we can actually run Cookiecutter and create a new project from our
template. Move to a directory where you want to create the new project. Then
run Cookiecutter and hand it the directory where the template lives. On my
(Windows, so the slashes go back instead of forward) machine, this happens to
be under the
Git directory:
$ cookiecutter C:\Users\bruce\Documents\Git\HelloCookieCutter1 directory_name [Hello]: file_name [Howdy]: greeting_recipient [Julie]:
Cookiecutter tells us what the default name for each item is, and gives us the
option of replacing that name with something new. In this case, I just pressed
Return for each one, to accept all the defaults.
Now we have a generated directory called
Hello, containing a file
Howdy.py. When we run it:
$ python Howdy.py Hello, Julie!
Voila! Instant generated project!
Note: The project we’ve created here happens to be Python, but
Cookiecutter is just replacing templated items with names it looks up in
cookiecutter.json, so you can produce projects of any kind, including
projects that aren’t programs.
This is nice, but what if you want to share your Cookiecutter template with
everyone on the Internet? The easiest way is to upload it to a version control
repository. As you might have guessed by the
Git subdirectory, this
example is on GitHub. Conveniently, Cookiecutter can build a project directly
from an internet repository, like the one for this very example. For variety,
this time we’ll replace the values from
cookiecutter.json with our own:
$ cookiecutter Cloning into 'HelloCookieCutter1'... remote: Counting objects: 37, done. Unpacking objects: 21% (8/37) remote: Total 37 (delta 19), reused 21 (delta 3), pack-reused 0 Unpacking objects: 100% (37/37), done. Checking connectivity... done. directory_name [Hello]: Fabulous file_name [Howdy]: Zing greeting_recipient [Julie]: Roscoe $ cd Fabulous $ python Zing.py Hello, Roscoe!
Same effect, but this time produced from the Internet! You’ll notice that even
though it says
Cloning into 'HelloCookieCutter1'..., you don’t see any
directory called
HelloCookieCutter1 in your local directory. Cookiecutter
has its own storage area for cookiecutters, which is in your home directory
in a subdirectory called
.cookiecutters (the leading
. hides the directory
on most operating systems). You don’t need to do anything with this directory
but it can sometimes be useful to know where it is.
Now if you ever find yourself duplicating effort when starting new projects, you’ll know how to eliminate that duplication using cookiecutter. But even better, lots of people have created and published cookiecutters, so when you are starting a new project, make sure you look at the list of pre-defined cookiecutters first! | https://cookiecutter.readthedocs.io/en/1.7.0/first_steps.html | CC-MAIN-2020-10 | refinedweb | 823 | 57.27 |
How can I expand contents from a zipTree in a way that only the contents of a given sub directory gets extracted
Use an include filter:
task unzip(type: Copy) {
from zipTree(zipFile)
into "unzipped"
include "dir/to/unzip/**"
}
Hi Peter thank you this has but this does unzip's but it extracts the all directory tree and not the contents from that given subdirectory.
Say if the contents of the zip where:
/A---+
B---C
D----E
> If the include is "A/D", I was expecting to have only E F In the output
That's not how include works. Ideally, you could achieve your goal with rename. However, rename currently operates on file names rather than file paths. What you can do is to add another copy task/action that copies everything below A/D to a new place. Or, code a solution based on _zipTree(zipFile).visit
_.
Thank you, Peter
Vote++ for a "rename" variant that can operate on full file paths
You can use the `eachFile {}` hook to do this.
task unzip(type: Copy) {
from zipTree(zipFile)
into "unzipped"
eachFile { FileCopyDetails fcp ->
if (fcp.relativePath.pathString.startsWith("dir/to/unzip/")) {
// remap the file to the root
fcp.relativePath = new RelativePath(fcp.file, fcp.relativePath.segments[3..-1])
} else {
fcp.exclude()
}
}
}
That's untested, but it should be pretty close.
docs:
[1]...
[2]...
----------------------------------------------------------------------------------------
[1]
[2]
@Luke
it is necessary to coerce the segments back into a String Array after removing elements. The code below works but has the unintended side effect of also copying the original directory structure with no files in it. Any thoughts?
eachFile { FileCopyDetails fcp ->
if (fcp.relativePath.pathString.startsWith(appName)) {
// remap the file to the root
def segments = fcp.relativePath.segments
def pathsegments =segments[1..-1] as String[]
fcp.relativePath = new RelativePath(!fcp.file.isDirectory(), pathsegments)
}
else {
fcp.exclude()
}
}
You need to specify to ignore empty dirs:
task unzip(type: Copy) {
...
includeEmptyDirs = false
}
We actually do have directories in the structure that are empty that we would like to unzip. Any Ideas?
This is a limitation of the API. You could keep track of what all the directories will be in the eachFile, and use a `doLast
`.
I'm hitting this same issue. Utility could be improved if the zipTree method took argument for a directory within the archive from which to root the tree. For example:
task unzip(type: Copy) {
from zipTree(zipFile, root: 'directory/in/the/zip/to/consider/as/root')
into destDir
include 'paths/relative/to/root'
}
I encounter this issue almost every time I try to pull a new tool into gradle land. Coming up on 2 years old. Here's how to solve this using Gulp:!
+1 I still believe this is a valuable use case.
+1, I do too. I don't understand why it should not be possible except for the extra effort to support it. Currently I typically extract everything and use only parts, wasting a lot of disk space.
For all tracking this, I've made a new GitHub issue at. | https://issues.gradle.org/browse/GRADLE-3025.html | CC-MAIN-2021-31 | refinedweb | 508 | 66.54 |
Ruby Basic Exercises: Compute the sum of the two integers
Ruby Basic: Exercise-22 with Solution
Write a Ruby program to compute the sum of the two integers, if the two values are equal return double their sum otherwise return their sum.
Ruby Code:
def sum_double(x, y) x == y ? (x+y)*2 : x+y end print sum_double(5, 5),"\n" print sum_double(4, 5)
Output:
20 9
Flowchart:
Ruby Code Editor:
Contribute your code and comments through Disqus.
Previous: Write a Ruby program to check if a number is within 10 of 100 or 200.
Next: Write a Ruby program to print "Ruby Basic Exercises" 9 times. | https://www.w3resource.com/ruby-exercises/basic/ruby-basic-exercise-22.php | CC-MAIN-2021-21 | refinedweb | 108 | 59.64 |
Workshop
The Workshop provides quiz questions to help you solidify your understanding of the material covered today as well as exercises to give you experience using what you have learned. Try to understand the quiz and exercises before continuing to tomorrow's lesson. Answers are provided in Appendix A, "Answers to Quizzes and Exercises."
Quiz
1. What is the common language runtime?
2. What is an assembly?
3. What is a namespace?
4. What is Internet Information Services, and what's it used for?
5. What makes a Web page dynamic?
6. What is a Web Service?
7. What's the difference between a Web Service and a Web server? How are they related?
Exercises
Does your Internet Information Services Web server have help files installed? Locate the virtual directories that contain these help files and take some time to browse through them.
Create a project directory for tomorrow's lesson on your computer's hard drive. Next, create a virtual directory that "points" to the project directory just created.
Try adding some HTML files to a virtual directory on your computer. If you aren't familiar with HTML, look for HTML documentation on the Internet, using a search site such as or. | https://www.informit.com/articles/article.aspx?p=25755&seqNum=10 | CC-MAIN-2021-43 | refinedweb | 203 | 59.7 |
Filesystems HOWTO
Martin Hinner <[email protected]>
Version 0.7.5, 22 August 2000
This small HOWTO is about filesystems and accessing filesystems. It is
not Linux- or Unix-related document as you probably expect. You can
find there also a lot of interesting information about non-Unix
(file)systems, but Unix is my primary interest :-)
Table of Contents
1.1 Copyright
1.2 Filesystems mailing-list
1.2.1 Linux kernel filesystems mailing-list
1.2.2 FreeBSD filesystems mailing-list
1.3 Filesystems collection at metalab.unc.edu
1.4 Credits
1.5 Filesystems accessibility map
1.6 Introduction to contiguous allocation filesystems
1.7 Introduction to linked-list allocation filesystems
1.8 Introduction to FAT-based filesystems
1.9 Introduction to Inode filesystems
1.10 Introduction to extent filesystems
1.11 Introduction to filesystems using balanced trees
1.12 Introduction to logging/journaling filesystems
1.13 Other filesystem features
1.13.1 Quota
1.13.2 Snapshot
1.13.3 ACLs
2.1 PC Partitions
2.1.1 GNU parted
2.1.2 Repairing corrupted partition table
2.1.2.1 Fixdisktable
2.1.2.2 gpart
2.1.2.3 rescuept
2.1.2.4 findsuper
2.2 Other partitions
2.3.3 SCO OpenServer disklabel
2.3.4 Sun Solaris disklabel
2.4 Windows NT volumes
2.4.1 Repairing "fault tolerant" NTFS disks using FTEdit
2.5 MD - Multiple Devices driver for Linux
2.6 LVM - Logical Volume Manager (HP-UX LVM?)
2.7 VxVM - Veritas Volume Manager
2.8 IBM OS/2 LVM
2.9 StackVM
2.10 Novell NetWare volumes
3.1 VFAT: Long filenames
3.2 UMSDOS: Linux LFN/attributes on FAT filesystem
3.3 OS/2 Extended Attributes on FAT filesystems
3.4 Star LFN
3.5 Accessing VFAT from OS/2 (VFAT-OS2)
3.6 Accessing VFAT from DOS (LFNDOS driver)
3.7 Accessing VFAT from DOS (Free LFNDOS driver)
3.8 Accessing VFAT from DOS (Odi's LFN tools)
3.9 Accessing FAT32 from OS/2 (FAT32.IFS)
3.10 Accessing FAT32 from Windows NT 4.0
3.11 Accessing FAT32 from Windows NT 4.0
3.12 Accessing Stac/Dblspaced/Drvspaced drives from Linux (DMSDOS)
3.13 Accessing Dblspaced/Drvspaced drives from Linux (thsfs)
3.14 Fsresize - FAT16/32 resizer
3.15 FIPS - FAT16 resizer
4.1 Accessing HPFS from DOS (iHPFS)
4.2 Accessing HPFS from DOS (hpfsdos)
4.3 Accessing HPFS from DOS (hpfsa)
4.4 Accessing HPFS from DOS (amos)
4.5 Accessing HPFS from Linux
4.6 Accessing HPFS from FreeBSD
4.7 Accessing HPFS from Windows NT 3.5
4.8 Accessing HPFS from Windows NT 4
5.1 Accessing NTFS from DOS (NTFSDOS.EXE)
5.2 Accessing NTFS from DOS (ntpwd)
5.3 Accessing NTFS from OS/2
5.4 Accessing NTFS from Linux
5.5 Accessing NTFS from FreeBSD and NetBSD
5.6 Accessing NTFS from BeOS
5.7 Accessing NTFS from BeOS (another)
5.8 Repairing NTFS using NTFSDOS Tools
5.9 Repairing NTFS using NTRecover
6.1 Extended filesystem (ExtFS)
6.2 Second Extended Filesystem (Ext2 FS)
6.2.1 Motivations
6.2.2 ``Standard'' Ext2fs features
6.2.3 ``Advanced'' Ext2fs features
6.2.4 Physical Structure
6.2.5 Performance optimizations
6.3 Third Extended Filesystem (Ext3 FS)
6.4 E2compr - Ext2fs transparent compression
6.5 Accessing Ext2 from DOS (Ext2 tools)
6.6 Accessing Ext2 from DOS, Windows 9x/NT and other Unixes (LTools)
6.7 Accessing Ext2 from OS/2
6.8 Accessing Ext2 from Windows 95/98 (FSDEXT2)
6.9 Accessing Ext2 from Windows 95 (Explore2fs)
6.10 Accessing Ext2 from Windows NT (ext2fsnt)
6.11 Accessing Ext2 from BeOS
6.12 Accessing Ext2 from MacOS (MountX)
6.13 Accessing Ext2 from MiNT
6.14 Ext2fs defrag
6.15 Ext2fs resize
6.16 Ext2end
6.17 Repairing/analyzing/creating Ext2 using E2fsprogs
6.18 Ext2 filesystem editor - Ext2ed
6.19 Linux filesystem editor - lde
6.20 Ext2 undelete utilities
7.1 Accessing HFS from Linux
7.2 Accessing HFS from OS/2 (HFS/2)
7.3 Accessing HFS from Windows 95/98/NT (HFV Explorer)
7.4 Accessing HFS from DOS (MAC-ETTE)
7.5 HFS utils
7.6 MacFS: A Portable Macintosh File System Library
8.1 RockRidge extensions
8.2 Joliet extensions
8.3 Hybrid CD-ROMs
8.4 Novell NetWare indexes on ISO9660
8.5 Accessing Joliet from Linux
8.6 Accessing Joliet from BeOS
8.7 Accessing Joliet from OS/2
8.8 Accessing Audio CD as filesystem from Linux
8.9 Accessing Audio CD as filesystem from BeOS
8.10 Accessing all tracks from Linux (CDfs)
8.11 Creating Hybrid CD-ROMs (mkhybrid)
9.1 ADFS - Acorn Disc File System
9.2 AFFS - Amiga fast filesystem
9.3 BeFS - BeOS filesystem
9.4 BFS - UnixWare Boot Filesystem
9.5 CrosStor filesystem
9.6 DTFS - Desktop filesystem
9.7 EFS - Enhanced filesystem (Linux)
9.8 EFS - Extent filesystem (IRIX)
9.8.1 EFS and FFS library, libfs
9.9 FFS - BSD Fast filesystem
9.10 GPFS - General Parallel Filesystem
9.11 HFS - HP-UX Hi performance filesystem
9.12 HTFS - High throughput filesystem
9.13 JFS - Journaled filesystem (HP-UX, AIX, OS/2 5, Linux)
9.14 LFS - Linux log structured filesystem
9.15 MFS - Macintosh filesystem
9.16 Minix filesystem
9.17 NWFS - Novell NetWare filesystem
9.17.1 NetWare filesystem / 286
9.17.2 NetWare filesystem / 386
9.17.3 Accessing NWFS-386 from Linux
9.18 NSS - Novell Storage Services
9.19 ODS - On Disk Structure filesystem
9.20 QNX filesystem
9.21 Reiser filesystem
9.22 RFS (CD-ROM Filesystem)
9.23 RomFS - Rom filesystem
9.24 SFS - Secure filesystem
9.25 Spiralog filesystem (OpenVMS)
9.26 System V and derived filesystems
9.26.1 AFS - Acer Fast Filesystem
9.26.2 EAFS - Extended Acer Fast Filesystem
9.26.3 Coherent filesystem
9.26.4 S5
9.26.5 S51K - SystemV 1K
9.26.6 Version 7 filesystem
9.26.7 Xenix filesystem
9.27 Text - (Philips' CD-ROM Filesystem)
9.28 UDF - Universal Disk Format (DVD-ROM filesystem)
9.29 UFS
9.30 V7 Filesystem
9.31 VxFS - Veritas filesystem (HP-UX, SCO UnixWare, Solaris)
9.31.1 VxTools
9.32 XFS - Extended filesystem (IRIX)
9.33 Xia FS
10.1 Backing up raw partitions using DBsnapshot
11.1 Network filesystems
11.1.1 AFS - Andrew Filesystem
11.1.2 CODA
11.1.3 NFS - Network filesystem (Unix)
11.1.4 NCP - NetWare Core Protocol (Novell NetWare)
11.1.5 SMB - Session Message Block (Windows 3.x/9x/NT)
11.1.6 Intermezzo
11.2 Encrypted filesystems
11.2.1 CFS
11.2.2 TCFS
11.2.3 SFS
11.2.4 VS3FS: Steganographic File System for Linux
11.3 Filesystem benchmarking utilities
11.3.1 IOzone
11.4 Writing your own filesystem driver
11.4.1 DOS
11.4.2 OS/2
11.4.3 Windows NT
11.5 Related documents
The
<>.
1.1. Copyright
The Filesystems HOWTO, Copyright (c) 1999 Martin Hinner
<[email protected]>..
1.2. Filesystems mailing-list".
1.2.1. Linux kernel filesystems mailing-list
To join Linux kernel filesystems mailing list linuxfsdev
@vger.rutgers.edu, send e-mail to [email protected]. Put
"subscribe linux-fsdev" in message body.
1.2.2. FreeBSD filesystems mailing-list
To join techical FreeBSD filesystems mailing list freebsdfs
@FreeBSD.org, send e-mail to [email protected]. Put "subscribe
freebsd-fs" in message body.
1.3. Filesystems collection at metalab.unc.edu
Filesystems collection is FTP/WWW site providing useful information
about filesystems and filesystem-related programs and drivers. It
lives at <>, or FTP-only at
<>.
1.4. Credits:
Many thanks to the above people. If I have forgotten anyone, please
let me know.
1.5. Filesystems accessibility map''
1.6. Introduction to contiguous allocation filesystems
Some contiguous filesystems: ``BFS'', ``ISO9660 and extensions''.
1.7. Introduction to linked-list allocation filesystems
1.8. Introduction to FAT-based filesystems
(todo)
Some FAT filesystems: ``FAT12/16/32, VFAT'' and ``NetWare filestem''.
1.9. Introduction to Inode filesystems
1.10. Introduction to extent filesystems
Some 'extent' filesystems: ``EFS'' and ``VxFS''.
1.11. Introduction to filesystems using balanced trees
Some filesystems which use B+ trees: ``HFS'', ``NSS'', ``Reiser FS''
and ``Spiralog filesystem''.
1.12. Introduction to logging/journaling filesystems''.
1.13. Other filesystem features
1.13.1. Quota
1.13.2. Snapshot
1.13.3. ACLs
2. Volumes
2.1. PC Partitions
2.1.1. GNU parted
GNU Parted is a program for creating, destroying, resizing,
checking and copying partitions, and the filesystems on them..
2.1.2. Repairing corrupted partition table
2.1.2.1. Fixdisktable
This is a utility that handles ext2, FAT, NTFS, ufs, BSD disklabels
(but not yet old Linux swap partitions); it actually will rewrite
the partition table, if you give it permission.
2.1.2.2. gpart
GPART is a utility that handles ext2, FAT, Linux swap, HPFS, NTFS,
FreeBSD and Solaris/x86 disklabels, minix, reiser fs; it prints a
proposed contents for the primary partition table, and is welldocumented.
2.1.2.3. rescuept
Recognizes ext2 superblocks, FAT partitions, swap partitions, and
extended partition tables; it may also recognize BSD disklabels and
Unixware 7 partitions. It prints out information that can be used
with fdisk or sfdisk to reconstruct the partition table. It is in
the non-installed part of the util-linux distribution.
2.1.2.4. findsuper
Small utility that finds blocks with the ext2 superblock signature,
and prints out location and some info. It is in the non-installed
part of the e2progs distribution.
2.2. Other partitions
Because I use only Intel x86 machines, any contributions (or non-x86
machine donation ;-) ) are very welcome. If you can provide any useful
information, don't hesitate to mail me..
2.3.3. SCO OpenServer disklabel
2.3.4. Sun Solaris disklabel
2.4. Windows NT volumes
This linux-kernel driver allows you to access and mount linear and
stripe set volumes.
2.4.1. Repairing "fault tolerant" NTFS disks using FTEdit
If you have a Windows NT Workstation or Server configured for fault
tolerant (FT) partitions (such as stripes with parity and volume
sets), and those partitions are inaccessible and appear in Disk
Administrator as type Unknown, you can possibly make them
accessible again by using the utility FTEDIT.
2.5. MD - Multiple Devices driver for Linux
This driver lets you combine several hard disk partitions into one
logical block device. This can be used to simply append one
partition to another one or to combine several redundant hard disks
to a RAID1/4/5 device so as to provide protection against hard disk
failures. This is called "Software RAID" since the combining of the
partitions is done by the kernel.
2.6. LVM - Logical Volume Manager (HP-UX LVM?)
Linux implementation is available here:
2.7. VxVM - Veritas Volume Manager
For more information about Veritas Volume Manager see
<>.
See also: ``VxFS (Veritas Journaling Filesystem)''.
2.8. IBM OS/2 LVM
Logical Volume Manager is available in OS/2 WarpServer 5. It allows
you to create linear volumes on several disks/partitions. Some people
say that it's compatible with IBM AIX Logical Volume Manager.
See also: ``HPFS'', ``JFS''
2.9. StackVM <>.
2.10. Novell NetWare volumes
NetWare volumes are used for NWFS-386 filesystem.
3. DOS FAT 12/16/32, VFAT
3.1. VFAT: Long filenames.
3.2. UMSDOS: Linux LFN/attributes on FAT filesystem>.
3.3. OS/2 Extended Attributes on FAT filesystems.
3.4. Star LFN
<>.
3.5. Accessing VFAT from OS/2 (VFAT-OS2)
VFAT-OS2 is a package that will allow OS/2 to seamlessly access
Windows 95 VFAT formatted partitions from OS/2 as if they were
standard OS/2 drive letters. The ultimate aim of this package is to
be able to use the VFAT file system as a replacement of FAT. It can
now also access NTFS partitions in read-only mode.
3.6. Accessing VFAT from DOS (LFNDOS driver)
Some people say that Microsoft has released a driver called LFNDOS
that provides the Microsoft Long Filename API under DOS. If you know
where can this driver be downloaded, send me e-mail please.
3.7. Accessing VFAT from DOS (Free LFNDOS driver)
LFNDOS provides the Windows95 Long Filename (LFN) API to DOS
programs. It uses the same format for storing the names on disk as
Windows95 does, so you can view and use long filenames under both
systems interchangeably. It runs as a memory-resident program, and
while resident requires about 60k of conventional memory..
3.8. Accessing VFAT from DOS (Odi's LFN tools)
These tools provide easy file management under DOS with long
filenames created by Windows 95/98 on FAT32, FAT16 and FAT12 file
systems. Typing LDIR brings up the directory with its long
filenames. Copying a file with LCOPY preserves long filenames. You
can even create directories (LMD) with long names or rename files
(LREN) with long names.
3.9. Accessing FAT32 from OS/2 (FAT32.IFS)
FAT32.IFS for OS/2 will allow you to access FAT32 partitions from
OS/2. You cannot create FAT32 partitions, you'll still need Win95
OSR2 to do that. Also, OS/2s CHKDSK cannot fix all possible errors
that can occur, you'll have to use Windows 95 Scandisk to fix
certain errors.
3.10. Accessing FAT32 from Windows NT 4.0
FAT32 filesystem driver for NT 4.0 and NT 3.51.
3.11. Accessing FAT32 from Windows NT 4.0
This is a FAT32 file system driver for Windows NT(R) 4.0. Once
installed, any FAT32 drives present on your system will be fully
accessible as native Windows NT volumes. Free version provides
read-only capabilities. A read/write version is for sale.
3.12.
Accessing Stac/Dblspaced/Drvspaced drives from Linux (DMSDOS)
DMSDOS reads and writes compressed DOS filesystems (CVF-FAT). The
following configurations are supported:
It works with FAT32, NLS, codepages (tested with fat32 patches
version 0.2.8 under Linux 2.0.33 and with fat32 in standard 2.1.xx
kernels and 2.0.34+35). Dmsdos can run together with vfat or umsdos
for long filenames. It has been redesigned to be ready for SMP and
should now compile completely under libc6.
3.13.
Accessing Dblspaced/Drvspaced drives from Linux (thsfs)
3.14. Fsresize - FAT16/32 resizer
Resizes FAT16/FAT32 filesystems. It doesn't require any other
programs (like a defrager). It has --backup and --restore options,
so if there's a power failure, (or a bug), you can always go back.
The backup files are usually < 1 meg.
The author probably won't be releasing any more versions of fsresize,
because he is working on parted - a Partition Magic clone. It will be
able to resize, copy, create and check filesystems/partitions.
3.15. FIPS - FAT16 resizer
Good HPFS links:
4.1. Accessing HPFS from DOS (iHPFS)
iHPFS makes possible for OS/2 users to use their HPFS partitions when
they boot plain DOS. The HPFS partition is assigned a drive letter,
and can be accessed like any DOS drive.iHPFS is restricted to readonly
access.
This program is no longer being developed, because author doesn't use
OS/2. If you are willing to maintain the program, let him know.
4.2. Accessing HPFS from DOS (hpfsdos)
4.3. Accessing HPFS from DOS (hpfsa)
4.4. Accessing HPFS from DOS (amos)
4.5. Accessing HPFS from Linux
This driver is part of Linux kernel (2.1.x+). It can read and write
to HPFS partions. Access rights and owner can be stored in extended
attributes. Few bugs in original read-only HPFS are corrected. It
supports HPFS386 on Warp Server Advanced.).
4.6. Accessing HPFS from FreeBSD
Driver allows to mount HPFS volume into Unix namespace. ReadOnly
access is only supported for now.
4.7. Accessing HPFS from Windows NT 3.5
This program will edit the Windows NT registry and enable HPFS
support. Pinball.sys is the HPFS filesystem driver for Windows NT.
It can be found on NT 3.5x's CD-ROM. Microsoft no longer supports
HPFS. Installing this program will void your warranty and possibly
the license agreement.
4.8. Accessing HPFS from Windows NT 4
HPFS driver for Windows NT 4.0
5. New Technology FileSystem (NTFS)
information
5.1. Accessing NTFS from DOS (NTFSDOS.EXE)
NTFSDOS.EXE is a network file system redirector.
5.2. Accessing NTFS from DOS (ntpwd)
NTPwd contains command line tools to access NTFS partition, it'a a Dos
port of the driver used by Linux. It contains too a little utility to
change NT password.
5.3. Accessing NTFS from OS/2
ntfs_003.zip archive contains only command line tools to acccess a
NTFS partition in OS/2. A true IFS for accessing NTFS is included
in ``VFAT-OS2'' v0.05.
5.4. Accessing NTFS from Linux
Works both as a kernel driver, as well as a set of command line
utilities.
5.5. Accessing NTFS from FreeBSD and NetBSD
Driver allows to mount NTFS volumes under FreeBSD and NetBSD. We
also support limited writing ability: you can write into not
comressed files without holes, but you can't change the size of
file yet. Write support was made to swap on NTFS volume.
5.6. Accessing NTFS from BeOS
This is a ALPHA version of a NTFS driver for BeOS. It is not the
most polished thing in the world, but every release that author
puts out is more stable than the last. He just implemented
compressed file reads, so be careful with those. He also finally
worked with NTFS 5 volumes, and managed to root out a few bugs.
Author now works for Be Inc, so you will not see his NTFS and ext2
filesystem support updated on the web much more. The drivers will be
pulled into future BeOS releases.
5.7. Accessing NTFS from BeOS (another)
5.8. Repairing NTFS using NTFSDOS Tools
An add-on to NTFSDOS that allows one to rename existing files, or
to overwrite a file with new data. Very limited functionality.
5.9. Repairing NTFS using NTRecover
Uses a boot floppy and a serial connection to a second NT system to
provide full access to a NTFS drives on dead NT systems. Ideal for
salvaging data or replacing drivers.
6. Extended filesystems (Ext, Ext2, Ext3)
Extended filesystem (ext fs), second extended filesystem (ext2fs) and
third extended filesystem (ext3fs) were designed and implemented on
Linux by Rémy>
6.1. Extended filesystem (ExtFS)
This is old filesystem used in early Linux systems.
6.2. Second Extended Filesystem (Ext2 FS).
6.2.1. Motivations.
6.2.2. `.
6.2.3. ``Advanced'' Ext2fs features.
6.2.4. Physical Structure.
6.2.5. Performance optimizations:
6.3. Third Extended Filesystem (Ext3 FS)
Ext3 support the same features as Ext2, but includes also Journaling.
You can download pre- version from
<>.
6.4. E2compr - Ext2fs transparent compression
Implements `chattr +c' for the ext2 filesystem. Software consists
of a patch to the linux kernel, and patched versions of various
software (principally e2fsprogs i.e. e2fsck and friends). Although
some people have been relying on it for years, THIS SOFTWARE IS
STILL IN DEVELOPMENT, AND IS NOT ,END-USER`-READY.
6.5. Accessing Ext2 from DOS (Ext2 tools)
A collection of DOS programs that allow you to read a Linux ext2
file system from DOS.
6.6. Accessing Ext2 from DOS, Windows 9x/NT and other Unixes (LTools)
The LTOOLS are under DOS/Windows 3.x/Windows 9x/Windows NT or nonLinux
-UNIX, what the MTOOLS are under Linux. You can access (read,
write, modify) your Linux files when running one of the other
operating systems. The kernel of the LTOOLS is a set of command
line programs. Additionally a JAVA program as a stand alone
graphical user interface is available. Alternatively, you can use
your standard web browser as a graphical user interface. The LTOOLS
do not only provide access to Linux files on your own machine, but
also remote access to files on other machines.
6.7. Accessing Ext2 from OS/2
EXT2-OS2 is a package that allows OS/2 to seamlessly access Linux
ext2 formatted partitions from OS/2 as if they were standard OS/2
drive letters. The ultimate aim of this package is to be able to
use the ext2 file system as a replacement of FAT or HPFS. For the
moment the only lacking feature to achieve this goal is the support
for OS/2 extended attributes.
6.8. Accessing Ext2 from Windows 95/98 (FSDEXT2)
6.9. Accessing Ext2 from Windows 95 (Explore2fs)
A user space application which can read and write the second
extended file system ext2. Supports hard disks and removable
media, including zip and floppy. Uses a windows explorer like
interface to show files and details. Supports Drag& Drop, context
menus etc. Written for Windows NT, but has some support for
Windows 95. Large disks can cause problems.
6.10. Accessing Ext2 from Windows NT (ext2fsnt)
6.11. Accessing Ext2 from BeOS
This is a driver to allow BeOS to mount the Linux Ext2 filesystem.
The version that is currently released author consider pretty
stable. People have been using it for a long time, with no bug
reports.
Authow now works for Be Inc, so you will not see his ext2 and NTFS
filesystem support updated on the web much more. The drivers will be
pulled into future BeOS releases.
6.12. Accessing Ext2 from MacOS (MountX)
MacOS driver which allows you to mount ext2 filesystems (Linux and
MkLinux) on the Macintosh.
6.13. Accessing Ext2 from MiNT
This is a full working Ext2 filesystem driver for FreeMiNT. It can
read and write the actual ext2 version as implemented in Linux for
example. The partition size is not limited and the logical sector
size can be 1024, 2048 or 4096 bytes. The only restriction is that
the physical sector size is smaller or equal to the logical sector
size. The blocksize can be configured if you initialize the
partition with mke2fs.
6.14. Ext2fs defrag
Defragments your ext2 filesystem. Needs updated for glib
libraries.
6.15. Ext2fs resize
Resizes second extended filesystem.
6.16. Ext2end
For use with ``LVM'' Consists of 2 utilites. ext2endable
reorganises an empty ext2 file systems to allow them to be
extended, and ext2end that extends an unmounted ext2 file system.
If ext2endable has not been run when the file system was created
ext2end will only be able to extend it to the next multiple of
256MB
6.17. Repairing/analyzing/creating Ext2 using E2fsprogs
The ext2fsprogs package contains essential ext2 filesystem
utilities which consists of e2fsck, mke2fs, debugfs, dumpe2fs,
tune2fs, and most of the other core ext2 filesystem utilities.
6.18. Ext2 filesystem editor - Ext2ed
EXT2ED is a disk editor for the extended2 filesystem. It will show
you the ext2 filesystem structures in a nice and intuitive way,
letting you easily "travel" between them and making the necessary
modifications.
6.19. Linux filesystem editor - lde
This allows you to view some Linux fs's, hex block and inode
editing are now supported and you can use it to dump an erased file
to another partition with a little bit of work. Supports ext2,
minix, and xiafs. Includes LaTeX Introduction to the Minix fs. You
must patch sources to compile on 2.2.x and 2.3.x kernels beacuse of
missing Xia header files in kernel.
6.20. Ext2 undelete utilities
This is a patch for kernel 2.0.30 that adds undelete capabilities
using the "undeletable" attribute provided by the ext2fs. This
patch include man pages, the undelete daemon and utilities. Check
our web page for the latest and greatest version.
7. Macintosh Hierarchical Filesystem - HFS:
7.1. Accessing HFS from Linux
7.2. Accessing HFS from OS/2 (HFS/2)
HFS/2 lets OS/2 users seamlessly read and write files on diskettes
formatted with the Hierarchical File System, the file system used by
Macintosh computers. With HFS/2, Macintosh diskettes can be used just
as if they were regular diskettes.
7.3. Accessing HFS from Windows 95/98/NT (HFV Explorer)
An HFS volume browser for Windows NT and Windows 9x based on
hfsutils. Launch pad support for all major Macintosh emulators
running on Windows.
7.4. Accessing HFS from DOS (MAC-ETTE)
Mac-ette is a PC utility which can read, write, format and
duplicate Macintosh HFS format 1.4 Meg diskettes on a PC equipped
with a 3.5 inch high density diskette drive.
7.5. HFS utils
The hfsutils package contains a set of command-line utilities such as
hformat, hmount, hdir, hcopy, etc. They allow read-write access of
files and directories on HFS volumes.
7.6. MacFS: A Portable Macintosh File System Library
This is a Macintosh file system library which is portable to a
variety of operating systems and platforms. It presents a
programming interface sufficient for creating a user level API as
well as file system drivers for operating systems that support
them. Authors implemented and tested such a user level API and
utility programs based on it as well as an experimental Unix
Virtual File System. They also describe the Macintosh Hierarchical
File System and their.
8. ISO 9660 - CD-ROM filesystem
Useful ISO-9660 links:
8.1. RockRidge extensions
Extensions allowing long filenames and Unix-style symbolic links.
Useful RockRidge links:
8.2. Joliet extensions
Joliet is a Microsoft extension to the ISO 9660 filesystem that allows
Unicode characters to be used in filenames. This is a benefit when
handling internationalization. Like the Rock Ridge extensions, Joliet
also allows long filenames.
8.3. Hybrid CD-ROMs.
8.4. Novell NetWare indexes on ISO9660
8.5. Accessing Joliet from Linux
8.6. Accessing Joliet from BeOS
It is updated ISO9660 driver to be able to use a Joliet ISO9660
extensions.
8.7. Accessing Joliet from OS/2
Jcdfs.zip archive contains CDFS.IFS driver for OS/2 with Joliet
level 3 support.
8.8. Accessing Audio CD as filesystem from Linux
This kernel module allows you to access an audio CD as a regular
filesystem.
8.9. Accessing Audio CD as filesystem from BeOS
This filesystem add-on will allow you (if your CD drive supports
it) to treat a regular audio CD as if it were a bunch of WAV files.
You can copy the files, encode them to mp3, play them slower,
faster, even backwards.
8.10. Accessing all tracks from Linux (CDfs).
8.11. Creating Hybrid CD-ROMs (mkhybrid)
Make an ISO9660/HFS/JOLIET shared hybrid CD volume
9. Other filesystems
9.1. ADFS - Acorn Disc File System>.
9.2. AFFS - Amiga fast filesystem>.
9.3. BeFS - BeOS filesystem
BeFS is ``journaling'' filesystem used in BeOS. For more information
about BeFS see Practical File System Design with the Be File System
book or BeFS linux driver source code.
Linux BeFS implementation:
This driver supports x86 and PowerPC Linux platform. Also, it only
supports readable in hard disk and floppy disk.
9.4. BFS - UnixWare Boot Filesystem.
There is also mine old implementation, which is now obsolete. My plan
is to port this code to FreeBSD:
This is read-only UnixWare Boot filesystem support for Linux. You
can use it to mount read-only your UnixWare /stand partition or
floppy disks. I don't plan a read-write version, but if you want it
mail me. You might be also interested in ``VxFS'' Linux support.
9.5. CrosStor filesystem
This is new name for High throughput filesystem (HTFS). For more
information see CrosStor homepage at <>.
9.6. DTFS - Desktop filesystem:
9.7. EFS - Enhanced filesystem (Linux).
9.8. EFS - Extent filesystem (IRIX).. This version of efs contains support for hard-disk
partitions, and also contains a kernel patch to allow you to
install the efs code into your linux kernel tree. Handling of large
files has also been vastly improved.
Original efsmod is also available:
Efs-mod 0.6 is original EFS read/only module for Linux. Version 0.6
finished but Project frozen due to lack of time and information for
implementing the write part.
9.8.1. EFS and FFS library, libfs
A C library to read EFS and FFS from WinNT x86, SunOS and IRIX.
Easy to use (Posix like interface) and to links aginst existent
code FTP server has also winefssh.exe and winufssh.exe, simple
WinNT binaries to interactively read UFS and EFS file systems. Not
a very polished/documented package, but somebody may find it
useful.
Useful links:
9.9. FFS - BSD Fast filesystem
This is native filesystem for most BSD unixes (FreeBSD, NetBSD,
OpenBSD, Sun Solaris, ...).
See also: ``SFS, secure filesystem'', ``UFS''.
9.10. GPFS - General Parallel Filesystem.
9.11. HFS - HP-UX Hi performance filesystem
This is the second hfs that appears in this howto. It is used in older
HP-UX versions.
9.12. HTFS - High throughput filesystem
Read/Write commercial driver available from CrosStor:
9.13. JFS - Journaled filesystem (HP-UX, AIX, OS/2 5, Linux)
JFS is IBM's journaled file system technology, currently used in
IBM enterprise servers, and is designed for high-throughput server
environments.
9.14. LFS - Linux log structured filesystem
Linux Log structured filesystem implementation called d(t)fs:
d(t)fs is a log-structured filesystem project for Linux.
Currently, the filesystem is mostly up and running, but no cleaner
has been written so far.
There will also be a dtfs mailing list that will be announced on the
homepage. For more information you can have a look at:
<>
9.15. MFS - Macintosh filesystem
MFS is original Macintosh filesystem. It has been replaced by HFS /
HFS+. If you can provide further information, mail me please.
9.16. Minix filesystem
This is Minix native filesystem. It was also used in first versions of
Linux.
9.17. NWFS - Novell NetWare filesystem.
9.17.1. NetWare filesystem / 286
9.17.2. NetWare filesystem / 386
9.17.3. Accessing NWFS-386 from Linux
This driver allows you to mount NWFS-386 filesystem on Linux.
9.18. NSS - Novell Storage Services
This is a new 64bit ``journaling'' filesystem using a ``balanced
tree'' algorithms. It is used in Novell NetWare 5.
9.19. ODS - On Disk Structure filesystem
This is OpenVMS and VMS native filesystem.
9.20. QNX';
Driver for the QNX 4 filesystem.
9.21. Reiser filesystem.
9.22. RFS (CD-ROM Filesystem)
Sony's incremental packet-writing filesystem.
9.23. RomFS - Rom filesystem
Author of Linux RomFS implemplementation is Janos Farkas
<[email protected]> For more information see
/usr/src/linux/Documentation/filesystems/romfs.txt file.
9.24. SFS - Secure filesystemnumbered
inode is reserved for security information. The information
contains Access Control List information. I'm not sure if SFS has any
other abilities though.
SFS links:
9.25. Spiralog filesystem (OpenVMS):
9.26. System V and derived filesystems
Homepage of System V Linux project is at
<>. Maintainer of this project
is <[email protected]>.
9.26.1. AFS - Acer Fast Filesystem
The Acer Fast Filesystem is used on SCO Open Server. It is similar to
the System V Release 4 filesystem, but it is using bitmaps instead of
chained free-list of blocks.
9.26.2. EAFS - Extended Acer Fast Filesystem
The AFS filesystem can be 'extended' to handle file names up to 255
characters, but directories entries still have 14-char names. This
filesystem type is used on SCO Open Server.
9.26.3. Coherent filesystem
9.26.4. S5
This filesystem is used in UnixWare. It's probably SystemV compatible,
but I haven't verified it yet. For more information see
<>.
9.26.5. S51K - SystemV 1K
9.26.6. Version 7 filesystem
This filesystem type is used on Version 7 Unix for PDP-11 machines.
9.26.7. Xenix filesystem
9.27. Text - (Philips' CD-ROM Filesystem)
Philips' standard for encoding disc and track data on audio CDs.
9.28. UDF - Universal Disk Format (DVD-ROM filesystem)
There is a Linux UDF filesystem driver:
9.29. UFS
Note: People often call ``BSD Fast Filesystem'' incorrectly UFS. FFS
and UFS are diferrent filesystems. All modern Unixes use FFS
filesystem, not UFS! UFS was used in early BSD versions. You can
download source code at <>.
See also: ``BSD FFS''
9.30. V7 Filesystem
The V7 Filesystem was used in Seventh Edition of UNIX Time Sharing
system (about 1980). For more information see 7th Ed. source code,
which is available from the Unix Archive:
<>.
9.31. VxFS - Veritas filesystem (HP-UX, SCO UnixWare, Solaris)''.
9.31.1. VxTools
Unix command-line utilities for accessing VxFS versions 2 and 4 are
available under the GNU GPL:
Vxtools is a set of command-line utilites which allow you to access
your VxFS filesystem from Linux (and possibly other Unixes).
Current version can read VxFS versions 2 and 4..
9.32. XFS - Extended filesystem (IRIX).:
9.33. Xia FS
This filesystem was developed to replace old Minix filesystem in
Linux. Author of this fs is Franx Xia <[email protected]>
10. Raw partitions
10.1. Backing up raw partitions using DBsnapshot
(todo:)
11. Appendix
11.1. Network filesystems
This HOWTO is not about Network filesystems, but I should mention
them.
There is a brief list of some which I know:
11.1.1. AFS - Andrew Filesystem
11.1.2. CODA
Coda is a distributed filesystem with novel features such as
disconnected operation and server replication.
11.1.3. NFS - Network filesystem (Unix)
11.1.4. NCP - NetWare Core Protocol (Novell NetWare)
11.1.5. SMB - Session Message Block (Windows 3.x/9x/NT)
This protocol is used in Windows world.
11.1.6. Intermezzo
Intermezzo is a distributed file system for Linux. It was inspired
from coda but uses the disk file system as a persistent cache.
Intermezzo supports disconnected operation but does not yet
implement an identification system.
11.2. Encrypted filesystems
11.2.1.; cleartext is never stored on a
disk or sent to a remote file server. CFS employs a novel
combination of DES stream and codebook cipher modes to provide high
security with good performance on a modern workstation. CFS can
use any available file system for its underlying storage without
modification, including remote file servers such as NFS. System
management functions, such as file backup, work in a normal manner
and without knowledge of the key.
11.2.2. TCFS
The main difference between TCFS and CFS is the trasparency to user
obtained by using TCFS. As a matter of fact, CFS works in user
space while TCFS works in the kernel space thus resulting in
improved performances and security. The dynamic encryption module
feature of TCFS allows a user to specify the encryption engine of
his/her choiche to be used by TCFS. Currently available only for
Linux, TCFS will be relased soon also for NetBSD, and will support
in a near future also other FS then NFS.
11.2.3. SFS
( TODO: <> )
11.2.4. VS3FS: Steganographic File System for Linux
fspatch is a kernel patch which introduces module support for the
steganographic file system (formerly known as vs3fs, an
experimental type of filesytem that not only encrypts all
information on the disk, but also tries to hide that information in
such a way that it cannot be proven to even exist on the disk. This
enables you to keep sensitive information on a disk, while not be
prone to being forced to reveal that information. Even under
extreme circumstances, fake documents could be stored on other
parts of the disk, for which a pasword may be revealed. It should
not be possible to find out whether any other information is stored
on the disk.
11.3. Filesystem benchmarking utilities
11.3.1. IOzone
IOzone is a filesystem benchmark tool. The benchmark generates and
measures a variety of file operations. Iozone has been ported to
many machines and runs under many operating systems.
11.4. Writing your own filesystem driver
11.4.1. DOS
I haven't seen yet any good page about writing DOS filesystem drivers
(Network redirectors) on the net. The best source is Ralf Brown's
interrupt list and ``iHPFS'' source code.
11.4.2. OS/2
11.4.3. Windows NT
Microsoft IFS kit page ( <>) will
be useful as the best way to get into NT filesystems development (even
for $1K it costs).
For more information about writing FS drivers for Windows NT see
<> by <[email protected]>.
11.5. Related documents
Dynamic RAM drive IFS driver for OS/2 | http://www.sourcefiles.org/System/Filesystems-HOWTO.shtml | CC-MAIN-2015-22 | refinedweb | 6,110 | 61.33 |
NAME
VFS_CHECKEXP - check if a file system is exported to a client
SYNOPSIS
#include <sys/param.h> #include <sys/mount.h> int VFS_CHECKEXP(struct mount *mp, struct sockaddr *nam, int *exflagsp, struct ucred **credanonp);
DESCRIPTION
The VFS_CHECKEXP() macro is used by the NFS server to check if a mount point is exported to a client. The arguments it expects are: mp The mount point to be checked. nam An mbuf containing the network address of the client. exflagsp Return parameter for the export flags for this client. credanonp Return parameter for the anonymous credentials for this client. The VFS_CHECKEXP() macro should be called on a file system’s mount structure to determine if it is exported to a client whose address is contained in nam. It is generally called before VFS_FHTOVP(9) to validate that a client has access to the file system. The file system should call vfs_export_lookup(9) with the address of an appropriate netexport structure and the address of the client, nam, to verify that the client can access this file system.
RETURN VALUES
The export flags and anonymous credentials specific to the client (returned by vfs_export_lookup(9)) will be returned in *exflagsp and *credanonp.
SEE ALSO
VFS(9), VFS_FHTOVP(9), VFS_VPTOFH(9), vnode(9)
AUTHORS
This manual page was written by Alfred Perlstein. | http://manpages.ubuntu.com/manpages/maverick/man9/VFS_CHECKEXP.9freebsd.html | CC-MAIN-2015-35 | refinedweb | 217 | 62.68 |
I am starting to use the basic graphics in C, and I am trying to get keyboard input to work right. I want something like getch() to recognize what key I pressed and save it in a variable so I can use that variable to control an object's movement on the screen. Here I have a basic box moving code.
I put comments in the code to show you where I need help. I donI put comments in the code to show you where I need help. I donCode:#include <stdio.h> #include <conio.h> #include <graphics.h> using namespace std;int main(void) { int gd=DETECT,gm; initgraph(&gd, &gm, "C:\\TC\\BGI"); int ch; ch=getch();/*can i use this to bind getch() to a variable?*/ int x=200, y=300, x2=400, y2=400; while(1==1){ getch(); cleardevice(); if(ch==/*need help here*/){ x+=5; y-=10; x2+=5; y2-=10; } else if(ch==/*need help here*/){ x-=5; y+=10; x2-=5; y2+=10; } rectangle(x,y,x2,y2); } }
t know if I can even bind a variable to getch(). I have searched the web for this and haven't found a good answer. | http://cboard.cprogramming.com/c-programming/146889-instant-keyboard-input-questions.html | CC-MAIN-2014-41 | refinedweb | 200 | 81.83 |
Time to release #nettefw 2.1 is coming
6 years ago
- David Grudl
- Nette Core | 6827
I would like to solve these issues (plus add another cca 5 features) and release version 2.1<. Together with Nette will be released stable Nette Tester.
What will bring the new version?
- Presenter: secured links
- PresenterFactory: configurable mapping Presenter name → Class name
- Route: new pseudo-variables
%basePath%,
%tld%and
%domain%
- Dependency Injection:
- annotation @inject
- auto-generated factories and accessors via interface
- adding compiler extensions via config file
- auto-detection of sections in config file
- configurable presenters via config
- simple validating schema
- Database
- complete refactoring, a ton of bug fixes
- lazy connection
- much better (dibi-like) SQL preprocessor
- all queries are logged (error queries, transactions, …)
- new driver for Sqlsrv
- Debugger
- Dumper: colored and clickable dumps in HTML or terminal
- Bar: you can see bar after redirect and is updated via AJAX
- full stack trace on fatal errors (requires Xdebug)
- Forms
- new macro
n:form,
<select n:input>and
<textarea n:input>
- partially rendered radiolists using
{input name:$key}and
{label name:$key}
- setOmitted: excludes value from $form->getValues() result
- removed dependency on Environment
- improved toggles
- improved netteForms.js
- Latte
- short block syntax
{#content}
- macro {link} generates URL without presenter
- a lot of small improvements
- modifier
|noescape
- RobotLoader: on-the-fly filters
- SmtpMailer: persistent connection
- Json: supports pretty output
- added new SessionPanel
Do you know about something important, what should be added to or removed from 2.1?
6 years ago
- Jan Tvrdík
- Nette guru | 2550
@David Grudl: I still believe that
@property annotation should be added back to
SystemContainer (revert 4a1ce4).
I understand that in theory DI container should be used only in composition
root, but for practical reasons (lots of apps using it now, great for
prototyping, less writing than proper injecting), we should keep the
@property annotation. I suggest we continue this discussion under
the related
commit.
6 years ago
I hope there will be proper beta/rc phases as well as their tags on GitHub. Not the way how 2.0.x versions are released (no tag, just some archive somewhere which noone actually will test, since it can't be pulled by composer).
These are things I think are worth fixing before release:
- Cache::ALL smaže celou cache, ne pouze namespace
- Latte: “pretty output” is actually ugly
- This ugly thing (see comments)
- Fix 90c6c851 (see comments)
- Fix d4418f01 – Session::regenerateId() called explicitly should really regenerate id
- Reconsider 27435fad having as property (see comments)
- See 6b0b1ff6 – add this to Sandbox?
- Fix 0e79764f – both BOM removal and mbstring dependency (add to composer?)
- Reconsider 1bc4260a – it pollutes global namespace, debug() is worst of them (see discussion)
6 years ago
Authorizator won't be fixed? It's pretty much useless in current state (for me at least).
EDIT: Just noticed that it actually is in the 2.1 milestone on GitHub. But I'm still little confused since it isn't mentioned in the list above.
Last edited by enumag (2013-04-16 22:58)
6 years ago
- David Grudl
- Nette Core | 6827
- @property: I'll be think about it
- Cache::ALL: prepare pull request
- Latte: “pretty output”: dtto
- This ugly thing: dtto
- REGENERATE_INTERVAL: was removed from framework
- add this to Sandbox: ok, prepare pull request
- debug(): will be removed
- Authorizator: will be in 2.1
6 years ago
- Database: Will be multi-related function supported?
foreach ($article->related('article_tag:tag') as $tag) { .... }
- Database: Will be SqlLiteral in select() supported?
$db->select('*') ->select(new SqlLiteral('AES_DECRYPT(secured_note, ?) AS secured_note', $aesPassword));
These are only two things I am waiting for in Nette :-)
6 years ago
6 years ago
6 years ago
David Grudl wrote:
- Cache::ALL: prepare pull request
- Latte: “pretty output”: dtto
No idea how to fix these. Lets discuss it in those issues if needed?
- This ugly thing: dtto
I provided a different solution, @enumag another one (neither of these wasn't polluting public API), but you rejected them without discussion…
6 years ago
6 years ago
@bene:
All Nette services should have nette prefix.
Addition of the prefixes is more complicated than you think. Adding aliases is not enough because it won't work in all cases. We already tried it, it was even merged once but had to be reverted soon after. Basically we don't know how to do it without braking back compatibility yet. See my pull request for details and discuss it there.
About Context @property I don't understant why anotation is problem if magic methot __get() allows access to services?!
Feel free discuss it here.
Last edited by enumag (2013-04-18 23:08)
6 years ago
About Context @property I don't understant why anotation is problem if magic methot __get() allows access to services?!
A lot of classes in Nette has @property anotations because has getXyz() method and extends Nette\Object. Why is the Container different?
Nette\Security\User miss anotation @property-read IUserStorage ;-)
@enumag: I've read discussion about services prefix (I confess I didn't deep study this problem) bud Davids comment is about same service in config. Uncompability in config is little problem I think.
I've read discussion about
@property and should I worry about
reject magic method
__get() from
Container? Back
compability is easy, create my own
Container which will be
implement magic
__get() method. But I think all peaple who have
benefits from magic
__get() method will be implement own
Container and this “mistake” will by solved only for new
programers.
6 years ago
@enumag: I've read discussion about services prefix (I confess I didn't deep study this problem) bud Davids comment is about same service in config. Uncompability in config is little problem I think.
No it isn't. It's huge BC break. And don't discuss it any further in this topic please.
Same for the @property. Discuss both on GitHub or create a separate topic(s) if you don't have a GitHub account.
Last edited by enumag (2013-04-19 00:03)
6 years ago
Will these be in 2.1? (They should in my opinion.)
Also what about this?
Shouldn't this one be in master for testing purposes?
6 years ago
- Honza Marek
- Member | 1674
Deprecated
$form->onSubmit should be removed in next version.
In autocompletion list it is before onSuccess and I make a mistake almost on
every attempt on submitting a form ;)
6 years ago
Honza Marek wrote:
Deprecated
$form->onSubmitshould be removed in next version. In autocompletion list it is before onSuccess and I make a mistake almost on every attempt on submitting a form ;)
It is not deprecated, it is an event that occurs always, even when submitted an invalid form. | https://forum.nette.org/en/1150-time-to-release-nettefw-2-1-is-coming | CC-MAIN-2019-13 | refinedweb | 1,116 | 52.6 |
Making Tests Reliable
There are different types of problems which can cause problems in your tests:
Tests are not understood by other developers/testers and are disabled or accidentally broken
Changes in the application causes tests to fail
Problems in the testing environment causes tests to fail
You need to take these into account or will quickly end up with a test suite where a few test always fail. As experienced testers can tell you, this test suite is as good as having no tests at all, because a test suite which is always "a bit red" is not taken seriously by any developer.
Creating Readable Tests
Just as with code, it is important to write tests so that the reader understands the intent. When each test contains high level, meaningful calls, the reader will immediately grasp what is being tested. When/if she wants to know more details about some part of the test, she can then dig into that part. If the test is full of low level details about how you locate the parts of the application you want to interact with, it becomes completely overwhelming to try to decode what the test is actually trying to verify.
Guarding Against Application Changes
If your application never changes, you can test it manually just once and you will know that it works properly. In most cases though, your application will be developed forward and you need to maintain the tests when the application evolves.
As long as you abstract away the details from the tests to page/view objects, you only need to take care that your page/view objects are built in a robust way.
You should avoid by all means necessary to depend on the HTML DOM structure. If you depend on finding a
<div> inside a
<span> or anything similar, you will have to update the page/view object for every small detail that changes in the application.
Similarly, you should avoid depending on strings targeted for humans in your application. While it is in many cases tempting to find the button with the text "Save", you will run into unnecessary problems when somebody decides to change the text to "Store", or decides to internationalize the application.
Define Ids for the Components
For most cases, it makes sense to define
ids for all the elements you want to interact with inside your page/view object. The
ids are only created to be able to identify a given element and there is typically no reason to change them when the application evolves.
When using templates, you also do not need to worry about global
ids and
ids colliding with each other, as the id of a given element only needs to be unique inside the shadow root, i.e. the template. For layouts and components outside templates (an inside a single template), you should take care that you do not use the same
id in multiple places.
Dealing with Test Environment Problems
When dealing with browser based tests, and especially older browser such as IE11, you need to take into account that the environment is not always as stable as you would want it to be. Ideally the test would fire up the browser, execute the actions and terminate the browser nicely. Always. In practice, there is potential to have network problems (especially when using a cloud based browser provider), there can be browser problems causing randomness or even browser crashes (yes, this is about you IE11).
When the point of failure is outside your control, e.g. a temporary network failure, your options are very limited. To deal with all kinds of unexpected randomness, in the network or the browsers, TestBench offers a
RetryRule, which is simply a way to automatically run the test again to see if the temporary problem has disappeared.
RetryRule is used as a JUnit 4
@Rule, with an parameter describing the maximum number of times the test should be run, e.g:
public class RandomFailureTest extends TestBenchTestCase { // Run the test max two times @Rule public RetryRule rule = new RetryRule(2); @Test public void doStuff() { ... } }
If the test passes on the first attempt, it will not be re-run. Only if the first attempt fails, it will try again until either the test passes or the maximum number of attempst has been reached. | https://vaadin.com/docs/latest/tools/testbench/making-tests-reliable | CC-MAIN-2021-49 | refinedweb | 722 | 53.24 |
All of the machine-learning models presented so far in this series were written in Python. Models don’t have to be written in Python, but many are, thanks in part to the numerous world-class Python libraries that are available, including Pandas and Scikit-learn. ML models written in Python are easily consumed in Python apps. Calling them from other languages such as C++, Java, and C# requires a little more work. You can’t simply call a Python function from C++ as if it were a C++ function. So how do you invoke models written in Python from apps written in other languages? Put another way, how do you operationalize Python models such that they are usable in any app and any programming language?
The middle block in the diagram below shows one strategy: wrap the model in a Web service and expose its functionality through a REST API. Then any client that can generate an HTTP(S) request can invoke the model. It’s relatively easy to do with Python frameworks such as Flask. The service can be hosted locally or in the cloud, and it can even be containerized for easy deployment.
In this post, I’ll walk you through three scenarios:
- How to invoke a Python model from a Python client
- How to invoke a Python model from a non-Python client
- How to containerize Python models for easy deployment
In my next post, I’ll address the rightmost block in the diagram above by demonstrating how to build and consume machine-learning models in C#. If the client is written in C# and the model is, too, then you can invoke the model directly from the client without any middleware in between.
Consuming a Python Model from a Python Client
To first order, invoking a Python model from a Python client is simple: just call predict on the model. Of course, you don’t want to have to retrain the model every time you use it. You want to train it once, and then empower the client to recreate the model in its trained state. For that, Python programmers use the Python pickle module.
To demonstrate, the following code trains the Titanic model featured in my post on binary classification and uses it to predict the odds that a 30-year-old female passenger traveling in first class will survive the voyage:
import pandas as pd from sklearn.linear_model import LogisticRegression import pickle df = pd.read_csv('Data/titanic.csv') df = df.drop(['PassengerId', 'Name', 'Ticket', 'Cabin', 'Embarked', 'Fare', 'Parch', 'SibSp'], axis=1) df = pd.get_dummies(df, columns=['Sex', 'Pclass']) df.dropna(inplace=True) x = df.drop('Survived', axis=1) y = df['Survived'] model = LogisticRegression(random_state=0) model.fit(x, y) probability = model.predict_proba([[30, 1, 0, 1, 0, 0]])[0][1] print(f'Probability of survival: {probability:.1%}')
Once the model is trained, it can be serialized into a binary file with one line of code:
pickle.dump(model, open('Data/titanic.pkl', 'wb'))
To invoke the model, a Python client can then deserialize the model, which recreates it in its trained state, and call predict_proba to get the same result:
model = pickle.load(open('Data/titanic.pkl', 'rb')) probability = model.predict_proba([[30, 1, 0, 1, 0, 0]])[0][1] print(f'Probability of survival: {probability:.1%}')
Now the client can quickly use the model to make a prediction, even if the model is a complex one that takes a long time to train.
My post on support-vector machines (SVMs) introduced Scikit’s make_pipeline function, which allows estimators (objects that make predictions) and transforms (objects that transform data input to the model) to be combined into a single unit, or pipeline. The pickle module can be used to serialize and deserialize pipelines, too. Here’s the model featured in my post on sentiment analysis recast to use make_pipeline to combine a CountVectorizer for transforming data with a LogisticRegression object for making predictions:
import pandas as pd from sklearn.feature_extraction.text import CountVectorizer from sklearn.linear_model import LogisticRegression df = pd.read_csv('Data/reviews.csv', encoding="ISO-8859-1") df = df.drop_duplicates() x = df['Text'] y = df['Sentiment'] vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words='english', min_df=20) model = LogisticRegression(max_iter=1000, random_state=0) pipe = make_pipeline(vectorizer, model) pipe.fit(x, y) review = 'great food and excellent service!' pipe.predict_proba([review])[0][1]
One line of code serializes the pipeline:
pickle.dump(pipe, open('Data/sentiment.pkl', 'wb'))
A Python client can then deserialize it and call predict_proba to score a line of text for sentiment:
pipe = pickle.load(open('Data/sentiment.pkl', 'rb')) review = 'great food and excellent service!' pipe.predict_proba([review])[0][1]
Pickling in this manner works not just with CountVectorizer, but with other transformers such as StandardScaler, too.
One caveat to be aware of is that a model (or pipeline) should be pickled and unpickled with the same version of Scikit. Serializing the model with one version and attempting to deserialize it with another will either throw an exception or at least generate a warning message.
If you’d like to write a Python client that performs sentiment analysis, start by copying the code that trains a sentiment-analysis model and the line of code that serializes the pipeline into a Jupyter notebook and run it to produce sentiment.pkl. Then create a Python script named sentiment.py and paste the following statements into it:
import pickle, sys # Get the text to analyze if len(sys.argv) > 1: text = sys.argv[1] else: text = input('Text to analyze: ') # Load the pipeline containing the model and the CountVectorizer pipe = pickle.load(open('sentiment.pkl', 'rb')) # Pass the input text to the pipeline and print the result score = pipe.predict_proba([text])[0][1] print(score)
Copy sentiment.pkl into the same directory as sentiment.py, and then pop out to the command line and run the script:
python sentiment.py "great food and excellent service!"
The output should look something like this, which is proof that you succeeded in recreating the model in its trained state and invoking it to analyze the input text for sentiment:
Consuming a Python Model from a C# Client
Suppose you wanted to invoke the sentiment-analysis model from an app written in another language – say, C#. You can’t directly call a Python function from C#, but you can wrap the Python model in a Web service and expose its predict (or predict_proba) method using a REST API. One way to code the Web service is to use Flask, a popular Web framework for Python.
To see for yourself, make sure Flask is installed on your computer. Then create a file named app.py and paste in the following code:
import pickle from flask import Flask, request app = Flask(__name__) pipe = pickle.load(open('sentiment.pkl', 'rb')) @app.route('/analyze', methods=['GET']) def analyze(): if 'text' in request.args: text = request.args.get('text') else: return 'No string to analyze' score = pipe.predict_proba([text])[0][1] return str(score) if __name__ == '__main__': app.run(debug=True, port=5000, host='0.0.0.0')
This code uses Flask to implement a Python Web service that listens on port 5000. At startup, the service deserializes the sentiment-analysis model (pipeline) saved in sentiment.pkl. The @app.route statement decorating the analyze function tells Flask to call the function when the service’s analyze method is called. If the service is hosted locally, then the following request invokes the analyze method and returns a string containing a sentiment score for the text passed in the query string: food and excellent service!
To demonstrate, go to the directory where app.py is located (make sure sentiment.pkl is there, too) and start Flask by typing:
flask run
Then go to a separate command prompt and use a curl command to fire off a request to the URL:
curl -G -w "\n" --data-urlencode "text=great food and excellent service!"
The output should resemble this:
If you have Visual Studio or Visual Studio Code installed on your machine and are set up to compile and run C# apps, you can use the following C# code in a command-line app to do the same thing:
using System; using System.Net.Http; using System.Threading.Tasks; class Program { static async Task Main(string[] args) { string text; // Get the text to analyze if (args.Length > 0) { text = args[0]; } else { Console.Write("Text to analyze: "); text = Console.ReadLine(); } // Pass the text to the Web service var client = new HttpClient(); var url = $"{text}"; var response = await client.GetAsync(url); var score = await response.Content.ReadAsStringAsync(); // Show the sentiment score Console.WriteLine(score); } }
Of course, you’re not limited to invoking the Web service (and by extension, the sentiment-analysis model) from C#. Any language will do, because virtually all modern programming languages provide a means for sending HTTP requests.
Incidentally, this Web service is a simple one that reads input from a query string and returns a string. For more complex input and output, you can serialize the input into JSON and transmit it in the body of an HTTP POST, and you can return a JSON payload in the response.
Containerizing a Machine-Learning Model
One downside to wrapping a machine-learning model in a Web service and running it locally is that the client computer must have Python installed, as well as all the packages that the model and Web service require. An alternative is to host the Web service in the cloud where it can be called via the Internet. It’s not hard to go out to Azure or AWS, spin up a virtual machine (VM), and install the software there. But there’s a better way. That better way is containers.
Containers have revolutionized the way software is built and deployed. A container includes an app and everything the app needs to run, including a run-time (for example, Python), the packages the app relies on, and even a virtual file system. If you’re not familiar with containers, think of them as lightweight VMs that start quickly and consume far less memory. Docker is the world’s most popular container platform, although it is rapidly being supplanted by Kubernetes.
Containers are created from container images, which are blueprints for containers in the same way that in programming, classes are blueprints for objects. The first step in creating a Docker container image that contains the machine-learning model and Web service in the previous section is creating a file named Dockerfile (no file-name extension) in the same directory as app.py and sentiment.pkl and pasting the following statements into it:
FROM python:3.6.7-stretch RUN pip install flask numpy scipy scikit-learn==0.24.2 && \ mkdir /app COPY app.py /app COPY sentiment.pkl /app WORKDIR /app EXPOSE 5000 ENTRYPOINT ["python"] CMD ["app.py"]
A Dockerfile contains instructions for building a container image. This one creates a container image that includes a Python run-time, several Python packages such as Flask and Scikit-learn, and app.py and sentiment.pkl. It also instructs the Docker run-time that hosts the container to open port 5000 for HTTP requests and to execute app.py when the container starts.
I won’t go through the steps to build the container image from the Dockerfile. There are several ways to do it. For example, if Docker is installed on the local machine, you can use a docker build command like this one:
docker build -t sentiment-server .
Or you can upload the Dockerfile to a cloud service such as Microsoft Azure and build it there. This prevents you from having to have Docker installed on the local machine, and it makes it easy to store the resulting container image in the cloud. (Container images are stored in container registries, and modern cloud services are capable of hosting container registries as well as containers.) If you launch a container instance in Azure, the Web service in the container can be invoked with a URL similar to this one: food and excellent service!
One of the benefits of hosting the container instance in the cloud is that it can be reached from any client app running on any machine and any operating system, and the computer that hosts the client app doesn’t have to have anything special installed. Containers can be beneficial even if you host the Web service locally rather than in the cloud. As long as you deploy a container stack such as the Docker run-time to the local machine, you don’t have to separately install Python and all the packages that the Web service requires. You just launch a container instance on the local machine and direct HTTP requests to it via localhost. This is great for testing, among other things.
Summary
Writing a Python client that invokes a Python machine-learning model requires little more than an extra line of code to deserialize the model from a .pkl file. A convenient way for a non-Python client to invoke a Python model is to wrap the model in a Python Web service and invoke the model using a REST API. That Web service can be hosted locally or in the cloud, and containerizing the Web service (and the model) simplifies deployment and makes the software more portable.
In my next post, I’ll show you one way to build machine-learning models without Python. There are many tools and libraries for building machine-learning models. The one I’ll use is Microsoft’s ML.NET, which makes building ML models with C# almost as easy as building them with Scikit – and offers a few advantages that Scikit does not. | https://www.wintellect.com/operationalizing-machine-learning-models/ | CC-MAIN-2021-39 | refinedweb | 2,294 | 54.93 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
The different "openerp model inheritance" mechanisms: what's the difference between them, and when should they be used ?
In OpenERP, there are 3 ways to inherit from an existing model:
_inherit = 'model'(without any
_name)
_inherit = 'model'(with a specified
_name)
_inherits = 'model'
What's the difference between them and, therefore, how to use them properly ?
This is a wide question:
In OpenERP we have many main type of inheritance:
Classical using Python inheritance.
It allows to add specific "generic" behavior to Model by inheriting classes that derive from orm.Model like geoModel that adds goegraphic support.
class Myclass(GeoModel, AUtilsClass):
Standard Using _inherit
The main objective is to add new behaviors/extend existing models. For example you want to add a new field to an invoice and add a new method: `
class AccountInvoice(orm.Model): _inherit = "account.invoice" _column = {'my_field': fields.char('My new field')} def a_new_func(self, cr, uid, ids, x, y, context=None): # my stuff return something
`
You can also override existing methods:`
def existing(self, cr, uid, ids, x, y, z, context=None): parent_res = super(AccountInvoice, self).existing(cr, uid, ids, x, y, z, context=context) # my stuff return parent_res_plus_my_stuff`
It is important to note that the order of the super call is defined by the inheritance graph of the addons (the depends key in
__openerp__.py).
It is important to notice that
_inherit can be a string or a list. You can do
_inherit = ['model_1', 'model_2'].
List allows to create a class that concatenate multiple
Model,
TransientModel or better
AbstractModel into a single new model.
So what about our '_name' property
-
_nameif your override an existing model this way you may have some trouble, it should be avoided. It is better to use this to create new classes that inherit from abstract model.
Polymorphic data using _inherits
When using
_inherits you will do a kind of polymorphic model in the database way.
For example
product.product inherits
product.template or
res.users inherits
res.partner. This mean we create a model that gets the know how of a Model but adds aditional data/columns in a new database table. So when you create a user, all partner data is stored in res_partner table (and a partner is created) and all user related info is stored in res_users table.
To do this we use a dict:
_inherits = {'res.partner': 'partner_id'}
The key corresponds to the base model and the value to the foreign key to the base model.
From here you can mix inheritance if you dare...
Hope it helps.
What will happen when using
_inherits we use the same
_name?
Python default override class declaration, then how its works here ? Like product and stock module have class product declaration, where it do not overwrite previously declared class method and features ? How it keeps methods declared in other modules ?
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/the-different-openerp-model-inheritance-mechanisms-what-s-the-difference-between-them-and-when-should-they-be-used-46 | CC-MAIN-2017-30 | refinedweb | 528 | 57.16 |
import java.util.Scanner; public class Askisi14{ public static void main(String args[]){ int i; int voteA = 0; int voteB = 0; int voteC = 0; int white = 0; int winnerVotes; char winner; char vote; double percentage; Scanner input = new Scanner(System.in); for (i=0;i<1500;i++){ vote = input.nextChar(); if (vote == 'A') voteA++; else if (vote == 'B') voteB++; else if (vote == 'C') voteC++; else white++; } if (voteA>voteB){ if (voteA>voteC){ winner = 'A'; winnerVotes = voteA; } else{ winner = 'C'; winnerVotes = voteC; } } else{ if (voteB>voteC){ winner = 'B'; winnerVotes = voteB; } else{ winner = 'C'; winnerVotes = voteC; } } percentage = winnerVotes*100/1500; System.out.printf("The winner is %s with a percentage of %.2f",winner,percentage); } }
cannot find symbol
any suggest? | http://www.dreamincode.net/forums/topic/233253-problem-with-inputnextchar%3B/ | CC-MAIN-2016-40 | refinedweb | 117 | 51.78 |
Archives
Using custom .net classes in ASP Classic Compiler
To use custom .net classes, I extended VBScript syntax with the Imports statement. The Imports statement introduce the top namespace to the VBScript, as in the example below. The following example demonstrates using ADO.NET in ASP Classic Compiler:
<%
imports system
dim filePath = Server.MapPath("/Database/authors.mdb")
dim oleConn = new system.data.oledb.oledbconnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & filePath)
oleConn.Open()
dim cmd = new system.data.oledb.OleDbCommand("SELECT * From authors", oleConn)
dim dr = cmd.ExecuteReader()
do while dr.Read()
response.Write(dr.GetString(1) & "<br/>")
loop
%>
In the example above, “imports system” instructs VBScript.net that system is a namespace rather than an object. When VBScript.net encounters system.data.oledb.oledbconnection, it follows the namespace hierarchy to find the oledbconnection class.
However, VBScript.net does not automatically search all the loaded assemblies. Instead, it only searches the assemblies it was instructed search. VBScript.net will always search the system.dll assembly. However, to instruct VBScript.net to also search in the system.data.dll, we need to add the following code to global.asax.cs:
using System.Reflection;
using Dlrsoft.Asp;
…
protected void Application_Start(object sender, EventArgs e)
{
Assembly a1 = Assembly.Load("System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089");
AspHandlerConfiguration.Assemblies.Add(a1);
}
The above code tells AspHandler to search the System.Data.dll assembly for classes.
Lastly, to saves the typing of the fully qualified class name, the “imports” statement also supports alias. The following code demonstrates using alias to save typing:
<%
imports t = system.text
dim sb = new t.StringBuilder()
sb.append("this")
sb.append(" is ")
sb.append(" stringbuilder!")
response.write sb.toString()
%>
Uploaded ASP Classic Compiler Build 0.6.1.34834
It has been a long time without a new release. This release contains only a few small fixes, but it is a change in the strategy. I decided to branch the code: one branch to stabilize the VBScript 4.0 features and the other branch to implement the VBScript 5.0 features such as Eval/Execute and VBScript classes.
The reason is that not all users need VBScript 5.0 features. Some users have only a small number of VBScript classes and can easily convert them to C# classes as VBScript.net can use .net classes. Although I still have a lot to do to reach a mature release, some users only use limited VBScript features so they can use ASP Classic Compiler first. I will issue a license under the “Early Access Program” for those users who want to go live with pre-release versions of ASP Classic Compiler.
The “Early Access Program” is an enterprise/ISV class of program with source code access. It allows participating users to self-support under emergency situations with source code access. Although the compiler code may have a steep learning curve, many users can learn simple bug fixes fairly quickly by following bug fixes in the source control system. The remaining users can still get the binaries from the Codeplext site without warranty. If you are interested in the program, please contact me using the contact user form. | https://weblogs.asp.net/lichen/archive/2010/3 | CC-MAIN-2021-21 | refinedweb | 530 | 53.17 |
’m sure we’re going to look back at 2009 and say “it was the best of times, it was the worst of times’ and it will no doubt be interesting. Here’s my predictions….
Although online social networking companies are already struggling with diminished valuations, in 2009 we’ll see social networks break out of their silos and become essential platform elements that see their way into other online applications such as travel, e-commerce, job-posting boards, online dating services, CRM services, web based email systems, etc. Blogging is also changing, slowing down in fact. Micro-blogging with status update-esque features in FaceBook, Windows Live, and of course the explosion of Twitter will take on even larger roles. It’s as true today as it was back in 1964 when fellow Canadian Marshall McLuhan wrote “The Medium Is The Message”.
Okay, so you’ll still be able to walk into a video store to rent a DVD or buy a spindle of blanks at your grocery store but make no mistake about it – the death march is on, and that includes you too Blu-Ray. Blu-Ray will never see the adoption curve that DVD’s had. They thought they won when HD-DVD died, but if winning means dying last, then sure, you won. We’ll increasingly be renting our movies on-demand through our cable boxes, on our converged PC’s and XBOX 360’s via services like Netflix. Along with this, the rest of us will start to realize we don’t really need to own our libraries of movies. With IPod penetration as high as it is, it may take longer to realize we don’t need to own our music either – frankly we don’t own it anyway even though the pricing models try to convince us we do. I won’t go out and predict the death of DRM, frankly, I think 2009 maybe the year where DRM starts to get more tolerable once we are clearly renting our music and movies. The Zune Pass is making some inroads here but until Apple starts offering a similar subscription pricing, this may take a bit longer.
The Mac Air may have been a bit ahead of the curve with dropping the optical drive, but get used to it. Expect more vendors to do the same as they reduce size or cram in additional batteries or hard drives.
If 2009 is the year of doing more with less, then this will surely be the NetBook’s year. Mainstream hardware manufacturers hate these and their small profit margins, but Acer and Intel will be raking it in building market share if not large bottom lines. Who knows, MS may learn to love the NetBook if they can get Acer to start shipping Windows 7 on them this year as well. Be prepared to see these everywhere in 2009, but don’t expect to see Apple make one (ever).
The big story at the end of 2008 has been the global suicide of the original Zune 30s. I predict that tomorrow they’ll be they shall rise from the dead but it might take until the 2nd for everybody to figure out that they need to entirely drain the battery. The big news is that there won’t be a Zune phone with the MS brand name on it, but the Zune UI will come to Windows Mobile (6.5?) turning legions of touch based smart phones into music players almost as good as an IPhone. The bad news is that without an App Store to vet software quality, crapware will continue to be the source of reliability issues for the Windows Mobile platform. The good news is that without an App Store, Windows Mobile users will have lots of choice in the software for their devices, not to mention lots of choice in devices, carriers and plans. The battle between Good and Evil may morph into the battle between Reliability and Choice.
Get your head out of the gutter, that’s not what I meant. What I did mean is that 12-24 months from now, it will be difficult to purchase a digital frame, LCD monitor or phone without an onscreen touch capability. Windows 7 will light these devices up and we’ll start to not think about the differences between Tablet PC’s and Notebooks as they just converge into a single device. With the advent of Silverlight, WPF and Surface computing, MS has been banging the “user experience” drum for a while now but when touch starts to be the expectation and not the exception, we’ll have to re-engineer our applications to optimize for the touch experience. This may turn out to be bigger than the mouse or even a windowed operation system.
In 2008 we’ve been teased with sold state hard drives but with less than stellar performance at outrageous prices, they’ve been on the fringe. In 2009 prices and read/write times will both come down in solid state drives, but with the increased capacity of USB memory sticks 32gb, 64gb +, we likely won’t see SSD drives hitting mainstream this year. Instead I think we’ll see an increase in the behavior of people keeping their entire lives on USB flash memory sticks. Hopefully we’ll see sync & backup software such as Windows Live Sync, Active Sync, Windows Home Server, etc. become more aware of these portable memory devices that may get synced from any device in your mesh.
Camera flash will have to have a new format as SDHC currently is maxed at 32gb. With the increase in demand for HD video recording on still and video cameras, we’ll need a new format. As such we’re seeing rock bottom prices on 2gb chips now. Maybe somebody will come out with a SD Raid device that lets us plug in a bank of 2GB SD Cards.
Cloud computing is going to be a very long term trend. I think we’ll only see baby steps in 2009 towards this goal. In the consumer space we’ll see more storage of digital media in the cloud, online backup services and the move of many applications to the cloud. Perfect for your Touch Zune Phone and Touch NetBook without an optical drive eh? IT shops will take a bit longer to embrace the cloud. Although many IT Data centers are largely virtualized already, applications are not all that virtual today and that doesn’t seem to be changing soon as developers have not whole-heartedly adopted SOA practices, addressed scalability and session management issues nor adopted concepts such as multi-tenancy. As we do more with less in 2009, we won’t see that changing much as a lot of software out there will be in “maintenance mode” during the recession.
Adobe Air & Silverlight are mainstreaming web deployed and updated rich client desktop apps. It’s hard to take advantage of touch interfaces and massive portable flash storage within a browser. All of these other trends can influence Smart Client applications, potentially to a tipping point. We’ll hopefully see out of browser, cross-platform Silverlight applications in 2009 to make this an easy reality on the MS Stack.
Many of my customers began large-scale re-writes of their key software assets in 2008, many of them against my recommendations. For most of my key customers in 2008 and into 2009 I’m an advocate of providing incremental value in short iterative releases, not major re-writes that take 6+ months to develop. Even if your application is written in PowerBuilder 6 or Classic ASP, avoid the temptation to rewrite any code that won’t see production for 4 months or longer. We can work towards componentized software by refactoring legacy assets and providing key integration points so that we can release updated modules towards gradual migration. It is difficult for software teams in this economy to produce big-bang, “boil the ocean”, build cathedral type projects. We simply can’t predict what our project’s funding will be in 4 months from now, or if we’ll be owned by another company, scaled down, out sourced or just plain laid off. That is of course unless you work for the government. Government spending will continue if not increase in 2009, but still, try to spend our taxpayer money wisely by delivering short incremental software releases. It allows you to build trust with your customers, mark a line in the sand and move onward and upward, and let’s you move quickly in times of fluid business requirements and funding issues.
Incremental, Value-Based software development isn’t easy. It takes lots of work, creative thinking, and much interop and integration work than one would prefer. It might easily seem like an approach that costs more in the long term, and in some cases you could be right. But if a company has to throw out work in progress after 6-8 months or never sees the value of it because of other changing business conditions, then what have you saved? Probably not your job anyway..
Live Search got a refresh recently and it's actually pretty good, dare I say may be even better than Google.
I'm going to give Live Search a trial as my default search engine for the next week or so and see how it goes. I'm optimistic.
In Part 0: Introduction of this series after asking the question "Does the Entity Framework replace the need for a Data Access Layer?", I waxed lengthy about the qualities of a good data access layer. Since that time I've received a quite a few emails with people interested in this topic. So without further adieu, let's get down to the question at hand.
So let's say you go ahead and create an Entity Definition model (*.edmx) in Visual Studio and have the designer generate for you a derived ObjectContext class and an entity class for each of your tables, derived from EntityObject. This one to one table mapping to entity class is quite similar to LINQ to SQL but the mapping capabilities move well beyond this to support.
So to use the Entity Framework as your data access layer, define your model and then let the EdmGen.exe tool do it's thing to the edmx file at compile time and we get the csdl, ssdl, and msl files - plus the all important code generated entity classes. So using this pattern of usage for the Entity Framework, our data access layer is complete. It may not be the best option for you, so let's explore the qualities of this solution.
To be clear, the assumption here is that our data access layer in this situation is the full EF Stack: ADO.NET Entity Client, ADO.NET Object Services, LINQ to Entities, including our model (edmx, csdl, ssdl, msl) and the code generated entities and object context. Somewhere under the covers there is also the ADO.NET Provider (SqlClient, OracleClient, etc.) In?
If you're following along, you're probably asking exactly where is this query code above being placed. For the purposes of our discussion, "business layer" could mean a business object or some sort of controller. The point to be made here is that we need to think of Entities as something entirely different from our Business Objects.
Entity != Business Object
In this model, it is up to the business object to ask the Data Access Layer to project entities, not business objects, but entities.
This is one design pattern for data access, but it is not the only one. A conventional business object that contains its own data, and does not separate that out into an entity can suffer from tight bi-directional coupling between the business and data access layer. Consider a Customer business object with a Load method. Customer.Load() would in turn instantiate a data access component, CustomerDac and call the CustomerDac's Load or Fill method. To encapsulate all the data access code to populate a customer business object, the CustomerDac.Load method would require knowledge of the structure the Customer business object and hence a circular dependency would ensue.
The workaround, if you can call it that, is to put the business layer and the data access layer in the same assembly - but there goes decoupling, unit testing and separation of concerns out the window.
Another approach is to invert the dependency. The business layer would contain data access interfaces only, and the data access layer would implement those interfaces, and hence have a reverse dependency on the business layer. Concrete data access objects are instantiated via a factory, often combined with configuration information used by an Inversion of Control container. Unfortunately, this is not all that easy to do with the EF generated ObjectContext & Entities.
Or, you do as the Entity Framework implies and separate entities from your business objects. If you've used typed DataSets in the past, this will seem familiar you to you. Substitute ObjectContext for SqlConnection and SqlDataAdapter, and the pattern is pretty much the same.
Your UI presentation layer is likely going to bind to your Entity classes as well. This is an important consideration. The generated Entity classes are partial classes and can be extended with your own code. The generated properties (columns) on an entity also have event handlers created for changing and changed events so you can also wire those up to perform some column level validation. Notwithstanding, you may want to limit your entity customizations to simple validation and keep the serious business logic in your business objects. One of these days, I'll do another blog series on handing data validation within the Entity Framework.
How are database connections managed?
Using the Entity Framework natively itself, the ObjectContext takes care of opening & closing connections for you - as needed when queries are executed, and during a call to SaveChanges. You can get access to the native ADO.NET connection if need be to share a connection with other non-EF data access logic. The nice thing however is that, for the most part, connection strings and connection management are abstracted away from the developer.?
By default the Entity Framework dynamically generates store specific SQL on the fly and therefore, the queries are not statically located in any one central location. Even to understand the possible queries, you'd have to walk through all of your business code that hits the entity framework to understand all of the potential queries.
But why would you care? If you have to ask that question, then you don't care. But if you're a DBA, charged with the job of optimizing queries, making sure that your tables have the appropriate indices, then you want to go to one central place to see all these queries and tune them if necessary. If you care strongly enough about this, and you have the potential of other applications (perhaps written in other platforms), then you likely have already locked down the database so the only access is via Stored Procedures and hence the problem is already solved.
Let's remind ourselves that sprocs are not innately faster than dynamic SQL, however they are easier to tune and you also have the freedom of using T-SQL and temp tables to do some pre-processing of data prior to projecting results - which sometimes can be the fastest way to generate some complex results. More importantly, you can revoke all permissions to the underlying tables and only grant access to the data via Stored Procedures. Locking down a database with stored procedures is almost a necessity if your database is oriented as a service, acting as an integration layer between multiple client applications. If you have multiple applications hitting the same database, and you don't use stored procedures - you likely have bigger problems.
In the end, this is not an insurmountable problem. If you are already using Stored Procedures, then by all means you can map those in your EDM. This seems like the best approach, but you could also embed SQL Server (or other provider) queries in your SSDL using a DefiningQuery.
Do changes in one part of the system affect others?
It's difficult to answer this question without talking about the possible changes.
Schema Changes: The conceptual model and the mapping flexibility, even under complex scenarios is a strength of the entity framework. Compared to other technologies on the market, with the EF, your chances are as good as they're going to get that a change in the database schema will have minimal impact on your entity model, and vice versa.: What if the change you want in one part of the system is to change your ORM technology? Maybe you don't want to persist to a database, but instead call a CRUD web service. In this pure model, you won't be happy. Both your Entities and your DataContext object inherit from base classes in the Entity Framework's System.Data.Objects namespace. By making references to these, littered throughout your business layer, decoupling yourself from the Entity Framework will not be an easy task..
As a bonus, what you do get is query composition across your domain model. Usually version 1.0 of a convention non-ORM data access layer provides components for each entity, each supporting crud behaviour. Consider a scenario where you need to show all of the Customers within a territory, and then you need to show the last 10 orders for each Customer. Now I'm not saying you'd do this, but what I've commonly seen is that while somebody might write a CustomerDac.GetCustomersByTerritory() method, and they might write an OrderDac.GetLastTenOrders(), they would almost never write a OrderDac.GetLastTenOrdersForCustomersInTerritory() method. Instead they would simply iterate over the collection of customers found by territory and call the GetLastTenOrders() over and over again. Obviously this is "good" resuse of the data access logic, however it does not perform very well.
Fortunately, through query composition and eager loading, we can cause the Entity Framework (or even LINQ to SQL) to use a nested subquery to bring back the last 10 orders for each customer in a given territory in a single round trip, single query. Wow! In a conventional data access layer you could, and should write a new method to do the same, but by writing yet another query on the order table, you'd be repeating the mapping between the table and your objects each time.
Layers, Schmayers: What about tiers?
EDM generated entity classes are not very tier-friendly. The state of an entity, whether it is modified, new, or to be delete, and what columns have changed is managed by the ObjectContext. Once you take an entity and serialize it out of process to another tier, it is no longer tracked for updates. While you can re-attach an entity that was serialized back into the data access tier, because the entity itself does not serialize it's changed state (aka diff gram), you can not easily achieve full round trip updating in a distributed system. There are techniques for dealing with this, but it is going to add some plumbing code between the business logic and the EF...and make you wish you had a real data access layer, or something like Danny Simmons' EntityBag (or a DataSet).
Does the Data Access Layer support optimistic concurrency?
Out of the box, yes, handily. Thanks to the ObjectContext tracking state, and the change tracking events injected into our code generated entity properties. However, keep in mind the caveat with distributed systems that you'll have more work to do if your UI is separated from your data access layer by one or more tiers.
How does the Data Access Layer support transactions?
Because the Entity Framework builds on top of ADO.NET providers, transaction management doesn't change very much. A single call to ObjectContext.SaveChanges() will open a connection, perform all inserts, updates, and deletes across all entities that have changed, across all relationships and all in the correct order....and as you can imagine in a single transaction. To make transactions more granular than that, call SaveChanges more frequently or have multiple ObjectContext instances for each unit of work in progress. To broaden the scope of a transaction, you can manually enlist using a native ADO.NET provider transaction or by using System.Transactions.:.
In this first post, I first provide some background on the notion of a Data Access Layer as a frame of reference, and specifically, identify the key goals and objectives of a Data Access Layer.
While Martin Fowler didn't invent the pattern of layering in enterprise applications, his Patterns of Enterprise Application Architecture is a must read on the topic. Our goals for a layered design (which may often need to be traded off against each other) should include:.
In addition to the goals of any layer mentioned above, there are some design elements specific to a Data Access Layer common to the many layered architectures:
Getting a little more concrete, there are a host of other issues that also need to be considered in the implementation of a D 30 3rd party Object Relational Mapping tools available to choose from.
Ok, so if you're not familiar with the design goals of the Entity Framework (EF) you can read all about it here or watch a video interview on channel 9, with Pablo Castro, Britt Johnson, and Michael Pizzo. A year after that interview, they did a follow up interview here.
In the next post, I'll explore the idea of the Entity Framework replacing my data access layer and evaluate how this choice rates against the various objectives above. I'll then continue to explore alternative implementations for a DAL using the Entity Framework.. | http://blogs.objectsharp.com/?tag=/Services | CC-MAIN-2016-07 | refinedweb | 3,674 | 59.23 |
Flutter native_state plugin
This plugin allows for restoring state after the app process is killed while in the background.
What this plugin is for
Since mobile devices are resource constrained, both Android and iOS use a trick to make it look like apps are always running in the background: whenever the app is killed in the background, an app has an opportunity to save a small amount of data that can be used to restore the app to a state, so that it looks like the app was never killed.
For example, consider a sign up form that a user is filling in. When the user is filling in this form, and a phone call comes in, the OS may decide that there aren't enough resources to keep the app running and will kill the app. By default, Flutter does not restore any state when relaunching the app after that phone call, which means that whatever the user has entered has now been lost. Worse yet, the app will just restart and show the home screen, which can be confusing to the user as well.
Saving state
First of all: the term "state" may be confusing, since it can mean many things. In this case state means: the bare minimum amount of data you need to make it appear that the app was never killed. Generally this means that you should only persist things like data being entered by the user, or an id that identifies whatever was displayed on the screen. For example, if your app is showing a shopping cart, only the shopping cart id should be persisted using this plugin; the shopping cart contents related to this id should be loaded by other means (from disk, or from the network).
Integrating with Flutter projects on Android
This plugin uses Kotlin, so make sure your Flutter project has Kotlin configured.
Find the `AndroidManifest.xml` file in `app/src/main` of your Flutter project. Then remove the `name` attribute from the `<application>` tag:
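```xml
<!-- Before (Flutter template): -->
<application
    android:name="io.flutter.app.FlutterApplication"
    android:label="my_app"
    android:icon="@mipmap/ic_launcher">

<!-- After, with the android:name attribute removed: -->
<application
    android:label="my_app"
    android:icon="@mipmap/ic_launcher">
```

(The `android:label` and `android:icon` values above are illustrative reconstructions of the default template; only the `android:name` line needs to go.)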
When not removed, you'll get a compilation error similar to this:
```
Attribute application@name value=(io.flutter.app.FlutterApplication) from AndroidManifest.xml:10:9-57
is also present at [:native_state] AndroidManifest.xml:7:18-99 value=(nl.littlerobots.flutter.native_state.FlutterNativeStateApplication).
Suggestion: add 'tools:replace="android:name"' to <application> element at AndroidManifest.xml:9:5-32:19 to override.
```
If you prefer to use your own application class, add the `tools:replace="android:name"` attribute to `AndroidManifest.xml` as suggested in the error message, and call `StateRegistry.registerCallbacks()` from your `Application` class.
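For illustration, a minimal sketch of such an `Application` class — the no-argument `registerCallbacks()` call follows the text above, and the import paths (taken from the package names visible in the error message and the Flutter template) are assumptions:

```kotlin
import io.flutter.app.FlutterApplication
import nl.littlerobots.flutter.native_state.StateRegistry

class MyApplication : FlutterApplication() {
    override fun onCreate() {
        super.onCreate()
        // Register the plugin's state save/restore callbacks.
        StateRegistry.registerCallbacks()
    }
}
```

Reference it from the manifest with `android:name=".MyApplication"` alongside the `tools:replace="android:name"` attribute.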
Integrating with Flutter projects on iOS
This plugin uses Swift, so make sure your project is configured to use Swift.
Your `AppDelegate.swift` in the `ios/Runner` directory should look like this:
```swift
import Flutter
// add this line
import native_state

@UIApplicationMain
@objc class AppDelegate: FlutterAppDelegate {
    override func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?
    ) -> Bool {
        GeneratedPluginRegistrant.register(with: self)
        return super.application(application, didFinishLaunchingWithOptions: launchOptions)
    }

    // add these methods
    override func application(_ application: UIApplication, didDecodeRestorableStateWith coder: NSCoder) {
        StateStorage.instance.restore(coder: coder)
    }

    override func application(_ application: UIApplication, willEncodeRestorableStateWith coder: NSCoder) {
        StateStorage.instance.save(coder: coder)
    }

    override func application(_ application: UIApplication, shouldSaveApplicationState coder: NSCoder) -> Bool {
        return true
    }

    override func application(_ application: UIApplication, shouldRestoreApplicationState coder: NSCoder) -> Bool {
        return true
    }
}
```
Using the plugin
The
SavedStateData class allows for storing data by key and value. To get access to
SavedStateData wrap your
main application in a
SavedState widget; this is the global application
SavedState widget. To retrieve the
SavedStateData
use
SavedState.of(BuildContext) or use the
SavedState.builder() to get the data in a builder.
SavedState widgets manage the saved state. When they are disposed, the associated state is also cleared. Usually you want to
wrap each page in your application that needs to restore some state in a
SavedState widget. When the page is no longer displayed, the
SavedState associated with the page is automatically cleared.
SavedState widgets can be nested multiple times, creating nested
SavedStateData that will be cleared when a parent of the
SavedStateData is cleared, for example, when the
SavedState widget is removed
from the widget tree.
Saving and Restoring state in
StatefulWidgets
Most of the time, you'd want your
StatefulWidgets to update the
SavedState. Use
SavedState.of(context) then call
state.putXXX(key, value) to
update the state.
To restore state in your
StatefulWidget add the
StateRestoration mixin to your
State class. Then implement the
restoreState(SavedState)
method. This method will be called once when your widget is mounted.
Restoring navigation state
Restoring the page state is one part of the equation, but when the app is restarted, by default it will start with the default route,
which is probably not what you want. The plugin provides the
SavedStateRouteObserver that will save the route to the
SavedState automatically. The saved route can then be retrieved using
restoreRoute(SavedState) static method. Important note: for
this to work you need to setup your routes in such a way that the
Navigator will restore them when you set the
initialRoute property.
Another requirement is that you set a
navigatorKey on the
MaterialApp. This is because the tree is rebuilt after the
SavedState is initialised. When
rebuilding, the Flutter needs to reuse the existing
Navigator that receives the
initialRoute.
Why do I need this at all? My apps never get killed in the background
Lucky you! Your phone must have infinite memory :)
Why not save all state to a file
Two reasons: you are wasting resources (disk and battery) when saving all app state, using
native_state is more efficient as it only saves the bare
minimum amount of data and only when the OS requests it. State is kept in memory so there are no disk writes at all.
Secondly, even though the app state might have saved, the OS might choose not to restore it. For example, when the user has killed your app from the task switcher, or after some amount of time when it doesn't really make sense any more to restore the app state. This is up to the discretion of the OS, and it is good practice to respect that, in stead of always restoring the app state.
How do I test this is working?
For both Android and iOS: start your app and send it to the background by pressing the home button or using a gesture. Then from XCode or Android Studio, kill the app process and restart the app from the launcher. The app should resume from the same state as when it was killed.
When is state cleared by the OS
For Android: when the user "exits" the app by pressing back, and at the discretion of the OS when the app is in the background.
For iOS: users cannot really "exit" an app on iOS, but state is cleared when the user swipes away the app in the app switcher.. | https://pub.dev/documentation/native_state/latest/ | CC-MAIN-2020-05 | refinedweb | 1,152 | 53.61 |
Java Log Best Practices
What do I mean? There are lots of Java logging frameworks and libraries out there, and most developers use one or more of them every day. Two of the most common examples for Java developers are log4j and logback. They are simple and easy to use and work great for developers. Basic java log files are just not enough, though, but we have some Java best practices and tips to help you make the most of them! applications – so even more logs to dig through.
- It’s flat and hard to query; even if you do put it in SQL, you are going to have to do full-text indexing to make it usable.
- It’s hard to read; messages are scrambled like spaghetti.
- You generally don’t have any context of the user, etc.
- You probably lack some details that would be helpful. (You mean “log.Info(‘In the method’)” isn’t helpful???)
- You will be managing log file rotation and retention. Additionally, you have all this rich data about your app that is being generated and you simply aren’t proactively putting it to work.
It’s Time to Get Serious About Logging
Once you’re working on an application that is not running on your desktop, logging messages (including exceptions) are usually your only lifeline to quickly discovering why something in your app isn’t working correctly. Sure, APM tools can alert you to memory leaks and performance bottlenecks, but generally lack enough detail to help you solve a specific problem, i.e. why can’t this user log in, or why isn’t this record processing?
At Stackify, we’ve built a “culture of logging” which set.
In this post, we’ll explore these best practices, and share what we’ve done to address it, much of which has become a part of Stackify’s log management product. Also, if you haven’t used Prefix to view your logs, be sure to check it out!
Start Logging All the Things!
I’ve worked in a lot of shops where log messages looked like this:
I’ll give the developer credit; at least they are using a try/catch and handling the exception. The exception will likely have a stack trace so I know roughly where it came from, but no other context is logged.
Sometimes, they even do some more proactive logging, like this:
But generally, statements like that don’t go a long way towards letting you know what’s really happening in your app. If you’re tasked with troubleshooting an error in production, and/or on doing that.
Walk the Code
Let’s pretend that you have a process that you want to add logging around so that you can look at what happened. You could just put a try / catch around the entire thing and handle the exceptions (which you should) but it doesn’t tell you much about what was passed into the request. Take a look at the following, oversimplified example.
public class Foo { private int id; private double value; public Foo(int id, double value) { this.id = id; this.value = value; } public int getId() { return id; } public double getValue() { return value; } }
Take the following factory method, which creates a Foo. Note how I’ve opened the door for error – the method takes a Double as an input parameter. I call doubleValue() but don’t check for null. This could cause an exception.
public class FooFactory { public static Foo createFoo(int id, Double value) { return new Foo(id, value.doubleValue()); } }
This is a simple scenario, but it serves the purpose well. Assuming this is a really critical aspect of my Java app (can’t have any failed Foos!) let’s add some basic logging so we know what’s going on.
public class FooFactory { private static Logger LOGGER = LoggerFactory.getLogger(FooFactory.class); public static Foo createFoo(int id, Double value) { LOGGER.debug("Creating a Foo"); try { Foo foo = new Foo(id, value.doubleValue()); LOGGER.debug("{}", foo); return foo; } catch (Exception e) { LOGGER.error(e.getMessage(), e); } return null; } }
Now, let’s create two foos; one that is valid and one that is not:
FooFactory.createFoo(1, Double.valueOf(33.0)); FooFactory.createFoo(2, null);
And now we can see some logging, and it looks like this:
2017-02-15 17:01:04,842 [main] DEBUG com.stackifytest.logging.FooFactory: Creating a Foo 2017-02-15 17:01:04,848 [main] DEBUG com.stackifytest.logging.FooFactory: com.stackifytest.logging.Foo@5d22bbb7 2017-02-15 17:01:04,849 [main] DEBUG com.stackifytest.logging.FooFactory: Creating a Foo 2017-02-15 17:01:04,851 )
Now we have some logging – we know when Foo objects are created, and when they fail to create in createFoo(). But we are missing some context that would help. The default toString() implementation doesn’t build any data about the members of the object. We have some options here, but let’s have the IDE generate an implementation for us.
@Override public String toString() { return "Foo [id=" + id + ", value=" + value + "]"; }
Run our test again:
2017-02-15 17:13:06,032 [main] DEBUG com.stackifytest.logging.FooFactory: Creating a Foo 2017-02-15 17:13:06,041 [main] DEBUG com.stackifytest.logging.FooFactory: Foo [id=1, value=33.0] 2017-02-15 17:13:06,041 [main] DEBUG com.stackifytest.logging.FooFactory: Creating a Foo 2017-02-15 17:13:06,043 )
Much better! Now we can see the object that was logged as “[id=, value=]”. Another option you have for toString is to use Javas’ reflection capabilities. The main benefit here is that you don’t have to modify the toString method when you add or remove members. Here is an example using Google’s Gson library. Now, let’s look at the output:
2017-02-15 17:22:55,584 [main] DEBUG com.stackifytest.logging.FooFactory: Creating a Foo 2017-02-15 17:22:55,751 [main] DEBUG com.stackifytest.logging.FooFactory: {"id":1,"value":33.0} 2017-02-15 17:22:55,754 [main] DEBUG com.stackifytest.logging.FooFactory: Creating a Foo 2017-02-15 17:22:55,760 )
When you log objects as JSON and use Stackify’s Retrace tool, you can get some nice details like this:
Retrace Logging Dashboard JSON Viewer
Logging More Details with Diagnostic Contexts
And this brings us to one last point on logging more details: diagnostic context logging. When it comes to debugging a production issue, you might have the “Creating a Foo” message thousands of times in your logs, but with no clue who the logged in user was that created it. Know who the user was is the sort of context that is priceless in being able to quickly resolve an issue. Think about what other detail might be useful – for example, HttpWebRequest details. But who wants to have to remember to log it every time? Diagnostic context logging to the rescue, specifically the mapped diagnostic context. Read more about SLF4J’s MDC here:.
The easiest way to add context items to your logging is usually a servlet filter. For this example, let’s create a servlet filter that generates a transaction id and attaches it to the MDC.
public class LogContextFilter implements Filter { public void init(FilterConfig config) { } public void destroy() { } public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws ServletException, IOException { String transactionId = UUID.randomUUID().toString(); MDC.put("TRANS_ID", transactionId); try { chain.doFilter(request, response); } finally { MDC.clear(); } } }
Now, we can see some log statements like this:
More context. We can now trace all log statements from a single request. LOGGER, then overhead should be low. Last, if you’re worried about space and log file rotation, there are smarter ways to do it, and we’ll talk about that in the next section.
Work Smarter, Not Harder
Now that we’re logging everything, and it’s providing more contextual data, we’re going to look at the next part of the equation. As I’ve mentioned, and demonstrated, just dumping all of this out to flat files still doesn’t help you out a lot in a large, complex application and environment. Factor in thousands of requests, files spanning multiple days, weeks, or longer, and across multiple servers, you have to consider how you are going to quickly find the data that you need.
What we all really need is a solution that provides:
- Aggregates all Log & Exception data to one place
- Makes it available, instantly, to everyone on your team
- Presents a timeline of logging throughout your entire stack/infrastructure
- Is highly indexed and searchable by being in a structured format This is the part where I tell you about Stackify Retrace. troubleshooting.
First, we realize that lots of developers already have logging in place, and aren’t going to want to take a lot of time to rip that code out and put new code in. That’s why we’ve created logging appenders for the most common Java logging frameworks.
Continuing with log4j as a sample, the setup is easy. Just add the Stackify appender to your project’s maven pom file.
<dependency> <groupId>com.stackify</groupId> <artifactId>stackify-log-log4j12</artifactId> <version>1.1.9</version> <scope>runtime</scope> </dependency>
Also, add in some configuration for the Stackify appender to your logging.properties file.
log4j.rootLogger=DEBUG, CONSOLE, STACKIFY.STACKIFY=com.stackify.log.log4j12.StackifyLogAppender log4j.appender.STACKIFY.apiKey=[HIDDEN] log4j.appender.STACKIFY.application=test-logging log4j.appender.STACKIFY.environment=test
As you can see, if you’re already using a different appender, you can keep it in place and put them side-by-side. Now that you’ve got your logs streaming to Stackify we can take a look at the logging dashboard. (By the way, if our monitoring agent is installed, you can also send Syslog entries to Stackify as well!)
This dashboard shows a consolidated stream of log data, coming from all your servers and apps, presented in a timeline. From here, you can quickly
- View logs based on a range of time
- Filter for specific servers, apps, or environments Plus there are a couple of really great usability things built in. One of the first things you’ll notice is that chart at the top. It’s a great way to quickly “triage” your application. The blue line indicates the rate of log messages, and the red bars indicate # of exceptions being logged.
It’s clear that a few minutes ago, my web app started having a lot more consistent activity but more importantly, we started getting more exceptions about the same time. Exceptions don’t come without overhead for your CPU and memory, and they also can have a direct impact on user satisfaction, which can cost real money.
By zooming in on the chart to this time period, I can quickly filter my log detail down to that time range and take a look at the logs for that period of time.
Searching Your Logs
Do you see that blue text below that looks like a JSON object?
Well, it is a JSON object. That’s the result of logging objects, and adding context properties all objects with an id of 5. Fortunately, our log aggregator is smart enough to help in this situation. That’s because when we find serialized objects in logs, we index each and every field we find. That makes it easy to perform a search like this:
json.idNumber:5.0
That search yields the following results:
Want to know what else you can search by? Just click on the document icon when you hover over a log record, and you’ll see all the fields that Stackify indexes. Being able to get more value out of your logs and search by all the fields is called structured logging.
Exploring Java Exception Details
You may have also noticed this little red bug icon (
) next to exception messages. That’s because we treat exceptions differently by automatically showing more context. Click on it and we present a deeper view of that exception.
Our libraries not only grab the full stack trace, but all of the web request details, including headers, query strings, and server variables, when available. 60 times over the last hour. Errors and logs are closely related, and in an app where a tremendous amount of logging can occur, exceptions could sometimes get a bit lost in the noise. That’s why we’ve built an Errors Dashboard as well, to give you this same consolidated view but limited to exceptions.
Here I can see a couple of great pieces of data:
- I’ve had an uptick in my rate of exceptions over the past few minutes.
- The majority of my errors are coming from my “test” environment – to the tune of about 84; perhaps some buggy code (like a leaking SQL connection pool) went out and is causing a higher rate of SQL timeout errors than normal.
It’s not hard to imagine a lot of different scenarios for which this could provide early warning and detection. Hmm. Early warning and detection. That brings up another great topic.
Monitortest’ checking for null values when creating Foo objects. I’ve since fixed that and confirmed it by looking at the details for that particular error. As you can see, the last time it happened was 12 minutes ago:
It was a silly mistake, but one that is easy to make. a great little bit of automation to help out when you “think” you’ve solved the issue and want to make sure.
Log Monitors
Some things aren’t very straightforward to monitor. Perhaps you have a critical process that runs asynchronously and the only record of its success (or failure) is logging statements. Earlier in this post, I showed the ability to run deep queries against your structured log data, and any of those queries can be saved and monitored. I’ve got a very simple scenario here: my query is executed every minute, and we can monitor how many matching records we have.
It’s just a great simple way to check system health if a log file is your only indication.
Java Logging Best Practices
All of this error and log data can be invaluable, especially when you take a step back and look at a slightly larger picture. Below is the Application Dashboard for a Java web app that contains all of the monitoring:
As you can see, you get some great contextual data at a glance that errors and logs contribute to: Satisfaction and HTTP Error Rate. You can see that user satisfaction is high and the HTTP error rate is low. You can quickly start drilling down to see which pages might not be performing well, and what errors are occurring:
There was a lot to cover in this post, and I feel like I barely scratched the surface. If you dig a little deeper or even get your hands on it, you can! I hope that these Java logging best practices will help you write better logs and save time troubleshooting.
All of our Java logging appenders are available on GitHub and you can sign up for a free trial to get started with Stackify today!
Java Best Practices for Smarter Application Logging & Exception Handling | https://notes.haifengjin.com/tech/software_engineering/java_log_best_practices/ | CC-MAIN-2020-45 | refinedweb | 2,560 | 62.88 |
A web service allows a site to expose programmatic functionality via the Internet. Web services can accept messages and optionally return replies to those messages.
Creating a Web Service
This example shows step-by-step how to create a web service in Visual Studio 2010.
1. Start Microsoft Visual Studio 2010. Click File > New > WebSite. Following Window will appear.
Select the "Visual C#" node under "Project Types". Under Templates, select "ASP.NET Web Service" and name the project "WebServiceSample". Click OK Button.
2. When you press OK the following window will appear.
We can see that few code of lines are automatically written here. There is a demo function named HelloWorld (), when it will be called it will return a value “Hello world”. This function HelloWorld () is just for a reference to how to make a new function. If it is not require, you can also remove/comment it.
Now, here we have to do our coding according to our task.
Here, I am creating a function performOperation(),which will accept three parameters operation,number1,number2 sequentially and will return the value of calculation as double data type.
3. Code () {
}
[WebMethod]
public double performOperation(string operation,double number1,double number2)
//this will accept three parameters and return the result of calculation
{
double result;
switch (operation)
{
case "Add":
result = number1 + number2;
break;
case "Subtract":
result = number1 - number2;
break;
case "Multiply":
result = number1 * number2;
break;
case "Divide":
result = number1 / number2;
break;
default:
result = 0;
break;
}
return result;
}
}
4. Now, we need to compile the application and add the compiled application to IIS in
5. Now, Create a New ASP.Net Website to demonstrate the Web Service.
Open Visual Studio 2010 > File > New >ASP.NET website and provide it website name, and design the following User Interface
6. We need to add the web reference of the web service. To add web reference of the web service right click in “Solution Explorer” and select “Add Service Reference”, add web reference dialog box will appear. There add the web reference of the service in my case it is
On Clicking, there will be following window opened.
Click on Advance Button, Following window will open
Now, Click on Add Web Reference Button, following window will be open.
Paste copy string in “URL” box, then click button “go”. If service found then click on Add Reference Button.
After Successful adding the Service Reference, you can see it in your Solution Explorer.
7. Now, back to coding part, and add the following code on the Click Event of the Get Result Button.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
public partial class _Default : System.Web.UI.Page
{
double result;
localhost.Service ws = new localhost.Service();
protected void Page_Load(object sender, EventArgs e)
{
}
protected void Button1_Click(object sender, EventArgs e)
{
//Response.Write(cbOperation.SelectedValue.ToString());
if (Convert.ToInt32(txtN2.Text) == 0 && cbOperation.SelectedValue == "Divide")
{
Response.Write("Can't Divide a Number by 0 ");
return;
}
result= ws.performOperation(cbOperation.SelectedValue.ToString(), Convert.ToInt32(txtN1.Text), Convert.ToInt32(txtN2.Text));
//passing the parameter to the Web Service's Method
Response.Write(result.ToString());
// it will get the result from web service and will print it.
}
}
8. Run the Website > Enter the First and Second Number in corresponding text box and the select the desired operation wish to perform and click on Get Result Button.
| https://www.mindstick.com/articles/987/web-services-in-dot-net | CC-MAIN-2019-30 | refinedweb | 568 | 51.85 |
Created on 2011-06-28 10:07 by Thorney, last changed 2012-11-14 07:16 by pitrou. This issue is now closed.
Raymond, do we care whether or not the pure Python version of functools.partial supports inheritance and instance testing?
The constructor is technically documented as returning a "partial object" rather than a simple staticmethod instance with additional attributes.
My own preference leans towards keeping the closure based implementation due to its simplicity, which is what makes it particularly useful as a cross-check on the C implementation.
> Raymond, do we care whether or not the
> pure Python version of functools.partial
> supports inheritance and instance testing?
We don't care. The docs make very few
guarantees beyond the core functionality.
Everything else is an implementation detail.
Cheers for the comments Eric. I've modified the patch accordingly.
Brian's patch looks ok to me. There's a missing newline (or two) just before test_main() but that can easily be fixed on commit.
Ezio and I made further minor comments that can be handled by the person doing the commit; I’d like to do it.
Just noticed one minor nit with the patch: the pure Python version of functools.partial should support "func" as a keyword argument that is passed to the underlying object. The trick is to declare a positional only argument like this:
def f(*args, **kwds):
first, *args = args
# etc...
Also, the closure based implementation should be decorated with @staticmethod (see) and the tests updated accordingly.
I've updated the patch to address the comments here and in the code review.
I added more cross testing of the pure Python implementation of partial - as you pointed out inheritance wasn't supported so I changed from the simple closure to a class implementation.
Instead of skipping repr tests for the pure Python implementation could we not just implement it? I did skip the pickle test for the Python implementation though.
Nick, I wasn't sure how to decorate the partial object as a staticmethod so I don't think this patch addresses issue 11704.
Also I didn't understand why Lock was being imported from _thread instead of thread. Since coverage went to 100% and the tests continue to all pass when I changed this.
Why does the pure Python version of partial have to be so complicated?
I don't think the __class__, __setattr__ and __delattr__ are useful. As Raymond said, only the core, documented functionality needs to be preserved, not implementation details.
Something else:
- from _thread import allocate_lock as Lock
+ from thread import allocate_lock as Lock
The module is named _thread in 3.x, so this shouldn't have been changed (but admittedly it's thread in 2.x).
Thanks Antoine, the __class__ attribute wasn't useful, I've removed that. Overriding the __setattr__ and __delattr__ gives consistent behaviour with the both versions - allowing the unittest reuse. Also I've changed thread back to _thread.
Isn't more compatibility between the Python and C implementations desired? Is it an aim to document more of the functionality? An earlier version of this patch had a closure implementation of partial; I'm happy to revert to that if simplicity is preferred to compatibility?
Should the caching decorators be tested from multiple threads?
Le mardi 24 juillet 2012 à 06:00 +0000, Brian Thorne a écrit :
> Isn't more compatibility between the Python and C implementations
> desired?
IMHO, not when compatibility regards obscure details such as whether
setting an attribute is allowed or not. I don't know why this was
codified in the unit tests in the first place, perhaps Nick can shed
some light.
> Should the caching decorators be tested from multiple threads?
Why not, if there's an easy way to do so.
Back to a simpler closure implementation of partial and skip the repr test for python implementation.
New changeset fcfaca024160 by Antoine Pitrou in branch 'default':
Issue #12428: Add a pure Python implementation of functools.partial().
Sorry for the delay. I have now committed the patch to 3.4 (default). Thank you!
The following from the changeset left me with questions:
-from _functools import partial, reduce
+try:
+ from _functools import reduce
+except ImportError:
+ pass
* Why the try block when there wasn't one before?
* Should reduce be added to __all__ only conditionally?
* Should the pure Python partial only be used if _functools.partial is not available?
* Should _functools.partial be removed?
> * Why the try block when there wasn't one before?
> * Should reduce be added to __all__ only conditionally?
My mistake, the try block should have just covered the import of partial - that is after all the exceptional circumstance we can deal with by using the pure python implementation.
Possibly reduce could be handled in a similar way with a fallback python implementation? Otherwise your suggestion of conditionally adding it to __all__ makes sense to me.
> * Should the pure Python partial only be used if _functools.partial is not available?
> * Should _functools.partial be removed?
What are the main considerations to properly answer these last questions? Performance comparison between the implementations, maintainability?
>?
>> *.
> >?
I tried to remove the try block, but when making the import
unconditional the tests fail with an ImportError (because reduce doesn't
have a pure Python implementation). I haven't investigated further,
since the try block doesn't look like a real issue to me. | https://bugs.python.org/issue12428 | CC-MAIN-2019-09 | refinedweb | 897 | 58.18 |
atomic_toggle()
Safely toggle a variable
Synopsis:
#include <atomic.h> void atomic_toggle( volatile unsigned * loc, unsigned bits );
Since:
BlackBerry 10.0.0
Arguments:
- loc
- A pointer to the location whose bits you want to toggle.
- bits
- The bits that you want to change.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The atomic_toggle() toggle the 0xdeadbeef bits in a flag:
#include <atomic.h> … volatile unsigned flags; … atomic_toggle( &flags, 0xdeadbeef );
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/a/atomic_toggle.html | CC-MAIN-2020-10 | refinedweb | 103 | 53.27 |
seong lee4,503 Points
What did I do wrong?
I got this wrong for some reason, please help me. Thank you in advance
def first_function(arg1): return 'arg1 is {}'.format(arg1) def second_function(arg1): return 'arg1 is {}'.format(arg1) class MyClass: args = [1, 2, 3] def class_func(self): return self.args
4 Answers
Christian Rowden2,278 Points
This one was a bit frustrating for me as well. It is extremely meticulous. You must ensure that any blank spaces at the end of the lines of code as taken care of as well. I hope that helps
Cheers and happy coding.
Christian Rowden2,278 Points
I can't since I can't interactively manipulate your code in any way but if you put your cursor at the end of your lines and use the up and down arrow keys you may find some white spaces that you can't actually see just be looking at the code. | https://teamtreehouse.com/community/what-did-i-do-wrong-39 | CC-MAIN-2021-21 | refinedweb | 156 | 82.24 |
Since the writing of my sidebar gadget introduction, I've been wrestling with the frustration of not being able to do with a gadget what can be easily done with .NET. I love how compact and simple gadgets can be, but I find it hard to build a truly useful gadget simply because there's no real power using JavaScript. Unfortunately for the .NET community, gadgets rely almost purely on JavaScript. It's hard to mix my excitement for gadgets with their huge limitations.
Enter Gadget .NET Interop!
In this article we'll explore how to build an interop layer between gadgets and .NET so you can run any .NET code from your sidebar gadget. We'll do that by building a C# project to read your GMail inbox.
It's not fair to say that gadget's can't run .NET code. The truth is that it's very easy to create COM object instances from scripting languages. The real problem is that it's terribly inconvenient to have to register all your code for COM interop. Doing so would require you to first modify all your code to be COM compatible. Then you'd have to re-package your code and distribute an MSI file along with each and every gadget just to install and register your assembly (and possibly add it to the GAC if it's going to be shared across gadgets). That kind of workaround isn't a realistic solution, especially if you already have code that you don't want to rewrite and package just for COM interop. Further, you can't assume your users have the knowledge or permissions to install a COM component.
What's the answer then?
No matter what, there must be some COM pieces in place; otherwise we'll never get past the limitations of JavaScript. We'll get to the GMail part once we have a suitable COM layer (see below if you're comfortable with COM in .NET). Let's start with the real nuts and bolts of the solution by creating a basic COM object that can be used load any .NET assembly. See this article for more details on how .NET COM objects.
The idea is simple; create a small, lightweight .NET COM component that uses reflection to load any assembly and type. Then, that type can be called directly from JavaScript. Let's take a look at the interface for the "Gadget Adapter" that will do the bulk of the work.
[ComVisible(true),
GuidAttribute("618ACBAF-B4BC-4165-8689-A0B7D7115B05"),
InterfaceType(ComInterfaceType.InterfaceIsDual)]
public interface IGadgetInterop
{
object LoadType(string assemblyFullPath, string className);
object LoadTypeWithParams(string assemblyFullPath, string className,
bool preserveParams);
void AddConstructorParam(object parameter);
void UnloadType(object typeToUnload);
}
There are only four methods that the implementing Gadget Adapter class will need to handle. The point to take note of is that the interface has three attributes that will allow us to expose the implementing class as a COM object. Four methods are all we need to create and call any type in managed code.
Now that the interface is defined, let's look at the actual Gadget Adapter implementation of this interface. We'll break it out piece-by-piece starting with the class attributes.
[ComVisible(true),
GuidAttribute("89BB4535-5AE9-43a0-89C5-19B4697E5C5E"),
ProgId("GadgetInterop.GadgetAdapter"),
ClassInterface(ClassInterfaceType.None)]
public class GadgetAdapter : IGadgetInterop
{
...
}
There are a few differences between these attributes and the attributes on the interface. The most important attribute for our purposes is the "ProgId" attribute. This attribute represents the string we'll use to create the ActiveX object via JavaScript. Now that the GadgetAdapter is decorated properly, the next step is loading assemblies and creating class instances. The AddConstructorParam method allows JavaScript code to add values that will be passed to the class constructor's arguments. This is only necessary when want to load a .NET type using a constructor with one or more arguments.
GadgetAdapter
AddConstructorParam
private ArrayList paramList = new ArrayList();
public void AddConstructorParam(object parameter)
{
paramList.Add(parameter);
}
The next method is where all the magic happens. The LoadTypeWithParams method has the three arguments that allow any .NET assembly to be loaded. The method takes the path to the assembly, the type to create, and a flat for handling constructor parameter disposal.
LoadTypeWithParams
public object LoadTypeWithParams(string assemblyFullPath, string className,
bool preserveParams)
{
...
Assembly assembly = Assembly.LoadFile(assemblyFullPath);
object[] arguments = null;
if (paramList != null && paramList.Count > 0)
{
arguments = new object[paramList.Count];
paramList.CopyTo(arguments);
}
BindingFlags bindings = BindingFlags.CreateInstance |
BindingFlags.Instance |
BindingFlags.Public;
object loadedType = assembly.CreateInstance(className, false, bindings,
null, arguments, CultureInfo.InvariantCulture,
null);
...
return loadedType;
}
Using standard .NET reflection, the specified assembly is loaded and an instance of the input type is created. That instance is returned and is then directly callable by JavaScript (more on that to come). The preserveParams flag prevents the constructor arguments from being cleared after the object is created. This is only necessary when you're creating multiple instances of a class with the same constructor arguments.
preserveParams
Finally, because we're in the COM world, we have to be careful to do our own object disposal. The UnloadType method calls dispose of the incoming object to allow for graceful cleanup.
UnloadType
public void UnloadType(object typeToUnload)
{
...
if (typeToUnload != null && typeToUnload is IDisposable)
{
(typeToUnload as IDisposable).Dispose();
typeToUnload = null;
}
catch { }
...
}
The one convention I opted for is that classes exposed to gadgets must implement IDisposable, so only types implementing that interface will work with the sample code. That's all there is to the interop layer. It creates .NET objects and it destroys .NET objects; nothing more, nothing less.
IDisposable
Now we have a working COM-friendly Gadget Adapter, but how does it get registered? Normally you would rely on an MSI installer to register and GAC your COM components. Remember that the goal here is to run .NET code in a gadget without the user having to install an MSI. To get around the MIS (or RegAsm.exe) we can "fake" the registration by adding the right values directly to the registry (My thanks to Frederic Queudret for this idea). The GadgetInterop.js a JavaScript library is designed to facilitate the Gadget Adapter registration (as well as all the COM object wrapping). The RegAsmInstall JavaScript method takes all the information about the Gadget Adapter interop assembly and creates all the necessary registry entries to register it. The beauty of this step is that any gadget can register the interop layer at runtime the first time the gadget executes.
RegAsmInstall
function RegAsmInstall(root, progId, cls, clsid, assembly, version, codebase)
{
var wshShell;
wshShell = new ActiveXObject("WScript.Shell");
wshShell.RegWrite(root + "\\Software\\Classes\\", progId);
wshShell.RegWrite(root + "\\Software\\Classes\\" + progId + "\\", cls);
wshShell.RegWrite(root + "\\Software\\Classes\\" + progId + \\CLSID\\,<a href=",">
</a> clsid);
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\", cls);
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\InprocServer32\\", "mscoree.dll");
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\InprocServer32\\ThreadingModel", "Both");
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\InprocServer32\\Class", cls);
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\InprocServer32\\Assembly", assembly);
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\InprocServer32\\RuntimeVersion", "v2.0.50727");
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\InprocServer32\\CodeBase", codebase);
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\InprocServer32\\" + version + "\\Class", cls);
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\InprocServer32\\" + version + "\\Assembly", assembly);
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\InprocServer32\\" + version + \\RuntimeVersion,
"v2.0.50727");
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\InprocServer32\\" + version + "\\CodeBase", codebase);
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
"\\ProgId\\", progId);
wshShell.RegWrite(root + "\\Software\\Classes\\CLSID\\" + clsid +
\\Implemented Categories\\{62C8FE65-4EBB-45E7-B440-6E39B2CDBF29}\\,
"");
}
Nothing too complex, just a few basic registry entries. The rest of the JavaScript library serves as a wrapper for the Gadget Adapter's methods. You can use the GadgetBuilder's methods from your own script to load or unload any .NET type.
Now that we have the interop layer in place, let's take a look at the GMail example (included in the source code download) to see how the interop layer actually gets used.
Reading a GMail account inbox is as easy as reading an XML feed. The feed is so simple that it actually could be parsed using JavaScript. I purposely added some complexity to the GmailReader assembly to have something in code managed that I couldn't do in JavaScript. Plus, using .NET code adds speed and the ability to debug (That's far more than you can ask of JavaScript). To that end, the response XML from the GMail feed is run through an XslCompiledTransform to deserialize the response into a generic list (i.e. strongly typed) of a type I created. The types in that generic list can then be exposed directly to JavaScript. I won't go into GMail code here as it's easily understandable and well commented. What's really important is understanding what was done to make the code JavaScript-friendly, and how to call that code from JavaScript. There are a few key steps that are required.
XslCompiledTransform
[ComVisible(true)]
public class GmailClient : IDisposable
{
...
}
Byfar the most important step is to add the [ComVisible(true)] attribute to any class that will be used by your JavaScript. The class will simply not be callable by JavaScript without this attribute. The other convention is implementing the IDisposable interface. This is really just preventative maintenance and good practice since our object is exposed to COM.
[ComVisible(true)]
At this point lets examine the Gmail.js file and take a look at how a .NET assembly is used from the gadget. The first thing to do is create and initialize an instance of the GadgetBuilder wrapper found in the GadgetInterop.js file. We'll use that wrapper to load and unload .NET types.
var builder = new GadgetBuilder();
builder.Initialize();
Calling the Initialize method does a few important tasks. First, it checks if the Gadget Adapter is already registered by trying to create an ActiveX object instance of it. If that fails, the builder attempts to run the registration code listed above. The beauty here is that you never need to manually register the Gadget Adapter COM object. The JavaScript library will do it for you the first time it's run.
Initialize
function Initialize()
{
if(InteropRegistered() == false)
{
RegisterGadgetInterop();
}
_builder = GetActiveXObject();
}
The nextstep is to load the GmailReader assembly and create an instance of the client type. The GmailClient has two constructor agruments, userName and password whch are required to create an instance. The values for both arguments come from the gadget's settings page, and are stored using the Gadget API, so once the exist for the lifetime of the gadget.
GmailReader
GmailClient
userName
builder.AddConstructorParam(userID);
builder.AddConstructorParam(password);
gmailClient = builder.LoadType(System.Gadget.path +
"\\bin\\GmailReader.dll", "GmailReader.GmailClient");
We're telling the builder to load the GmailReader.dll assembly located in the gadget's bin directory. There's no need to put your assembly in a "bin" directory, or even under the same folder structure as your gadget. I simply did that for convenience in this example.
At this point, the gmailClient JavaScript variable holds a reference to a fully-loaded .NET GmailClient type. Now we can directly invoke the objects methods just like you would in managed code. To get enough information to display something meaningful on the gadget UI, we can call the following code.
gmailClient
gmailClient.GetUnreadMail();
var count = gmailClient.UnreadMailCount;
var mailLink = document.getElementById('mailCountLink');
mailLink.innerText = count;
Notice that except for the var data type, it's no different that a .NET equivalent. In other words, you now have full access to any method or property you want to expose in your own object, and there's no further COM work to do. This is true for any assembly. With the Gadget Adapter in place, you don't have to do any COM work again.
var
Even though we're working with an inferred type, var, once we get a value back from the .NET code it can be used like any other JavaScript value. In this case, the number of unread mail items is displayed to the user, and the and the background is changed to reflect no mail or new mail.
var
Lastly, because the gmailClient is kept im memory, the mail contents can be displayed any time the user clicks on the unread mail count link. In other words, the .NET object maintains its (hence the need for manual cleanup). Here's how the details are displayed in the gadget.
That's really all there is to it. Once your object is created, you can use it as if you were calling it from managed .NET code. Also, because the interop layer is registered after the first time you run your gadget, it's reusable across all your gadgets. The best part is that you can package your assembly, the interop assembly, and the interop JavaScript library with your gadget, and Vista will handle the entire install process just like any other gadget.
It's unfortunate that Microsoft left managed code out of the gadget framework, especially when they have support for it in so many other areas. Still, the truth is that managed code can still be easily used, so there's still hope for some really useful gadget development.
Enjoy!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
<script type ="text/javascript">
function myload()
{
try
{
var myAx = new ActiveXObject("myadd.AddClass");
var t = document.getElementById("text1");
t.value = myAx.add(2,2);
}
catch(e)
{
var textbox = document.getElementById("text1");
textbox.value = e;
}
}
</script>
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/17408/NET-Interop-for-Gadgets-A-C-GMail-Inbox-Reader-Ex?msg=2378911 | CC-MAIN-2014-35 | refinedweb | 2,298 | 58.18 |
Heads up! To view this whole video, sign in with your Courses account or enroll in your free 7-day trial. Sign In Enroll
CSS in Django5:12 with Lacey Williams Henschel
Add some app-specific static content to your site. This is CSS or other static files that are only needed by one particular app, not the whole project. Understand how to include app-specific content in your project, and why you might want to do so.
Managing Static Files: The Django documentation goes into more detail about how Django serves static content in different environments. If you'd like more detail than what we've gone into here, I recommend checking out the docs. Always read the docs!
You can also use template tags to get at other data in your project, 0:00 including your static files like your CSS. 0:05 In Django, you need to store your static files in a special way. 0:08 It's best to store your project-wide static files in a directory called assets. 0:14 In our case, the setup looks like this. 0:19 To add CSS that's specific to an app however, 0:22 you need to follow a slightly different pattern. 0:26 If we wanted to add CSS to our courses app, 0:29 we'd need to add a directory called static underneath our courses app. 0:33 That directory contains its own directory also called courses. 0:38 Static slash courses then contains directories for your CSS, 0:43 your JavaScript files, and any images you're using in your app. 0:47 This is so the Django can always find the files you're referring to. 0:53 Adding a directory with the same name as your app inside your static folder 0:58 feels a little weird, but it helps you avoid namespace collisions later on. 1:03 For example, if your project had an app called admin, and 1:09 another called main site, and both of those apps had static 1:13 files associated with them, they might have files with the same names. 1:17 Styles.css is a really common way to name a basic CSS file. 1:21 When Django tries to find styles.css in the static folder for 1:28 your admin app, if you have the extra directory with the name of your app, 1:33 then Django goes to the CSS directory in static/admin, 1:38 and that extra admin helps Django know it's in the right place. 1:42 This is called namespacing. 1:47 So let's try it out. 1:49 Make sure to reload your workspace, 1:52 because the new workspace has some new CSS files. 1:54 We added some CSS to our courses app. 1:58 Let's walk through how that's set up before we get started. 2:01 First, we created a new directory in the course's app called static. 2:04 Next, we created a directory within the static directory 2:10 with the same name as our app, courses. 2:13 Finally, we created a directory called CSS inside the course's directory. 2:17 This is where we store the CSS file that is specific to this app. 2:22 Now, we just need to add this new CSS to our templates, but we have a problem,. 2:27 This is app specific CSS, so we should not add it to layout.html. 2:32 We don't want this CSS to be used on other pages in this project 2:38 that aren't in the course's app. 2:43 So, we should add a new block tag to layout.html. 2:45 So we'll add that here, right below the existing style sheet. 2:51 We just open up a new block tag, call it static, and immediately close it. 2:55 We add the static block after the original style sheet 3:01 because we don't want to have to add the project wide CSS to each template. 3:05 The static tag is for app specific CSS in other templates. 3:10 Now, let's open up course list.html in the course's app and 3:15 add our app specific CSS to this file. 3:20 So first, right here above the block title, we will add our block static tag. 3:24 And I like to immediately close it, so I don't have to remember to do that later. 3:32 And we can also save ourselves a little bit of typing by copying the link 3:36 to the existing style sheet and pasting it here. 3:41 We just have to remember to change this path. 3:45 Now here, we're going to be a little more specific about the path to the CSS file. 3:48 So we'll say courses/css/courses.css. 3:55 This is so Django will definitely know where to look to find this particular 4:01 CSS file. 4:05 We also need to add the load tag to the top of this page. 
4:06 So, load static from static files. 4:10 This lets Django know that we intend to load some new static content. 4:15 Now, we just need to start our server. 4:19 So we will change directories into learning site and 4:22 then use python manage.py 4:26 ruserver on our port to take a look at our site. 4:30 So here we can see that the titles of the courses are navy blue, 4:39 instead of the maroon color that you see in the header. 4:43 This is because this particular page is using our app-specific CSS. 4:47 If we open the CSS file, we see that the CSS that we changed 4:52 is the color of these headers, which is now navy blue. 4:56 So that's the static tag in a nutshell. 5:01 In the next video, we'll get into some useful built-in filters, and 5:04 start building our own custom template filters. 5:08 | https://teamtreehouse.com/library/customizing-django-templates/template-tags-and-filters/css-in-django | CC-MAIN-2020-40 | refinedweb | 1,045 | 80.51 |
Once you are happy with a plugin you can publish it so that other people can use it. Publishing of plugins happens through the Python Package Index and can be automatically done with the help of the lektor shell command.
Before you can go about publishing your plugin there needs to be at least
some information added about it to your
setup.py. At least the keys
name,
version,
author,
author_email,
url and
description need to be
set. Here is a basic example of doing this:
from setuptools import setup setup( name='lektor-your-plugin', author='Your Name', author_email='[email protected]', version='1.0', url='', license='MIT', packages=['lektor_your_plugin'], description='Basic description goes here', entry_points={ 'lektor.plugins': [ 'hello-world = lektor_hello_world:HelloWorldPlugin', ] }, )
Once you augmented your
setup.py you can go ahead with the publishing. First
you need to make sure you have a PyPI account. If you do not, you can
create one at pypi.python.org.
Once you have done that, you can publish the plugin from the command line
with the
lektor command:
$ cd path/to/your/plugin $ lektor dev publish-plugin
When you use this for the first time it will prompt you for your login
credentials for
pypi. Next time it will have remembered them. | https://www.getlektor.com/docs/plugins/publishing/ | CC-MAIN-2018-17 | refinedweb | 212 | 55.64 |
How to create multiple Swift connections? [closed]
I'm using the python-swiftclient api, and am attempting get the multithreading to work. I'm having issues creating multiple connections to Swift. The documentation I'm referring to is here: (...)
And the function get_conn() looks like this:
def get_conn(): return swiftclient.client.Connection(user=user, key=key, authurl=url)
I pass get_conn() to the connection_maker param for the queue_manager(), but the connection returned is always the same; as a result when I try to run PUTs in parallel the objects get overwritten. How do I create multiple Swift connections using the same login credentials?
Why did you close this question? If you found the answer, please share it so others can find it in the future. | https://ask.openstack.org/en/question/35225/how-to-create-multiple-swift-connections/ | CC-MAIN-2020-34 | refinedweb | 125 | 58.48 |
>
I have a cube with the following script on it:
using UnityEngine;
using System.Collections;
public class IncreaseSpeed : MonoBehaviour {
//public GameObject otherObj;
public int speed;
void OnTriggerEnter (Collider other){
if(other.collider.tag == "Trigger01"){
Debug.Log("We've touched");
//otherObj.attachedRigidbody.AddForce(Vector3.up * speed)
}
}
}
I have another cube that moves through it, and I'm simply trying to get the console to read "We've touched" as you can see in my script.
I've added a tag on the moving cube and name it "Trigger01".
I don't understand why I'm not getting the console to read "We've touched"
The code under Debug is irrelevant right now. I figure if I can get Debug here then I'll get the code below it to work.
Any help is appreciated.
Answer by lil_billy
·
Oct 18, 2012 at 11:18 PM
so trigger colliders
only detect collisions with other colliders that have a Rigidbody attached to them
Both objects have a Rigidbody attached to them
try this instead of ==
other.gameObject.CompareTag ("Player")
for some reason using the "==" can cause the if statement to ignore the event
damn i hate this forums text engine, randomly chooses which paragraphs to accept. *Instead of == use: other.gameObject.CompareTag ("Player")
It's still not working. The moving cube goes through the still cube and no words appear on the console.
So the cube that is moving has a solid collider right?
other than that maybe it could be how its moving
if the moving cube starts off in the trigger it might not register (though this is rare)
if you are using transform.Translate to move it instead of say charactercontroler.move or rigidbody move that can in rare cases cause it not to register.
Maybe check the spelling difference of your written tag and actual tag
Heres a great debug u can use
put this before the if
Debug.Log(other);
and lets see if its registering the collisions.
Objects Touching?
1
Answer
OnTriggerEnter Not Working
3
Answers
Dynamic terrain (Safeground)
0
Answers
OnTrigger event when colliders are already touching eachother
1
Answer
Alternate between two Audio clips on collision
3
Answers | https://answers.unity.com/questions/334691/detection-help.html | CC-MAIN-2019-26 | refinedweb | 362 | 63.8 |
Privet/hello/salut/halo/hola!
I have the following code:
#include <iostream> #include <iomanip> #include <fstream> #include <cstdlib> using namespace std; using namespace Finder; int main() { ifstream in( "1.dat" ); Event *P = new Event( sphere ); double px, py, pz; while( in>>px>>py>>pz ) { P->AddParticleRaw( px, py, pz ); } in.close(); P->Normalize(); cout << P->GetNumber() << " particles in the event." << endl; OJFRandom::SetSeed( 13 ); double radius = 1.0; // R parameter of eq. (20) in reference [1] unsigned ntries = 3; // number of tries JetSearch* js = new JetSearch( P, radius, ntries ); unsigned njets = js->FindJetsForOmegaCut(0.05); if( njets == 0 ) { cout << "Jets lost." << endl; exit(1); } Jets* Q = js->GetJets(); cout << Q->GetNumber() << " jets found." << endl; cout << "Omega: " << Q->GetOmega() << ", " << "Y: " << Q->GetY() << ", " << "Esoft (normalized): " << Q->GetESoft() << "." << endl; cout << "The details of the jets (E px py pz):" << endl; Jet* jet = Q->GetFirst(); while( jet ) { cout << setw( 10 ) << jet->GetE() << " " << setw( 10 ) << jet->GetPx() << " " << setw( 10 ) << jet->GetPy() << " " << setw( 10 ) << jet->GetPz() << endl; jet = jet->GetNext(); } delete P; delete js; }
My little practical question is about opening using "ifstream":
I have a directory with some "n.dat" (n=1,2...) files that are used in the code, so just wanted the program to compile all these files one by one.
How to make the program to open not just one file "1.dat", but to open
the next file after compiling this one, for example beginning with
"1.dat" and ending at "10.dat" compiling all of them, and making a
final file with all the data obtained?
I made this code to do that, but unfortunately it doesn't work...
for(int i = 1; i <= 10; i++) { std::stringstream ss; ss << i << ".dat"; }
thks | https://www.daniweb.com/programming/software-development/threads/71770/ifstream-string | CC-MAIN-2021-10 | refinedweb | 283 | 73.78 |
Defines a %%cache cell magic in the IPython notebook to cache results of long-lasting computations in a persistentpickle file.
Project Description
Defines a %%cache cell magic in the IPython notebook to cache results and outputs of long-lasting computations in a persistent pickle file. Useful when some computations in a notebook are long and you want to easily save the results in a file.
Example along with the outputs. Rich display outputs are only saved if you use the development version of IPython. When you execute this cell again, the code is skipped, the variables are loaded from the file and injected into the namespace, and the outputs are restored in the notebook.
-
Use the –force or -f option to force the cell’s execution and overwrite the file.
-
Use the –read or -r option to prevent the cell’s execution and always load the variables from the cache. An exception is raised if the file does not exist.
-
Use the –cachedir or -d option to specify the cache directory. You can specify a default directory in the IPython configuration file in your profile (typically in ~.ipythonprofile_defaultipython_config.py) by adding the following line:
c.CacheMagics.cachedir = “/path/to/mycache”
If both a default cache directory and the –cachedir option are given, the latter is used.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/ipycache/ | CC-MAIN-2018-09 | refinedweb | 237 | 64.61 |