text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringlengths 9
15
⌀ | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
---|---|---|---|---|---|
C++ Programming/Chapter Object Oriented Programming Print version:
Object Oriented Programming
Structures
A simple implementation of the object paradigm from (OOP) that holds collections of data records (also known as compound values or set). A struct is like a class except for the default access (class has default access of private, struct has default access of public). C++ also guarantees that a struct that only contains C types is equivalent to the same C struct thus allowing access to legacy C functions, it can (but may not) also have constructors (and must have them, if a templated class is used inside a struct), as with Classes the compiler implicitly-declares a destructor if the struct doesn’t have a user-declared destructor. Structures will also allow Operator Overloading.
A struct is defined by:
struct myStructType /*: inheritances */ { public: // public members protected: // protected members private: // private members } myStructName;
Because it is not supported in C, it is uncommon to have structs in C++ using inheritances even though they are supported just like in classes. The more distinctive aspect is that structs can have two identities one is in reference to the type and another to the specific object. The public access label can sometimes be ignored since the default state of struct for member functions and fields is public.
An object of type myStructType (case-sensitive) is declared using:
myStructType obj1;
- Why should you Use Structs, Not Classes?
Older programmer languages used a similar type called Record (i.e.: COBOL, FORTRAN) this was implemented in C as the struct keyword. And so C++ uses structs to comply with this C's heritage (the code and the programmers). Structs are simpler to be managed by the programmer and the compiler. One should use a struct for POD (PlainOldData) types that have no methods and whose data members are all public. struct may be used more efficiently in situations that default to public inheritance (which is the most common kind) and where public access (which is what you want if you list the public interface first) is the intended effect. Using a class, you typically have to insert the keyword public in two places, for no real advantage. In the end it's just a matter of convention, which programmers should be able to get used.
The natural way to represent a point is using two doubles. The structure or struct is one of the solutions to group these two values into a compound object.
// A struct definition: struct Point { double x, y; };
This definition indicates that this structure contains two members, named x and y. These members are also. This syntax is in place to allow the programmer the facility to create an instance[s] of the struct when it is defined.
Once you have defined the new structure, you can create variables with that type:
struct Point blank; blank.x = 3.0; blank.y = 4.0;.
As usual, the name of the variable blank appears outside the box and its value appears inside the box. In this case, that value is a compound object with two named instance variables.
- Accessing instance variables
You can read the values of an instance variable using the same syntax we used to write them:
int x = blank.x;
The expression blank.x means "go to the object named blank and get the value of the member named expression, so the following are legal.
cout << blank.x << ", " << blank.y << endl; double distance = sqrt(blank.x * blank.x + blank.y * blank.y);
The first line outputs 3, 4; the second line calculates the value 5.
- Operations on structures curly brackets get assigned to the instance variables of the structure one by one, in order. So in this case, x gets the first value and y gets the second.
Unfortunately, this syntax can be used only in an initialization, not in an assignment statement. Therefore, the following is illegal.
Point blank; blank = { 3.0, 4.0 }; // WRONG !!
You might wonder why this perfectly reasonable statement should be illegal, and there is no good answer. (Note, however, that a similar syntax is legal in C since 1999, and is under consideration for possible inclusion in C++ in the future.)
On the other hand, it is legal to assign one structure to another. For example:
Point p1 = { 3.0, 4.0 }; Point p2 = p1; cout << p2.x << ", " << p2.y << endl;
The output of this program is
3, 4.
- Structures;; int j = 9; swap (i, j+1); // WRONG!!.
-
The this keyword is an implicitly created pointer that is only accessible within nonstatic member functions of a struct (or a union or class) and points to the object for which the member function is called. This pointer is not available in static member functions. This will be restated again on when introducing unions a more in depth analysis is provided in the Section about classes.
union
The union keyword is used to define a union type.
- Syntax
union union-name { public-members-list; private: private-members-list; } object-list;
Union is similar to
struct (more that
class), unions differ in the aspect that the fields of a
union share the same position in memory and are by default
public rather than
private. The size of the
union is the size of its largest field (or larger if alignment so requires, for example on a SPARC machine a
union contains a
double and a
char [17] so its size is likely to be 24 because it needs 64-bit alignment). Unions cannot have a
destructor.
What is the point of this? Unions provide multiple ways of viewing the same memory location, allowing for more efficient use of memory. Most of the uses of unions are covered by object-oriented features of C++, so it is more common in C. However, sometimes it is convenient to avoid the formalities of object-oriented programming when performance is important or when one knows that the item in question will not be extended.
union Data { int i; char c; };
Writing to Different Bytes.
Classes
This label indicates any members within the 'public' section can be accessed freely anywhere a declared object is in scope.
private
The protected label has a special meaning to inheritance, protected members are accessible in the class that defines them and in classes that inherit from that base class, or friends of it. In the section on inheritance we will see more about it.
Inheritance (Derivation)
The static keyword can be used in four different ways:
- to create permanent storage for local variables in a function.
- to specify internal linkage.
- to declare member functions that act like non-member functions.
- to create a single copy of a data member.
static member function::auto
- +
- ^ (XOR)
- | (OR)
- & (AND)
- ~ (complement)
- << (shift left, insertion to stream)
- >> (shift right, extraction from stream)
All of the bitwise operators are binary, excepting
- ! (NOT)
- && (AND)
- || (OR) (read: plagues and velociraptors):
- access
Declaring a std string is done by using one of these two methods:
using namespace std; string std_string; or std::string std_string;
Text I/O; }
Advanced use
Chapter Summary
- Structures
- Unions
- Classes (Inheritance, Member Functions, Polymorphism and this pointer)
- Operator overloading
- Standard Input/Output streams Library | http://en.wikibooks.org/wiki/C%2B%2B_Programming/Chapter_Object_Oriented_Programming_Print_version | CC-MAIN-2014-10 | refinedweb | 1,195 | 61.06 |
A couple of months ago I started working with a startup called Genome Compiler that specializes in software platform to accelerate Genome and DNA design. Their main product was created using Flash and they wanted me to help them to create a new Genome viewer using plain web technologies. They wanted to visualize plasmids, which are small DNA molecules represented as a circle with annotations. They also wanted to visualize sequences, which are the primary structure of a biological molecule written in A, T, G and C characters. To create such visualization we needed the ability to create graphics inside the browser.
For more than four months I’ve been helping Genome Compiler to create their viewer using both Scalable Vector Graphics (SVG) and AngularJS. During the time, I learned how to combine SVG and AngularJS together to create biological models and other graphics. In this article I’ll explore what SVG is. Then, I’ll explain how SVG can be used in AngularJS applications. Towards the end of the article, you will see a simple application that combines both SVG and AngularJS.
Note: The article won’t cover the project that I’m doing for Genome Compiler due to the fact that it’s too huge to be covered in an article (or even in a book).
Disclaimer: This article assumes that you have basic knowledge of AngularJS. If you are not familiar with AngularJS, I encourage you to stop reading and start learning about this framework today.
Editorial Note: You can learn more about AngularJS using our tutorials at
This article is published from the DNC Magazine for .NET Developers and Architects. Download this magazine from here [Zip PDF] or Subscribe to this magazine for FREE and download all previous and current editions
SVG is an XML-based graphics model that you can use in your front-end. As opposed to other new HTML5 graphics models (such as canvas and WebGL), SVG version 1.0 was made a W3C recommendation in 2001. The SVG developers’ adoption was very small due to the popularity of plugins such as Flash, Java and Silverlight, and the lack of browser support. In 2011, W3C introduced the second edition of SVG, version 1.1, and SVG gained a lot of attention as an alternative graphics model, besides the Canvas pixel graphics model.
SVG is all about vector graphics. With SVG you can create and draw two-dimensional vector graphics using HTML elements. These HTML elements are specific to SVG, but they are part of the Document Object Model (DOM) and can be hosted in web pages. The vector graphics can be scaled without loss of image quality. This means that the graphics will look the same in different screens and resolutions. This makes SVG a very good candidate to develop graphics for applications that can be run on different screens (mobile, tablets, desktop or even wide screens). Some prominent areas where Vector graphics are used are in CAD programs, designing animations and presentations, designing graphics that are printed on high-res printers and so on.
The fact that SVG elements are HTML elements makes SVG a very interesting graphics model. You can get full support for DOM access on SVG elements. You can use scripts, CSS style and other web tools to shape your graphics and manipulate it. As opposed to Canvas, which doesn’t include state, SVG is part of the DOM and therefore you have the elements and their state for your own usage. This makes SVG a very powerful model. There is one notable caveat though – drawing a lot of shapes can result in performance decrease.
When you use SVG, you define the graphics within your HTML using the SVG tag. For example, the following code snippet declares an SVG element:
<svg version="1.1" xmlns="">
</svg>
It is an HTML element and can be embedded inside your web page as any other HTML element. If not stated, the width of the SVG element will be 300 pixels and its height 150 pixels.
Note: Pay attention to the SVG XML namespace which is different from regular HTML. We will use this information later on when we will use SVG with AngularJS.
Using Shapes
SVG includes a lot of built-in shape elements that can be embedded inside the SVG tag. Each shape has its own set of attributes that helps to create the shapes appearance. For example, the following code snippet shows how to create two rectangles with different colors using the RECT element:
<svg width="400" height="200" version="1.1" xmlns="">
<rect fill="red" x="20" y="20" width="100" height="75" />
<rect fill="blue" x="50" y="50" width="100" height="75" />
</svg>
The output of running this piece of SVG will be as shown here:
Figure 1: Two rectangles drawn by SVG
Here are a couple of points to note in the snippet:
1. The last rectangle will appear on top of the first rectangle. SVG behavior is to put the last declared elements on top of previously declared elements, if there are shapes that overlap.
2. In order to create a rectangle, you have to indicate its left-top point using the x and y attributes and its width and height. The default values for these attributes are all set to 0 and if you don’t set them, the rectangle will not be drawn on the SVG surface.
There are other shapes that you can use such as circles, ellipsis, polygons, polylines, lines, paths, text and more. It is up to you to learn to draw these shapes and a good reference for the same can be found in the Mozilla Developer Network (MDN) -.
Other than attributes, shapes can also include styling using style attributes such as stroke or fill. Stroke accepts a color to create the shape border and Fill accepts a color to fill the entire shape. You can also set styles using regular CSS but not all CSS styles can be applied on SVG elements. For example, styles such as display or visibility can be used with SVG elements but margin or padding has no meaning in SVG. The following example shows a rectangle with some inline styles:
<rect x="200" y="100" width="600" height="300" style="fill: yellow; stroke: blue; stroke-width: 2"/>
As you can see in the code snippet, the rectangle will be filled with yellow color, will have a blue border, with a border width of 2 pixels.
You can group shapes using the g element. The g element is a container for other shapes. If you apply style to a g element, that style will be applied to all its child elements. When you create SVG, it is very common to group some elements inside a g container. For example the following snippet shows you the same two rectangles from Figure 1 but grouped inside a g element:
<svg width="400" height="200" version="1.1" xmlns="">
<g>
<rect fill="red" x="20" y="20" width="100" height="75" />
<rect fill="blue" x="50" y="50" width="100" height="75" />
</g>
</svg>
Using SVG Definitions
SVG includes a defs element which can be used to define special graphical SVG elements such as gradients, filters or patterns. When you want to use a special SVG element, you first define it inside the defs element and later on, you can use it in your SVG. Make sure you specify an id for your special element, so that you can use it later.
The next snippet shows how to define a linear gradient:
<svg version="1.1" xmlns="">
<defs>
<linearGradient id="lg1">
<stop offset="40%" stop-
<stop offset="60%" stop-
<stop offset="80%" stop-
</linearGradient>
</defs>
<rect fill="url(#lg1)" x="50" y="50" width="100" height="100"/>
</svg>
And the output of running this SVG will be:
Figure 2: Gradient inside a rectangle
A couple of things to notice about the snippet:
1. I defined the gradient using the linearGradient element. There are other elements that you can use to define other graphical aspects. Each element has its own attributes and sub elements, so it’s up to you to learn them.
2. The gradient has an id which is later on used in the rectangle using the url(#nameOfSVGElement) syntax.
Note: The article doesn’t cover all the possible SVG element definitions.
Now that we are familiar with SVG it is time to move on and see how SVG and AngularJS work together.
You can combine SVG and AngularJS and it is very straight forward. Since SVG elements are part of the DOM, you can add them into view templates both as static graphics, and also as dynamic graphics. The first option is very simple and you just embed static SVG inside the HTML. The second option has a few caveats that you need to know in order to be on the safer side.
The first caveat is dynamic attributes and data binding. Since SVG has its own XML definition, it doesn’t understand AngularJS expressions. That means that if you will try to use SVG attributes with data binding expressions (curly brackets), you will get an error. The work around is to prefix all the dynamic attributes with ng-attr - and then set the binding expression. You can find a reference about ng-attr- prefix in the AngularJS website under the topic “ngAttr attribute bindings” using the following link: .The following example shows you how to use the ng-attr and define databinding expressions:
<rect ng-</rect>
In the example, you can see that all the rectangle attributes are set to some scope properties.
The second caveat is related to directives. Since the SVG XML definitions are different from HTML, directives that generate SVG elements need to declare that they generate SVG. That means that in the Directive Definition Object (DDO) that you return to define the directive, you will need to set the templateNamespace property to 'svg'. For example, the following snippet shows a simple directive DDO that declares that it generates SVG:
(function () {
'use strict';
angular.
module("svgDemo").
directive("ngRect", ngRect);
ngRect.$inject = [];
function ngRect() {
return {
restrict: 'E',
templateUrl: <rect x="50" y="50" width="100" height="100"></rect>',
templateNamespace: 'svg'
};
}
}());
Now that we know how to combine SVG and AngularJS, it is time to see some code in action.
The application that we are going to build will generate a rectangle and you will be able to change its width and height, using data binding:
Figure 3: Application using SVG and Angular
We will first start by defining a rectangle directive that will resemble some of the code snippets that you saw earlier:
(function () {
'use strict';
angular.
module("svgDemo").
directive("ngRect", ngRect);
ngRect.$inject = [];
function ngRect() {
return {
restrict: 'E',
replace: true,
scope: {
xAxis: '=',
yAxis: '=',
rectHeight: '=',
rectWidth: '='
},
templateUrl: 'app/common/templates/rectTemplate.html',
templateNamespace: 'svg'
};
}
}());
The directive will include an isolated scope that can accept the x, y, width and height of the rectangle. It also declares that it generates SVG and that it loads a template. Here is the template code:
Now that we have our rectangle directive, we will define a directive that will hold our entire demo:
(function () {
'use strict';
angular.
module("svgDemo").
directive("ngDemo", ngSvgDemo);
ngSvgDemo.$inject = [];
function ngSvgDemo() {
return {
restrict: 'E',
templateUrl: 'app/common/templates/demoTemplate.html',
controller: 'demoController',
controllerAs: 'demo'
};
}
}());
The main thing in the directive is the controller that it will use, and the template that it will load. So let’s see both of them. We will start with the controller:
(function () {
'use strict';
angular.
module("svgDemo").
controller("demoController", demoController);
demoController.$inject = [];
function demoController() {
var demo = this;
function init() {
demo.xAxis = 0;
demo.yAxis = 0;
demo.rectHeight = 50;
demo.rectWidth = 50;
}
init();
}
}());
In the controller, we define the properties that will be used later for data binding. Now we can look at the template itself:
<div>
<div>
<svg height="300" width="300">
<ng-rect</ng-rect>
</svg>
</div>
<div>
<label>Set Rectangle Width: </label><input type="text" ng-</br></br>
<label>Set Rectangle Height: </label><input type="text" ng-
</div>
</div>
Please observe the usage of ng-rect directive that we have created, and the binding of its attributes to the controller attributes. Also, we have bound the textboxes to the relevant properties using ng-model directive. That is all.
Now we can create the main web page and the svgDemo module. Here is how the main web page will look like:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>SVG & Angular</title>
<link href="styles/main.css" rel="stylesheet"/>
</head>
<body>
<div ng-
<section class="body-content">
<ng-demo></ng-demo>
</section>
</div>
<script src="app/vendor/angular/angular.min.js"></script>
<script src="app/app.js"></script>
<script src="app/common/controllers/demoController.js"></script>
<script src="app/common/directives/ngDemoDirective.js"></script>
<script src="app/common/directives/ngRectDirective.js"></script>
</body>
</html>
And here is how the svgDemo module will be defined:
(function () {
var app = angular.module('svgDemo', []);
}());
This is a very simple application but it shows you how to combine both SVG and AngularJS and to create dynamic graphics in your applications. You can also use common SVG generator libraries such as Raphael or d3.js inside your directives but the idea was to show you how to do raw SVG graphics before you jump into a library.
SVG is a very powerful graphics model that can be used in the browser. It generates graphics that looks good and scales well across different screens and resolutions. It also includes variety of elements that can help you shape your graphics easier. As you saw in the article, combining SVG and AngularJS to generate some sophisticated graphics is not so hard. As I wrote in the introduction, I was able to generate very interesting biological models using SVG and AngularJS and this should encourage you to try and create your own models.
Download the entire source code of this article (Github) | http://www.dotnetcurry.com/angularjs/1213/create-graphics-using-svg-angularjs | CC-MAIN-2017-26 | refinedweb | 2,331 | 62.38 |
Apache OpenOffice (AOO) Bugzilla – Issue 67777
CHOOSE function cannot return a cell reference
Last modified: 2013-08-07 15:15:24 UTC
In attached err508.xls I am getting #VALUE in cell J6. Formula appears to
correct.
Excel calculates this formula just fine.
The formula is "=IF($H$2=1;0;(SUM($O6:CHOOSE(($H$2-1);$O6;$P6;$Q6;$R6))))+G6"
If I save file in .ods then I am getting different error alltogether - #NAME?.
Created attachment 38029 [details]
testcase 1
Created attachment 38030 [details]
testcase 1
Created attachment 38031 [details]
testcase 2
This issue appears to be a misunderstanding of the CHOOSE function as used in
Calc. In MS Excel, the CHOOSE function can either return values or cell
references. That is, Excel will choose based on context whether to return the
contents of cell A6 or the referece "A6" (refering to cell A6).
In Calc, the CHOOSE function returns only values.
Let's use the example from your spreadsheet:
SUM($O6:CHOOSE(($H$2-1),$O6,$P6,$Q6,$R6)) where cells O6-R6 all contain the
value "1."
Excel will return a cell reference from the CHOOSE function based on what the
SUM function needs, resulting in (if $H$2-1 evaluates to 2, for example)
SUM($O6:$P6) after CHOOSE has been evaluated. This is a logical function, and
therefore works.
Calc will return the value of the cell called, however. So (if $H$2-1 evaluates
to 2, for example) the same function will result in SUM($O6:1) after CHOOSE has
been evaluated. This isn't a logical function and results in the error.
I established this difference in behavior by reading the Help files in Excel and
Calc on the CHOOSE function and by doing tests to verify the problem. CHOOSE
function works as described in Help for both programs, but Calc's functionality
is a subset only of Excel's functionality in this case.
-----------------------------------
* Changing type to Enhancement request. Requested enhancement: CHOOSE function
should choose based on context whether a cell value or reference is to be
returned, as this will grant more Excel compatibility.
* Lowering priority to P3.
* Adding ms_interoperability keyword.
* Changed summary from "Calc returns #VALUE, Excel calculates formula just fine"
to "CHOOSE function cannot return a cell reference"
* Changed OS to All (I think this applies regardless of OS)
Thanks,
Steve
as enhancement to requirements for decission finding
Dear Steve,
thanks a lot for your analisis!
Dear developers,
please enhance CHOOSE function to return cell reference too.
I am migrating 40+ users to OOo and this function is used in number of cases.
What makes it even more critical is it is upper management who uses it.
Thanks a lot.
There may be some confusion here? In Calc CHOOSE *will* return a reference:
=SUM(CHOOSE(1;B4:B5;B5))
works fine
I think the difficulty may be that Calc does not expect a function after the
colon in a range:
=SUM(b4:indirect("b5"))
gives a #NAME? error, even though INDIRECT always returns a reference. Calc
seems to be trying to recognise indirect as a named cell.
This seems to be yet something different. With the changes in CWS odff related
to issue 4904 (implementation of range operator) loading the .ods file works
fine now, but loading the .xls files still results in #VALUE! error as if the
import filter doesn't transform that into the correct token sequence.
Furthermore, re-editing the formula in J6 will calculate the value in the cell
but that will not update cell values in K6 and L6, which get updated only if
those formulas are re-edited as well. Seems as if broadcasters/listeners are not
setup correctly. However, that may be an effect related to the original problem
and should be re-investigated once that is solved.
Defect of import filter.
excel import fixed
Reassigning to QA for verification.
THANK YOU very much!!
verified in internal build cws_odff
Hello kpalagin, *,
I have tested the first attached file with DEV300m28 under Debian SID AMD64 and
it works (i.e. the calculation in J6 takes place ... ;) ) So I close this issue.
@kpalagin: Could you check this with m28 under Windows as well? Does it work for
you too? If not, you can reopen this issue immediately ... ;) | https://bz.apache.org/ooo/show_bug.cgi?id=67777 | CC-MAIN-2021-39 | refinedweb | 713 | 66.03 |
Yes; everyone who has posted.Yes; everyone who has posted.does anyone know how I can fix the "1.#QNAN" error or not?
I do like these easy questions.
Soma
This is a discussion on Template/classes/strings within the C++ Programming forums, part of the General Programming Boards category; does anyone know how I can fix the "1.#QNAN" error or not? Yes; everyone who has posted. I do like ...
Yes; everyone who has posted.Yes; everyone who has posted.does anyone know how I can fix the "1.#QNAN" error or not?
I do like these easy questions.
Soma
please tell me how to fix the "1.#QNAN" error
You don't have any code posted in this thread that involves double variables in any way shape or form. How do we know what you've done to that poor defenseless double variable? All we see is you trying to print out some "PACKAGE" things, and nobody knows what you think that should do either.
Because it's a mess, and it's unreadable. If we can't put any effort into reading your code, how can we know what you are doing wrong?
You might prefer to write unreadable code, and again, it's your choice. But when you're asking for help, you have to adjust to the standards of the community you're asking, or they won't help you at all. This is just an advice. To follow it or not is up to you, but you'll notice you'll get more helpful responses if you do fix this.
nvm
Code:#include <iostream> #include <ostream> #include <string> using namespace std; template <class T> class DATE_EXP { T NAME_EXP,ADRESS_EXP; public: DATE_EXP(T first,T second) { NAME_EXP=first,ADRESS_EXP=second; } void INSERT_DATE_EXP () { getline(cin,NAME_EXP); getline(cin,ADRESS_EXP); cout << "NAME_EXP IS " << NAME_EXP << "\n"; cout << "ADRESS_EXP IS " << ADRESS_EXP << "\n"; }; }; //class DATE_EXP template <class T> class KG_GC { T KG,GC; public: KG_GC(T first,T second) { KG=0,GC=0; } void INSERT_KG_GC () { cin >> KG; cin >> GC; cout << "KG:"<< KG <<"\n"; cout << "GC:"<< GC <<"\n"; }; }; //class KG_GC template <class T> class CALC_COST { T kg,gc; public: CALC_COST(T kg,T gc) { kg=0,gc=0; } T DA_REZULTATU () { return (kg*gc); } }; int main() { int A0=0; do { cout<<"CHOOSE PROGRAM NUMBER:25\n"; int B0=25; switch(B0) { case 1: { cout << " 0=EXIT\n 1=COMMAND LIST\n25=2505111-T2-2-1\n"; break; } case 0: { return 0; } case 25: { string NAME_EXP_TEMP, ADRESS_EXP_TEMP; DATE_EXP<string> EXP_TEMP(NAME_EXP_TEMP,ADRESS_EXP_TEMP); EXP_TEMP.INSERT_DATE_EXP(); double KG_TEMP = 0,GC_TEMP = 0; KG_GC<double>KG_GC_TEMP(KG_TEMP,GC_TEMP); KG_GC_TEMP.INSERT_KG_GC (); cout << "\n"; cin >> A0; CALC_COST<double>CALC_COST_2(1,2); CALC_COST_2.DA_REZULTATU (); cout << "\n"; cin >> A0; break; }//switch B0=4 }//switch B0 } while(1);//doodoo return 0; }//main
Last edited by llVIU; 05-25-2011 at 03:24 PM.
Your insert function is supposed to return a double. But:
You didn't bother. This is why your compiler is saying "control reaches end of non-void function".You didn't bother. This is why your compiler is saying "control reaches end of non-void function".Code:template <class P> P KG_GC<P>::INSERT_KG_GC () { cin >> KG; cin >> GC; cout << "KG:"<< KG <<"\n"; cout << "GC:"<< GC <<"\n"; };
yea thanks I got that part now | http://cboard.cprogramming.com/cplusplus-programming/138321-template-classes-strings-2.html | CC-MAIN-2014-35 | refinedweb | 544 | 74.79 |
Your. Today, we’ll discuss how to use OOP’s best practices to make your code cleaner, more isolated, and more decoupled.
Is Your App Worth Refactoring?
Let’s start by looking at how you should decide if your app is a good candidate for refactoring.
Here is a list of metrics and questions I usually ask myself to determine whether or not my code needs refactoring.
- Slow unit tests. PORO unit tests usually run fast with well-isolated code, so slow running tests can often be an indicator of a bad design and overly-coupled responsibilities.
- FAT models or controllers. A model or controller with more than 200 lines of code (LOC) is generally a good candidate for refactoring.
- Excessively large code base. If you have ERB/HTML/HAML with more than 30,000 LOC or Ruby source code (without GEMs ) with more than 50,000 LOC, there’s a good chance you should refactor.
Try using something like this to find out how many lines of Ruby source code you have:
find app -iname "*.rb" -type f -exec cat {} \;| wc -l
This command will search through all the files with .rb extension (ruby files) in the /app folder, and print out the number of lines. Please note that this number is only approximate since comment lines will be included in these totals.
Another more precise and more informative option is to use the Rails rake task
stats which outputs a quick summary of lines of code, number of classes, number of methods, the ratio of methods to classes, and the ratio of lines of code per method:
bundle exec rake stats +----------------------+-------+-----+-------+---------+-----+-------+ | Name | Lines | LOC | Class | Methods | M/C | LOC/M | +----------------------+-------+-----+-------+---------+-----+-------+ | Controllers | 195 | 153 | 6 | 18 | 3 | 6 | | Helpers | 14 | 13 | 0 | 2 | 0 | 4 | | Models | 120 | 84 | 5 | 12 | 2 | 5 | | Mailers | 0 | 0 | 0 | 0 | 0 | 0 | | Javascripts | 45 | 12 | 0 | 3 | 0 | 2 | | Libraries | 0 | 0 | 0 | 0 | 0 | 0 | | Controller specs | 106 | 75 | 0 | 0 | 0 | 0 | | Helper specs | 15 | 4 | 0 | 0 | 0 | 0 | | Model specs | 238 | 182 | 0 | 0 | 0 | 0 | | Request specs | 699 | 489 | 0 | 14 | 0 | 32 | | Routing specs | 35 | 26 | 0 | 0 | 0 | 0 | | View specs | 5 | 4 | 0 | 0 | 0 | 0 | +----------------------+-------+-----+-------+---------+-----+-------+ | Total | 1472 |1042 | 11 | 49 | 4 | 19 | +----------------------+-------+-----+-------+---------+-----+-------+ Code LOC: 262 Test LOC: 780 Code to Test Ratio: 1:3.0
- Can I extract recurrent patterns in my codebase?
Decoupling in Action
Let’s start with a real-world example.
Pretend we want to write an application that tracks time for joggers. At the main page, the user can see the times they entered.
Each time entry has a date, distance, duration, and additional relevant “status” info (e.g. weather, type of terrain, etc.), and an average speed that can be calculated when needed.
We need a report page that displays the average speed and distance per week.
If the average speed for the entry is higher than the overall average speed, we’ll notify the user with an SMS (for this example we will be using Nexmo RESTful API to send the SMS).
The homepage will allow you to select the distance, date, and time spent jogging to create an entry similar to this:
We also have a
statistics page which is basically a weekly report that includes the average speed and distance covered per week.
The Code
The structure of the
app directory looks something like:
⇒ tree . ├── assets │ └── ... ├── controllers │ ├── application_controller.rb │ ├── entries_controller.rb │ └── statistics_controller.rb ├── helpers │ ├── application_helper.rb │ ├── entries_helper.rb │ └── statistics_helper.rb ├── mailers ├── models │ ├── entry.rb │ └── user
I won’t discuss the
User model as it’s nothing special since we are using it with Devise to implement authentication.
As for the
Entry model, it contains the business logic for our application.
Each
Entry belongs to a
User.
We validate the presence of
distance,
time_period,
date_time and
status attributes for each entry.
Every time we create an entry, we compare the average speed of the user with the average of all other users in the system, and notify the user by SMS using Nexmo(we won’t discuss how the Nexmo library is used, though I wanted to demonstrate a case in which we use an external library).
Notice, that the
Entry model contains more than the business logic alone. It also handles some validations and callbacks.
The
entries_controller.rb has the main CRUD actions (no update though).
EntriesController#index gets the entries for the current user and orders the records by date created, while
EntriesController#create creates a new entry. No need to discuss the obvious and the responsibilities of
EntriesController#destroy :
While
statistics_controller.rb is responsible for calculating the weekly report,
StatisticsController#index gets the entries for the logged in user and groups them by week, employing the
#group_by method contained in the Enumerable class in Rails. It then tries to decorate the results using some private methods.
We don’t discuss the views much here, as the source code is self-explanatory.
Below is the view for listing the entries for the logged-in user (
index.html.erb). This is the template that will be used to display the results of the index action (method) in the entries controller:
Note that we are using partials
render @entries, to pull the shared code out into a partial template
_entry.html.erb so we can keep our code DRY and reusable:
The same goes for the
_form partial. Instead of using the same code with (new and edit) actions, we create a reusable partial form:
As for the weekly report page view,
statistics/index.html.erb shows some statistics and reports the weekly performance of the user by grouping some entries :
And finally, the helper for entries,
entries_helper.rb, includes two helpers
readable_time_period and
readable_speed which should make the attributes more humanly readable:
Nothing fancy so far.
Most of you will argue refactoring this is against the KISS principle and will make the system more complicated.
So does this application really need refactoring?
Absolutely not, but we’ll consider it for demonstration purposes only.
After all, if you check out the previous section, and the characteristics that indicate an app needs refactoring, it becomes obvious that the app in our example is not a valid candidate for refactoring.
Life Cycle
So let’s start by explaining the Rails MVC pattern structure.
Usually, it starts by the browser making a request, such as.
The web server receives the request and uses
routes to find out which
controller to use.
The controllers do the work of parsing user requests, data submissions, cookies, sessions, etc., and then ask the
model to get the data.
The
models are Ruby classes that talk to the database, store and validate data, perform the business logic, and otherwise do the heavy lifting. Views are what the user sees: HTML, CSS, XML, Javascript, JSON.
If we want to show the sequence of a Rails request lifecycle, it would look something like this:
What I want to achieve is to add more abstraction using plain old ruby objects (POROs) and make the pattern something like the following for
create/update actions:
And something like the following for
list/show actions:
By adding POROs abstractions we will assure full separation between responsibilities SRP, something that Rails is not very good at.
Guidelines.
Refactoring
Before we get started, I want to discuss one more thing. When you start the refactoring, usually you end up asking yourself: “Is that really good refactoring?”
If you feel you are making more separation or isolation between responsibilities (even if that means adding more code and new files), then this is usually a good thing. After all, decoupling an application is a very good practice and makes it easier for us to do proper unit testing.
I won’t discuss stuff, like moving logic from controllers to models, as I assume you are doing that already, and you are comfortable using Rails (usually Skinny Controller and FAT model).
For the sake of keeping this article tight, I won’t discuss testing here, but that doesn’t mean you shouldn’t test.
On the contrary, you should always start with a test to make sure things are ok before moving forward. This is a must, especially when refactoring.
Then we can implement changes and make sure the tests all pass for the relevant parts of the code.
Extracting Value Objects
First, what is a value object?
Martin Fowler explains:
Value Object is a small object, such as a money or date range object. Their key property is that they follow value semantics rather than reference semantics.
Sometimes you may encounter a situation where a concept deserves its own abstraction and whose equality isn’t based on value, but on the identity. Examples would include Ruby’s Date, URI, and Pathname. Extraction to a value object (or domain model) is a great convenience.
Why bother?
One of the biggest advantages of a Value object is the expressiveness that they help achieve in your code. Your code will tend to be far clearer, or at least it can be if you have good naming practices. Since the Value Object is an abstraction, it leads to cleaner code and fewer errors.
Another big win is immutability. The immutability of objects is very important. When we are storing certain sets of data, which could be used in a value object, I usually don’t want that data to be manipulated.
When is this useful?
There is no single, one-size-fits-all answer. Do what is best for you and what makes sense in any given situation.
Going beyond that, though, there are some guidelines I use to help me make that decision.
If you think of a group of methods is related, with Value objects they are more expressive. This expressiveness means that a Value object should represent a distinct set of data, which your average developer can deduce simply by looking at the name of the object.
How is this done?
Value objects should follow some basic rules:
- Value objects should have multiple attributes.
- Attributes should be immutable throughout the object’s life cycle.
- Equality is determined by the object’s attributes.
In our example, I’ll create an
EntryStatus value object to abstract
Entry#status_weather and
Entry#status_landform attributes to their own class, which looks something like this:
Note: This is just a Plain Old Ruby Object (PORO) that does not inherit from
ActiveRecord::Base. We have defined reader methods for our attributes and are assigning them upon initialization. We also used a comparable mixin to compare objects using (<=>) method.
We can modify
Entry model to use the value object we created:
We can also modify the
EntryController#create method to use the new value object accordingly:
Extract Service Objects
So what is a Service object?
A Service object’s job is to hold the code for a particular bit of business logic. Unlike the “fat model” style, where a small number of objects contain many, many methods for all necessary logic, using Service objects results in many classes, each of which serves a single purpose.
Why? What are the benefits?
- Decoupling. Service objects help you achieve more isolation between objects.
- Visibility. Service objects (if well-named) show what an application does. I can just glance over the services directory to see what capabilities an application provides.
- Clean-up models and controllers. Controllers turn the request (params, session, cookies) into arguments, pass them down to the service and redirect or render according to the service response. While models only deal with associations, and persistence. Extracting code from controllers/models to service objects would support SRP and make the code more decoupled. The responsibility of the model would then be only to deal with associations and saving/deleting records, while the service object would have a single responsibility (SRP). This leads to better design and better unit tests.
- DRY and Embrace change. I keep service objects as simple and small as I can. I compose service objects with other service objects, and I reuse them.
- Clean up and speed up your test suite. Services are easy and fast to test since they are small Ruby objects with one point of entry (the call method). Complex services are composed with other services, so you can split up your tests easily. Also, using service objects makes it easier to mock/stub related objects without needing to load the whole rails environment.
- Callable from anywhere. Service objects are likely to be called from controllers as well as other service objects, DelayedJob / Rescue / Sidekiq Jobs, Rake tasks, console, etc.
On the other hand, nothing is ever perfect. A disadvantage of Service objects is that they can be an overkill for a very simple action. In such cases, you may very well end up complicating, rather than simplifying, your code.
When should you extract service objects?
There is no hard and fast rule here either.
Normally, Service objects are better for mid to large systems; those with a decent amount of logic beyond the standard CRUD operations.
So whenever you think that a code snippet might not belong to the directory where you were going to add it, it’s probably a good idea to reconsider and see if it should go to a service object instead.
Here are some indicators of when to use Service objects:
- The action is complex.
- The action reaches across multiple models.
- The action interacts with an external service.
- The action is not a core concern of the underlying model.
- There are multiple ways of performing the action.
How should you design Service Objects?
Designing the class for a service object is relatively straightforward, since you need no special gems, don’t have to learn a new DSL, and can more or less rely on the software design skills you already possess.
I usually use the following guidelines and conventions to design the service object:
- Do not store state of the object.
- Use instance methods, not class methods.
- There should be very few public methods (preferably one to support SRP.
- Methods should return rich result objects and not booleans.
- Services go under the
app/servicesdirectory. I encourage you to use subdirectories for business logic-heavy domains. For instance, the file
app/services/report/generate_weekly.rbwill define
Report::GenerateWeeklywhile
app/services/report/publish_monthly.rbwill define
Report::PublishMonthly.
- Services start with a verb (and do not end with Service):
ApproveTransaction,
SendTestNewsletter,
ImportUsersFromCsv.
- Services respond to the call method. I found using another verb makes it a bit redundant: ApproveTransaction.approve() does not read well. Also, the call method is the de facto method for lambda, procs, and method objects.
If you look at
StatisticsController#index, you’ll notice a group of methods (
weeks_to_date_from,
weeks_to_date_to,
avg_distance, etc.) coupled to the controller. That’s not really good. Consider the ramifications if you want to generate the weekly report outside
statistics_controller.
In our case, let’s create
Report::GenerateWeekly and extract the report logic from
StatisticsController:
So
StatisticsController#index now looks cleaner:
By applying the Service object pattern we bundle code around a specific, complex action and promote the creation of smaller, clearer methods.
Homework: consider using Value object for the
WeeklyReport instead of
Struct.
Extract Query Objects from Controllers
What is a Query object?
A Query object is a PORO which represent a database query. It can be reused across different places in the application while at the same time hiding the query logic. It also provides a good isolated unit to test.
You should extract complex SQL/NoSQL queries into their own class.
Each Query object is responsible for returning a result set based on the criteria / business rules.
In this example, we don’t have any complex queries, so using Query object won’t be efficient. However, for demonstration purpose, let’s extract the query in
Report::GenerateWeekly#call and create
generate_entries_query.rb:
And in
Report::GenerateWeekly#call, let’s replace:
def call @user.entries.group_by(&:week).map do |week, entries| WeeklyReport.new( ... ) end end
with:
def call weekly_grouped_entries = GroupEntriesQuery.new(@user).call weekly_grouped_entries.map do |week, entries| WeeklyReport.new( ... ) end end
The query object pattern helps keep your model logic strictly related to a class’ behavior, while also keeping your controllers skinny. Since they are nothing more than plain old Ruby classes, query objects don’t need to inherit from
ActiveRecord::Base, and should be responsible for nothing more than executing queries.
Extract Create Entry to a Service Object
Now, let’s extract the logic of creating a new entry to a new service object. Let’s use the convention and create
CreateEntry:
And now our
EntriesController#create is as follows:
def create begin CreateEntry.new(current_user, entry_params).call flash[:notice] = 'Entry was successfully created.' rescue Exception => e flash[:error] = e.message end redirect_to root_path end
Move Validations into a Form Object
Now, here things start to get more interesting.
Remember in our guidelines, we agreed we wanted models to contain associations and constants, but nothing else (no validations and no callbacks). So let’s start by removing callbacks, and use a Form object instead.
A Form object is a Plain Old Ruby Object (PORO). It takes over from the controller/service object wherever it needs to talk to the database.
Why use Form objects?
When looking to refactor your app, it’s always a good idea to keep the single responsibility principle (SRP) in mind.
SRP helps you make better design decisions around what a class should be responsible for.
Your database table model (an ActiveRecord model in the context of Rails), for example, represents a single database record in code, so there is no reason for it to be concerned with anything your user is doing.
This is where Form objects come in.
A Form object is responsible for representing a form in your application. So each input field can be treated as an attribute in the class. It can validate that those attributes meet some validation rules, and it can pass the “clean” data to where it needs to go (e.g., your database models or perhaps your search query builder).
When should you use a Form object?
- When you want to extract the validations from Rails models.
- When multiple models can be updated by a single form submission, you might want to create a Form object.
This enables you to put all the form logic (naming conventions, validations, and so on) into one place.
How do you create a Form object?
- Create a plain Ruby class.
- Include
ActiveModel::Model(in Rails 3, you have to include Naming, Conversion, and Validations instead)
- Start using your new form class as if it were a regular ActiveRecord model, the biggest difference being that you cannot persist the data stored in this object.
Please note that you can use the reform gem, but sticking with POROs we’ll create
entry_form.rb which looks like this:
And we will modify
CreateEntry to start using the Form object
EntryForm:
class CreateEntry ...... ...... def call @entry_form = ::EntryForm.new(@params) if @entry_form.valid? .... else .... end end end
Note: Some of you would say that there’s no need to access the Form object from the Service object and that we can just call the Form object directly from the controller, which is a valid argument. However, I would prefer to have clear flow, and that’s why I always call the Form object from the Service object.
Move Callbacks to the Service Object
As we agreed earlier, we don’t want our models to contain validations and callbacks. We extracted the validations using Form objects. But we are still using some callbacks (
after_create in
Entry model
compare_speed_and_notify_user).
Why do we want to remove callbacks from models?
Rails developers usually start noticing callback pain during testing. If you’re not testing your ActiveRecord models, you’ll begin noticing pain later as your application grows and as more logic is required to call or avoid the callback.
after_* callbacks are primarily used in relation to saving or persisting the object.
Once the object is saved, the purpose (i.e. responsibility) of the object has been fulfilled. So if we still see callbacks being invoked after the object has been saved, what we are likely seeing is callbacks reaching outside of the object’s area of responsibility, and that’s when we run into problems.
In our case, we are sending an SMS to the user after we save an entry, which is not really related to the domain of Entry.
A simple way to solve the problem is by moving the callback to the related service object. After all, sending an SMS for the end user is related to the
CreateEntry Service Object and not to the Entry model itself.
In doing so, we no longer have to stub out the
compare_speed_and_notify_user method in our tests. We’ve made it a simple matter to create an entry without requiring an SMS to be sent, and we’re following good Object Oriented design by making sure our classes have a single responsibility (SRP).
So now our
CreateEntry looks something like:
Use Decorators Instead of Helpers
While we can easily use Draper collection of view models and decorators, I’ll stick to POROs for the sake of this article, as I’ve been doing so far.
What I need is a class that will call methods on the decorated object.
I can use
method_missing to implement that, but I’ll use Ruby’s standard library
SimpleDelegator.
The following code shows how to use
SimpleDelegator to implement our base decorator:
% app/decorators/base_decorator.rb require 'delegate' class BaseDecorator < SimpleDelegator def initialize(base, view_context) super(base) @object = base @view_context = view_context end private def self.decorates(name) define_method(name) do @object end end def _h @view_context end end
So why the
_h method?
This method acts as a proxy for view context. By default, the view context is an instance of a view class, the default view class being
ActionView::Base. You can access view helpers as follows:
_h.content_tag :div, 'my-div', class: 'my-class'
To make it more convenient, we add a
decorate method to
ApplicationHelper:
module ApplicationHelper # ..... def decorate(object, klass = nil) klass ||= "#{object.class}Decorator".constantize decorator = klass.new(object, self) yield decorator if block_given? decorator end # ..... end
Now, we can move
EntriesHelper helpers to decorators:
# app/decorators/entry_decorator.rb class EntryDecorator < BaseDecorator decorates :entry def readable_time_period mins = entry.time_period return Time.at(60 * mins).utc.strftime('%M <small>Mins</small>').html_safe if mins < 60 Time.at(60 * mins).utc.strftime('%H <small>Hour</small> %M <small>Mins</small>').html_safe end def readable_speed "#{sprintf('%0.2f', entry.speed)} <small>Km/H</small>".html_safe end end
And we can use
readable_time_period and
readable_speed like so:
# app/views/entries/_entry.html.erb - <td><%= readable_speed(entry) %> </td> + <td><%= decorate(entry).readable_speed %> </td>
- <td><%= readable_time_period(entry) %></td> + <td><%= decorate(entry).readable_time_period %></td>
Structure After Refactoring
We ended up with more files, but that’s not necessarily a bad thing (and remember that, from the onset, we acknowledged that this example was for demonstrative purposes only and was not necessarily a good use case for refactoring):
app ├── assets │ └── ... ├── controllers │ ├── application_controller.rb │ ├── entries_controller.rb │ └── statistics_controller.rb ├── decorators │ ├── base_decorator.rb │ └── entry_decorator.rb ├── forms │ └── entry_form.rb ├── helpers │ └── application_helper.rb ├── mailers ├── models │ ├── entry.rb │ ├── entry_status.rb │ └── user.rb ├── queries │ └── group_entries_query.rb ├── services │ ├── create_entry.rb │ └── report │ └── generate_weekly
Conclusion
Even though we focused on Rails in this blog post, RoR is not a dependency of the described service objects and other POROs. You can use this approach with any web framework, mobile, or console app.
By using MVC as the architecture of web apps, everything stays coupling and makes you go slower because most changes have an impact on other parts of the app. Also, it forces you to think where to put some business logic – should it go into the model, the controller, or the view?
By using simple POROs, we have moved business logic to models or services that don’t inherit from
ActiveRecord, which is already a big win, not to mention that we have a cleaner code, which supports SRP and faster unit tests.
Clean architecture aims to put the use cases in the center/top of your structure, so you can easily see what your app does. It also makes it easier to adopt changes since it is much more modular and isolated.
I hope I demonstrated how using Plain Old Ruby Objects and more abstractions decouples concerns, simplifies testing and helps produce clean, maintainable code. | https://www.toptal.com/ruby-on-rails/decoupling-rails-components?utm_source=rubyweekly&utm_medium=email | CC-MAIN-2017-17 | refinedweb | 4,086 | 55.54 |
QGraphicsView 90 degree Rotation
Hi all,
I tried to apply rotation through transform.rotate(90). The rotation works fine, but when I try to apply zoom, the view disappears. So I am looking for some other way to do the rotation.
I have found some examples for rotation. Among them, transform.setMatrix(m11, m12, m13, m21, m22, m23, m31, m32, m33) works fine, but in the example the matrix values are calculated for 180 degrees.
I want to find the values for 90 degrees.
Can anyone help me find the values of the matrix for 90 degrees?
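(As a side note, a minimal sketch for discovering these values: let Qt build the rotation itself and print the resulting matrix elements.)

```cpp
// Sketch: have QTransform compute the 90-degree matrix and print it.
#include <QTransform>
#include <QDebug>

int main()
{
    QTransform t;
    t.rotate(90); // clockwise in Qt's default (y-axis-down) coordinates
    qDebug() << t.m11() << t.m12() << t.m13();
    qDebug() << t.m21() << t.m22() << t.m23();
    qDebug() << t.m31() << t.m32() << t.m33();
    return 0;
}
```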
Hi!
Thanks for your reply.
One correction to my angle: I am rotating clockwise by 90 degrees, so the angle is -90 degrees.
I got the values for the -90 degree matrix:

m11 = 0   m12 = 1   m13 = 0
m21 = -1  m22 = 0   m23 = 0
m31 = 0   m32 = 0   m33 = 1
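Applied to the view, that matrix can be set directly. A minimal sketch, assuming `view` is the QGraphicsView (QTransform's six-argument constructor takes m11, m12, m21, m22, dx, dy, leaving the third column at its defaults):

```cpp
// Sketch: apply the matrix above to a QGraphicsView named 'view'.
QTransform rotated(0, 1,   // m11, m12
                   -1, 0,  // m21, m22
                   0, 0);  // dx, dy
view->setTransform(rotated);
```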
That gives me the transformation I want.
But achieving it introduced another issue.
If you look at the matrix, m11 and m22 are both zero, and m11 is the horizontal scaling factor while m22 is the vertical one.
So in wheelEvent, when I try to scale using the current horizontal and vertical scaling factors, the product becomes zero:

view.scale(view.transform().m11() * 1.05, view.transform().m22() * 1.05)

The value becomes zero in both X and Y.
So now my question: I have achieved the transformation, but I need help handling the wheel event to zoom the view along the -90 degree rotation.
Can anyone help me handle the wheelEvent to achieve zoomIn() and zoomOut()?
See example code below.
customgraphicsview.h
```cpp
#ifndef CUSTOMGRAPHICSVIEW_H
#define CUSTOMGRAPHICSVIEW_H

#include <QGraphicsView>

class CustomGraphicsView : public QGraphicsView
{
    Q_OBJECT
public:
    explicit CustomGraphicsView(QWidget *parent = nullptr);

    void reset();

protected:
    void wheelEvent(QWheelEvent* event) override;

private:
    void updateTransform();

    double m_scale = 1.0;
    double m_angle = 0.0;
};

#endif // CUSTOMGRAPHICSVIEW_H
```
customgraphicsview.cpp
```cpp
#include "customgraphicsview.h"

#include <QBrush>
#include <QColor>
#include <QWheelEvent>

CustomGraphicsView::CustomGraphicsView(QWidget *parent) :
    QGraphicsView(parent)
{
    setBackgroundBrush(QBrush(QColor("grey")));
}

void CustomGraphicsView::reset()
{
    m_scale = 1.0;
    m_angle = 0.0;
    updateTransform();
}

void CustomGraphicsView::wheelEvent(QWheelEvent *event)
{
    auto const delta = (event->delta() < 0.0) ? -1.0 : 1.0;
    if (event->modifiers() & Qt::ShiftModifier)
        m_scale += 0.1 * delta;
    else
        m_angle += 15.0 * delta;
    updateTransform();
}

void CustomGraphicsView::updateTransform()
{
    QTransform t;
    t.scale(m_scale, m_scale);
    t.rotate(m_angle);
    setTransform(t);
}
```
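One note on the design: the zoom factor lives in m_scale and the angle in m_angle, and the transform is rebuilt from scratch on every change; nothing is read back from m11()/m22(), which are zero at 90 degrees. A minimal usage sketch, assuming a placeholder scene just so something is visible:

```cpp
// Sketch: plain wheel rotates in 15-degree steps, Shift+wheel zooms.
#include <QApplication>
#include <QGraphicsScene>
#include "customgraphicsview.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    CustomGraphicsView view;
    view.setScene(new QGraphicsScene(&view)); // assumed placeholder scene
    view.scene()->addRect(0, 0, 100, 50);
    view.show();
    return app.exec();
}
```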
@Wieland said in QGraphicsView 90 degree Rotation:
void CustomGraphicsView::updateTransform()
{
QTransform t;
t.scale(m_scale, m_scale);
t.rotate(m_angle);
setTransform(t);
}
Thanks for your kind reply with full code .
When i try to apply this code the code scales the view in x direction . In my caes if i rotate my view 90 clockwise it should scale in Y direction .
Means in normal mode when i try to zoom , the view should expand by width mean while when i rotate and zoom in that rotated view it should expand in height
A Short note to all regarding this issue .
Currently I have achieved the 90 degree clockwise rotation thourgh
QTransform transform1; transform1.rotate(90); // Set the Views transformation view()->setTransform(transform1);
Rotation works fine
I try to achieve zooming functionality using fitInView()
Normal case
fitInView(0,0,calcValue,viewRect.height())
In this case , zooming works fine
90 rotation case
fitInView(0,0,viewRect.weight(),calcValue)
In this case , zooming is not not achieved . Please help me .
I dont want to use scale() method because I set some restrictions in zooming through calcValue depending on the delta of the wheelEvent
Please some one look into this issue . Timely helps are really appreciable .
Sry, I think I don't understand exactly what you want to do. Maybe you can draw a few pictures to help my imagination :)
I tried to add image in my post but i cant .
Can you help me on it
@Wieland
X Mode
**-----------------------------|
-----------------------------**
Width
This is how I locate my graphicsview at initial stage
Im achieving zooming through wheel event .
I have currentZoomingWidth level if the delta is +ve then i will increase the currentZoomingWidth level and if it is -ve then i will decrease the currentZoomingWidth
So finally i find the width to expand (currentZoomingWidth )
wheelvent()
{
fitInVieW(0,0,currentZoomingWidth ,view().geometry().height())
}
So zooming is achieved through fitInView()
Y MODE
| |
| |
| |
| |
| | Height
| |
| |
Width
Like same case im calculating the currentZoomingWidth depending upon the delta and applying it to fitInView() height
wheelvent()
{
fitInVieW(0,0 ,view().geometry().width(),currentZoomingWidth)
}
In X mode i apply the calculated zoom factor to width and in Y mode I apply it to the height
My problem is In X mode the fit in view works fine but in Y mode it works weird
@keksi-venksi said in QGraphicsView 90 degree Rotation:
I tried to add image in my post but i cant .
Can you help me on it
Hi!.
@Wieland
I have explained my query in my previous post . Please give your suggestions
Any how i have added the image here .This is how my graphicsview rotated .
Now i wanna apply zooming functionality using fitinview in this 90 degree rotated graphicsview .
This is how Y Mode looks
- kshegunov Qt Champions 2017
Use two
QTransformobjects - one for rotation and one for scaling, then multiply them in the proper order (the order of multiplication matters as scaling and rotation don't commute). After that apply your transform to whatever you need.
@kshegunov
Thanks for your reply .
Can you give some example code .
- kshegunov Qt Champions 2017
QTransform rotation; rotation.rotate(90); QTransform scaling; scaling.scale(0.5, 1); setTransform(rotation * scaling); // Or QVector2D vector; vector = rotation * scaling * vector;
Modify according to your needs.
Thanks for your reply .
In my case I dont want to use scale to zoom
I wish to use fitInView() to achieve zoom.
In short note after applying transform (rotate 90 degree) fitInView() does not works fine . I want fitInView() to do zoom
Can any one help me on it .
Helps are really appreciable
customgraphicsview.h
#ifndef CUSTOMGRAPHICSVIEW_H #define CUSTOMGRAPHICSVIEW_H #include <QGraphicsView> class CustomGraphicsView : public QGraphicsView { Q_OBJECT public: enum class Mode { x, y }; explicit CustomGraphicsView(QWidget *parent = nullptr); void reset(); void fit(); void setMode(Mode mode); protected: void wheelEvent(QWheelEvent* event) override; private: void updateTransform(); Mode m_mode = Mode::x; double m_scale = 1.0; }; #endif // CUSTOMGRAPHICSVIEW_H
customgraphicsview.cpp
#include "customgraphicsview.h" #include <QBrush> #include <QColor> #include <QWheelEvent> #include <QDebug> CustomGraphicsView::CustomGraphicsView(QWidget *parent) : QGraphicsView(parent) { setBackgroundBrush(QBrush(QColor("grey"))); } void CustomGraphicsView::reset() { m_mode = Mode::x; m_scale = 1.0; updateTransform(); } void CustomGraphicsView::fit() { auto const r = scene()->itemsBoundingRect(); switch (m_mode) { case Mode::x: { double f1 = width() / (r.width() + 2); double f2 = height() / (r.height() + 2); m_scale = qMin(f1,f2); updateTransform(); break; } case Mode::y: { double f1 = height() / (r.width() + 2); double f2 = width() / (r.height() + 2); m_scale = qMin(f1,f2); updateTransform(); break; } } } void CustomGraphicsView::setMode(CustomGraphicsView::Mode mode) { if (m_mode == mode) return; m_mode = mode; updateTransform(); } void CustomGraphicsView::wheelEvent(QWheelEvent *event) { auto const delta = (event->delta() <0.0) ? -1.0 : 1.0; m_scale += 0.1 * delta; updateTransform(); } void CustomGraphicsView::updateTransform() { QTransform t; if (m_mode == Mode::y) t.rotate(90); t.scale(m_scale, m_scale); setTransform(t); }
I would like to continue with this thread any one suggest me the coordinates for the fitInView() for the rotated view
Help me on it
@keksi-venksi Please try the code above that I just posted. If I understood you right, then it should meet your requirements :)
@Wieland
@Wieland
Thanks for your reply
I am using fitInView in many places in my project to work stable . So I would like to get some suggestions for fitInView() coordinates
@keksi-venksi said in QGraphicsView 90 degree Rotation:
I am using fitInView in many places in my project to work stable . So I would like to get some suggestions for fitInView() coordinates
This is probably my last posting here because I think I can't help you. Have you seen my code above? The
fit()function? It does just what you want, look:
Hi
Thanks for your reply
In this example in fit method you are trying it to fit the view in the frame when we click a button but my case i have used fitInView() while zooming .So it should adjust the view with the zooming delta
So when ever we zoom fitInView() should be adjusted accordingly to the scene and update the scene as well
To All The issue has been fixed
what i did was simple
X mode fitinview(0,0,calcValue,viewRectHeight)
Y Mode fitinview(0,0,viewRectWidth,calcValue) ; rotate(90)
After calling fitinview again im calling rotate of 90 degree
- Eddy Moderators
@keksi-venksi
Could you please mark this topic as solved then. Others will find it easier if they have similar questions. | https://forum.qt.io/topic/79466/qgraphicsview-90-degree-rotation | CC-MAIN-2019-04 | refinedweb | 1,447 | 55.64 |
IndexMapper
Since: BlackBerry 10.0.0
#include <bb/cascades/DataModel/IndexMapper>
Indicates whether a ListView can translate cached items to new indexes.
An instance of this class can be sent along with the new indexes class that can be sent along with the DataModel::itemsChanged signal so that the listening ListView can translate all of the items in its cache to the new indexes (and won't have to do a full refresh of the items). Items in the ListView cache that have no such translation are removed from the ListView.
Overview
Public Functions Index
Public Functions
virtual
Destructor.
bool
Called by ListView for every item in its cache in response to the DataModel::itemsChanged signal.
true if the item in question remains in the model after the update, false if the item has been removed.
BlackBerry 10.0.0
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/cascades/bb__cascades__datamodel__indexmapper.html | CC-MAIN-2014-35 | refinedweb | 156 | 65.22 |
15. Failure during invoke: UnspecifiedIgor JStarter Jan 27, 2011 8:44 AM (in response to Igor JStarter)
No... nothing changes...
so it is not a namespace problem and not a dateTime problem for sure...
The WSDL:
<?xml version="1.0" encoding="utf-8"?>
<wsdl:definitions name="PspService"
targetNamespace=""
xmlns:wsdl=""
xmlns:soap=""
xmlns:wsu=""
xmlns:soapenc=""
xmlns:wsam=""
xmlns:tns=""
xmlns:wsa=""
xmlns:wsp=""
xmlns:wsap=""
xmlns:xsd=""
xmlns:msc=""
xmlns:wsaw=""
xmlns:soap12=""
xmlns:wsa10=""
xmlns:wsx=""
xmlns:
<wsdl:types>
<xsd:schema
<xsd:import
<xsd:import
<xsd:import
< msc:
<wsdl:operation
<wsdl:input wsaw:
<wsdl:output wsaw:
</wsdl:operation>
<wsdl:operation
<wsdl:input wsaw:
<wsdl:output wsaw:
</wsdl>
So.. it must be JBOSS - .NET problem...
Any idea what I can do any more?
16. Failure during invoke: UnspecifiedGary Brown Jan 27, 2011 9:02 AM (in response to Igor JStarter)
As I put in my last post, you will need to contact the jbossws project to find out what additional diagnostics you can use to determine whether the soapAction is being passed.
The trace log you uploaded shows that it is loading the soapAction as part of the external service's WSDL, prior to sending the request - but does not provide enough information as to whether it is actually sending the soapAction.
17. Failure during invoke: UnspecifiedIgor JStarter Jan 27, 2011 9:02 AM (in response to Igor JStarter)
I got one more reply... apache team
It's very strange, ODE-260 was fixed a long time ago in version 1.2,and now I can see, that version 1.3.3 still contains this bug. I thinkmaintainer didn't apply all my patches, missed some fixes, so thisparticular case is still here. Well, in bug report [1] there is cleardescription when this bug appears, it appears when external servicereturns empty SOAP message without body and without header.
<?xml version='1.0' encoding='utf-8'?>
<soapenv:Envelopexmlns:
<soapenv:Body />
</soapenv:Envelope>
You cannot work around this bug, you have to either return some dummydata or header from your external service, or apply patch [2] andrecompile ODE runtime. I'm going to reopen this bug report to notifydeveloper, that this bug still persist.
Just one note, I work with JPA DAO, and as I see you work with JBoss,and so I think you work with Hibernate. If that so, you can try to workaround with just disabling debug logging for class " org.apache.ode.bpel.runtime.
ScopeFrame",I don't know but it can help you.
Is my issue solveable ?:)
18. Failure during invoke: UnspecifiedGary Brown Jan 27, 2011 9:09 AM (in response to Igor JStarter)
From a quick inspection of the patch, this just seems to be a workaround to overcome the impact of the fault response.
It would be good to have this patch applied, so that null values don't result in exceptions (in case they are expected, however in your case I don't think you are expecting a null value.
19. Failure during invoke: UnspecifiedIgor JStarter Jan 28, 2011 3:05 AM (in response to Gary Brown)
Hey !
I got no answer on the jboss web services forum...
But I used the time and installed tcpdump... and made a pcap capture...
The SoapAction is not sending !!!
In the HTTP post i see:
Connection: keep-alive
Content-Type: text/xml; charset=UTF-8
Host: psplatform.dyndns.org
SOAPAction: ""
Transfer-Encoding: chunked
Any idea why it's not included? How cai i set the SOAPAction ?
Regards, Igor!
20. Failure during invoke: UnspecifiedIgor JStarter Jan 28, 2011 4:15 AM (in response to Igor JStarter)
Hm... writing a proxy which inserts the soapaction header?
If your client isn't sending that now, and aren't willing to change it, you might be able to write some sort of proxy that adds the header and passes along the request as a stopgap solution. Might be easier than rewriting the server-side handler.
Common, there must be a solution on JBoss ?
21. Failure during invoke: UnspecifiedGary Brown Jan 28, 2011 4:36 AM (in response to Igor JStarter)
As I mentioned before we have an integration test that invokes a .NET service:
The only difference I can see is that the integration tests run against JBossWS-native 3.2.2.GA. So you might want to try using that version to see if that makes any difference.
Regards
Gary
22. Failure during invoke: UnspecifiedIgor JStarter Jan 28, 2011 5:11 AM (in response to Gary Brown)
Sorry for these newbie questions... but how do I upgrade that it works for sure... I've got a running server...
1. stop server
2. download jbossws-native-3.2.2.GA to riftsaw/install/ws-stack
3. riftsaw/install/ ant deploy -Ddatabase=mysql -Dws.stack=native -Dws.version=3.2.2.GA
4. run server
Is that all ? Without cleaning the 3.2.1 package, ...
23. Failure during invoke: UnspecifiedGary Brown Jan 28, 2011 5:25 AM (in response to Igor JStarter)
You don't need to download the jbossws stack - this will be automatically done when installing.
So just stop the server then do:
ant undeploy -Ddatabase=mysql
and then
ant deploy -Ddatabase=mysql -Dws.version=3.2.2.GA
Regards
Gary
24. Failure during invoke: UnspecifiedIgor JStarter Jan 28, 2011 6:06 AM (in response to Gary Brown)
Thank you Gary! There were no problems upgrading to 3.2.2.GA.
But... nothing has changed ! the log, the send request... everything the same...
I dont know if u saw what i was told on the WS forum:
Any idea how I could do that double check ??
About your test example with the doNet.wsdl... the service isn't working anymore... so i can't test it ...
25. Failure during invoke: UnspecifiedGary Brown Jan 28, 2011 6:41 AM (in response to Igor JStarter)
Igor JStarter wrote:
Any idea how I could do that double check ??
Sorry don't know - best solution might be to provide a simple test case that we (and the jbossws team) can run.
Igor JStarter wrote:
About your test example with the doNet.wsdl... the service isn't working anymore... so i can't test it ...
Just checked - the service runs within our vpn, so that is why you can't access it. But the test does run successfully as part of the integration tests.
26. Failure during invoke: UnspecifiedIgor JStarter Jan 28, 2011 10:43 AM (in response to Gary Brown)
Can u tell me 100% that when calling your doNET.wsdl your SOAPAction is not empty ?
About the test case... The external web service is a .svc service... And for a test... I'm invoking a helloworld now returning hello + string..
So i think it's easy to test. If you don't have a svc?wsdl, then I can send you by PRIVATE my partners, ready to copy to bpel...
27. Failure during invoke: UnspecifiedMarek Baluch Jan 31, 2011 5:19 AM (in response to Igor JStarter)
Hi Igor
what version of Riftsaw are you using? All I saw in your posts is Riftsaw 2.1... The issue metioned by Gary was found in Riftsaw 2.1.0.Final and fixed in Riftsaw 2.1.1.Final (see).
If you're alredy using 2.1.1.Final or higher then please disregard this post.
Regards
Marek
28. Failure during invoke: UnspecifiedIgor JStarter Jan 31, 2011 7:09 AM (in response to Marek Baluch)
Thank you very much for your reply... We assumed it was a bug, but now we are sure ...
Does anyone know what was changed in 2.1.1 ?
Do I have a chance to update my RiftSaw 2.1.0 to work ?
29. Failure during invoke: UnspecifiedGary Brown Jan 31, 2011 9:10 AM (in response to Igor JStarter)
Hi Igor
I'm in a workshop so only have limited email/internet access.
Can you try upgrading to RiftSaw 2.2.0.Final?
Regards
Gary | https://developer.jboss.org/message/583609 | CC-MAIN-2016-50 | refinedweb | 1,326 | 66.44 |
java code to upload and download a file - Java Beginners
java code to upload and download a file Is their any java code to upload a file and download a file from databse,
My requirement is how can i... and Download file upload in struts - Struts
java file upload in struts i need code for upload and download file using struts in flex.plese help me Hi Friend,
Please visit the following links:
http
upload and download mp3
upload and download mp3 code of upload and download Mp3 file in mysql database using jsp
and plz mention which data type used to store mp3 file in mysql database
Upload and Download multiple files
Upload and Download multiple files Hello Sir/Madam,
I need a simple code for upload and download multiple files(it may be image,doc... link:
Photo upload, Download
Photo upload, Download Hi
I am using NetBeans IDE for developing application for java(Swings). And i am using MySQL as backend database.
My... dont know whether this code proper) . My question is how can i load
file upload download - Java Beginners
file upload download how to upload and download files from one system to another using java.io.* and java.net.* only please send me code
Struts File Upload and Save
Struts File Upload and Save
... regarding "Struts file
upload example". It does not contain any... example will provide you
with the code to upload the file ,in the upload
Based on struts Upload - Struts
Based on struts Upload hi,
i can upload the file in struts but i want the example how to delete uploaded file.Can you please give the code
upload and download a file - Java Beginners
upload and download a file how to upload a file into project folder in eclipse and how to download the same using jsp
Hi Friend,
Try the following code:
1)page.jsp:
Display file upload form to the user
Upload and download file - JSP-Servlet
Upload and download file What is JSP code to upload and download a document in a web page? Hi Friend,
Try the following code to upload............
Now to download the word document file, try the following code
how to upload and download images in java?
how to upload and download images in java? what is the code....
Upload and Download images:
1)page.jsp:
<%@ page language="java" %>
<HTML>
<HEAD><TITLE>Display file upload form
File Upload And download JSP Urgent - JSP-Servlet
File Upload And download JSP Urgent Respected Sir/Madam,
I... Download in JSP.. In the Admin window, There must be "Upload" provision where admin can upload files.. And in the user window, There must be a "Download" provision
Download Struts
in the source code of the Struts
framework you can download the source code from
http...Learn how to Download Struts for application development or just for learning the new version of Struts.
This video tutorial shows you how you can download
download
download how to download the file in server using php
code:
"<form method="post" action="">
enter the folder name<input type="text" name... the wrong foldername";
end1:
}
?>
</form>`print("code sample
upload video
upload video hi sir i want to code of upload and download video in mysql database using jsp...
and plz give advice which data type is used to store video in table
FileUpload and Download
FileUpload and Download Hello sir/madam,
I need a simple code for File upload and Download in jsp using sql server,that uploaded file should... be download with its full content...can you please help me... im in urgent i have
upload and download video
upload and download video how to upload and download video in mysql...;Display file upload form to the user</TITLE></HEAD>
<BODY>
<FORM...;center><td colspan="2"><p align="center"><B>UPLOAD THE FILE<
upload and download files from ftp server using servlet - Ajax
upload and download files from ftp server using servlet Hi,Sir... for upload and download files from ftp server using servlet and how to use servlet for these applications.I have send my code to you previous time
File Download in jsp
File Download in jsp file upload code is working can u plz provide me file download
upload and download files - JSP-Servlet
upload and download files HI!!
how can I upload (more than 1 file) and download the files using jsp.
Is any lib folders to be pasted? kindly... and download files in JSP visit to :
Struts File Upload Example
Struts File Upload Example
...
is the heart of the struts file upload application. This interface represents a
file... to upload is as
follows:
<%@ taglib uri="/tags/struts-bean
how to upload and download images using buttons in jsp?
how to upload and download images using buttons in jsp? how to upload and download images using buttons in jsp
Download and Build from Source
Of An Application
Download Source Code...
Download and Build from Source
Shopping cart application developed using Struts 2.2.1 and MySQL can be
downloaded from
Struts file uploading - Struts
Struts file uploading Hi all,
My application I am uploading files using Struts FormFile.
Below is the code.
NewDocumentForm... can again download the same file in future.
It is working fine when I
Struts File Upload Example - Struts
Struts File Upload Example hi,
when i tried the struts file upload example(Author is Deepak) from the following URL
i have succeeded. but when i try to upload file
download code from database
download code from database how to download files from database
php download file code
php download file code PHP code to download the files from the remote server
Photo Upload - JSP-Servlet
Photo Upload Dear Sir,
I want some help you i.e code for image upload and download using Servle/jsp.
Thanks&Regards,
VijayaBabu.M ... = 0;
//this loop converting the uploaded file into byte code
FTP File Upload in Java
easy to write code for FTP File
Upload in Java. I am assuming that you have FTP... a connection to FTP Server and perform
upload/download/move/delete operations... is client.login(userName, password);
Code to upload the 2 File Upload error
Struts 2 File Upload error Hi! I am trying implement a file upload using Struts 2, I use this article, but now the server response the error... Upload In Struts2
Thanks
java/jsp code to download a video
java/jsp code to download a video how can i download a video using jsp/servlet
Download and Installing Struts 2
Download Struts 2.0
In this section we will download and install the Struts 2.0 on the latest
version... development server. Then we will download Struts 2.0 and install the
struts
download excel
download excel hi i create an excel file but i don't i know how to give download link to that excel file please give me any code or steps to give download link
Downloading Struts & Hibernate
;
In this we will download Struts & Hibernate....
Download Struts
The latest release of Struts can be downloaded from http... libext under "C:\Struts-Hibernate-Integration\code\WEB-INF\"
. We
FileUpload and Download
coding for Upload and download file, but it is not stored in database and also it s not download the file with its content... just it download doc with 0 Bytes...;
<HTML>
<HEAD><TITLE>Display file upload form to the user<
file download
file download I uploaded a file and saved the path in database. Now i want to download it can u plz provide
HTML Upload
HTML Upload Hi,
I want to upload a word / excel document using the html code (web interface)need to get displayed on another webpage. Please let me the coding to display on another webpage using code - Struts
struts code In STRUTS FRAMEWORK
we have a login form with fields
USERNAME:
In this admin
can login and also narmal uses can log...://
Thanks
How to download JDK source code?
How to download JDK source code?
This articles explains you how you can download the source code of JDK. You
will learn how to download JDK source code... to download JDK source code?
JDK installer is packaged with the binary of the JDK
how to distinguish engines having same code - Struts
how to distinguish engines having same code hi we are using struts... and it is kept in central repository.user can change the file and again upload... introduced a new engine for testing purpose it has almost same code as previous engine
image upload
to database
The given code allow the user to browse and upload selected file...image upload Hello sir I want to upload image or any other type... be upload in the server and their path should be stored in database either
download code using servlet - Servlet Interview Questions
download code using servlet How to download a file from web to our system using Servlet
upload a image
upload a image sir,how can i upload a image into a specified folder using jsp Hi Friend,Try the following code:1)page.jsp:<p>&...;<HEAD><TITLE>Display file upload form to the user<
Jsp Upload
an error when Uploading to an database Mysql
the code is given below</p>...;
<p>else { </p>
<p>out.println("unsucessfull to upload...;<TITLE>Display file upload form to the user</TITLE></HEAD>
Upload image
Upload image Hai i beginner of Java ME i want code to capture QR Code image and send to the server and display value in Mobile Screen i want code..., here is a code:
package Example;
import javax.microedition.midlet.*;
import
Ajax File Upload Example
;
Download Complete Source Code...Ajax File Upload Example
This application illustrates how to upload a file using
upload image
upload image how can i retreive image from mysql using jsp... problem is when i use the retreival code,it displays only a small box instead of image at the screen.The retreival and uplaod code is here..
i.upload.jsp
download java source code for offline referral
download java source code for offline referral how can i download complete java tutorial from rose india so that i can refer them offline
struts2 - Struts
struts2 hello, am trying to create a struts 2 application that
allows you to upload and download files from your server, it has been challenging for me, can some one help Hi Friend,
Please visit the following
Excel File data upload into postgresql database in struts 1.x
Excel File data upload into postgresql database in struts 1.x Dear members please explain how Excel Files data upload into postgresql database in struts
1.x
To scan a image and upload to server
To scan a image and upload to server I am beginner of JavaME I want a code to scan a image and upload to server
File Upload Servlet 3.0 Example
as :
Download Source Code...File Upload Servlet 3.0 Example
In this tutorial you will learn how to upload... annotation to upload the file. Earlier versions than the servlet 3.0
specification were
Spring 2.5 MVC File Upload
server. You can download the Spring MVC file upload example code at the
end... folder that use for validation file upload .The code of
"... as :
Download Code
Download example code
How to download, compile and test the tutorials using ant.
the application on tomcat:
Download the code and then extract it using
WinZip...
Compiling and running Struts 2.1.8 examples with ant
In this section we will learn how to download | http://www.roseindia.net/tutorialhelp/comment/16898 | CC-MAIN-2014-52 | refinedweb | 1,924 | 62.17 |
36127 on
Developer Community if you have new
information to add and do not yet see a matching new report.
If the latest results still closely match this report, you can use the
original description:
Inspector is looking great so far and already very helpful in my workflow! I have an enhancement request.
In my project, I tend to need to invoke a lot of my own code and various framework code each time I debug. These are located in various namespaces (for example, in my PCL project and my iOS project, in MvvmCross's various namespaces, etc. Every time I start the inspector, I need to run several "using Blah" to get access to the functionality I need. For example, "Mvx.Trace" is part of MvvmCross that I use all the time.
It would be awesome if I could store the list of these commonly-used namespaces in the Inspector, or in the project/solution settings somewhere, so every time the Inspector starts, it would just auto-run these using statements and my code / frameworks would be immediately available for debugging.
A bonus would be detection if the specified namespaces are actually present in the loaded assemblies (and skipping them if they aren't), but this isn't necessary.
Thanks!
Hi Matt,
We're definitely looking into this, but we're trying to be careful on how we address it. There are a lot of similar and related, but not quite identical things we think we can accommodate to make Inspector more streamlined on "similar session" cases. An extreme would be saving and replaying full sessions, marking parts of a session as recorded (e.g. macro mode), etc, and "default namespaces" sort of fall into this category.
Thanks for the feedback!
As an aside: if you enter an invalid namespace in Inspector today it will be ignored, albeit verbosely. That is, you can enter a slew of valid and invalid namespaces, and only the valid ones will register. The invalid ones will result in a lot of error/warning spew however. Perhaps we could just suppress that particular compiler error (CS0246).
> using Invalid1; using System.IO; using Invalid2; using System.Collections.Concurrent
This will result in a bunch of error spew, but both System.IO and System.Collections.Concurrent will end up in scope for subsequent REPL input...
The reason it's worth mentioning is that today you could have a snippet and XS and just paste it into the Inspector. Certainly not ideal, but better than typing by hand!
FYI, the upcoming 0.5 Inspector release quiets the warning message for importing namespaces multiple times, so a copy/paste of a large set of potentially duplicate namespaces as I mentioned in my previous comment should be less annoying. | https://xamarin.github.io/bugzilla-archives/36/36127/bug.html | CC-MAIN-2019-39 | refinedweb | 460 | 64 |
In NYS we have a custom ESRI geocoding service that can geocode to rooftop level accuracy, and I am trying to use for batch geocoding in a Jupyter notebook using python. However when I attempt to perform a batch_geocode operation I am not convinced that the address is being sent to the NYS service rather than the ESRI universal geocoding service. Here is my code:
In [1]: # Import the necessary libraries
from arcgis.gis import GIS
from arcgis.geocoding import Geocoder, get_geocoders, batch_geocode, geocode
In [2]: #Set custom ESRI geocoding service url to the NYS Geocoding Service
nys_gcdr_url = 'Locators/Street_and_Address_Composite (GeocodeServer) '
In [3]: #Establish my connection to ArcGIS Online and set the preferred geocoder to the NYS geocoder
gis = GIS("", "username", "password")
esrinl_geocoder = Geocoder(nys_gcdr_url, gis)
esrinl_geocoder
Out [3]: <Geocoder url:" r">
In [4]: addresses = ["80 South Swan Street, Albany, NY, 12201",
"10 B Airline Drive, Albany, NY, 12205",
"800 North Pearl Street, Albany, NY, 12204",
"5826 Fayetteville Rd, Durham, NC 27713"]
In [6]: results = batch_geocode(addresses, esrinl_geocoder)
In [7]: for result in results:
print("Score " + str(result['score']) + " : " + result['address'])
Out [7]:
Score 99.52 : 80 S Swan St, Albany, New York, 12210 Score 100 : 10 Airline Dr, Albany, New York, 12205 Score 100 : 800 N Pearl St, Albany, New York, 12204 Score 100 : 5826 Fayetteville Rd, Durham, North Carolina, 27713
In [8]: results[1]['location']
Out [8]: {'x': -73.81899493299994, 'y': 42.73686765800005}
My issue...
From what I can gather, the gis = GIS() in Input [3]: establishes a connection to ArcGIS Online using my developer account credentials. Even though I set the Geocoder to the NYS geocoder url with esrinl_geocoder, I’m wondering which geocoder is actually being used? The results/scores I output in the code are higher than I expect them to be, for example I get a score of 95.0 when I run the first address against the NYS geocoder here (Find Address Candidates: (Locators/Street_and_Address_Composite) ). Also the x,y for the address vary based on method:
{'x': -73.81899493299994, 'y': 42.73686765800005} for ESRI Python and
{'x': -73.75989873712614, 'y': 42.654069473771784} for NYS Geocoder
In addition, the North Carolina address should not provide a match against the NYS geocoder, however it returns a score of 100 in Out [7].
So:
1) Does the batch_geocode operation default to the ESRI Geocoder despite the custom geocoder url being set with esrinl_geocoder?
2) Are there certain circumstances when this occurs, such as the ESRI geocoder kicks in when a match cannot be returned with the specified geocoder (e.g.; NC address)?
Thanks for any insight!
Can you use Fiddler software against Jupyter notebook to see the url of the geocode service that is being called when retrieving the NC address? | https://community.esri.com/thread/220903-using-a-custom-geocoder | CC-MAIN-2020-24 | refinedweb | 455 | 67.59 |
Anyone who works with Adobe(R) Photoshop knows about its very versatile color picker dialog, which far exceeds the features built in Windows' out-of-date native color dialog.
Some might say, the Windows one does the job very well. But, especially for creating applications which handle graphics or painting, we might want to have some special capabilities in the picker, such as:
Whenever you code a graphical application, you should keep in mind that the user might want to keep his colors and color themes regardless of what color model he has entered them. That means, an HSL color has to be stored the same way as a La*b* color.
Therefore, I have written this set of color picking controls and combined them into a picker dialog, inspired by Adobe's. I couldn't reproduce the exact values in the LAB section, because they seem to use another weighting format, which is based on the white-point of the spectrum.
The main goal of this picker was to support different color models. Therefore, it has to be kept in mind that all selection controls have to support a general interface. This is achieved using the class ColorSelectionModule. It is an abstract class, and every class inheriting from it offers properties for:
ColorSelectionModule
ColorSelectionFader
ColorSelectionPlane
XYZ
The ColorSelectionModule coordinates the updates of the fader and the plane. For example, if you scroll the fader, the image of the plane has to change, and vice versa. Also, if you change the selected color, both have to change.
The core of the picker are the color spaces which are the base for the ColorSelectionModule. XYZ or formerly CIEXYZ is used as the internal format, as it features floating-point intensities, and contains all the other spectrums. LAB is a weighted derivate of the XYZ space, which is basically derived from RGB, just as HSV and CMYK are.
Each color space can only convert to the next in the chain. For detailed specifications and official conversion algorithms, visit EasyRGB.com.
The main goal of this article is to provide a useful color dialog, not explain color models. If you want further information, have a look at this excellent article: Manipulating colors in .NET.
If you write a raster graphics editor, you would certainly want to be able to pick a color off the document. This can be achieved using a tool, and spoken more natively, a delegate which is called for every mouse move on the screen. To be able to pick a color from anywhere in the screen area is, on the one hand, more flexible because you can pick a color from another application as well, and on the other hand, it may be subject to more errors, as you can run onto a grid line in your document or other windows, for example.
The code involves basic Win32 API, which offers a fairly easy way to copy a color from an HDC.
// functions
[DllImport("user32.dll")]
private static extern IntPtr GetDC(IntPtr hwnd);
[DllImport("user32.dll")]
private static extern int ReleaseDC(IntPtr hwnd, IntPtr hdc);
[DllImport("gdi32.dll")]
private static extern int GetPixel(IntPtr hdc, int x, int y);
/// <summary>
/// returns the color of any location on the screen
/// </summary>
public static Color GetScreenPixel(int x, int y)
{
IntPtr descdc=GetDC(IntPtr.Zero);
Color res=ColorTranslator.FromWin32(GetPixel(descdc,x,y));
ReleaseDC(IntPtr.Zero,descdc);
return res;
}
The ability to do this is encapsulated in the control ColorLabel, which also manages Hexcode drawing and previous color comparison.
ColorLabel
Finally, we have the different controls combined on a form, which could be used straight-forward in the code. But, we should also be able to used it same way as the basic .NET / Windows color dialog. Therefore, a class called ColorDialogEx, which inherits from Component, is added. If dragged from the Toolbox onto the form of the application, it snaps to the components area and can be accessed from the code through the Color property, which gets or sets a System.Drawing.Color.
ColorDialogEx
Component
Color
System.Drawing.Color
Advanced functionalities such as picking a color off the screen, copy a color's hexcode to the Clipboard, returning to the previously selected color, or selecting the L*a*b color model is accessed through a context menu on the color label. Furthermore, pressing Shift while dragging on the selection plane/fader will display a grid the selection point will snap to. There maybe ways to display these features more visibly, but I decided to instead add a tooltip help.
Finally, I hope this will be a useful component for anyone dealing with color management in their applications. There are some more tools in the DrawingEx namespace, such as a color button, a 32 bpp true-color icon encoder with quantizer, and some 3D helper classes. Soon, there will also be a gradient selector tool similar to Adobe illustrator's one, so we will not have to re-learn interacting with the application when switching from Adobe products.
DrawingEx
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
General News Suggestion Question Bug Answer Joke Rant Admin
Man throws away trove of Bitcoin worth $7.5 million | http://www.codeproject.com/Articles/35436/Adobe-Color-Picker-Clone?msg=3617429 | CC-MAIN-2013-48 | refinedweb | 876 | 50.87 |
Merge Images Horizontally and Vertically Using Python
There are several websites that have applications/tools to merge images online. Sometimes it becomes very irritating if websites put limitations on the number of images that you can combine or on the number of merged files. I faced this problem when a website did not allow me to create more than two merged files. So, I wrote the following code to help myself. This is a basic code to merge images horizontally or vertically. You can modify it to change size, resolution, etc.
import numpy as np from PIL import Image def merge_images_horizontally(imgs): ''' This function merges images horizontally. ''' #create two lists - one for heights and one for widths widths, heights = zip(*(i.size for i in imgs)) width_of_new_image = sum(widths) height_of_new_image = min(heights) #take minimum height # create new image new_im = Image.new('RGB', (width_of_new_image, height_of_new_image)) new_pos = 0 for im in imgs: new_im.paste(im, (new_pos,0)) new_pos += im.size[0] #position for the next image new_im.save('data/final1.jpg') #change the filename if you want def merge_images_vertically(imgs): ''' This function merges images vertically ''' #create two lists - one for heights and one for widths widths, heights = zip(*(i.size for i in imgs)) width_of_new_image = min(widths) #take minimum width height_of_new_image = sum(heights) # create new image new_im = Image.new('RGB', (width_of_new_image, height_of_new_image)) new_pos = 0 for im in imgs: new_im.paste(im, (0, new_pos)) new_pos += im.size[1] #position for the next image new_im.save('data/final2.jpg') #change the filename if you want def main(): ''' Start of the code. ''' # open images files imgs = [Image.open(im) for im in list_im] ###merge images horizontally merge_images_horizontally(imgs) ###merge images vertically merge_images_vertically(imgs) #list of images list_im = ['data/photo1.jpg', 'data/photo2.jpg'] #change it to use your images if __name__ == '__main__': ''' This program merges images horizontally and Vertically. ''' main()
Here are the outputs of the above code: 1. merged horizontally, 2. merged vertically
One Comment
Thank you So much | https://www.bitsdiscover.com/merge-images-horizontally-and-vertically-using-python/ | CC-MAIN-2022-40 | refinedweb | 325 | 59.4 |
Figure 1
Figure 2
Welcome to my very first article (Thanks to Marc Clifton & others for guidelines to writing articles on CP!). *Takes deep breath* I hope it conforms to the CP guidelines! The article is about showing how to solve a dilemma that I experienced when I was developing an application, and it governs the usage of combo boxes! Ahhh! This pesky control by Microsoft seems a bit too.... contrived, to say the least, and caused me grief in trying to solve the problem with combo boxes. Basically, the problem with combo boxes is twofold:
Look at Figure 1 above to see what I mean. I know, I know, the screenshot looks simple, but what if you have a window that contains many controls, and a combo box happens to be near the edge of the client window and have a situation like the one as shown in the above screenshot! Hand on heart, it's not nice looking, is it?
Now, take a look at Figure 2 to see the horizontal scrolling in place. Now, the end user who would be looking at this, can safely scroll across to see if that pertinent selected item contains whatever is relevant, without fear of cluttering up the overall display. Sure, I can manually decrease the dropdown width and still it can be seen!
The first part of the solution involves having to figure out the length of the largest string in terms of pixels, which is easy enough. The last part is having to insert a horizontal scrollbar to the dropdown box, and this can make the overall look of the application more polished. (See Figure 2 above.)
Some of the code highlighted here can be found in the source archive.
I have included links to the relevant articles, at the bottom of the page, for reference.
There're a few pre-requisites. First, knowledge of using the Win32 API is vital, P/Invoke a must have! Secondly, a knowledge of translating from VB.NET to C# (more about this in a second!). . Lastly, loads of patience, trial & error, plenty of coffee and cigarettes, and a good reading/looking up MSDN!! <g>
In order to determine the length of the largest string, it is not in string length we're talking about here, it is in terms of pixels. Have a look at this section of code which calculates the length in pixels for a range of list items within the Items collection of the combo box. (See Figure 3.)
Items
#region GetLargestTextExtent - Obtain largest string in pixels
private void GetLargestTextExtent(System.Windows.Forms.ComboBox cbo,
ref int largestWidth){
int maxLen = -1;
if (cbo.Items.Count >= 1){
using (Graphics g = cbo.CreateGraphics()){
int vertScrollBarWidth = 0;
if (cbo.Items.Count > cbo.MaxDropDownItems){
vertScrollBarWidth = SystemInformation.VerticalScrollBarWidth;
}
for (int nLoopCnt = 0; nLoopCnt < cbo.Items.Count; nLoopCnt++){
int newWidth = (int) g.MeasureString(cbo.Items[nLoopCnt].ToString(),
cbo.Font).Width + vertScrollBarWidth;
if (newWidth > maxLen) {
maxLen = newWidth;
}
}
}
}
largestWidth = maxLen;
}
#endregion
Figure 3
Typical incantation of the above would be:
#region cboBoxStandard_DropDown Event Handler
private void cboBoxStandard_DropDown(object sender, System.EventArgs e) {
int pw = -1;
this.GetLargestTextExtent(this.cboBoxStandard, ref pw);
this.cboBoxStandard.DropDownWidth = pw;
}
#endregion
Figure 4
In Figure 4, the code consists of a combo box named cboBoxStandard and the DropDown event handler is wired up! Now, that's the first part of the problem solved, which will produce the result as shown in Figure 1.
cboBoxStandard
DropDown
The tricky part, is having to get the handle of the actual dropdown box; in C# speak, handle refers to SomeControl.Handle (it can also refer to a System.IntPtr type when using P/Invoke). In Win32 API, it is HWND which is a 32-bit double word otherwise known as a DWORD.
SomeControl.Handle
System.IntPtr
HWND
DWORD
Great! I have the combo box's handle, but that is as far as it goes in the eyes of .NET. Looking at the following information found in the Microsoft's KB, INFO article: Q262954 titled 'The parts of a Windows Combo Box and How they Relate': There're actually three windows combined to form a combo box, well I never.... a Combo Box control whose Windows class is 'ComboBox', an Edit control whose Windows class is 'Edit' and finally, a list box whose Windows class is 'ComboLBox'. For the uninitiated, a Windows class from Win32 API point of view, is how each and every window is registered in the Windows system. That is to say, it is not an OO (Object Oriented) thing.
ComboBox
Edit
ComboLBox
Righto! OK...ummm...hmmm...this 'ComboLBox' is what I'm interested in. In fact, it is the same as an ordinary standard list box, but contained in a window depending on the style of the combo box, i.e., Simple, DropDown or DropDownList. To recap:
Right, next part is how to get at that list box's handle...so I dug deep within MSDN after a cuppa too many, with a few cigarettes included. There's a neat Win32 API function which does exactly what I needed to achieve this... GetComboBoxInfo which returns a reference to a structure called ComboBoxInfo. In Win32 API speak, it returns a pointer to a structure COMBOBOXINFO. See Figure 5 for the declaration of the function which is commonly used in C/C++ family of Win32 development.
GetComboBoxInfo
ComboBoxInfo
COMBOBOXINFO
// Win32 API Function as per MSDN docs
BOOL GetComboBoxInfo(HWND hwndCombo, PCOMBOBOXINFO pcbi);
//
// C#'s equivalent Function.
[DllImport("user32")] public static extern bool
GetComboBoxInfo(IntPtr hwndCombo, ref ComboBoxInfo info);
//
#region RECT struct
[StructLayout(LayoutKind.Sequential)]
public struct RECT {
public int Left;
public int Top;
public int Right;
public int Bottom;
}
#endregion
#region ComboBoxInfo Struct
[StructLayout(LayoutKind.Sequential)]
public struct ComboBoxInfo {
public int cbSize;
public RECT rcItem;
public RECT rcButton;
public IntPtr stateButton;
public IntPtr hwndCombo;
public IntPtr hwndEdit;
public IntPtr hwndList; // That's what I'm interested in....
}
#endregion
Figure 5
I included the StructLayout attribute to guarantee the values will go into the right offsets during the P/Invoke call, as using P/Invoke marshals the data from managed to unmanaged boundaries and back again. I wrapped up this function into a simple method as shown in Figure 6.
StructLayout
private bool InitComboBoxInfo(System.Windows.Forms.ComboBox cbo){
this.cbi = new ComboBoxInfo();
this.cbi.cbSize = Marshal.SizeOf(this.cbi);
if (!GetComboBoxInfo(cbo.Handle, ref this.cbi)){
return false;
}
return true;
}
Figure 6
this.cbi is a global variable within the form's class. We call new on it to get a block of memory assigned to the variable, and we use Marshal.SizeOf() to pre-fill the cbiSize field of that structure prior to the call via P/Invoke. Some structures which are passed into Win32 API functions require this prior to P/Invoke. Check with the MSDN or pinvoke.net. Then pass it into the Win32 API function via P/Invoke, so that it is guaranteed that the block of memory gets filled up after the trip to the unmanaged world. If the call fails, we bail out, and the combo box will have standard default behavior after doing a simple check on the bool value returned in certain places! Great!
this.cbi
new
Marshal.SizeOf()
cbiSize
bool
Now, that we have the list box's handle, next part is 'sticking in the horizontal scroll bar'. More coffee and cigarettes, more reading...until I came across an article written in MSDN's December 2000 edition 'ActiveX and Visual Basic: Enhance the Display of Long Text Strings in a Combobox or Listbox'. In the article, the author described how to achieve the above code in Figure 3 using VB 6. It provided the inspiration to do what I needed to do exactly, albeit it was in VB 6. Look at Figure 7 to see the classic VB 6 code.
Private Const WS_HSCROLL = &H100000
Dim lWindowStyle As Long
lWindowStyle = GetWindowLong(List1.hwnd, GWL_STYLE)
lWindowStyle = lWindowStyle Or WS_HSCROLL
SetLastError 0
lWindowStyle = SetWindowLong(List1.hwnd, GWL_STYLE, lWindowStyle)
Figure 7.
It was a matter of translating the code directly to C#'s equivalent, as shown in Figure 8.
[DllImport("user32")] public static extern int
GetWindowLong(IntPtr hwnd, int nIndex);
[DllImport("user32")] public static extern int
SetWindowLong(IntPtr hwnd, int nIndex, int dwNewLong);
public const int WS_HSCROLL = 0x100000;
public const int GWL_STYLE = (-16);
int listStyle = GetWindowLong(this.cbi.hwndList, GWL_STYLE);
listStyle |= WS_HSCROLL;
listStyle = SetWindowLong(this.cbi.hwndList, GWL_STYLE, listStyle);
Figure 8.
That section of code can be found in the cboBoxEnhanced_DropDown event handler. Basically, what the above code does is, it adjusts the style of the list box to include a Window Style Horizontal SCROLLbar. Every control has a default style, which is a combination of bits, that defines the behavior of the control and how Windows handles the default behavior or processing of events. In this instance, I extract the original bit-mask for the list box's handle using Win32 API Function GetWindowLong via P/Invoke. Then I perform a bit-wise OR on the mask itself to include the horizontal scrollbar, then call SetWindowLong via P/Invoke again.
cboBoxEnhanced_DropDown
GetWindowLong
SetWindowLong
The constants can be found in the SDK; if you have Visual Studio 2003, it can be found in the VC7\PlatformSDK\Include. A browse around the C/C++ header file winuser.h is where constants can be found; for common controls it is commctrl.h. If you don't have Visual Studio, why not try get the Borland C++ 5.5 Compiler (Command Line only - which includes the SDK stuff).
Note the use of this.cbi.hwndList in the above Figure 8 (this.cbi.hwndList was obtained in the above Figure 6)! That's how the horizontal scroll bar gets inserted into the list box. Next, we need to notify the list box's horizontal scrollbar so that the scrolling magic can take place. To achieve that, another Win32 API function call is required, our friend SendMessage.
this.cbi.hwndList
SendMessage
[DllImport("user32")] public static extern int
SendMessage(IntPtr hwnd, int wMsg, int wParam, IntPtr lParam);
public const int LB_SETHORIZONTALEXTENT = 0x194;
// Set the horizontal extent for the listbox!
SendMessage(this.cbi.hwndList, LB_SETHORIZONTALEXTENT,
this.pixelWidth, IntPtr.Zero);
Figure 9.
So that's it...or so I thought....scrolling works just fine, the scrollbar's thumb-tracking doesn't work...damn... even more cups of coffee...OK...I realized that I need to subclass this list box and take care of the horizontal scrolling...more searching around until I came across a very fine article here on CP ' Subclassing in .NET -The pure .NET way' by Sameers (theAngrycodeR), which was written using VB.NET. It would be helpful if I could divert you to read the article and to understand how his code works. It is impressive! Thanks Sameers for publishing your article, without it, this wouldn't have been achieved!
Here's the translation of the VB.NET code into C#, as shown in Figure 10. I enhanced it slightly by changing the constructor and adding message crackers (a legacy from the Win 3.1 days when wParam and lParam were used to hold two 16 bit values within a long data type - which was C/C++'s datatype of 32 bit value at the time). Of course, this is an excellent example of how events/delegates comes into play here.
wParam
lParam
long
#region SubClass Classing Handler Class
public class SubClass : System.Windows.Forms.NativeWindow{
public delegate void
SubClassWndProcEventHandler(ref System.Windows.Forms.Message m);
public event SubClassWndProcEventHandler SubClassedWndProc;
private bool IsSubClassed = false;
public SubClass(IntPtr Handle, bool _SubClass){
base.AssignHandle(Handle);
this.IsSubClassed = _SubClass;
}
public bool SubClassed{
get{ return this.IsSubClassed; }
set{ this.IsSubClassed = value; }
}
protected override void WndProc(ref Message m) {
if (this.IsSubClassed){
OnSubClassedWndProc(ref m);
}
base.WndProc (ref m);
}
#region HiWord Message Cracker
public int HiWord(int Number) {
return ((Number >> 16) & 0xffff);
}
#endregion
#region LoWord Message Cracker
public int LoWord(int Number) {
return (Number & 0xffff);
}
#endregion
#region MakeLong Message Cracker
public int MakeLong(int LoWord, int HiWord) {
return (HiWord << 16) | (LoWord & 0xffff);
}
#endregion
#region MakeLParam Message Cracker
public IntPtr MakeLParam(int LoWord, int HiWord) {
return (IntPtr) ((HiWord << 16) | (LoWord & 0xffff));
}
#endregion
private void OnSubClassedWndProc(ref Message m){
if (SubClassedWndProc != null){
this.SubClassedWndProc(ref m);
}
}
}
#endregion
Figure 10.
Every control, no matter what, is inherited from NativeWindow which is the essence of how the .NET wrappers within the FCL work for all sorts of controls. There's one caveat emptor that I must mention regarding this class, it does not work for components such as ToolTips (BTW, its handle is not exposed at all! - Can somebody explain how to get at handle for controls such as Tooltips?). So now, it is a matter of deriving an instance of this class and passing in the this.cbi.hwndList into the class' constructor, create the event handler, and then we're in business..
NativeWindow
// Within the Constructor of the Form.
this.gotCBI = this.InitComboBoxInfo(this.cboBoxEnhanced);
if (this.gotCBI){
this.cboListRect = new RECT();
this.si = new SCROLLINFO();
this.scList = new SubClass(this.cbi.hwndList, false);
this.scList.SubClassedWndProc += new
testform.SubClass.SubClassWndProcEventHandler(scList_SubClassedWndProc);
}
Figure 11.
RECT and SCROLLINFO are structures which hold the rectangle region and scrolling information (surprise, surprise) respectively. You'll see why I initialized/instantiated the variables...hint, hint, subclass...
RECT
SCROLLINFO
private void scList_SubClassedWndProc(ref Message m) {
switch (m.Msg){
case WM_SIZE:
GetClientRect(this.cbi.hwndList, ref this.cboListRect);
this.xNewSize = this.scList.LoWord(m.LParam.ToInt32());
this.xMaxScroll = Math.Max(this.pixelWidth - this.xNewSize, 0);
this.xCurrentScroll = Math.Min(this.xCurrentScroll, this.xMaxScroll);
this.si.cbSize = Marshal.SizeOf(this.si);
this.si.nMax = this.xMaxScroll;
this.si.nMin = this.xMinScroll;
this.si.nPos = this.xCurrentScroll;
this.si.nPage = this.xNewSize;
this.si.fMask = SIF_RANGE | SIF_PAGE | SIF_POS;
SetScrollInfo(this.cbi.hwndList, SB_HORZ, ref this.si, false);
break;
case WM_HSCROLL:
int xDelta = 0;
int xNewPos = 0;
int modulo = (this.xNewSize > this.pixelWidth) ?
(this.xNewSize % this.pixelWidth) : (this.pixelWidth % this.xNewSize);
switch (this.scList.LoWord(m.WParam.ToInt32())){
case SB_PAGEUP:
xNewPos = this.xCurrentScroll - modulo;
break;
case SB_PAGEDOWN:
xNewPos = this.xCurrentScroll + modulo;
break;
case SB_LINEUP:
xNewPos = this.xCurrentScroll - 1;
break;
case SB_LINEDOWN:
xNewPos = this.xCurrentScroll + 1;
break;
case SB_THUMBPOSITION:
xNewPos = this.scList.HiWord(m.WParam.ToInt32());
break;
default:
xNewPos = this.xCurrentScroll;
break;
}
xNewPos = Math.Max(0, xNewPos);
xNewPos = Math.Min(xMaxScroll, xNewPos);
if (xNewPos == this.xCurrentScroll) break;
xDelta = xNewPos - this.xCurrentScroll;
this.xCurrentScroll = xNewPos;
this.si.cbSize = Marshal.SizeOf(this.si);
this.si.fMask = SIF_POS;
this.si.nPos = this.xCurrentScroll;
SetScrollInfo(this.cbi.hwndList, SB_HORZ, ref this.si, true);
break;
}
}
Figure 12.
Even more Win32 API function calls come into play here...well, that sounds like an overstatement, in truth that's two APIs here! APIs used here are SetScrollInfo, GetClientRect, which can be seen in the above Figure 12...
SetScrollInfo
GetClientRect
WM_SIZE
The scrolling is not 100% accurate, download and take a look at the demo app and play with the thumb tracking and arrow buttons...that's it. From there on, the sky's the limit, and of course, you can put all of this into an extender control if you so desire. There, it wasn't too hard, was it?...a bit of ingenuity, persistence, and patience does indeed pay off!
In the source archive, I have included two radio buttons and two labels, so it is slightly different to the screenshot in the above. It essentially changes the dropdown style at runtime to convince myself that the code works for both styles: DropDown and DropDownList.
No error checking is done. If you use this code, please put in error checking to make it production-ready! This hacking took me three days + nights.
While hacking this, initially, I tried to display a tooltip depending on the cursor position whilst the dropdown list box is visible, and the tooltip never showed up, it took me ages to figure out why - but I discovered through the MSDN archive, that apparently, the tooltip's window (in which the tooltip text is contained) has lower precedence than the dropdown list box, i.e., z-order of the window is such that the tooltip's window appears on the bottom of other windows. Hence the dropdown portion is on top of it and the tooltip will never show up! That I didn't know....but it is interesting because I initially made an attempt to simply bring the tooltip's window to the foreground via the Win32 API function call SetWindowPos. But then I was caught out as I realized that the handle of the tooltip wasn't exposed publicly...I don't know why...but that's for another day.....
SetWindowPos
The other thing, is that you might question - would subclassing the actual combo box work? To my amusement - with the above code in place, the combo box, get this...did not get any of the WM_HSCROLL messages.... funny this is, after investigating via Spy++, deciphering the hexadecimal messages flashing past my eyes, scrolling off the screen, and coming to a conclusion, the mouse capturing is taking place within the dropdown box and hence all mouse messages were sent to the dropdown box. That explains how the combo box got the focus on the dropdown box when the dropdown style is set to DropDownList or plain DropDown.
WM_HSCROLL
Yeah, I admit my code ain't reliable, i.e., the scrolling, but hey it works!
I did put this into an extender control class, and learnt a very important lesson, if you intend to develop a custom combo control using the code like above, be sure that the combo box's parent handle is set to the form at design time, otherwise bizarre problems will appear, such as subclassing not firing, the horizontal scrollbar not appearing etc. In fact, GetWindowLong fails with an error code of 0, and a quick check to Marshal.GetLastWin32Error() informs me of error code 1400 which is 'Invalid Windows Handle', and SetWindowLong fails!! Bear in mind, that you would have to drop a plain Win combo box when in design view, and in the code, change it to match that of the user control/extender control and you should be OK. If you would like to see a working example of the extender control code, which you can add to the toolbox in VS 2003, drag and drop it on to the form etc., let me know!
Marshal.GetLastWin32Error()
Tip: It would be best to create an event handler for the DropDownStyleChanged and put the call to InitComboBoxInfo in there, as you would have to call it anyway in order to ensure that the dropdown box's handle is up-to-date and to instantiate a fresh instance of the subclass to match that of the up-to-date handle. Otherwise, you'll run into the similar situations and problems like I did regarding invalid handles and bizarre problems!
DropDownStyleChanged
InitComboBoxInfo
A side bonus that I discovered when I put the above code in place was I got automatic vertical scrolling when the high-light was at the bottom of the dropdown box, whether that was because I have a mouse-wheel-type of mouse - I don't know! Another plus, if you have DrawMode set to OwnerDrawFixed with plain fonts and fancy colors for backgrounds etc., it works a treat. With images, you're on your own.
DrawMode
OwnerDrawFixed
Final note: I was surprised at how much I have learnt from hacking this Combo Box. If I were ever to look at such a difficult control like this Combo Box again, I'd be feeling like 'Oh no! Not another pesky control *sigh*'......
Initial version. | http://www.codeproject.com/Articles/9455/Hacking-the-Combo-Box-to-give-it-horizontal-scroll?msg=1094668 | CC-MAIN-2015-18 | refinedweb | 3,267 | 57.47 |
[12:32pm] ajturner: Allan brought up some points on the GeoRSS blog about diff't uses for featuretype - more formal "mountain", "restaurant", etc. but also things like "hallway", "cafe", etc.
[12:37pm] joshli: Introductions: John Goodwin - OS ontology modeling
[12:37pm] joshli: Andy Turner: developer of geoblogging software and georss contributor
[12:38pm] joshli: Chris Goad: works for Platial and also worked on rdf
[12:53pm] ajturner: sorry - I guess my phone died
[12:54pm] joshli: Peisheng: works at GMU on Web Services composition
[12:59pm] joshli: Discussion of whether we need a "placeholder" feature object to hang the georss properties on.
[1:08pm] joshli: Action: Chris and John Goodwin will collaborate on an OWL /RDF proposal for GeoRSS
[1:08pm] joshli: Issue of microformats: three proposals for use, no real consensus yet.
[1:15pm] joshli: Discussion: basic ontology components (review)
[1:33pm] joshli: Discussion: SOCoP collaboration, working on foundation ontology components form which to run a domain ontology pilot project
[1:36pm] ajturner: what MF proposals are there?
[1:37pm] joshli: 1. Use span and div and stuff coordinates into attributes like "title".
[1:37pm] joshli: Ugllyyy[1:37pm] ajturner: how do those fit w/ the current geo mf?
[1:39pm] joshli: it is a little unclear
[1:39pm] joshli: if you look at
[1:40pm] joshli: there is an example which renders coordinates: e.g. <span class="latitude">37.386013</span>
[1:41pm] joshli: and another one which doesn't have to:n <abbr class="latitude" title="37.408183">N 37° 24.491</abbr>
[1:41pm] ajturner: right - either are acceptable
[1:42pm] joshli: Putting the coordinates into "title" is not pretty, however.
[1:42pm] joshli: Allan and I looked at <object>[1:42pm] joshli: I have somewhere an example. It lets you put a data string in, a class, and also reference a component such as a plugin.
[1:43pm] joshli: 3. Use <meta content="" /> and <link /> , but in the body not just the head of the page
[1:44pm] ajturner: but I don't believe Object tags are rendered?
[1:44pm] joshli: This is the RDF-A proposal for annotating XHTML. It isn't currently valid XHTML, however, most browsers simply ignore it.
[1:44pm] ajturner: MF has the focus to keep it simple and as historically has been used and is simple to use in editors/current devs, etc.
[1:45pm] joshli: Re: Object - yes, that's the idea. A regular browser without some form of intervention has no useful rendering for geotags.
[1:45pm] joshli: But it makes the coordinates available for javascript or some other scripting language or plugin to make use of.
[1:46pm] ajturner: JS can access the current MF's very well too
[1:46pm] ajturner: in fact, it's rather easy now - can just getElementsByClassName("geo")
[1:47pm] joshli: Yes, and my only objection is the use of "title". It's a useful hack, but still a hack in terms of widespread usage.
[1:47pm] ajturner: true
[1:48pm] joshli: That said, I don't think these are mutually exclusive, it would just be good to have a strict usage in each case.
[1:53pm] ajturner: one thing to defintily keep in mind, which the current MF's don't do - is associate the geo or adr w/ the context
[1:53pm] ajturner: so you have those coords, but don't know specifically what they're referencing
[1:53pm] ajturner: the only way is to use an hCard or hEvent
[1:57pm] joshli: That is a good point, in that we can use several tags in RSS to be more specific, but the XHTML elements only allow one tag each.
[1:58pm] ajturner: right - RSS is at least forceably enclosed contextually - one item has a date, author, description, tag, geo reference
[1:58pm] ajturner: although - a big shortcoming of GeoRSS Simple is the limitation of a "single point"
[1:58pm] ajturner: and no possibility of a collection of points
[1:59pm] joshli: There is always the GeoRSS route for annotating anchors in a Web page. Then you can reference a context document to place the reference on, etc.
[2:02pm] ajturner: well, referencing another element would be sufficient
[2:02pm] ajturner: the problem is w/ MF, as with many standards bodies, there is a sol'n -b ut that gets dragged out forever in commmittee, pragmatic waiting, and adoption
[2:21pm] stakagi joined the chat room.
[2:26pm] joshli: Here is an <object> example:
<span>Tagged content<object class="georss:where" classid="georss:point" data="45.256 -110.45">
<param name="georss:crs" value="epsg:4269" /> </object></span>
The param element is a little wordy, but precise.
[2:27pm] ajturner: but - that object isn't rendered by the browser, right?
[2:28pm] joshli: No, in fact you can refer to what code is supposed to deal with the object.
[2:28pm] ajturner: well, but that's an extra layer of complexity - by default you do that work for no benefit
[2:28pm] ajturner: it requires that someon install some plugin to use that data
[2:28pm] joshli: It could just be javascript.
[2:30pm] ajturner: but what if I just want to display the coords?
[2:30pm] joshli: The concept is to geotag or "featurize" some part of a Web page, so that it is ignored unless appropriate, so that it has a standard meaning when appropriate, and so that it doesn't conflict with other usage in a page.
[2:30pm] ajturner: why can't you do all that just in a div or span insstead of an object? you don't like the coords in the title?
[2:31pm] ajturner: b/c I agree the MF could include more useful things like projection
[2:31pm] ajturner: name, tags, featuretype, etc.
[2:31pm] joshli: Displaying the coordinates is ok, but I don't think that would normally be appropriate.
[2:32pm] ajturner: hrm, depends - many people now display coords, but I agree that perhaps not really useful to most people
[2:32pm] ajturner: address/name is most useful
[2:33pm] joshli: Ultimately we'd like to have geotags and refs which the browser does something specifically useful with, but in the meantime they are just annotations which help find pages or help additional code to do something useful.
[2:34pm] ajturner: well, but "in the meantime" they could also be displayable - and there are lots of utils that already do something with the geo mf[2:35pm] ajturner: display maps, GPX, aggregate, etc. using FF extensions
[2:35pm] joshli: sure, that doesn't require display however.
[2:53pm] stakagi: I think it desirable that Geo Vocabulary Update is performed by the addition of a qualifier vocabulary. It is a modularization like Dublin Core.
[2:53pm] joshli: I am not sure what you mean.
[2:57pm] stakagi: it means this spec () itself is not updated but making a new specification. Based on Geo Vocabulary Update.
[2:58pm] stakagi: As qualifier
[2:58pm] joshli: Yes, I mean to update the namespace with a new specification, not really update the old vocabulary itself
[2:59pm] stakagi: yes
[3:02pm] stakagi: It is still under construction. >
[3:04pm] joshli: good work, now we hope for an OWL/RDF contribution to make use of it.
[3:05pm] stakagi: ISO6709 tools for Java is
[End of minutes] | http://www.w3.org/2005/Incubator/geo/minutes/20061211.html | CC-MAIN-2014-49 | refinedweb | 1,223 | 58.11 |
Coeur d’Alene Newsline >>your relevant, offbeat, local buzz publication
northwest’s best
The northwest’s best businesses
senior focus
Healthy living for seniors
happy easter
Celebrations around the world
calendar of events
April events
April 2012 // Coeur d’Alene, Idaho Photo by Spectacular Images Photography
waterfront dining fresh seafood steaks • salads
View Atmosphere Experience…
Breakfast • Lunch • Dinner
58 Bridge Street at City Beach, Sandpoint, Idaho | 208.255.7558 | www.trinityatcitybeach.com
Curwen's Body & Paint
Excellent Auto Body Repair — Guaranteed in Writing
• Paintless Dent Repair without Breaking the Bank
• Full Collision Repair
• Custom Paint
• Restoration & Customization
• Rock Chip Repair & Glass Replacement
• One Day Detailing with Free Pickup & Delivery in Coeur d'Alene, Hayden, Post Falls & Rathdrum Areas*
All Work Guaranteed in Writing. Over 20 years experience. All Insurances Accepted. Free Estimates. At Curwen's Body & Paint, where "Quality is Never an Accident."
Find us on Facebook! Scan for contact information.
(208) 762-1171
13440 N. Clovis Road, Hayden. 2.5 Miles North of Hayden Avenue on Highway 95.
*Vehicle must be legally drivable
TABLE OF CONTENTS

06 Northwest's Best — Deal it Local
08 Healthy Living
09 Smart Money — Improve the way you live: geothermal heating and cooling
10 Senior Focus — Healthy living for seniors
12 Financial Focus — Tax changes & homeowner's insurance
14 Happy Easter — Celebrations around the world
16 Paw Prints — Pets can improve your health
17 Business Spotlight — Change Your Mind
18 High School Sports — Lake City HS & Coeur d'Alene HS
21 Dining Guide — Local eats
24 Calendar of Events — April & May events
26 Activities and Fun — Games & jokes

Boomer the Squirrel: Find Boomer's acorn hidden in an ad and be the 11th person to email us at [email protected] with its location to win $15! Congratulations to March's acorn finder, Calvin Kimble!

ADVERTISING & SALES: Heather Hart - ext. 140, [email protected]
GRAPHIC DESIGN: Whitney Howard, [email protected]

View the Coeur d'Alene Newsline online at and don't forget to like us on Facebook at facebook.com/newslinesonline. This Newsline is brought to you by Like-Media. If you would like to advertise with us, please call 208.904.3838 or email [email protected]. Fax: 800.536.5967. To submit articles, photos, nominations and events, email us at [email protected]. Advertise in the Sandpoint and Bonners Ferry Newslines. Call today for a "Network Buy": 208.904.3838.

[Pictured: the April 2012 Sandpoint Newsline cover — "your relevant, offbeat, local buzz publication." Happy Easter: celebrations around the world; outdoor experience: fly fishing; home and garden guides: what's in a seed? Enter to win! Photo by Tanyia Oulman Photography.]

Living Water Lawncare — Residential & Commercial
Lawns • Gutters • Windows • One Time or Scheduled Visits
• Lawn Mowing • Trimming • Edging • Power Raking • Gutter Clearing • Indoor/Outdoor Window Cleaning • Leaf & Pine Needle Removal • Drive & Walkway Blowing • Fertilizing • Power Washing
Licensed & Insured. Lawns Cared For by Our Family... For Your Family.
Kyle & Brittney Holmes (208) 818.2400 — Col 3:13
NORTHWEST’S BEST
Think Local. Buy Local. Deal it Local.
Northwest's Best
WHY BUY LOCALLY OWNED?
There are many well-documented benefits, to our communities and to each of us, in choosing local, independently owned businesses. We realize it is not always possible to buy what you need locally, and so we merely ask you to Think Local FIRST!
10 Reasons to Think Local - Buy Local - Deal it Local!
1. Buying local means supporting yourself. Studies have shown that when you buy from an independent, locally owned business, significantly more of your money is used to make purchases from other local businesses, which in turn make more local purchases, requiring less transportation.
Make Beer, Make Wine
Great wine, great beer, great cheese, great experience! More and more people are discovering how fun and easy it is to craft premium-quality wines, beers and cheeses that they can call their own. Let our knowledgeable and friendly staff (Levi) assist you in choosing the perfect wine or beer, which we can ferment and filter for you, or you may take it home. We carry everything you need to build all of your favorite brews and cheeses! Coeur d'Alene—1411 North 4th Street 208.765.8576
Northwest Supply Company
Dan Petersen, Owner and General Manager of Northwest Supply Company, has been in the commercial janitorial and carpet cleaning business for over 30 years. Dan and his family settled in the Coeur d'Alene, Idaho area to be near their extended family. Realizing he was too young to be retired, Dan recognized the need for a local company that would be a complete supplier of superior cleaning supplies and equipment for janitors, carpet cleaners, homeowners and businesses, and he opened Northwest Supply Company in 2009. Having had such a good relationship with his own supplier while in the cleaning business, Dan knows how important customer service is. Dan says, "And that is our focus - to give our customers what they need, when they need it, to get the job done right."
Super Silver
If you love silver, you'll love our store! We carry thousands of pieces of sterling silver and semi-precious stone jewelry, including rings, earrings, chains, pendants, bracelets, anklets, hoops, charms, toe rings & more! Coeur d'Alene—414 Sherman Avenue 208.667.8170
REMAX All Seasons
"For All the Things that Move You."
Voted #1 by JD Powers in Customer Satisfaction! Real Estate Experts and Customer Service Providers. Serving Coeur d'Alene, Hayden, Post Falls, Schweitzer, Bonners Ferry, Hope, Priest River and Sandpoint. Call today for your free CMA. [email protected] | 1.888.897.5073
Army/Navy Outdoor
Locally owned and operated. We love made-in-USA. The only independent boot dealer in the panhandle: Red Wing, Thorogood, Danner, Wolverine, Georgia and Vanson. Men's and women's boots. Surplus, ammo cans, cargo, camo, wool pants and wool blankets too. Carhartt work clothing and more. Yes, we special order. Come and see us! Coeur d'Alene—1620 North Government Way 208.667.6829
Rejuvenating Rays
Here at Rejuvenating Rays we want you to have a great experience. We provide everything you need to do that here, including tanning beds, airbrush spray tan, massage therapy, an infrared sauna and a great line of tanning lotions. We strive to provide a very relaxing and tranquil place to hang out for awhile and chat in our waiting area. We have very reasonable prices and run great monthly specials. Come check us out and take a tour! Check out my website at. Coeur d'Alene—2900 North Government Way 208.664.4500. Find us on Facebook! Coeur d'Alene—3202 North 4th Street 208.655.7074
Absolute Property Management
Absolute Property Management is a full-service property management company located in beautiful downtown Coeur d'Alene, Idaho. We manage residential, vacation, and commercial properties in the greater Kootenai County area. We would love the opportunity to meet with you to discuss your personal needs and concerns about your rental property, or to help you find that perfect home! For vacation rental information, email: [email protected]. Coeur d'Alene—910 North Third Street Phone: 888.208.2112, Fax: 208.665.0600
REMAX
REMAX All Seasons wants to advertise your property in Kootenai County! We are a full-service brokerage proven to be #1 in customer service by JD Powers. Isn't that what you want when you are trying to sell or buy a home? Call today for your free Comparative Market Analysis and find out about the REMAX Difference! 208.255.7400 or 1.888.897.5073
Living Water Lawn Care
Kyle Holmes, who recently returned from Afghanistan and was unable to see his new bride for over a year, was inspired to pursue a career closer to home. Thus, Living Water Lawncare was launched. Living Water Lawncare is a faith-based lawn business, inspired by Colossians 3:23, "and whatsoever ye do, do it heartily, as to the Lord and not unto men." With this philosophy, combined with his military background, Kyle Holmes will ensure your lawn care needs are met. Spokane, Washington and Coeur d'Alene, Idaho 208.818.2400
Calder Store
Calder Store is a truly unique and historical place. It was built in 1916, when Calder, Idaho flourished as an old railroad mill town. Now owned by Jeff Barber, it is still the heart of the St. Joe River Valley and the centerpiece of the area's outdoor life: hunting, fishing, boating, camping, hiking and biking. As Jeff Barber says, "There is no place I'd rather be."
April 14th and 15th is the perfect time for you and your family to check out historic Calder, Idaho, as it is the turning point of the St. Maries to Calder jet boat races. At the Calder Store you may treat your family to prime rib in their historical restaurant. They also offer a full bar and a phenomenal general country store. Unlike anything you've ever experienced before, the Calder Store is truly the centerpiece of a sportsman's paradise. Calder—40 Railroad Street 208.245.5278

Spokane Highlights

Whiz Kids
Whiz Kids is a specialty toy store located on the skywalk level of River Park Square. We have the toys you grew up with, regardless of your generation. Our toys encourage families to spend time together. Our toy consultants can help you find toys for newborns to 108-year-olds. Come see our "smart toys for smart kids." Downtown Spokane—808 West Main Ave, #251 509.456.TOYS (8697)
Lolo Boutique
Lolo Boutique is a stylish little boutique offering unique treasures for reasonable prices. Lolo carries contemporary women's clothing, accessories and items for home and garden. Lolo is foremost a place where you can take time for yourself and maybe find a treat or two! Downtown Spokane—319 West Second Avenue 509.747.2867

F1 for HELP
F1 for HELP is a locally owned and operated computer shop based in Rathdrum, Idaho. With your support, we have been in business in this location for the last 6 years. Out of our Rathdrum office, we offer new and used computer systems and parts. We remove viruses and malware from infected computers. We have a recycling program for your unwanted, unused or broken equipment. Our outside service tech brings the expertise of the shop to your home or place of business. Check out our website for more information about the many services we provide or give us a call at 208.687.0183. Joseph Hume, Owner. Rathdrum—13785 West Highway 53 208.687.0183
HEALTHY LIVING

The Pain in my B--
CLENCHED MUSCLES CAN BE PROBLEMATIC
On my first trip through San Francisco back in 1993, I was a passenger in a car cruising downtown very close to the lane of parked cars. Being young and unfamiliar with busy streets, I was quite scared! In a micro-moment I had a very odd realization: not only was I clenching my butt from being so scared, but I had been clenching my butt all my life in response to stressful, uncomfortable, and emotional situations! Once it was brought to my attention, I was able to swiftly change that pattern. I stopped feeding stress to my tired, clenched butt and began freeing my body of a chronic, stress-filled holding pattern. If I hadn't managed the stress being held in my butt muscles, it may have later manifested in discomfort or pain. My tight muscles could have easily pulled my pelvis into a rotation that would have then affected my legs and feet, and eventually my neck and head, giving rise to the potential for arthritic conditions, limited range of motion, or a number of apparently unrelated conditions. In other words, that single issue could have created functional problems later in life. A clenched jaw or butt, tense shoulders, or tight muscles may not seem like a big deal in the moment, but in the long term this type of muscular stress can become problematic. Releasing muscular tension through regular massage therapy is just one way to help manage this type of stress. Some people believe that massage therapy is a luxury rather than an investment, but others find the opposite to be true. Massage therapy provides many benefits. At the cellular level, it increases circulation and lymph drainage, helps to flush and eliminate toxins, softens fascia, invites the body's natural endorphins to relieve pain, and relaxes and re-educates muscles to move with greater ease. Regular massage also promotes body awareness of how holding patterns can contribute to seemingly unrelated pain. Massage also induces "feel-good" hormones to pulse through the body, making the body feel as if it's just had a two-week vacation! And contrary to some beliefs, massage does not have to be painful to be effective, but you may need to "try out" several therapists to find one who works well with you. Some specialized manual therapies, such as Myofascial Release, Trigger Point Therapy, Structural Integration or Rolfing, and Feldenkrais, address holding patterns more specifically and effectively than generalized massage. Sometimes it takes several treatments for tight muscles to truly relax, release and relearn; as with any bad habit, it takes time and commitment to break the old pattern and establish a new one. Today, consider the holding patterns in your body and how they might be affecting the future of your long-term health and vitality. Massage therapy may be a great option to help you begin changing those patterns and reclaiming your functional health. Monica Perrier, LMP, NCMT is a Licensed Massage Practitioner and Nationally Certified Massage Therapist, and has been practicing for nearly 20 years. She can be reached through her website, or by phone, 208.651.0598. Stay tuned for future articles with some simple steps you can take to contribute to your health savings account. In the meantime, visit my website for more tips and information.

Whole Body Synergy
Monica Perrier's business is Whole Body Synergy, located near downtown Coeur d'Alene. She is a Licensed Massage Practitioner specializing in stress reduction, pain relief and injury treatment. She also assists people in nutritional health coaching as The Good Food Nanny.

You Are Already a Perfect You
BY DAVE SCHMITZ-BINNALL
There is really only one goal in life: to be you, and to be you as well as you possibly can. Over the twenty-odd years that I've been working with people on how to improve their lives, I still find it exciting to see the way in which everything seems to start falling into place for them once they find their true self and begin living out of that. The problem is, most of us spend our lives trying to be something other than our true selves. We try to live up to what we think is expected of us, or what we believe a person like us ought to be. This expectation comes from all around us: society, parents, teachers, ministers, the media, etc. We are surrounded by continual, relentless and staggering pressure to "be a particular way", to "look a particular way", to have "particular dreams, goals and ambitions", to hold "particular beliefs and values", to have a "particular type of job or profession" or a "particular level of education", and so on. To make matters worse, by the time we are in our twenties, most of these expectations are so deeply embedded within our mind that we believe they are our own ideas! Most of us end up mistakenly believing that our true self is somehow inadequate, faulty, useless or just plain bad. Instead, we create, in our minds, an image of an "ideal self" which exemplifies everything that we think we ought to be. Now we dedicate the rest of our lives to the pursuit of this ideal. Sadly, most of us are doomed to either fail in this pursuit, get stuck somewhere along the way or sabotage ourselves. In some cases, we achieve the goals of the ideal self but still find our lives unsatisfying and unfulfilling. The solution is to make what I call the return journey to our true self. Everyone can do this, with some help and guidance. Unfortunately, many people are scared to make this journey because they have come to believe (mistakenly) that there is something wrong with who they really are and that they will not like their true self. My response to people who are afraid of rediscovering their true selves is to point out that God doesn't create rubbish. We are not born into the world alienated from ourselves. We are, in fact, born into the world as a perfect us: uniquely talented and gifted, endowed with everything we need for a successful, happy, fulfilled life. Thankfully, even though this person gets buried or suppressed, he or she is still there; it is not possible to destroy it. Next month I will talk about ways to begin this journey and how you can change the way you look at life so that you can increase your self-belief, self-respect and self-confidence, and decrease your stress and anxiety.

Love Your Life Center
SMART MONEY
Geothermal Heating & Cooling System LIKE PLANTING A MONEY TREE IN YOUR YARD
The energy savings from a geothermal heating and cooling system can be quite significant over time. Despite higher up-front costs (which are partially offset by tax credits and utility company rebates), investing in geothermal can easily outperform money markets, mutual funds, and other investment options. Take a look at the chart to the left for an example of how geothermal can be a wise long-term financial investment - kind of like planting a money tree in your yard!
For more information about how a geothermal system can work for you, call R&R Heating & Air Conditioning at 208.765.9355 or visit geosaves.
Can you find the money underground? So can we.
Save big on your heating & cooling with a 400-500% efficient geothermal system.
R&R Heating & Air Conditioning
9030 N Hess St. • Hayden, ID
(208) 765-9355
LIC.# RRHEAA*106BW
SENIOR FOCUS
10 Senior Mealtime Challenges BROUGHT TO YOU BY HOME INSTEAD SENIOR CARE
Research conducted for the Home Instead Senior Care© network reveals 10 mealtime challenges for older adults. The following percentages refer to the number of seniors who believe these are challenges for older people who live alone. After each are tips for how to make the most of mealtimes, from the Home Instead Senior Care network and Sandy Markwood of the National Association of Area Agencies on Aging.

1. Lack of Companionship During Mealtimes (62%) Tip: If you can't be there to dine with a loved one regularly, look for alternative options such as friends and neighbors. Check out special activities at churches and senior centers, as well as a local Area Agency on Aging and Home Instead Senior Care resources.
2. Cooking for One (60%) Tip: Freeze most any type of leftovers, including sliced and seeded fruit, by placing it in plastic containers or freezer bags. Buy your senior healthier low-sodium dinners for one.
3. Eating Nutritious Meals (56%) Tip: Buy fresh, when possible, or frozen foods, including fruits and vegetables. Frequent affordable farmer's markets in season. Your older loved one may enjoy perusing the racks of produce. If your senior is able, help plant a garden.
4. Grocery Shopping for One (56%) Tip: Transportation can be a big issue for seniors. Contact the local Area Agency on Aging and Home Instead Senior Care office, or encourage your loved one to engage neighborhood support systems when possible.
5. Eating Three Meals a Day (49%) Tip: So many seniors are on prescription medications that must be taken with or without food. Coordinate the food plan with the medication plan. "Remember, Dad, to take this pill when you're eating oatmeal for breakfast."
6. High Expense of Cooking for One (45%) Tip: Encourage shared meals when possible; your older loved one will get the benefit of reduced meal costs as well as companionship. Check out your local senior center, which often offers affordable meals for older adults, as well as the home-delivered meals program known as "Meals On Wheels®".
7. Relying Too Much on Convenience Food (43%) Tip: Encourage your older adult to meet with a nutritionist or talk with the doctor to learn how to read labels. So many older adults don't know the foods that are good and bad for them.
8. Loss of Appetite (41%) Tip: Help older adults make mealtimes an event, which can make dining more appealing. Pull out a favorite recipe, help that older adult prepare a meal or get out the good dishes and decorate the table with real or artificial flowers.
9. Eating Too Much Food (38%) Tip: The bigger issue is eating too much of the wrong types of food. If you're helping an older loved one with a shopping list or grocery shopping, encourage healthier choices.
10. Eating Too Little Food (35%) Tip: Plan a trip to a favorite restaurant or prepare a special dish. If lack of food is an ongoing problem, check with your senior's doctor to learn about supplemental products that could ensure an older adult is getting the proper nutrition.

For more information about the National Association of Area Agencies on Aging, go to. Learn about the Home Instead Senior Care network's Craving Companionship program at or contact your local office at 208.415.0366. Each Home Instead Senior Care franchise office is independently owned and operated. © Home Instead Inc. 2012.
Schedule a FREE CONSULTATION! CALL NOW! 208.644.2901
Yes! I want to be flexible, out of pain and active! We specialize in:
• Repetitive Motion Injuries • Headaches & TMJ • Chronic Pain (neck, back, sciatic) • Fibromyalgia • Breast Cancer Recovery • Hand Therapy • Therapeutic Exercise • Manual Therapy, including ADVANCED MYOFASCIAL RELEASE and CRANIOSACRAL TREATMENT
SENIOR FOCUS
Healing Gardens Help Memory Care BROUGHT TO YOU BY COEUR D’ALENE HOMES ASSISTED LIVING AND MEMORY CARE
A local nonprofit assisted living and memory care facility, Coeur d'Alene Homes, has developed its own Horticultural Therapy (HT) garden, which it has named the Serenity Garden. Created by landscape designer Anne Hanenburg of Spokane, the Serenity Garden was designed specifically to meet the needs of the facility's residents. Hanenburg believes that a garden should stimulate all the senses. Eyesight and hearing are often the first senses to diminish as we age; in the Serenity Garden she used plants with memory-evoking fragrances like roses and lavender. The Serenity Garden at Coeur d'Alene Homes also has a large water feature at its core. Water is known for its healing power, and the sound the water feature makes is meant to deliberately calm and soothe. "We have seen amazing results in our Serenity Garden," explains Christel Rosen, Director of Nursing at Coeur d'Alene Homes. "If a resident suffering from Alzheimer's becomes agitated, a walk through the Serenity Garden has a dramatic calming effect on their behavior." Coeur d'Alene Homes hosts an annual Garden Party each May that invites volunteers and families to help residents plant flowers. Throughout the summer months, residents work hands-on with staff to create beautiful bouquets, plant vegetable seeds or pluck deadhead flower tops to encourage new blooms to form.

The History of Horticultural Therapy (HT)
In the 1940s and 1950s, HT was used to care for hospitalized war veterans. The program was such a success that today HT is recognized as a practical and viable treatment with wide-ranging benefits for people in therapeutic, vocational and wellness programs. A more technical definition from the American Horticultural Therapy Association is: "Horticultural Therapy is the use of professionally directed plant, gardening, and nature activities for the purpose of restoring the physical and mental health of its participants."

Benefits of HT
HT is a fantastic therapy for the elderly because it helps to maintain or improve physical health by providing unlimited opportunities for exercising, increasing flexibility, improving coordination and balance, not to mention building physical strength. Multiple studies have demonstrated that respiration, pulse and blood pressure respond positively to plants, helping our aging population keep healthy and strong. In addition, reports show that gardening can possibly prevent the development of dementia. In a study of over 2,000 older people living in France, Fabrigoule et al (1995) showed that those who gardened, travelled or carried out odd jobs or knitting were significantly less likely to develop dementia than those who did not. When HT was specifically used in Alzheimer's research, scientists found that patients with Alzheimer's were less likely to fall or have violent moods, and were often able to function at their highest ability when given access to a garden. If you would like more information about Coeur d'Alene Homes, please call 208.664.8119 or visit.

Honor Someone Special
Create a beautiful garden at Coeur d'Alene Homes and honor someone special. A simple donation of $30 will pay tribute to a loved one, either in memory or in honor of their life. Your donation will help purchase flowers for residents to plant in the Serenity Garden. A small sign will be placed in the flower pot acknowledging your tribute and will be displayed throughout the summer months for the enjoyment of all residents and visitors. Your gift is tax deductible and will be acknowledged in our annual donor honor roll. We currently have apartments available. To find out more, visit or call 664-8119 for information. 624 W. Harrison Avenue, Coeur d'Alene, ID 83814
FINANCIAL FOCUS
Easter and Tax Changes for 2012 MANDATORY ANNUAL INFLATION ADJUSTMENTS
Mandatory annual inflation adjustments generally affect federal income tax brackets, retirement plan contribution limits and exemption levels from year to year. The 3.8% inflation rate (measured by the Consumer Price Index) used to index 2012 tax rates is higher than it was in the previous two years; the adjustments could lower your tax bill on your 2012 return (due in April 2013). Here are some changes that may affect you and your family.

Personal and Dependent Deduction: $3,800 (up $100).

Standard Deduction: $5,950 for single filers and married couples filing separately (up $150); $11,900 for married couples filing jointly (up $300). According to the IRS, almost two out of three taxpayers take the standard deduction rather than itemizing.

Higher-Education Credit Income Thresholds [modified adjusted gross income (AGI)]: Phaseouts start at $52,000 (single filers) and $104,000 (joint filers) for the Lifetime Learning Credit; $80,000 (single filers) and $160,000 (joint filers) for the American Opportunity Tax Credit (formerly the Hope Scholarship Credit).

Federal Estate Tax Exemption: $5,120,000 (up $120,000). The annual gift tax exclusion ($13,000) did not change.*

Retirement Contribution Limits: The annual employee contribution limit for employer-sponsored retirement plans (401k, 403b, 457 plans) increased from $16,500 to $17,000, the first increase since 2009. However, the catch-up contribution for those aged 50 and older remains unchanged at $5,500. The income phaseout limit for deducting contributions to traditional IRAs (for active participants in employer-sponsored retirement plans) rose to $58,000 AGI ($92,000 for joint filers), an increase of $2,000 over 2011. Roth IRA eligibility phaseout limits rose to $110,000 AGI ($173,000 for joint filers), up slightly from 2011.

For additional information on 2012 changes, visit. Of course, before you take any specific action, be sure to consult with your tax professional.

*The federal estate tax exemption is scheduled to fall to $1 million in 2013, unless Congress changes the current tax law. Sources: Internal Revenue Service, 2011.

Is your Retirement Easter/Nest Egg hitting the target you desire? Are you at risk of paying too much in taxes during retirement? To find out, please contact us for your no-cost, no-obligation review: Nathan Thurman, LUTCF, Registered Representative. 1044 Northwest Blvd, Suite 215B, Coeur d'Alene, ID 83814. 877.807.9333, 208.771.1817. Offering securities through SCF Securities, Member FINRA and SIPC, Supervisory Branch, 155 E. Shaw Ave Ste. 102, Fresno, CA 93710, 800.955.2517. SCF Securities, Inc. and Thurman & Associates, Inc. are individually owned and operated.
Thurman and Associates, Inc.
Contact us if you have any questions about our firm or the range of financial products and services we provide. Our firm has a relationship with a variety of financial services companies, so if we don't have a product or service, we know a group that does. Offering securities through SCF Securities, Member FINRA and SIPC. Supervisory Branch | 155 East Shaw Avenue, Suite 102 | Fresno, California 93710 | 800.955.2517
Our desire is to understand your financial goals and carve out a strategy that provides you balance and direction for your life. At Thurman and Associates, we are committed to maintaining the highest standard of integrity and professionalism.
Visit our website at.
1044 Northwest Boulevard, Suite 215B | Coeur d'Alene, Idaho | 877.807.9333 | [email protected]
FINANCIAL FOCUS
What a Standard Homeowner’s Insurance Policy Covers BROUGHT TO YOU BY HARRIS/DEAN WESTERN STATES INSURANCE
When shopping for home insurance, you should know about the six types of home insurance coverage offered in a standard policy:

1. Coverage A - Dwelling
2. Coverage B - Other Structures on Your Property
3. Coverage C - Personal Property/Contents
4. Coverage D - Loss of Use
5. Coverage E - Personal Liability Protection
6. Coverage F - Medical Payments

Other Structures protected by a standard home insurance policy are detached garages and other detached buildings on your property. The typical coverage for other structures is 10% of your dwelling coverage, although higher amounts may be purchased if necessary. Personal Property coverage is included in a standard home insurance policy and protects your personal items and household contents in the event they are stolen or destroyed by fire, hurricane or another peril covered in your policy. These items may include, but are not limited to, furniture, clothing, and sports equipment.

Have other questions about your homeowner's insurance policy, or would you like a free competitive quote? Contact Matt Hague at Harris/Dean-Western States Insurance. [email protected]. 208.667.9406.
How CLEAR is your current Marketing Strategy?
Enter to win a FREE Custom Facebook Welcome Page
HAPPY EASTER
Happy Easter! EASTER CELEBRATIONS AROUND THE WORLD
Somehow, the miracle of the butterfly never loses its fascination for us, perhaps because the butterfly is a living parable of the promise of resurrection. Around the world, millions of people celebrate Easter. Here's a window into some of the cultures and traditions around the world that take place every Easter season:

ARGENTINA
Easter Sunday in Argentina consists of consuming eggs as well as the special Easter cake, Rosca de Pascua. People exchange eggs not only with their family, but also with friends and colleagues, and the day culminates in attending mass followed by a big family gathering involving lots of food, often a huge barbecue.

GREECE
On Good Friday in Athens, a replica of Christ's tomb is carried through town. People flock to churches at midnight on Saturday carrying unlit candles that they light from the Holy Flame, then walk through town enjoying a glorious display of fireworks, bells and jubilation. Easter Sunday's menu features spit-fire roast lamb and lots of colored eggs.

ITALY
While Easter mass will be held in every church in Italy, the biggest and most popular mass is held by the Pope at St. Peter's Basilica.

SCOTLAND
Easter in Scotland is a mostly laid-back event featuring traditional things like attending mass and having a big meal, but Scots also add a bit of fun for the kids: after eggs are boiled and painted in all kinds of colors and designs, they're taken to park hills for rolling on Easter Sunday. The event symbolizes the rolling away of stones on Jesus' tomb, thereby assisting in His resurrection.

SEVILLE, SPAIN

SWEDEN
Humour-filled celebrations commence on Easter Saturday with children dressing up as good witches and giving out letters and cards in return for eggs, sweets and coins. On Easter Sunday, food takes centre stage where, in typically Nordic fashion, the feast comprises mostly fish. Edibles include different kinds of herring, smoked salmon, a hint of roast ham and various cheeses. Eggs are exchanged and used in a game where participants roll them down roofing tiles to see which can go the farthest without breaking.

FRANCE
Church bells ring every day of the year except for the three days of Easter. Legend has it that the bells stop ringing because they've made a trip to Rome in order to be blessed. On Easter Sunday, the bells make their return and tour the country sprinkling chocolate eggs, chickens and rabbits in every garden. After midday, children head to the gardens to find the hidden treasures left by the bells. The day includes a hearty meal, normally consisting of lamb, the Easter dish of choice.

BRAZIL
Brazil has the largest Catholic population in the world. Holy Week - Semana Santa in Portuguese - is observed throughout the country with processions and rituals similar to those of other Catholic countries, yet made unique by the specific context in which they happen.

Britches and Bows Childcare Center
Exceptional – Affordable. Newly Remodeled! No Enrollment Fees. No Up-Front Costs.
Britches and Bows Childcare Center is freshly remodeled and has a certified preschool teacher who teaches children through activities including reading, writing, songs, and arts and crafts! Happy Easter from Britches and Bows!
Coeur d'Alene • 208.765.9580 | Grangeville • 208.983.6089 | Post Falls • 208.457.1165
Vac Shack
Supplies • Service • Repairs • Parts. All makes and models.
Free pick up and delivery.
Josh: 208.818.7483
1301 E Sherman Ave, Coeur d'Alene, ID 83814 | [email protected]
PAW PRINTS
Pets Can Greatly Improve Your Health A HEALTHY INVESTMENT FOR LONG TERM HAPPINESS
Having a pet is one of the healthiest investments you can make in your long-term health and happiness. We know that having a pet enriches our lives; pets have been known to improve the lives of their owners in significant ways. Many of the benefits of having a pet are less tangible. Pets allow for physical contact and offer consistent companionship, as well as unconditional love. People with pets generally remain more stable emotionally during crises than people without pets. Pets also offer protection from social isolation and separation anxiety for people in nursing homes, and for people who don't have as much opportunity to interact with other people.

GoodDog
We believe in quality foods, treats, toys and apparel that are both safe and beneficial for your pet! We also carry quality cat foods! We carry the following brands: Orijen/Acana, Nature's Logic, Fromms, Stella and Chewy's, Instinct, PureVita, Taste of the Wild, Weruva, TikiDog, TikiCat, Merrick, Kong, Angel Eyes/Plaque Off, Puppia, Nylabone and more! Coeur d'Alene—3115 North Government Way 208.664.4364
Duncan’s Pet Shop
Your neighborhood pet shop. We have over 25 years experience and are family owned and operated. Our goal is to ensure your pets, reptiles and fish have a happy, healthy life. As a full line pet shop we offer fish, birds, reptiles, pet supplies and small pocket pets. We also carry premium pet foods featuring NATURAL BALANCE. Coeur d’Alene—1302 North Government Way 208.667.0618
Lake City Pet Hospital
If you live in Coeur d'Alene or the surrounding area in the Inland Northwest, then you have picked the perfect site to find a veterinarian. Dr. Amoreena Sijan is a licensed veterinarian, treating all types of pets and animals. Your pet's health and well-being is very important to us, and we will take every step to give your pet the best possible care. Lake City Pet Hospital is a full-service animal hospital and will take both emergency cases as well as less urgent medical, surgical and dental issues. Dr. Sijan is experienced in all types of conditions and treatments. Beyond first-rate pet care, we make our clinic comfortable, kid-friendly and a very calm environment, so your pet can relax in the waiting room and look forward to meeting his or her own Coeur d'Alene veterinarian. Coeur d'Alene—902 Lincoln Way 208.664.5629
Paws and Claws Pet Resort
Canine social club, feline parlour and bathhouse. Whether your pet is with us for one day, one week or a month, we’re genuinely committed to treating all pets with respect, kindness and what we do best, shower them with LOVE! For all you cat lovers, please check out our new and delightful cat room. Coeur d’Alene—2900 North Government Way 208.667.6700
LaundraMUTT Do It Yourself Dog Wash
We provide everything but the dirty dog, and we clean up the mess. Self service is first come, first served, for $15.00 per dog, plus a $5.00 nail trim if needed. By appointment we also offer full-service dog and cat grooming. We love dogs of all sizes. Do It Yourself hours: Tuesday-Saturday, 10am-5pm. Grooming by appointment beginning at 8am, Tuesday-Saturday. Coeur d'Alene—2900 North Government Way 208.676.8828
BUSINESS SPOTLIGHT
Change Your Mind OWNER KIMBERLEY A. BIRKHIMER, CNT
Neurofeedback (NFB), also called EEG biofeedback, is an alternative natural therapy for both adults and children. NFB is non-invasive, non-pharmaceutical, and has a lengthy track record of use by sports professionals, therapists, and even government agencies (NASA used it in the 1960s to train astronauts for peak performance). In recent years, interest in and use of NFB has seen a resurgence due to scientific discoveries regarding brain plasticity. Research now shows that the brain can be trained to rewire itself for sharper mental acuity and more efficient performance. The result is a better quality of life. NFB offers a way to directly achieve this rewiring through brainwave training, and with almost no effort required on the part of the trainee. Whether issues are the result of diagnosed disorders or injuries, cognitive deterioration due to aging (including menopause), or simply the outcome of poor sleep patterns, performance anxiety and job stress, NFB has been clinically shown to train brain waves to find more efficient neural pathways and thus bring lasting relief. Most subjects (95%) achieve benefits that restore quality of life and peak performance, often very quickly, without the aid of drugs, surgery or continuous medical care. Veterans and their families in particular can benefit from NFB treatment. The stressors of war take a toll, not only on the returning vet, but also on his or her family. Issues such as nightmares, irritability, impatience, anxiety, panic attacks, hyper-vigilance, and alcohol/drug abuse often conspire to undermine the veteran's ability to reintegrate into life at home. These issues can quickly cause problems on a number of fronts, from family interactions and work performance to impulse control and the ability to function effectively in society.
Needless to say, the result can be devastating. Neurofeedback training can teach the brain to dramatically reduce these effects, often within five one-hour sessions (though in these circumstances 20-40 sessions are often recommended to receive lasting benefits). Even young, fit individuals can use NFB to improve performance. Numerous Olympic and professional athletes, from skiers to soccer players, use NFB to prepare for competition. Neurofeedback's ability to teach athletes to improve their focus allows them to get and stay "in the zone." In fact, members of the four-time World Cup-winning Italian soccer team called neurofeedback their "secret weapon"! For more information, visit or call Kim Birkhimer at 208.610.3183.
HIGH SCHOOL SPORTS
Distinguished Young Women of Coeur d’Alene JUNIOR MISS COMPETITION
Distinguished Young Women of Coeur d'Alene will be held in Coeur d'Alene, Idaho, where a panel of five judges will select the representative(s) who will move on to the next level of competition at Distinguished Young Women of Idaho. The young woman selected as the state representative will then compete in the annual National Finals competition in Mobile, Alabama. State representatives from all 50 states will travel to the historic port city of Mobile for the 55th Annual Distinguished Young Women National Finals, June 28-30, 2012.

Mission
To positively impact the lives of young women by providing a transformative experience that promotes and rewards scholarship, leadership and talent. Distinguished Young Women is a national scholarship program that inspires high school girls to develop their full, individual potential through a fun, transformative experience that culminates in a celebratory showcase of their accomplishments.
• By encouraging continued education and providing college scholarships.
• By developing self-confidence and the abilities to interview effectively, to speak in public, to perform on stage and to build interpersonal relationships.
• By encouraging and showcasing excellence in academic achievement, physical fitness, on-stage performance skills, and the ability to think and communicate clearly.
• By creating opportunities to beneficially inspire the lives of others.

More Information
Please contact Kim Washko at [email protected] for further information.

Join Us
Do you want to be a part of the Distinguished Young Women family and help young women all over the country achieve their dreams? If so, please contact the Distinguished Young Women National Office at 800.256.5435 or email us today. Please visit our website.

[Photos: Lake City High School; Coeur d'Alene High School]

How effective is your current Marketing Strategy? Enter to win a FREE Custom Facebook Welcome Page
Content and Photos Provided by Distinguished Young Women:
Say "I Do" to Our 90 Day Bridal Challenge
Have you heard about the "90 Day Challenge"? Transform Your Body! Save money while you LOSE WEIGHT... two shakes a day and the pounds melt away!
Katie Metz: (208) 964-0858 | katiemetz.bodybyvi.com
David Darrar: 208.640.9855 | daviddarrar.bodybyvi.com
Carla Darrar: (208) 640-6036 | carladarrar.bodybyvi.com
Best Restaurants of 2012
DINING GUIDE
Steaks • Seafood • Chicken • Wild Game. Open Wednesday through Sunday, 6am-9pm; seasonal hours Monday & Tuesday.
Daily Breakfast Specials until 3pm!
Breakfast, Lunch and Dinner
615 North Spokane Street | Post Falls, Idaho | 208.777.9388
t s e B s Area’ Eats! Local
DINING GUIDE
L AND SUS H
I
Forty-One South
Shoga Sushi Bar
Fisherman’s Market
Las Chavelas
Mizuna
GW Hunters
Calypsos Coffee & Creamery
Angelo’s Ristorante & Catering. Coeur d’Alene—215 West Kathleen 208.664.4800 $/$$
Enjoy organic meat, local produce, an entirely separate vegetarian menu prepared by skilled chefs and a full bar specializing in fine wine and martinis. We also offer alleyway, patio dining during summer months. Hours are Mon-Sat, 11am-10pm and Sun, 4pm-10pm. Mizuna.com. Spokane—214 North Howard Street 509.747.2004 $
At Calpsos you’ll find a combination of amazing coffee, which they roast on-site, ice cream, fantastic food and live music on a regular basis. They display artwork from local artists, offer free wi-fi, have a play area for the kids and also offer a Smart Room for meeting rentals!. Coeur d’Alene —116 East Lakeside Avenue 208.665.0591 $ 22
Scan the QR Code for a Map of All Locations!
Trinity at City Beach
A beautiful waterfront, fine-dining restaurant in a romantic lodge setting overlooking Lake Pend Oreille. Spectacular sunsets, innovative cuisine, full bar and extensive wine list. Reservations recommended.. 41 Lakeshore Drive, Sagle, ID 208.265.2000 $$/$$$
G RIL
$ - Less than $10 $$ - $9-$20 $$$ - $16 and up - Wi-Fi Available
Tomato Street
Voted North ID’s #1 Italian Restaurant 3 consecutive years in a row. Said to have the best kids meal in town. Distinctive and entertaining atmosphere for everyone; using a wood fired oven to bring back many memories of the past. Beer, wine, full bar. 2012 BEST ITALIAN RESTAURANT.. Coeur d’Alene—221 West Appleway 208.667.5000 $/$$
AN’S MARK HERM ET FIS
Price of Entree for One Person
Sandpoint’s premier waterfront dining offers an extensive menu of American cuisine with an impressive wine list. Featuring a full service bar and beautiful views of Lake Pend Oreille.. Sandpoint —56 Bridge Street 208.255.7558 $/$$/$$$
Forty-One South brings sushi back to Sandpoint. Opening in May. Delicious sushi and Japanese cuisine. Beautiful, waterfront dining with spectacular sunset views. Professional and courteous service. Dinner 7 nights a week and lunch Mon-Fri. 41 Lakeshore Drive, Sagle, ID 208.265.2001 $/$/$$$
Las Chavelas is the home of the REAL Mexican food. We pride ourselves in serving to our customers authentic mexican food. We do lunch, dinner and catering for ANY size party and any number of people. Come and in and say, “Hello...” and stay awhile, have something to eat and enjoy some good conversation. Don’t forget about our Monday night football event!. Coeur d’Alene—296 West Sunset Avenue 208.664.3767 $/$$
Our mission is to bring customers “The best quality foods and service, at a fair price.” We only buy the freshest ingredients and cook every meal to order and we always strive for perfection. Breakfast, lunch and dinner.. Post Falls—615 North Spokane Street 208.777.9388 $/$$/$$$
Coeur d'Alene—846 North Fourth Street 208.765.2850 $/$$/$$$
“The Icing on the Lake!” lounge
Reservations Recommended
Live music, delicious food, fun cocktails, and beautiful sunsets.
41 Lakeshore Drive, Sagle, Idaho | 208.265.2000 | Find us on Facebook for updates!
Visit us during Lost in the 50s! May 17-20, 2012 in Sandpoint, Idaho.
Opening in May! Waterfront, Outdoor Dining. Beautiful Atmosphere. Courteous and Professional Service.
CALENDAR OF EVENTS
EVENTS & ACTIVITIES
Apr 14 - Creative Spirits Art Auction/Wine & Microbrew Tasting. Located at Blanchard Community Center, just off of Highway 41 on 685 Rusho St. Event begins at 6pm. Call 208.437.4072 for more details.
April 14, 15 - World Jet Boat Races. Weaver Seed World Jet Boat River Marathon begins on the St. Joe River at St. Maries and features a Show & Shine Boat at St. Maries Lower City Park in the Cormana Building at 6:30pm. There will be a fireworks show at dusk and a downtown celebration!
April 21 - 2nd Annual Walk for Autism. Located at the Coeur d'Alene Riverstone Park, sign up and walk with a purpose. For more details and registration forms go to panhandleautismsociety.com.
April 22 - Spring Dash. Bloomsday community 5-mile run starts at 10am at Bank of America, 401 Front St. in downtown Coeur d'Alene. All proceeds benefit the United Way of Kootenai County. For more information, go to kootenaiunitedway.org.
April 26 - NIC Career Workshop. This free workshop, Design Your Ideal Career, helps get a jump start on choosing a career direction. You will select the skills you would most like to practice in a job if you could truly do anything you dreamed. Open to the public. Sign up by calling NIC Career Services at 208.769.3297.
SHOWS/MUSIC/ARTS
April 13, 14, 15 - Annual Artist Showcase. Starts Friday as a Featured Exhibit in the Coeur d'Alene ArtWalk. Presented by the Coeur d'Alene Art Association (CAA). Guest artists painting, entertainment, and raffles. Opportunity to win original paintings and a $200 gift certificate towards art. Enjoy demos and informal visits with local artist members of CAA. Friday 10am-8pm, Saturday 10am-6pm, Sunday 10am-5pm at the Coeur d'Alene Resort Plaza Shops. Call 208.676.9132, or go to coeurdaleneartassoc.org.
April 13 - Art Walk. Held in downtown Coeur d'Alene the second Friday of every month now through November. Visit local galleries, enjoy wine & small tastes at each one, and many have live music. Being held from 5pm-8pm.
April 13, 14 - Indoor Garage Sale & Flea Market. Located at Greyhound Park & Event Center on 5100 W. Riverbend Ave., Post Falls, ID. Gigantic variety of items, easy access and free parking. Lots of food, drinks and fun! Open from 9am-4pm.
April 14 - Extravaganza Wine Tasting. At participating stores throughout the downtown Coeur d'Alene area. Only $15 and includes a complimentary wine glass. Wrist bands will go on sale about 2:30pm in the center area of the Plaza Shops located at 210 E. Sherman. Portions of the proceeds go to 3C's Charity.
April 14, 15 - Spring Concert. Northwest Sacred Music Chorale Spring Concert begins at 7pm on Saturday and at 3pm on Sunday. Located at the Kroc Center in Coeur d'Alene; for more detail visit nwsmc.org.
April 14, 15 - Rubber Stamp Show. This event is located at the Kootenai County Fairgrounds. This is for all paper, scrapbook and rubberstamp fans. Enthusiasts from Idaho, Canada, Montana and Washington attend this show filled with free demos on a range of techniques. Admission is $5. For workshops and joining sellers go to northidahorubberstamp.com.
April 20 - Late Night Blues. A club atmosphere is taking over The JACC in April. Late Night Blues will feature blues, drinks and dancing. Starts 8:00pm to midnight; tickets are $10. Located at Jacklin Arts & Cultural Center, 405 N William St, Post Falls, ID 83854. Call 208.457.8950 or go to thejacklincenter.org for more information.
April 20 - Lost in Yonkers. A Broadway production written by Neil Simon. The show is located at Lake City Playhouse, 1320 East Garden Avenue, Coeur d'Alene. Call 208.667.1323 or go to LakeCityPlayhouse.org for more information.
April 27 - Music Walk. Concert venue is downtown Coeur d'Alene. Sample the wonderful variety of music that North Idaho has to offer. Whether you prefer jazz or rock, classical or pop, there is something for everybody. Contributing downtown businesses and restaurants are open extended hours and 2-hour parking is always free. A free, family-friendly event!
April 27 - Spring Choral Concert. Begins at 7:30pm at the Boswell Hall Schuler Performing Arts Center on the NIC campus. Featuring the NIC Cardinal Chorale and NIC Chamber Singers.
April 28 - Bikini Championship Body Building. Prejudging begins at 9am and finals are at 6pm. Tickets are $35 at the door or $20 at Northern Quest Resort and Casino, Airway Heights GNC, Gold's Gym and spokanebodybuilding.com.
April 29 - Opera Coeur d'Alene Gala. An evening filled with great food, wine, and music! Proceeds for this event support our fall opera. Located at the Coeur d'Alene Resort, 5pm-9pm. Cost is $100 per person. For reservation information, call 208.765.3723.

Real Movers • Real Trucks • Real Low Prices
ALL SEASONS MOVING, INC. Residential and Commercial Moves. Out of State and Local Moves. Business to Business Moves. Call 208.699.6538
THE AREA'S HOTTEST SPOTS
Razzle's Bar and Grill. Pool and darts. Daily drink special 5-7pm. 21+ only and Wi-Fi. 10325 Government Way, Hayden.
Calypsos Coffee & Creamery. 116 East Lakeside Avenue, CDA. calypsoscoffee.com. 208.665.2464.
DOC HOLIDAY'S Saloon and Grill. 9510 Government Way, Hayden. 208.449.1562.
315 Martinis And Tapas. 315 Wallace Street, CDA. 315MartinisAndTapas.com. 208.667.9660.
Kelly's Irish Pub. 726 North 4th Street, CDA. 570.645.2000.
Moon Dollars. Twin Lakes Village. 5416 West Village Boulevard, Rathdrum. moondollarsresturaunt.com. 208.777.7040.
The Wine Cellar. 313 East Sherman Avenue, CDA. TheWineCellarCDA.com.
Moon Time. 1602 East Sherman Avenue, CDA. 208.667.2331.
Gig's Landing. 204 South Coeur d'Alene Avenue, CDA. 208.667.9600.
Java On Sherman. 324 East Sherman Avenue, CDA. 208.667.1717.
The Fedora Pub & Grille. 37914 South Kathleen Avenue, CDA. fedorapubandgrille.com. 208.765.8888.

May 1 - Close Encounters of the Swingin' Kind. Begins at 7:30pm and located at Boswell Hall Schuler Performing Arts Center. Features the NIC Jazz Ensemble and NIC Jazz Co.
May 4 - Chamber's Annual Spring Golf Tournament. Located at the Coeur d'Alene Resort Golf Course. Shotgun starts at 1:30pm. This is a four man scramble. For more details call Brenda Young at 208.415.0110 or email Brenda@CdAChamber.com.
May 11, 12 - Hamlet. Readers Theater! Show starts at 7:30pm at the Lake City Playhouse at 1320 East Garden Avenue, Coeur d'Alene. Call 208.667.1323 or go to LakeCityPlayhouse.org.
May 13 - Mother's Day Concert in the Park. Music starts at 2pm at the Coeur d'Alene City Park. The concert features the NIC Wind Symphony and NIC Chamber Singers.

Spring is in the air! Downtown Sandpoint Shopping Extravaganza! Saturday May 5, 2012. Fashion Show, Chocolate Walk, In-Store Specials, Shopping Passport, Food & Drink Tastings, Live Music in 4 locations! Kids' Activities, Magician, Live Alpaca ...and lots more great activities throughout downtown! Learn more & join the fun at SandpointShoppingDistrict.com
ACTIVITIES AND FUN

"FRUIT THIEVES!"
"Yes, we did. We ate some when we got hungry," they said.
The farmer replied, "Okay, here is your punishment. I want each of you to go pick ten of your favorite fruit and come back to me."
The men couldn't believe their ears. This seemed more like a reward than a punishment! After fifteen minutes, the first thief came back with ten cherries.

JUST LOOKING AROUND
A blind man walks into a store with his seeing eye dog. All of a sudden, he picks up the leash and begins swinging the dog over his head. The manager runs up to the man and asks, "What are you doing?!" The blind man replies, "Just looking around."
SUDOKU SPRING WORD FIND
How effective is your current Marketing Strategy?
Enter to win a FREE Custom Facebook Welcome Page
The Best Kept Secret... ...is in SANDPOINT! Yes, we deliver to Coeur d'Alene!

FREE SQUARE FOOT OF CARPET — CARPET AS STAIN-RESISTANT AS IT IS SOFT. "I really wanted a soft, comfortable carpet, but my husband works outside all day. So I knew I needed one durable enough to weather stains, too." Our ULTRA-PROTECTIVE microfiber technology weaves softness and strength together. Scan this code with a QR reader from your mobile app store or visit tigressatales.com.

Sofas starting at $499. Enjoy comfort in a crisp, lean style - and at a great price. Sandpoint Furniture offers 65-store buying power through their Pacific Furniture Dealers Association.

(Above) This sofa proves you can combine casual and sophisticated to create a new, exciting look. The sofa features rolled/sloped arms, wood feet, and fun accent pillows...a real room pleaser. Available as a queen sleeper and in special order fabrics. Lifetime warranty. Was $769...NOW $599.

(Above) The scrumptious plush fabric makes this sectional as incredibly comfortable as it is stylish. The handsome chaise can be moved to either side, making it a flexible fit for any room. Special order fabrics are available. Lifetime warranty. Was $2199...NOW $1599.

(Above) Imagine soft chenille in a lustrous red on this outstanding contemporary sofa. No need to imagine...it's here now, in fabric with decorative throw pillows. Available as a queen sleeper and in special order fabrics. Lifetime warranty. Was $879...NOW $699.

(Above) This sectional combines function and comfort beautifully, making it a perfect choice for a family room or gathering area. The sofa comes with two recliners plus a functional console; the loveseat has two gliding recliners. Was $2899...NOW $2219.

Up to 50% bonus savings! 36-month special financing — select Tigressá and hard surface products only.* HURRY — SALE ENDS APRIL 15! VISIT TIGRESSATALES.COM

Making your house a home for 66 Years! 263-5138
401 Bonner Mall Way, Ponderay, ID

* Subject to credit approval. See store for details. At participating stores only; not all products at all locations. Photos for illustrative purposes only. Not responsible for typographical errors. Offer ends 04/15/12. Offers cannot be combined with other discounts.
What We Do:
Home Checks • Handyman Services Snow Removal • Yard Maintenance Housekeeping and much more!
Benefits of Our Services:
• Reducing energy consumption and utility bills • Maximizing the life of your home's components, equipment and systems • Eliminating preventable breakdowns of your home's components • Avoiding damages and restoration cost due to equipment or system failures • Lowering overall repair costs
Call us today to let us know how DSS Home Preservation Service can work for you!
208.676.1222
Nov 30, 2007 11:28 PM|fhirzall|LINK
Hi everyone,
How can I embed regular html/plain text content returned from my ashx handler into a page?
I know that when I use ASHX handlers to get images, I just throw in an <img> tag with the src set to the handler, what do I do to embed plain text content?
Thank you,
-Feras
ashx embed content
Member
220 Points
Dec 01, 2007 12:31 AM|TheDarkKnight|LINK
Use Response.Write to belch out html/plain text.
C#
using System.Web;

public class TextHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        //Tell the browser what is coming back.
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
VB works the same way. More resources on this MSDN page.
Dec 01, 2007 12:45 PM|fhirzall|LINK
Hi DarkNight,
How would you then embed that handler into another web page? I know that if you just request it in the browser by typing in its URL it would work, but I'm trying to do something similar to the way I did it when returning images (referencing it with an image tag from the requesting page)
Thanks for your help!
Member
220 Points
Dec 01, 2007 01:09 PM|TheDarkKnight|LINK
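You don't need anything image-like for text. One option is to fetch the handler's output on the client and inject it into the page -- a sketch, where the div id and handler path are placeholders:

<div id="handlerOutput"></div>
<script type="text/javascript">
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            // Drop the handler's response into the placeholder div.
            document.getElementById("handlerOutput").innerHTML = xhr.responseText;
        }
    };
    xhr.open("GET", "TextHandler.ashx", true);
    xhr.send();
</script>

Server-side, you could instead call the same logic directly from the page and assign the result to a Literal control, which avoids the extra HTTP request.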
A CloudKit.Database object represents a public or private database in an app container.
Language
- JavaScript
SDK
- CloudKit JS 1.0+
Overview
Each container has a public database whose data is accessible to all users and, if the current user is signed in, a private database whose data is accessible only by the current user. A database object applies operations to records, subscriptions, and zones within a database.
You do not create database objects yourself, nor should you subclass the CloudKit.Database class. You get a database object using either the publicCloudDatabase or privateCloudDatabase properties of the CloudKit.Container class. You get a CloudKit.Container object using methods in the CloudKit namespace. For example, use CloudKit.getDefaultContainer to get the default container object.
var container = CloudKit.getDefaultContainer();
var publicDatabase = container.publicCloudDatabase;
var privateDatabase = container.privateCloudDatabase;
Read access to the public database doesn't require that the user sign in. Your web app may fetch records and perform queries on the public database, but by default your app may not save changes to the public database without a signed-in user. Access to the private database requires that the user sign in. To determine whether a user is authenticated, see setUpAuth in CloudKit.Container.
The asynchronous methods in this class return a
Promise object that resolves when the operation completes or is rejected due to an error. For a description of the
Promise class returned by these methods, go to Mozilla Developer Network: Promise.
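For instance, querying the public database returns a Promise that resolves to a response object once the server replies. A minimal sketch — the 'Article' record type is a placeholder for a type defined in your schema:

publicDatabase.performQuery({ recordType: 'Article' })
    .then(function (response) {
        if (response.hasErrors) {
            // Inspect the errors to decide how to recover.
            throw response.errors[0];
        }
        response.records.forEach(function (record) {
            console.log(record.recordName);
        });
    });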
This class is similar to the
CKDatabase class in the CloudKit framework.
Creating Your Schema
Before you can access records, you must create a schema from your native app using just-in-time schema (see Creating a Database Schema by Saving Records) or using CloudKit Dashboard (see Using CloudKit Dashboard to Manage Databases). Use CloudKit Dashboard to verify that the record types and fields appear in your app’s containers before you test your JavaScript code.
| https://developer.apple.com/reference/cloudkitjs/cloudkit.database | CC-MAIN-2017-22 | refinedweb | 314 | 55.95 |
23 February 2011 10:42 [Source: ICIS news]
SINGAPORE (ICIS)--Brent crude prices rose by more than $1/bbl (€0.73/bbl) on Wednesday on heightened concerns that the escalating unrest in Libya would disrupt crude supplies.
At 09:20 GMT, April Brent on the ICE futures exchange was trading more than $1/bbl above the previous close.
April NYMEX light-sweet crude futures were trading at $96.06/bbl up 64 cents/bbl from the previous close. Earlier, the contract hit $96.25/bbl - the highest level since early October 2008.
On Monday, the political turmoil in Libya had already pushed crude futures sharply higher.

The ongoing violence in Libya has stoked fears of prolonged disruptions to the country's crude output.

On Tuesday, European oil companies ENI and Repsol halted oil production in Libya.

ENI has temporarily stopped some oil and gas operations, including flows of gas through the Greenstream gas pipeline from Libya to Italy.

Earlier this week, Wintershall, the oil and gas exploration subsidiary of BASF, announced that it would halt its oil production in Libya.
In a statement on its website, the IEA said it “…stands ready, as always, to make oil available to the market in the event of a major supply disruption if alternative supplies cannot readily be made available via normal market mechanisms”.
“At present, we are not in a situation where that is necessary. However, we are monitoring the situation closely and on an ongoing basis,” the organisation added.
According to IEA data, Europe receives over 85% of Libya's crude oil exports.
An additional 150,000 bbl/day of crude oil flows to China.
Welcome to the world of exotic flow control. With Python 2.2 (now in its third alpha release -- see Resources later in this article), programmers will get some new options for making programs tick that were not available -- or at least not as convenient -- in earlier Python versions.
While what Python 2.2 gives us is not quite as mind-melting as the full continuations and microthreads that are possible in Stackless Python, generators and iterators do something a bit different from traditional functions and classes.
Let's consider iterators first, since they are simpler to understand.
Basically, an iterator is an object that has a
.next() method. Well, that's not quite true; but it's close.
Actually, most iterator contexts want an object that will generate an
iterator when the new
iter() built-in function is applied to
it. To have a user-defined class (that has the requisite
.next() method) return an iterator, you need to have an
__iter__() method return
self. The examples will
make this all clear. An iterator's
.next() method might
decide to raise a
StopIteration exception if the iteration
has a logical termination.
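Before writing a custom class, you can watch the protocol work by asking a built-in list for its iterator and calling .next() by hand -- a quick sketch:

>>> lst = [1, 2, 3]
>>> it = iter(lst)   # ask the list for an iterator
>>> it.next()
1
>>> it.next()
2
>>> it.next()
3
>>> it.next()        # the iterator is exhausted
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
StopIteration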
A generator is a little more complicated and general. But the most typical use of generators will be for defining iterators; so some of the subtlety is not always worth worrying about. A generator is a function that remembers the point in the function body where it last returned. Calling a generator function a second (or nth) time jumps into the middle of the function, with all local variables intact from the last invocation.
In some ways, a generator is like the closures which were discussed in previous installments of this column discussing functional programming (see Resources). Like a closure, a generator "remembers" the state of its data. But a generator goes a bit further than a closure: a generator also "remembers" its position within flow-control constructs (which, in imperative programming, is something more than just data values). Continuations are still more general since they let you jump arbitrarily between execution frames, rather than returning always to the immediate caller's context (as a generator does).
Fortunately, using a generator is much less work than understanding all the conceptual issues of program flow and state. In fact, after very little practice, generators seem as obvious as ordinary functions.
Taking a random walk
Let's consider a fairly simple problem that we can solve in several ways -- both new and old. Suppose we want a stream of positive random numbers less than one that obey a backward-looking constraint. Specifically, we want each successive number to be at least 0.4 more or less than the last one. Moreover, the stream itself is not infinite, but rather ends after a random number of steps. For the examples, we will simply end the stream when a number less than 0.1 is produced. The constraints described are a bit like one might find in a "random walk" algorithm, with the end condition resembling a "satisficing" or "local minimum" result -- but certainly the requirements are simpler than most real-world ones.
In Python 2.1 or earlier, we have a few approaches to solving our problem. One approach is to simply produce and return a list of numbers in the stream. This might look like:
import random

def randomwalk_list():
    last, rand = 1, random.random() # init candidate elements
    nums = []                       # empty list
    while rand > 0.1:               # threshhold terminator
        if abs(last-rand) >= 0.4:   # accept the number
            last = rand
            nums.append(rand)       # add latest candidate to nums
        else:
            print '*',              # display the rejection
        rand = random.random()      # new candidate
    nums.append(rand)               # add the final small element
    return nums
Utilizing this function is as simple as:
for num in randomwalk_list(): print num,
There are a few notable limitations to the above approach. The specific
example is exceedingly unlikely to produce huge lists; but just by making
the threshhold terminator more stringent, we could create arbitrarily
large streams (of random exact size, but of anticipatable
order-of-magnitude). At a certain point, memory and performance issues can
make this approach undesirable and unnecessary. This same concern got
xrange() and
xreadlines() added to Python in
earlier versions. More significantly, many streams depend on external
events, and yet should be processed as each element is available. For
example, a stream might listen to a port, or wait for user inputs. Trying
to create a complete list out of the stream is simply not an option in
these cases.
One trick available in Python 2.1 and earlier is to use a "static" function-local variable to remember things about the last invocation of a function. Obviously, global variables could do the same job, but they cause the familiar problems with pollution of the global namespace, and allow mistakes due to non-locality. You might be surprised here if you are unfamiliar with the trick--Python does not have an "official" static scoping declaration. However, if named parameters are given mutable default values, the parameters can act as persistent memories of previous invocations. Lists, specifically, are handy mutable objects that can conveniently even hold multiple values.
Using a "static" approach, we can write a function like:
import random

def randomwalk_static(last=[1]):    # init the "static" var(s)
    rand = random.random()          # init a candidate value
    if last[0] < 0.1:               # threshhold terminator
        return None                 # end-of-stream flag
    while abs(last[0]-rand) < 0.4:  # look for usable candidate
        print '*',                  # display the rejection
        rand = random.random()      # new candidate
    last[0] = rand                  # update the "static" var
    return rand
This function is quite memory friendly. All it needs to remember is one previous value, and all it returns is a single number (not a big list of them). And a function similar to this could return successive values that depend (partly or wholly) on external events. On the down side, utilizing this function is somewhat less concise, and considerably less elegant:
num = randomwalk_static()
while num is not None:
    print num,
    num = randomwalk_static()
New ways of walking
"Under the hood", Python 2.2 sequences are all iterators. The familiar
Python idiom
for elem in lst: now actually asks
lst to produce an iterator. The
for loop then
repeatedly calls the
.next() method of this iterator until it
encounters a
StopIteration exception. Luckily, Python
programmers do not need to know what is happening here, since all the
familiar built-in types produce their iterators automatically. In fact,
now dictionaries have the methods
.iterkeys(),
.iteritems(), and
.itervalues() to produce
iterators; the first is what gets used in the new idiom
for key in dct:. Likewise, the new idiom
for line in file: is supported via an iterator that calls
.readline().
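In other words, looping over a dictionary quietly calls .iterkeys() behind the scenes; the two loops in this sketch do the same work:

dct = {'spam': 1, 'eggs': 2}
for key in dct:              # implicitly uses dct.iterkeys()
    print key, dct[key]
for key in dct.iterkeys():   # the explicit spelling
    print key, dct[key]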
But given what is actually happening within the Python interpreter, it
becomes obvious to use custom classes that produce their own iterators
rather than exclusively use the iterators of built-in types. A custom
class that enables both the direct usage of
randomwalk_list()
and the element-at-a-time parsimony of
randomwalk_static is
straightforward:
import random

class randomwalk_iter:
    def __init__(self):
        self.last = 1                       # init the prior value
        self.rand = random.random()         # init a candidate value
    def __iter__(self):
        return self                         # simplest iterator creation
    def next(self):
        if self.rand < 0.1:                 # threshhold terminator
            raise StopIteration             # end of iteration
        else:                               # look for usable candidate
            while abs(self.last-self.rand) < 0.4:
                print '*',                  # display the rejection
                self.rand = random.random() # new candidate
            self.last = self.rand           # update prior value
            return self.rand
Use of this custom iterator looks exactly the same as for a true list generated by a function:
for num in randomwalk_iter(): print num,
In fact, even the idiom
if elem in iterator is supported,
which lazily only tries as many elements of the iterator as are needed to
determine the truth value (if it winds up false, it needs to try all the
elements, of course).
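A quick sketch of that laziness -- the membership test consumes elements only until it finds a match, and the iterator keeps its place:

>>> it = iter([1, 2, 3, 4, 5])
>>> 3 in it      # consumes 1, 2, 3, then stops
1
>>> it.next()    # the rest are still waiting
4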
Leaving a trail of crumbs
The above approaches are fine for the problem at hand. But none of them scale very well to the case where a routine creates a large number of local variables along the way, and winds its way into a nest of loops and conditionals. If an iterator class or a function with static (or global) variables depends on multiple data states, two problems come up. One is the mundane matter of creating multiple instance attributes or static list elements to hold each of the data values. The far more important problem is figuring out how to get back to exactly the relevant part of the flow logic that corresponds to the data states. It is awfully easy to forget about the interaction and codependence of different data.
Generators simply bypass the whole problem. A generator "returns" with the
new keyword
yield, but "remembers" the exact point of
execution where it returned. Next time the generator is called, it picks
up where it left before -- both in terms of function flow and in terms of
variable values.
One does not directly write a generator in Python 2.2+. Instead,
one writes a function that, when called, returns a generator. This might
seem odd, but "function factories" are a familiar feature of Python, and
"generator factories" are an obvious conceptual extension of this. What
makes a function a generator factory in Python 2.2+ is the presence of one
or more
yield statements somewhere in its body. If
yield occurs,
return must only occur without any
accompanying return value. A better choice, however, is to arrange the
function bodies so that execution just "falls off the end" after all the
yields are accomplished. But if a
return is
encountered, it causes the produced generator to raise a
StopIteration exception rather than yield further values.
In my opinion, the choice of syntax for generator factories was somewhat
poor. A
yield statement can occur well into the body of a
function, and you might be unable to determine that a function is destined
to act as a generator factory anywhere within the first N lines of a
function. The same thing could, of course, be true of a function factory
-- but being a function factory doesn't change the actual syntax
of a function body (and a function body is allowed to sometimes return a
plain value; albeit probably not out of good design). To my mind, a new
keyword -- such as
generator in place of
def --
would have been a better choice.
Quibbles over syntax aside, generators have the good manners to
automatically act as iterators when called on to do so. Nothing like the
.__iter__() method of classes is needed here. Every
yield encountered becomes a return value for generator's
.next() method. Let's look at the simplest generator to make
things clear:
>>> from __future__ import generators
>>> def gen(): yield 1
>>> g = gen()
>>> g.next()
1
>>> g.next()
Traceback (most recent call last):
  File "<pyshell#15>", line 1, in ?
    g.next()
StopIteration
Let's put a generator to work in our sample problem:
from __future__ import generators   # only needed for Python 2.2
import random

def randomwalk_generator():
    last, rand = 1, random.random() # initialize candidate elements
    while rand > 0.1:               # threshhold terminator
        print '*',                  # display the rejection
        if abs(last-rand) >= 0.4:   # accept the number
            last = rand             # update prior value
            yield rand              # return AT THIS POINT
        rand = random.random()      # new candidate
    yield rand                      # return the final small element
The simplicity of this definition is appealing. You can utilize the generator either manually or as an iterator. In the manual case, the generator can be passed around a program, and called wherever and whenever needed (which is quite flexible). A simple example of the manual case is:
gen = randomwalk_generator()
try:
    while 1:
        print gen.next(),
except StopIteration:
    pass
Most frequently, however, you are likely to use a generator as an iterator, which is even more concise (and again looks just like an old-fashioned sequence):
for num in randomwalk_generator(): print_short(num)
Yielding
It will take a little while for Python programmers to become familiar with the ins-and-outs of generators. The added power of such a simple construct is surprising at first; and even quite accomplished programmers (like the Python developers themselves) will continue to discover subtle new techniques using generators for some time, I predict.
To close, let me present one more generator example that comes from the
test_generators.py module distributed with Python 2.2.
Suppose you have a tree object, and want to search its leaves in
left-to-right order. Using state-monitoring variables, getting a class or
function just right is difficult. Using generators makes it almost
laughably easy:
>>> # A recursive generator that generates Tree leaves in in-order.
>>> def inorder(t):
...     if t:
...         for x in inorder(t.left):
...             yield x
...         yield t.label
...         for x in inorder(t.right):
...             yield x
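To exercise it, any object with label, left, and right attributes will do; the minimal Tree class below is just scaffolding for the sketch, not part of the test module:

>>> class Tree:
...     def __init__(self, label, left=None, right=None):
...         self.label = label
...         self.left = left
...         self.right = right
...
>>> t = Tree('B', Tree('A'), Tree('C'))
>>> for x in inorder(t):
...     print x,
...
A B C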
Resources
- Read the previous installments of Charming Python.
- Get the third alpha release of Python 2.2.
- Regarding the last several Python versions, Andrew Kuchling has written his usual excellent introduction to the changes in Python 2.2; read What's New in Python 2.2.
- Read the definitive word on Simple Generators in the Python Enhancement Proposal, PEP255.
- The real dirt on Iterators is in PEP234.
- The code demonstated in this column installment can be found in a single source file.
- Read related developerWorks articles by David Mertz.
In load tests I noticed that BPM Standard always requests the WSDL including all XSDs of a WebService before invoking the actual WebService. There is no caching in place, its deactivated or not working properly. BPM Standard acts as webservice consumer.
We see that before doing the actual webservice call, the WSDL describing the service is requested by BPM Standard. In consequence, lots of HTTP calls are required and we are afraid that this heavily impacts the performance of all our WebService invocations.
We would like to understand why BPM Standard downloads the WSDL during runtime. Is there any possibility to deactivate the downloading at all?
We would also like to understand why the actual WSDL and the XSDs are not cached and the download occurs redundantly. If its not possible to deactivate, is there any way to tell BPM to cache the WSDL permanently (e.g. until next JVM restart)?
Answer by Sue09 (942) | Jun 23, 2015 at 08:12 AM
BPM Standard provides a caching mechanism for WSDLs and XSDs used in outbound Web Service integrations. This cache got implemented with APAR JR42697: Caching of WSDL Objects Consumes Large Amount of Memory
Its default size is 25. This can be adapted by setting the following property in the 100Custom.xml:
<wsdl-cache-size>[Integer]</wsdl-cache-size>
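For example, raising the limit to 50 might look like the sketch below; the enclosing elements are an assumption, so mirror the structure of the matching section in your own 99Local.xml:

<properties>
  <common merge="mergeChildren">
    <!-- Assumed parent element; verify against 99Local.xml -->
    <wsdl-cache-size merge="replace">50</wsdl-cache-size>
  </common>
</properties>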
This article has additional information on this cache and its implications beyond the APAR description: WSDL Cache Memory Consumption in IBM Business Process Manager
Starting with BPM 8.5.6.0, there is a big improvement for this cache. The items will be stored into the database during authoring time on Process Designer. So in runtime, the needed data will be retrieved from the database instead of from the remote server. And the cache entries can be shared between different Web Service Integrations if they are calling the same web service endpoint with the same WSDL. In that case, less cache entries are needed than in BPM 8.0.x. You can migrate to BPM 8.5.6.0 to use the new cache algorithm.
Frequently asked questions:
1) What are the implications of setting the number of entries to 1000? What is the impact on the JVM heap? The cache consumes heap memory, and usage depends on the WSDL size. A fully populated entry normally costs at least 1MB and can reach 100MB for a huge WSDL, so 25 is a safe value for normal use that helps prevent OutOfMemory conditions. A setting of 1000 is feasible if all of the WSDLs in use are small; if some are big, an OOM may happen. We suggest not increasing the value too much -- 50 may be a reasonable value. The needed cache capacity can be estimated as the number of Web Service Integrations per snapshot multiplied by the number of snapshots in the running state.
2) Is there an invalidation timeout for the cache entries? If yes how long is it? There is no timeout for that cache. It's a FIFO cache.
3) Is there a possibility to deactivate invalidation on the cache? During a web service call, BPM needs the WSDL data in the cache to generate the SOAP request and parse the SOAP response, so the WSDL cache is a must. If the WSDL is not in the cache, it is retrieved every time the Web Service Integration is called.
Rails with Webpack - not for everyone
TL;DR
Webpack may be an excellent choice for front-end web apps. But in a standard Rails app I find it to be volatile (configuration still changes) and surprisingly unfriendly (follow my adventure below). Sprockets 4 was a better trade-off for me in a standard Rails project.
Yarn in Rails on Heroku
I've been using Yarn in Rails for some time now, so I can pull in CSS and JS dependencies directly from the source npm packages. The advantage is that I don't have to wait for someone to update a ruby gem wrapper whenever a new version comes out. In fact, I don't necessarily want my Gemfile to be bloated with gems that wrap frontend assets at all.
Deploying a Ruby app that uses Node on Heroku without buildpack is luckily as trivial as adding
gem "webpacker" to your Gemfile. However, just to use Yarn, I now needed Webpack. In fact, I needed Webpack to be there without actually doing anything, because my assets were still compiled with Sprockets.
So, I ran
rake webpacker:install and, at that time, the config files looked very cluttered. But I somehow managed to have
rake assets:precompile run Webpack as a no-op by creating some empty files here and there. All this just so that I could use yarn out of the box.
And... I forgot about it (I did not, however, forget about the cluttered config files spread all over my Rails application.)
Fast-forward 6 months.
Obstacles when replacing Sprockets with Webpack
I'm in a project with several hundred SASS files and every time I change one of them, I need to wait 4 seconds for every browser reload because Sprockets re-compiles my CSS. Five days later I had to fix this.
Somewhere I read that Webpack is smart and fast, so I thought I'd give it a more serious attempt. If you google "Rails Webpack" you will find happy adventurous blog entries about how to replace the asset pipeline with Webpack in Rails 5.1. It had a hype air of "finally we're free from sprockets" and "using modern technology" and "custom JS processing" about it. Sure, why not.
So I ran
rake webpacker:install (again) and began moving my JS files to
app/javascript/packs. But then came my first stumble block. Would I really create a directory called
app/javascript/packs/stylesheets? My SASS files don't feel happy in a directory that includes
javascript in its name. But I continued.
The next pebble on my way was that there is no glob import support for SASS in Webpack, so I couldn't do
import something/**/* (while this is usually frowned upon, there are valid use-cases such as Harry Roberts ITCSS where components don't conflict with each other). I noticed that webpacker uses sass-loader which in turn recommends another loader to achieve glob imports.
A quick google for "webpack sass glob" made me suspicious. There were no less than 4 Webpack loaders that did the same thing: import-glob-loader, node-sass-glob-importer, import-glob, sass-resources-loader. (This made me feel as if the JS community was the Wild West that has not reached the same level of conformity than the Ruby community).
When you use Webpack in Rails, there are a bunch of loaders pre-configured for you. Each of these reads your asset file, may modify it, and passes it on to the next loader. By this magic, globbing can be achieved if a loader parses your
import **/* and replaces it with individual import statements for each file.
Unfortunately, all of these loaders only work with
.scss files and not
.sass which was exclusively used in the current project. Ok, I thought, let me create a few entry files (such as
application.scss and
admin.scss) where I use SCSS instead of SASS. It's a shame, but I won't add number 5 to the list of import glob loaders.
Yet, because loaders work the way they do, it parses the initial SCSS file, resolves the glob statements, but then the regular sass compiler takes over and every included file cannot use glob statements. For example, my
application.scss would use
**/* to load a
components.scss but in the latter I could not use
**/* any more. But I can work even around that. I just need to adapt the way I work to the framework (right?).
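For the record, the entry-file pattern I settled on looks roughly like this sketch (paths are placeholders, and it assumes one of the glob loaders above is wired into the SCSS rule):

// app/javascript/packs/application.scss
// Globs are expanded by the loader here, at the entry point only:
@import "stylesheets/components/**/*";

// stylesheets/components/_card.scss (or any imported file)
// ...plain sass-loader takes over from here, so a glob
// @import in this file would fail to compile.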
Even more obstacles
Then I had a really bad feeling when I had to add this to my JS files:
// This is a JavaScript file
import 'stylesheets/something.scss'
import 'images/background.png'
I had to seriously import CSS and image files in JavaScript in order to tell Webpack that they exist? This may make sense in some rich-client app, but not in my standard Rails project.
But, I got temporarily distracted by this error message:

Invalid configuration object.

I know that Webpack was created by a German but this error message reminded me a little bit too much of theoretical courses from university.
"Ah, my configuration file must have invalid YAML syntax", I thought. But after an hour, or so, lost on googling "debugging webpack", I finally, and accidentally, fixed the mistake. What was the problem? My
webpacker.yml had a
source_entry_path: some_path where
some_path was a directory that did not exist. That's all.
Why did Webpack not tell me that a directory is missing? I can only assume that this would have been a no-brainer in other contexts. But I still did not give up my hopes that the JS community would eventually become more user-friendly. So I ran
bin/webpack and all assets compiled in the end. At least I think so, because the output was, again, not as clear as I would have wished and compiling errors were so long that I had to scroll up 3 pages in my Terminal. (I will omit how I struggled to load the URL of a background image into my CSS, due to lacking documentation).
(Just to be clear, I am by no means badmouthing Webpack, but I present to you my experience with how Webpack presented itself to me.)
Yet, why were all my compiled JS files empty? Because of a bug reported in 2015 that has not been solved yet. If two files exist with the same basename, e.g.
application.sass and
application.js, then some plugin will remove all content from the JS file.
So I went through this "out-of-the-box" Webpack experience and gained... a SASS compiling improvement from 4 to 3 seconds. But I had to keep an eye on the Terminal window where the Webpack compiling took place, before I could reload the browser. So I didn't gain much at all.
I give up for now
At this point I called my buddy who is an Ember.js consultant and asked him about his experience with Webpack. He just laughed and said: "Webpack is terrible. All these loaders, you never know which one actually runs, then everybody creates their own little loader functions, you spend a year configuring it and the output is not beautiful and you don't even notice whether SASS compiles via Ruby or C."
There he summarized my last few weeks' experience exactly, in one sentence.
Finally, I realized that Sprockets 4 (which has been hiding as beta on Rubygems for two whole years now) supports SassC as a drop-in replacement of the Sass compiler implemented in Ruby. I gave it a try and, to be fair, my SASS compiling time has still not gone below 3 seconds. But at least I'm enjoying a true out-of-the-box experience for all assets.
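Opting in is a two-line affair in the Gemfile (a sketch -- pin whichever beta is current):

# Gemfile
gem "sprockets", "~> 4.0.0.beta"  # the beta hiding on Rubygems
gem "sassc-rails"                 # SassC instead of the Ruby Sass compiler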
Summary
- Webpack is more difficult to configure than it should be. I really can't point my finger at it, except that it is far from trivial even for a bug-hunter. My experience with Ember-CLI makes Webpack look like a dragon in comparison.
- Sprockets automatically runs on browser request. No need to have a webpacker-dev-server running that looks for file changes, then compiles, and then you may reload your browser window (no, I don't want my browser to auto-reload after compiling).
- I can't get rid of the feeling that Webpack is primarily concerned with JS and not as much with PNG, WOFF, SVG, etc. In that respect, the Rails Asset Pipeline (apple) is just not comparable with Webpack (pear).
- I cannot confirm that Webpack overall is faster than Sprockets in a large project. After all, it's still underlying libraries such as Babel or SASS that do the actual work.
(Title image: "Eugène von Guérard - Lake Wakatipu with Mount Earnslaw", public domain from the Google Art Project.) | https://www.codementor.io/help/rails-with-webpack-not-for-everyone-feucqq83z | CC-MAIN-2018-09 | refinedweb | 1,498 | 72.36 |
- Introduction To SQL Server Event Handling
- SQL Server Startup
- DDL Events
- Event Notifications
- Extended Events (via Console App) | via GUI App | via PowerShell
Using Visual Studio Express 2012 (this also works with Visual Studio Community 2015), I created a new console application project. All of the code is in the "Program" class, which is created by default (it's in the "Program.cs" file).
using System;
using System.Data.SqlClient;
using Microsoft.SqlServer.XEvent.Linq;

namespace XEventHandler
{
    class Program
    {
        //Specify these two parameters.
        private static string sqlInstanceName = ".\\DBA";
        private static string xeSessionName = "system_health";

        static void Main(string[] args)
        {
            try
            {
                //Connection string builder for SQL
                //(Windows Authentication is assumed).
                SqlConnectionStringBuilder csb = new SqlConnectionStringBuilder();
                csb.DataSource = sqlInstanceName;
                csb.InitialCatalog = "master";
                csb.IntegratedSecurity = true;

                using (QueryableXEventData xEvents = new QueryableXEventData(
                    csb.ConnectionString,
                    xeSessionName,
                    EventStreamSourceOptions.EventStream,
                    EventStreamCacheOptions.DoNotCache))
                {
                    foreach (PublishedEvent evt in xEvents)
                    {
                        Console.ForegroundColor = ConsoleColor.Green;
                        Console.WriteLine(evt.Name);

                        Console.ForegroundColor = ConsoleColor.Yellow;
                        foreach (PublishedEventField fld in evt.Fields)
                        {
                            Console.WriteLine("\tField: {0} = {1}", fld.Name, fld.Value);
                        }

                        foreach (PublishedAction act in evt.Actions)
                        {
                            Console.WriteLine("\tAction: {0} = {1}", act.Name, act.Value);
                        }

                        Console.WriteLine(Environment.NewLine + Environment.NewLine); //Whitespace

                        //TODO:
                        //Handle the event here.
                        //(Send email, log to database/file, etc.)
                        //This could be done entirely via C#.
                        //Another option is to invoke a stored proc and
                        //handle the event from within SQL Server.

                        //This simple example plays a "beep"
                        //when an event is received.
                        System.Media.SystemSounds.Beep.Play();
                    }
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(Environment.NewLine);
                Console.ForegroundColor = ConsoleColor.Magenta;
                Console.WriteLine(ex.ToString());
                Console.WriteLine(Environment.NewLine);
                Console.WriteLine("Press any key to exit.");
                Console.ReadKey(false);
            }
        }
    }
}
A couple of quick notes before I run the app:
- Remember, this is a console application--there's no GUI. YouTube, take it away...
That wasn't terribly exciting, was it? Three events were captured and handled: connectivity_ring_buffer_recorded (twice) and wait_info (once). Each event that was captured was handled by playing the default "beep" system sound. I could have almost as easily sent an email or logged some information to a table.
Code Analysis
One of the first articles I read about the QueryableXEventData class was Introducing the Extended Events Reader. It includes some examples for creating an instance of the QueryableXEventData class (it has more than one constructor) and how to loop through the collection of events. Those code samples were the starting point for what I've created here. Until I ran the code, I kept looking at that first (outer) foreach loop and thinking it would return a finite set of events, iterate through them, writing the event data to the console, then exit the loop. But as presented, program control never leaves the loop. It behaves more like this:
while (true) { //do stuff }
Within the first (outer) foreach loop are two more foreach loops:
foreach (PublishedEventField fld in evt.Fields) foreach (PublishedAction act in evt.Actions)
When you look at the XEvent Session Properties, these correspond to the items in the "Event Fields" tab and the "Global Fields (Actions)" tab, respectively.
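If you'd rather react to one event type than beep at everything, you can branch on the event name and pull a field by name. A sketch -- the event and field names here assume a session that captures sqlserver.error_reported:

foreach (PublishedEvent evt in xEvents)
{
    if (evt.Name == "error_reported")
    {
        //Field values come back as System.Object; cast as appropriate.
        int severity = (int)evt.Fields["severity"].Value;

        if (severity >= 16)
        {
            //Send email, log to a table, invoke a stored proc, etc.
        }
    }
}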
Running The Code
As always, it's wise to try things in a test environment first. Please don't blindly run this code in a production environment! Tools that monitor any system typically have *some* level of impact on the performance of the system itself. That being said, Microsoft has baked in a safeguard to protect us: if the event stream fills up with data faster than our code can consume it, error 25726 should be encountered. This would cause the database engine to disconnect from the event stream to avoid slowing the performance of the server.
If/when you get to the point where you've got the code running, something to keep in mind is that the event_stream target is always an event "behind". Meaning at any given time, the most recent event is not captured by the QueryableXEventData class. (I demonstrated this issue in another post.)
Another Option
If Visual Studio and C# aren't in your wheelhouse, maybe this post didn't resonate with you. Perhaps Powershell can save the day! The code can be converted almost line-for-line into a Powershell script with a nearly identical run-time experience. I'll demonstrate that in my next post. | https://itsalljustelectrons.blogspot.com/2017/01/SQL-Server-Event-Handling-Extended-Events.html | CC-MAIN-2017-26 | refinedweb | 705 | 50.43 |
#include <vtkShaderCodeLibrary.h>
This class provides the hardware shader code.
Definition at line 29 of file vtkShaderCodeLibrary.h.
Reimplemented from vtkObject.
Definition at line 33 of file vtkShaderCodeLibrary.h.
Obtain the code for the shader with the given name. Note that Cg shader names are prefixed with CG and GLSL shader names are prefixed with GLSL. This method allocates memory; it is the responsibility of the caller to free it.
Returns an array of pointers to char strings that are the names of the shader codes provided by the library. The end of the array is marked by a null pointer.
Provides for registering shader code. This overrides the compiled in shader codes.
Definition at line 68 of file vtkShaderCodeLibrary.h. | http://www.vtk.org/doc/release/5.2/html/a01233.html | crawl-003 | refinedweb | 121 | 62.44 |
No matter how you work, implementing the principles of responsive web design (RWD) on a large, complex project can be tricky. Like it or not, attaining the hallowed ground of multi-device, cross-platform layouts has a tangible overhead.
The more breakpoints and grid changes your content and framework dictate, the more complicated and time-consuming it becomes to maintain a project. At its worst, an elegant start can descend into a nightmare of specificity and confused overriding declarations. Potentially impacting your site performance with bloated CSS and an increased page load.
Expanding on already well-documented responsive techniques, we'll look to form a predicable pattern of responsive scaling, which can result in cleaner, lighter CSS and increased maintainability.
Combining an em-based layout and typography with flexible widths, we can serve up consistent large screen designs in which all elements both take full advantage of available screen real-estate and resolution. Change the layout once and you can be quietly confident that the change has been reflected throughout all of your breakpoints, and all controlled by a single CSS declaration. Sound good to you? Let's get started.
Dealing with the screen
The crux of any responsive site is in the definition how major and minor breakpoints allow a flexible grid to collapse, or equally, expand. No matter your preferred workflow, producing visuals with a modular style guide or designing in browser, it's the viewport width and its effect on our content that should drive our design decisions.
But the viewport width isn't the end of it. We also have to contend with a multitude of physical screen sizes with differing resolutions. Your content can both be magnified and reduced depending on the display. It can have a marked effect on the legibility of your content and effectiveness of your layout. For example, typography set to look comfortable on a large display will be totally inappropriate for a smaller device.
This isn't a new problem. In fact, screen resolution has always been a limiting factor in how we've designed websites. Remember the 960 grid, or designing for 760 before that? RWD's flexible approach to layout has helped us negotiate this limitation. Yet, it can still hold some challenges and, surprisingly, an opportunity.
Proportional thinking
In many ways, responsive typography has already gone some way in addressing our concern. It's common to encounter a site where headings and copy reduce in size as the screen real estate tightens.
While triggered by a narrowing viewport, the text can also be seen as applying appropriate scaling for the screen resolution. It becomes acceptable to reduce the font size because the reduction is countered, in part, by a magnification at smaller resolutions. The most common and perhaps most inefficient approach to responsive typography is the simple styling and then restyling of elements at individual breakpoints. Client-side scripts are also often used to dynamically resize headings, maximising their size to fit within the available space, FitText and slabText being some of the more popular open-source projects. But there is another approach, a simple yet powerful technique and one that should capture our interest.
Controlling scale with relative units
No doubt you've used pixels to size your typography and layout and why not? Converting a design into HTML requires a common unit and there's nothing wrong with using pixels. But there is a drawback. Pixels are absolute values, ignoring the context of their environment. (It's often accepted that a single CSS px equates to a physical screen pixel. In a high-definition world, that's not always the case.)
Ems are different. Defined as font-relative lengths, they are directly proportionate to a parent element's font-size. Defining this as a percentage will scale any child attributes defined in ems by that factor. Setting your font-size on the root of your document, you have a global effect on your typographic proportions.
Let's create a basic document with a HTML fontsize of 100%, with headings set in ems. Now, lower the HTML font-size to 80%. Everything defined in ems will scale by 80%. We can better take advantage of this with the introduction of media queries.
For example:
@media screen and (max-width: 55em) { html { font-size: 80%; } }
By simulating different viewport widths, you can get a feel for how our typography now scales. This can work both ways, by defining a font-size above 100% we can also enlarge our text.
@media screen and (min-width: 75em) { html { font-size: 135%; } }
A simple premise, but a powerful one. By using this method you can effectively control an entire site's typographic sizing with a single CSS declaration, completely avoiding the need to repeatedly target individual elements. If your document's vertical rhythm is composed of relative units, it will also scale beautifully. Find a live example of this on CodePen.
Adding a little more structure
As we've seen, defining our typography with ems can allow us to take full advantage of its relative properties. But ems are not only valid for typographical elements, we can use them to define any CSS measurement and, in doing so, we'll bind them into a relational relationship with the root font-size.
Sounds good. But if you're not careful, ems can cause trouble. When nested they can produce unwanted scaling through inheritance. But ems have a more inheritance-friendly counterpart in the form of rems.
A rem functions in the same manner as an em, with the only difference being that it looks directly to the root font-size for its relational value. This works around our inheritance issue and is perfect for our application.
While rems were broadly supported by leading browsers around the same time as media queries, for pre-CSS support, however, we can use px values as a unit fallback.
Expanding on our previous example, let's add a little more structure, being sure to define all of our layout and typography in rems with one exception. Because of an issue with Safari, we'll need to define our media queries in standard ems. You can view the expanded example on CodePen.
With that in place, the entire layout now scales, getting smaller as the screen narrows and larger as it widens. But importantly, it stays in proportion with both our typography and its surrounding framework. But, let's be honest, without a flexible grid at its heart, we are some way off a truly responsive layout. We need a plan.
We can still achieve our flexible layout in precisely the same way we would normally approach a responsive build. We just have to show a little discipline in where we introduce percentages into the mix. It's helpful to visualise two separate planes of effect within our document.
Plan of attack
The horizontal: anything that makes up the combined width of the document is defined as a percentage. This includes the likes of: widths, margin, padding, and border-left and right.
The vertical and everything else: specified in rems and bound to the documents root font-size it should be our default unit, including: min- and max-width, top and bottom relative, and absolute positioning.
Let's put that into action:
html { font-size: 100%; }

h1 { font-size: 2.75rem; }
h2 { font-size: 2.125rem; }

.wrapper {
  width: 100%;
  min-width: 50rem;
  margin: 0 auto;
}

.header {
  position: relative;
  width: 100%;
  height: 3rem;
}

.header-logo {
  position: absolute;
  top: 1rem;
  left: 6%;
  width: 5rem;
  padding: 1.3rem 1.5rem;
}

.article { overflow: hidden; }

.article-content {
  float: left;
  width: 56%;
  padding: 2rem 6% 4rem;
}

.article-aside {
  float: right;
  width: 26%;
  padding: 1.5rem 5% 3rem 0;
}

.article-figure { padding: 0 2%; }

.article-figure > img { margin: 1.5rem 0 0; }

.auxiliary-list {
  margin: 0;
  padding: 0 0 0 6%;
}

.auxiliary-list > li { display: inline-block; }

.aux-item {
  width: 19%;
  margin: 2rem 5.6% 2rem 0;
  padding: 3.1rem 0;
}

.footer {
  width: 97%;
  padding: 3rem 0 0 3%;
}

@media screen and (max-width: 55em) {
  html { font-size: 80%; }
}

@media screen and (min-width: 65em) {
  html { font-size: 115%; }
}

@media screen and (min-width: 75em) {
  html { font-size: 135%; }
}
For the sake of brevity, we're only dealing with layout within our example. You can view the complete styles on CodePen.
This all seems pretty straightforward, but, with percentages controlling two aspects of our layout, it can be a little confusing. So, to be clear, the percentages that make the document's overall width are not directly influenced by our root scaling. By maintaining this separation we can produce a site that is both flexible and one that can be proportionally scaled.
In the wild
While these short examples show the basic premise of what we're looking to achieve, its real life application will often rely on converting a design comp into styled HTML. For that, we'll need a method of expressing pixel measurements in rems.
Luckily for us, this already exists. It's a formula you should already be familiar with. It's the same as one we use to find proportional percentages in a responsive workflow, that is:
target ÷ context = result
Because rems are font-relative lengths, our context is then tied to the root font-size. Assuming a HTML font-size of 100% equates to 16px, then an example measurement of 58px would be 58 divided by 16 equals 3.625 rem. This is also true of converting px based font-sizes to rem units. It's completely feasible to work out our rem values longhand. But that in of itself creates an overhead, a barrier to working efficiently. Using a CSS preprocessor like Sass can save us a lot of the heavy lifting. With a simple mixin in place, we can define all of our measurements in pixels, making our CSS easier to digest.
$em-root: 16px !default; @function rem($target, $context: $em-root) { @return ($target / $context) * 1rem; }
Hidden benefits
Often overlooked, the browser zoom function allows users to control the magnification of a site. It forms an important part of a suite of accessibility tools for people with visual impairments. Untested, it can cause layouts to behave unexpectedly, potentially impacting site navigation. With our scaling patterns in place, we find ourselves replicating this native behaviour and testing against it becomes part and parcel of our workflow.
Scaling also helps us to present a more consistent experience over a wider number of screens. This consistency can help to find workflow efficiencies when working with a larger team. Many conversations on how the content should reflow and associated complexities can be reduced. Testing and QA become easier, as expectations are the same throughout the large-screen experience. This can allow us to invest more time in crafting interaction and our experience as a whole.
Taking it too far
Scaling your content has its limitations. Don't use it to the detriment of your small screen experience. Proportional RWD can work beautifully with a mobile first approach. We can look to craft our small screen experience with the introduction of a major breakpoint defining a small and large screen layout each controlled by an independent scaling pattern.
Conclusion
A simple execution, combining flexible widths with proportional scaling, can deliver a number of benefits. It achieves our responsive goals while presenting consistent, predicable layouts to our audience and reduces the need to re-flow our content. By echoing the browser native scaling function, we inherently test our designs against an undervalued accessibility feature.
On larger projects, the scaling of layouts reduces the need for redundant CSS, and allows for a more efficient team workflow, slashing our design and build time.
Words: Dan Nisbet
This article originally appeared in net magazine issue 253. | https://www.creativebloq.com/css3/slash-build-time-proportional-rwd-91412846 | CC-MAIN-2020-40 | refinedweb | 1,962 | 55.74 |
Where to get Cocos2d-x and what do I get?
You can clone the GitHub Repo and follow the steps in the README. You can also download as part of the Cocos package on our download page. No matter if you choose to develop in C++, JavaScript or Lua, everything you need is in one package. The Cocos family of products has a few different pieces.
Cocos2d-x - this is the game engine, itself. It includes the engine and the cocos command-line tool. You can download a production release or stay bleeding edge by cloning our GitHub Repo.
Cocos Creator - is a unified game development tool. You can create your entire game, from start to finish, using this tool. It uses JavaScript natively and can export to C++. Read more about Cocos Creator.
Cocos Launcher - is EOL'd. No replacement.
Coco Studio - is EOL'd and has been replaced by Cocos Creator.
Code IDE - is EOL'd. Common text editors and IDE's can be used instead.
Conventions used in this documentation
autois used for creating local variables.
using namespace cocos2d;is used to shorten types.
- each chapter has a compilable source code sample to demonstrate concepts.
- class names, methods names and other API components are rendered using fixed fonts. eg:
Sprite.
- italics are used to notate concepts and keywords. | http://cocos2d-x.org/docs/cocos2d-x/en/about/getting_started.html | CC-MAIN-2018-30 | refinedweb | 221 | 69.68 |
Joining unique values from two sequences with the LINQ Union operator
September 5, 2017 Leave a comment
Say you have two sequences of the same object type:
string[] first = new string[] {"hello", "hi", "good evening", "good day", "good morning", "goodbye" }; string[] second = new string[] {"whatsup", "how are you", "hello", "bye", "hi"};
You’d then like to join the two sequences containing the values from both but filtering out duplicates. Here’s how to achieve that with the first prototype of the LINQ Union operator:
IEnumerable<string> union = first.Union(second); foreach (string value in union) { Console.WriteLine(value); }
You’ll see that “hello” and “hi” were filtered out from the second sequence as they already figure in the first. This version of the Union operator used a default comparer to compare the string values. As .NET has a good default comparer for strings you could rely on that to filter out duplicates.
However, if you have custom objects then .NET won’t automatically know how to compare them so the comparison will be based on reference equality which is not what you want. Say you have the following object:
public class Singer { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } }
…and the following sequences:
IEnumerable<Singer> singersA = new List<Singer>() { new Singer(){Id = 1, FirstName = "Freddie", LastName = "Mercury"} , new Singer(){Id = 2, FirstName = "Elvis", LastName = "Presley"} , new Singer(){Id = 3, FirstName = "Chuck", LastName = "Berry"} }; IEnumerable<Singer> singersB = new List<Singer>() { new Singer(){Id = 1, FirstName = "Freddie", LastName = "Mercury"} , new Singer(){Id = 2, FirstName = "Elvis", LastName = "Presley"} , new Singer(){Id = 4, FirstName = "Ray", LastName = "Charles"} , new Singer(){Id = 5, FirstName = "David", LastName = "Bowie"} };
If you try the following:
IEnumerable<Singer> singersUnion = singersA.Union(singersB); foreach (Singer s in singersUnion) { Console.WriteLine(s.Id); }
…then you’ll see that the duplicates weren’t in fact filtered out and that’s expected. This is where the second version of Union enters the picture where you can provide your custom comparer, like the following:
public class DefaultSingerComparer : IEqualityComparer<Singer> { public bool Equals(Singer x, Singer y) { return x.Id == y.Id; } public int GetHashCode(Singer obj) { return obj.Id.GetHashCode(); } }
You can use this comparer as follows:
IEnumerable<Singer> singersUnion = singersA.Union(singersB, new DefaultSingerComparer()); foreach (Singer s in singersUnion) { Console.WriteLine(s.Id); }
Problem solved!
You can view all LINQ-related posts on this blog here. | https://dotnetcodr.com/2017/09/05/joining-unique-values-from-two-sequences-with-the-linq-union-operator-3/ | CC-MAIN-2022-40 | refinedweb | 400 | 58.62 |
2006
Having a heck of time with Remoting
Posted by ips007 at 5/31/2006 8:05:05 PM
I create a CFC and the associated actionscript to accessing it using Flash Remoting and rolled it out in a test envirnoment (ColdfusionMX 6.1) and it all worked. Take the same thing and run it on a ColdfusionMX 7 Server and it does nothing. I have been working in FlashMX Pro 2004 (7.2) I ...
more >>
Flash 8 with remoting creating problem Via using Ms Vss
Posted by Enchanter_saj at 5/31/2006 2:29:31 PM
we are using Ms Source Safe to handle source code of Flash 8 which is connecting with .net remoting but suddenly Flash is getting hang i have AMD 3K+ 512 ram and running on win Advance server plz give me any suggestion or solution Thanks :| ...
more >>
Connecting to Remoting
Posted by jhutchdublin at 5/24/2006 2:52:23 PM
I'm trying to flash remoting working on Windows XP Pro Service Pack 2 (IIS) with ColdFusion MX 7.01 Enterprise and Flash 8.0. I've installed the Flash Remoting Components and can see the NetService etc. However, when I try to include #include "NetServices.as" I get a message it can't fin...
more >>
Please some one take me out of my misery
Posted by pixculim at 5/20/2006 12:39:53 AM
Hi all, I have to say I have had one of the most frustating weeks ever....... I am a mid level flash programmer and ASP too, I want to start using Coldfusion w flash remoting, and/or Flash to conect to databases and so. I just got the Coldfusion MX 7 Web application construction Kit by Ben For...
more >>
using new RelayResponder from inside a function.
Posted by j_mcwatters at 5/16/2006 12:00:00 AM
Hi, I'm trying to finish a tutorial on Flash Remoting but the examples I have are from Flash MX, and I'm using Flash 8 Pro. I've tried using this syntax; <--- CODE ----> var nameVal = "New Model"; var descVal = "The latest seat inovation."; var priceVal ="21.45"; var typeVal = ...
more >>
Installation
Posted by MaxManNH at 5/13/2006 5:40:25 PM
I am trying to install the remoting components onto a WIn2k3 server in my office but the install keeps failing. I have tried both the version 8 and MX2004 install packages. I get the following error with the version 8 package. ISScript.msi failure I have downloaded and installed every...
more >>
Flash Remoting Works Locally But Not On The Server
Posted by vdiaz212 at 5/13/2006 3:58:26 AM
Hello, I have a flash file that connects to an asp.net page. The aspx page queries a databse and returns a set of records to flash. Flash reads in those records and processes those records. Locally, this works fine. However, on the server, nothing happends. I spoke with tech support at hos...
more >>
Accessing .NET Assemblies
Posted by MaxManNH at 5/12/2006 3:36:11 PM
Greetings all, I am developing a web app with VS2005 and Flash MX2004. I have a couple of classes that I have developed for use in my app that are in the App_Code directory. When I build and publish my site the classes all get compiled into a single App_Code.dll and I can't seem to figure...
more >>
Don't see what you're looking for? Search DevelopmentNow.com.
HostMySite.com and Flash Remoting
Posted by ganzor at 5/11/2006 11:31:07 AM
My hosting service says they've set up Flash Remoting correctly, but I can't get it to work. Any ideas? I've used Adobe's devCenter tutorial, but HostMySite says the code won't work on CF 7.0 so then I downloaded and installed the realEstate tutorial which is designed for CF 7.0 - still d...
more >>
Posted by hemadri at 5/10/2006 7:48:26 AM
Hello I need to load images in my swf file dynamically from MySql using PHP. Brief is I have one Swf file and and i want to animate image in it but image should be come from dynamic path.. anybody can tell me how to do this? if you want more brief then i can provide it but it is urgent... ...
more >>
#include "NetServices.as",#include "NetDebug.as" problem
Posted by chayanvinayak at 5/8/2006 9:30:06 AM
#include "NetServices.as",#include "NetDebug.as" problem -------------------------------------------------------------------------------- #include "NetServices.as" #include "NetDebug.as" when i try to include these two files and publish in Actionscript2: it gives following error:...
more >>
Can't connect with XMLSocket
Posted by falconovich at 5/4/2006 7:43:31 PM
I am using XMLSocket to connect to a server application. The server app. is binding using the host name and port 1025. I can connect to the server app using tcp from other client machines w/ no problems. I am unable to connect to the server using XMLSocket. Looking at the TCP packet sniff ...
more >>
·
·
groups
Questions? Comments? Contact the
d
n | http://www.developmentnow.com/g/72_2006_5_0_0_0/macromedia-flash-flash-remoting.htm | crawl-001 | refinedweb | 856 | 74.19 |
Today’s challenge: given an array, find all combinations of three numbers that sum up to X. Target complexity: O(N2)
Example
Input: array =
[2, 3, 1, -2, -1, 0, 2, -3, 0], X =
0
Expected output:
[(-3, 0, 3)] * 3 + [(-3, 1, 2)] * 2 + [(-2, -1, 3)] + [(-2, 0, 2)] * 6 + [(-1, 0, 1)] * 3
Explanation
Let’s think of a simplified problem first: how can we find all pairs of numbers that sum up to some X? This is quite easy:
- we sort the array
- we place a cursor at the start and the end of the array, and we move them
- we compare X with the sum of the values pointed by the two cursors. If it is the same, we found a pair, if our sum is too high, we decrease the high cursor, otherwise we increase the low one.
Now that we can do this, we can use it to solve our original problem. We walk the array, and for every value, we find all combinations of two values in the remainder of the array that sum up to X minus the current value.
Here’s the code:
def compute_sorted_frequency_distribution_of_array(A): '''' Returns a sorted array of the unique values in A, with the number of times they appear in A. This is really just equivalent to: import collections return sorted(collections.Counter(A).values()) Complexity: O(N) ''' A.sort() freq = [] last_a = None how_many = 0 for a in A: if last_a == a: how_many += 1 else: if last_a is not None: freq.append((last_a, how_many)) how_many = 1 last_a = a if how_many > 0: freq.append((last_a, how_many)) return freq def find_two_number_summing_up_to(freq, target_value): ''' Given a sorted array with unique values and frequencies 'freq', and a target value, this function finds all pairs of values in the array that sum up to the value. Complexity: O(N) ''' s, e = 0, len(freq) - 1 while s != e and e >= 0 and s <= len(freq): delta = freq[s][0] + freq[e][0] - target_value if delta == 0: yield freq[s][0], freq[e][0], freq[s][1] * freq[e][1] e -= 1 elif delta > 0: # we need a smaller sum for the 2nd and 3rd number! e -= 1 else: # we need a bigger sum for the 2nd and 3rd number! s += 1 def solution(A, X): ''' Finds all c find all combination of three numbers that sum to X. Complexity: O(N^2) ''' freq = compute_sorted_frequency_distribution_of_array(A) combinations = [] for i, (value_1, how_many) in enumerate(freq): for value_2, value_3, how_many_pair in find_two_number_summing_up_to(freq[i + 1:], X - value_1): combinations += [(value_1, value_2, value_3)] * (how_many * how_many_pair) return combinations if __name__ == '__main__': print 'Starting tests..' assert solution([], 0) == [] assert solution([1], 0) == [] assert solution([1, 2], 0) == [] assert solution([1, 2, 3], 0) == [] assert solution([1, 2, -3], 0) == [(-3, 1, 2)] assert solution([1, 2, -3], 0) == [(-3, 1, 2)] assert solution([3, -3, 1, 0, 0], 0) == [(-3, 0, 3), (-3, 0, 3)] expected = [(-3, 0, 3)] * 3 + [(-3, 1, 2)] * 2 + [(-2, -1, 3)] + \ [(-2, 0, 2)] * 6 + [(-1, 0, 1)] * 3 assert solution([2, 3, 1, -2, -1, 0, 2, 0, -3, 0], 0) == expected assert solution([2, 3, 1], 5) == [] assert solution([2, 3, 1], 6) == [(1, 2, 3)] print 'Ok!' | http://www.lucainvernizzi.net/blog/2014/11/22/coding-interview-question-find-all-three-numbers-that-sum-up-to-x/ | CC-MAIN-2017-30 | refinedweb | 538 | 52.23 |
Created 06-23-2016 10:14 PM
I want to add a library and use it in Zeppelin (ex. Spark-csv). I succeeded in adding it to Spark and using it by putting my Jar in all nodes and adding spark.jars='path-to-jar' in conf/spark-defaults.conf.
However when I call the library from Zeppelin it doesn't work (class not found). From my understanding Zeppelin do a Spark-submit so if the package is already added in Spark it should work. Also, I tried adding using export SPARK_SUBMIT_OPTIONS=”--jars /path/mylib1.jar,/path/mylib2.jar" to zeppelin-env.sh but same problem.
Has anyone suceeded in adding libraries to Zeppelin ? have you seen this problem ?
See import external library section of
Since databricks csv is published to maven, you can just add the following as the first note before any other note.
%dep z.load("com.databricks:spark-csv_2.10:1.2.0")
Hi @Adel Quazani,
You can add the libraries in Zepplin with import statements.
For example:
import org.apache.spark.rdd._
import scala.collection.JavaConverters._
import au.com.bytecode.opencsv.CSVReader
Hope that answers your question.
Thanks,
Sujitha Sanku
Created 06-24-2016 05:44 AM
I am talking about libraries that doesn't come with Spark by default like spark-csv. This code works with Spark-shell but not with Zeppelin (same thing if I use Pyspark):
import org.apache.spark.sql.SQLContext val df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").option("inferSchema", "true").load("/tmp/sales.csv") df.printSchema() val selectedData = df.select("customerId", "itemId") selectedData.collect()
Should I add import statement ? why this is working in Spark directly
See import external library section of
Since databricks csv is published to maven, you can just add the following as the first note before any other note.
%dep z.load("com.databricks:spark-csv_2.10:1.2
You can add jar files straight under Interpreter dependencies
Load Dependencies to Interpreter
Regards, George Davy
Created 03-14-2018 02:21 PM
Created?
Created 05-21-2021 01:33 AM
Thx, this works for me also.
Adel,
I have the same issue, I have spark1.6 and I need to use spark-csv, can you tell me what I need to do please.
and for Zeppelin, does it work fot you? | https://community.cloudera.com/t5/Support-Questions/Adding-libraries-to-Zeppelin/m-p/155792 | CC-MAIN-2021-31 | refinedweb | 392 | 60.31 |
I guess from my perspective we are at
field:[<goop>-><goop>]
The delimiter is not yet defined, but the options currently discussed are
-
->
;
:
|
>
The problem with - and : is that they may be part of a date format.
The action taken by the QueryParser would depend on the type of field we
were using (if that were an easy change). For Date fields, it would convert
the <goop> to a Date using the SimpleDateFormat and try to guess the format
(I think it will handle the ISO 8601 formats).
OR
If adding a type to a field is difficult, then the next option is to just
support a date range and assume the data is a date.
OR
If adding a type to a field is difficult and we don't want to just support a
Date format, then we would create a specific format like
YYYY/MM/DDTHH:MM:SS
For dates and just a set of digits for numbers.
Does that sound about right? If so what's are people preference?
My preferences are
Solve with Option 3 now, but determine how to solve with option 1.
Delimiter preference would be ">" It seem intuitive to me.
--Peter
On 6/4/02 10:17 PM, "Otis Gospodnetic" <[email protected]> wrote:
> Hello,
>
> Just curious what the status of this issue is, as the discussion seems
> to have stopped.
>
> --- "Eric D. Friedman" <[email protected]> wrote:
>> Instead of reinventing the wheel for representing dates, how about
>> using an existing standard? ISO 8601 defines a simple lexical
>> representation for dates, times (with optional millisecond
>> precision),
>> and timezones that is easy to implement. This is what's used in the
>> XML Schema "dateTime" datatype.
>>
>> A summary of the ISO 8601 notation is available here:
>>
>>
>> The documentation for the XML Schema dateTime datatype is here:
>>
>
> I agree, that is why I immediately suggested YYYY-MM-DD. I dislike
> U.S.-centric or Europe-centric approaches when there is a standard
> format.
>
>> I whipped up a JavaCC parser to handle this lexical representation
>> (see
>> attachment).
>>
>> Note that for this to be useful in QueryParser, it's going to need
>> its
>> own lexical state. This makes sense anyway, since it would be a
>> mistake to have the query syntax infer magical properties about
>> strings
>> that appear to be dates. Better is to have a keyword in the query
>> syntax that introduces a date value: something like date(<VALUE>)
>> would work. So would to_date(<VALUE>) for those who know SQL. I
>> would
>> have suggested date:<VALUE> but I think that already means something
>> in
>> the QueryParser's lexical specification. (I don't actually use
>> QueryParser because the patches I've submitted previously haven't
>> made
>> it in yet, and until they do, QP is fatally crippled for my
>> purposes).
>
> I'll try to look for your patches in the archives (if you have the URL
> handly please send it to me), so that I can put it on the TODO list, if
> it makes sense to do so.
> As for the above comments about the parser, I'm afraid I'm still a
> JavaCC neophite. I don't dislike date(<VALUE>) approach. If users can
> grasp field:value they shouldn't have a problem with field:date(value),
> I think.
>
> Otis
>
>
>> On Sun, 2 Jun 2002, Peter Carlson wrote:
>>
>>> I like this idea of [GOOP:GOOP] as it gives the most flexibility.
>> However,
>>> this requires the field to have a known characteristic like a date
>> field,
>>> number field or text field correct? If you just use the static
>> Field.Date
>>> this would require adding a new attribute the field class? I like
>> this idea
>>> but I don?t know the difficulty / backward compatibility issues.
>>>
>>> If the extra field attribute is too difficult, then I suggest we
>> use the
>>> nnnn-nn-nn format method so we can use the pattern to determine the
>> data
>>> type.
>>>
>>> For number fields, should this support only integers, or decimal
>> numbers
>>> too?
>>>
>>> I don't think we should use the : character, because we probably
>> want to
>>> support time formats in the date format. Something like 03/01/2001
>> at
>>> 00:01:00. Maybe something like ">" or "|" or even "->" ?
>>>
>>> Also, inclusive vs. exclusive should be accounted for with the [ vs
>> {
>>> characters. I think this might already be done, but just wanted to
>> throw it
>>> out there.
>>>
>>> --Peter
>>>
>>>
>>> On 6/2/02 2:13 AM, "Brian Goetz" <[email protected]> wrote:
>>>
>>>>>> How about:
>>>>>>
>>>>>> DATE = nnnn-nn-nn
>>>>>> NUMBER = n*
>>>>>> RANGE = [ DATE : DATE ] | [ NUMBER : NUMBER ]
>>>>>>
>>>>>> An alternate, less parse-oriented approach would be this:
>>>>>> RANGE = [ GOOP : GOOP ]
>>>>>> where
>>>>>> GOOP = any string of letters/numbers not containing : or ].
>>>>>
>>>>> I'd go for the first one as it's more explicit. However,
>> perhaps the
>>>>> second approach is more extensible?
>>>>
>>>> When I first did the query parser, I defined terms by inclusion
>>>> (stating valid characters) instead of exclusion (excluding
>> non-term
>>>> characters.) Turns out I missed quite a few in the first go
>> around,
>>>> which taught me the lesson (again) that sometimes trying to be
>> too
>>>> specific is a rats nest. What about dates like 02-Mai-2002 (not
>> a
>>>> typo, french for May)? Letting DateFormat figure it out has some
>>>> merit.
>>>>
>>>>> DateField(Date) and NumberField(int) sounds right, but wouldn't
>> Field
>>>>> class make more sense?
>>>>
>>>> I had in mind static methods of Field, just like Field.Text --
>>>> Field.Date, Field.Number. Sorry if that wasn't clear. This
>> seems
>>>> an easy addition.
>>>>
>>>> --
>>>>>
>>>
>>> PARSER_BEGIN(ISO8601Parser)
>>
>> import java.io.*;
>> import java.util.*;
>> import java.text.*;
>>
>> public class ISO8601Parser {
>>
>> static DateFormat fmt;
>>
>> public static void main(String args[]) throws ParseException {
>> String date;
>>
>> //date = "1999-05-31T13:20:00Z";
>> //date = "1999-05-31T13:20:00-00:01";
>> date = "1999-05-31T13:20:00.999-08:00";
>>
>> TimeZone utc = TimeZone.getTimeZone("UTC");
>> fmt = DateFormat.getDateTimeInstance();
>> fmt.setTimeZone(utc);
>>
>> ISO8601Parser parser = new ISO8601Parser(new StringReader(date));
>> Date d = parser.date();
>> System.out.println(fmt.format(d));
>> }
>> }
>>
>> PARSER_END(ISO8601Parser)
>>
>> TOKEN :
>> {
>> <#DIGIT: ["0"-"9"]>
>> | <TWOD: <DIGIT><DIGIT>> // two digits used for day, month,
>> hours, minutes, seconds
>> | <MILLIS: <TWOD><DIGIT>> // millisecond precision is 000
..
>> 999
>> | <YEAR: <TWOD><TWOD>(<DIGIT>)*> // at least 4 digits, but
possibly
>> more
>> | <DASH: "-"> // delimiter for CCYY-MM-DD; doubles
>> as minus sign for signed ints
>> | <COLON: ":"> // delimiter for hh:mm:ss
>> | <DOT: "."> // delimiter for ss.mmm
>> (milliseconds)
>> | <T: "T" > // delimiter between date and time
>> | <Z: "Z" > // UTC timezone
>> | <PLUS: "+"> // indicates positive offset from
>> UTC
>> }
>>
>> /**
>> * Input to this production is a series of tokens matching the
>> following specification:
>> * CCYY-MM-DD -- a date with no time specification<br>
>> * CCYY-MM-DDThh:mm:ss -- a timestamp implicitly in the UTC
>> timezone<br>
>> * CCYY-MM-DDThh:mm:ssZ -- a timestamp explicitly in the UTC
>> timezone<br>
>> * CCYY-MM-DDThh:mm:ss-08:00 -- a timestamp with a negative 8 hour
>> offset from UTC<br>
>> * CCYY-MM-DDThh:mm:ss.mmm -- a timestamp with millisecond
>> precision<br>
>> * -CCYY-MM-DD -- a date whose year is before the common era
>> (BCE)<br>
>> * NNCCYY-MM-DD -- a date whose year is > 9999<br>
>> *
>> * <p> Note that years greater than 9999 are allowed, but that 0000
>> is not a valid year.
>> * Negative numbers are allowed when representing years BCE.
>> * </p>
>> *
>> * <p>Milliseconds are optional in the seconds field. The timezone
>> indicator is optional.
>> * </p>
>> *
>> *@return a java.util.Date instance in the UTC timezone, with
>> millisecond precision.
>> */
>> Date date() :
>> {
>> int CCYY = 0, MM = 0, DD = 0, hh = 0, mm = 0, ss = 0, millis = 0;
>> int deltahh = 0, deltamm = 0;
>> boolean deltaPlus = true;
>> Calendar c = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
>> }
>> {
>> CCYY = year() <DASH>
>> MM = twod() <DASH>
>> DD = twod()
>> {
>> MM--; // months are 0 based
>> c.set(c.YEAR, CCYY);
>> c.set(c.MONTH, MM);
>> c.set(c.DAY_OF_MONTH, DD);
>> }
>> (
>> <T>
>> hh = twod() <COLON>
>> mm = twod() <COLON>
>> ss = twod()
>> {
>> c.set(c.HOUR_OF_DAY, hh);
>> c.set(c.MINUTE, mm);
>> c.set(c.SECOND, ss);
>> }
>> (
>> <DOT>
>> millis = millis()
>> {
>> c.set(c.MILLISECOND, millis);
>> }
>> )?
>> (
>> <Z> // we're already in UTC, so no adjustment needed
>> |
>> (
>> (
>> <PLUS> // somewhere ahead of UTC (east of Greenwich)
>> |
>> <DASH> // behind UTC (west of Greenwich)
>> {
>> deltaPlus = false;
>> }
>> )
>> deltahh = twod() <COLON>
>> deltamm = twod()
>> {
>> if (! deltaPlus) {
>> deltahh = -deltahh;
>> deltamm = -deltamm;
>> }
>> // millisecond offset
>> int offsetFromUTC = ((deltahh * 60) + deltamm) * 60 * 1000;
>> c.set(c.ZONE_OFFSET, offsetFromUTC);
>> }
>> )
>> )?
>> )?
>> {
>> return c.getTime();
>> }
>> }
>>
>> int millis() :
>> {
>> Token t;
>> }
>> {
>> t = <MILLIS> {
>> return Integer.parseInt(t.image);
>> }
>> }
>>
>> int twod() :
>> {
>> Token t;
>> }
>> {
>> t = <TWOD> {
>> return Integer.parseInt(t.image);
>> }
>> }
>>
>> int year() :
>> {
>> Token t;
>> boolean positive = true;
>> }
>> {
>> (
>> <DASH>
>> {
>> positive = false;
>> }
>> )?
>> t = <YEAR> {
>> int year = Integer.parseInt(t.image);
>> if (year == 0) {
>> throw new IllegalArgumentException("0000 is not a legal year");
>> }
>> return positive ? year : -year;
>> }
>> }
>>> --
>>> | http://mail-archives.apache.org/mod_mbox/lucene-dev/200206.mbox/%3CB923056A.88EC%[email protected]%3E | CC-MAIN-2014-52 | refinedweb | 1,420 | 65.12 |
************************Arduino Code*******************************************
/* This Arduino Sketch is part of a tutorial on the ForceTronics YouTube Channel and demonstrates how to use the Sleep cabilities on
Arduino as well as turn off the ADC to get low power consumption. In this tutorial the Extended Fuse on the Atmega was configured
to turn off the Brown Out Detection (BOD) for even further power savings. It is free and open for anybody to use at their own risk.
*/
/*
To turn off the BOD avrdude was used via the command prompt, the following command was used:
avrdude -c usbtiny -p atmega328p -U efuse:w:0x07:m
*/
#include <avr/sleep.h>
void setup() {
delay(6000); //Delay to see normal power level first
sleep_enable(); //enable the sleep capability
set_sleep_mode(SLEEP_MODE_PWR_DOWN); //set the type of sleep mode. Default is Idle
ADCSRA &= ~(1<<ADEN); //Turn off ADC before going to sleep (set ADEN bit to 0)
sleep_cpu(); //enter sleep mode. Next code that will be executed is the ISR when interrupt wakes Arduino from sleep
}
void loop() {
// put your main code here, to run repeatedly:
} | http://forcetronic.blogspot.com/2015/04/ | CC-MAIN-2017-22 | refinedweb | 176 | 64.04 |
gurpeet singh wrote:Please consider the following snippet of code
public class Animal {
// int i;
// i =7;
final int i; //compile time error here
}
why it gives compiler error. shouldn't it just give default value 0 to i variable ? if i remove final it works fine.
A field can be declared final (§4.12.4). Both class and instance variables (static and non-static fields) may be declared final.
It is a compile-time error if a blank final (§4.12.4) class variable is not definitely assigned (§16.8) by a static initializer (§8.7) of the class in which it is declared.
A blank final instance variable must be definitely assigned (§16.9) at the end of every constructor (§8.8) of the class in which it is declared; otherwise a compile-time error occurs. | http://www.coderanch.com/t/579171/java-programmer-SCJP/certification/giving-error | CC-MAIN-2015-06 | refinedweb | 138 | 69.38 |
I have a WSDL that serves as a contract with a vendor. I need to generate service clients and a service impl for testing that faithfully reflects the service of the vendor.
When I generate a java SEI using wsconsume and create a java impl of the SEI and deploy to jboss then the WSDL generated by jboss does not reflect the original WSDL used as input for wsconsume. The targetnamespace and service name has been changed to match the java package name and class name of the impl class instead of matching the names in the original WSDL. This causes clients generated from the original WSDL to fail because they can't find the names they are looking for.
The service java impl that I created references the autogenerated java SEI both by subclassing the SEI and through annotation @javax.jws.WebService(endpointInterface="...autogenerate SEI")
Looking into jsr 181 (sec 4.1.1) I noticed that the @Webservice(targetNamepspace="..") annotation maps to different parts of the WSDL depending on whether the annotation is present on the java SEI or the java impl class. If the annotation is only present in the SEI then its mapped to wsdl:porttype. In the opriginal WSDL the targetnamespace was used for wsdl:service.
So how do I generate a test service that reflects the original WSDL?
See @WebService(wsdlLocation="...")
Tnx - I managed to set the service name and namespace etc using annotations such as @webservice.targetNamespace on the SIB to complement the autogenerated @webservice on the SEI. I can't deploy the original WSDL directly because it contains .net binding stuff. | https://developer.jboss.org/thread/162309 | CC-MAIN-2017-47 | refinedweb | 268 | 55.34 |
Let's say I have a class with multiple constructors, one of which is a copy-constructor (to copy an object):
public class Rectangle {
int width, height;
public Rectangle(int width, int height) {
this.width = width;
this.height = height;
}
public Rectangle(Rectangle source) {
this(source.width, source.height);
}
}
source
null
IllegalArgumentException
You can do this:
public Rectangle(Rectangle source) { this(checkNotNull(source, "Source cannot be null").width, source.height); } private static <T> T checkNotNull(T t, String msg) { if (t == null) throw new IllegalArgumentException(msg); return t; }
I also agree with Jon Skeet that a
NullPointerException is not a bad bevahiour in this case. The only thing is that in long lines when you get an NPE it can be a bit hard to identify which object is
null, which is why a more specific message can be useful.
You can also not reinvent the wheel and use standard
java.util.Objects methods if you don't bother throwing a
NullPointerException instead:
public Rectangle(Rectangle source) { this(Objects.requireNonNull(source, "Source cannot be null").width, source.height); }
if your error message is expensive to build, you can provide a
Supplier<String> instead, to pay the cost of the construction of the message only when it's actually needed:
public Rectangle(Rectangle source) { this(Objects.requireNonNull(source, () -> explainError(source)).width, source.height); } | https://codedump.io/share/Z2Q51bXo3SqD/1/java-null-arguments-when-chaining-constructors | CC-MAIN-2018-13 | refinedweb | 221 | 52.49 |
>Actually, when I was reworking the tasks overview page, I was thinking
>of adding a section for Unsupported (or External, Contributed, whatever)
>Tasks, and listing the links in the doc. Would something like that
>suffice? (See the tasksoverview.html file in ...docs/manual in CVS to
>see what I'm referring to.)
That would be cool, particularly if there were a link to download the
relevant JAR file. I like all the documentation being available in one
place. It would be good if downloaded JARs contained enough information for
ANT to be able to use them immediately rather than having to put <taskdef>
entries in your code all over the place. It seems a shame that the
optional.jar is 'special' in this respect - part of the reason I want a
couple of extra items in the VSS tasks that I submitted yesterday as a patch
is that I don't want to have to do custom builds in order for them to be
accessible by default. If I could create 'MyVssStuff.jar' and add it into
the lib directory, this would be great.
Thinking about it, what would be even nicer is something such as the
following:
* Tasks can be in jar files dropped in the lib directory
* Tasks can be added merely by dropping new jar files into the lib directory
* If ANT processes a build file and meets a task that it doesn't recognise
(and can't find in its lib directory), it automatically (or prompts, or
whatever) looks at the ANT website, finds where the jar file is that
contains the task that is needed, downloads it and continues.
This does mean that the task 'namespace' is controlled by jakarta by
default, but that sounds like no bad thing...
--
To unsubscribe, e-mail: <mailto:[email protected]>
For additional commands, e-mail: <mailto:[email protected]> | http://mail-archives.apache.org/mod_mbox/ant-dev/200202.mbox/%3C1B42761E4CBED5119FF10002A587DD910342F9@NEBULA%3E | CC-MAIN-2014-52 | refinedweb | 317 | 61.26 |
Created on 2009-08-06.19:01:40 by kellrott, last changed 2013-02-19.18:49:33 by fwierzbicki.
Assignment of a newly created ast Node value fails. The assignment
destination become 'None' rather then accepting the value.
Test case:
Simple code mutation. Trying manipulate the ast tree to change a
methods call like
a.set( 1 )
to the assignment:
a = 1
By running attached code 'astMutationTest.py' ('unparse.py' code is
obtained from Cpython parse demo code)
the python output is:
Before Mutation
a.set(1)
After Mutation
a = 1
But for Jython:
Before Mutation
a.set(1)
After Mutation
None
A simpler example:
import _ast as ast
tmpAssign = ast.Assign()
tmpExpr = ast.Expr()
tmpExpr.value = tmpAssign
print tmpExpr.value
In Jython tmpExpr.value is None
In Python, it's something like:
<_ast.Assign object at 0x8ac50>
Is this be caused by the code in org.python.antlr.adapter.ExprAdapter?
The call chain is:
org.python.antlr.ast.Expr.setValue
org.python.antlr.adapter.AstAdapters.py2expr
org.python.antlr.adapter.ExprAdapter.iter2ast
org.python.antlr.adapter.ExprAdapter.py2ast
The py2ast method only includes type checks for PyInteger, PyLong,
PyFloat, PyComplex, PyString, and PyUnicode. All other elements (like
an 'Assign' class) would default to null, if they were set via 'setValue'..
I've been looking at it, and I'm trying to figure out if it's a byproduct of
me using the ast wrong. I'm putting the Assign object in the value field of
the Expr object (changing the expression a.set(1) into the assignment a=1).
An Assign is a stmt, but the value field for the Expr in the Java code
expects a expr. Looking back at the ast description at it looks like expr is a descendant
of stmt. So my code may be constructing a illegal tree. The unparse.py
code is still able to deconstruct it back into code, but adding in the java
type checking reveals the bad tree...
I'm going back to my original code to see if replacing the Expr node in it's
parent's list fixes the problem.
Don't kill the issue just yet, I need to make sure I'm right on this one.
Kyle
On Mon, Oct 26, 2009 at 11:55 AM, Frank Wierzbicki
<[email protected]>wrote:
>
> Frank Wierzbicki <[email protected]> added the comment:
>
>.
>
> _______________________________________
> Jython tracker <[email protected]>
> <>
> _______________________________________
>
I've confirmed that in the original code a list was passed to the
'value' field. In python this didn't cause any problems, despite the
fact that the official AST description for the value field of an Assign
is for a single expression, and not for a list of expressions. The type
checking in Java caused the bad assignment to default to None.
Fixing this issue would only be useful if Jython needs to handle badly
formed AST trees the same way as Python. Otherwise, raising an
exception for a bad assignment might be worth while.
Kyle: thanks for the further analysis. CPython accepts arbitrary Python
objects pretty much anywhere in an ast.py tree, even if it fails at
parse time. Matching up with this behavior will require a complete
re-design (which I plan to do, but it's going to take a while). CPython
separates ast.py from the internal ast -- they are deliberately
different (ast.py is only a mirror of the internal ast so that it can
evolve). Not understanding this, I actually made Jython internal ast
implementation the same as ast.py. I plan to separate these in the
future (CPython design makes loads of sense *slaps forehead*)
Related to #1497
Deferring to 2.6 | http://bugs.jython.org/issue1427 | CC-MAIN-2018-05 | refinedweb | 617 | 68.06 |
Cutting Edge
DHTML-Enabled ASP.NET Controls
Dino Esposito
Code download available at: CuttingEdge0507.exe (131 KB)
Contents
Anatomy of a Postback
The Client-Side Counterpart
DHTML Behaviors
A DropDownList Example
The Extended Object Model
Summing It Up
In the past, I've covered some core aspects of the interaction between DHTML behaviors, the browser, and the ASP.NET runtime (see Cutting Edge: Extend the ASP.NET DataGrid with Client-side Behaviors and Cutting Edge: Moving DataGrid Rows Up and Down). But I haven't covered the intricacies of DHTML behaviors and advanced client-side scripting, so I'll do that here. I'll show how to make ASP.NET code and the Internet Explorer DHTML Document Object Model (DOM) work together and discuss how you set up the communication between the ASP.NET runtime and a server-side instance of an ASP.NET control.
Anatomy of a Postback
To design an effective mechanism for cooperation between DHTML and server-side controls, you need a solid understanding of the ASP.NET postback mechanism. Imagine you have a page with a couple of textboxes and a Submit button. When a user clicks the button, the page posts back. The post can be initiated in one of two ways—through a Submit button or through script. A Submit button is represented by the HTML: <INPUT type="submit">. Most browsers also support posting via the submit method in a <form> element. In ASP.NET, the second approach is used for LinkButtons and auto-postbacks.
When a submit operation is initiated, the browser prepares and sends an HTTP request according to the form's contents. In ASP.NET, the "action" attribute of the sending form is set to the URL of the current page; the "method" attribute, on the other hand, can be changed at will and even programmatically. Possible methods include GET and POST.
The postback for an ASP.NET page that contains a couple of textboxes, a dropdown list, and a Submit button looks like this:
__VIEWSTATE=%D...%2D &TextBox1=One &TextBox2=Two &DropDownList1=one &Button1=Submit
The contents of all input fields, including hidden fields, are sent as part of the payload. In addition, the value of the currently selected item in all list controls is added, as is the name of the Submit button that triggered the post. If there are one or more LinkButtons on the page, two extra hidden fields called __EVENTTARGET and __EVENTARGUMENT are added to the payload:
__EVENTTARGET= &__EVENTARGUMENT= &__VIEWSTATE=%D...%2D &TextBox1=One &TextBox2=Two &DropDownList1=one &Button1=Submit
Both of these hidden fields are empty if the page posts back through a Submit button. If you post back through LinkButton in the page, the payload changes as follows:
__EVENTTARGET=LinkButton1 &__EVENTARGUMENT= &__VIEWSTATE=%D ... %2D &TextBox1=One &TextBox2=Two &DropDownList1=one
In this case, the __EVENTTARGET field contains the name of the LinkButton that initiated the post. ASP.NET uses this information when constructing the server-side representation of the requested page and in determining what caused the postback.
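For the record, script-initiated posts go through a helper JavaScript function that ASP.NET injects into any page containing LinkButtons or auto-postback controls. A simplified sketch of it follows; the script actually emitted differs slightly across ASP.NET versions:

// Simplified sketch of the __doPostBack helper that ASP.NET emits.
// The real script resolves the form by name and normalizes the
// target ID; this version conveys just the mechanics
function __doPostBack(eventTarget, eventArgument) {
    var theForm = document.forms[0];
    theForm.__EVENTTARGET.value = eventTarget;
    theForm.__EVENTARGUMENT.value = eventArgument;
    theForm.submit();
}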
On the server, IIS picks up the request and forwards it on to the ASP.NET runtime. A pipeline of internal modules processes the request and instantiates a Page-derived class. The page class is an HTTP handler and, as such, implements the IHttpHandler interface. The runtime calls the Page's ProcessRequest method through the IHttpHandler interface and the server-side processing starts. Figure 1 provides an overall view of the request process.
Figure 1 ASP.NET Postback Process
Once the server-side processing of the page has begun, the Page object goes through a sequence of steps, as outlined in Figure 2. As its first step, the Page object creates an instance of all server controls that have a runat="server" attribute set in the requested ASPX source file. At this time, each control is created from scratch and has exactly the same attributes and values outlined in the ASPX source. The Page_Init event is fired when all controls have been initialized. Next, the page gives all of its controls a chance to restore the state they had last time the posting instance of that page was created. During this step, each control accesses the posted view state and restores its state as appropriate.
Figure 2 Page Lifecycle Events
At this point, each control's state must be updated with any data posted by the browser. For this to happen, a special conversation is set up between individual controls and the ASP.NET runtime. This is an important point to consider in light of client-side interaction.
The ASP.NET Page class looks up the Form or QueryString collection, depending on the HTTP verb that was used to submit the request. The collection is scanned to find a match between a posted name and the ID property of a server-side control created to serve the request. For example, if the HTTP payload contains TextBox1=One, the Page class expects to find a server-side control named TextBox1. Each ASP.NET control lives on the server but retains a counterpart on the client. The link between them is the string containing the control's ID.
While the ASP.NET Page class can successfully locate a server control with a given name, it has no idea of the type of that control. In other words, from the page perspective, TextBox1 can be either a TextBox, a DropDownList, a DataGrid, or a custom control. For this reason, the Page class processes the control only if it adheres to an expected contract—the IPostBackDataHandler interface. If the control implements that interface, the page invokes its LoadPostData method. The method receives the name of the control (TextBox1, in the example) plus the collection of posted values—that is, Form or QueryString. As an example, a TextBox control will extract the corresponding value ("One", in the example) and compare it to its internal state. This behavior is common to all input controls and to all controls that expect to receive input data from the browser. For example, a DataGrid control that allows users to change the order of columns using drag and drop will, at this point, receive the modified order of columns.
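To fix ideas, here's a rough sketch of that matching step. It is illustrative only, not the actual ASP.NET source code:

// Illustrative only: the gist of the matching step performed by
// the Page class against the posted Form or QueryString collection
ArrayList changedControls = new ArrayList();
foreach (string postedName in postedValues)
{
    Control ctl = page.FindControl(postedName);
    IPostBackDataHandler handler = ctl as IPostBackDataHandler;
    if (handler == null)
        continue;

    // A true return value flags the control for a later call
    // to RaisePostDataChangedEvent
    if (handler.LoadPostData(postedName, postedValues))
        changedControls.Add(handler);
}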
The LoadPostData implementation depends on the characteristics and expected behavior of the particular control. The TextBox control compares the posted string to the value of its Text property. The DropDownList control compares the incoming data to the value of the currently selected item. If the compared values coincide, the method returns false. If the values differ, then the relevant control properties are updated and the method returns true. Figure 3 shows an implementation for a TextBox control.
Figure 3 IPostBackDataHandler Implementation
bool IPostBackDataHandler.LoadPostData(
    string name, NameValueCollection postedValues)
{
    string oldValue = this.Text;
    string newValue = postedValues[name];
    if (!oldValue.Equals(newValue))
    {
        this.Text = newValue;
        return true;
    }
    return false;
}

void IPostBackDataHandler.RaisePostDataChangedEvent()
{
    this.OnTextChanged(EventArgs.Empty);
}
LoadPostData for a TextBox control compares the value posted for a given control with the current value of the Text property. Note that at the time this comparison is made, the Text property contains the value just restored from the view state. From now on, the state of the control is up to date and reflects the old state and the input coming from the client. The Boolean value that LoadPostData returns indicates whether or not the second method on the interface—RaisePostDataChangedEvent—must be invoked later. A return value of true means that the value of Text (or the property or properties a control updates with posted values) has been refreshed and subsequently the TextBox raises a server-side data-changed event. For a TextBox control, this event is TextChanged.
Once this step has been accomplished, the Page_Load event is fired and a second check is made on the control that appears to be responsible for the postback (based on the information sent from the browser). If this control implements IPostBackEventHandler, the RaisePostBackEvent method is invoked to give the control a chance to perform the postback action. The following pseudocode illustrates the implementation of this method for the Button class:
void IPostBackEventHandler.RaisePostBackEvent(string eventArgument)
{
    if (CausesValidation)
        Page.Validate();
    OnClick(new EventArgs());
    OnCommand(new CommandEventArgs(
        CommandName, CommandArgument));
}
As you can see, when a button is clicked and the host page has completed its restoration process, the OnClick event is invoked, followed by OnCommand. A similar piece of code serves the LinkButton class. Code like this is used for any custom controls that require the post action to be started on the client.
The Client-Side Counterpart
Each server control outputs some markup that is sent down to the client. The browser then uses that information to build a DOM rooted in the outermost tag of the control's markup. Simple server controls such as TextBox map directly to HTML elements; more complex controls like the DataGrid map to a subtree of HTML elements, in many cases rooted in an HTML table tag. The root tag, or the most significant tag in the HTML, is given a name (the name HTML attribute) that matches the ID of the server control. This guarantees that the ASP.NET runtime can correctly match up client HTML elements with instances of server-side controls.
When you use or build an ASP.NET control with rich client-side functionalities you end up with at least two related problems. First, you have to figure out how to transfer to the server any input generated on the client. Second, you must make sure that the server control retrieves and properly handles that chunk of information. A third issue revolves around the format you use to send data across the wire.
There might be many ways to solve these issues and, frankly, any approach that works is valid. But when writing code for an ASP.NET control, why not do as the ASP.NET team did? That's where the anatomy of a postback covered earlier fits in.
In the two articles that I mentioned at the beginning of this piece, I create a custom DataGrid control and collect some user input through drag and drop and other client-side operations. The input is then serialized to a string and packed into a hidden field. The hidden field is like any other <INPUT> tag except that it doesn't show up in the user interface. The hidden field is part of the form and its contents are picked up and used to prepare the HTTP payload when a postback is made. The hidden field is created by the server-side control and given the same ID of the control.
For example, a custom DataGrid named DragDropGrid1 will create its own personal hidden field with the same name. Any client-side action that is relevant to the behavior of the grid is persisted to the hidden field. When the page posts back, that information is carried to the server and consumed by ASP.NET in the manner described earlier. The matching ID determines the link between the contents of the input field and a server-side control. The control-specific implementation of the IPostBackDataHandler interface does the rest, giving the control a chance to modify its server state in light of client-side user actions.
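On the client, persisting that state takes little more than a line of script. A minimal sketch, assuming the grid is named DragDropGrid1 and the serialization logic lives elsewhere:

// Sketch: stash client-generated state into the hidden field that
// travels back with the form. The field name matches the control's
// ID; going through the forms collection reaches the input element
// rather than any other markup that shares the same ID
function persistState(serializedState) {
    var theForm = document.forms[0];
    theForm["DragDropGrid1"].value = serializedState;
}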
If you consider simple and basic controls such as TextBox and DropDownList, the <INPUT> element and the displayed user interface are the same thing. Any user interface-related operation automatically modifies the contents associated with the input element. This is much less automatic with more complex and advanced controls. Again, think of a DataGrid control that allows users to change the order of columns using drag and drop. The user interface of a DataGrid is a mere HTML table padded with plain text. The hidden field to carry data is silently created as part of the markup and injected in the page. Some additional code is needed to capture UI events and persist results to the hidden field.
What do you think this additional code should look like? Can it really be different from mere script code? It has to be pure JavaScript code at its core, but if the browser supports it, you can wrap it up in a more elegant and neater object model—that's mostly what a DHTML behavior is all about.
DHTML Behaviors
DHTML behaviors are a feature of Internet Explorer 5.0 and later. They're not supported by any other browser. A DHTML behavior component can be written in any Internet Explorer-compatible scripting language (usually JavaScript) and supplies dynamic functionality that can be applied to any element in an HTML document through CSS style sheets. DHTML behaviors use CSS to separate script and content in a document using an .htc file that incorporates all the DHTML-based functionality needed to describe and implement a given behavior. This behavior, in turn, can be attached to a variety of HTML elements via a new CSS style. Put another way, DHTML behaviors bring the benefits of reuse to the world of scripting.
What's in a DHTML behavior? First, a behavior component can define an object model—a collection of methods, properties, and events that describe the provided behavior and supply tools to control it programmatically. In addition, a DHTML behavior needs to capture some page- and element-level events and handle them. You code this through classic HTML event handlers. You have access to the whole page DOM and can read and write attributes throughout the page. Figure 4 shows an HTC component that allows expanding and collapsing the children of the element to which it is applied.
Figure 4 DHTML Behavior Component
<PROPERTY NAME="Expanded" /> <ATTACH EVENT="onreadystatechange" HANDLER="Init" /> <ATTACH EVENT="onclick" HANDLER="HandleClick" /> <script language="javascript"> // Handles the initialization phase function Init() { if (Expanded == null) Expanded = true; // Toggle visibility for all children of THIS element for (i=0; i<children.length; i++) { if (Expanded == true) children[i].style.display = ""; else children[i].style.display = "none"; } } // Handles the OnClick event on the current element function HandleClick() { var i; var style; // Make sure the sender of the event is THIS element if (event.srcElement != element) return; // Toggle visibility for all children of THIS element for (i=0; i<children.length; i++) { style = children[i].style; if (style.display == "none") { style.display = ""; } else { style.display = "none"; } } } </script>
In DHTML, the expand/collapse functionality is achieved by toggling the value of the display attribute in the style object. A value of "none" keeps the element hidden; a value of "" (empty string) makes the element visible.
The core functionality is found in a <script> tag that collects public event handlers as well as internal functions and classes. Outside of the <script> tag, you define the object model of the behavior and the internal events it wants to handle:
<PROPERTY NAME="Expanded" /> <ATTACH EVENT="onreadystatechange" HANDLER="Init" /> <ATTACH EVENT="onclick" HANDLER="HandleClick" />
The preceding code snippet declares a variable named Expanded and a couple of handlers for the onclick and onreadystatechange DOM events. Properties can be assigned a value in the HTML source through the mechanism of attributes. Event handlers must be defined in the <script> tag. The onreadystatechange event is a common presence in many DHTML behaviors because it represents the initialization phase of the component. In Figure 4, you check the value of Expanded in the initializer and, based on that, you toggle the visibility value of child elements.
To attach a behavior, you use CSS notation (behaviors are ignored in browsers that do not support CSS):
<style>
    .LIST {behavior:url(expand.htc);}
</style>
The CSS attribute is named "behavior". It is assigned a URL that ultimately points to the HTC file. Once you have defined a LIST class, you assign it to any HTML element that requires it:
<ul class="LIST" style="cursor: hand;" expanded="false">
As you see, any public properties defined on the behavior can be initialized as an attribute in any tags that contain style attributes.
Other than the public object model, a DHTML behavior offers nothing that you can't get through plain scripting. But with a single attribute you can attach a certain behavior to a given HTML element or to the root of an HTML element subtree, as is the case with ASP.NET controls. A DHTML behavior can encapsulate a lot of details regarding the internal implementation of the behavior and it has full access to the page's DOM.
A DropDownList Example
In both aforementioned articles, I glossed over the code that specifically handles browser/server communication. Now it's time to focus on what the control needs to do in order to receive and properly process on the server any client-side input. The sample ExtendedDropDownList control is a custom control that is derived from the basic DropDownList control:
public class ExtendedDropDownList : System.Web.UI.WebControls.DropDownList { ... }
The most important difference between the basic and extended dropdown control is that the extended one exposes a client-side object model to let script code add elements dynamically. Newly added elements are added to the Items collection and sent to the server out-of-band, that is, outside the classic format that the HTTP payload takes when a dropdown control is involved.
In ASP.NET, the DropDownList control is designed to be read-only across postbacks. In other words, any list items dynamically added through DHTML code are lost once the page posts back to the server. The canonical HTTP payload doesn't include the items in the dropdown list. It only mentions the ID of the currently selected item. Control-specific information generated on the client can get to the server only in a hidden field. The hidden field can be given an arbitrary name, but in general you give the worker hidden field the same ID as the server control. As explained in the earlier "Anatomy of a Postback" section, this guarantees that the ASP.NET runtime invokes the methods of the IPostBackDataHandler interface on the control to post incoming data. For a custom DropDownList control, though, things are a little bit different. In fact, the base DropDownList control already requires an input element with the same name as the control's ID. This is a <SELECT> element:
<SELECT name="DropDownList1">
    <OPTION value="one">One
    <OPTION value="two">Two
    <OPTION value="three">Three
</SELECT>
A custom and enhanced dropdown list control requires an extra hidden field to carry the text and IDs of the additional items appended at the client. To avoid naming conflicts, this hidden field must have a different name. In my code, I use "_My" to postfix the ID. This is arbitrary, but once you choose a naming convention you must stick with it. In the source code of the customized ExtendedDropDownList control (available in the code download), the control defines a Boolean property to enable client-side insertion and overrides the OnPreRender method. The OnPreRender method simply registers the additional hidden field with the predetermined name. The hidden field is empty when the page is rendered to the client and will be filled as the user works with the control adding items dynamically to the list.
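A bare-bones sketch of that override follows; the Boolean property name is illustrative, and the real source in the code download is more elaborate:

protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    // Emit the companion hidden field that will carry the items
    // added on the client; "_My" is the naming convention chosen
    // here. EnableClientInsertion stands for the Boolean property
    // that turns the feature on
    if (EnableClientInsertion)
        Page.RegisterHiddenField(ID + "_My", "");
}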
What about the contents of the hidden field? Should you use any specific format or convention? The data being passed to the server should be laid out according to a format that is known to both the server control and the client-side DHTML behavior. The format you choose is arbitrary as long as it achieves the expected goals. In this example, I'll use a pipe-separated pair of strings for each dynamically added item (note that without proper escaping, this prohibits the use of the pipe character in the actual values). The left part of the pair represents the text; the right part is for the ID. Here's an example:
One|id1,Two|id2
The custom ExtendedDropDownList control must implement the IPostBackDataHandler interface from scratch. The implementation on the base class is marked as private and can't be invoked from within a derived class.
The code in the LoadPostData method serves two main purposes. First, it manages the index of the selected item. The ID of this item comes through the HTTP payload and is matched against the current contents of the dropdown list. The index found (if any) is then used to overwrite the value of the SelectedIndex property. If this results in a change to the existing value, the host page gets a SelectedIndexChanged event. It is worth noting here that the event is not raised by the ASP.NET runtime. To be more precise, when LoadPostData returns true, the ASP.NET runtime invokes the RaisePostDataChangedEvent method on the same IPostBackDataHandler interface. By implementing this method, a control can fire a proper event.
The second goal of LoadPostData for the ExtendedDropDownList control is populating the Items collection with the new elements just added on the client. As mentioned, text and ID of these new elements are stored in the control's hidden field.
You can decide to raise an ad hoc event to signal that new elements have been added on the client. To do so, you define a private Boolean variable (_addedNewElements in the example) that is set during the execution of LoadPostData only if new elements have been added, as shown here:
void IPostBackDataHandler.RaisePostDataChangedEvent() { OnSelectedIndexChanged(EventArgs.Empty); if (_addedNewElements) OnNewItemsAdded(EventArgs.Empty); }
NewItemsAdded is a custom server-side event that is fired right after SelectedIndexChanged and before the postback event caused by the submit control:
public event EventHandler NewItemsAdded;
With the control's implementation discussed so far, when the Page_Load event is fired to the page the dropdown list control has been fully rebuilt to reflect the changes on the client—the new items are now definitely part of the control's state.
The Extended Object Model
When the dropdown list is displayed on the client, it takes the form of a <SELECT> tag. The DHTML object model designs a tree of objects around these elements and gives you the tools to add or remove items programmatically via JavaScript. The following code shows what you really need to execute:
var oOption = document.createElement("OPTION"); element.options.add(oOption); oOption.innerText = text; oOption.value = id;
By employing a DHTML behavior, you can wrap the previous code in a new method that's easier to use. Figure 5 details my dropdownlistex.htc behavior. It contains a Boolean property to enable support for client-side insertions as well as a client-side method named AddItem. The code for this method actually extends the DHTML tree for a regular dropdown element and simplifies the insertion of a new item. When attached to a client-side button, the following code adds a new item to the specified dropdown list entirely on the client. As you can see, it leverages the AddItem method defined on the behavior:
<SCRIPT lang="javascript"> function InsertTheNewItem() { var obj = document.getElementById("NewElement"); var text = obj.value; var id = obj.value; document.getElementById("DropDownList2").AddItem(text, id); } </SCRIPT>
Figure 5 The DropDownListEx.htc Behavior
<PROPERTY NAME="Modifiable" /> <METHOD NAME="AddItem" /> <ATTACH EVENT="onreadystatechange" HANDLER="Init" /> <script language="javascript"> // Handles the initialization phase function Init() { if (Modifiable == null) Modifiable = false; } // Add a new item programmatically function AddItem(text, id) { if (!eval(Modifiable)) return false; var oOption = document.createElement("OPTION"); element.options.add(oOption); oOption.innerText = text; oOption.value = id; var hiddenField = GetHiddenField(element.id + "_My"); // Add a separator var tmp = hiddenField.value; if (tmp != "") hiddenField.value += ","; hiddenField.value += text + "|" + id; } function GetHiddenField(fieldName) { // Go up in the hierarchy until the FORM is found var obj = element.parentElement; while (obj.tagName.toLowerCase() != "form") { obj = obj.parentElement; if (obj == null) return null; } if (fieldName != null) return obj[fieldName]; } </script>
Summing It Up
DHTML behaviors are Internet Explorer client-side components that encapsulate a given behavior and attach it to an HTML element. From the ASP.NET perspective, you can utilize these components to enrich server controls with advanced, browser-specific capabilities. It is important to realize that DHTML behaviors are not strictly necessary to endow server controls with powerful client capabilities; their use, however, makes it possible to better encapsulate all of the required features in a reusable and easily accessible object model.
In addition to learning the internal mechanics of DHTML behaviors, you should become familiar with the postback interfaces of ASP.NET controls—in particular, IPostBackDataHandler. This interface lets developers handle any posted data whose format and layout is entirely up to you. A deep understanding of this interface is key to implementing effective interaction between browser and server environments within the boundaries of ASP.NET controls.
Send your questions and comments for Dino to [email protected].
Dino Esposito is a Wintellect instructor and consultant based in Italy. Author of Programming ASP.NET and the new book Introducing ASP.NET 2.0 (both from Microsoft Press), he spends most of his time teaching classes in ASP.NET and ADO.NET and speaking at conferences. Get in touch with Dino at [email protected] or join the blog at weblogs.asp.net/despos. | https://docs.microsoft.com/en-us/archive/msdn-magazine/2005/july/cutting-edge-dhtml-enabled-asp-net-controls | CC-MAIN-2019-51 | refinedweb | 4,172 | 55.03 |
Serve production static files with Flask.
Project DescriptionRelease History Download Files
Serve production static files with Flask. A Flask version of DJ-Static.
Usage
$ pip install fl-static
Configure your static assets in:
from flask import Flask app = Flask(__name__) app.config.update( STATIC_ROOT='static', STATIC_URL='/static', )
Then, use the Cling middleware:
from fl_static import Cling app = Cling(app)
You can use Werkzeug’s local development server:
app.run()
Jinja2 Magic
fl_static.Jinja2Magic extends static.StringMagic providing Jinja2 template support.
You can see it in action in example.py.
License
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/fl-static/ | CC-MAIN-2018-05 | refinedweb | 111 | 54.08 |
Question:
I was looking at the code below from stanford library:
void recursiveReverse(struct node** head_ref) { struct node* first; struct node* rest; /* empty list */ if (*head_ref == NULL) return; /* suppose first = {1, 2, 3}, rest = {2, 3} */ first = *head_ref; rest = first->next; /* List has only one node */ if (rest == NULL) return; /* put the first element on the end of the list */ recursiveReverse(&rest); first->next->next = first; /* tricky step -- see the diagram */ first->next = NULL; /* fix the head pointer */ *head_ref = rest; }
What I don't understand is in the last recursive step for e.g if list is 1-2-3-4 Now for the last recursive step first will be 1 and rest will be 2. So if you set *head_ref = rest .. that makes the head of the list 2 ?? Can someone please explain how after reversing the head of the list becomes 4 ??
Solution:1
Draw out a stack trace...
Intial - {1,2,3,4} Head - 1 Rest = 2,3,4 Recurse(2,3,4) Head = 2 Rest = 3,4 Recurse(3,4) Head = 3 Rest = 4 Recurse (4) Head = 4 Rest = null //Base Case Reached!! Unwind. So now we pick up Recurse(3,4) Head = 3 Rest = 4 // Return picks up here first->next->next = first; so list is: 3,4,3 // set head to null, null ,4,3, //Off with his head! 4,3 Return Now we're here Recurse(2,3,4) Head = 2 Rest = 3,4 Previous return leaves state as: Head = 2 //But Head -> next is still 3! -- We haven't changed that yet.. Rest = 4,3 Head->next is 3, Head->next->next = 2 makes the list (actually a tree now) 4->3->2 ^ | 2 And chop off the head leaving 4->3->2 and return. Similarly, do the last step which will leave 4->3->2->1 ^ | 1 and chop off the head, which removes the one.
Solution:2
Consider the list:
1 -> 2 -> 3 -> 4 -> NULL ^ ^ | | first rest
Where
first points to the first node and rest points to the node next to
first.
Since the list is not empty and list does not contain one node we make recursive call to
reverse to reverse the list pointed to by
rest. This is how the list looks after reversing the rest of the list:
1 -> 2 <- 3 <- 4 ^ | ^ | NULL | first rest
As seen
rest now points to the reversed list which has
4 at the beginning and
2 at the end of list. The next pointer of node
2 is
NULL.
Now we need to append the first node to the end of the reversed-rest list. To append anything to the end of the list we need to have access to the last node of the list. In this case we need to have access to the last node of the reversed-rest list. Look at the diagram,
first -> next points to the last node reversed-rest list. Therefore
first -> next -> next will be next pointer of the last node of the reversed-rest list. Now we need to make it point to
first so we do:
first -> next -> next = first;
After this step the list looks like:
1 <- 2 <- 3 <- 4 ^ -> ^ | | first rest
Now the
next field of the last node of the list must be
NULL. But it is not the case now. The
next field of the last node ( node
1) is pointing to the node before it ( node
2). To fix this we do:
first -> next = NULL;
After this the list looks like:
NULL <- 1 <- 2 <- 3 <- 4 ^ ^ | | first rest
As seen the list is now correctly reversed with
rest pointing to the head of the reversed list.
We need to return the new head pointer so the that changes are reflected in the calling function. But this is a
void function and
head is passed as double pointer so changing the value of
*head will make the calling function see the changed head:
*head = rest;
Solution:3
The rest isnât
2, itâs
2 -> 3 -> 4, which gets reversed recursively. After that we set
*head_ref to
rest, which is now (recursively reversed!)
4 -> 3 -> 2.
The important point here is that although both
first and
rest have the same type, i.e.
node*, they are conceptually fundamentally different:
first points to one single element, while
rest points to a linked list of elements. This linked list is reversed recursively before it gets assigned to
*head_ref.
Solution:4
I recently wrote a recursive method for reversing a linked list in ruby. Here it is:
def reverse!( node_1 = @head, node_2 = @head.link ) unless node_2.link node_2.link = node_1 @head = node_2 return node_1 else return_node = reverse!(node_1.link, node_2.link) return_node.link = node_1 node_1.link = nil return node_1 end return self end
Note:If u also have question or solution just comment us below or mail us on [email protected]
EmoticonEmoticon | http://www.toontricks.com/2019/02/tutorial-linked-list-recursive-reverse.html | CC-MAIN-2019-30 | refinedweb | 813 | 77.47 |
SYNOPSIS
#include <numaif.h>
long move_pages(int pid, unsigned long count, void **pages,
const int *nodes, int *status, int flags);
DESCRIPTION
move_pages() moves the specified pages of the process pid to the memory
nodes specified by nodes. The result of the move is reflected in sta-
tus. The flags indicate constraints on the pages to be moved.
pid is the ID of the process in which pages are to be moved. To move
pages in another process, the caller must be privileged (CAP_SYS_NICE)
or the real or effective user ID of the calling process must match the
real or saved-set user ID of the target process. If pid is 0 then
move_pages() moves pages of the calling process.
count is the number of pages to move. It defines the size of the three
arrays pages, nodes, and status.
pages is an array of pointers to the pages that should be moved. These
are pointers that should be aligned to page boundaries. Addresses are
specified as seen by the process specified by pid.
nodes is an array of integers that specify the desired location for
each page. Each element in the array is a node number. nodes can also
be NULL, in which case move_pages() does not move any pages but instead
will return the node where each page currently resides, in the status
array. Obtaining the status of each page may be necessary to determine
pages that need to be moved.
status is an array of integers that return the status of each page.
The array only contains valid values if move_pages() did not return an
error.
flags specify what types of pages to move. MPOL_MF_MOVE means that
only pages that are in exclusive use by the process are to be moved.
MPOL_MF_MOVE_ALL means that pages shared between multiple processes can
also be moved. The process must be privileged (CAP_SYS_NICE) to use
MPOL_MF_MOVE_ALL.
Page states in the status array
The following values can be returned in each element of the status
array.
0..MAX_NUMNODES
Identifies the node on which the page resides.
-EACCES
The page is mapped by multiple processes and can only be moved
if MPOL_MF_MOVE_ALL is specified.
-EBUSY The page is currently busy and cannot be moved. Try again
.
EACCES One of the target nodes is not allowed by the current cpuset.
EFAULT Parameter array could not be accessed.
EINVAL Flags other than MPOL_MF_MOVE and MPOL_MF_MOVE_ALL was specified
or an attempt was made to migrate pages of a kernel thread.
ENODEV One of the target nodes is not online.
ENOENT No pages were found that require moving. All pages are either
already on the target node, not present, had an invalid address
or could not be moved because they were mapped by multiple pro-
cesses.
EPERM The caller specified MPOL_MF_MOVE_ALL without sufficient privi-
leges (CAP_SYS_NICE). Or, the caller attempted to move pages of
a process belonging to another user but did not have privilege
to do so (CAP_SYS_NICE).
ESRCH Process does not exist.
VERSIONS
move_pages() first appeared on Linux in version 2.6.18.
CONFORMING TO
This system call is Linux-specific.
NOTES
for information on library support, see numa(7).
use get_mempolicy(2) with the mpol_f_mems_allowed flag to obtain the
set of nodes that are allowed by the current cpuset. Note that this
information is subject to change at any time by manual or automatic
reconfiguration of the cpuset.
Use of this function may result in pages whose location (node) violates
the memory policy established for the specified addresses (See | http://www.linux-directory.com/man2/move_pages.shtml | crawl-003 | refinedweb | 584 | 74.19 |
note Xiong <p>Amazing how difficult it is for me to express myself precisely on this issue! I thought an example would be explicit. Sorry. Please allow me to attempt to improve. </p> <p>I'm well acquainted with CPAN Search; it's a great tool. I'm not sure how much an ordinary dev will want to use <c>Parse::CPAN::Modlist</c>; but that's cool. Both of these are solutions to the problem of <i>I want to download a module that does This.</i> They are statements of <b>what is</b>. This is a common problem and an important one; but it's not the issue I'm attempting to address. </p> <p>The table I <i>seek</i> is not descriptive but <b>prescriptive</b>. This is where the markers for deprecated namespaces come in.* This table goes no further than the top level of namespaces, perhaps some well-defined second levels (e.g., <c>CGI::Application::</c>). It's a statement of <b>what should be</b>. </p> <p>Reviewing the descriptions of existing modules is indeed a way to infer prescriptions for future efforts, I admit. By the same token, one course of bricks is a guide to placing the next. Yet, if we want a wall to stand neatly, we generally use a plumb line. </p> <p>I realize that I touch upon philosophical issues here.** My feeling is only that I will be more comfortable writing modules -- even modules that may never escape from my own project -- that are named in conformance with some plan. </p> <p>I'm coming rapidly to the belief that no such table exists (I'd like to be wrong!) and that creating it is a worthwhile project. I wrote the original node in hopes that Monks might suggest a good way to get it started. </p> <p>*<readmore title="Deprecated Namespaces"> I don't personally <i>assert</i> that all modules currently under <c>CLI::</c> should have fallen under <c>Getopt::</c> or that all CLI-related modules should be created under the latter; but it's the kind of assertion that someone with more experience ought to be able to make. </p> <p>I do assert that no module should be created under the <c>Data::</c> namespace. I'm sure vigorous arguments can be mounted in support of it but I side with [petdance] that <b>data</b> is the [link://'s worst identifier]. All modules manipulate data. </p> </readmore> <p>**<readmore title="Philosophical Discursion"> <b>Description vs Prescription</b> is an old war in pedagogy, particularly among teachers of [wp://Disputes in English grammar|English grammar]. At one time, prescriptive grammar was the unchallenged model, leading to textbooks insisting on awkward constructions and forbidding much common usage. The descriptive reaction reached its nadir with [wp://Oakland Ebonics controversy|Ebonics]. </p> <p". </p> <p.) </p> <p>The middle ground is often best. I would suggest that a writer of Perl, being an engineer, might be better served by prescription than the author of an English novel. Some will say both are artists and I don't disagree. </p> </readmore> 821356 821370 | http://www.perlmonks.org/?displaytype=xml;node_id=821563 | CC-MAIN-2018-05 | refinedweb | 530 | 64.61 |
Alarm tutorial
This tutorial is for an alarm application that uses a simple countdown mechanism. It relies on two input buttons to set, activate and silence the alarm. During the countdown, the device is in sleep mode. When the countdown ends and the alarm triggers, an LED and a digital out pin go high. They go back to low when the alarm is reset.
The LEDs provides some feedback to the user: when setting the alarm, the LEDs blink to show the input was recognised. When the alarm is fully set, the LEDs blink the configured delay once, before letting the device go into sleep mode.
Tip: You can complete this tutorial with the Mbed Online Compiler or Mbed CLI.
Import the example application
If using Mbed CLI, use the
import command:
mbed import mbed-os-example-alarm cd mbed-os-example-alarm
If using the Online Compiler, click) { ThisThread::sleep_for(10); } // Once the delay has been input, blink back the configured hours and // minutes selected for (uint8_t i = 0; i < hour_count * 2; i++) { hour_led = !hour_led; ThisThread::sleep_for(250); } for (uint8_t i = 0; i < min_count * 2; i++) { min_led = !min_led; ThisThread::sleep_for(250); } // Attach the low power ticker with the configured alarm delay alarm_event.attach(&trigger_alarm_out, delay); // Sleep in the main thread while (1) { sleep(); } }
Compile and flash to your board
To compile the application:
- If using Mbed CLI, invoke
mbed compile, and specify the name of your platform and toolchain (
GCC_ARM,
ARM,
IAR). For example, for the ARM Compiler 5 and FRDM-K64F:
mbed compile -m K64F -t ARM
- If using the Online Compiler, click the Compile button.
Your PC may take a few minutes to compile your code.
Find the compiled binary:
- If using the Online Compiler, the compiled binary will be downloaded to your default location.
- If using Mbed CLI, the compiled binary will be next to the source code, in your local copy of the example.
Connect your Mbed device to the computer over USB.
Copy the binary file to the Mbed device.
Press the reset button to start the program.
Use the alarm
The alarm isn't set to a timestamp; it counts down from the moment it's activated. So to set the alarm, specify the countdown duration:
- Press Button1 for the number of desired hours to delay.
- Press Button2 to cycle to minutes, and repeat the previous step for the number of desired minutes.
- Press Button2 again to start the alarm.
- Press Button2 again once the alarm triggers to silence it.
Extending the application
You can set the alarm to a specific time by relying on either the platform's RTC or the time API. You will need to set the time on each reset, or rely on an internet connection and fetch the time.
Troubleshooting
If you have problems, you can review the documentation for suggestions on what could be wrong and how to fix it. | https://os.mbed.com/docs/mbed-os/v6.2/apis/drivers-tutorials.html | CC-MAIN-2020-34 | refinedweb | 483 | 62.48 |
(Very similar to bug 8903, but using inheritance instead of fields.)
Steps to reproduce:
1. Create a library called "Baz" with this class:
namespace Baz {
public class BazClass {
public BazClass () {
}
}
}
2. Create a library called "Bar" with this class:
namespace Bar {
public class BarClass : BazClass {
public BarClass () {
}
}
}
(And compile it linking to Baz.dll)
3. Create a console program "Foo" with this class:
using System;
namespace Foo {
class MainClass {
public static void Main (string[] args) {
Console.WriteLine ("begin");
try {
new FooClass ();
}
catch (Exception ex) {
Console.WriteLine ("WTF: " + ex.ToString ());
}
Console.WriteLine ("end");
}
public class FooClass : BarClass {
public FooClass () {
}
}
}
}
(And compile it linking to Bar.dll)
4. Compile Foo.
5. Go to Foo/bin/Debug, and delete Baz.dll.
6. Run Foo (mono Foo.exe)
Current results: TypeLoadException
Expected results: FileNotFoundException.
I've got the patch almost ready, will send a pull request soon.
Created attachment 3396 [details]
Proposed patch
Meh, I was convinced this patch would work, but it doesn't.
I'll debug the thing tomorrow...
The patch proposed is obsolete, I've finally fixed it, with this pull request:
With the above patch, the result of performing the steps to reproduce are:'
[ERROR] FATAL'
I don't think you can rely on the exception been TLE or FNFE.
chatlog for the record:
<kumpera> knocte: ping
<knocte> kumpera, pong
<kumpera> regarding
<kumpera> why is it a problem that it's throw TLE and not FNFE?
<knocte> kumpera: because it can be confusing, as sometimes the type highlighted in the TLE is not one inside the assembly that is missing, example:
<kumpera> oh, ok
<kumpera> makes sense
The way to fix this is not to increase the reliance on the thread local loader error. Experience has shown that using it is very tricky and can quite easily lead to working stuff be though as been broken because one piece of code forgot to check for errors.
Said that, the way is to wire up MonoError all around the loader functions so we can properly bubble up the original one and move toward removing the loader error stuff from the runtime.
Thanks Rodrigo. I would like to paste the last part of the chat-log about this, to make myself aware (in case in the future I feel like working on this) of the scary estimate:
<kumpera> knocte: anyways, it's a year+ effort
<kumpera> because we can't neither justify getting someone to work on this full time
<kumpera> nor do I wish someone working on this fulltime | https://bugzilla.xamarin.com/show_bug.cgi?id=10354 | CC-MAIN-2016-30 | refinedweb | 418 | 68.1 |
Introduction
Routing is arguably the most important part of any web application. Good routes give you many advantages:
- Hackable urls
- Search engine optimization
- make clear relationships
These are only a few of the advantages. If great routes are so important, then why has Microsoft given so little guidance as to how to proceed. Instead they give you this:
public class RouteConfig { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } }
A tiny example of how routes function; a catch all that will undoubtedly leave you frustrated. They might as well drop you into the wilderness with two sticks, and hope you make it out.
There has to be a better way? What if there was an opinionated way to add specific routes to your application? It would insure that each controller action in your ASP.Net MVC application was routed uniquely and gave you all the advantages mentioned above. In addition, it would give you the freedom to add any routes you needed.
Well luckily there is Ruby On Rails' resource routing... well something like it for ASP.Net MVC.
Say Hello to RestfulRouting!
Background
RestfulRouting is based on Ruby on Rails resource routing. It leverages most of the same opinions. The major opinion that needs to be understood is the 7 actions: Index, Show, New, Create, Edit, Update, Destroy. By default, all controllers will have some subset of these 7 actions.
Another important opinion is the difference between a resource and a set of resources. The difference being that a resource is a set of one item, where as resources are many. The view of a resource vs. resources is from the vantage point of the user. For example a user has an account (resource), while a user might have many orders (resources).
/account <- account#show (resource) /account/orders <- orders#index (resources) /account/orders/1 <- orders#show (resources)
How to Use
RestfulRouting is available on Nuget, and has a great documentation site to help answer all your questions.
PM> Install-Package RestfulRouting
Install the package through Nuget, and you are ready to go. Let's see what a standard mapping of routes looks like.
using System.Web.Routing; using RestfulRouting; [assembly: WebActivator.PreApplicationStartMethod(typeof(MvcApplication1.Routes), "Start")] namespace MvcApplication1 { public class Routes : RouteSet { public override void Map(IMapper map) { // 1. map route debugger map.DebugRoute("routedebug"); // 2. map root "/" map.Root<ProductsController>(x => x.Index()); // 3. map products resources map.Resources<ProductsController>(); } public static void Start() { var routes = RouteTable.Routes; routes.MapRoutes<Routes>(); } } }
So what did we get with the addition of this code? We got a total of 9 routes added to our application.
- We map the route debugger, this is optional, but a great way to see what your routes are, how they are ordered, and what parameters you need to pass into to resolve them.
- We are mapping the root of the site. This is the first action that will be hit when users are visiting your site.
- We are mapping the 7 actions to the products controller.
let's make sure we got that by looking at the route debugger. We will also see what the routes look like.
How awesome is that?! Look at those beautiful routes. Notice how we have a lot taken care of for us.
- HttpMethods are set properly
- Path contains the name "products" in it.
- Each action is set to map to an action on the controller.
- parameters are added to the path automatically.
What about a complex scenario like a blog. How do you map something with the relationships of a system? Let's look at that sample now.
public override void Map(IMapper map) { map.DebugRoute("routedebug"); map.Root<HomeController>(x => x.Index()); map.Resources<BlogsController>(blogs => { blogs.As("weblogs"); blogs.Only("index", "show"); blogs.Collection(x => x.Get("latest")); blogs.Resources<PostsController>(posts => { posts.Except("create", "update", "destroy"); posts.Resources<CommentsController>(c => c.Except("destroy")); }); }); }
We can do a lot with RestfulRouting.
- We can specify the name of the route.
- We can explicitly specify what actions are mapped.
- We can add additional actions when we want to break convention
- We can nest a resource / resources
Let's look at the routes we get.
Notice we get beautifully nested resources. The name of the blogs controller was also changed to "weblogs". We get all that with a few lines of RestfulRouting code.
Conclusion
There is so much that RestfulRouting can do. The greatest advantage I've found, is it changes my mindset as to what a controller should be doing. I think about how small I can make each controller. It also puts routing in one spot, rather than dispersed across your codebase. Keep in mind, RestfulRouting doesn't stop you from routing any way you'd like, it only enhances the experience of mapping routes. I urge you to give it a shot, it will only make your experience with ASP.Net MVC that much more enjoyable. If you have any question about RestfulRouting there is documentation at. I am also available to answer any questions. Good luck and route, route, route! :) | http://tech.pro/tutorial/1193/using-restfulrouting-with-aspnet-mvc | CC-MAIN-2014-15 | refinedweb | 864 | 68.47 |
import -.
This module defines only one method, import(), as this is the module you are technically 'use'ing in your code.
No internal methods are defined.
No variables are exported.
No internal variables are defined.
The value returned by executing the package is 1 (or true).
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Carp, DirHandle
This pragma removes itself from the %INC hash, allowing it to be 'use'd again.
mak - Michael King ( [email protected] )
import.pm v1.01 10/10/99 mak
1.01 first posting to CPAN.
The latest version of this module is likely to be available from:
The best place to discuss this code is via email with the author. | http://search.cpan.org/~mikeking/import-1_01/import.pod | crawl-002 | refinedweb | 128 | 77.53 |
C API of mxnet for ease of testing backend in Python. More...
#include <mxnet/c_api.h>
Go to the source code of this file.
C API of mxnet for ease of testing backend in Python.
Inhibit C++ name-mangling for MXNet functions.
This API partitions a graph only by the operator names provided by users. This will attach a DefaultSubgraphProperty to the input graph for partitioning. This function should be used only for the testing purpose.
Given a subgraph property name, delete the op name set in the SubgraphPropertyOpNameSet.
Given a subgraph property name, use the provided op names as the op_names attribute for that subgraph property, instead of the predefined one. This is only for the purpose of testing. | https://mxnet.apache.org/versions/1.5.0/doxygen/c__api__test_8h.html | CC-MAIN-2022-40 | refinedweb | 120 | 68.06 |
Henri Sivonen wrote a post entitled Vendor Prefixes Are Hurting the Web and Daniel Glazman wrote a response, which sparked quite a debate. So what do our experts think?
Chris Coyier
chriscoyier.net
Would we be better off now if browsers never used vendor prefixes? No. There would be less innovation, more bugs, and inconsistent implementations of features.
Would we be better off in the future if browsers stopped using vendor prefixes right now? No. Same issues as above, but also there are too many authors using them and too many released browsers that support them to go cold turkey and see any benefit.
To make things better, the answer is to keep raising awareness as to their proper use and encourage authors to keep their work up to date.
Chris is a web designer working at Wufoo
I agree that writing out the vendor-prefixed properties is annoying, but what’s the alternative? If we have to wait for browser makers to agree it will take forever, and if they were to implement competing implementations before a final spec we simply wouldn’t be able to use them. Vendor prefixes let browser makers nudge the standards groups into finalising things, and we get to play with them in the meantime. I’ll use what I can get my hands on, and I’d rather have something somewhat annoying than nothing at all.
Jonathan is a design lead at Zurb
Margaret Manning
I disagree with most if not all of Henri Sivonen’s post. “Vendor prefixes are hell for web authors and authoring tool vendors”. Speaking as a web author I think this is nonsense. I choose to use CSS prefixes when I think they are beneficial to a project, they are not forced upon me by browser vendors.
“Vendor Prefixes are Hurting Browser Users and Competition”. Again I disagree, if anything I think prefixes help browser competition and push uptake of new standards across browsers. I think one major browser implementing a host of new features (even if experimental) drives other major browser vendors to up their game and start supporting cutting edge features too.
As for browser vendor demos, obviously Safari’s demos use only webkit prefixes – they are trying to get people to use Safari! The thing is, for most people, using another browser than the one that they are currently using doesn’t even enter their thoughts, I know countless people who don’t even bother updating the browser they have, let alone changing to the latest and greatest release of the moment. People who are more technically minded or in the web industry will immediately query why the Safari demos don’t work in other browsers, I myself view the source and inspect the CSS of almost every site I visit (but that just could be a unhealthy habit of mine!).
“I think when browser vendors implement a feature experimentally, the feature should stay in experimental builds (nightly builds, ‘labs’ builds or similar) until it is close enough to a final design that it can be taken as a constraint for future changes to the feature.”
This is one of the only points that Daniel Glazman agrees with, again I disagree with this idea. I think CSS vendor prefixes should be kept in stable releases of browsers as opposed to experimental builds, I feel this would mean that they would actually be used in the wild, this lets web authors get used to any differences that may exist between rendering engines for specific features and means there will be more people to complain about any inconsistencies and give feedback to browser vendors.
It’s also worth remembering that once a browser implements a non-prefixed feature that was prefixed in an earlier version, it will apply the a non-prefixed style if it appears after the a prefixed version. So if for example browser X v.4 did funky things with X-border-radius, v.5 could have the correct implementation and dropped the prefix, so both V.4 and V.5 can enjoy nice round corners.
Who needs to act? Some major browser vendors need to shorten their release cycles, I’m looking at you Internet Explorer!
Margaret Manning is CEO of web design and digital marketing agency Reading Room
Simon Holywell
Vendor specific prefixes are a great way to expose experimental features to developers and end users without infringing upon the CSS namespace set out in the specifications. They allow us to try out new features and gather vital feedback for browser vendors from us and our users, which may help to improve the final specification for a given property.
Equally, however, it’s painful to have to update a number of different vendor prefixed properties that affect the same aspect. Another potential problem is that once formalised in the specification should the vendor specific prefix support for that property be dropped or maintained to preserve backwards compatibility?
There is no silver bullet, but I think that the vendor specific prefixes solve the issue we are facing right now. Sure there are problems, but I am yet to see a better approach in practice. If you do not like them then avoid implementing experimental CSS properties that have yet to reach candidate recommendation status with the W3C.
Simon is lead developer at Brighton agency Mosaic
Daniel Bramhall
bramhall.me
I believe that vendor prefixes are actively harming web standards and make developing for the web unnecessarily voluble.
How can one develop for the web efficiently when to declare a simple border radius, three different declarations are needed (and that’s just to support three out of the dozens of widely used, popular browsers)?
Daniel is a Mac OS X and iOS interface and application developer
Elliott Fienberg
auditory.ca
Browser prefixes sure are a pain to work with, but if it means that we’re able to use new features as they’re developed, I don’t have a huge beef. Of course this kind of apathy isn’t helping the situation, but there doesn’t seem to be a clear cut solution that we can all get behind anyways.
Elliott is fighting the war on bad music with his cold, dead hands
David White
No, CSS prefixes aren’t hurting the web but allow us to use new CSS features before a browser has fully implemented them. I treat vendor prefixes as CSS features still in beta, prefixes such as -webkit or -moz allow browsers to provide experimental features without adding them directly to the global namespace. This allows developers to use these features to target specific browsers where the same feature in a different browser may cause an undesirable effect.
David White is a website designer and developer currently working at the BBC
Paul Rhodes
They’re both good and bad and will continue to divide opinion depending upon who you are. Web developers have come to accept (in part) the CSS prefixes of webkit, mozilla and so on. So thumbs down for convention and ease of use. But they are essential to vendors to drive innovation and experimental features. The dogma surrounding web standards stifles innovation. Thanks to the vendors for making use of these prefixes otherwise we’d still be creating rounded boxes as graphics while page one of the standards is written (in draft of course). However, it’s a short-term solution and there is a limit, we don’t want too many vendor prefixes to remember and code. I guess the bottom line is for standards to reign supreme they must be quicker to the party.
Paul is founder of 22 Blue (22blue.co.uk)
Rob Hampson
robhampson.co.uk
Like Ewoks, vendor prefixes are plentiful and mildly annoying. I wouldn’t say they were massively hurtful. For example, using multiple prefixed syntaxes just to implement a border radius is a pain, but is it harmful? More often than not prefixed syntaxes are experimental features that haven’t made the grade yet. I say keep them in controlled dev/beta versions. When a universal syntax is justified and included, ship in a stable public release.
Rob Hampson designs for web and print and also creates illustrations
Robert Pataki
heartcode.robertpataki.com
Personally I don’t think they’re hurting much. They are experimental features that should be handled with care. If you are about to deliver web content you can always decide if you go for richer experimental stuff that isn’t widely supported or use old methods to make sure everyone gets the same old thing.
If the lack of the implementation doesn’t affect the presented information I believe it’s fine. It’s all about decisions we have to make anyway as we still have to support old browsers in other ways.
To me prefixes mean enhancement, experiment and progress. We already see how they affected browser vendors to speed up their release cycles and force each other (and the users) to keep up with the latest technologies. I also believe that prefixes help W3C and browser vendors to reach the standardisation milestones faster and highlight important issues much easier.
Robert is a developer at Waste Creative
Andy Pike
1minus1.com
Vendor prefixes are seen by some as a messy, non-standard solution that hurts the web. I disagree. In fact, I believe they have helped move the web forward. Take the CSS3 standard border-radius declaration. This has been available since FireFox 1 as -moz-border-radius. Developers haven’t had to wait for the official standard to be released to use it. The border-radius declaration only arrived in IE9 in 2011, should we have waited 6 years? This is where progressive enhancement improves UI, which in turn drives standards.
Andy is technical director of 1minus1 Limited
Joe Lambert
joelambert.co.uk
No, vendor prefixes give developers a way to test drive experimental features in real world scenarios before they are standardised. Without prefixes we wouldn’t be able to create the kind of compelling mobile web apps that are possible today. Also, with tools such as Compass and SASS the argument of having to “say the same thing in a different way to each browser” in practical terms is a non issue.
Joe is the creator of Flux Slider
Steve Cross
theartfulbadger.co.uk
We’ve been beating the ‘progressive enhancement’ drum for a long time now, I’m surprised that we’re not celebrating vendor prefixes as a way of providing cutting edge functionality while also allowing old or ‘late to the party’ browsers to tag along too.
Clearly, the code isn’t as tidy and I’m always thrilled when I know that another browser is accepting non-prefixed styling but there are a number of tools that help mediate the code bulk and the opportunity cost is one of innovation.
I suppose, if you want me to stop using vendor prefixes, I should also stop using pseudo elements given their patchy support in IE7? IE7’s still got a user-base I should cater for, right? Hell, why not build with tables – at least I know it renders in 6.
Building for the lowest common denominator stalls innovation and that harms us all. In a world without vendor prefixes, we’ve ground to a halt.
Steve is a frontend developer and co-founder of Rareloop | https://www.creativebloq.com/inspiration/big-question-are-vendor-prefixes-hurting-web-1126555 | CC-MAIN-2018-09 | refinedweb | 1,898 | 60.45 |
wat abt web.xml file....
awesome code
Post your Comment
Struts 2.1.8 Login Form
Struts 2.1.8 Login Form
In this section we will learn how we can create form based application... to
validate the login form using Struts 2 validator framework.
About the example
Struts 2 Login Form Example
Struts 2 Login Form Example tutorial - Learn how to develop Login form... you can create Login form in
Struts 2 and validate the login action....
Let's start developing the Struts 2 Login Form Example
Step 1
How To Develop Login Form In Struts
How To Develop Login Form In Struts
....
This
article will explain how to develop login form in struts. Struts adopts... click here!
In this tutorial you have learned how to develop simple login
Struts 2 Login Application
page in
the using Struts 2 framework.
Develop Login Form...;
Developing Struts 2 Login Application
In this section we are going to develop... let's develop the action class to handle the login request. In Struts 2 using Ajax
;/action>
Develop a Login Form Using Ajax : The GUI of the
application...Login Form using Ajax
This section provides you an easy and complete
implementation of login form
Login Form
Login Form
Login form in struts: Whenever you goes to access data from... for a
login form using struts.
UserLoginAction Class: When you download Login
login application - Struts
of Login and User Registration Application using Struts Hibernate and Spring.
In this tutorial you will learn
1. Develop application using Struts
2. Write...login application Can anyone give me complete code of user login
code for login fom - Struts
code for login fom we have a login form with fields
USERNAME:
In this admin can login and also narmal uses can log
Login/Logout With Session
;}
}
Download this code.
Develop Login Form: The GUI...;
In this section, we are going to develop a login/logout
application with session... class that handles the login request.
The Struts 2 framework provides a base
Login Form
of +,on clicking which a login form appears.
Only the middle column is different for each of the page.
How to write a code for login authentication so...Login Form I have 8 jsp pages.Each of them has three columns:Left
login i want to now how i can write code for form login incolude user and password in Jcreator 4.50
Hello Friend,
Visit Here
Thanks
develop structs application - Struts
develop structs application hi
i am new to structs,how to develop... me Hi friend,
For more information,Examples and Tutorials on Struts visit to :
Spring login form application
Spring login form application Hi Please give me the Spring login form application so that i can understand its flow ?
Hi please find the link of the tutorial
Spring Applications
Develop user registration form
User Registration Form in JSP
... registration form which will
insert the entered data into the database... will
be inserted into the database. Firstly the registration form
login form - JSP-Servlet
login form Q no.1:- Creat a login form in servlets? Hi...*;
import javax.servlet.http.*;
public class Login extends HttpServlet...();
pw.println("");
pw.println("Login");
pw.println("");
pw.println
Design, develop and test JSPs
Design, develop and test JSPs...;
Design, develop and test JSPs
Creating....
If you want to create a JSP file that uses Struts architecture, select
Developing Struts Hibernate and Spring Based Login/Registration Application
;
In this tutorial we will learn how to develop login...
In this section we will develop Login Form code for our application.
... frameworks e.g. Struts, Hibernate and Spring.
About this Login
re:web.xmlsunny April 5, 2011 at 4:02 PM
wat abt web.xml file....
commentharish January 26, 2012 at 2:20 PM
awesome code
Post your Comment | http://roseindia.net/discussion/19092-How-To-Develop-Login-Form-In-Struts.html | CC-MAIN-2013-48 | refinedweb | 628 | 57.77 |
Is it possible to write a single method total to do a sum of all elements of an ArrayList, where it is of type <Integer> or <Long>? I cannot just write public long total(ArrayList<Integer> list) and public long total(ArrayList<Long>
I have POJO class Student like this class Student { private int score; private String FirstName; //Getters and setters ................. } I am creating ArrayList like this public static void main(String[] args) { List<Student> al_students= new Arra
I am trying to sort List using java comparator the reference is import java.util.Comparator; import net.java.dev.wadl.x2009.x02.ResourceDocument.Resource; publi
How do you get all unique groups from contacts. I am already able to get all instances of groups but I guess I have more than one account on the device so I am receiving multiple instances of the same groups like Coworkers,Coworkers,Coworkers,Coworke code is from Java SCJP6. It's from the topic of The Comparable Interface from chapter 7 on Collections. In line 4 we are casting 'Object o' to DVDInfo type. I don't understand this. Why are we casting it as to DVDInfo? class DVDInfo implements C
I'm running the following code, but getting error The method trimToSize() is undefined for the type List<Integer> public class ListPerformance { public static void main(String args[]) { List<Integer> array = new ArrayList<Integer>(); Lis
So I have been creating a Word-Puzzle which I recently got stuck on a index out of bounds problem. This has been resolved however the program is not doing what I would like it to do. The Idea i that the test class will print 3 words in an array e.g.
public class Human { private int age; private float height; private float weight; private String name = new String(); public Human() { } public Human(String name, int age, float height, float weight) { this.name = name; this.age = age; this.height =
I've successfully parsed this data into my Android application. This JSON takes the format of [ {...}, {...}, {...} ] My issue is that I need to parse JSON in the format of { "count":3, "result":[ {...}
I Browse some Base adapter tuts and did this, I don't know how to get the contacts set to the list can any one explain what I missed Here is the code: class SingleRow{ String name; String number; int image; SingleRow(String name,String number){ this.
I am new to html/javascript This is my class: TravelObjects{ String sourceName; String lattitude; String longitude; ... getters and setters } I am recieving array of TravelObject class as server response whose variables I want to use on html side. Ho
This question already has an answer here: How to unserialize PHP Serialized array/variable/class and return suitable object in C# 3 answers In the picture you can see what is in ht, but how do i parse the values from ht.. its a php serialization that
Collections.sort(orderedStudents, new Comparator<Student>() { public int compare(Student s1, Student s2) { return s2.getAggregate().compareTo(s1.getAggregate()); } }); This is the method i used. --------------Solutions------------- The problem is th
Don't know why I'm getting this error. Working with ArrayLists and sorting words into appropriate ArrayLists by alphabetical order. If anyone can help me understand why I'm getting the error and how to fix it that would be great! import java.util.*;
I found that this worked :) I found a way to make it work. Instead of this: for(int i=0; i<alla.size(); i++){ if(alla.get(i).getClass().getName().equals("Aktie")){ alla.get(i).setKurs(0.0); } } I got this to work: for(Värdesak v : alla){ if(v
I am Rest Assured for testing REST API and have a case where I need to extract values (costCenterId and organizationId) from a GET request getTenantFloorManagers = given(authToken).when().get(“/costcenter/manager/floormanagers").asString(); that retu
I have a game that is based on a 9x9 grid array in which the user attempts to escape but at random positioning in the array there are blocks in which the user cannot move to or it will end the game. 3=user, 1=safe, 2=wall, 0=safezone. essentially I w
I am receiving input through stdin in the form of a line (there will be many many lines in the input), and what I want to do is take information from each line and store it into an arraybased list but I am having trouble with how to continue after a | http://www.dskims.com/tag/arraylist/ | CC-MAIN-2019-13 | refinedweb | 752 | 60.95 |
In Java final is the access modifier which can be used with a filed class and a method.
If you declare a variable as final, it is mandatory to initialize it before the end of the constructor. If you don’t, a compile-time error is generated.
public class Student { public final String name; public final int age; public void display(){ System.out.println("Name of the Student: "+this.name); System.out.println("Age of the Student: "+this.age); } public static void main(String args[]) { new Student().display(); } }
On compiling, this program generates the following error.
Student.java:3: error: variable name not initialized in the default constructor private final String name; ^ Student.java:4: error: variable age not initialized in the default constructor private final int age; ^ 2 errors
Generally, once you declare a local variable you need to initialize it before using it for the first time. If you try to use it without initialization then, you will get an error.
But, it is fine to declare a local variable final without initialization until you use it.
In the following Java program, we are declaring a local variable as final. Since we are not using it this gets compiled without errors.
public class Student { public static void main(String args[]) { final String name; } }
But, if you try to use it then, it will generate an error −
In the following Java program, we are declaring a local variable as final and using it
public class Student { public static void main(String args[]) { final String name; System.out.println(name); } }
On compiling, the above program generates the following error −
Student.java:4: error: variable name might not have been initialized System.out.println(name); ^ 1 error | https://www.tutorialspoint.com/why-final-variable-doesn-t-require-initialization-in-main-method-in-java | CC-MAIN-2020-50 | refinedweb | 286 | 56.86 |
The goto statement in C++ is used to transfer the control of program to some other part of the program. A goto statement used as unconditional jump to alter the normal sequence of execution of a program.
Syntax of a goto statement in C++
goto label; ... ... label: ...
- label : A label is an identifier followed by a colon(:) which identifies which marks the destination of the goto jump.
- goto label: : When goto label; executes it transfers the control of program to the next statement label:.
C++ goto Statement Example Program
#include <iostream> using namespace std; int main(){ int N, counter, sum=0; cout<< "Enter a positive number\n"; cin >> N; for(counter=1; counter <= N; counter++){ sum+= counter; /* If sum is greater than 50 goto statement will move control out of loop */ if(sum > 50){ cout << "Sum is > 50, terminating loop\n"; // Using goto statement to terminate for loop goto label1; } } label1: if(counter > N) cout << "Sum of Integers from 1 to " << N <<" = "<< sum; return 0; }Output
Enter a positive number 25 Sum is > 50, terminating loop
Enter a positive number 8 Sum of Integers from 1 to 8 = 36
In above program, we first take an integer N as input from user using cin. We wan to find the sum of all numbers from 1 to N. We are using a for loop to iterate from i to N and adding the value of each number to variable sum. If becomes greater than 50 then we break for loop using a goto statement, which transfers control to the next statement after for loop body.
- goto statement ignores nesting levels, and does not cause any automatic stack unwinding.
- goto statement jumps must be limited to within same function. Cross function jumps are not allowed.
- goto statement makes difficult to trace the control flow of program and reduces the readability of the program.
- Use of goto statement is highly discouraged in modern programming languages.
Uses of goto Statement
The only place where goto statement is useful is to exit from a nested loops.For Example :
for(...) { for(...){ for(...){ if(...){ goto label1; } } } } label1: statements;
In above example we cannot exit all three for loops at a time using break statement because break only terminate the inner loop from where break is executed. | https://www.techcrashcourse.com/2016/12/cpp-programming-goto-statement.html | CC-MAIN-2020-16 | refinedweb | 379 | 57.81 |
Introduction to Function Overloading in Python
In Python, function overloading is the ability of a function to behave in different ways depending on the number (or types) of parameters passed to it, such as zero, one, or two, according to how the function is defined. Function overloading provides code reusability, reduces complexity, and improves code clarity for the users who work with it. Function overloading in Python can be of two types: overloading built-in functions and overloading custom or user-defined functions. We will have a look at both of them in the sections below. Not every programming language supports function overloading, and Python does not support it directly in the classic C++/Java sense, but the same effect can be achieved, as we will see below.
Syntax:
In Python, we can define a method in such a way that it can be called with different numbers of parameters. Below is a function overloading example whose method can take either zero or one argument:
Code:
class World:
    def hello(self, name=None):
        if name is not None:
            print("Hello", name)
        else:
            print("Hello")

obj = World()
obj.hello()            # called with zero arguments
obj.hello("srinivas")  # called with one argument
In the above syntax example, we have created a class World with a method hello whose name parameter defaults to None, so the method can be called with or without an argument. We create an object obj of the class World and use it to call the method with zero or one argument. To see how function overloading works, we call the function with zero parameters as obj.hello() and with one parameter as obj.hello("srinivas"); the output of the above program is shown below. This example takes at most one extra argument, but the technique is not limited to that; a method can accept any number of parameters in this way.
Output:

Hello
Hello srinivas
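When the number of optional arguments grows, listing each one with a None default becomes unwieldy. Here is a minimal sketch of the same idea using *args, which collects any number of positional arguments into a tuple (the greeting logic is our own illustration, not part of the original example):

Code:

class World:
    def hello(self, *names):
        # *names collects zero or more positional arguments into a tuple
        if names:
            print("Hello", ", ".join(names))
        else:
            print("Hello")

obj = World()
obj.hello()                    # Hello
obj.hello("srinivas")          # Hello srinivas
obj.hello("srinivas", "dasu")  # Hello srinivas, dasu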
How Function Overloading Works in Python?
Let us see how overloading of functions behaves in Python using the example below. We define one function to calculate the area of a rectangle and another, with the same name, for a square.
Code:
def area(l, b):
    # area of a rectangle
    c = l * b
    return c

def area(size):
    # area of a square; this later definition rebinds the name 'area'
    c = size * size
    return c

print(area(4))     # works, calls the later definition: prints 16
print(area(5, 6))  # raises TypeError: the rectangle version no longer exists
Output:

16
Traceback (most recent call last):
  ...
TypeError: area() takes 1 positional argument but 2 were given
In Python, when we define two functions with the same name, only the function defined later remains valid, because the later definition simply rebinds the name. So when we execute area(4) it runs properly, but area(5, 6) raises an error saying that area() takes 1 positional argument but 2 were given. By default, function overloading is not available in Python, but it can be emulated using decorators or by defining functions with parameter default values set to None.
Let us have an example of function overloading by defining a function with default parameter values as None.
Code:
class Compute:
def area(self, x=None, y=None):
if x!=None and y !=None:
return x*y
elif x!=None:
return x*x
else:
return 0
obj = Compute()
print(obj.area())
print(obj.area(6))
print(obj.area(2,8))
Output:
In the above example, we have defined a class Compute with a function named the area where we have default parameter values as None so that the function can be called either with zero, one, and two parameters. If we have one argument then the area function will return zero as its output, if it has one parameter then the area function returns the square of the parameter, and if it has two parameters then the area function will return the product of two parameters as output. We have created an obj for the class Compute by which we can access the function area() with different parameters. Here we called obj.area() which gives output as 0, obj.area(6) gives the output as 36, and obj.area(2,8) gives output as 16 which is the product of 2 and 8.
Now we will see built-in function overloading with an example of overloading the len() function as below:
Built-in Function Overloading.
Code:
class Purchase:
def __init__(self, basket, consumer):
self.basket= list(basket)
self.consumer= consumer
def __len__(self):
return 10
purchase = Purchase(['pencil ', 'book '], 'python ')
print(len(purchase))
In the above example, we defined a class Purchase where it has constructor __init__ with parameters basket which is a list and consumer is a string variable and assigning to the self. We have a function __len__ which is overriding the built-in len() function of python. We have created an object purchase of class Purchase with parameters that will be initialized. When we execute the print statement in which we are calling len(purchase) will return 10 as we overloaded the built-in len() function of python. If it calls built-in len() function it will throw an error as an object doesn’t have any len() function.
Output:
The above program when function overloading successes:
The output of the above program, if we didn’t perform function overloading is an error:
TypeError: object of type ‘purchase’ doesn’t have len().
Examples of Function Overloading in Python
Let us have an example of a class student having a function hello with a parameter name having default value as None as below:
Example #1
Code:
class Student:
def sayHello(self, name=None):
if name is not None:
self.name = name
print("Hey, " , name)
else:
print("Hey")
stu = Student()
stu.sayHello()
stu.sayHello('dasu')
In the above example, we have a class Student with a function sayHello() where default parameter value is set to None so that the function can be called with either zero, one parameter but not limited to them. We have created an stu of class Student by which we can access the function sayHello(). While calling stu.sayHello() output of it will be “hey”, and while calling stu.sayHello(“dasu”) output of it will be “hey dasu”.
Output:
Example #2
Function overloading using overload decorator in python as below:
Code:
class Overloading:
@Overload
@signature()
def getMethod(self):
print("First method")
@getMethod.overload
@signature("int")
def getMethod(self,i):
print("Second method", i)
obj = Overloading()
obj.getMethod()
obj.getMethod(2)
In the above example, by using overload decorator function it ables to resolve between the function calling with an argument and function without an argument. We have declared an obj for class Overloading by which we can access its function.
Conclusion
Finally, it’s an overview of function overloading in python. So far we have discussed the definition of function overloading, the syntax of function overloading, how function overloading works in python, different types of function overloading like built-in function overloading and custom function overloading, and examples of function overloading. I hope after reading this article you will have a better understanding of function overloading in python, how to achieve function overloading in python in different ways.
Recommended Articles
This is a guide to Function Overloading in Python. Here we discuss a brief overview on Function Overloading in Python and its Examples along with its Code Implementation. You can also go through our other suggested articles to learn more – | https://www.educba.com/function-overloading-in-python/ | CC-MAIN-2020-24 | refinedweb | 1,180 | 60.45 |
Free "1000 Java Tips" eBook is here! It is huge collection of big and small Java
programming articles and tips. Please take your copy here.
Take your copy of free "Java Technology Screensaver"!.
JavaFAQ Home » Java Tools
IntelliJ IDEA become for me the number one tool not only in Java programming, but
also HTML code writing and validation. JSP code mostly produces different kind
of HTML pages and it is important to have all HTML code clean.
We used that HTML code can be bad, not properly formatted, tags can be opened
and never closed - modern browsers will swallow any HTML code and do the best...
I although thought like this until I realized that many problems with my
pages not due to errors in my Java code. No, it was bad HTML code. Strange
looking pages, different layout in different browsers. You probably say - yes,
yes I know, it is from browsers war. Not always really...
It has much to do with bad habits on HTML code writing . A la FrontPage
style...
Things changed when I started to use JIDEA! Not only layout, even speed of
page rendering increased dramatically when browsers switched from "quirks" to
"standards compliance" mode.
What is that - the "quirks" mode? And how it affects your JSP generated
pages? I will not give the answer right now, you will find it in one of my next
articles, because today we will look at IntelliJ IDEA new features.
The descriptions below is mostly from Idea's site. The list is not full,
mostly listed below is what I thought important for Java, especially JSP
developers.
IntelliJ IDEA 5.0 has introduced an unprecedented
level of support for JSP, including JSP-specific refactorings and support for
JSP 2.0. Additionally, IntelliJ IDEA combines all of the support for HTML, CSS,
and JavaScript into one seamless package, and provides full support for Java
code inside JSP scriptlets. It is truly the best JSP editor in the world.
All of IntelliJ IDEA's smart Java features, like
refactorings, intentions, inspections, etc., work in scriptlet and declarations
code:
JSP 2.0 support
JSP source-level debugging under WebSphere
Application Server
JSP code formatting
Optimize Imports in JSP
Structure view for JSP files
The support for coding HTML and XHTML in IntelliJ
IDEA 5.0 rivals, and in many cases surpasses, dedicated HTML tools. IntelliJ
IDEA's intelligence really shines, helping developers quickly navigate and code
HTML and XHTML in ways that are simply not possible with other tools.
Code completion for tags (including auto-insertion
of closing tags), attributes, styles, file references in hyperlinks, etc.
Find/highlight usages of tags, IDs, files, images, styles, etc.
Completion and validation for width and height attribute
values in img tags
Code formatting according to HTML-specific Code Style (defined via a dedicated
Code Style settings panel)
Matching brace highlighting + quick navigation to paired tag in HTML (Ctrl + ] / Ctrl + [)
Rename file, anchor, etc.
Move/Copy file
Safe Delete file
Show Applied Styles for Tag: Opens a tree-view of
all styles that are applied to the tag by CSS
Quick doc (Ctrl + Q and Shift + F1)
opens descriptions of HTML tags, attributes (with available values if defined),
etc., from the official W3C standard:
HTML Colors and Fonts settings
HTML Structure View
HTML and XHTML on-demand validation
Convert HTML into XHTML
Surround with tag
Show Content (Ctrl + Shift + I):
on a tag - shows schema
on a file - shows file content
on an image - shows image
IntelliJ IDEA 5.0's support for CSS enables
developers to quickly and easily edit HTML styles with code-completion, error
highlighting, finding references, and more. IntelliJ IDEA shows the power of an
IDE once again, eliminating the need for developers to wade through CSS
references all day, or to waste time debugging conflicting styles.
Missing or invalid closing braces
Invalid selector format
Invalid CSS properties
Unused CSS class definitions
Find/highlight usages
Quick doc (Ctrl + Q and
Shift + F1) opens description from the official W3C
standard
Unused CSS tags highlighting
Code Folding
Code formatting
CSS Structure view
Refactoring:
Rename (including the opportunity to
rename CSS file, class or ID attribute, etc., directly from within
HTML)
Extract inlined style block from HTML
into a CSS file
The new I18N support provides multiple
important features that make working on internationalization issues simple and
convenient. With enhancements introduced in version 5.1, IntelliJ IDEA can be
justly named the best Java I18N / L10N tool.
Auto-detection of resource bundles
Convenient multi-language resource-bundle editor
I18N intention actions for smart
internationalization of hard-coded String literals, both simple
and concateneted
String
@NonNls annotation & intention action for
excluding particular hard-coded strings from localization
I18N support for tag text in JSP files
Resource properties completion in Java references
Smart editing of .properties files
IntelliJ IDEA 5.0 has several new debugging
features, including some great breakpoint improvements to make debugging
smarter, so developers can find bugs faster with less manual effort.
Custom object display in debugger: Most
programmers write toString() methods so that objects are easier to browse and
inspect during debugging. IntelliJ IDEA 5.0 makes it even easier and more
powerful to customize any object display in the debugger, similar to the
Alternate Collections View feature. It's especially useful to help inspect
complex nested object structures.
JSP source-level debugging under WebSphere
Application Server (5.1 & 6.0)
Shortcut to setting up a logging breakpoint:
Select the expression to log, then Shift + Click in
the gutter
Dependent breakpoints: The second breakpoint
won't trigger unless the first breakpoint triggered
Muting breakpoints with one button
Smart step-into: Skip simple getters/setters
Force step-over, ignoring breakpoints
Drag and drop support for breakpoints
IntelliJ IDEA 5.0 is the first IDE to provide
serious support for JavaScript, introducing many of the useful features Java
developers have become accustomed to.
Code completion for JavaScript keywords, variables,
parameters and functions, including completion in HTML event handlers, etc.
Syntax and error highlighting, including on-the-fly
validation
Highlighting for escape sequences in String
literals
IntelliJ IDEA 5.1 enhances its support for J2EE
development with several new features:
Detection of unbound namespace prefix in JSP and
XML
Auto-import of tag libraries and XML schemas in
JSP and XML
Completion/renaming for references to Java
methods from Hibernate/Spring/JSF configuration files
IntelliJ IDEA 5.0 supports developers for the
increasingly popular J2ME platform with features to simplify configuring,
compiling, running, and debugging J2ME modules with third-party toolkits and
device emulators.
Support for mobile SDK MIDP1.0/MIDP2.0/NTT DoCoMo
i-mode
Dedicated J2ME module type added
Building mobile applications (MIDlet suites)
J2ME Run/Debug configurations panel
Run/Debug mobile phone emulators
IntelliJ IDEA is currently the undisputed leader in
smart Java coding features. IntelliJ IDEA 5.0 sprints ahead even further with
several additions and enhancements to its refactoring arsenal.
New Refactorings: IntelliJ IDEA has more
and better quality refactoring features than any other IDE, and IntelliJ IDEA
5.0 adds even more.
Move Method: A separate refactoring from
Move Static Method, this new refactoring safely moves non-static
methods from one class to another. It correctly handles inner classes
and honors method overriding
Inline Superclass
Move Field to Local Scope: Converts a
field to a local variable, reducing the complexity of the class
Inline Constructor: When only this(...) is used
Safe Delete: Removes class from class hierarchy
Change Method Signature: This is one of IntelliJ
IDEA's most powerful and useful refactorings, which saves a lot of time and
tedious work. IntelliJ IDEA 5.0 allows added parameters and exceptions to be
propagated throughout the method call hierarchy, as well as the ability to
add/remove the throws clause
Introduce... refactorings are smarter at guessing
types (e.g. when there are unresolved references in an expression)
Convert to Instance Method: Visibility options
added
Rename Class: Suggests to rename a GUI form if
one is bound to it
Find catch()es for thrown exceptions
IntelliJ IDEA is currently the best tool available
for static and on-the-fly code inspections. To get the same level of support
from other tools would cost much more than IntelliJ IDEA itself! IntelliJ IDEA
5.0 brings code analysis to a whole new level, helping developers not only to
understand code, but to find problems early, keep the design clean, and also to
aid iterative design when used in combination with refactoring.
That are all most important
improvements and features. You can find more details on the feature you
interested in if you go to JIdea
home page or do further research on the Web.
As I mentioned above I am going to
write soon an article on importance of clean HTML code writing. I will do an
overview on current situation with HTML standards and latest design trends (how
to write most efficient HTML code). I thing it is important for those who works
with JSP, servlets, web services and so on.
You probably want that your customers
see your application as a modern tool, not ugly formatted page from beginning of
90s of past century.
Disclaimer: English is not native language for me and I
appreciate if you correct my errors in a friendly way
RSS feed Java FAQ News | http://javafaq.nu/java-article1009.html | CC-MAIN-2017-30 | refinedweb | 1,548 | 52.19 |
Perfectily good command file snipped
>My configfile tells yarn to keep messages 12 days, max-keep 30 days.
>Although I import a few hundred news per day, expire will only delete less than 20
>messages per day.
Couple of things to try, set max-keep to 12 and/or a shorter expire time.
BTW, you do run expire daily?
>
>What have I done wrong?
Not much, its just that say 200 messages a day x 4k (WAG of average post
size) = 800K per day, x 30 = 24 M per month. However the kicker is that
at no time all the posts 30 days or 12 days old so by aggregation the
news.dat grows.
I set my keep for three days and max keep for three days, expire every
import and my news.dat runs about 5-7 megs. | http://www.vex.net/yarn/list/199706/0064.html | crawl-001 | refinedweb | 139 | 81.83 |
Deep learning can be complicated…and sometimes frustrating. Why is my lousy 10 layer conv-net only achieving 95% accuracy on MNIST!? We’ve all been there – something is wrong and it can be hard to figure out why. Often the best solution to a problem can be found by visualizing the issue. This is why TensorFlow’s TensorBoard add-on is such a useful tool, and one reason why TensorFlow is a more mature solution than other deep learning frameworks. It also produces pretty pictures, and let’s face it, everybody loves pretty pictures.
In the latest release of TensorFlow (v1.10 as of writing), TensorBoard has been released with a whole new bunch of functionality. This tutorial is going to cover the basics, so that future tutorials can cover more specific (and complicated) features on TensorBoard. The code for this tutorial can be found on this site’s Github page.
Recommended online course: If you’re interesting in learning more about TensorFlow, and you are more of a video learner, check out this inexpensive online course: Complete Guide to TensorFlow for Deep Learning with Python
Visualizing the graph in TensorBoard
As you are likely to be aware, TensorFlow calculations are performed in the context of a computational graph (if you’re not aware of this, check out my TensorFlow tutorial). To communicate the structure of your network, and to check it for complicated networks, it is useful to be able to visualize the computational graph. Visualizing the graph of your network is very straight-forward in TensorBoard. To do so, all that is required is to build your network, create a session, then create a TensorFlow FileWriter object.
The FileWriter definition takes the file path of the location you want to store the TensorBoard file in as the first argument, and the TensorFlow graph object, sess.graph, as the second argument. This can be observed in the code below:
The same FileWriter that can be used to display your computational graph in TensorBoard will also be used for other visualization functions, as will be shown below. In this example, a simple, single hidden layer neural network will be created in TensorFlow to classify MNIST hand-written digits. The graph for this network is what will be visualized. The network, as defined in TensorFlow, looks like:
I won’t go through the code in detail, but here is a summary of what is going on:
- Input placeholders are created
- The x input image data, which is of size (-1, 28, 28) is flattened to (-1, 784) and scaled from a range of 0-255 (greyscale pixels) to 0-1
- The y labels are converted to one-hot format
- A hidden layer is created with 300 nodes and sigmoid activation
- An output layer of 10 nodes is created
- The logits of this output layer are sent to the TensorFlow softmax + cross entropy loss function
- Gradient descent optimization is used
- Some accuracy calculations are performed
So when a TensorFlow session is created, and the FileWriter defined and run, you can start TensorBoard to visualize the graph. To define the FileWriter and send it the graph, run the following:
After running the above code, it is time to start TensorBoard. To do this, run the following command in the command prompt:
This will create a local server and the text output in the command prompt will let you know what web address to type into your browser to access TensorBoard. After doing this and loading up TensorBoard, you can click on the Graph tab and observe something like the following:
As you can see, there is a lot going on in the graph above. The major components which are the most obvious are the weight variable blocks (W, W_1, b, b_1 etc.), the weight initialization operations (random_normal) and the softmax_cross_entropy nodes. These larger rectangle boxes with rounded edges are called “namespaces”. These are like sub-diagrams in the graph, which contain children operation and can be expanded. More on these shortly.
Surrounding these larger colored blocks are a host of other operations – MatMul, Add, Sigmoid and so on – these operations are shown as ovals. Other nodes which you can probably see are the small circles which represent constants. Finally, if you look carefully, you will be able to observe some ovals and rounded rectangles with dotted outlines. These are an automatic simplification by TensorBoard to reduce clutter in the graph. They show common linkages which apply to many of the nodes – such as all of the nodes requiring initialization (init), those nodes which have gradients associated, and those nodes which will be trained by gradient descent. If you look at the upper right hand side of the diagram, you’ll be able to see these linkages to the gradients, GradientDescent and init nodes:
One final thing to observe within the graph are the linkages or edges connecting the nodes – these are actually tensors flowing around the computational graph. Zooming in more closely reveals these linkages:
As can be observed, the edges between the node display the dimensions of the Tensors flowing around the graph. This is handy for debugging for more complicated graphs. Now that these basics have been reviewed, we’ll examine how to reduce the clutter of your graph visualizations.
Namespaces
Namespaces are scopes which you can surround your graph components with to group them together. By doing so, the detail within the namespace will be collapsed into a single Namespace node within the computational graph visualization in TensorBoard. To create a namespace in TensorFlow, you use the Python with functionality like so:
As you can observe in the code above, the first layer variables have been surrounded with tf.name_scope(“layer_1”). This will group all of the operations / variables within the scope together in the graph. Doing the same for the input placeholders and associated operations, the second layer and the accuracy operations, and re-running we can generate the following, much cleaner visualization in TensorBoard:
As you can see, the use of namespaces drastically cleans up the clutter of a TensorBoard visualization. You can still access the detail within the namespace nodes by double clicking on the block to expand.
Before we move onto other visualization options within TensorBoard, it’s worth noting the following:
- tf.variable_scope() can also be used instead of tf.name_scope(). Variable scope is used as part of the get_variable() variable sharing mechanism in TensorFlow.
- You’ll notice in the first cluttered visualization of the graph, the weights and bias variables/operations had underscored numbers following them i.e. W_1 and b_1. When operations share the same name outside of a namescope, TensorFlow automatically appends a number to the operation name so that not two operations are labelled the same. However, when a name or variable scope is added, you can name operations the same thing and the namescope will be appended to the name of the operation. For instance, the weight variable in the first layer is called ‘W’ in the definition, but given it is now in the namescope “layer_1” it is now called “layer_1/W”. The “W” in layer 2 is called “layer_2/W”.
Now that visualization of the computational graph has been covered, it’s time to move onto other visualization functions which can aid in debugging and analyzing your networks.
Scalar summaries
At any point within your network, you can log scalar (i.e. single, real valued) quantities to display in TensorBoard. This is useful for tracking things like the improvement of accuracy or the reduction in the loss function during training, or studying the standard deviation of your weight distributions and so on. It is executed very easily. For instance, the code below shows you how to log the accuracy scalar within this graph:
The first argument is the name you chose to give the quantity within the TensorBoard visualization, and the second is the operation (which must return a single real value) you want to log. The output of the tf.summary.scalar() call is an operation. In the code above, I have not assigned this operation to any variable within Python, though the user can do so if they desire. However, as with everything else in TensorFlow, these summary operations will not do anything until they are run. Given that often there are a lot of summaries run in any given graph depending on what the developer wants to observe, there is a handy helper function called merge_all(). This merges together all the summary calls within the graph so that you only have to call the merge operation and it will gather all the other summary operations for you and log the data. It looks like this:
During execution within a Session, the developer can then simply run merged. A collection of summary objects will be returned from running this merging operation, and these can then be output to the FileWriter mentioned previously. The training code for the network looks like the following, and you can check to see where the merged operation has been called:
I won’t go through the training details of the code above – it is similar to that shown in other tutorials of mine like my TensorFlow tutorial. However, there are a couple of lines to note. First, you can observe that after every training epoch two operations are run – accuracy and merged. The merged operation returns a list of summary objects ready for writing to the FileWriter stored in summary. This list of objects is then added to the summary by running writer.add_summary(). The first argument to this function is the list of summary objects, and the second is an optional argument which logs the global training step along with the summary data.
Before showing you the results of this code, it is important to note something. TensorBoard starts to behave badly when there are multiple output files within the same folder that you launched the TensorBoard instance from. Therefore, if you are running your code multiple times you have two options:
- Delete the FileWriter output file after each run or,
- Use the fact that TensorBoard can perform sub-folder searches for TensorBoard files. So for instance, you could create a separate sub-folder for each run i.e. “Run_1”, “Run_2” etc. and then launch TensorBoard from the command line, pointing it to the parent directory. This is recommended when you are doing multiple runs for cross validation, or other diagnostic and testing runs.
To access the accuracy scalar summary that was logged, launch TensorBoard again and click on the Scalar tab. You’ll see something like this:
The scalar page in TensorBoard has a few features which are worth checking out. Of particular note is the smoothing slider. This explains why there are two lines in the graph above – the thicker orange line is the smoothed values, and the lighter orange line is the actual accuracy values which were logged. This smoothing can be useful for displaying the overall trend when the summary logging frequency is higher i.e. after every training step rather than after every epoch as in this example.
The next useful data visualization in TensorBoard is the histogram summary.
Histogram summaries
Histogram summaries are useful for examining the statistical distributions of your network variables and operation outputs. For instance, in my weight initialization tutorial, I have used TensorBoard histograms to show how poor weight initialization can lead to sub-optimal network learning due to less than optimal outputs from the node activation function. To log histogram summaries, all the developer needs to do is create a similar operation to the scalar summaries:
I have added these summaries so that we can examine how the distribution of the inputs and outputs of the hidden layer progress over training epochs. After running the code, you can open TensorBoard again. Clicking on the histogram tab in TensorBoard will give you something like the following:
This view in the histogram tab is the offset view, so that you can clearly observe how the distribution changes through the training epochs. On the left hand side of the histogram page in TensorBoard you can choose another option called “Overlay” which looks like the following:
Another view of the statistical distribution of your histogram summaries can be accessed through the “Distributions” tab in TensorBoard. An example of the graphs available in this tab is below:
This graph gives another way of visualizing the distribution of the data in your histogram summaries. The x-axis is the number of steps or epochs, and the different shadings represent varying multiples of the standard deviation from the mean.
This covers off the histogram summaries – it is now time to review the last summary type that will be covered off in this tutorial – the image summary.
Image summaries
The final summary to be reviewed is the image summary. This summary object allows the developer to capture images of interest during training to visualize them. These images can be either grayscale or RGB. One possible application of image summaries is to use them to visualize which images are classified well by the classifier, and which ones are classified poorly. This application will be demonstrated in this example – the additional code can be observed below:
Ok, there is a bit going on in the code above, which I will explain. In the first line, a boolean mask operation is created – this basically takes the scaled input tensor (representing the hand written digit images) and returns only those images which were correctly classified by the network. The tensor correct_prediction is a boolean vector of True and False values which indicate whether each image was correctly classified or not.
After the correct inputs have been extracted by this process, these inputs are then passed to tf.summary.image() which is how the image summaries are stored. The first argument to this function is the namespace of the images. The second is the images themselves. Note, that the input tensor x_sc is a flattened version of the 28 x 28 pixel images. We need to reshape the input tensor into a form acceptable to tf.summary.image(). The acceptable form is a 4D tensor of the following structure: (no. samples, image width, image height, color depth). In this case we use the automatic dimension prediction capabilities for the first dimension, so that it can dynamically adapt to the number of correctly classified images. The next two dimensions are the 28 x 28 pixels of the images. Finally, the last dimension is 1 as the images are greyscale – this would be 3 for RGB color images.
The last argument to tf.summary.image() is the maximum number of images to send to TensorBoard. This is an important memory saving feature – if we export too many images there is the possibility that the FileWriter object will become unwieldy. In this case, we only want to look at 5 images of correct predictions.
The next line in the code is the exact opposite of what was performed in the correct_inputs operation – in this case, all the incorrectly classified images are extracted. These are likewise sent to tf.summary.image() for storage.
If you re-run the code with the addition of these extra lines and go to the Images tab in TensorBoard, you’ll be able to see images such as the following for the correctly classified cases:
This is obviously a nice, clearly written “7” which the network has correctly classified. Alternatively, the image below is an example of the incorrectly classified case:
With the state-of-the-art classifier that is my brain, I can see that this image is a badly written “4”. However, the neural network created in this example hasn’t been able to correctly classify such a poorly written number.
That concludes this introductory TensorBoard visualization tutorial. TensorBoard is expanding with new versions of TensorFlow, and there are now additional summaries and visualizations that can be used such as video summaries, text summaries and even a debugger. These will be topics of future posts. I hope this tutorial assists you in getting a leg up into the great deep learning visualization tool that is TensorBoard.
Recommended online course: If you’re interesting in learning more about TensorFlow, and you are more of a video learner, check out this inexpensive online course: Complete Guide to TensorFlow for Deep Learning with Python | http://adventuresinmachinelearning.com/introduction-to-tensorboard-and-tensorflow-visualization/ | CC-MAIN-2018-51 | refinedweb | 2,733 | 50.26 |
> > Although this is an open source project and you do have the advantage of > being able to talk to the people who own the system headers, I really think > that making changes like this in system headers should be done very very > sparingly. > I agree and this this the reason why this thread was started > >I'm not sure, how this would look in real code, do you have an example ? > > #define HANDLE foo_handle > #include <winnt.h> > #undef HANDLE > The problem with the current implemantation is, that there is no way to hide the HANDLE definition and I can't see why an implementation in the following manner winnt.h #ifndef DONT_DEFINE_HANDLE typedef void * HANDLE #endif makes such big problems as Danny stated. It is compatible to the current header, but pave the way for an official qt/x11 release (or do you expect that trolltech would change their definitions of the x11 HANDLE type only based on the fact that this is a precedent ?) > Another thing I have to wonder is why you are using a mixture of the > Windows API and the X API rather than X + cygwin. That seems strange, > too. Chris January has told some issue of this, another topics are performance (using native FindFirstFile/FindNextfile instead of cygwin's opendir/readdir) and missing dns code (fortunately the qt x11 release contains already code for accessing native win32 dns code) I hear you saying, that this would not be the right way to do things, it would be better to optimize the cygwin dll or to add needed functions, but unfortunally I haven't the knowledge for doing so and I could only deal with what I have and that is the way to use function which are available. Okay, in the abstract there is a way to change the qt sources to use void *, but this produces incompatible qt libraries (because of the c++ abi) and this implicate a recompile of all available kde 2 packages, which result in much support traffic like "why does my kde package can't load the qt dll" and for this I have no additional time, sorry. Any comments ? | https://cygwin.com/pipermail/cygwin-patches/2002q3/002713.html | CC-MAIN-2022-40 | refinedweb | 360 | 52.87 |
A simulator of file system links.
Project description
- Why can’t file system links occur without being registered on the file
- system?
import virtuallinks virtuallinks.link('on/a/dir/far/far/away', name='far_away_dir') with open('far_away_dir/file.txt', 'w') as f: f.write('hello world!')
virtuallinks simulates at least one link, without changing the file system.
Install it with:
$ pip install virtuallinks
If you have any questions, then lets chat.
Pure Python code, 2 and 3 compatible.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/virtuallinks/ | CC-MAIN-2019-04 | refinedweb | 105 | 62.34 |
Ticket #135 (closed defect: fixed)
Default values of datetime.now now handled properly by ORM freezing
Description
The startmigration command doesn't properly handle freezing DateTimeFields? with a default of datetime.now. Below is an example of what happens.
If I have a model such as the following:
from django.db import models from datetime import datetime class Event(models.Model): date = models.DateTimeField(default=datetime.now)
And I run the command:
./manage.py startmigration myapp do_something --freeze=myapp
I end up with a migration which contains something like:
models = { 'myapp.event': { 'date': ('models.DateTimeField', [], {'default': 'datetime.datetime(2009, 5, 6, 15, 33, 15, 780013)'}), } }
Running this new migration generates an error similar to:
ValueError: Cannot successfully create field 'date' for model 'event': type object 'datetime.datetime' has no attribute 'datetime'.
Changing the default value in the migration's model information from "datetime.datetime" to just "datetime" seems to fix the problem.
Change History
comment:2 Changed 5 years ago by andrew
- Status changed from new to assigned
- Version set to subversion
- Milestone set to 0.6
comment:3 Changed 5 years ago by jtauber
Note that I just got this on 0.5 running ./manage.py startmigration foo bar --auto
see
where model had created = models.DateTimeField?(default=datetime.now)
Forgot to mention in the initial report that this happens with the SVN version of South. | http://south.aeracode.org/ticket/135 | CC-MAIN-2014-42 | refinedweb | 227 | 52.36 |
The function realloc () is used to resize the memory allocated earlier by function malloc () to a new size given by the second parameter of the function..
Illustrates malloc() and realloc ().
#include <stdio.h>
#include<stdlib.h>
int main ()
{
int a= 0, n, m, i, j,k =0;
int* Array;
clrscr();
printf("Enter the number of elements of Array.");
scanf("%d", &n );
Array= (int*) malloc( n* sizeof(int) );
if( Array== NULL)
{
printf("Error. Out of memory.\n");
exit(0);
}
printf("Enter the %d elements of Array:", n);
for ( i =0; i<n; i++)
scanf("%d",&Array[i]);
for( j =0;j<n; j++)
printf("Array[%d] = %d\n", j, *(Array+j));
printf("Enter the number of elements of extended array.");
scanf("%d", &m);
Array= realloc(Array, m);
if( Array== NULL)
{
printf("realloc has failed.\n");
exit(0);
}
printf("Enter values of %d additional elements of array.", m-n);
for (k= n ; k<m; k++)
scanf("%d", &Array[k]);
for( a =0;a<m; a++)
printf("Array[%d] = %d\n", a, *(Array+ | http://ecomputernotes.com/data-structures/basic-of-data-structure/what-is-the-purpose-of-realloc | CC-MAIN-2018-30 | refinedweb | 170 | 56.45 |
this won't link, because main is undefined (i dynamically linked the lib). now, of course i know there must be a main somewhere. but on several occasions the doc mentions it's automatically produced by boost.test. when i include <boost/test/included/unit_test.hpp> instead, it works. the doc also mentions function like unit_test_main and what not, you may have to define yourself but all in all it's still completely unclear to me, how the whole system works and what i really need to get anything up and running. disappointingly, the doc seems mostly incomplete, with many not-working examples, dead links/images...and many typos. now, since this is a boost lib where talking about here, it's probably my fault. i hope. otherwise i might resort to CxxTest. that seems to work ootb like a charm. but the last version is already a few years old and who knows if there will be an update someday (eg. to adept to the new C++ standard).
#include <my_class.hpp> #define BOOST_TEST_MODULE MyTest #include <boost/test/unit_test.hpp> BOOST_AUTO_TEST_CASE( my_test ) { my_class test_object( "qwerty" ); BOOST_CHECK( test_object.is_valid() ); } | http://www.gamedev.net/user/45474-maximal/?tab=topics | CC-MAIN-2015-11 | refinedweb | 188 | 68.36 |
Websockets Server
Table of Contents
Websocket Tutorial¶
As explained in this webpage, the Websocket protocol allows full-duplex, bi-directional communications between a server and clients.
For this WebSocket tutorial, we will need to implement a server. We chose the Tornado Websocket server for our Internet of Things project. In this tutorial, we will present an example of how to get the Tornado Server running using websockets.
This tutorial is divided into two parts:
- A Hello World which uses Tornado
- A websocket streaming example which uses Websocket4j.
1 - Hello World!¶
1.1 - Server side: Tornado¶
Warning
Tornado only runs on Unix-like platforms. If you are on Windows, you will need to change your OS to implement the server. But you can follow the second part of this tutorial ;)
Installation¶
$ tar xvzf tornado-2.0.tar.gz $ cd tornado-2.0 $ python setup.py build $ sudo python setup.py install
For more information concerning the installation, you can go here
Code¶
You can find all the documentation you need here, but I will explain the main parts of the following basic Tornado Websocket server program.
The main idea is to define a class inherited from the WebSocketHandler class.
In this class, you can override the following methods:
- open
- on_message
- on_close
It's very simple; when a client and the server have performed the websocket handshake, the open method is called.
When a message is received from a client, the on_message method is called and if the connection is closed, the on_close method is called.
class WSHandler(tornado.websocket.WebSocketHandler): def open(self): print 'new connection' self.write_message("Hello World") def on_message(self, message): print 'message received %s' % message def on_close(self): print 'connection closed'
In our case:
- when a connection is opened, the server prints "new connection" and sends to the client "Hello World"
- when a message is received, the server prints this message
- when a connection is closed, the server prints "connection closed"
To complete the server program, we have to have at the beginning of the program an HTTP Server.
application = tornado.web.Application([ (r'/ws', WSHandler), ]) if __name__ == "__main__": http_server = tornado.httpserver.HTTPServer(application) http_server.listen(8888) tornado.ioloop.IOLoop.instance().start()
An instance of tornado.web.Application is passed to the HTTP Server. When the server receives a request of the form: var ws = new WebSocket("ws://localhost:8888/ws");, it will instantiate a WSHandler object. Note the "ws" at the end of the url. This is useful for the client side implementation.
Finally, the whole program is:
import tornado.httpserver import tornado.websocket import tornado.ioloop import tornado.web class WSHandler(tornado.websocket.WebSocketHandler): def open(self): print 'new connection' self.write_message("Hello World") def on_message(self, message): print 'message received %s' % message def on_close(self): print 'connection closed' application = tornado.web.Application([ (r'/ws', WSHandler), ]) if __name__ == "__main__": http_server = tornado.httpserver.HTTPServer(application) http_server.listen(8888) tornado.ioloop.IOLoop.instance().start()
We save this in /Documents/tornado as server.py for now.
1.2 - Client side¶
Warning
To follow this tutorial, you have to use a browser which supports websockets. I am using Chrome with no problems, and would recommend using it. If you have Firefox or Opera you will need to enable it. This is likely to be your problem if you are having one. You can test here to see if your browser supports websockets.
If you are not familiar with jQuery, it is advisable to get a basic understanding here). The main part of the simple Hello World:
$(""); }; });
- On this webpage, there are 3 fields: host, port and uri which have a red background at the beginning. You can see this at the bottom of this tutorial.
- When the user has filled these 3 fields, he can try to open a websocket connection by clicking the open button
As for the server, there are three methods: onopen, onmessage and onclose:
- If the connection is opened, we change the background of the 3 previous fields to green
- When we receive a message, we open a pop-up containing the message received
- When the connection is closed, we open a pop-up containing Connection close
Finally, the whole program for the client is:
<!doctype html> <html> <head> <title>WebSockets Hello World</title> <meta charset="utf-8" /> <style type="text/css"> body { text-align: center; min-width: 500px; } </style> <script src=""></script> <script> $(document).ready(function () { var ws; $(""); }; }); }); </script> </head> <body> <h1>WebSockets Hello World</h1> <div> <label for="host">host:</label> <input type="text" id="host" value="localhost" style="background:#ff0000;"/><br /> <label for="port">port:</label> <input type="text" id="port" value="8888" style="background:#ff0000;"/><br /> <label for="uri">uri:</label> <input type="text" id="uri" value="/ws" style="background:#ff0000;"/><br /> <input type="submit" id="open" value="open" /> </div> </body> </html>
We save this as HelloWorld.html in the tornado directory.
1.3 - Demo¶
We start the server with the command:
$python server.py
And navigate to the Hello world test page.
Et Voilà! There is the pop-up on the client side containing Hello World and the backgrounds are green. On the server side, we see that a new connection has been opened.
2 - Streaming¶
2.1 - Server side: Websocket4j¶
Installation¶
This server is written in java. To develop in Java, I used Eclipse. Here are the links to download this software:
- JRE installation: here, click the Download button under JRE in Java SE 7 section. After that, you have to chose the program according to your operating system.
- IDE installation: here. take the version of your operating system.
Alternately, if you are running Linux you can just use
$ sudo apt-get install eclipse
Code¶
For this tutorial, we are going to have a server that broadcasts the message 'message: message 1' every two seconds to every connected client while incrementing the '1'.
The architecture of this server is very simple:
- Tha main function is an infinite loop which accepts websocket connections and does an action after that.
For instance, for us, the action can be:
Let's say that we have an array containing each websocket connection opened.
- Wait for a websocket connection
- Add this client to our array
- If this is the first connection opened:
- Start a thread which will broadcast each 2 secondes a message to clients in our array
The code for this can be:
int port = 8888; boolean first_client = true; WebServerSocket socket = new WebServerSocket(port); while (true) { System.out.println("Streaming server ready. Listen on: " + port + " &&& nb_clients: " + clients.size()); WebSocket ws = socket.accept(); if (ws.getRequestUri().equals("/streaming")) { clients.add(ws); if(first_client) { new WSStreaming().start(); first_client = false; } } else { System.out.println("Unsupported Request-URI"); try { ws.close(); } catch (IOException e) {} } }
Note that we will add in our array only clients which try to connect with an uri like: var ws = new WebSocket("ws://localhost:8888/streaming"); in this case.
Now the thread which will broadcast a message is also very simple:
public static ArrayList<WebSocket> clients = new ArrayList<WebSocket>(); public void run(){ long nb = 0; while (true) { if(clients.size() != 0) { for(int i = 0; i < clients.size(); i++) { try{ clients.get(i).sendMessage("message: " + nb); } catch(IOException e) { clients.remove(clients.get(i)); } } nb++; try { sleep(2000); } catch (InterruptedException e) { e.printStackTrace(); } } } }
As you can see, in an infinite loop we:
- we send to each client the message message nb.
- If the sending fails, we remove the client of the array.
- Then we wait during 2 secondes
An archive of all the code in the demo section.
2.2 - Client side¶
<!doctype html> <html> <head> <style type="text/css"> body { text-align: center; min-width: 500px; } </style> <script src=""></script> <script> function log(m) { d = document.getElementById("log"); d.innerHTML = m + "<br/>" + d.innerHTML; } $(document).ready(function () { var ws = new WebSocket("ws://localhost:8888/streaming"); ws.onopen = function(evt) { log("socket opened"); }; ws.onmessage = function(evt) { log("message: " + evt.data); }; ws.onclose = function(evt) { log("socket closed"); }; }); </script> </head> <body> <h1>Websockets Streaming</h1> <div id="log"></div> </body> </html>
Here, we save this to a file in our tornado directory as before, calling the file 'Streaming.html'. In this webpage, once the connection has been established, we print socket opened in a div with the id log and then for each received message, we extend this div with the message.
1.3 - Demo¶
To run the demo:
- Download this archive containing the server sources
- Put and extract this archive in your workspace (/home/samux/workspace for me)
- Start Eclipse
- File > Import > General > Existing Projects into Workspace. Then select the archive file in the select archive file field.
- Finish
Then on the left-hand side, you should have a directory named websocket4j, as you can see above. You can go in the src directory to see the packages used. The one which we are interested in is websocket4j.mbed_demo. You can open WSStreaming.java and see the code explained above.
Run the server by clicking on the small green icon with the arrow, with the WWStreaming.java file selected:
You should see on the console Streaming server ready. Listen on: 8888 &&& nb_clients: 0.
At this step, the server is started!
You can open the Streaming.html file with Chrome and see the results:
Et Voilà! The client(s) are receiving messages from the server and the server prints that there are two, or however many, clients connected. Note that although we are accessing the page from the same computer we are running the server on, it would be the same result if you were to do so from another computer on the LAN, or any on the Internet if you set up port forwarding correctly on your router.
Please login to post comments. | http://mbed.org/cookbook/Websockets-Server | CC-MAIN-2013-20 | refinedweb | 1,621 | 57.16 |
(This?
Let’s suppose that
rand() generates a uniform probability distribution; for any call we have a 1 / 32768 chance of getting any particular number. And let’s suppose that
range is 3. What is the distribution of
rand() % 3? There are 10923 numbers between 0 and 32767 that produce a remainder of 0, 10923 that produce a remainder of 1 and 10922 that produce a remainder of 2, so 2 is very slightly less likely, and 0 and 1 are very slightly more likely. We’ve introduced a bias towards smaller numbers.
This difference is tiny; so small that you would not notice it without millions of data points. But now let’s suppose that we want to generate a random number between 0 and 19999 using this technique, and see what the bias is. How many ways are there to get 0? We could generate 0 or 20000, so 2. How many ways are there to get 1? We could generate 1 or 20001. And so on, up to 12767. But there is only one way to get 12768 out of
rand() % 20000! Every number between 0 and 12767 is twice as likely as every number from 12768 to 19999; this is a massive bias.
How can we quantify how massive or tiny the bias is? One technique is to graph the desired probability distribution[1. Here shown as a continuous function, though of course it is a collection of discrete data points.] and the actual probability distribution, and then take the area of their difference.
Here the red line is the desired probability distribution: an even 1/20000 for each element. The blue line is the probability of getting every element with
rand() % 20000, and the shaded area is the difference between the desired and actual distributions. The area under both distribution lines is 100%, as you would expect for a probability distribution. A few quick calculations shows that the shaded area is a massive 28%! By contrast if we did the same exercise for
rand() % 3 the shaded area would be 0.004%. (Of course the two shaded areas are equal since the total area under both curves is the same. We could just compute one of the areas and double it.)
I computed the “bias area” for every range value from 1 to 32768; the result is an interesting shape:
Every power of two gives no bias at all; half way between each power of two gives an every-increasing bias area.
The moral of the story is: when using the remainder technique, you’re introducing a bias towards small numbers, and that bias can get quite large if the range is large. Even a range of a few thousand is already introducing a 5% bias towards smaller numbers.
So what then is the better technique? First off, obviously if you have a library method that does the right thing, use it! The .NET implementation of
Random.Next(int) has been designed to not have this bias. (It does not use the technique I describe here. Rather, it generates a random double and then multiplies that by the desired range, then rounds to an integer. If the range is very large then special techniques are used to ensure good distribution.) If you must use the remainder technique then my advice is to simply discard the results that would introduce the bias. Something like:
while(true) { int value = rand(); if (value < RAND_MAX - RAND_MAX % range) return value % range; }
From the graph, the Nth best case is 2^N. The worst case between the N and N+1 best cases is 2^N + 2^(N-1), i.e. halfway between the two.
As the bias gets greater and greater, how does your discard-bias algorithm perform on the worst cases asymptotically?
The algorithm doesn’t depend on the size of the bias, it depends on the proportion of values which produce bias. In the worse case (range = (MAX_RANGE / 2) + 1) it takes 2 (amortized) calls to rand() per randInt(range)
Brock’s analysis is correct. To expand on that.
In the worst case fully one half of the values are discarded. The probability of going one round is therefore 1/2. The probability of going two rounds is 1/2 * 1/2 = 1/4. The probability of going three rounds is 1/2 * 1/2 * 1/2 = 1/8. In general the probability of going n rounds is 1/2n. So in practice it is very rare to go more than a small number of rounds.
What is the expected number of rounds? That is ( 1 * 1/2 + 2 * 1/4 + 3 * 1/8 + 4 * 1/16 …) This is an arithmetic-geometric series that converges; the value it converges to is r / (1-r)2, where r in this case is 2, so the sum is 2. We expect on average to get a result in 2 rounds.
Now, that is the “worst case” in terms of the range which causes the most items to be discarded. The truly worst case for this algorithm is that the random number generator produces numbers outside the given range forever, in which case it is not a very good random number generator! One could add a fail-safe to the loop so that if it looped more than five times, bail out and return a possibly biased value.
The algorithm shown is simple, and eliminates bias, but it also throws out a lot of the data generated by the random generator. For example, if one wants numbers in the range 0 to 16384 inclusive, the algorithm will need to generate an average of almost 30 bits of random data for each number output, even though an average of only 14.000089 bits of data should be required. Are you going to examine ways of capturing left-over randomness on each stage rather than discarding it?
For this simple sketch I wasn’t going to consider that, no. Pseudo-random numbers are cheap, after all.
One approach would be to compute the “bias area” and only start looping if it is greater than some undesirable amount. In your example we’d then not be throwing away half the bits.
Pseudo-random numbers are cheap, but some sources of real randomness aren’t. If one sometimes wants random numbers from 0 to 5, and sometimes from 0 to 999,999 (inclusive), and one’s random source is e.g. a hardware RNG that costs 1ms per 8 bits, there may be some value in capturing all the randomness one can get. Plus (at least IMHO) it’s a cute little design challenge. Roughly a dozen lines of code should suffice I think for a generator which can generate numbers for any mix of big and small ranges, provided only that the product of RAND_MAX and the largest range of interest will fit in whatever integer type one is using for two state variables.
Or just do: ( rand() / (double)RAND_MAX ) * range ;
No! See the video I linked to below, starting at 6:02. You just introduced a massive bias against the last number in your range.
Well, that’s exactly what System.Random.Next(int) does (code shown after some inlining):
return (int)((double)this.Next() * 4.6566128752457969E-10 * (double)maxValue);
That funny constant is 1/2^31. So… don’t use System.Random.Next(int)?
If it does that then yes, very bad. Although contrary to your method you’ll notice that it actually uses (RAND_MAX + 1) which avoids the obvious problem but still is not perfect because floats are not uniformly distributed.
STL shows the empiric test results for that.
I think that solution is “good enough.” While not 100% perfect, one number being vastly under reported in a large range of possible values is not the end of the world. If near ideal results are required by the problem at hand, then a true cryptographic library should be employed.
Since floating point numbers aren’t uniformly distributed themselves this presumably isn’t perfectly uniform either. But contrary to the modulo problem, the math for that one seems more intricate.
The reason why using modulo to constrain your random number to a range is bad has less to do with bias and more to do with the typical implementation of rand(), a linear congruential generator (LCG). Most language runtimes use LCGs for their random function; only very recently designed languages tend to differ.
An LCG is just a multiply and an add (the modulo usually being implemented via an integer’s maximum size). It should be obvious that the low bits of such a sequence follow a regular pattern – the multiply doesn’t mix higher bits into the lower bits, and the add mutates the low bits in a constant way every iteration.
And FWIW, you don’t need to convert to double to implement Random.Next(Int32) in the fractional style. If you can generate random numbers with e.g. a 32-bit unsigned range, all you need to do is a 64-bit multiply, and take the high 32 bits as your result.
This is an excellent point, thanks!
Could the fact that upper LCG bits are “better” than lower ones be counteracted if n LCG-based rand() used a linear-congruential generator internally via something like `return ((seed >> 16) ^ seed) & 0x7FFF;` [perhaps using other shift amounts as well on processors which can handle them cheaply]? Adding such things into the “feedback loop” would risk having it fall into short cycles, but if `seed` is updated each time using an LCG, I would think that munging its output through almost any bijective function should be safe.
The Windows implementation of rand() already does something like this (but a bit simpler). It has 32 bits of internal state, but a call to rand() gives you 15 high bits instead of giving you low bits.
The default .NET Random implementation isn’t very good (see for instance, though I’ve run into this doing simulations).
If you’re interested in fixing bias, it’s probably also worth pointing out that there are many freely available alternatives; mt19937 is a popular alternative; lists several implementations.
Pingback: Java, RNG, Raster & Maps | Kynosarges Weblog
How can you not link to Stephan Lavavej’s excellent talk on this subject?
He explains the problem with both the modulo and the division technique, and then shows how to use C++ 11’s new random number facilities.
Pingback: Dew Drop – December 17, 2013 (#1685) | Morning Dew
I am not great on Maths but still able to comprehend this post. Great job.
So this is further proof that the taught and accepted wisdom that, when writing a Hash Table implementation, using a prime number as your array size doesn’t work well for computers. A power of 2 will be faster (both for memory access and for the index calculations) and will give you a better distribution of indices.
Ok, so not many people have cause to write such a thing these days, but it’s still useful to know.
The double based technique avoids this particular bias, but it causes other biases. My favourite demonstration is:
r.Next(1431655765)%2
This should obviously return the results 0 and 1 with (nearly) equal probability. What it actually does is returning 0 in with probability 2/3 and 1 with probability 1/3.
See a full example and the rest of my rant about System.Random ;) | http://ericlippert.com/2013/12/16/how-much-bias-is-introduced-by-the-remainder-technique/ | CC-MAIN-2014-41 | refinedweb | 1,915 | 71.44 |
key features in any programming language used for large scale application development is modularizing code of an application. The concept has different names in different languages viz header files in C, namespaces in C++ and C#, packages in Java; but all of these address a common problem. As mentioned in the first article of this series ECMAScript 6 – New language improvements in JavaScript, initially JavaScript was not designed to write a lot of code with it, like creating huge frameworks, apps etc. As we started writing a lot of JavaScript and due to lack of support of modules in the language, open source developers came up with standards like CommonJS module pattern, Asynchronous Module Definition (AMD) and libraries, implementing these approaches. Over the past few years, these libraries have gained a lot of attention and they are successfully used in several applications of enterprise scale.
ES6 brings the feature of modules into the language. It is going to take some more time before browsers implement this feature, as they will have to define a way to download files on the fly. Before browsers implement this feature, we can use one of the available transpilers like Traceur, 6to5, ES6 Module Loader or any available converter to turn ES6 modules to ES5. Converters polyfill the feature using one of the existing methodologies.
CommonJS Module System
CommonJS is a team of open source developers that have worked on designing some APIs and practices for JavaScript development. This team came up with a specification of modules in JavaScript. Every file is treated as a module and each file gets access to two objects: require and export. The object require is a factory function that accepts a string (name of a module) and returns object exported by the module. The export object is used to export objects and functions from the module. The require function returns this export object. Modules are loaded synchronously. The server side JavaScript engine Node.js uses this module system.
Asynchronous Module Definition (AMD)
AMD is a module system that loads dependent modules asynchronously. If the modules are in other files, they are loaded using XHR. Module depending on a set of other modules would be executed after the dependencies are loaded. An AMD module has to be a function passed as an argument to the define function. Return value of the function will be exported to the depending modules and these values are passed as arguments to the module function. The library require.js can be used to implement AMD in applications.
TypeScript Modules
TypeScript, the typed superset of JavaScript provides a module system. When it gets transpiled, it uses JavaScript module pattern. A TypeScript module is defined using module keyword and any objects to be exported have to be specified after the export keyword. The keyword import has to be used to load other modules into a module and to capture the exported object from the module. TypeScript modules are loaded synchronously.
ES6 module system is inspired from all the above existing module systems. It has the following features:
1. Exporting objects using export keyword. The keyword can be used any number of times
2. Importing modules into a module using import keyword. It can be used to import any number of modules together
3. Support for asynchronous loading of modules
4. Programmatic support for loading modules
Let us look at each of these features with code.
As in existing module systems, every JavaScript code file is a module in ES6. A module may or may not export any objects from it. Only the objects exported from a module are visible to the outside world. Rest of them remain private to the module. This behavior can be used to abstract details and export only essential features.
ES6 provides us with different ways to export objects from a module. They are discussed below:
Exporting Inline
Objects in an ES6 module can be exported in the same statement where they are defined. As export can be used any number of times in a module, all of the objects would be exported together. Following is an example:
export class Employee{
constructor(id, name, dob){
this.id = id;
this.name=name;
this.dob= dob;
}
getAge(){
return (new Date()).getYear() - this.dob.getYear();
}
}
export function getEmployee(id, name, dob){
return new Employee(id, name, dob);
}
var emp = new Employee(1, "Rina", new Date(1987, 1, 22));
Here, the module exports two objects: class Employee, function getEmployee. The object emp remains private to the module as it is not exported.
Exporting a Group of Objects
Though inline export works, it is less useful in case of large modules, as we may lose track of the objects exported from the module. In such cases, it is better to have a single export statement at the end of the module specifying all objects to be exported from the module.
The above module can be re-written as follows to use a single export statement:
class Employee{
constructor(id, name, dob){
this.id = id;
this.name=name;
this.dob= dob;
}
getAge(){
return (new Date()).getYear() - this.dob.getYear();
}
}
function getEmployee(id, name, dob){
return new Employee(id, name, dob);
}
var x = new Employee(1, "Rina", new Date(1987, 1, 22));
export {Employee, getEmployee};
It is also possible to rename the objects while exporting. Say, I want to export the class Employee as Associate and the function getEmployee as getAssociate. It can be done as follows:
export {
Associate as Employee,
getAssociate as getEmployee
};
Default Export
An object can be marked as the default object to be exported from a module by using the keyword default. The keyword default can be used only once per module. It can be used in both inline as well as group export statements.
Following statement marks a group export statement as default:
export default {
Employee,
getEmployee
};
Existing modules can be imported into other modules using the keyword import. A module can import any number of modules. Out of the objects exported by an imported module, the importing module may refer all, none or some of the objects. The objects to be referred have to be specified in the import statement. Following listing shows different ways of importing modules:
Importing with no Objects
If a module contains some logic to be executed and doesn’t export any objects, it can be imported to another module without expecting any objects from it. Following is an example of this:
import './module1.js';
Importing with Default Object
If a module exports an object using the default keyword, the object can be directly assigned to a reference in the import statement. Following is an example:
import d from './module1.js';
Importing with Named Objects
As discussed in the previous section, a module can export a number of named objects. If another module wants to import the named objects, it needs to specify them in the import statement, as shown below:
import {Employee, getEmployee} from './module1.js';
It is also possible to import both default and named objects in the same statement. The default object must have an alias name defined in such cases. It is shown below:
import {default as d, Employee} from './module1.js';
Importing with All Objects
In the case of named importing, only the objects specified in the import statement would be imported and other export objects won’t be available to the importing module. Also, consumer should know about the objects exported to use them. If a module exports a large number of objects and the consuming module wants to get all of these, it has to use an import statement similar to the following one:
import * as allFromModule1 from './module1.js';
The alias, allFromModule1 will have references to all objects exported from module1. They can be accessed as properties in the importing module.
Importing Programmatically On-demand
If a module needs to load another module based on some condition or, wants to defer loading of the module till certain event happens, it can do so by using the programmatic API of loading modules. The modules can be loaded programmatically using the System.import method. This is an asynchronous method and it returns Promise.
Syntax of the method is shown below:
System.import('./module1.js')
.then(function(module1){
//use module1
}, function(e){
//handle error
});
The promise passes if the module is successfully loaded and it passes the export object of the module to the success callback. The promise fails if there is a typo in name of the module or if the module doesn’t load for some reason like network latency.
As ES6 modules are not supported natively in any of the browsers as of now, we need to use a transpiler to get the code converted to ES5 before we load it on a browser. As I have been using Traceur as my choice of transpiler till now, let’s use the same tool to convert the modules to a browser-understandable way. Let’s see the different ways in which we can transpile the ES6 modules.
ES6 modules can be transpiled on the fly after loading the script on the browser using Traceur’s client-side library. We don’t need to run any commands to make the modules run if we use this approach. All we need to do is, load the traceur library on the page and add script to run the WebPageTranscoder.
<script>
new traceur.WebPageTranscoder(document.location.href).run();
</script>
Now, we can import any ES6 file inside a script tag with type assigned as module.
<script type="module">
import './modules/import1.js';
</script>
Any script tag with type assigned as module would be picked by the ES6 client-side library and is processed by it. The import statement in the above snippet makes an AJAX request for the corresponding JavaScript file and loads it. If this module internally refers to another module, a separate AJAX request will be made to load the file corresponding to that module.
ES6 modules can be transpiled to AMD or, CommonJS modules using commands exposed by Traceur. There are two advantages of using this approach over the on the fly transpiler:
1. Browser won’t have to perform additional work to transpile the modules
2. If an application is already half-built using ES5 and it used either AMD or CommonJS module systems, other half of the application can be built using ES6 and transpiled into one of these module systems, instead of migrating entire application to ES6 immediately
To be able to use the transpiled AMD/CommonJS modules, we need to include the libraries supporting the module system. AMD is my personal favorite, so I will cover it here. The process remains same for CommonJS modules as well.
Following is the command for transpiling a folder containing ES6 modules to AMD and store them in a separate folder:
traceur --dir modules es5Modules --modules=amd
To use CommonJS, you need to assign value commonjs to the modules flag in the above command.
Here, modules is the folder containing ES6 code and es5Modules is the output directory. If you check any file inside the folder es5Modules, you would see the AMD define blocks inside it. As require.js supports AMD, we need to add a script reference to require.js on the HTML page and specify the start file in data-main attribute, as shown below:
<script src="bower_components/requirejs/require.js" data-</script>
Transpiling modules using commands can be tiring and error prone at times. We can automate the transpilation process using the grunt-traceur task. For this, you need to install the NPM package of the task.
npm intall grunt-traceur –save
The data that we need to provide to the Grunt task is same as the data provided to the command. Following is configuration of the task:
traceur: {
options: {
modules: "amd"
},
custom: {
files: [{
expand: true,
cwd: 'modules',
src: ['*.js'],
dest: 'es5Modules'
}]
}
}
Now you can run the above grunt task on the console using the following command:
grunt traceur
As you see, it produces same result as we got by running the command.
Modularization is highly essential to any large application. ES6 modules bring this feature to JavaScript and these modules provide a lot of options to export and import objects. I look forward to that day when this feature would be in the browsers and we don’t need to add any third party library to create and load JavaScript modules. The popular client side MV* framework Angular.js uses ES6 modules instead of its own module system in its version 2, which is still under development.
Let’s start using the module system to make our code more organized and readable!
Download the entire source code of this article (Github) | https://www.dotnetcurry.com/javascript/1118/modules-in-ecmascript6 | CC-MAIN-2018-39 | refinedweb | 2,121 | 62.78 |
Red Hat Bugzilla – Bug 178086
anaconda timezone selector crashes when dragging mouse on map canvas
Last modified: 2007-11-30 17:11:21 EST
When selecting a timezone during the test2 install a dialog pop'd up with the
attached exception.
I pressed down on the mouse button and dragged toward the edge of the screen
when the crash occurred.
Created attachment 123324 [details]
the logged exception
Nils, really simple patch attached below.
diff -u -r1.40 timezone_map_gui.py
--- timezone_map_gui.py 16 Jan 2006 15:41:34 -0000 1.40
+++ timezone_map_gui.py 17 Jan 2006 19:28:11 -0000
@@ -442,7 +442,7 @@
def highlight (self, x, y):
shr = self.highlight_region
hr = self.find_region (x, y)
- if hr != shr:
+ if hr and hr != shr:
if shr and shr['pixbuf']:
shr['pixbuf'].hide ()
shr['frame'].hide ()
I've applied the fix in version 1.7.99.16. | https://bugzilla.redhat.com/show_bug.cgi?id=178086 | CC-MAIN-2017-43 | refinedweb | 146 | 77.43 |
Yesterday a colleague asked me for help. She was building an addin for ESRI ArcGis and she wanted to load a WPF form and show some related data. The problem was that when loading the WPF form some of the DLL’s were searched in the Addin folder but for some of the DLL’s, .NET was looking inside the bin folder of the main application. Easy solution would be to just copy over all addin assemblies to the bin folder of ArcGis. Of course this was not what we want as it removes the advantages of the whole addin concept.
So the big question is; why are some DLL’s loaded from the correct location and others not?
Let’s have a look at the WPF form first:
In this form the services DLL was loaded correctly whereas the caliburn.micro dll(still my favorite MVVM framework
) was not. The difference is that the services DLL namespace is constructed using the clr-namespace syntax whereas the Caliburn.Micro namespace is constructed using an URL.
Where does this URL come from?. This prevents you from loading a lot of namespaces yourself as they can be grouped using one XAML namespace alias.
Unfortunately it was exactly this functionality that caused the Addin loader to look for assemblies at the wrong location.
How did we fix it?
By relying on the CLR behavior that the runtime first will check if an assembly is already loaded before it will try to search for it on the file system, we were able to solve the issue. Therefore inside the bootstrap logic of the addin we added the following line to explicitly load the Caliburn.Micro assembly into the appdomain: | http://bartwullems.blogspot.com/2016/07/wpfusing-xmlnsdefinitionattribute.html | CC-MAIN-2017-26 | refinedweb | 285 | 72.87 |
27 April 2010 17:58 [Source: ICIS news]
LONDON (ICIS news)--Dow Chemical and Saudi Aramco are reconfiguring their planned petrochemicals project in Saudi Arabia following a decision to relocate the project from Ras Tanura to Al-Jubail, sources familiar with the project said on Tuesday.
The companies have decided to relocate the joint-venture project to Al-Jubail, which is more developed than the Ras Tanura site, due to cost pressures, the sources said.
So far, neither Dow nor Saudi Aramco has commented on the decision to relocate the project.
A Dow spokesman told ICIS news earlier this month that the evaluation phase for the project was still on schedule.
The Ras Tanura project was originally estimated to cost some $20bn (€15bn), but this has escalated as project have become more expensive to implement in the current economic climate.
The project has also been downsized, with the expectation that there would be one cracker rather than two, an engineering official with knowledge of the project said. However, he added that the final list of units has not yet been made available.
Roger Green, vice president with consultancy Nexant, said: “Dow and Aramco have been thrashing out the details. It appears there is a determination to go ahead.”
Building two crackers in one go was always ambitious, Green said.
“It was massively ambitious, even in 2007, to be building on that scale. But at the time, the petrochemical market was booming, oil prices were very strong and petrochemicals in ?xml:namespace>
Since then, Green said, there has been the banking crisis, global economic downturn, the cost of executing projects has soared and oil prices have softened.
“These factors work against projects,” he said.
Dow and Saudi Aramco signed a detailed memorandum of understanding to construct and operate the Ras Tanura project in May 2007. is expected to slip by several months, according to the engineering official.
The original project would have been integrated with Saudi Aramco’s Ras Tanura refinery and received a mixed feed of naphtha and ethane. The revised project at Al-Jubail was also likely to be based on a naphtha/ethane mix, sources suggested.
The joint venture with Saudi Aramco represents Dow’s third attempt to invest in
Dow had considered partnering with SABIC in its Petrokemya subsidiary and with Saudi Aramco in its Rabigh Refining and Petrochemical Co (PetroRabigh) joint venture with Sumitomo Chemical, but neither of these proposed partnerships went ahead.
“For anyone involved in the ethylene chain, access to competitive feedstock is a key factor,” said Green. “Saudi Arabian ethane is the feedstock of choice globally.”
($1 = €0.75)
For more on Dow Chemical | http://www.icis.com/Articles/2010/04/27/9354330/dow-saudi-aramco-move-project-to-al-jubail-due-to-costs.html | CC-MAIN-2014-52 | refinedweb | 441 | 60.14 |
Introduction: How to Drive a Relay From an Arduino (Bareduino) Using the Bare Minimum of Components
So i have come up with some ideas for some projects (i may or may not post) that involves relays.
So i get my browser ready and get on to eBay and order me some relays, and wait for delivery .......
I got myself 5 pcs of "JQC-3F 5 Pin SPST Coil Power Relays DC 3V" (I am a noob at this electronics stuff) thinking that the ATMega328 pin out voltage should be able to turn on and off the relay instead of using the more traditional relay module as I like to try and simplify and make things less complicated an boil it down to the bare minimal.
Well that didn't work :(
Step 1: What Do You Need...
1 * JQC-3F 5 Pin SPST Coil Power Relays DC 3V, Currently for the bargain of £2.64 for 10 , free delivery !!! (...)
1 * Breadboard £0.99 on ebay free delivery (...)
1 * Arduino board (Or a Bareduino in my case) £6.99 , free and fast delivery(...)
Some dupont cables , £0.85 for 40 (...)
and whatever you like to turn on and off, in my case for now just an LED (i'm sure you can find one of them yourself)
Step 2: Hook It Up
Hook it up as above and upload the code from below.
What you see in the image is:
- 4 AA rechargeable batteries powering the AtMega328.
- 5V (Red) going to the open relay pin (right hand side of relay).
- Digital PIN10 (White), hooked up to one ned of the coil.
- Black Relay cable going to ground.
- And the last red cable (left of relay) going to the LED which is connected to ground.
So when pin 10 goes hight, the relay connects the 2 red cables and viola !
Initially i didn't get the relay to turn on with just one output pin, so i came up with the idea to just add one more pin (White Digital PIN 9 and Grey Digital PIN 10, Frizing image) and that did the trick.
In the video you see that somehow it can work with just one pin, but i'm not too that will work if you have the micro controller run other things at the same time.
In an effort to run the relay with as little current as possible with 2 PINs, consider messing around with a POT or add some resistors so you bring the coil current down to a minimum so it doesn't pump out 40mAh/PIN :)
Step 3: Code:
#include "LowPower.h"
void setup() {
pinMode(9, OUTPUT); pinMode(10, OUTPUT); }
void loop() {
digitalWrite(9, HIGH);
digitalWrite(10, HIGH);
LowPower.powerDown(SLEEP_1S, ADC_OFF, BOD_OFF);
digitalWrite(9, LOW);
digitalWrite(10, LOW);
LowPower.powerDown(SLEEP_2S, ADC_OFF, BOD_OFF); }
Be the First to Share
Recommendations
2 Discussions
4 years ago
Not a good idea. Before you turn the second pin high, the first pin will be shorted to ground from the second pin.
Reply 4 years ago
Are you sure, i mean the 2 pin is set to output ?
Even if thats the case one could just stick a diod in on the second pin. | https://www.instructables.com/How-to-Drive-a-Relay-From-an-Arduino-Bareduino-Usi/ | CC-MAIN-2021-04 | refinedweb | 534 | 78.28 |
On Saturday 26 July 2008 01:23:17 am Nick Coghlan wrote: > Sebastien Loisel wrote: > > However, just for posterity (and I'm not going to pursue the argument > > further than this), I'll say this. The problem of determining the > > meaning (or overridability or whatever) of x=4$6 is the same as the > > problem of determining the meaning of x=fooz(4,6). Since it's not a good > > argument against user-defined functions, I don't see it as a good > > argument against user-defined operators. > > The namespace of usefully mnemonic function names is infinitely larger > than that of usefully mnemonic punctuation marks. User-defined functions > are good, but once you have those there is no reason to have > user-defined operators *as well*. > > Cheers, > Nick. > Most mathematicians would disagree with you. I'll grant that it tends to make the code extremely obscure to those who don't work in the field, but it tends to make it much clearer to those who do so work. OTOH, it's also true that there aren't sufficient punctuation symbols. E.g. math people drafted capital sigma as sum of a series, etc. Therefore it seems to me that the appropriate thing is to create a convention that bar-somethingprintable-bar should be interpreted as a user defined operation, e.g. |+| or |@#|. The first one is reasonable as matrix multiplication, the second as some user defined operation that hasn't yet been specified (in this context). And since such things don't yet have a "secret name" I would suggest that they be defined either via an ordinary def, or via a def with their name in quotes, i.e., either: def |+| or def "|+|" If an ordinary functional reference is desireable (probably) there could be a decorators that assigned the name, and possibly the precedence, e.g.: @name=madd @precedence="+" def |+| (a, b): etc. The main drawback that I see is that code that relies heavily on this approach would become much less readable to anyone not in the particular field. Think of the first time you encountered the curl or del operators, or even the kronecker delta. OTOH, it seems far too late in the development process to be inserting such a change in Python 2.6 or 3.0. If this is important to you, you should probably propose it for 2.7/3.1. | https://mail.python.org/pipermail/python-dev/2008-July/081559.html | CC-MAIN-2016-30 | refinedweb | 398 | 60.45 |
Update spent the last couple of weeks writing sample code for ASP.NET 5/MVC 6 and I was surprised by the depth of the changes in the current beta release of ASP.NET 5. ASP.NET 5 is the most significant new release of ASP.NET in the history of the ASP.NET framework — it has been rewritten from the ground up.
In this blog post, I list what I consider to be the top 10 most significant changes in ASP.NET 5. This is a highly opinionated list. If other changes strike you as more significant, please describe the change in a comment.
1. ASP.NET on OSX and Linux
For the first time in the history of ASP.NET, you can run ASP.NET 5 applications on OSX and Linux. Let me repeat this. ASP.NET 5 apps can run on Windows, OSX, and Linux. This fact opens up ASP.NET to a whole new audience of developers and designers.
The traditional audience for ASP.NET is professional developers working in a corporation. Corporate customers are welded to their Windows machines.
Startups, in stark contrast, tend to use OSX/Linux. Whenever I attend a startup conference, the only machines that I see in the audience are Macbook Pros. These people are not the traditional users of ASP.NET.
Furthermore, designers and front-end developers – at least when they are outside the corporate prison – also tend to use Macbook Pros. Whenever I attend a jQuery conference, I see Macbook Pros everywhere (the following picture is from the jQuery blog).
Enabling ASP.NET 5 to run on Windows, OSX, and Linux changes everything. For the first time, all developers and designers can start building apps with ASP.NET 5. And, they can use their favorite development environments such as Sublime Text and WebStorm when working with ASP.NET apps (No Visual Studio required).
Take a look at the OmniSharp project to see how you can use editors such as Sublime Text, Atom, Emacs, and Brackets with ASP.NET 5:
2. No More Web Forms
I love ASP.NET Web Forms. I’ve spent hundreds – if not thousands – of hours of my life building Web Forms applications. However, it is finally time to say goodbye. ASP.NET Web Forms is not part of ASP.NET 5.
You can continue to build Web Forms apps in Visual Studio 2015 by targeting the .NET 4.6 framework. However, Web Forms apps cannot take advantage of any of the cool new features of ASP.NET 5 described in this list. If you don’t want to be left behind as history marches forward then it is finally time for you to rewrite your Web Forms app into ASP.NET MVC.
3. No More Visual Basic
It is also time to say goodbye to Visual Basic. ASP.NET 5 only supports C# and Visual Basic is left behind.
My hope is that this change won’t be too painful. I believe that there are only two people in the entire world who are building MVC apps in Visual Basic. It is time for both of you to stop it. There are good automatic converters for going from Visual Basic to C#:
4. Tag Helpers
Tag Helpers is the one feature that might have the biggest impact on the way that you create your views in an ASP.NET MVC application. Tag Helpers are a better alternative to using traditional MVC helpers.
Consider the following MVC view that contains a form for creating a new product:
@model MyProject.Models.Product @using (Html.BeginForm()) { <div> @Html.LabelFor(m => p.Name, "Name:") @Html.TextBoxFor(m => p.Name) </div> <input type="submit" value="Create" /> }
In the view above, the Html.BeginForm(), Html.LabelFor(), and Html.TextBoxFor() helpers are used to create the form. These helpers would not be familiar to an HTML designer.
Here’s how the exact same form can be created by using Tag Helpers:
@model MyProject.Models.Product @addtaghelper "Microsoft.AspNet.Mvc.TagHelpers" <form asp- <div> <label asp-Name:</label> <input asp- </div> <input type="submit" value="Save" /> </form>
Notice that this new version of the form contains only (what looks like) HTML elements. For example, the form contains an INPUT element instead of an Html.TextBoxFor() helper. A front-end designer would be fine with this page.
The only thing special about this view is the special asp-for attributes. These attributes are used to extend the elements with server-side ASP.NET MVC functionality.
Damien Edwards put together an entire sample site that uses nothing but Tag Helpers here:
5. View Components
Goodbye subcontrollers and hello View Components!
In previous versions of ASP.NET MVC, you used the Html.Action() helper to invoke a subcontroller. For example, imagine that you want to display banner ads in multiple views. In that case, you would create a subcontroller that contained the logic for returning a particular banner advertisement and call the subcontroller by invoking Html.Action() from a view.
Subcontrollers – the Html.Action() helper — are not included in the current beta of MVC 6. Instead, MVC 6 includes an alternative technology called View Components.
Here’s how you can create a View Component that displays one of two banner advertisements depending on the time of day:
using Microsoft.AspNet.Mvc; using System; namespace Partials.Components { public class BannerAd : ViewComponent { public IViewComponentResult Invoke() { var adText = "Buy more coffee!"; if (DateTime.Now.Hour > 18) { adText = "Buy more warm milk!"; } return View("_Advertisement", adText); } } }
If the time is before 5:00pm then the View Component returns a partial named _Advertisement with the advertisement text “Buy more coffee!”. If the time is after 5:00pm then the text changes to “Buy more warm milk!”.
Here’s what the _Advertisement partial looks like:
@model string <div style="border:2px solid green;padding:15px"> @Model </div>
Finally, here is how you can use the BannerAd View Component in an MVC view:
@Component.Invoke("BannerAd")
View Components are very similar to subcontrollers. However, subcontrollers were always a little odd. They were pretending to be controller actions but they were not really controller actions. View Components just seem more natural.
6. GruntJS, NPM, and Bower Support
Front-end development gets a lot of love in ASP.NET 5 through its support for GruntJS (and eventually Gulp).
GruntJS is a task runner that enables you to build front-end resources such as JavaScript and CSS files. For example, you can use GruntJS to concatenate and minify your JavaScript files whenever you perform a build in Visual Studio.
There are thousands of GruntJS plugins that enable you to do an amazing variety of different tasks (there are currently 4,334 plugins listed in the GruntJS plugin repository):
For example, there are plugins for running JavaScript unit tests, for validating the code quality of your JavaScript (jshint), compiling LESS and Sass files into CSS, compiling TypeScript into JavaScript, and minifying images.
In order to support GruntJS, Microsoft needed to support two new package managers (beyond NuGet). First, because GruntJS plugins are distributed as NPM packages, Microsoft added support for NPM packages.
Second, because many client-side resources – such as Twitter Bootstrap, jQuery, Polymer, and AngularJS – are distributed through Bower, Microsoft added support for Bower.
This means that you can run GruntJS using plugins from NPM and client resources from Bower.
7. Unified MVC and Web API Controllers
In previous versions of ASP.NET MVC, MVC controllers were different than Web API controllers. An MVC controller used the System.Web.MVC.Controller base class and a Web API controller used the System.Web.Http.ApiController base class.
In MVC 6, there is one and only one Controller class that is the base class for both MVC and Web API controllers. There is only the Microsoft.AspNet.Mvc.Controller class.
MVC 6 controllers return an IActionResult. When used as an MVC controller, the IActionResult might be a view. When used as a Web API controller, the IActionResult might be data (such as a list of products). The same controller might have actions that return both views and data.
In MVC 6, both MVC controllers and Web API controllers use the same routes. You can use either convention-based routes or attribute routes and they apply to all controllers in a project.
8.. You can interact with an MVC 6 controller from an AngularJS $resource using REST.
9. ASP.NET Dependency Injection Framework
ASP.NET 5 has built-in support for Dependency Injection and the Service Locator pattern. This means that you no longer need to rely on third-party Dependency Injection frameworks such as Ninject or AutoFac.
Imagine, for example, that you have created an IRepository interface and an EFRepository class that implements that interface. In that case, you can bind the EFRepository class to the IRepository interface in the ConfigureServices() method of the Startup.cs class like this:
services.AddTransient<IRepository, EFRepository>();
After you bind EFRepository and IRepository then you can use constructor dependency injection in your MVC controllers (and any other class) using code like this:
public class ProductsController : Controller { private IRepository _repo; public ProductsController(IRepository repo) { _repo = repo; } }
In the code above, the IRepository interface is passed to the constructor for the ProductsController. The built-in ASP.NET Dependency Injection framework passes EFRepository to the ProductsController because IRepository was bound to EFRepository.
You also can use the Service Locator pattern. Wherever you can access the HttpContext, you can access any registered services. For example, you can retrieve the EFRepository by using the following code inside of an MVC controller action:
var repo = this.Context.ApplicationServices.GetRequiredService<IRepository>();
10. xUnit.net
Goodbye Visual Studio Unit Testing Framework and hello xUnit.net!
In previous versions of ASP.NET MVC, the default testing framework was the Visual Studio Unit Testing Framework (sometimes called mstest). This framework uses the [TestClass] and [TestMethod] attributes to describe a unit test:
[TestClass] public class CalculatorTests { [TestMethod] public void TestAddNumbers() { // Arrange var calc = new Calculator(); // Act var result = calc.AddNumbers(0, 0); // Assert Assert.AreEqual(0, result); } }
ASP.NET 5 uses xUnit.net as its unit test framework. This framework uses the [Fact] attribute instead of the [TestMethod] attribute (and no [TestClass] attribute]):
public class CalculatorTests { [Fact] public void AddNumbers() { // Arrange var calculator = new Calculator(); // Act var result = calculator.AddNumbers(1, 1); // Assert Assert.Equal(result, 13); } }
If you look at the source code for ASP.NET 5 then you’ll see that xUnit.net is used to test ASP.NET extensively. For example, the MVC repository contains unit tests written with xUnit.net. You can take a look at the MVC repository (and its unit tests) here:
ASP.NET uses a fork of xUnit.net that is located here:
Please, revise the VB affirmation… If you aren’t strictly correct can cause FUD
I’m getting my information from the official aspnet repository: “ASP.NET 5 is C# only at this point and that will not change before we RTM. We plan to have extensibility points so other languages like VB, F#, etc can be added via the form of a support package or such.” See
“No More Visual Basic”… Wait! What? Is this official and no support planned in the future? I always assumed VB was coming later and C#-only was just how the CTP was.
@Stilgar – according to this issue from the aspnet repository “ASP.NET 5 is C# only at this point and that will not change before we RTM”. See
My understanding is that this answer says VB.NET support is coming in some form.
@Stilgar — Take a look at this SO answer “There are no plans to support VB in ASP.NET 5 in terms of compilation, project templates, and other tools…ASP.NET 5 has some in-progress support to enable non-C# compilers to be used, but there is still no official plans to support VB (you’d have to roll your own).”
See
Stephen is correct. VB is not part of ASP.NET future. End of story. OTOH, it will be around for a long time regardless.
If you read the GitHub question linked in that SO discussion, you’d see that it’s C# until RTM. That’s all. There has been no definitive statement about Microsoft shoving VB.NET out to pasture, and the part in this blog post was significantly misinforming I was also more than a little unprofessional and snide.
VB.NET’s usage rate is still considerably high, and many enterprise customers of Microsoft have significant investments in VB.NET. The numbers can be seen in Forester studies, as well as various articles and surveys in places like InfoWorld. I really don’t see not being able to create an ASP.NET 5 project in VB.NET after RTM.
While it may be interesting to some on a meta level that fanboy flaming applies not only to Android vs. IOS, Mac vs. Windows, and Java vs. .NET, but that some even extend it to different .NET languages, despite being in the same development environment, frameworks, with nearly identical il code, etc… That said, those types of debates usually still end up very much a waste of time.
I’d wait until some definitive information from Microsoft, rather than a derogative aside misinterpreting a SO post referencing another post, etc…
Could have sworn that was originally
“… and it was also more than a little unprofessional and snide.”
rather than:
“… I was also more than a little unprofessional and snide.”
Oops, completely different meaning. 😉
I think some of the things are taken a bit out of context.
Like 10. xUnit.net. Yes it is true that the ASP.NET team are using xUnit, but as stated here “ they plan on supporting MSTest.
Also they are adding new features to web forms like they say:
But we can totally agree that people should move on.
Interesting statement on Web Forms support – that seems to conflict with the statement on the ASP.NET 5 web site, which reads: “You can continue developing Web Forms apps and have confidence that Web Forms is an essential part of the .NET web development platform. We remain focused on adding new features to Web Forms to improve the development experience and keep the technology up-to-date with web practices.”
Sounds like a bit of creative salesmanship to me. While they are adding features to WebForms, there is NO support for .Net 5.0 (and likely will never have it), so it’s more or less on the track to retirement. Imagine if a product that only works on .Net 2.0 kept getting features: I’d call it a dead/dying product even if people are still using it since the product can’t take advantage of new _language_ features.
Yes, Web Forms and VB are dead end technologies in the Microsoft world. It’s not like you couldn’t see it coming. OTOH, no one is making you upgrade.
I find it hard to believe they are going to drop support for webforms. One Application I support has 100s of webforms. We have two developers. It would takes two full years of work to convert it to MVC.
@Todd — There is no evidence that Microsoft is planning to drop product support for Web Forms or VB.NET. In fact, Microsoft is continuing to invest in Web Forms by adding new features. As I mention in the blog post above, you can continue to build Web Forms apps using Visual Studio 2015.
However, that being said, Web Forms and VB.NET are not part of ASP.NET 5. So you cannot take advantage of many of the new ASP.NET 5 features described in the blog post — such as OSX/Linux support or GruntJS support — in a Web Forms or VB.NET app.
The bottom line is that if I was forced to decide on the ASP.NET technology to use for building a new Web application today then I would use C#/MVC with ASP.NET 5.
See
Rob Conery said Jim Newkirk created xUnit.net with Brad Wilson; kzu switched to xUnit.net for moq … c# now leaving VB in the dust … it’s about time … nevertheless, i’m flabbergasted … please excuse me while i go and check a printed calendar … i need to assure myself that today is not April 1st
I expect that there will be a publicly available package before it goes gold “We plan to have extensibility points so other languages like VB, F#, etc can be added via the form of a support package or such.” there are just too many developed apps to leave it swinging going forward. As far a requirement to use MVC, I am sure that will change as well.
You are in denial. Web forms and VB are being left out of the new lean and mean .NET 5.0. It’s a subset and will never have those features. Sorry if that ruins your day but this was their intention and they have been saying this for over 2 years.
But Microsoft has obviously not been explaining it very clearly otherwise Stephen wouldn’t have needed to reference a github issue It would have been laid out loud and clear by Microsoft by someone like Scott Guthrie or Scott Hanselman in a clear fashion.
Thank God, no more VB crap!
I have written exactly zero lines of VB.NET but it always seemed like a fine language to me. I don’t see why anyone would consider it “crap”
The fact that you haven’t written a line of it, AT ALL, to me, speaks volumes of why people consider it “crap”. Maybe not “crap” but not worthy of dual support. I’ve seen resumes that say “No VB under any circumstances”.
If you had a brand new project, VB would not likely be first choice for language for a jillion reasons.
If the world labeled every language I have not used “crap” or not worthy of support there would be very few languages indeed. Whole industries would be in trouble because I never used the languages they were built upon 🙂
> For the first time in the history of ASP.NET, you can run ASP.NET 5 applications on OSX and Linux.
Nope, ASP.NET MVC app have been supported on Mono for a long time. I’m running two right now (and have for 2 or 3 years).
> GruntJS, NPM, and Bower Support
This is a Visual Studio feature, not an ASP.NET feature. Also Gulp is better than Grunt, and both are supported.
> No More Web Forms
Web forms are still supported. From the ASP.NET site:
> You can continue developing Web Forms apps and have confidence that Web Forms is an essential part of the .NET web development platform. We remain focused on adding new features to Web Forms to improve the development experience and keep the technology up-to-date with web practices.
@Daniel — good point about Mono. I should have written “For the first time, the very same ASP.NET framework that runs on Windows will run on OSX/Linux”. Mono, unlike the new ASP.NET 5, is a separate code base.
And, I am not saying that Web Forms won’t be supported — I’m just pointing out that Web Forms won’t be part of ASP.NET 5. So, for example, you won’t be able to run Web Forms on OSX/Linux using ASP.NET 5.
I’d like to see some type safety in tag helpers, because “form asp-controller=’Products’ asp-action=’Create’ … ” looks too brittle for refactoring and compiler checking
Is there any plans to support that?
Otherwise the only way I can think of is code generation with a global list of strings for controller names, plus the same list of actions for each.
Unless you’ve been precompiling your views they’ve never been particularly friendly towards refactoring or type checking. Just yesterday I refactored a property name on my model and forgot to update the view and it blew up at run-time.
No VB and no Web Forms are the right decision.
I hope MS stands their ground and doesn’t change their direction. As a former VB.NET developer I can truthfully say that learning C# was the right decision for me. I love the direction that Microsoft is going with ASP.NET, Visual Studio, Azure, C#, etc. etc.
This is a great time to be a developer.
I hope MS has plans for items that don’t work in MVC like report viewer control, etc. I love Mac and Linux integration, but am surprised about VB.
“Tag Helpers are a better alternative to using traditional MVC helpers.” – I disagree:
@glen – I notice that they recently added a feature to add prefixes to tags used by Tag Helpers. See — it appears that this feature is optional which should make everyone happy 🙂
Hi stelhen.
Could you please give me more resources to read about this:
“You can interact with an MVC 6 controller from an AngularJS $resource using REST” ..
@mohammad – take a look a the following blog post
How do we use Tag Helpers in place of EditorFor? These don’t seem like replacements for that.
‘you can run ASP.NET 5 applications on OSX’
Presumably you mean so long as a dev ide is installed, or a client web server, unless you are talking about Cordova apps? Thank you in advance for any clarification.
@rod — ASP.NET Core 5.0 is truly cross-platform: you can run ASP.NET on OSX/Linux in the same way as you can on Windows. ASP.NET includes a web server named Kestrel that you can run from the command line and use to serve ASP.NET 5 Web applications. This is not dependent on Visual Studio. You can build ASP.NET apps using Sublime Text or (for that matter) TextEdit.
…well that is really good news indeed – kestrel it is, thank you. Hopefully kestrel might just run on demand so the user doesn’t have to start it up.
I’ve suggested a couple of things on Scott G’s site. I know open source is all the buzz for a lot of non-Windows devs but, having come back to .NET after several years (I’m a production man first, coder second) my head really spins over all this angular/grunt/bower stuff (the list goes on). I just wish it was all VS vNext. How about Anders writing a nice C# compiler that outputs all these JavaScript implementations so I don’t have to reach for the aspirins in the cupboard?
And finally a whacky idea. Back in 2000 when .NET was about to be launched, we marvelled at the simplicity of web + Windows (UI) apps and the prospect of common business and data layers. I hoped for the day when there was a single control set to go even further. What about rendering HTML and all this ASP.NET stuff for non-Windows devices whilst streaming XAML Universal Apps elsewhere i.e. finally one set of controls?
Thanks for listening. R
The majority of start-up developers using OSX/Linux are unlikely to use ASP.NET. Why? Clearly they opted not to purchase a Microsoft Surface or 3rd party machine running Windows OS. So, it is somewhat reasonable to say they are likely to use non-Microsoft web development technology as well e.g. PHP etc.
MVC has been around for decades. Why Microsoft did not go with ASP.NET MVC circa 2002 immediately was in hind sight a poor choice. Instead, they went with ASP.NET Web Forms and now millions of developers know it very well. However, these days ASP.NET MVC is getting all the love. Subsequently, many ASP.NET Web Forms developers have abandoned Microsoft in favour of other technologies that stay the course.
You say “it is finally time for you to rewrite your Web Forms app into ASP.NET MVC”.
So for the thousands of people, businesses and corporates that have spent a decade developing web forms apps, do you really mean throw it away and start from scratch?
Who can justify the time? Who pays for it? What real value does it provide? None. Sure for new projects MVC should be considered (except for say intranets). Sorry and no offence, but “rewrite your apps” is not the best advice.
GruntJS, NPM, and Bower. My prediction is they will disappear by 2020 in favour of something new. Cool kids get bored and move on to something different when the crowd copies them.
Microsoft should chart its own roadmap and stick to it. It is what true leaders do. Trying to be all things to all people is hard and usually fails.
I agree with a lot of your points. For the company I work for “rewrite your apps” is not on option for us anytime soon if at all. We have far too many web forms sites and a whole in-house framework built using web forms. We are starting to add MVC but there’s no easy quick way for us to switch our Web Forms to MVC.
We’ve been burned by Microsoft many times. I like a lot of what they’re doing now but man there are times that they really make things difficult. Naturally we are using VB.NET so if that’s not supported that’s a big deal. Not from a learning c# standpoint but just from a consistency perspective. One assembly in VB.NET and one in C#. Not that big a deal but not ideal.
100% agree! Microsoft completely lacks a consistent roadmap for web/server development. Web development is shifting from backend rendering to frontend rendering (JavaScript…). We are still in time of transition. But frontend rendering will get much more complex, more logic will go to front end etc. As a result, abstraction is crucial. They need some way to build a typed GUI that is decoupled from JavaScript logic.
In my opinion the biggest failure was to hear the designer guys that want control over HTML formatting like Razor View. HTML is just fucking markup. No one cares if it looks ugly or whatever as long as you follow the specifications. Razor is the most ugliest and worst piece of view that was ever developed by Microsoft. Even old school MFC C++ is much more logical and easier to understand. Because there is ABSTRACTION to solve problems. You always need abstraction to solve complex problems.
Microsoft needs to do:
– build JavaScript APIs for frontend development
– build a XAML parser that generates the markup and interacts with the JavaScript libraries
– commit to ONE communication API. I still don’t see why we need WebApi in the backend. WCF is much more powerful and easier to configure.
Stephen, congrats, you’ve found those two developers that use VB for ASP.NET! 😀 Great and witty post.
Vytautas you’re jumping to conclusions. (But I do see the humour in your comment)
I write code in C#.
anyone would like to discuss why we should start using x-unit for unit testing instead of MS test? people should discuss the advantages of x-unit for which we should use it and also should discuss the down side of ms-test. looking for information. thanks
No more VB and Web Forms? Hallelujah! Time to move on and expand your horizons 90’s developer guy.
Dear Stephen,
Being ASP.NET WebForms Developer (both C# and VB), Finally It’s time to completely learn MVC. 🙂
It’s time to upgrade skills (Classic ASP 3.0 -> ASP.NET WebForms -> ASP.NET MVC)
Please publish books for ASP.NET 5 and MVC 6 ASAP..! | https://stephenwalther.com/archive/2015/02/24/top-10-changes-in-asp-net-5-and-mvc-6 | CC-MAIN-2022-21 | refinedweb | 4,637 | 67.25 |
This blog post will guide us through the steps required to configure the SAP S/4HANA Cloud SDK Continuous Delivery Toolkit to scale dynamically on a Kubernetes cluster.
Note: This post is part of a series. For a complete overview please visit the SAP S/4HANA Cloud SDK Overview.
The goal of this blog post
When we decide to set up our first continuous delivery infrastructure, the first question that arises is how many resources we need to reserve. Answering this question is not easy. Reserving limited resources could throttle concurrent build-pipeline executions. On the other hand, if we reserve enough resources to support concurrent build-pipeline executions, we may end up wasting them: there is a good chance that these resources stay idle for most of the day.
Autoscaling of the infrastructure solves this problem. The feature has been introduced in the latest release of the SAP S/4HANA Cloud SDK Pipeline. Instead of reserving resources proactively, the pipeline creates Jenkins agents dynamically on a Kubernetes cluster during the execution. Once an agent completes its dedicated task, it is deleted and the resources are freed.
Infrastructure component failures such as a node crash are hard to predict and prevent. If we use the dockerized approach as introduced here, we need to establish an additional mechanism to manage such failures. For example, we may need an infrastructure health monitoring tool such as Nagios to identify the failure, and a fallback script to start services on a backup node.
Thanks to Kubernetes, monitoring and self-healing come out of the box. Kubernetes performs regular health checks of the infrastructure and the services. If any infrastructure component or service fails its health check, Kubernetes creates a new component and decommissions the old one. If the degraded component is a node, Kubernetes creates a new node and also ensures that the services that were running on it are moved to a healthy node. If the Jenkins master pod fails, Kubernetes spins up a new pod and the state is restored by reusing the persistent volume. If a pod fails while executing a pipeline stage, it is re-created by Kubernetes without propagating the failure to the pipeline.
In the following sections, we will see how to make use of the autoscaling feature. For a better understanding of this article, please go through the following tutorials first:
Note: The Kubernetes support in SAP S/4HANA Cloud SDK is currently offered only as an experimental feature.
Prerequisite
The current version of the SAP S/4HANA Cloud SDK Pipeline supports autoscaling only if the Jenkins master is also set up on a Kubernetes cluster. To begin with, we need a Kubernetes cluster on which we will set up Jenkins using the Jenkins helm chart. Helm is a package management tool for Kubernetes. The documentation to install Jenkins using helm can be found here. To use the Jenkins image provided by the SAP S/4HANA Cloud SDK, we have to pass s4sdk/jenkins-master as the value for the Master.Image command line argument while deploying Jenkins to Kubernetes.
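For illustration, such a deployment could look as follows. This is a minimal sketch assuming the stable/jenkins chart and Helm 2 syntax; the release name cicd-jenkins, the namespace jenkins, and the Master.ImageTag value are placeholders, not fixed values.

# Deploy the s4sdk Jenkins master image via the Jenkins helm chart
helm install stable/jenkins \
  --name cicd-jenkins \
  --namespace jenkins \
  --set Master.Image=s4sdk/jenkins-master \
  --set Master.ImageTag=latest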
The successfully completed deployment consists of a Jenkins pod with ports 80 and 50000 exposed for HTTP and internal JNLP traffic, respectively. The deployment also creates two services: one listening for incoming HTTP traffic on port 80 and one for the internal JNLP traffic on port 50000. Please note that in this example setup the SSL/TLS termination happens at the load balancer; hence, all the traffic between the load balancer and the Jenkins pod is unencrypted.
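We can verify the result with kubectl; the namespace below is the placeholder chosen above, and the actual pod and service names depend on the helm release name.

# The pod should be in status Running, and both services should be listed
kubectl get pods -n jenkins
kubectl get services -n jenkins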
Kubernetes Plugin Configuration
The SAP S/4HANA Cloud SDK makes use of the Jenkins Kubernetes plugin. It comes pre-installed with the latest version of the s4sdk/jenkins-master docker image. If we use helm to install Jenkins, the plugin is automatically configured by the helm chart and we can skip this section. However, if we use any other means to install Jenkins, we need to configure the plugin to establish the connectivity to the Kubernetes cluster. To do that, navigate to the Manage Jenkins menu on the left-hand side of our Jenkins welcome page.
Further navigate to the Configure System > Cloud section and configure the Kubernetes cluster details by entering a value in the Kubernetes URL field. Please use credentials that have edit access to the namespace configured in the Kubernetes Namespace field. Click on Test Connection.
Each agent is created as a new pod, a logical node with the collection of containers that are required to execute the pipeline stage. The agent communicates with the master using the JNLP protocol. In this example setup, we are using port 50000 for internal JNLP traffic. Enter the value for the Jenkins tunnel field, which is the service name followed by the port number (jenkins-agent:50000). Please note that the tunnel value must not be prefixed with the protocol.
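For reference, the JNLP service created by the helm chart is conceptually equivalent to the following sketch; the service name jenkins-agent and the selector label app: jenkins are assumptions that must match the actual Jenkins master deployment.

apiVersion: v1
kind: Service
metadata:
  name: jenkins-agent
spec:
  selector:
    app: jenkins        # must match the labels of the Jenkins master pod
  ports:
    - name: agentlistener
      port: 50000
      targetPort: 50000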
Environment Variable
The SAP S/4HANA Cloud SDK Continuous Delivery Pipeline needs an environment variable set in Jenkins to make use of the autoscaling feature. In order to set the environment variable, navigate to Manage Jenkins > Configure System > Global Properties. Add an environment variable ON_K8S and set the value to true.
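As a quick sanity check, a throwaway pipeline job can print the variable; this is just an illustrative snippet, not part of the SDK pipeline itself.

node {
    // Should print "ON_K8S = true" once the global property is set
    echo "ON_K8S = ${env.ON_K8S}"
}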
Pipeline configuration
Now Jenkins is ready to run our project pipeline as described here. Every dynamic agent is created as a new pod with a JNLP agent container. By default, the jnlp-agent docker image is used. However, if our Jenkins has a TLS/SSL configuration which terminates at the pod level, we need to provide a custom JNLP agent image with a valid certificate. Once we publish the custom JNLP image to a secured location, we configure it in our pipeline configuration file pipeline_config.yml.
Note: The Jenkins master server contains a user jenkins with ID 1000. hence, the JNLP agent is expected to have a user with ID 1000 as well to avoid access issues to the files that are shared by both the master and the agent.
#Project Setup general: jenkinsKubernetes: jnlpAgent: 'custom-jenkins-agent-k8s:latest'
That’s all the configuration required to benefit from the autoscaling feature of SAP S/4HANA Cloud SDK Pipeline. Push these changes to the repository. Now, the Jenkins agents are dynamically created and resources are allocated on the fly from the Kubernetes cluster for each stage of the pipeline as shown below.
Troubleshooting
Check connectivity to the Kubernetes cluster
We can test our configuration using the below example Jenkinsfile. Please replace the value for the image if we are using the custom JNLP image.
def podName= 'testing' podTemplate(label: podName, containers: [ containerTemplate(name: 'jnlp', image: 's4sdk/jenkins-agent-k8s:latest'), containerTemplate(name: 'testcontainer', image: 'alpine:latest', ttyEnabled: true, command: 'cat') ]) { node(podName) { stage('Sanity check') { container('testcontainer') { echo "I am inside a container" } } } }
Connection to Kubernetes cluster is not established
Please make sure that the URL to the Kubernetes cluster is valid. We can get the URL by executing the kubectl cluster-info command. Make sure that the credential that we are using has the authorization to create and list pods in a namespace that we are using.
The agent is created but suspended
This issue can arise due to following two reasons
The service is not created to listen to the JNLP traffic: Please make sure that there is a Kubernetes service which is listening to the JNLP traffic
Misconfigured certificates in Jenkins master and agent: Please make sure that master and the agent uses the same certificate if they have SSL/TLS encryption enabled for communication.
Cannot create file exception
If the Jenkins master and the agent have different user IDs then there will be an issue while accessing files that are created by each other. We might notice errors as shown below.
sh: 1: cannot create /home/jenkins/workspace/my-project/durable-5bb5cd84/jenkins-log.txt: Permission denied sh: 1: cannot create /home/jenkins/workspace/my-project/durable-5bb5cd84/jenkins-result.txt.tmp: Permission denied touch: cannot touch '/home/jenkins/workspace/my-project/durable-5bb5cd84/jenkins-log.txt': Permission denied
Please make sure that the Jenkins master and agent have same user IDs. By default, both s4sdk/jenkins-master and s4sdk/jenkins-agent-k8s uses user ID 1000.
Conclusion
The SAP S/4HANA Cloud SDK Continuous Delivery Toolkit enables users to scale the infrastructure on demand. We will benefit from the autoscaling feature only if the infrastructure resources (Kubernetes cluster) are shared among multiple teams (projects) or if we have a Kubernetes cluster that supports autoscaling. However, there could be a noticeable penalty on the on the performance due to network overhead and pod spin up time if used on a slow infrastructure in comparison to the standalone setup.
Questions and Feature Requests
Feel free to reach out to us on Stackoverflow via our s4sdk tag. We actively monitor
this tag in our core engineering teams.
You can also leave us a comment or a question in SAP Community using the SAP S/4HANA Cloud SDK tag.
If you would like to report a bug or have an idea for a feature request, you can create a corresponding issue in our GitHub repositories:
S/4HANA Cloud SDK Build Pipeline | https://blogs.sap.com/2018/09/26/autoscaling-of-sap-s4hana-cloud-sdk-continuous-delivery-toolkit-on-kubernetes/ | CC-MAIN-2018-43 | refinedweb | 1,549 | 52.39 |
by Zoran Horvat
Feb 16, 2021
Implement a queue structure which exposes operations: Enqueue, which adds an element to the end; Dequeue, which removes an element from the beginning; and GetMaxValue, which returns maximum value currently stored in the queue, without removing it. Each time current maximum is dequeued, the queue’s GetMaxValue operation should reflect the new largest element that has retained.
Example: Below is a sequence of enqueue/dequeue operations on a queue which is initially empty. In each step, state of the queue and maximum value (which would be returned by GetMaxValue) are indicated.
If the queue were only to implement adding items (the Enqueue operation), finding maximum of values stored in the queue would be straight-forward. We would maintain current maximum value and update it with every new element added.
Enqueue(x) if queue_is_empty OR x > current_max current_max = x -- add x to the end
As more items are added to the queue, the current_max value would track maximum out of all values seen so far.
However, there is the culprit, the Dequeue operation, which can remove current maximum, or leave it in the queue, depending on whether the largest value has travelled all the way to the beginning, or it is still stored somewhere behind in the queue. Unfortunately, we don’t know which of the two had happened, even in cases when the dequeued value equals the current maximum. Observe the following order of operations, for an example when dequeuing the maximum value doesn’t change the overall maximum.
enqueue 2 [2] max=2 enqueue 1 [2, 1] max=2 enqueue 2 [2, 1, 2] max=2 dequeue 2 [1, 2] max=2
Even though the Dequeue result equals current maximum, the maximum value has remained, because there is another instance of value 2 remaining in the queue. Therefore, it looks like we should somehow refresh maximum, at least in situations when dequeued value equals previous maximum. That is to say that we don’t have to refresh current maximum when dequeued value is smaller – in that case, it is obvious that the largest value is still somewhere in the queue.
Dequeue() output = DequeueInternally() if output == current_max current_max = RecalculateMaximum()
While this implementation would certainly work, the problem is with efficiency. The RecalculateMaximum operation apparently has no idea where the next maximum could be located, and therefore must pass through all the items remaining in queue to find the one that is largest among them. That operation looks to take time proportional to number of elements currently residing in the queue. Could we do better?
It is common in software engineering to solve such problems by designing a data structure which can efficiently serve the result we need.
In this exercise, we conclude that a common queue, which we imagine as a linear structure of consecutive items, is not appropriate, because it doesn’t guarantee efficient calculation of the next maximum value when we need it. We shall now focus on implementing a specialized data structure which solves the efficiency problem.
Queue is the FIFO structure – first-in, first-out – and that has caused issues in this exercise, because FIFO structure is notoriously bad at propagating information backwards. We want the information about candidates for new maximum readily available, but that information is stored behind the head element, and not readily available. That is where it becomes apparent that FIFO structures cannot propagate information backwards efficiently.
However, LIFO structure – last-in, first-out – also known as stack, is a perfect match for the problem of backpropagating information. As each item is pushed to the stack, a piece of its information can be retained aside, to affect other elements that will be pushed later, when this element is already deep under the top.
Potential to solve the problem of calculating maximum among remaining elements is there, and we only need to solve the problem of turning LIFO structure into FIFO structure. To address that issue, we remember that if we pass items through two LIFO structures in succession, their order would be reverted twice, and therefore retained – two stacks can be used as a queue, and now we shall construct the data structure based on that idea.
We could place two queues next to each other. The one on the left would accept new elements passed to Enqueue, and the one on the right would serve elements when Dequeue is called. However, being stacks, both can maintain the running maximum, i.e., each element can maintain the maximum of itself and any items below it. Therefore, the stacks would store pairs of values (item, running_max). Topmost element in each stack would indicate the maximum of all items in that stack. Combination of two maximums is therefore the overall maximum in the entire structure.
First things first, the following picture shows state of both stacks after four items were enqueued: 3, 5, 2 and 4, in that order.
As each value is pushed to the left-hand stack, and the running maximum is recalculated along the way (the second item in the pair). If the caller wanted to know current maximum, that would be 5, because top-most item is indicating maximum value 5. That is when Dequeue is called for the first time, and we must procure the first item that was enqueued.
Stack on the left is offering items, but in the reverse order. We need to reverse them once more before we can get grasp of the first element that was enqueued. That is where the right-hand stack comes to the picture.
If the right stack is empty, then we shall pop all pairs from the left stack and push them to the right stack. After that, we can simply pop one element from the right stack, as the first item added to the structure. That is the process shown in the picture below.
Note, however, that the running maximum is recalculated through this process. That is precisely how this data structure works. Every dequeue which causes items to be pushed to the right stack will cause the maximum of all elements that remain in the queue to be recalculated on the fly. Overall maximum will then be readily available.
As the final demonstration, we can see what will change when another value is enqueued. That value would be pushed to the left queue, and it will contribute to the maximum calculation as usual.
Picture below shows typical state that appears after a few Enqueue and Dequeue operations, where both stacks are non-empty. Maximum is then calculated by taking the larger of the two maximums from top pairs of each stack. In the picture below, that is maximum of 1 and 5, which is, of course, 5.
This has demonstrated that we can apply two stacks to implement a queue which serves maximum value at immediate notice. Before implementing this structure, we shall analyze its performance, so that we can assess how efficient it is compared to a common queue.
We can start from the viewing point of a single item which passes through the whole lifecycle of being enqueued, then being silently transferred to the right stack, until it is finally dequeued and removed from the data structure altogether.
Total cost of passing one item through the queue is then: two times push and pop (first to left stack, and then to right stack), and two comparisons to calculate the running maximum. That is the grand total of operations we perform per one item enqueued and eventually dequeued from the queue. We find, therefore, that enqueue and dequeue operations average O(1) cost. For a sequence of N items that are passed through the queue during an entire operation, we see that the queue will operate in O(N) time.
The trouble is that this calculation is only correct at average. True behavior of this structure is that we shall pay only half the price at almost every enqueue or dequeue operation, leaving the other half for a later occasion. That occasion will happen when dequeue stack is empty, and at that moment we shall pay the remaining half of the cost for all items currently residing in the queue. That may at times be quite expensive, causing program to look like frozen for a period of time. Please keep this issue in mind and assess whether that is acceptable in whatever problem domain you plan to apply this algorithm.
It is a bit easier to assess performance of the GetMaxValue operation because it boils down to comparing running maximums from tops of the two stacks. That is obviously an O(1) operation.
Space complexity is also easy to assess. Each item is only stored once in either of the stacks, and it is accompanied by exactly one running maximum. Since stacks take space proportional to size of items they store, we conclude that this data structure requires O(M) space, where M is number of items concurrently residing in the queue.
Therefore, we conclude that queue constructed with use of two stacks exhibits O(1) time complexity on average for all operations: Enqueue, Dequeue and GetMaxValue. Overall complexity of passing N items through the queue is O(N). Overall space complexity when M items are concurrently stored in queue is O(M). That leaves little room for improvement.
Below is the entire implementation of the queue structure in C#. It is based on standard Stack class. It is also a constrained generic class, which means that it can be applied to any type of items, guaranteed that they can be compared in order to choose the larger of the two.
Class below is following common .NET rules, which means that it has a few additional methods besides those that were requested. The GetMaxValue method is implemented as the Max property, once again following common .NET practice.
Code is telling more than words, so here is the entire queue implementation.
using System; using System.Collections; using System.Collections.Generic; using System.Linq; namespace Demo { public class MaxQueue<T> : IEnumerable<T> where T : IComparable<T> { private Stack<(T item, T max)> EnqueueStack { get; } = new Stack<(T item, T max)>(); private Stack<(T item, T max)> DequeueStack { get; } = new Stack<(T item, T max)>(); public int Count => this.EnqueueStack.Count + this.DequeueStack.Count; public void Enqueue(T item) => this.Push(item, this.EnqueueStack); public T Dequeue() { if (this.Count == 0) throw new InvalidOperationException(); this.EnsurePopAvailable(); return this.DequeueStack.Pop().item; } public T Max => this.EnqueueStack.Count > 0 && this.EnqueueStack.Peek().max is T leftMax ? this.GetMax(leftMax) : this.GetRightMax(); private T GetMax(T left) => this.DequeueStack.Count > 0 && this.DequeueStack.Peek().max is T right && right.CompareTo(left) > 0 ? right : left; private T GetRightMax() => this.DequeueStack.Count > 0 ? this.DequeueStack.Peek().max : throw new InvalidOperationException(); private void Push(T item, Stack<(T item, T max)> stack) { T max = stack.Count > 0 && stack.Peek() is (T _, T prevMax) && prevMax.CompareTo(item) > 0 ? prevMax : item; stack.Push((item, max)); } private void EnsurePopAvailable() { if (this.DequeueStack.Count == 0) { this.PourIntoDequeueStack(); } } private void PourIntoDequeueStack() { while (this.EnqueueStack.TryPop(out (T item, T _) tuple)) { this.Push(tuple.item, this.DequeueStack); } } public IEnumerator<T> GetEnumerator() => this.DequeueStack .Concat(this.EnqueueStack.Reverse()) .Select(tuple => tuple.item) .GetEnumerator(); IEnumerator IEnumerable.GetEnumerator() => this.GetEnumerator(); public override string ToString() => "[" + string.Join(", ", this.StringItems) + "]"; private IEnumerable<string> StringItems => this.Select(item => item.ToString()); } }
Below is a console program which supplies sample values to the MaxQueue<int>, reporting content and maximum values returned by the queue after each operation.
using System; namespace Demo { static class Program { static void Main(string[] args) { new MaxQueue<int>() .ReportEnqueue(3) .ReportEnqueue(5) .ReportEnqueue(2) .ReportEnqueue(4) .ReportDequeue() .ReportEnqueue(1) .ReportDequeue() .ReportDequeue() .ReportDequeue(); Console.Write("Press ENTER to exit . . . "); Console.ReadLine(); } static MaxQueue<int> ReportEnqueue( this MaxQueue<int> queue, int value) { queue.Enqueue(value); Report("enqueue", value, queue); return queue; } static MaxQueue<int> ReportDequeue( this MaxQueue<int> queue) { int value = queue.Dequeue(); Report("dequeue", value, queue); return queue; } static void Report( string operation, int value, MaxQueue<int> queue) => Console.WriteLine( $"{operation,7} {value} {queue,-12} max={queue.Max}"); } }
When this code is run, it prints the same content as in the example stated at the beginning of this exercise.
This completes implementation of the queue structure which can report maximum item at any time, while only spending constant time to calculate it.
In this exercise, we have investigated one possible implementation of the queue structure, which guarantees to calculate maximum of contained items in constant time on average, while retaining constant-time average for enqueuing and dequeuing operations.
Success in solving this problem is underlying one general principle: That performance of systems can be tweaked by the choice of data structures. When we encounter a performance issue, we turn attention to investigating specialized data structures which are guaranteeing better performance right where we need it.
If you wish to learn more, please watch my latest video courses
This course begins with examination of a realistic application, which is poorly factored and doesn't incorporate design patterns. It is nearly impossible to maintain and develop this application further, due to its poor structure and design.
As demonstration after demonstration will unfold, we will refactor this entire application, fitting many design patterns into place almost without effort. By the end of the course, you will know how code refactoring and design patterns can operate together, and help each other create great design.
More....
More...
Z. | https://codinghelmet.com/exercises/finding-maximum-value-in-queue | CC-MAIN-2021-10 | refinedweb | 2,245 | 54.22 |
C++ Multi-dimensional Arrays
C++ allows multidimensional arrays. Here is think as a table, which will have x number of rows and y number of columns. A 2-dimensional array a, whichimensioned arrays may be initialized by specifying bracketed values for each row. Following is an array with 3 rows and each row digram.
#include <iostream> using namespace std; int main () { // an array with 5 rows and 2 columns. int a[5][2] = { {0,0}, {1,2}, {2,4}, {3,6},{4,8}}; // output each array element's value for ( int i = 0; i < 5; i++ ) for ( int j = 0; j < 2; j++ ) { cout << "a[" << i << "][" << j << "]: "; cout << a[i][j]<< endl; }. | http://www.tutorialspoint.com/cplusplus/cpp_multi_dimensional_arrays.htm | CC-MAIN-2017-17 | refinedweb | 112 | 63.19 |
The articles below include content about downloading and building Mozilla code. In addition, you'll find helpful articles about how the code works, how to build add-ons for Mozilla applications and the like.
- allow developers to extend and modify the functionality of Firefox.
-
- blacklist of known scam sites, and presenting a warning to the user when they visit a site on the list.
- An introduction to hacking Mozilla
- If you find errors in this document, or if you want to contribute updated or additional sections, please contact Kai Engert.
-.
- Bird's Eye View of the Mozilla Framework
-).
-.
- Building Mozilla
- Building SpiderMonkey with UBSan
- 1. Compile a recent version of LLVM & Clang.
- C++ portability guide
- This document has migrated to Using C++ in Mozilla code.
- Calendar
- Technical review completed.
- Chat Core
- Chat Core is the shared code for instant messaging that is shared by Instantbird and Thunderbird. It provides a number of functions and capabilities, including:
-.
-.This page describes the commonly used options and how to use them. You can open the Command Line Interface by pressing Shift + F2.
- Connect with Mozilla
- Enable, inspire and collaborate to make the Web the primary platform used to create experiences across all connected devices.
-
- Preamble: not all of the programs listed below are necessary. Some of them are simply the ones I use because I like them, while others are scripts that I've created to speed up the work. You can in every case choose the program you prefer to do some operations, and you can also decide not to use any of my scripts and manually enter all of the commands by hand. It's your choice!
- Mercurial User Repositories
- When working with Mercurial, it is often nice to publish your changes on a server as a backup or so others can examine and work with them. If you have Level 1 commit access or higher, then you should have an LDAP account which allows you to push to user repos in the
/usersdirectory of
hg.mozilla.org.
-.
- for Android
- For more and more people mobile devices are the primary way, or even the only way, to access the Web. Firefox for Android (codenamed Fennec) is an open, hackable, standards-based browser, just like the desktop Firefox.
- Firefox for iOS
- For.)
- Firefox OS for TV
- Welcome.
- Gecko
- Ge
- Git
- The
- Technical review completed.
- How to add a build-time test
- How to get a process dump with Windows Task Manager
-.)
- How to get a stacktrace for a bug report
-.
- How to get a stacktrace with WinDbg
- Sometimes you need to get a stacktrace (call stack) for a crash or hang but Talkback or Breakpad fail Pontoon in a Mozilla project
- Pont.
-.
- Instantbird
- Instantbird is an instant messaging application with close ties to Mozilla. These pages document Instantbird and also provide links to documentation about the Chat Core backend which is also used in Thunderbird.
-)
- IP.
-.
-.
- Mercurial
- Mobile
- Firefox OS is an open source mobile operating system which uses Linux and Mozilla's Gecko engine to run a user interface and set of applications written entirely in HTML, CSS and JavaScript.
- internal-only CSS
- This set of pages details CSS features that are only available internally in the Firefox browser — i.e. only inside the US stylesheet.
- Port Blocking
- project presentations
- This article provides links to presentations covering various aspects of the Mozilla project.
- Community
- Technical review completed.
-
- Namespace
- Below, find links to articles about C++ classes Mozilla uses within various namespaces, primarily the
mozillanamespace.
-.
- Participating in the Mozilla project
- If you're interested in helping to fix bugs and otherwise work on the code behind the Mozilla platform, this is the place to find the documentation that will point you in the right direction.
-
- Firefox and other XULRunner applications store user settings and data in special folders, called profiles..
-.
- Technical review completed.
- Supported build targets
- There are three tiers of supported Mozilla build targets at this time. These tiers represent the shared priorities of the Mozilla project.
- Task graph
- After a change to the Gecko source code is pushed to version-control, jobs for that change appear on Treeherder. How does this
-or throw any exceptions. Libraries that throw exceptions may be used if you are willing to have the throw instead be treated as an abort.
-:
- Using RAII classes in Mozilla
- RAII classes are useful when two operations (e.g., Lock/Unlock, AddRef/Release, PushState/PopState) must be paired.
- Using tab-modal prompts
- Prior to Gecko 2.0 (Firefox 4 / Thunderbird 3.3 / SeaMonkey 2.1), prompts (that is, alerts and other modal prompts) were window modal. That is, when an alert occurred, it blocked the user interface on all tabs in the window until the user dismissed the prompt.
-
- The.
-
-.
- XPI
- Cross-Platform Installer Module (XPI) is a ZIP file used to install packages, utilizing the XPInstall technology. XPI modules (so called "Bundles") are employed to install a wide variety of software, including Plugins, Extensions, Themes, and Thunderbird dictionaries.
- XPIDL
- XPIDL is an Interface Description Language used to specify XPCOM interface classes.
- Zombie compartments
- This page tells you how to detect and avoid zombie compartments, which are a particular kind of memory leak. They can be caused by bugs in Firefox itself, or by bugs in Firefox add-ons.
Document Tags and Contributors
Contributors to this page: alispivak, fscholz, AnthonyMaton, wbamberg, dkocho4, Bonners, jsx, jswisher, jdc2018, mnowy41, akhilbatra898, malina, 818v.., Sheppy, beyang, Alejandro_Blanco, patr37, h.shaibani7, sajjadRthur, evilpie, DavidWalsh, ethertank, ziyunfei | https://developer.mozilla.org/en-US/docs/Mozilla | CC-MAIN-2018-34 | refinedweb | 912 | 56.45 |
import cPickle as p questorfile = 'questorbrain.data' questorlist = [] # Write to the file f = file(questorfile, 'w') p.dump(questorlist, f) # dump the object to a file f.close() del questorlist # remove the shoplist # Read back from the storage f = file(questorfile) storedlist = p.load(f) print storedlist # define some constants for future use:old__': run() print raw_input("press Return>") else: print "Module questor imported." print "To run, type: questor.run()" print "To reload after changes to the source, type: reload(questor)"
1 Replies - 9178 Views - Last Post: 15 January 2013 - 11:24 AM
#1
How do I save using pickle when dealing with questions and answers?
Posted 13 January 2013 - 12:18 PM
When run, this produces an error of "NameError: global name 'Qnode' is not defined. This only happened after I inserted the pickle code to save the file. Where is my problem with my pickle code?
Replies To: How do I save using pickle when dealing with questions and answers?
#2
Re: How do I save using pickle when dealing with questions and answers?
Posted 15 January 2013 - 11:24 AM
Hi. I believe your error is on 105 when you call run from inside your class. I don't think it's taking everything from that class to begin the program. It's simply calling run without knowing that Qnode exists. To get it to run I un-indented from 105 to 112. I also unindented the run function and everything inside it & changed line 87 to
result = topNode.traverse()That made it run for me, although I'm not entirely sure it's doing what you want it to. Maybe putting all those lines at the top into run will help with that. Does that do what you're trying to accomplish?
This post has been edited by alexr1090: 15 January 2013 - 11:31 AM
Page 1 of 1 | http://www.dreamincode.net/forums/topic/307078-how-do-i-save-using-pickle-when-dealing-with-questions-and-answers/ | CC-MAIN-2016-44 | refinedweb | 313 | 75.1 |
Score and replay¶
In this part, we'll add the score, music playback, and the ability to restart the game.
We have to keep track of the current score in a variable and display it on screen using a minimal interface. We will use a text label to do that.
In the main scene, add a new Control node as a child of Main and name it UserInterface. You will automatically be taken to the 2D screen, where you can edit your User Interface (UI).
Add a Label node and rename it to ScoreLabel.
In the Inspector, set the Label's Text to a placeholder like "Score: 0".
Also, the text is white by default, like our game's background. We need to change its color to see it at runtime.
Scroll down to Theme Overrides, and expand Colors and click the black box next to Font Color to tint the text.
Pick a dark tone so it contrasts well with the 3D scene.
Finally, click and drag on the text in the viewport to move it away from the top-left corner.
The UserInterface node allows us to group our UI in a branch of the scene tree and use a theme resource that will propagate to all its children. We'll use it to set our game's font.
Creating a UI theme¶
Once again, select the UserInterface node. In the Inspector, create a new theme resource in Theme -> Theme.
Click on it to open the theme editor In the bottom panel. It gives you a preview of how all the built-in UI widgets will look with your theme resource.
By default, a theme only has one property, the Default Font.
See also
You can add more properties to the theme resource to design complex user interfaces, but that is beyond the scope of this series. To learn more about creating and editing themes, see Introduction to GUI skinning.
Click the Default Font property and create a new DynamicFont.
Expand the DynamicFont by clicking on it and expand its Font section. There, you will see an empty Font Data field.
This one expects a font file like the ones you have on your computer. Two common font file formats are TrueType Font (TTF) and OpenType Font (OTF).
In the FileSystem dock, Expand the
fonts directory and click and drag the
Montserrat-Medium.ttf file we included in the project onto the Font Data.
The text will reappear in the theme preview.
The text is a bit small. Set the Settings -> Size to
22 pixels to increase
the text's size.
Keeping track of the score¶
Let's work on the score next. Attach a new script to the ScoreLabel and define
the
score variable.
extends Label var score = 0
public class ScoreLabel : Label { private int _score = 0; }
The score should increase by
1 every time we squash a monster. We can use
their
squashed signal to know when that happens. However, as we instantiate
monsters from the code, we cannot do the connection in the editor.
Instead, we have to make the connection from the code every time we spawn a monster.
Open the script
Main.gd. If it's still open, you can click on its name in
the script editor's left column.
Alternatively, you can double-click the
Main.gd file in the FileSystem
dock.
At the bottom of the
_on_MobTimer_timeout() function, add the following
line.
func _on_MobTimer_timeout(): #... # We connect the mob to the score label to update the score upon squashing one. mob.connect("squashed", $UserInterface/ScoreLabel, "_on_Mob_squashed")
public void OnMobTimerTimeout() { // ... // We connect the mob to the score label to update the score upon squashing one. mob.Squashed += GetNode<ScoreLabel>("UserInterface/ScoreLabel").OnMobSquashed; }
This line means that when the mob emits the
squashed signal, the
ScoreLabel node will receive it and call the function
_on_Mob_squashed().
Head back to the
ScoreLabel.gd script to define the
_on_Mob_squashed()
callback function.
There, we increment the score and update the displayed text.
func _on_Mob_squashed(): score += 1 text = "Score: %s" % score
public void OnMobSquashed() { _score += 1; Text = string.Format("Score: {0}", _score); }
The second line uses the value of the
score variable to replace the
placeholder
%s. When using this feature, Godot automatically converts values
to text, which is convenient to output text in labels or using the
print()
function.
See also
You can learn more about string formatting here: GDScript format strings.
You can now play the game and squash a few enemies to see the score increase.
Note
In a complex game, you may want to completely separate your user interface from the game world. In that case, you would not keep track of the score on the label. Instead, you may want to store it in a separate, dedicated object. But when prototyping or when your project is simple, it is fine to keep your code simple. Programming is always a balancing act.
Retrying the game¶
We'll now add the ability to play again after dying. When the player dies, we'll display a message on the screen and wait for input.
Head back to the Main scene, select the UserInterface node, add a ColorRect node as a child of it and name it Retry. This node fills a rectangle with a uniform color and will serve as an overlay to darken the screen.
To make it span over the whole viewport, you can use the Layout menu in the toolbar.
Open it and apply the Full Rect command.
Nothing happens. Well, almost nothing: only the four green pins move to the corners of the selection box.
This is because UI nodes (all the ones with a green icon) work with anchors and margins relative to their parent's bounding box. Here, the UserInterface node has a small size and the Retry one is limited by it.
Select the UserInterface and apply Layout -> Full Rect to it as well. The Retry node should now span the whole viewport.
Let's change its color so it darkens the game area. Select Retry and in the Inspector, set its Color to something both dark and transparent. To do so, in the color picker, drag the A slider to the left. It controls the color's alpha channel, that is to say, its opacity.
Next, add a Label as a child of Retry and give it the Text "Press Enter to retry."
To move it and anchor it in the center of the screen, apply Layout -> Center to it.
Coding the retry option¶
We can now head to the code to show and hide the Retry node when the player dies and plays again.
Open the script
Main.gd. First, we want to hide the overlay at the start of
the game. Add this line to the
_ready() function.
func _ready(): #... $UserInterface/Retry.hide()
public override void _Ready() { // ... GetNode<Control>("UserInterface/Retry").Hide(); }
Then, when the player gets hit, we show the overlay.
func _on_Player_hit(): #... $UserInterface/Retry.show()
public void OnPlayerHit() { //... GetNode<Control>("UserInterface/Retry").Show(); }
Finally, when the Retry node is visible, we need to listen to the player's
input and restart the game if they press enter. To do this, we use the built-in
_unhandled_input() callback.
If the player pressed the predefined
ui_accept input action and Retry is
visible, we reload the current scene.
func _unhandled_input(event): if event.is_action_pressed("ui_accept") and $UserInterface/Retry.visible: # This restarts the current scene. get_tree().reload_current_scene()
public override void _UnhandledInput(InputEvent @event) { if (@event.IsActionPressed("ui_accept") && GetNode<Control>("UserInterface/Retry").Visible) { // This restarts the current scene. GetTree().ReloadCurrentScene(); } }
The function
get_tree() gives us access to the global SceneTree object, which allows us to reload and restart the current
scene.
Adding music¶
To add music that plays continuously in the background, we're going to use another feature in Godot: autoloads.
To play audio, all you need to do is add an AudioStreamPlayer node to your scene and attach an audio file to it. When you start the scene, it can play automatically. However, when you reload the scene, like we do to play again, the audio nodes are also reset, and the music starts back from the beginning.
You can use the autoload feature to have Godot load a node or a scene automatically at the start of the game, outside the current scene. You can also use it to create globally accessible objects.
Create a new scene by going to the Scene menu and clicking New Scene.
Click the Other Node button to create an AudioStreamPlayer and rename it to MusicPlayer.
We included a music soundtrack in the
art/ directory,
House In a Forest
Loop.ogg. Click and drag it onto the Stream property in the Inspector.
Also, turn on Autoplay so the music plays automatically at the start of the
game.
Save the scene as
MusicPlayer.tscn.
We have to register it as an autoload. Head to the Project -> Project Settings… menu and click on the Autoload tab.
In the Path field, you want to enter the path to your scene. Click the folder
icon to open the file browser and double-click on
MusicPlayer.tscn. Then,
click the Add button on the right to register the node.
If you run the game now, the music will play automatically. And even when you lose and retry, it keeps going.
Before we wrap up this lesson, here's a quick look at how it works under the hood. When you run the game, your Scene dock changes to give you two tabs: Remote and Local.
The Remote tab allows you to visualize the node tree of your running game. There, you will see the Main node and everything the scene contains and the instantiated mobs at the bottom.
At the top are the autoloaded MusicPlayer and a root node, which is your game's viewport.
And that does it for this lesson. In the next part, we'll add an animation to make the game both look and feel much nicer.
Here is the complete
Main.gd script for reference.
extends Node @export var mob_scene: PackedScene func _ready(): randomize() $UserInterface/Retry.hide() func _unhandled_input(event): if event.is_action_pressed("ui_accept") and $UserInterface/Retry.visible: get_tree().reload_current_scene() func _on_MobTimer_timeout(): var mob = mob_scene.instance() var mob_spawn_location = get_node("SpawnPath/SpawnLocation") mob_spawn_location.unit_offset = randf() var player_position = $Player.transform.origin mob.initialize(mob_spawn_location.translation, player_position) add_child(mob) mob.connect("squashed", $UserInterface/ScoreLabel, "_on_Mob_squashed") func _on_Player_hit(): $MobTimer.stop() $UserInterface/Retry.show()
public class Main : Node { #pragma warning disable 649 [Export] public PackedScene MobScene; #pragma warning restore 649 public override void _Ready() { GD.Randomize(); GetNode<Control>("UserInterface/Retry").Hide(); } public override void _UnhandledInput(InputEvent @event) { if (@event.IsActionPressed("ui_accept") && GetNode<Control>("UserInterface/Retry").Visible) { GetTree().ReloadCurrentScene(); } } public void OnMobTimerTimeout() { Mob mob = (Mob)MobScene.Instance(); var mobSpawnLocation = GetNode<PathFollow>("SpawnPath/SpawnLocation"); mobSpawnLocation.UnitOffset = GD.Randf(); Vector3 playerPosition = GetNode<Player>("Player").Transform.origin; mob.Initialize(mobSpawnLocation.Translation, playerPosition); AddChild(mob); mob.Squashed += GetNode<ScoreLabel>("UserInterface/ScoreLabel").OnMobSquashed; } public void OnPlayerHit() { GetNode<Timer>("MobTimer").Stop(); GetNode<Control>("UserInterface/Retry").Show(); } } | https://docs.godotengine.org/en/latest/getting_started/first_3d_game/08.score_and_replay.html | CC-MAIN-2022-40 | refinedweb | 1,833 | 67.15 |
False dilemma.
use happens at compile-time while require happens at run-time. There's a big difference. You can also control what's imported into your namespace by providing arguments to use — besides that, there's no reason import() has to export anything. Several modules use different behavior based on what you pass to import(), and if you only use your "precise" require, you'll miss out.
Also, if the module requires actions at CHECK or INIT time you've just cut that functionality out - runtime happens after INIT and CHECK so whatever initialization the module was expecting will never occur. That's a good reason to use 'use()' right there.
Okay, that's a different question and worth more thought.
I don't have a problem importing things into my namespace for three reasons:
I expect someone maintaining my code to be able to find the original location of functions I've imported. I try not to make it hard on that person, but I don't believe in writing un-idiomatic code because it's perceptually "self-documenting". Part of maintainability is readability and duplicate code hurts that.
I'll second chromatic and add:
I generally dislike hardcoded things, and using a fully qualified name is no exception.
ihb
I think using require is false lazyness. And the points that were made about require are very valid and pertinent to the discussion of using one over the other. IMO its use unless you have _very_ good reason to want to use require. And one of those points that is sofar unmentioned (I think) is that prototypes arent respected by code that is 'required'.
If the docs aren't enough, check the source. And if that isn't enough either, the call use with parameters, as in:
use CGI qw(:standard);
[download]
use CGI ();
[download]
Oft times questions of the form "is X better than Y?" are largely meaningless without context information. Are pickup trucks better than minivans? Well, it depends whether you're planning on hauling around lumber or half a soccer team's worth of children. Looking at other replies and your responses it looks as if you had two questions; I'll try to answer both.
The use keyword executes code at compile time where as require does it at run time. Almost always you want the former, though I can give you an example of where I personally want the latter. I have a class called SQLLink that is fed information about a link table in a SQL database. Among the information there are table specifications, but also the class names of the parent and child classes. I use 'require' to compile the code for a parent or child class when SQLLink needs to instantiate such a class.
eval "require $parent_class"; die "$@\t...$parent_class barfed" if $@;
eval "require $child_class"; die "$@\t...$child_class barfed" if @;
[download]
You need the idiom of the string eval (to the best of my knowledge) to get 'require' to do its bare word behavior of locating a class based on your lib paths, and because eval traps errors you'll want the 'die' afterwards to make sure that the file compiled successfully.
Now to address what would seem to be your real question, whether tis nobler to pollute thy namespace, yada yada outrageous fortune and so forth. Personally, I really hate namespace pollution for the same reason you do. I do not want to see "foo()" in the code, not be able to find "sub foo { ... }" anywhere in my code, and notice that there are ten library importations at the top of my code and have no clue where to start looking to figure out from where foo originated. To refer back to my 'context' assertion from the first paragraph... It matters whether you're writing a throw away script that fits in a single screen full of code that you don't plan on using after this afternoon, or if it's a serious attempt at a core library to which you (and other people!) will be frequently returning, or worse still infrequently, as the longer you go without looking at it, the less likely you're going to remember how it worked.
Fortunately, there is what I deem to be a very good compromise. Any module worth its salt not just jams an @EXPORT variable full of junk, but also populates an @EXPORT_OK variable which specifies which names it is ok to explicitly export. I think the really nasty namespace pollution comes is the implicit kind, but explicit pollution is ok in my books. If you come across "foo()" in your code, and then you go to the top of your library and you see...
use Some::Module qw(foo);
[download]
it is obvious at a quick glance from whence foo sprung. You also know from that statement that foo was the only thing that Some::Module exported, since no implicit exportation occurs if you specify anything explicitly. So, I believe you very justified to eschew implicit namespace pollution, but give the explicit variety a chance. I think you'll like it.
(I use 'use' for reasons others have mentioned, but usually call subs using package names, or creating objects.)
C.);
[download]
I only use require when I need to load a module I've been trying to avoid using. For example, I don't use Carp; but, instead, I require Carp; once I've encountered an error and need to carp() or croak(). As another example, I sometimes require Data::Dumper; if a debug flag is set.
-sauoq
"My two cents aren't worth a dime.";... :-). | http://www.perlmonks.org/?node_id=276941 | CC-MAIN-2018-13 | refinedweb | 939 | 69.72 |
A debugging library and a debugging mdb(1) module are provided with the Oracle Solaris runtime linker. The debugging library enables you to trace the runtime linking process in more detail. The mdb(1) module enables interactive process debugging.
The.
$ LD_DEBUG=help.
$ LD_DEBUG=help,output=rtld-debug.txt prog.
$ cat bar.c int bar = 10; $ cc -o bar.so.1 -K pic -G bar.c $ cat foo.c int foo(int data) { return (data); } $ cc -o foo.so.1 -K pic -G foo.c $ cat main.c extern int foo(); extern int bar; int main() { return (foo(bar)); } $ cc -o prog main.c -R/tmp:. foo.so.1 bar.so.1
The runtime symbol bindings can be displayed by setting LD_DEBUG=bindings.
$ LD_DEBUG=bindings prog 11753: ....... 11753: binding file=prog to file=./bar.so.1: symbol bar 11753: ....... 11753: transferring control: prog 11753: ....... 11753: binding file=prog to file=./foo.so.1: symbol foo 11753: ........
$ LD_DEBUG=libs prog 11775: 11775: find object=foo.so.1; searching 11775: search path=/tmp:. (RUNPATH/RPATH from file prog) 11775: trying path=/tmp/foo.so.1 11775: trying path=./foo.so.1 11775: 11775: find object=bar.so.1; searching 11775: search path=/tmp:. (RUNPATH/RPATH from file prog) 11775: trying path=/tmp/bar.so.1 11775: trying path=./bar.so.1 11775: ........
$ LD_DEBUG=bindings,symbols prog 11782: ....... 11782: symbol=bar; lookup in file=./foo.so.1 [ ELF ] 11782: symbol=bar; lookup in file=./bar.so.1 [ ELF ] 11782: binding file=prog to file=./bar.so.1: symbol bar 11782: ....... 11782: transferring control: prog 11782: ....... 11782: symbol=foo; lookup in file=prog [ ELF ] 11782: symbol=foo; lookup in file=./foo.so.1 [ ELF ] 11782: binding file=prog to file=./foo.so.1: symbol foo 11782: ........
$ cat main.c #include <dlfnc.h> int main() { void *handle; void (*fptr)(); if ((handle = dlopen("foo.so.1", RTLD_LAZY)) == NULL) return (1); if ((fptr = (void (*)())dlsym(handle, "foo")) == NULL) return (1); (*fptr)(); return (0); } $ cc -o main main.c -R.
If mdb(1) has not automatically loaded the debugger module, ld.so, explicitly do so. The facilities of the debugger module can then be inspected.
$ mdb main > ::load ld.so > ::dmods -l ld.so ld.so ----------------------------------------------------------------- dcmd Bind - Display a Binding descriptor dcmd Callers - Display Rt_map CALLERS binding descriptors dcmd Depends - Display Rt_map DEPENDS binding descriptors dcmd ElfDyn - Display Elf_Dyn entry dcmd ElfEhdr - Display Elf_Ehdr entry dcmd ElfPhdr - Display Elf_Phdr entry dcmd Groups - Display Rt_map GROUPS group handles dcmd GrpDesc - Display a Group Descriptor dcmd GrpHdl - Display a Group Handle dcmd Handles - Display Rt_map HANDLES group descriptors .... > ::bp main > :r
Each dynamic object within a process is expressed as a link-map, Rt_map, which is maintained on a link-map list. All link-maps for the process can be displayed with Rt_maps.
> ::Rt_maps Link-map lists (dynlm_list): 0xffbfe0d0 ---------------------------------------------- Lm_list: 0xff3f6f60 (LM_ID_BASE) ---------------------------------------------- lmco rtmap ADDR() NAME() ---------------------------------------------- [0xc] 0xff3f0fdc 0x00010000 main [0xc] 0xff3f1394 0xff280000 /lib/libc.so.1 ---------------------------------------------- Lm_list: 0xff3f6f88 (LM_ID_LDSO) ---------------------------------------------- [0xc] 0xff3f0c78 0xff3b0000 /lib/ld.so.1
An individual link-map can be displayed with Rt_map.
> 0xff3f9040::Rt_map Rt_map located at: 0xff3f9040 NAME: main PATHNAME: /export/home/user/main ADDR: 0x00010000 DYN: 0x000207bc NEXT: 0xff3f9460 PREV: 0x00000000 FCT: 0xff3f6f18 TLSMODID: 0 INIT: 0x00010710 FINI: 0x0001071c GROUPS: 0x00000000 HANDLES: 0x00000000 DEPENDS: 0xff3f96e8 CALLERS: 0x00000000 .....
The object's .dynamic section can be displayed with the ElfDyn dcmd. The following example shows the first 4 entries.
> 0x000207bc,4::ElfDyn Elf_Dyn located at: 0x207bc 0x207bc NEEDED 0x0000010f Elf_Dyn located at: 0x207c4 0x207c4 NEEDED 0x00000124 Elf_Dyn located at: 0x207cc 0x207cc INIT 0x00010710 Elf_Dyn located at: 0x207d4 0x207d4 FINI 0x0001071c.
> ::bp foo.so.1`foo > :c > mdb: You've got symbols! > mdb: stop at foo.so.1`foo mdb: target stopped at: foo.so.1`foo: save %sp, -0x68, %sp
At this point, new objects have been loaded.
> *ld.so`lml_main::Rt_maps lmco rtmap ADDR() NAME() ---------------------------------------------- [0xc] 0xff3f0fdc 0x00010000 main [0xc] 0xff3f1394 0xff280000 /lib/libc.so.1 [0xc] 0xff3f9ca4 0xff380000 ./foo.so.1 [0xc] 0xff37006c 0xff260000 ./bar.so.1
The link-map for foo.so.1 shows the handle returned by dlopen(3C). You can expand this structure using Handles.
> 0xff3f9ca4::Handles -v HANDLES for ./foo.so.1 ---------------------------------------------- HANDLE: 0xff3f9f60 Alist[used 1: total 1] ---------------------------------------------- Group Handle located at: 0xff3f9f28 ---------------------------------------------- owner: ./foo.so.1 flags: 0x00000000 [ 0 ] refcnt: 1 depends: 0xff3f9fa0 Alist[used 2: total 4] ---------------------------------------------- Group Descriptor located at: 0xff3f9fac depend: 0xff3f9ca4 ./foo.so.1 flags: 0x00000003 [ AVAIL-TO-DLSYM,ADD-DEPENDENCIES ] ---------------------------------------------- Group Descriptor located at: 0xff3f9fd8 depend: 0xff37006c ./bar.so.1 flags: 0x00000003 [ AVAIL-TO-DLSYM,ADD-DEPENDENCIES ]
The dependencies of a handle are a list of link-maps that represent the objects of the handle that can satisfy a dlsym(3C) request. In this case, the dependencies are foo.so.1 and bar.so.1. | http://docs.oracle.com/cd/E23823_01/html/817-1984/chapter3-31.html | CC-MAIN-2015-14 | refinedweb | 792 | 62.24 |
__thrsleep,
__thrwakeup—
#include <sys/time.h>
int
__thrsleep(const
volatile void *id,
clockid_t clock_id,
const struct timespec
*abstime, void
*lock, const int
*abort);
int
__thrwakeup(const
volatile void *id, int
count);
__thrsleep() and
__thrwakeup() functions provide thread sleep and wakeup primitives with which synchronization primitives such as mutexes and condition variables can be implemented.
__thrsleep() blocks the calling thread on the abstract “wait channel” identified by the id argument until another thread calls
__thrwakeup() with the same id value. If the abstime argument is not
NULL, then it specifies an absolute time, measured against the clock_id clock, after which
__thrsleep() should time out and return. If the specified time is in the past then
__thrsleep() will return immediately without blocking.
The lock argument, if not
NULL, points to a locked spinlock that will be
unlocked by
__thrsleep() atomically with respect to
calls to
__thrwakeup(), such that if another thread
locks the spinlock before calling
__thrwakeup() with
the same id, then the thread that called
__thrsleep() will be eligible for being woken up and
unblocked.
The abort argument, if not
NULL, points to an int that
will be examined after unlocking the spinlock pointed to by
lock and immediately before blocking. If that
int is non-zero then
__thrsleep() will immediately return
EINTR without blocking. This provides a mechanism
for a signal handler to keep a call to
__thrsleep()
from blocking, even if the signal is delivered immediately before the
call.
The
__thrwakeup() function unblocks one or
more threads that are sleeping on the wait channel identified by
id. The number of threads unblocked is specified by
the count argument, except that if zero is specified
then all threads sleeping on that id are
unblocked.
__thrsleep() will return zero if woken by a matching call to
__thrwakeup(), otherwise an error number will be returned to indicate the error.
__thrwakeup() will return zero if at least
one matching call to
__thrsleep() was unblocked,
otherwise an error number will be returned to indicate the error.
__thrsleep() and
__thrwakeup() will fail if:
EINVAL]
NULL.
In addition,
__thrsleep() may return one
of the following errors:
EWOULDBLOCK]
EINTR]
ECANCELED]
EINVAL]
__thrwakeup() may return the following
error:
ESRCH]
__thrsleep() with the same id were found.
__thrsleep() and
__thrwakeup() functions are specific to OpenBSD and should not be used in portable applications.
thrsleep() and
thrwakeup() syscalls appeared in OpenBSD 3.9. The clock_id and abstime arguments were added in OpenBSD 4.9. The functions were renamed to
__thrsleep() and
__thrwakeup() and the abort argument was added in OpenBSD 5.1
thrsleep() and
thrwakeup() syscalls were created by Ted Unangst <[email protected]>. This manual page was written by Philip Guenther <[email protected]>. | https://man.openbsd.org/__thrwakeup.2 | CC-MAIN-2019-22 | refinedweb | 445 | 59.03 |
Using GPy with IPv6
Has anyone prior experience with using GPy over IPv6 NB-IoT networks? Our local MNO supports only IPV6 and I am able to attach to the network using the following code snippet:
from network import LTE lte = LTE() lte.init() lte.send_at_cmd('AT+CFUN=0') lte.send_at_cmd('AT+CGDCONT=1,"IPV6","iot"') lte.send_at_cmd('AT+CFUN?') lte.send_at_cmd('AT+CFUN=1') lte.send_at_cmd('AT+CFUN?') lte.send_at_cmd('AT+CREG?')
However I am not sure how I can establish UDP socket communication. I checked the Sequans modem manual for IPv6 data transmission support. When creating a new socket through the
AT+SQNSDcommand the remote IP address should be configured as follows:
IPaddr String type. Address of the remote host. Any valid IP address in the format “xxx.xxx.xxx.xxx” or any host name solved with a DNS query.
So this means that only IPv4 addresses are supported?
Is also anyone aware of native IPv6 support for the WLAN/LTE classes of pycom? I guess not?
Thanks!
Just an update of the latest changes. The API of the microATsocket has changed to be closer to the typical usocket API. Also a custom version of getaddrinfo info has been implemented, so the code now looks somehting like:
import microATsocket as socket from network import LTE import binascii lte = LTE() # attach to network # ... #message as bytes data = bytearray('{data:"testmessage"}') # create socket instance providing the instance of the LTE modem sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) sock.setModemInstance(lte) # getaddrinfo needs to be called on the instance of socket resolvedIPs = sock.getaddrinfo("google.com", 5683) # send data to specific IP (dummy IP and port used as example) sock.sendto(data, resolvedIPs[0][-1]) # receive data from the previously used IP. # socket is still open from the 'sendto' operation (resp, address) = sock.recvfrom(1024) print("Response: from ip:" + address[0] + ", port: " + str(address[1]) + ", data: " + str(binascii.hexlify(bytearray(resp))))
@kjm you are correct, a URL can resolve to multiple IPs and yes, a getaddrinfo lookup is needed.
Since getaddrinfo lookup is a DNS request, I have created an example code on how to do it over the UDP socket.
The example uses to form and decode the DNS request/response along with the custom MicroATSocket socket.
socket = MicroATSocket(lte) url='google.com' dns_server="2001:4860:4860::8888" ipv6_only = True resolvedIPs = dns_query.dns_resolve(socket, url, dns_server, ipv6_only) print("Resolved IP list: " + str(resolvedIPs)) data = bytearray('{data:"testmessage"}') socket.sendto(data, (resolvedIPs[0], 8888)) (resp, address) = socket.recvfrom() print("Response: from ip:" + address[0] + ", port: " + str(address[1]) + ", data: " + str(binascii.hexlify(bytearray(resp)))) socket.close()
Since Google's DNS server IP is fixed, it can be used directly. If DNS resolve is successful, a list of IPs will be returned. Pick any of the returned IPs and you are ready to transfer your data.
Full source code can be found: here
@nftylitak thnx heaps for doing that. I seem to be confused re IPV6. I thought is was a scheme that would allow each iot mote to have its own unique address & free us from the tyranny of dns lookups. But when I check the IPv6 address of a random website, say detectportal.firefox.com, I find it has more than one IPv6 address. Which makes me think IPv6 addresses are going to be just as fluid as IPV4, which means some sort of getaddrinfo style lookup will still be required?
nslookup detectportal.firefox.com Non-authoritative answer: Name: a1089.dscd.akamai.net Addresses: 2407:8800:bf00:145::cb57:7a19 2407:8800:bf00:145::cb57:7a2b 23.212.98.59 202.7.177.35 Aliases: detectportal.firefox.com detectportal.prod.mozaws.net detectportal.firefox.com-v2.edgesuite.net
@kjm I have added the socket implementation to a separate repo to be easier to use.
You can find the report in GitHub: microATsocket for more examples and special features.
A quick example would be the following:
from microATsocket import MicroATSocket from network import LTE import binascii lte = LTE() # attach to network # ... #message as bytes data = bytearray('{data:"testmessage"}') # create socket instance providing the instance of the LTE modem socket = MicroATSocket(lte) # send data to specific IP (dummy IP and port used as example) socket.sendto(data, ("2001:4860:4860::8888", 8888)) # receive data from the previously used IP. # socket is still open from the 'sendto' operation (resp, address) = socket.recvfrom() print("Response: from ip:" + address[0] + ", port: " + str(address[1]) + ", data: " + str(binascii.hexlify(bytearray(resp)))) # close socket socket.close()
The feedback is always welcomed.
@nftylitak Are you able to post an example of how to use that wrapper socket?
Greetings,
@agotsis your example indeed works for IPv6, so based on your code I have made a wrapper "socket" based on AT commands for GPy. The tricky part was that through AT+SQNSSEND, I could not send bytes, so it needed special care.
The socket can be found through this link in the hope that it will save time to others:
I'm salivating in anticipation! It would be so nice to ditch that useless, untimeoutable, getaddrinfo dns lookup with a move to IPV6.
A small update: It seems that the
AT+SQNSDis also accepting
IPv6target addresses. I managed to send a UDP message from GPy to an IPv6 address, using the following snippet:
lte.send_at_cmd('AT+SQNSD=1,1,<targetPort>,"XXXX:XXX:XXXX:XX:XXXX:XXX:XXX:XXXX",0,0,1') lte.send_at_cmd('AT+SQNSSEND=1') lte.send_at_cmd('hello\x1A') lte.send_at_cmd('AT+SQNSH=1')
However
usocketis not able to handle IPv6 (see for example in [1] and [2]), so native support is yet available. | https://forum.pycom.io/topic/5777/using-gpy-with-ipv6 | CC-MAIN-2020-40 | refinedweb | 936 | 57.77 |
Join the NASDAQ Community today and get free, instant access to portfolios, stock ratings, real-time alerts, and more!
Submitted by Ron Hiram of
Wise Analysis
using our
Trefis Contributors
tool.
On November 6, 2012, Plains All American Pipeline L.P. (
PAA
)
PAA provided updated financial and operating guidance for 2012,
increasing adjusted EBITDA guidance by $137 million to slightly
over $2 billion, ~7% increase vs. the guidance provided on 8/6/12
and ~22% over the full year guidance provided at the beginning of
the year. This implies a projection of $520 million of adjusted
EBITDA for 4Q12. PAA also provided preliminary adjusted EBITDA
guidance of $1,925 million (mid-point) for 2013 on the assumption
that favorable market conditions will not continue beyond 1Q13.
Hence it is lower than the TTM number.
Strong performance was exhibited by all segments, particularly
Supply & Logistics, as seen in Table 2:
Table 2: Figures in $ Millions
In 3Q12 PAA wrote down a substantial portion of its investment
in the Pier 400 project. The write down, amounting to ~$125
million, is reflected in Table 2 as an increase to depreciation
& amortization, hence the large increase in this line item in
3Q12 and TTM ending 9/30/12 vs. the prior year periods. The Pier
400 terminal project involved development of a deepwater petroleum
import terminal at Pier 400 and Terminal Island in the Port of Los
Angeles for the purpose of handling marine receipts of crude oil
and refinery feedstock. During 3Q12 PAA decided not to proceed with
this project, hence the write down of its investment in it.
In 2011 the Supply & Logistics segment generated
extraordinary profits ($647 million vs. $240 million in 2010). In a
prior article
I noted the drivers behind this growth. Management cautioned that
the margins delivered in 2011 may not be repeated. Results for 1Q
2012 indicated a 3.8% decline in that segment's contribution
compared to the prior year period. However, in 2Q12
Supply & Logistics's profit contribution jumped 81% over
the prior year period. In 3Q12 we see a 21% decline. Unlike the
Facilities and Transportation segments which are predominantly fee
based businesses, Supply & Logistics is margin based and hence
its results are more volatile. 9/30/12 was $1,434 million ($4.48 per unit), up from $1,039
million ($3.56 per unit) in the corresponding prior year period.
9/30. In the TTM ending
9/30/12 working capital consumed cash amounting to $236 million.
Management adds back this amount in deriving reported DCF while I
do not.
Risk management activities aggregate numerous positive and
negative adjustments. For example, the $121 million downward
adjustment in inventory valuation in 2Q12 was oddest by gains
related to derivatives. I generally do not consider cash generated
by risk management activities to be sustainable, although I
recognize that one could reasonable argue that bona fide hedging of
commodity price risks should be included. The PAA risk management
activities seem to be directly related to such hedging, so I could
be persuaded to also show what sustainable DCF would be if it
included cash generated by risk management activities. But I do not
do so since the amounts are relatively small in the periods being
reviewed.
PAA also provided preliminary DCF guidance of $1,352 million
(mid-point) for 2013 on the assumption that favorable market
conditions will not continue beyond 1Q13. Hence it is lower than
the TTM number.
Coverage ratios appear strong, as indicated in Table 4
below:
Table 4
The high coverage ratios mean that PAA retains ~$400 to $500
million of excess cash flow as a source of capital and thereby
reduces reliance on debt or issuance of additional units that
dilute existing holders. The general partner gets 50% of any
distributions in excess of $1.35 per unit per annum. Given the
current distribution rate is $2.17 per unit per annum, this is a
significant burden that pushes up PAA's cost of capital. The excess
cash flow is therefore has a very low cost of capital compared to
the cost of issuing additional units. $410 million in the TTM ending
9/30/12 and by $683 million in the TTM ending 9/30/11. Clearly PAA
is not using cash raised from issuance of debt and equity to fund
distributions. $2.7billion of the ~$3.1 billion spent (net of sale
proceeds) on growth capital projects and acquisitions in the TTM
ending 9/30/12 was funded by debt and the issuance of additional
partnership units; the ~$400m balance was generated from internal
cash flow.
PAA's per unit distributions have grown at a compounded rate of
~7.5% per annum since 2001. They grew 8.1% in the TTM ending
9/30/12 and are forecasted by management to grow 7-8% in 2013. In
the TTM ending 6/30/12, distributions per unit increased by 7%.
PAA's balance sheet is strong. At September 30, 2012, long-term
debt-to-capitalization ratio was 46%; total debt-to-capitalization
ratio was 49%; and long-term debt-to-adjusted EBITDA ratio was
2.9x. Note that $834 million (~12.5%) of total debt is short-term
debt that primarily supports hedged inventory. This debt is
essentially self-liquidating from the cash proceeds when the
inventory is sold.
PAA's current yield is at the low end of the MLP universe. A
comparison to some of the MLPs I follow is provided in Table 6
below:.
On the other hand, PAA seems to show greater unit price resiliency
in the face of steep declines that have adversely affected MLPs
since November 6. From 11/5/2012 to 11/15/2012 EPD fell 7.98% and
MMP fell 7.19%, while PAA declined 4? | http://www.nasdaq.com/article/a-closer-look-at-plains-all-american-pipelines-distributable-cash-flow-as-of-3q-2012-cm191542 | CC-MAIN-2013-20 | refinedweb | 963 | 56.25 |
In reference to the direction the comments are taking this thread (this doesn't fit neatly into a comment):
The models folder that is created with a new MVC project is for View Models - they are classes to support views. These are not your business models or data models.
For instance, in a ViewModel that supports a view, you might have a enum property that renders to a dropdown:
public enum CustomerTypes
{
INDIVIDUAL = 0,
COMPANY
}
public class CustomerViewModel
{
public CustomerTypes Type { get; set; }
public string[] CustomerTypesSelectList { get; set; }
}
public class CustomerController : Controller
{
public ActionResult Edit()
{
var model = new CustomerViewModel();
model.CustomerTypesSelectList =
Enum.GetNames(typeof(CustomerTypesSelectList));
return View(model);
}
}
And with this you have some javascript in your view to populate a fancy drop down list with the items in CustomerTypesSelectList.
The UI-specific string[] property is a common construct of a ViewModel that gets stripped away when it gets converted to the business model or data model. The Controller would control the conversion (a.k.a. mapping), but probably rely on another class to implement the mapping that ties together the ViewModel and the business model.
To sum up, in an n-layer architecture:
- The MVC Controller is not your business logic. Your business logic resides inside of various services and components that are called by the controller.
- The Models folder contains ViewModels - classes that are meant to support the operation of the UI only.
- The Controller calls a mapper to translate between ViewModels and business models when calling into the business layer.
Total Post:109Points:767
C# ASP.Net ASP.NET MVC
Ratings:
966 View(s)
Rate this:
I know this question has been asked before. I am only asking so that I can get a better explanation. In my webforms applications, this is how I implemented my projects. The solution had two projects. The first project contains Data Access and Business Logic. The first project is referenced in the second project. The aspx.cs files call the business logic class and the business logic class calls the data access class.
I am trying to follow the same approach in my MVC applicaiton. How would I implement that? Controller calls Model which in turn calls Business logic? This approach would add overhead right? | https://www.mindstick.com/forum/12851/how-to-implement-n-layer-architecture-in-mvc | CC-MAIN-2017-22 | refinedweb | 374 | 55.64 |
Render an Image Pretty Pretty Pictures
In this tutorial we will learn how to make the above image. Make sure you are set up correctly first and you have read the first Blender tutorial. If you want to start from the raw "Isolated Galaxy" data you should download it and have yt installed.
If you want your object to glow, make sure you have added the two lines to your Blender's obj importer as described in the "Getting Started" section.
Get the Surface Data
We are starting from the blend file "usual.blend" which has a text editor, Image editor, python console and 3D viewer in its layout. This can be found on the file download page. Before proceeding further, you need to format the 3D simulation data for input into Blender. There are three options for doing this.
Option 1: If you are familiar with yt you can create your own OBJ files by downloading the ENZO data used to create this images here. Be warned it is ~300 MB, so you may have time to make some toast or something before proceeding. Once it is downloaded and you've activated yt in the terminal window, you can follow the blog post here to make your OBJ file, which amounts to just running the following yt-python script (for version yt-3.x):
import yt ds = yt.load("/Users/jillnaiman/yt_files_local/galaxy0030/galaxy0030") # change to your directory rho = [2e-27, 1e-27] trans = [1.0, 0.5] filename = '/Users/jillnaiman/yt_files_local/surfaces' def _Emissivity(field, data): return (data['Density']*data['Density']*np.sqrt(data['Temperature'])) ds.add_field("Emissivity", function=_Emissivity, units=r"\rm{g K}/\rm{cm}^{6}") sphere = ds.sphere("max", (1.0, "mpc")) for i,r in enumerate(rho): surf = ds.surface(sphere, 'Density', r) surf.export_obj(filename, transparency = trans[i], color_field='Temperature', emit_field = 'Emissivity', plot_index = i)
Option 2: Download the OBJ & MTL files here. If you use this option, you'll have to change the top few lines of the OBJ file to point to the correct MTL file (the line after "mtllib"). For multiple files you can use "utils/changemtl.py" to automate the process.
Option 3: Use yt directly in Blender to generate a surface. More on this option in the next section.
Import Surface Data Into Blender
In the following sections, I'll show code that you can copy and paste directly into the python console or put in the text editor. In the last section, I'll show how you can write this whole thing in an external file and run the script when you're happy with your program.
Import OBJ Files
First, lets read in the OBJ file and rescale it to a size that is easy to manipulate.
import science filename = '/Users/jillnaiman/yt_files_local/surfaces.obj' # where obj is stored myobject = science.Load(filename, scale = (50.0,50.0,50.0)) # load and scale x/y/z by 50 times
Import Simulation Data Directly with yt
Alternatively to this two step process of generating OBJ files and then importing in Blender, we can use the raw simulation data and use AstroBlend in conjunction with yt to generate the surface in Blender. Make sure you have yt installed following this tutorial. You'll also need the large simulation data file found here, which in the following example I store in a file named "data" in my home directory.
import science import numpy as np # different data - Enzo filename = '~/data/IsolatedGalaxy/galaxy0030/galaxy0030' sphere_rad = 200.0 # in kpc rho = [2e-27, 1e-27] transparencies = [1.0, 0.5] # how is emissivity calculated? def _Emissivity(field, data): return (data['gas','density']*data['density']*np.sqrt(data['gas','temperature'])) # isosurfaces at 2 and 1 x 10^-27 g/cc myobject = science.Load(filename, scale = (50.0, 50.0, 50.0), isosurface_value = rho, surf_type='sphere', radius = sphere_rad, radius_units = "kpc", surface_field="density", meshname = 'Allen', transparency = transparencies, color_field='temperature', emit_field=_Emissivity)
Note that in this case, you'll get two surfaces named "Allen_0" and "Allen_1". I'm not going to go through each step in the above code or how these surfaces can be manipulated individually, but again will refer you to the yt in AstroBlend tutorial for more information.
Using either import method, we do this we should see the following image if we render:
(Recall that we can render a quick image by pressing the little camera and then the "Render" button underneath the Object Selector panel on the far right in "usual.blend". See the first tutorial for more details.)
Independent of import method, its clear this image doesn't look a lot like the top image.
To get them to look the same, we have to do a few things:
(1) We have to get rid of the cube
(2) We need to fix the lighting so its not from the lamp in the scene, but from the glowyness of the galaxy
(3) We have to move the camera to our desired location to make the image
Deleting Objects with AstroBlend
We need to delete both the cube and the lamp. We could do this with pointing and clicking (see end of Getting To Know Blender tutorial for details on which keys to press), or we can do it from the python console. The clicking method would work fine for the simple objects we have now, however, deleting this way doesn't delete the objects materials, and if you have a lot of objects in your blend file, this can lead to memory leak. So, the way to do it with AstroBlend is as follows:
science.delete_object('Cube') # delete Cube science.delete_object('Lamp') # delete Lamp
Incidentally, you can also delete your uploaded objects, and the best way to do this is to pass the object itself into the deleting routine (don't actually do this unless you want to re-upload!):
science.delete_object(myobject) # delete a previously uploaded object
Now if we render we see something that looks a bit better, but still a bit off. We need to change the lighting format and the camera location to replicate the image on the top of this page:
Lighting in AstroBlend
If you are happy with the default lighting by a lamp in Blender then you can skip this step. Personally, I like my images to be glowy when they can be. To do this, all you need to do is add the following line to your program:
light = science.Lighting('EMISSION') # light by surface emission
Now when we render we see the following:
Camera Motions in AstroBlend
AstroBlend is setup so that it is easy to move the camera's location and pointing from the command line. Again, you could do this with clicking and hot keys, but since we will probably want to point and move the camera to specific locations in our data sets, its much easier to do with the python console.
cam = science.Camera() # make cam object cam.location = (-6,0,0) # put it some where nice cam.pointing = (0,0,0) # where are you pointing this thing?
If you look in the 3D viewer, you'll notice you not only have the galaxy isodensity contours and camera in your space, but you now also have a weird sphere thing in there too:
In the above photo I've enlarged the sphere, but it should be just poking out a little bit in certain places in your 3D viewer. The camera's facing is constrained to always be pointing towards this empty mesh, so by moving the empty, we can point the camera easily instead of always having to calculate rotation angles and whatnot. The sphere is called an "empty" because it will not show up in renders.
Now that we have our camera set up, when we do a quick render we see basically what we want:
Now all we need to do is render this to a file.
Saving Images in AstroBlend
First, you need to specify the location and base file name for your renders. If you are making more than one render, the ending number of the files will start at '0000' by default and increase to '9999' at maximum.
render_directory = '/Users/jillnaiman/blenderRenders/' # save directory, change for your computer render_name = 'galSurfs_' # base name for renders render = science.Render(render_directory, render_name) render.render() # render!
This creates an image in the above directory called "galSurfs_0000.png":
Note if you do
render.render() again you'll see a new figure "galSurfs_0001.png"
because with each call of render the "frame number" is update. To reset back to the zero
frame number simply do the following before your call to render:
render.nframe = 0
Using an External Script
One can copy and paste the above sections of code into the python console, or put them in the text editor and run the full script there. For a reminder of how to run scripts see the first tutorial. This is fine for short scripts, but when they get longer, you might want to save and edit them with an external text editor. For example, on my computer I could save the following combined script in "/Users/jillnaiman/trialSecondTutorial.py":
import science filename = '/Users/jillnaiman/yt_files_local/surfaces.obj' # where obj is stored myobject = science.Load(filename, scale = (50.0,50.0,50.0)) # load and scale x/y/z by 50 times science.delete_object('Lamp') science.delete_object('Cube') light = science.Lighting('EMISSION') cam = science.Camera() cam.location = (-6,0,0) cam.pointing = (0,0,0) render_directory = '/Users/jillnaiman/blenderRenders/' # save directory, change for your computer render_name = 'galSurfs_' # base name for renders render = science.Render(render_directory, render_name) render.render()
Then to run this script in Blender, all we need to do is put the following two lines into Blender's text editor and run this script with the "Run Script" button as we did before:
filename = "/Users/jillnaiman/trialSecondTutorial.py" exec(compile(open(filename).read(), filename, 'exec'))
Get to Know BlenderPrevious Tutorial
SPH Movie from Text FileNext Tutorial | http://www.astroblend.com/tutorials/tutorial_simpleRender.html | CC-MAIN-2021-39 | refinedweb | 1,669 | 62.48 |
i'm new and just now learning from the totorial. i am trying to make a program where you are given a spanish word and you must translate it to english. (this is just me messing around for learning purposes) I'm using an if statement and if they enter the right word for the variable, the program says you're correct or wrong. It works when I use an "int" variable but i don't know how to make a word into a variable. would I use the "char" variable?
(in the following code i use an int variable so it works)
Code:
#include <iostream>
using namespace std;
int main()
{
int dormir;
cout<<"welome to my spanish program!\n";
cout<<"Translate each spanish word into english. Here is the first question out of ten.\n";
cout<<"Dormir\n";
cin>> dormir;
cin.ignore();
if (dormir == 2) { cout<<"correct!\n"; }
else if (dormir != 2) { cout<<"incorrect...\n"; }
system("Pause");
} | http://cboard.cprogramming.com/cplusplus-programming/102321-variable-can-store-words-printable-thread.html | CC-MAIN-2014-52 | refinedweb | 158 | 83.76 |
GraphQL Code Generator v0.11
Explore our services and get in touch.
Note:
This blog post refers to an outdated version, please check for the latest docs!
Generate React and Angular Apollo Components, Resolver signatures and much more!
We are very excited to announce a new release for graphql-code-generator!
If you haven’t checkout graphql-code-generator before, please check out the following introductory blog posts:
v0.11 holds a lot of improvements, new features and a couple of new templates, mostly in Typescript — for frontend and backend!
Here is the gist of it:
- New React and Angular templates to greatly simplify the usage of Apollo Client and Typescript and auto-generate a lot of the current boilerplate
- Generate types for resolvers
- Major overall to support generation of custom scalar types
- Support for GraphQL-Yoga with graphql-import
- Watch mode
- Major refactoring and improvements to the Typescript Mongo template
But the best part of all — on this release we saw a major increase in contributions and contributors! This is a strong sign for the growth of the ecosystem around graphql-code-generator.
Thank you so much everyone, we are humbled by your help. keep it coming!
Even though we haven’t decided to declare 1.0 yet, we use this library on each one of our projects and our clients projects.
Each PR merged creates an alpha release to npm which we then test out and upgrade all of those applications.
Only then we release a full npm version, so you can be assured that by time of the release, it has already been tested by major Enterprise applications who deploy daily into production, on top of all the unit tests on the project!
So all of the new features that are being mention on this post, are already being used by us extensively.
OK, let’s start with the new features for the client side Typescript users:
React templates
react-apollo recently introduced a new default API with Query, Mutation and Subscription components.
Whether you use the new API or prefer HOC and you use Typescript, there is no need to write those wrapper components again and again!
With the new TypeScript React Apollo Template, you don’t need to anymore!
All you need to do is to write your GraphQL Query, Mutation or Subscription, and the codegen will generate fully typed react-apollo components and HOC for each one.
That makes using React Apollo with Typescript so much simpler!
You can read more about it in the following blog post:/blog/codegen-typescript-react-apollo
Thank you Arda TANRIKULU for that amazing contribution!
Angular templates
But why just React? Angular should get the same benefits!
Apollo-angular also recently added added the Query, Mutation and Subscription services.
But why writing them by yourself?
With the new TypeScript Angular Apollo Template, all you need to do is to write your GraphQL Query, Mutation or Subscription, and the codegen will generate a fully functioning, fully typed Angular service for each of those!
As this was a contribution by Kamil Kisiela, the creator and maintainer of the apollo-angular library, we believe we can endorse it from now on as the best way of using apollo-angular!
You can read more about it in the following blog post:/blog/apollo-angular-12
Thank you very much Kamil Kisiela, the creator and maintainer of
apollo-angular for that amazing contribution, from the
apollo-angular and the
graphql-code-generator side!
Moving on to the new backend features.
Generate types for resolvers
The TypeScript template is now able to generate Typescript types not only for a schema and documents but also for GraphQL Resolvers. And It’s enabled by default!
We think this was another missing piece of making your code even more strongly typed, from end to end.
The template now generates a Typescript type for each GraphQL type with resolvers for all of its fields but it is also possible to extract a type of a single field.
import { QueryResolvers } from './generated/graphql'; import { getPosts } from './posts'; const Query: QueryResolvers.Resolvers = { posts() { return getPosts(); }, };
As you can see the usage is straightforward but we recommend to explore the source code of a generated file.
Custom scalars improvements
Prior this version, scalars were declared with
any which means we didn’t take enough advantage of TypeScript here. This was an another opportunity to strengthen up your code. We’re happy to tell you it’s now possible to customize a scalar and provide a type instead of having it as
any.
We also introduced a new option called prepend to add a code at the beginning of a generated file, it completes the custom scalars feature.
Unified naming strategy
camelCase, PascalCase, snake_case, lowercase, UPPERCASE, CONSTANT_CASE, there’s a lot of ways for people to name their types in GraphQL schema. It was a wild west, until now.
We decided to unify the naming strategy in codegen and pick just one. We chose PascalCase. And now, everything generated by codegen follows that rule.
Maybe at some point in the future, graphql-code-generator will allow to set your own naming strategy. Contributions are welcome so don’t feel afraid, we will help you!
Keep on mind that it’s a breaking change, TypeScript might warn about non existing type, if so then change it to be PascalCased.
Watch-mode
Now the GraphQL Code Generator detects changes in your schema and type definitions, and generates typings automatically after you edit these files.
The only thing you need to do is add
-w parameter to the command line.
Thank you Arda TANRIKULU and FredyC for that great contribution!
Support for GraphQL-Yoga with graphql-import
Thanks to David Yahalomi’s great contribution, GraphQL Code Generator now understands the graphql-import syntax being used by GraphQL Yoga.
# import B from "b.graphql" type A { # test 1 first: String second: Float b: B }
MongoDB Schema Template
We are also working on improving the MongoDB Typescript schema template.
We are using this template in many of our apps in production, it saves us a lot of boilerplate and potential bugs, and it’s also easier to maintain and keep track of the MongoDB schema objects.
Basically, this template let’s you generate TypeScript typings for the shape of your MongoDB objects based on your GraphQL schema.
For example, using the following schema:
type User @entity { id: String @id username: String! @column email: @column }
You will get a generated TypeScript interface:
import { ObjectID } from 'mongodb'; export interface UserDbObject { _id: ObjectID; username: string; email?: string | null; }
Then, you can use it with MongoDB driver for NodeJS:
import { Collection } from 'mongodb'; import { db } from './my-db-instance'; const MyCollection: Collection<UserDbObject> = db.collection<UserDbObject>( 'users' );
Now your MongoDB collection is typed, and it’s easier to find bugs during development.
We understand that a lot of edge cases might occur because of the differences between GraphQL schema and the lack of schema in MongoDB, so we added
@map directive that lets you do custom mapping of fields:
type User @entity { id: String @id username: String! @column @map(path: "profile.credentials.username") email: @column }
Will output:
import { ObjectID } from 'mongodb'; export interface UserDbObject { _id: ObjectID; profile { credentials: { username: string; }; }; email?: string | null; }
There is also the need to handle the difference between embedded types and linked types, so you can use
@entity(embbeded: true) and
@embbeded to declare link and embedded entities:
type User @entity { id: String @id profile: Profile! @embedded email: @column photos: [Photo] @link } type Profile @embedded { fullName: String } type Photo @entity { id: String! @id url: String! @column }
Will output:
import { ObjectID } from 'mongodb'; export interface UserDbObject { _id: ObjectID; profile: ProfileDbObject; photos: [ObjectID]; email?: string | null; } export interface ProfileDbObject { fullName: string; } export interface PhotoDbObject { _id: ObjectID; url: string; }
Another issue is to add fields that are relevant only for the MongoDB schema, and not relevant to your GraphQL schema (such as security token and login hashes), so you can define it in your GraphQL Schema:
type User @entity(additionalFields: [ { path: "services.login.token", type: "string" } ]) { id: String @id email: @column }
We also addressed GraphQL Interface and now you can use
@abstractEntity to override your interfaces, and if you are using GraphQL Unions, you can add a custom
@discriminatorField to your union, and it will add the field to your MongoDB schema.
We are also thinking about adding more templates for entities schemas in the future, such as JSON schema.
_We are planning to release the new version of that template next week and write a dedicated blog post on it with more details._id
Also, that template can be a great start for others to add the same generated features to other data sources like SQL and NoSQL databases, DB ORMs and more!
Other honorable mentions on this release:
- Support for AWS AppSync
- Generate introspection file
- Huge performance improvements thanks to prevostc
- Added implementingTypes for interfaces
Watch the complete breakdown of the releases and features here: ()[]
As usual, we keep all of our dependencies up to date with each release thanks to renovate. This release updates everything to latest as usual, including TypeScript 3.0.
We believe it is a good open source practice and helps not just us and our users but it also helps our open source dependencies authors to get early feedback on any issue that might happen.
Friends help friends test early! ;)
We encourage that practice on for any open source library and any application!
What other ideas do you have for generating things?
If you have any ideas or custom needs, that’s exactly where the graphql-code-generator shines, let us know about your needs and we would be happy to support you!
This version wouldn’t be possible without the great help of sgoll, David Yahalomi, Arda TANRIKULU, Kamil Kisiela, degroote22, jycouet, FredyC, prevostc, arvind-chandrashekar-ck, Cactucs.
Also want to contribute? here are some ideas for future contributions (but any ideas would be welcomed!), we would love to support and help any new contributor who want to give it a try!
Don’t forget — if you created your own template and think it could benefit others, link it on our readme here.
Star and follow us on Github and Twitter, we have a lot more coming soon! | https://the-guild.dev/blog/graphql-code-generator-011 | CC-MAIN-2021-31 | refinedweb | 1,717 | 60.95 |
C++ backend for QML
That’s a question I see around a lot. And despite the official documentation page I decided to create my own example.
You can also read a pretty good article, but I’ve made it even simpler.
This article is relevant to the current version of Qt which is 5.7 in the moment.
So, you have a QML application and you want to have some backend logic to be written in C++. It’s actually a good idea, so QML and JS would handle some simple UI manipulation, and C++ would maintain some serious backend shit with rocket science calculations and concurrent processing, so it’s independent from QML UI which is frontend in that case.
Actually, if you are familiar with ASP.NET, you can consider this “C++ backend” thing as calling C# server’s method from JS via XMLHttpRequest, so frontend and backend are separated. It’s not exactly like that, but I hope you get the idea.
Anyway, here’s what needs to be done:
- Create a C++ class, that will perform the backend logic, and implement a public slot in it;
- Add an instance of that class into the QML context or register a type (I’ll show both options);
- Wire up some event to this slot.
Say, we have a C++ class Backend which has a slot doSome():
class Backend : public QObject { // ... QString doSome(); // ... };
Now let’s consider two ways of adding it to QML.
1st option: add an object as a context property
Add the instance of our class to the QML context in
main.cpp:
QQmlApplicationEngine engine; engine.load(QUrl("qrc:/main.qml")); Backend *backend = new Backend(); engine.rootContext()->setContextProperty("backend", backend);
And then wire it up to onClicked event of some button in
main.qml:
Button { text: "Do some" onClicked: { backend.doSome() } }
2nd option: register a new type
Register a type of our class in
main.cpp:
QQmlApplicationEngine engine; // it's important to do this before engine.load qmlRegisterType<Backend>("io.qt.Backend", 1, 0, "Backend"); engine.load(QUrl(QLatin1String("qrc:/main.qml")));
Then import it in
main.qml:
import io.qt.Backend 1.0
Define it:
Backend { id: backend }
And wire it up to onClicked event of some button:
Button { text: "Do some" onClicked: { backend.doSome() } }
So, now when you push the button, C++ method will do some work.
The main difference between these two methods is that the first method register an already existing object, and the second registers a type an object of which is yet to be created. Aside from that difference, I like the second way better.
I’ve put the full source code of the example with both options on GitHub.
I wrote another article on the subject with a more sophisticated example: TCP client-server | https://retifrav.github.io/blog/2016/07/18/cpp-backend-qml/ | CC-MAIN-2019-26 | refinedweb | 465 | 65.12 |
HCI command for the system channel. More...
#include "mbox_def.h"
Go to the source code of this file.
HCI command for the system channel.
This software component is licensed by ST under BSD 3-Clause license, the "License"; You may not use this file except in compliance with the License. You may obtain a copy of the License at: opensource.org/licenses/BSD-3-Clause
Definition in file shci.h.
THE ORDER SHALL NOT BE CHANGED.
SHCI_SUB_EVT_BLE_NVM_RAM_UPDATE This notifies the CPU1 which part of the BLE NVM RAM has been updated so that only the modified section could be written in Flash/NVM StartAddress : Start address of the section that has been modified Size : Size (in bytes) of the section that has been modified.
command parameters
Command parameters.
Command parameters.
SHCI_SUB_EVT_ERROR_NOTIF This reports to the CPU1 some error form the CPU2.
SHCI_SUB_EVT_NVM_END_WRITE This notifies the CPU1 that the CPU2 has written all expected data in Flash.
SHCI_SUB_EVT_NVM_START_ERASE This notifies the CPU1 that the CPU2 has started a erase procedure in Flash NumberOfSectors : The number of sectors the CPU2 needs to erase in Flash. For each sector, the algorithm as described in AN5289 is executed. When this number is reported to 0, it means the Number of sectors to be erased was unknown when the procedure has started. When all sectors are erased, the SHCI_SUB_EVT_NVM_END_ERASE event is reported
SHCI_SUB_EVT_NVM_START_WRITE This notifies the CPU1 that the CPU2 has started a write procedure in Flash NumberOfWords : The number of 64bits data the CPU2 needs to write in Flash.
For each 64bits data, the algorithm as described in AN5289 is executed. When this number is reported to 0, it means the Number of 64bits to be written was unknown when the procedure has started. When all data are written, the SHCI_SUB_EVT_NVM_END_WRITE event is reported
SHCI_SUB_EVT_CODE_READY This notifies the CPU1 that the CPU2 is now ready to receive commands It reports as well which firmware is running on CPU2 : The wireless stack of the FUS (previously named RSS)
SHCI_SUB_EVT_THREAD_NVM_RAM_UPDATE This notifies the CPU1 which part of the OT NVM RAM has been updated so that only the modified section could be written in Flash/NVM StartAddress : Start address of the section that has been modified Size : Size (in bytes) of the section that has been modified.
SHCI_SUB_EVT_NVM_END_ERASE This notifies the CPU1 that the CPU2 has erased all expected flash sectors.
SHCI_C2_802_15_4_DeInit.
Deinit 802.15.4 layer (to be used before entering StandBy mode)
SHCI_C2_BLE_Init.
Provides parameters and starts the BLE Stack
SHCI_C2_BLE_LLD_Init.
Starts the LLD tests BLE
SHCI_C2_CONCURRENT_EnableNext_802154_EvtNotification.
Activate the next 802.15.4 event notification (one shot)
SHCI_C2_CONCURRENT_GetNextBleEvtTime.
Get the next BLE event date (relative time)
SHCI_C2_CONCURRENT_SetMode.
Enable/Disable Thread on CPU2 (M0+)
SHCI_C2_Config.
Send the system configuration to the CPU2
Please check macro definition to be used for this function They are defined in this file next to the definition of SHCI_OPCODE_C2_CONFIG
SHCI_C2_DEBUG_Init.
Starts the Traces
SHCI_C2_ExtpaConfig.
Send the Ext PA configuration When the CPU2 receives the command, it controls the Ext PA as requested by the configuration This configures only which IO is used to enable/disable the ExtPA and the associated polarity This command has no effect on the other IO that is used to control the mode of the Ext PA (Rx/Tx)
SHCI_C2_FLASH_EraseActivity.
Provides the information of the start and the end of a flash erase window on the CPU1
SHCI_C2_FLASH_EraseData.
Erase Data in Flash
SHCI_C2_FLASH_StoreData.
Store Data in Flash
SHCI_C2_FUS_ActivateAntiRollback.
Request the FUS to enable the AntiRollback feature so that it is not possible to update the wireless firmware with an older version than the current one. Note:
SHCI_C2_FUS_FwDelete.
Delete the wireless stack on CPU2 Note: This command is only supported by the FUS.
SHCI_C2_FUS_FwUpgrade.
Request the FUS to install the CPU2 firmware update Note: This command is only supported by the FUS.
SHCI_C2_FUS_GetState.
Read the FUS State If the user is not interested by the Error code response, a null value may be passed as parameter
Note: This command is fully supported only by the FUS. When the wireless firmware receives that command, it responds SHCI_FUS_CMD_NOT_SUPPORTED the first time. When the wireless firmware receives that command a second time, it reboots the full device with the FUS running on CPU2
SHCI_C2_FUS_LoadUsrKey.
Request the FUS to load the user key into the AES Note: This command is supported by both the FUS and the wireless stack.
SHCI_C2_FUS_LockAuthKey.
Request the FUS to prevent any future update of the authentication key Note: This command is only supported by the FUS.
SHCI_C2_FUS_LockUsrKey.
Request the FUS to lock the user key so that it cannot be updated later on Note: This command is supported by both the FUS and the wireless stack.
SHCI_C2_FUS_StartWs.
Request the FUS to reboot on the wireless stack Note: This command is only supported by the FUS.
SHCI_C2_FUS_StoreUsrKey.
Request the FUS to store the user key Note: This command is supported by both the FUS and the wireless stack.
SHCI_C2_FUS_UnloadUsrKey.
Request the FUS to Unload the user key so that the CPU1 may use the AES with another Key Note: This command is supported by both the FUS and the wireless stack.
SHCI_C2_FUS_UpdateAuthKey.
Request the FUS to update the authentication key Note: This command is only supported by the FUS.
SHCI_C2_LLDTESTS_Init.
Starts the LLD tests CLI
SHCI_C2_MAC_802_15_4_Init.
Starts the MAC 802.15.4 on M0
SHCI_C2_RADIO_AllowLowPower.
Allow or forbid IP_radio (802_15_4 or BLE) to enter in low power mode.
SHCI_C2_Reinit.
This is required to allow the CPU1 to fake a set C2BOOT when it has already been set. In order to fake a C2BOOT, the CPU1 shall :
SHCI_C2_SetFlashActivityControl.
Set the mechanism to be used on CPU2 to prevent the CPU1 to either write or erase in flash
SHCI_C2_THREAD_Init.
Starts the THREAD Stack
SHCI_C2_ZIGBEE_Init.
Starts the Zigbee Stack
SHCI_GetWirelessFwInfo.
This function read back the informations relative to the wireless binary loaded. Refer yourself to MB_WirelessFwInfoTable_t structure to get the significance of the different parameters returned.
AttrValueArrSize NOTE: This parameter is ignored by the CPU2 when the parameter "Options" is set to "LL_only" ( see Options description in that structure )
Size of the storage area for the attribute values. Each characteristic contributes to the attrValueArrSize value as follows:
Definition at line 416 of file shci.h.
LsSource Source for the 32 kHz slow speed clock.
Definition at line 499 of file shci.h.
MasterSca The sleep clock accuracy handled in master mode.
It is used to determine the connection and advertising events timing. It is transmitted to the slave in CONNEC_REQ PDU used by the slave to calculate the window widening, see SlaveSca and Bluetooth Core Specification v5.0 Vol 6 - Part B - chap 4.5.7 and 4.2.2 Possible values:
Definition at line 490 of file shci.h.
MaxConnEventLength This parameter determines the maximum duration of a slave connection event.
When this duration is reached the slave closes the current connections event (whatever is the CE_length parameter specified by the master in HCI_CREATE_CONNECTION HCI command), expressed in units of 625/256 µs (~2.44 µs)
Definition at line 510 of file shci.h.
MblockCount NOTE: This parameter is overwritten by the CPU2 with an hardcoded optimal value when the parameter "Options" is set to "LL_only" ( see Options description in that structure )
Number of allocated memory blocks for the BLE stack
Definition at line 453 of file shci.h.
NumAttrRecord Maximum number of attribute records related to all the required characteristics (excluding the services) that can be stored in the GATT database, for the specific BLE user application.
For each characteristic, the number of attribute records goes from two to five depending on the characteristic properties:
Min value: <number of="" user="" attributes>=""> + 9
Definition at line 390 of file shci.h.
NumAttrServ Defines the maximum number of services that can be stored in the GATT database.
Note that the GAP and GATT services are automatically added at initialization so this parameter must be the number of user services increased by two.
Definition at line 399 of file shci.h.
PrWriteListSize NOTE: This parameter is ignored by the CPU2 when the parameter "Options" is set to "LL_only" ( see Options description in that structure )
Maximum number of supported prepare write request
Definition at line 442 of file shci.h.
SlaveSca The sleep clock accuracy (ppm value) that used in BLE connected slave mode to calculate the window widening (in combination with the sleep clock accuracy sent by master in CONNECT_REQ PDU), refer to BLE 5.0 specifications - Vol 6 - Part B - chap 4.5.7 and 4.2.2.
Definition at line 473 of file shci.h. | https://os.mbed.com/docs/mbed-os/v6.15/mbed-os-api-doxy/shci_8h.html | CC-MAIN-2022-05 | refinedweb | 1,426 | 53.92 |
Revision history for Test::SFTP 1.10 25.10.11 - Fix a lot of failed tests by checking for available modules. 1.09 23.10.11 - Fixed RT #71873 (thanks to Montgomery Conner). - Moved to Dist::Zilla, finally. 1.08 07.06.10 - If anything, last version made it even more confusing! It is now fixed 1.07 07.06.10 - RT #58199 - Sorry Salvador! :) 1.06 07.06.10 - Adding port option - Updating POD 1.05 06.06.10 - Productionizing it! 1.04_01 04.06.10 - Switched from Net::SFTP to Net::SFTP::Foreign - Switched from Test::More to Test::Builder - Some API breakage (status, ssh_args -> more, etc.) - A lot of code and POD cleanups - Test adjustments, cleanups, fixes, changing some deps for them - Adding namespace::autoclean and other deps - Cleaning up Build.PL, adding LICENSE, META, examples, etc. 0.04 08.02.09 - in t/02-failure.t: initialized $ENV{'HOME'} as empty (windows) - in t/03-successful.t: skipping if getpwuid doesn't work (windows) 0.03 20.01.08 - in t/01-timeout.t: finally fixed eval for Test::Timer - in t/03-successful.t: finally fixed eval for File::Util - in t/03-successful.t: changed no. of skipped tests in first SKIP to 14 - in t/03-successful.t: removed File::Util from the top "use" group - added "use warnings" (even though we're already using Moose) to gain more Kwalitee. - added to dist_abstract to Build.PL 0.02 18.01.08 - Rewrote large parts of the POD. - Uses better types with Moose ('Bool', 'Object') - Added timeout attribute. - Added timeout option for Net::SFTP connection. - Added testing for timeout attribute using Test::Timer - Checking attributes in connect() to avoid uninitialized variables - Put timeout attribute as optionally tested if Test::Timer exists - Time::Timer marked as recommended in Build.PL - The tests that use File::Util are now optionally skipped if it doesn't exist - Added File::Util to recommends - Separated all the dangerous tests from the the others, changed the ENV - Some tested were improved, using eval{} 0.01 07.01.08 First version. I couldn't be prouder. | https://metacpan.org/changes/distribution/Test-SFTP | CC-MAIN-2016-50 | refinedweb | 355 | 70.19 |
Richard L. Kilmer and Thomas H. Spreen                 Economic Information Report 180

INDIAN RIVER CITRUS PACKINGHOUSES AND THE SOUTHWARD MOVEMENT OF PRODUCTION

Food and Resource Economics Department                              April 1983
Agricultural Experiment Stations
Institute of Food and Agricultural Sciences
University of Florida, Gainesville 32611

ABSTRACT

Existing packinghouses in the Indian River marketing district are located near older groves, while new plantings have been concentrated in the southern half of the district. Mixed integer and dynamic programming models are used to determine the size, number, and location of packinghouses that minimize the cost of assembling, packing, and distributing fresh oranges and grapefruit as production moves south.

Key words: Grapefruit, Indian River, oranges, packinghouses, plant location.

ACKNOWLEDGEMENTS

We wish to express our appreciation to Mr. George Kmetz, formerly with the Indian River County Appraiser's office, for his assistance in determining the building costs for Indian River packinghouses and to the Florida Department of Citrus for financial assistance.

TABLE OF CONTENTS

                                                                          Page
ABSTRACT ..................................................................  i
TABLE OF CONTENTS ......................................................... ii
LIST OF TABLES ........................................................... iii
LIST OF FIGURES .......................................................... iii
INTRODUCTION ..............................................................  1
PROBLEM STATEMENT .........................................................  1
    Overview of the Study .................................................  4
DATA FOR MODEL ............................................................  5
    Supply and Demand .....................................................  5
    Assembly and Distribution Costs .......................................  7
    Packing Costs .........................................................  8
    Other Assumptions ..................................................... 11
RESULTS ................................................................... 11
SUMMARY AND CONCLUSIONS ................................................... 14
REFERENCES ................................................................ 15
LIST OF TABLES

Table                                                                     Page
1   Projected production of oranges and grapefruit in the Indian
    River marketing district, 1979-80 and 1983-84 seasons ................  6
2   Projected disposition of Indian River fresh citrus shipments,
    1979-80 season .......................................................  7
3   Estimated fresh citrus truck hauling costs per 1 3/5 bushel,
    1979-80 season .......................................................  8
4   Estimated variable and fixed costs per 1 3/5 bushel box,
    1979-80 season .......................................................  9
5   Estimated land, packinghouse, equipment, and working capital
    cost in the Indian River marketing district, 1980 dollars ............ 10
6   Static and dynamic solutions to the packinghouse location
    problem, 1979-80 through 1983-84 ..................................... 12
7   Packinghouse size configuration for the best dynamic solution ........ 13

LIST OF FIGURES

Figure
1   Indian River district grapefruit and orange production,
    packinghouses and ports for fresh fruit
2   Indian River marketing district projected production and
    packing of oranges and grapefruit, 1979-80 through 1983-84

INDIAN RIVER CITRUS PACKINGHOUSES AND THE SOUTHWARD MOVEMENT OF PRODUCTION

Richard L. Kilmer and Thomas H. Spreen

INTRODUCTION

The Indian River area is a marketing order district on the east coast of Florida (Figure 1). Nearly two-thirds of its western border is separated from the Interior marketing district by swampland that contains little or no citrus. In the past 15 years, new plantings have been concentrated in the southern half of the district. The projected growth in grapefruit and orange production from 1979-80 to 1983-84 is 12.3 percentage points greater in the southern production area (Figure 1) than in the northern area (26.9 and 14.6 percent, estimated from Florida Crop and Livestock Reporting Service, 1980, and Fairchild). Existing packinghouses are located near older groves. As more citrus is grown farther south, transportation cost increases will occur unless new packinghouses open near the new production areas.

PROBLEM STATEMENT

The problem to be examined in this study is the impact of the southward movement of citrus production (Figure 2) in the Indian River marketing district on the size, number, and location of citrus packinghouses.

RICHARD L. KILMER and THOMAS H. SPREEN are assistant professor and associate professor of food and resource economics.

[Figure 1 (map): Indian River district grapefruit and orange production, packinghouses and ports for fresh fruit. The map marks Jacksonville, Titusville, Port Canaveral, Cocoa, Melbourne, Vero Beach, Ft. Pierce, Stuart, Jupiter, Port Everglades, and Tampa; divides the district into North and South production areas; legend: E = location of existing packinghouses, N = potential location of new packinghouses, P = ports for fresh fruit.]

[Figure 2 (line chart): Indian River marketing district projected production and packing of oranges and grapefruit, 1979-80 through 1983-84. Vertical axis: 1 3/5 bushel boxes (thousands); horizontal axis: season, 1979-80 through 1983-84; series: South district production, North district production, South district packing, North district packing.]

Overview of the Study

An analytical approach to this study is to identify a number of supply points representing groups of groves and a number of demand points or "destinations."
In this study, demand points are regions of the U.S., Canada, and five possible ports of export (see Figure 1). In 1979, there were 35 existing plants in four locations. These plants are divided into two groups designated as small (under 500,000 1 3/5 bushel boxes) and large (over 500,000 boxes). Only large new plants are considered, and they are allowed to open at the four existing locations and three new locations (see Figure 1). Using estimates for the cost of shipping fruit from the supply points to the packing plants (the assembly problem), the cost of shipping fruit from the plants to the demand points (the distribution problem), and the cost of packing the fruit at the packing plants, the best configuration (size, number, and location) of the plants is determined by that configuration which allows assembly, packing, and distribution of the fruit at least cost.

The optimal configuration for a particular crop year can be determined via a mixed integer programming model. Using the computer to compute the total assembly, packing, and distribution cost associated with each feasible configuration,(1) the least cost configuration is determined. The mixed integer programming model gives the optimal configuration for a given crop year, but does not indicate how the industry can best adjust from the existing configuration to another one. This problem is not trivial since there are costs associated with opening new plants and closing old plants, called transition costs. To find the optimal path of adjustment from the existing configuration to a new configuration, a dynamic programming model is used.

A mixed integer programming model determines the best plant configuration for a particular crop year. This solution is excluded and the model is run again to find the second best configuration. The process is repeated until several solutions are found. In this study, the crop years 1979-80 through 1983-84 are each analyzed in this manner. Using dynamic programming, the optimal path is found, beginning with the existing configuration through the 1983-84 crop year, which minimizes the sum of assembly, packing, and distribution costs over these years plus the transition costs incurred as new plants open and old plants close. For a technical description and justification of the particular methodology used, see Kilmer, Spreen, and Tilley. The remainder of this report documents the data used in the analysis and reports the results.

(1) A feasible configuration is one in which the plants have sufficient capacity to pack all of the fruit available.
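The enumeration idea just described can be illustrated with a minimal sketch. The site names, volumes, and costs below are hypothetical, and capacity limits are ignored so that each grove simply ships to its cheapest open plant; the study's actual model is a mixed integer program that enforces plant capacities.

```python
# Minimal sketch of configuration enumeration: cost every subset of
# candidate plant sites and keep the cheapest.  All numbers hypothetical.
from itertools import combinations

groves = {"north": 10_000, "south": 30_000}        # boxes supplied
fixed = {"Titusville": 50_000, "Ft. Pierce": 60_000, "Stuart": 55_000}
# combined assembly + packing + distribution cost per box, grove -> site
unit = {("north", "Titusville"): 2.10, ("north", "Ft. Pierce"): 2.40,
        ("north", "Stuart"): 2.55, ("south", "Titusville"): 2.60,
        ("south", "Ft. Pierce"): 2.15, ("south", "Stuart"): 2.20}

best = None
for r in range(1, len(fixed) + 1):
    for config in combinations(fixed, r):          # one candidate configuration
        cost = sum(fixed[s] for s in config)       # fixed cost of open plants
        cost += sum(vol * min(unit[g, s] for s in config)
                    for g, vol in groves.items())  # cheapest open plant per grove
        if best is None or cost < best[0]:
            best = (cost, config)

print("least cost configuration:", best[1], "at $%.0f" % best[0])
```

With these invented numbers a single plant at Stuart is cheapest; the fixed charge is what makes the tradeoff between few large plants and many well-placed plants non-trivial.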
Even though oranges and grapefruit are brought to a packinghouse, only 65.6 and 76.1 percent of the deliveries 6 Table 1.--Projected production of oranges and grapefruit in the Indian River marketing district, 1979-80 and 1983-84 seasons 1979-80 1983-84 Location Oranges Grapefruit Oranges Grapefruit ------------------1 3/5 bushel box-------------------- Northa 8,699,047 3,529,207 9,957,905 4,055,227 South 24,022,998 19,756,592 29,739,736 25,824,685 aSee Figure 1 for location. were actually packed during the 1979-80 season (Hooks and Kilmer, 1981a, p. 4). The remainder was shipped to processing plants. Total one and three-fifths bushel boxes packed in the Indian River marketing district are projected for the 1979-80 through the 1983-84 marketing seasons (Figure 2), after considering tree age, variety, yield, and the percentage of citrus taken to the packinghouse which was actually packed. The projected oranges and grapefruit packed are either exported (1.7 and 40 percent) or shipped intra and interstate (98.3 and 60 per- cent--Florida Department of Agriculture and Consumer Services, 1980, pp. 33-34). North America is divided into five demand areas with central points for distribution at New York City, Atlanta, Chicago, Los Angeles, and Toronto, Canada (Table 2). Each region is assumed to maintain its 1979-80 market share for oranges and grapefruit through 1983-84 (Florida Department of Citrus, 1980) (Table 2). Fresh citrus is exported through Ft. Pierce, Jacksonville, Port Canaveral, Port Everglades, and Tampa, all in Florida (Table 2). The 1979-80 market share (Table 2) for each port is assumed to remain unchanged through the 1983-84 marketing season (Florida Department of Agriculture and Consumer Services, 1980, p. 35). 7 Assembly and Distribution Costs The distribution costs (Table 3) from packinghouses in the Indiana River district to the five North American cities (already identified) are determined by averaging actual quoted rates for oranges and grape- fruit from November 1979 through May 1980 (U.S. Federal-State Market News Service). The distribution cost per one and three-fifths bushel from the packinghouses to the ports is equal to .2049 plus .0041 times one-way distance in miles (Updated Machado, 1978, p. 100, to 1979-80 dollars). The cost of hauling the oranges and grapefruit from the citrus groves to the packinghouses and the cost of hauling eliminations from the packinghouse to a processing plant is $.00727 per one and three-fifths bushel mile (calculated from Hooks and Kilmer, 1981b, p. 7). Table 2.--Projected disposition of Indian River fresh citrus shipments, 1979-80 season Location Oranges Grapefruit --------------1 3/5 bushel box--------- Domestic regions Atlanta 428,255 (30%) 891,838 (13%) Chicago 265,607 (19%) 1716,664 (24%) Los Angeles 139,943 (10%) 655,156 (9%) New York 474,522 (33%) 3,011,292 (42%) Toronto 119,666 (8%) 854,054 (12%) Subtotal 1,427,993 (100%) 7,129,004 (100%) Port of exit Ft. Pierce 7,049 (28%) 1,334,315 (28%) Jacksonville 1,664 (7%) 315,020 (7%) Port Canaveral 3,535 (14%) 669,061 (14%) Port Everglades 3,575 (14%) 676,675 (14%) Tampa 9,317 (37%) 1,763,542 (37%) Subtotal 25,140 (100%) 4,758,613 (100%) TOTAL 1,453,133 11,887,617 8 Table 3.--Estimated fresh citrus truck hauling costs per 1 3/5 bushel, 1979-80 season Cost Oranges Grapefruit Atlanta $1.09 $1.01 Chicago 2.72 2.67 Los Angeles 4.88 4.58 New York 2.72 2.67 Torontoa 3.24 3.18 aToronto was estimated by taking the rate to New York times 1.19 to account for the extra distance to Toronto. 
Source: U.S. Federal-State Market News Service. Packing Costs Existing packinghouse capacities over time are assumed to be the 1979-80 volume packed plus 20 percent2 (Florida Department of Agricul- ture and Consumer Services, 1980, pp. 18-24). Existing plants were categorized as small (100,000 to 500,000 one and three-fifths bushels annually) or large (500,001 to 850,000). All new plants are assumed to be large plants. The variable costs for existing and new packinghouses includes labor (less 30 percent of the foreman labor that is assumed fixed), direct operating expenses less repairs and maintenance, 30 percent of the administration expense, and 50 percent of the sales expense (Table 4). Fixed costs for existing plants are composed of overhead and 2Packinghouse capacity figures are not available; therefore annual volume packed was used. Kilmer and Tilley found that Florida packinghouses operate at an 11 month average of 50 percent of capacity. Capacity utilization for some individual plants will be greater than 50 percent. Thus, the potential individual packinghouse capacity is assumed to be 20 percent greater than the volume packed by each packinghouse in 1979-80. 9 investment servicing cost (debt servicing plus net return on invest- ment). Overhead includes repairs and maintenance, insurance, taxes and licenses, 30 percent of foreman labor, 70 percent of administrative expense, and 50 percent of sales expense (Table 4). Investment ser- vicing cost is $.125 per one and three-fifths bushel (calculated from Hooks and Kilmer, 1981a, and Florida Department of Agriculture and Consumer Services, 1980, p. 37).3 Table 4.--Estimated variable and fixed costs per 1 3/5 bushel box, 1979- 80 season Packinghouse Cost Smalla Largea Variable Materials $1.068 $ .975 Labor (.70) .900 .743 Direct operating .104 .120 Administrative (.30) .074 .052 Sales (.50) .081 .118 Total variable cost $2.227 $2.008 Fixed Labor (.30) .078 .062 Repairs and maintenance .251 .112 Insurance .054 .028 Taxes and licenses .019 .024 Administrative (.70) .172 .123 Sales (.50) .081 .118 Total fixed cost $ .655 $ .467 aSmall is 100,000 to 500,000 1 3/5 bushel box annual volume; large is 500,001 to 850,000 1 3/5 bushel box annual volume. Source: Packinghouse records. 3The $.125 figure is taken from accounting records and is labelled as depreciation and rent. Data on actual debt servicing and net return on investment are not available. Ideally, this information is needed from each packinghouse. 10 The same estimate of overhead for existing plants is used for new plants. Using data provided by KIetA (1982), total estimated facility costs for a new large plant in 1980, including land, building, offices, and equipment, was $1.7 million (Table 5). It is assumed that a 20 percent downpayment of $340,000 would be required, the remainder financed at 16 percent for 20 years. The annual debt servicing costs are $229,387. The downpayment, $340,000, represents net investment. Since all costs are in constant 1979 dollars, a real rate of return (nominal interest rate minus the inflation rate) on net investment of 3 percent is assumed. The downpayment is a fixed cost but also can be viewed as a transition cost, since it is a cost which is incurred only in the year the plant opens. 
Table 5.--Estimated land, packinghouse, equipment, and working capital cost in the Indian River marketing district, 1980 dollars a Packinghouse Item Small Large Landb $ 63,000 $ 105,000 (6 acres) (10 acres) Packinghouse building, metal, dock height $ 346,892 $ 607,000 (28,571 sq.ft.) (50,000 sq.ft.) Packinghouse equipment $ 230,053 $ 314,053 Fork lifts $ 48,000 $ 72,000 Office building $ 63,839 $ 85,120 (3,000 sq.ft.) (4,000 sq.ft.) Office equipment $ 44,889 $ 44,889 Operating capital $ 210,500 $ 421,000 $1,007,173 $1,649,062 aEach packinghouse has a central sizer, packer aids, no miechani'al palletization, and no cold storage. Source: b.--Kmetz, 1982; c.--Industry source; d.--Packinghouse records. 11 Other Assumptions Once a new plant is opened, it is not allowed to close. An exist- ing plant which covers cash costs but not all investment servicing costs is closed after three years. If existing plant is closed for less than three years, it can re-open at zero start-up cost. A look at the past industry adjustments in number of packinghouses actually in operation from one season to another reveals an industry able to make short-term adjustments in numbers. From the 1964-65 season to 1965-66, packing- house numbers increased from 160 to 225 (State of Florida total -- Florida Department of Agriculture and Consumer Services). By the 1968- 69 season, the number of packinghouses declined to 169. A similar decrease occurred from 1969-70 season until 1971-72 when the number of packinghouses declined from 211 to 164. RESULTS The model includes oranges and grapefruit produced in 13 locations in the Indian River district of Florida, 35 existing packinghouses at four locations, potential opening of new packinghouses at three loca- tions where no packinghouses currently exist (Figure 1), five consump- tion regions in the U.S. and Canada (Table 2), and five export points (see Figure 1 for the Florida locations). The static mixed integer solutions for 1979-80 through the 1983-84 seasons are obtained from a mixed integer plant location model which contained small and large existing packinghouses and large new packing- houses. The costs associated with the best solutions are shown in Table 6. The costs have been discounted to 1979 using a 3 percent real dis- count rate (without inflation). The costs in 1983-84 are adjusted to reflect the present value of the cost of packing citrus from 1983-84 on indefinitely, assuming that configuration and supply and demand levels remain unchanged. 
Using estimated discounted transition costs (Kilmer, Spreen, Tilley) and the static solutions from the mixed integer program- 12 Table 6.--Static and dynamic solutions to the packinghouse location problem, 1979-80 through 1983-84a Rank Dynamic Static solutions for seasons ordered program solutions solution 1979-80 1980-81 1981-82 1982-83 1983-84 (through infinity)b -------------------------- thousand $ ------------ ------------ 1 2,548,660 59,083 60,762 62,799 64,829 66,901 "(Best) (2,296,922) 2 2,568,282 59,083 60,782 62,807 64,832 66,911 (Fourth Best) (2,297,276) 3 59,094 60,838 62,821 64,863 66,914 (2,297,396) 4 59,101 60,852 62,826 64,865 66,925 (2,297,750) 5 59,109 60,862 62,900 64,882 67,001 (2,300,387) 6 59,118 60,877 62,907 64,884 67,012 (2,300,741) 7 59,139 60,884 62,919 64,904 67,015 (2,300,831) 8 59,146 60,922 62,920 64,924 67,025 (2,301,186) 9 59,152 60,939 62,941 64,926 67,025 (2,301,186) 10 59,176 60,943 64,977 67,046 (2,301,896) Initial configu- ration 2,605,366 62,350 63,687 65,167 66,779 68,371 (2,347,383) Transition cost 2903c 365 320 329 302 (Best) Transition cost (EQUEth hest) 2903c 365 320 329 302 Transition cost - (Initial conf.) OC 0 0 0 0 aAll costs are in 1979 dollars. bPresent value of collection, packing, and distribution cost from 1983-84 to infinity, assuming plant configuration, supply, and demand remain unchanged. CTransition cost to initial configuration. 13 ming model, dynamic solutions to the packinghouse location problem are obtained and two such solutions are shown in Table 6. The solid under- lined elements represent the least cost path over time. The dashed underlined elements represent the fourth least cost path over time. The best solution in 1979-80 calls for the immediate closing of 24 existing plants (11 remain open) and building six large plants for a total of 17 plants (Table 7). By the 1983-84 season, nine existing houses are still operating. One of the new packinghouses is located at Jupiter in the southern part of the region (Figure 1) where no existing packinghouses are located. By employing the dynamic solution for pack- inghouses instead of allowing the initial plant location and relative sizes to exist over time, the packinghouses in the Indian River market Table 7.--Packinghouse size configuration for the best dynamic solution Capacity Initial Packinghouse number for seasons Location (1-3/5 bu. box) configuration 1979-80 1980-81 1981-82 1982-83 1983-84 1,000s North Titusville 100-500 2* - 501-850 1 1 1 1 1 Cocoa 100-500 1* 1* 1* 1* - 501-850 - Melbournea 501-850 - South Vero Beach 100-500 11* 1* - 501-850 7* 7*,1b 7*,2 7*,3 7*,3 7*,4 Ft. Pierce 100-500 12* - 501-850 2* 2*,3 2*,3 2*,3 2*,4 2*,4 Stuarta 501-850 - Jupitera 501-850 1 1 1 1 1 aNew location. b7*, 1 means seven existing plants operating and one new plant operating in that year. 14 district could save $56,706,000 (1979 dollars) or 2.2 percent of the best dynamic solution. During 1983-84 alone, total assembly, packing, and distribution costs could be reduced by $1,470,000 or $.086 per one and three-fifths bushel box (1979 dollars). Finally, most of the existing packinghouses close in the first season, 1979-80. This is not unusual and is entirely feasible (See other assumptions). SUMMARY AND CONCLUSIONS The southern movement of citrus production does suggest the need for construction of a new packinghouse in Jupiter, Florida, which is located in the southern part of the Indian River marketing district. Existing capacity could be reconfigured into larger packinghouses. 
Instead of building new plants in the same cities where old (existing) plants are located, the old packinghouses could be enlarged to take advantage of economies of size. In general, however, the Indian River packinghouse capacity is located where the production is located. Total collection, packing, and distribution costs could be reduced by only 2.2 percent if the industry closed all small packinghouses and maintained and built new packinghouses. Only the cost side of the packinghouse industry, however, is explored in this study. Small packinghouses that pack for a select market may be quite profitable. Also, a small pack- inghouse may have management that is just as cost efficient as a large packinghouse. Thus the southerly shift in citrus production will have a small effect on existing packinghouse size and location over the next decade; however, a new packinghouse is needed in the southern portion of the district. 15 REFERENCES Fairchild, Gary F. 1977. Estimated Florida Orange, Temple, and Grape- fruit Production, 1976-77 through 1981-82. Econ. Res. Dept., Fla. Dept. of Citrus, CIR No. 77-1. Florida Crop and Livestock Reporting Service. 1981. Florida Agricul- tural Statistics: Citrus Summary. 1980. Florida Agricultural Statistics: Commercial Citrus Inventory. Florida Department of Agriculture and Consumer Services, Division of Fruit and Vegetable Inspection. Various dates. Season Annual Report. Florida Department of Citrus, Market Research Department. 1980. Annual Fresh Fruits Shipments Report. Hooks, R. Clegg, and Richard L. Kilmer. 1981a. Estimated Costs of Packing and Selling Fresh Florida Citrus, 1979-80 Season. IFAS Econ. Info. Rpt. No. 145. 1981b. Estimated Costs of Picking and Hauling Fresh Florida Citrus, 1979-80 Season. IFAS Econ. Info. Rpt. No. 151. Kilmer, Richard L., and Daniel S. Tilley. 1979. "A Variance Component Approach to Industry Cost Analysis," Southern Journal of Agricul- tural Economics 11:35-40. Kilmer, Richard L., Thomas H. Spreen, and Daniel S. Tilley. 1982. "A Dynamic Plant Location Model of the East Florida Fresh Citrus Pack- ing Industry." Unpublished report, Food and Resource Economics Dept., Univ. of Fla. Kmetz, George P. "The 1980 Cost to Build a New Packinghouse." 1982. Unpublished report, Indian River County Appraisers Office, Vero Beach, FL. Nachado, Virgilio A.P. 1978. "A Dynamic Mixed Integer Location Model Applied to Florida Citrus Packinghouses." Unpublished Ph.D. disser- tation, Univ. of Fla. sweeney, Dennis J., and Ronald T. Tatham. 1976. "An Improved Long-Run Model for Multiple Warehouse Location," Management Science 22:748-758. U.S. Federal-State Market News Service. Various dates. Fruit and Vegetable Truck Rate Report. Contact Us | Permissions | Preferences | Technical Aspects | Statistics | Internal | Privacy Policy © 2004 - 2010 University of Florida George A. Smathers Libraries.All rights reserved. Acceptable Use, Copyright, and Disclaimer Statement Last updated October 10, 2010 - - mvs | http://ufdc.ufl.edu/UF00026491/00001 | CC-MAIN-2018-30 | refinedweb | 4,154 | 54.93 |
JS - JavaScript Modules on CPAN
> # Typical unix command line stuff: > sudo cpan JS::jQuery ... cpan installs JS-jQuery ... > js-cpan jquery-1.2.3 jquery-1.2.3.min jquery-1.2.3.pack jQuery > js-cpan jQuery* /Library/Perl/5.8.8/JS/jQuery.js /Library/Perl/5.8.8/JS/jquery-1.2.3.js /Library/Perl/5.8.8/JS/jquery-1.2.3.min.js /Library/Perl/5.8.8/JS/jquery-1.2.3.min.js.gz /Library/Perl/5.8.8/JS/jquery-1.2.3.pack.js > js-cpan jQuery.js /Library/Perl/5.8.8/JS/jQuery.js > cd my/webapp/that/requires/jquery/javascript/ > ln -s `js-cpan jQuery.js` jQuery.js
Some JavaScript modules can be installed from CPAN. This module comes with a utility called
js-cpan that helps you find JavaScript modules that have been installed on your system so that you can use them in various projects.
The JSAN project () has successfully provided much of the groundwork to make JavaScript module distributions look and act like Perl module distributions.
For example, the basic file layout is similar, the Test::Harness and Test::Simple framework has been ported to JSAN, and most modules use Makefiles to set things up.
The Open JSAN project offers the tip off the iceberg in terms of being a CPAN for JavaScript. However it has a long way to go and not a lot of community to get it there. CPAN is a good place to put JavaScript modules.
Many projects require JavaScript components these days, and it would be nice to simply list them in the META.yml of your Perl project distributions.
There is a dead simple way to package non-Perl components into Perl/CPAN distributions. The components get installed in your Perl system but do not affect Perl in any other way.
JS.pm is a module to explain and help maintain the JavaScript modules installed from CPAN.
Some module distributions will have both Perl and JavaScript components. Others will have only JavaScript components. All JavaScript modules and JavaScript-only distributions should have a top-level-namespace of 'JS'.
It turns out that Perl's ExtUtils::MakeMaker will install *any* files that you put in the
lib/ directory, into your
perl's
sitelib. So setting up a JavaScript distribution is very similar to setting on a Perl one.
Say you have a JavaScript module called
Foo.Bar. First create a distribution directory called:
JS-Foo-Bar. Put your JavaScript code in
lib/JS/Foo/Bar.js. Put your documentation in
lib/JS/Foo/Bar.pod. Create a bare bones
lib/JS/Foo/Bar.pm Perl module so that CPAN related tools can find your stuff.
Your Makefile.PL should look something like this:
use inc::Module::Install; name 'JS-Foo-Bar'; abstract 'Sample JavaScript Module Distribution'; version '0.01'; license 'lgpl'; all_from 'lib/JS/Foo/Bar.pod'; WriteAll;
Create a
Changes and
README file and dummy
test.t. CPAN module distributions should have these files.
Put your JavaScript tests in a directory called
tests. I'll write up more explicit instructions in a future release, but for now look at
JS-YAML on CPAN or any openjsan.org module as an example.
Now just run these commands:
perl Makefile.PL make make manifest make dist cpan-upload -user foo -passwd bar -mailto [email protected] JS-Foo-Bar-0.01.tar.gz
That's it. You've joined the revolution. :)
NOTE: There is a working example JavaScript module shipped with
JS.pm in the
examples/JS-Foo-Bar directory.
Ingy döt Net <[email protected]>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See | http://search.cpan.org/~ingy/JS-0.18/lib/JS.pm | CC-MAIN-2013-48 | refinedweb | 628 | 68.57 |
ncl_mapset man page
MAPSET — Specifies the rectangular portion of the u/v plane to be drawn.
Synopsis
CALL MAPSET (JLTS, PLM1, PLM2, PLM3, PLM4)
C-Binding Synopsis
#include <ncarg/ncargC.h>
void c_mapset (char *jlts, float *plm1, float *plm2,
float *plm3, float *plm4)
Description
- JLTS
(an input expression, of type CHARACTER) is a character string specifying how the limits of the map are to be chosen. There are six possibilities, as follows:
- JLTS='MA' (MAXIMUM). The maximum useful area produced by the projection is plotted. PLM1, PLM2, PLM3, and PLM4 are not used.
- JLTS='CO' (CORNERS). The points (PLM1,PLM2) and (PLM3,PLM4) are to be at opposite corners of the map. PLM1 and PLM3 are latitudes, in degrees. PLM2 and PLM4 are longitudes, in degrees. If a cylindrical projection is being used, the first point should be on the left edge of the map and the second point on the right edge; otherwise, the order makes no difference.
- JLTS='PO' (POINTS). PLM1, PLM2, PLM3, and PLM4 are two-element arrays giving the latitudes and longitudes, in degrees, of four points which are to be on the edges of the rectangular map. If a cylindrical projection is being used, the first point should be on the left edge and the second point on the right edge; otherwise, the order makes no difference.
- JLTS='AN' (ANGLES). PLM1, PLM2, PLM3, and PLM4 are positive angles, in degrees, representing angular distances from a point on the map to the left, right, bottom, and top edges of the map, respectively. For most projections, these angles are measured with the center of the earth at the vertex and represent angular distances from the point which projects to the origin of the u/v plane; on a satellite-view projection, they are measured with the satellite at the vertex and represent angular deviations from the line of sight. Angular limits are particularly useful for polar projections and for the satellite-view projection; they are not appropriate for the Lambert conformal conic and an error will result if one attempts to use JLTS='AN' with JPRJ='LC'.
- JLTS='LI' (LIMITS). PLM1, PLM2, PLM3, and PLM4 specify the minimum value of u, the maximum value of u, the minimum value of v, and the maximum value of v, respectively. Knowledge of the projection equations is necessary in order to use this option correctly.
- JLTS='GR' (GRID). PLM1, PLM2, PLM3, and PLM4 specify the minimum value of latitude, the minimum value of longitude, the maximum value of latitude, and the maximum value of longitude, in degrees, on a lat/lon grid. The limits will be determined in such a way as to ensure that the entire grid will be visible on the map.
- PLM1, PLM2, PLM3, and PLM4
(input arrays, dimensioned 2, of type REAL) are as described above, depending on the value of JLTS. Note that each is a two-element array. Strictly speaking, the FORTRAN standard requires that they be declared as such, even when only the first element of each array is used.
C-Binding Description
The C-binding argument descriptions are the same as the FORTRAN argument descriptions.
Usage
This routine allows you to set the current values of the EZMAP parameters 'AR', 'P1', 'P2', ... 'P7', and 'P8'. For a complete list of parameters available in this utility, see the ezmap_params man page.
Examplesfil, cmpgrd, cmpgrp, cmpita, cmpitm, cmplab, cmplbl, cmplot, cmpmsk, cmpou, cmptra, cpex01, cpex03, cpex08, cpex09, mpex01, mpex02, mpex04, mpex07, mpex09, mpex10, eezmpa, tezmap, tezmpa, fcover, ffex00, ffex02, ffex03, ffex05, fgkgpl, fgkgtx, fngngdts.
Access
To use MAPSET or c_mapset, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order.
Messages
See the ezmap man page for a description of all EZMAP error messages and/or informational messages.d, mapiqm, mapit, mapita, mapitd, mapitd, mapitm, maplbl, maplmb, maplot, mappos, maproj, maprs, maprst, mapsav,nam, mplndm, mplndr, mplnri, mpname, mprset, mpsetc, mpseti, mpsetl, mpsetr, supmap, supcon, ncarg_cbind
Hardcopy: NCAR Graphics Contouring and Mapping Tutorial
University Corporation for Atmospheric Research
The use of this Software is governed by a License Agreement. | https://www.mankier.com/3/ncl_mapset | CC-MAIN-2017-47 | refinedweb | 678 | 57.81 |
I am sceptical of reports of C++ generating more machine code than C. My simple test shows only a little increase in code size.
I found a simple C Xmega program that blinks a LED using delay_ms. I made a copy and replaced the three C style port I/O functions with C++ functions in the copy. The resulting C++ program was 8 bytes bigger than the C program.
8 bytes out of 600 is trivial to me. However It does seem that the system is reserving half of the 800 for interrupt vectors or something for my Xmega 128a4u. So that would make it 8 out of 300 bytes. Still nothing compared to the skin rashes and headaches I would get just thinking of using C. :)
If we could get a bit more complex program to compare, that would be good. If someone could find a C blinker program for the Xmega that uses RTC instead of delay_ms, that would be great. I could easily add RTC to the c++ program.
One reason for the bigger C++ program could be that the C++ compiler doesn't produce as good AVR code. I took a quick look at the .lss files and I noticed one place where the C compiler used rcall and rjmp and the C++ compiler used call and jmp. I don't know much about the AVR instructions, but it seems the call could be replaced with rcall. That would save 2 bytes. Maybe the jmp could be replaces with rjmp also, but I don't know how far the rjmp can jump.
C compiler
214: 02 d0 rcall .+4 ; 0x21a <main>
216: 1b c0 rjmp .+54 ; 0x24e <_exit>
CPP compiler
214: 0e 94 10 01 call 0x220 ; 0x220 <main>
218: 0c 94 2b 01 jmp 0x256 ; 0x256 <_exit>
I'm using the C++ compiler that came with the last Atmel Studio 7 Version 7.0.2397
gcc version 4.6.2
I am attaching main.c and main.cpp. Also the 2 studio projects.
.
Top
- Log in or register to post comments
Is that really what you want? I know little C++ but does not "new" put the memory on the heap, and then when you spin it in a loop, that would be extra bad.
my projects:
Top
- Log in or register to post comments
I doubt you will get any meaningful results. The very nature of a C++ program is that it (should) be written differently to a C program. A conversion from C to C++ is a pointless exercise; the whole program should be re-written to use the features of C++, and by that point no comparison can be made.
You are making it more complicated than it needs to be. If I was required to write this in my application code I think I would skip C++ -
No offense meant, but that is what we are trying to escape. How are you going to get a friendly name attached to that? There lies a problem, and the solution to get a name attached to pin c7 will most likely end up similar to what one would do in C.
Wouldn't you rather have something like this-
Pin<A0> sw { LOWISON, INPUT, PULLUPON }; //options in any order
Pin<B2> led{ OUTPUT, LOWISON };
while( sw.isOff() ); //press sw to start
while(true){
led.toggle();
waitms( 500 );
}
You get to specify the name of the pin and set its properties all in one line, and the resulting code will be as minimal as anything. Now you have a name you can use that has a set of functions 'attached' to the name. Yes, a little template learning is required but is worth the time to at least figure it out to some degree. Template use for peripherals probably becomes less important when you have an mcu that is more capable, but even with a more powerful mcu there is little downside to using templates for peripherals.
Here is a simple C++ style (or outline) I put up on github a few days ago, just because I wanted it written down somewhere-...
It starts with a mega328p (I don't even use a mega328, but makes for a simple example), and then has a few more advanced for a mega4809. I could add more, but the rest is mostly just the same. I use this style on any mcu and any peripheral and they all end up looking basically the same and work quite well.
I have refined this 'style' for a while now, and seems to work well (avr0/1, nRF52, a little stm32, etc.). For the avr0/1 I have most peripherals done (each chapter in the datasheet), (redone again, now even better), and for things like the nRF52 I just 'take over' a peripheral as wanted/needed (the rest ends up with a C++ layer on top of the manufacturer provided things). All files are header only, a single file per peripheral, and to use it- just include it.
What you get is a nice way to create a peripheral struct that is specific in its properties at compile time (the things that separate it from the other peripheral instances), you get a set of enums that are specific to the peripheral (and enum types are great for function arguments as they naturally 'assert'), the peripheral register layout is contained within if wanted, and the functions end up short and simple because the register access is simple. A peripheral contained in a single header, which never needs adding to the project source list.
As already mentioned, it doesn't take long before any comparison to a C equivalent becomes hard to do. For the most part, you do equivalent things you are going to get equivalent results unless you code your C++ like you are programming a pc. C++ gives you an easy way to do things that are awkward to do in C, so you soon get to a point where you can imagine what it takes to do the equivalent in C, but you certainly are not going to take the time to prove the point to yourself. You can already see there is no way you are going to give up C++ even if the resulting binary size is a little bigger.
What does happen, is you get a system that is easy to use so you use it (size increases)- have a Buffer class and a Usart class that can optionally take in a Buffer? little reason not to use it as only requires a line of code to create the Buffer and you no longer block on any uart output. Nice buffer in place, so why not send debug info out the uart at high speed (no major effect on running mcu), turning on/off the debug output as needed (single line of code). Rtc and Time classes in place? Well, since we now have a system time in unix seconds and fractional time in 0-999ms, may as well output the system time in the debug output. Hey, we have a system time with resolution down to a ms, so now that can be used for both tasks and long term events in addition to providing clock time- one universal time for everything outside any precision/hardware required timing needs. Your binary sizes increase because you are simply doing more, you are doing more because it is easier to do so.
Top
- Log in or register to post comments
I would tend to concur.
Also, trivial programs don't (generally) give a true representation - using printf in simple "Hello World" being a classic example.
Top Tips:
Top
- Log in or register to post comments
The only people who claim C++ is "bloaty" generally seem to be those who don't actually use it! ;-)
Top
- Log in or register to post comments
or the ones who try to do "bloaty" things - which would probably take even more to do in 'C' ...
Top Tips:
Top
- Log in or register to post comments
OK, let me see if I can make a corollary:
"I'm reading many results that the ZeXeY automobile Model CoBiA has a lot of problems, so I'm not going to buy one to replace my trusty car."
"Hmmmph! How can you say that when you haven't spent (wasted?) the money on one?!?"
And that is fundamentally different than your point about putting in the effort and cost to learn?
[I'll do some digging, but isn't there a more-or-less seminal thread about gcc/C++/AVR that says something like "use C++ on the AVR without problems if you don't do these three things" ? ]
[edit] Earliest "summary" that I found:
[edit] Here is a pretty good discussion with many of the back-and-forths that OP might digest before reinventing it all....
You can put lipstick on a pig, but it is still a pig.
I've never met a pig I didn't like, as long as you have some salt and pepper.
Top
- Log in or register to post comments
The call vs. rcall thing is not C/C++-specific.
My understanding is that at least some of
those optimizations are done by the linker.
OP was testing the linker.
Also, new is C++'s new improved malloc.
Do not use new if you would ot use malloc.
All that said, C++ is bigger than C.
More of compiler-writers' time has to be
spent on handling the language itself.
That leaves less time for optimizations
that might be handled at the front end.
I've looked at the result of -O0 with avr-gcc.
'Tis bloated to illegibility.
I expect some improvements from -O0 to -O1 are handled at the front end:
special case code for obvious code.
My guess is that C++ has more special cases.
Also, avr-gcc suffers from being low on GNU's
totem pole and from having 16-bit ints.
GNU's instructions to contributors explicitly state that they
need not consider the possibility of ints smaller than 32 bits.
That is why gcc's generic software floating point does not work for avr-gcc.
Improvements for multi-core 64-bit
processors can make AVR code worse.
The guys that brought us the avr- version of gcc did lot of good work.
Moderation in all things. -- ancient proverb
Top
- Log in or register to post comments
/***
*new.cxx - defines C++ new routine
*
*
*Purpose:
* Defines C++ new routine.
*
*******************************************************************************/
#include <malloc.h>
#include <new.h>
#include <stddef.h>
void * operator new( size_t cb )
{
void *res = malloc( cb);
return res;
}
When creating a class, you can supply the new operator function which overrides the default one. It can do what it wants as long as it returns a pointer to the object. If you look at the Port_c_device class, you will find this:
void* operator new(size_t objsize) {
return (void*)_SFR_ADDR(PORTC); // assign actual memory location of the registers
}
I returns the address of Port C. All class member functions that are written in the class are subject to inlining and always are, in my experience. So this is just a way to tell the compiler what the address of Port_c_device is.
I admit that newing the same class multiple times is logically goofy. Logically I should do it once and save the pointer. When I found I got the same program size either way, I went for the easier way.
Top
- Log in or register to post comments
Hmm. Overloading. So is it a virtue with C++ to have many different ways to do a thing?
my projects:
Top
- Log in or register to post comments
Well it can sure obfuscate code! Most people have an idea of what new does until someone perverts it!
Top
- Log in or register to post comments
It is a virtue to be able to use the same way to do something for many types of things...
I don't think anyone doubts that you can write C++ code that isn't bloated. It's just that you can get ... surprised when something blows up a lot more than you expected.
For example, I don't see a way to make virtual functions of a class without defeating link-time garbage collection of unused functions. As a result, Arduino sketches that do anything at all with Serial also end up including several rarely-used bits of code for peek(), availableForWrite(), and etc. Most of the boards in question have small implementations and enough memory that this isn't much of a concern. But it could be...
Top
- Log in or register to post comments
Well, if you look at the Arduino AVR source code, new is basically just a wrapper for malloc. So if someone abuses new everywhere, don't go blaming the bloat on C++, it's malloc.
Top
- Log in or register to post comments
When putting stuff on the heap, you should use something a lot simpler. This is what I use.
Attachment(s):
Top
- Log in or register to post comments
Well, I think it's well known that Arduino libraries are not exactly the pinnacle of optimization, quite the opposite in fact.
Top
- Log in or register to post comments
Top
- Log in or register to post comments
Top
- Log in or register to post comments
If you're talking about the 'classic' 2K parts then yes, I'd agree. But having moved my current projects to AVR-DA, I have more than enough memory (famous last words) and am no longer counting every byte. I currently have 12947 bytes of 16K free :)
That's not to say that one can blithely disregard the downsides of dynamic allocation, but it's less of an issue for me now and I can focus on more productive tasks.
Top
- Log in or register to post comments
malloc is not part of C++. C++ can't assume that malloc is available. There are various places besides the heap you might want to use. Maybe shared memory or machine registers etc..
So you must supply your own new operator function. Of course when C++ is installed on a PC, the installation includes malloc and includes the default new operator function, so you can remain fat, dumb and happy.
I found, and attached, the default new operator function that was installed recently on my Win 10. If you scan this beauty, you will see it.
Attachment(s):
Top
- Log in or register to post comments
I should have attached the .txt version for people that can't use .rtf.
Attachment(s):
Top
- Log in or register to post comments
malloc is part of C++.
Moderation in all things. -- ancient proverb
Top
- Log in or register to post comments
I'm not quite sure why the Rube Goldberg contraption is used to get what you want, but its your mcu.
Here is the simplest example from my 'tutorial' on github I linked to earlier-
Put up something on godbolt.org that does the same- set any pin on a mega328p to an output, and turn it on/off/toggle. I would like to see what you come up with, and if it turns out to be something better, I'll change my mind (always looking for something better). The only 'rule' will be that you will have to use a name like 'led' in the main function (if I have to remember that the led is on pin c7, no good).
Overloading is a major feature, and gets used often. Without it, no templates for example.
A simple example, that shows a different way to do print things other than using virtual functions like arduino-
It appears the Print function is overloaded twice (1 being an empty function so you can easily enable/disable printing, such as debug output), but since its a function template it is overloaded for as many devices that end up using it. So to 'print' from a device of any kind (using the stdio things), it just needs a write function with a specific function type that matches what FILE.put uses. They all end up using fprintf (a single function from library), so basically each resulting version mostly deals with setting up the call to fprintf just like any other C app.
Another simple example using the example code linked above-
SA high () { reg.OUT = 1; } //SA=static auto
SA low () { reg.OUT = 0; }
SA on () { if(Inv_) low(); else high(); } //Inv_ is template argument for HIGHISON/LOWISON
SA off () { if(Inv_) high(); else low(); }
for the on() I normally also have another -
SA on (bool tf) { if(tf) on(); else off(); }
which then replaces code like this-
if( isSomething() ) led.on(); else led.off();
into something nicer to use when needed-
led.on( isSomething() );
which results in the same code produced, but you no longer have to do an if/else every time you need a pin set to the result of some decision.
Default arguments can also be done, which could replace the overloading in this case-
SA on (bool tf = true) { if(Inv_==tf) low(); else high(); }
Another example-
There are several on's in there, and the last one also has a default argument. So you end up with a version which simply does as its told, another which also can set the inputs also, and a final one which can also setup irq's. The one's with more parameters can use the 'smaller' versions so no need to rewrite everything. The added overload options means you can do something in a single command without having to go through all the separate steps yourself. If its not used, no big deal, compiler knows its not used.
Back to the original question-
I have a simple bootloader for any avr0/1, written in C and compiles to 380 bytes (.vectors section commented out of linker script to remove vectors). I translated it to C++ using the existing C++ peripheral drivers I have. I left out the autobaud and some syncing bytes in the C++ version, and the compiled size is 576 bytes and will be in the 600's if I would make it complete (no global constructors needed/used, so no startup code for that). So it is 'bloaty', but the reason it is larger is because the C version was written from scratch with the knowledge of starting in a reset state which is a big advantage. The C++ drivers I have make no assumptions about the current state of registers so it ends up doing things in a more reliable way that is not necessary in this case. One example would be setting up portmux to use an alternate pin- the C version knows the state of the portmux register, the C++ version does not, so ends up clearing/setting the needed bits. All those type of things add up, and I would imagine if I had used existing C drivers the sizes would be more similar. I also imagine if I had written the C++ version from scratch, it would also be similar, but its a little bit pointless to ignore existing drivers where all the hard work has already been done, just to save a few hundred bytes.
Top
- Log in or register to post comments
In my code I'd do something like this.
#define LED_PORT Port_c_device
#define LED_PIN Port_device::Pin7
The device classes contain functions that do whatever you want to do to the device. All you need to do is find the function that does it. This means the user doesn't need to know anything about the registers and the pins in those registers. He doesn't need to know if there are bit flips that have to be done in a certain order, or if CCP has to be disabled, etc..
Even C programmers might find these classes useful. At the least, they can look at these classes to see what needs to be done. If the project has a C++ compiler, he can even call these functions by using an intermediate function that knows how to call a class member function
These device classes only need to be written once. Atmel should have written them years ago.
Top
- Log in or register to post comments
Indeed, it's not very useful to just use C++ to address individual pins, since C can also achieve this. For example, this blinky for an Arduino nano board (I know, I could use toggle but this is just to keep with the original example)
To really highlight what C++ can bring to the table, you have to go for something like curtvm showed.
Top
- Log in or register to post comments
You would rather create a set of defines just like C?
Why not just stick to C then?
LED_PORT->DIRSET = LED_PINbm;
LED_PORT->OUTSET = LED_PINbm;
Rather, how about just doing something C++ like-
Pin<B2> led{ OUTPUT };
led.on();
No defines, one line, we get a name to use, and have a set of functions attached to the name. Need to change the pin? change the B2 to something else, done.
I probably sound harsh, but am just trying to point out that there is a better way (I think so, anyway). I have already posted enough links to examples so will try to refrain from showing any more. My last post has 'another example' which is an Ac peripheral in a mega328p. All peripherals end up looking like that- a single header file for each peripheral with no need to touch any peripheral registers when writing app code, no need for an mcu header since its easy to create a register layout struct for register access where its needed (in the peripheral struct), and all the needed enums also live in the peripheral struct. One file and we have everything we need to run a peripheral, and only need to include it to use it. For the most part the functions inside stay pretty simple and just take the drudgery out of doing the common-
//a Buffer class of 64 u8's (with all the functions you will need- append,remove,+ a dozen more)
Buffer<u8, 64> txbuf;
Usart0alt uart; //usart0, alternate pins
//turn on tx (+set portmux, + set pin to output), optionally provide a buffer(will then use tx irq),
//230400 baud (run time calculated from current cpu speed, can also do compile time calculation if wanted)
uart.txOn( txbuf, 230400 );
//now can Print using printf style
Print( uart, "Hello World\n" );
So something like the above to setup a way to print from a uart becomes easy, and with 3 lines of code we are all set. You also have easy access to the source of anything- want to see what txOn() does? you will be taken directly to the source in Usart.hpp, where the function will be simple and everything you want to see will be in that file with no need to be bouncing around headers until you finally reach what you were looking for.
It may be a good thing these manufacturers do not touch C++. Somehow I think they would turn it into something closer to awful than good.
As long as I'm filling up the avrfreaks server storage with my comments, I will add that I have projects in C++ on github (which I use mainly for backup/storage) that I would probably cringe a little if I go and take a closer look at them again. I think what I do now in C++ is much better, but I always hold out hope for better than what I have now and may end up looking back at the 'now' and cringe about it in the future.
Top
- Log in or register to post comments
Sigh. An incredibly sad, and probably very accurate, point. Look at any vendor's C library :-(
Top
- Log in or register to post comments
I've written a lot of classes but that doesn't do anything to C++. Just because they give us a class, doesn't mean we have to use it.
They give us the io....h file that contains the registers, which is a freaking mess because C doesn't understand classes but that doesn't change C, it just gives me a rash when I look at it. Actually they give the same set of registers, with different names, for each port.
They should give us the I/O device class that contains the registers and the functions that operate on them. One port class works for all ports, only the addresses are different.
They do supply a struct containing the registers, but most people don't use it because it's too clumsy to use. The functions that access the registers have to also specify the instance of the struct that contains the registers. I think of it as the functions are outside the struct looking in, whereas the C++ functions are inside with the data.
In C++, the user can tell the compiler that the functions that access the data in the struct are members of the struct. Now those functions only need to specify the data (registers in this case). One way to do that is write the functions inside the struct.
The actual functions that result are probably the same as the C functions, but it's a hell of a lot easier to write the code and to understand the code. Of course we usually call these things classes instead of structs.
In fact I've just explained the main difference between C and C++. If you add the new operator and inheritance, that's about all I know and need to know about C++. When Stroustrup put classes from the Simula67 language into C, he called the result "C with classes". Someone else called it C++.
So now we have the data, in this case registers, and the functions that operate on them in one neat package. We can put other things in the class too, like the pin names like Pin7 etc. C programmers could do something like that, but where would these names be put? In a simple program they could put them anywhere but in a big program, well I call C programs free range chicken code.
I attached my project that contains the Port_device class in my first post, but I will attach it here too. Atmel could give us classes for all the devices, but maybe they never heard of classes. I fellow in Oslo invented them around 1965 but I guess news travels slowly in a country where travel is sleighs pulled by reindeer.
Attachment(s):
Top
- Log in or register to post comments
I infer from the data members of Port_device,
that Xmega pin port registers are grouped by port.
IIRC that is not the case with all megas.
Also IIRC microchip-supplied headers supply a struct containing all the IO registers.
That should make it fairly easy to do pretty much anything one wants,
even templates.
offsetof always produces an integer valid for anything for which an integer constant can be used,
including a template parameter and an enum value.
Even though it can be evaluated at compile time,
(unsigned)&PORTB is not even regarded as a constant expression of any kind.
BTW is Register unsigned char?
Moderation in all things. -- ancient proverb
Top
- Log in or register to post comments
>Whar? Since when is giving us a class touching C++?
>I've written a lot of classes but that doesn't do anything to C++.
As soon as you are required to use a C++ compiler, you are touching C++. I take it your code compiles in C? of course not.
> If you add the new operator and inheritance, that's about all I know and need to know about C++
That is potentially what a manufacturer would conclude, and we would be required to use 'new' for everything if we want to use their headers. Most likely they would not do that, but its not hard to imagine they end up using other ideas that are also not so great.
>That should make it fairly easy to do pretty much anything one wants,
You can use the existing headers, but you end up in an awkward spot because they may not give you any register bit position information in the register struct they provide (register wide names only). So now you have the rest of the needed information scattered in various enums and simple defines, as bitmasks and bit positions (with names created in a way as to avoid all the other define names- so names are larger than really needed). In the end it becomes easier to just create your own register layout and enums.
There are times it is not so bad to use existing headers, other times it just doesn't work very well. With only a little work, you can use something that ends up being easier to read and understand-
(pinctrl is a reference to the correct PINnCTRL register)-
SA inMode (PINS::ISCMODE e) { pinctrl.ISC = e; }
SA pullupOn () { pinctrl.PULLUP = 1; }
SA pullupOff() { pinctrl.PULLUP = 0; }
sw.inMode( BOTHEDGES );
using existing headers- );
They both will end up doing the same thing, but the former is easier to read as you do not need to spell out to the compiler how to clear/set bits inside a register.
An example for an nRF52 Gpio-...
I'm not sure what it would look like if using the supplied headers, but you will end up digging around various headers just to find what you are looking for and still end up with names that are 30 chars wide with a handful of underscores. Easier to just spend the 60 minutes and create your own register struct and required enums and be done with it, never again needing to use/look at the manufacturers header(s) for that peripheral.
Top
- Log in or register to post comments
Again IIRC they are derived from manufacturer data files.
Moderation in all things. -- ancient proverb
Top
- Log in or register to post comments
Top
- Log in or register to post comments
>IIRC none of avr-gcc's #defines are more than eight characters.
>Again IIRC they are derived from manufacturer data files.
avr-gcc is not providing headers for any avr0/1, which this already showed- );
That contains 3 items from the manufacturer supplied header, which are more than 8 characters each. Since these enums and defines are 'global', they have to use a naming scheme that prevent collisions with everything else. They all end up like that because that is the nature of the programming language used, and we end up with a truckload of things like NRF_GPIO_PIN_SENSE_HIGH where the caps lock and underscore keys get worn out.
Top
- Log in or register to post comments
Those are for xmegas, correct?
Moderation in all things. -- ancient proverb
Top
- Log in or register to post comments
There's a bit of a misty haze these days about exactly where "Xmega" ends. I could for example look at "Tiny"3216 for example and equally find:
Sure it starts out with good intentions but the 8 character limit is soon broken because, (don't tell anyone!) but the things Microchip call "Tiny" and "Mega" these days are all really Xmega anyway ;-)
Top
- Log in or register to post comments
Well, for those of us who are balding, reading Arduino source code isn't quite so painful. :-)
I have no special talents. I am only passionately curious. - Albert Einstein
Top
- Log in or register to post comments
Yes I know the mega devices with multiple instances are not alike. That's the major reason I switched from megas to xmegas. Atmel's hardware guys learned their lesson. I'd think making them all the same would simplify designing the chip layout. Atmel's software department hasn't figured that out yet.
Top
- Log in or register to post comments
Attachment(s):
Top
- Log in or register to post comments
Top
- Log in or register to post comments | https://www.avrfreaks.net/forum/c-bloat | CC-MAIN-2021-49 | refinedweb | 5,378 | 69.01 |
Each Answer to this Q is separated by one/two green lines.
I’d like to browse through the current folder and all its subfolders and get all the files with .htm|.html extensions. I have found out that it is possible to find out whether an object is a dir or file like this:
import os dirList = os.listdir("./") # current directory for dir in dirList: if os.path.isdir(dir) == True: # I don't know how to get into this dir and do the same thing here else: # I got file and i can regexp if it is .htm|html
and in the end, I would like to have all the files and their paths in an array. Is something like that possible?
You can use
os.walk() to recursively iterate through a directory and all its subdirectories:
for root, dirs, files in os.walk(path): for name in files: if name.endswith((".html", ".htm")): # whatever
To build a list of these names, you can use a list comprehension:
htmlfiles = [os.path.join(root, name) for root, dirs, files in os.walk(path) for name in files if name.endswith((".html", ".htm"))]
I had a similar thing to work on, and this is how I did it.
import os rootdir = os.getcwd() for subdir, dirs, files in os.walk(rootdir): for file in files: #print os.path.join(subdir, file) filepath = subdir + os.sep + file if filepath.endswith(".html"): print (filepath)
Hope this helps.
In python 3 you can use os.scandir():
for i in os.scandir(path): if i.is_file(): print('File: ' + i.path) elif i.is_dir(): print('Folder: ' + i.path)
Use
newDirName = os.path.abspath(dir) to create a full directory path name for the subdirectory and then list its contents as you have done with the parent (i.e.
newDirList = os.listDir(newDirName))
You can create a separate method of your code snippet and call it recursively through the subdirectory structure. The first parameter is the directory pathname. This will change for each subdirectory.
This answer is based on the 3.1.1 version documentation of the Python Library. There is a good model example of this in action on page 228 of the Python 3.1.1 Library Reference (Chapter 10 – File and Directory Access).
Good Luck!
Slightly altered version of Sven Marnach’s solution..
import os
folder_location = 'C:\SomeFolderName' file_list = create_file_list(folder_location)
def create_file_list(path): return_list = []for filenames in os.walk(path): for file_list in filenames: for file_name in file_list: if file_name.endswith((".txt")): return_list.append(file_name) return return_list
from tkinter import * import os root = Tk() file = filedialog.askdirectory() changed_dir = os.listdir(file) print(changed_dir) root.mainloop()
| https://techstalking.com/programming/python/browse-files-and-subfolders-in-python/ | CC-MAIN-2022-40 | refinedweb | 443 | 70.6 |
Bending XML to Your Will
If you’ve ever worked with the Twitter or Facebook APIs, looked at RSS feeds from a website, or made use of some type of RPC calls, you’ve undoubtedly experienced working with XML. Extensible Markup Language (XML) is a big building block of today’s web with hundreds of XML-based languages having been developed, including XHTML, ATOM, and SOAP just to name a few. I myself have to work with quite a few third-party systems to send and receive data, and the preferred method for all of them is XML.
What does XML do?
The short answer to the question “What does XML do?” is nothing. It does nothing at all. XML is simply a markup language, similar to HTML. Whereas HTML was designed to display data, XML was designed to provide a structured way to transport and store data.
Let’s take a look at a simple XML example that contains information on particular sports teams:
<?xml version="1.0" encoding="UTF-8" ?>
<roster>
 <team>
  <name>Bengals</name>
  <division>AFC North</division>
  <colors>Black and Orange</colors>
  <stadium location="Cincinnati">Paul Brown Stadium</stadium>
  <coach>Marvin Lewis</coach>
 </team>
 <team>
  <name>Titans</name>
  <division>AFC South</division>
  <colors>Blue and White</colors>
  <stadium location="Tennessee">LP Field</stadium>
  <coach>Mike Munchak</coach>
 </team>
</roster>
As you can see from the example, XML is human-readable and self-descriptive. Unlike HTML, XML has no predefined tags, allowing you to invent your own. Anyone, whether they are a programmer or not, can look at this example and understand the data. The software that you create has the job of writing or parsing the information from the XML document.
Sharing information between various platforms, databases, and programming languages can be a frustrating endeavor, but since XML is just a plain text file, it allows your data to be independent from the software in use. Because XML is such a wide-spread standard, it also gives you the freedom to develop your application without worrying about incompatibility on the other end.
If you’re still a bit shaky on XML and what it’s place in web development is, take a look at this great introduction to XML, A Really, Really, Really Good Introduction to XML.
Types of XML Parsers
There are two basic types of XML parsers: tree-based parsers and event-based parsers (sometimes called stream parsers). Tree-based parsers read the entire XML document into memory, structure the data into a tree-like format, and allow you access to the tree elements. Event-based parsers, on the other hand, read in XML and raise an event every time they reach a new start or end tag. This allows you to apply a function pertinent to your application when an event occurs for a specific element. Since you are not storing the entire XML document in memory, event-based parsers are generally faster and less resource-intensive than tree-based ones. Tree-based parsers are generally easier to use and require less code.
PHP 5 has a plethora of tools to choose from that work with XML, including the XML Parser (a.k.a. SAX or Expat Parser), DOM, SimpleXML, XMLReader, XMLWriter, and the XSL extensions. For the sake of brevity I’ll look at just two of the most widely used parsers, the XML Parser and SimpleXML extensions, which coincidentally gives us one of each type of parser.
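If you’re not sure which of these extensions your PHP build includes, a quick check with extension_loaded() will tell you. This snippet is just a convenience check and isn’t part of the examples that follow:

<?php
// List which of PHP's XML-related extensions are loaded.
foreach (array("xml", "simplexml", "dom", "xmlreader", "xmlwriter", "xsl") as $ext) {
    echo $ext . ": " . (extension_loaded($ext) ? "loaded" : "missing") . "\n";
}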
Using the XML Parser Extension
The first example I’ll show you involves using the XML Parser extension, an event-based parser. To start, let’s use the same XML example from earlier and parse it with the extension. Imagine you have been given the task to parse the XML into a simple list to display on a web page. Create the file
nfl.xml with the example XML as its contents.
Create another file called xmlParserExample.php with the following code:
<?php
$xmlFile = "nfl.xml";

$parser = xml_parser_create();
xml_parser_set_option($parser, XML_OPTION_CASE_FOLDING, false);
xml_set_element_handler($parser, array("NFLParser", "openTag"),
    array("NFLParser", "closeTag"));
xml_set_character_data_handler($parser, array("NFLParser", "characterData"));

$fp = fopen($xmlFile, "r");
while ($data = fread($fp, 4096)) {
    xml_parse($parser, $data, feof($fp)) or
        die(sprintf("XML Error: %s at line %d",
            xml_error_string(xml_get_error_code($parser)),
            xml_get_current_line_number($parser)));
}
fclose($fp);
xml_parser_free($parser);

class NFLParser
{
    protected static $element;
    protected static $attrs;

    public static function openTag($parser, $elementName, $elementAttrs) {
        self::$element = $elementName;
        self::$attrs = $elementAttrs;
        switch ($elementName) {
            case "team":
                echo "<ul>";
                break;
            case "division":
                echo "<li>Division: ";
                break;
            case "name":
                echo "<li>Team Name: ";
                break;
            case "colors":
                echo "<li>Team Colors: ";
                break;
            case "stadium":
                echo "<li>Stadium: ";
                break;
            case "coach":
                echo "<li>Head Coach: ";
        }
    }

    public static function closeTag($parser, $elementName) {
        self::$element = null;
        self::$attrs = null;
        if ($elementName == "team") {
            echo "</ul>";
        }
        elseif ($elementName != "roster") {
            echo "</li>";
        }
    }

    public static function characterData($parser, $data) {
        echo $data;
        if (self::$element == "stadium") {
            echo " (" . self::$attrs["location"] . ")";
        }
    }
}
The xml_parser_create() function creates a new XML parser handle that is used throughout the code. The next function, xml_parser_set_option(), is used to set options for the parser. In this case, the XML_OPTION_CASE_FOLDING option is set to false (it is true by default). Case folding is the process of converting a sequence of characters to uppercase. By setting this option to false I can preserve the case of the tags exactly as they appear in the XML file.
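To see what case folding does in practice, here's a tiny sketch of my own (not from the article): with the option left at its default, the handlers receive uppercased element names.

<?php
// Minimal case-folding demo (illustrative only)
$p = xml_parser_create();
xml_set_element_handler(
    $p,
    function ($p, $name, $attrs) { echo $name, "\n"; }, // start tags
    function ($p, $name) {}                             // end tags
);
xml_parse($p, "<team><name>Titans</name></team>", true);
// Prints TEAM and NAME; with XML_OPTION_CASE_FOLDING set to false,
// it would print team and name instead.
xml_parser_free($p);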
The xml_set_element_handler() function sets the parser’s start and end element handlers. This function accepts three parameters: the first parameter is the parser reference, the second parameter is the callback that will handle opening tags (the static openTag() method of the NFLParser class in the example), and the third parameter is the callback that will handle closing tags (the closeTag() method).
PHP passes three parameters to openTag(): the parser, the name of the element for which the handler is called, and an associative array of any attributes of the element. Two parameters are provided to closeTag(): the parser and the name of the element.
The xml_set_character_data_handler() function specifies the function that will handle character data for an element. It accepts two parameters: the parser and the name of the callback function which, in this example, is the static characterData() method. The characterData() method is passed two parameters: the parser and the character data from the element.
The remaining bit of code in the example reads in the XML file and calls the xml_parse() function, which starts the parsing process. xml_parse() accepts three parameters: the parser, a chunk of data to parse, and a boolean indicating whether this is the last piece of data.
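If your files are small enough to hold in memory, the chunked read loop is optional. Here's a one-shot alternative sketch (my own, under that assumption):

<?php
$parser = xml_parser_create();
xml_parser_set_option($parser, XML_OPTION_CASE_FOLDING, false);
// ...register the same handlers as above...
$data = file_get_contents("nfl.xml");
xml_parse($parser, $data, true) or die(sprintf(
    "XML Error: %s at line %d",
    xml_error_string(xml_get_error_code($parser)),
    xml_get_current_line_number($parser)));
xml_parser_free($parser);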
The last function called is xml_parser_free(); just like in file handling, it is always a good idea to free up the reference handle when you’re finished.
I chose to encapsulate the methods in the class NFLParser so I could track the current element and attributes being parsed in $element and $attrs without polluting the global namespace, and make them available to the characterData() method.
Execute your script and you should have a nice HTML list of all the data from the XML.
<ul>
  <li>Team Name: Titans</li>
  <li>Team Colors: Blue and White</li>
  <li>Stadium: LP Field (Nashville)</li>
  <li>Head Coach: Mike Munchak</li>
</ul>
<ul>
  <li>Team Name: Bengals</li>
  <li>Team Colors: Black and Orange</li>
  <li>Stadium: Paul Brown Stadium (Cincinnati)</li>
  <li>Head Coach: Marvin Lewis</li>
</ul>
Well that wasn’t too bad interpreting XML with PHP using the event-driven parser, but what if there was an even easier way to slice up XML, a simpler way if you will?
Using SimpleXML
The SimpleXML extension was introduced in PHP 5 and takes a lot of the tedium out of XML manipulation. SimpleXML is a tree-based, object-oriented parser, so it’s a slower and more resource-intensive way to parse XML, but any speed lost using this extension will be long forgotten once you see how “simple” it truly is to use.
Create a file called simpleXMLExample.php and enter the code below:
<?php
$xmlFile = "nfl.xml";
$xml = simplexml_load_file($xmlFile);

foreach ($xml->team as $element) {
    $attr = $element->stadium->attributes();
    $location = $attr->location;
    echo "<ul>\n";
    echo "  <li>Division: " . $element->division . "</li>\n";
    echo "  <li>Team Name: " . $element->name . "</li>\n";
    echo "  <li>Team Colors: " . $element->colors . "</li>\n";
    echo "  <li>Stadium: " . $element->stadium . " (" . $location . ")</li>\n";
    echo "  <li>Head Coach: " . $element->coach . "</li>\n";
    echo "</ul>\n";
}
Executing this script will produce the same output but without the need to write much of the parsing code.
You might be wondering why you would use an extension like XML Parser if SimpleXML is so… well, simple? I liken this question to a construction worker who goes to his job with only a hammer in his belt. Sure, he’ll get by hammering nails for a while, but eventually he’ll be faced with a screw. Even though one tool might be easier to use, that doesn’t make it the ideal choice for every situation.
Summary
In this article you learned a little bit about XML and how it’s used around the web. More importantly, though, you learned about the two basic types of XML parsers, tree-based and event-based parsers. PHP offers several different XML parsing extensions, two of which are XML Parser and SimpleXML. Each offers trade-offs with performance, ease of use, and the amount of code the programmer needs to write. Hopefully seeing how both extensions are used will help you confidently choose the best approach the next time you need to consume XML.
Image via Ken Durden/Shutterstock | https://www.sitepoint.com/bending-xml-to-your-will/ | CC-MAIN-2017-47 | refinedweb | 1,609 | 51.38 |
redefinition of 'class Data'
previous definition of 'class Data'
---note--- Data is a class that I declare in a header file
You probably don't have include guards. You need to either say
Code:
#pragma once

or

#ifndef MY_HEADER_FILE
#define MY_HEADER_FILE

class Data
{
};

#endif // MY_HEADER_FILE
Thanks, it worked. But what exactly are include guards, and in what instances do you need to use them?
Usually they are used at the top and bottom of a file. Think, for example, of a common header file like <stdlib.h>. You might include it in a lot of files. If you use guards, its contents will only be processed once, which is your intention; the compiler just doesn't do this automatically for you.
Using guards to skip an arbitrary piece of code can be tricky and I won't recommend it. You might have two Data classes, but different ones. In that case you should use namespaces, for example, or rename one to solve your problem. If you need the exact same class in both places, then you should write it in a separate file and include it when needed.
Technically speaking, what happens is that the preprocessor will check whether MY_HEADER_FILE is defined. If it is, it will skip down to #endif, so the whole Data class is skipped. If it is not defined, it will #define it. Now, if another header pulls in the same Data class, it will be skipped, giving no errors. So when used on a whole file, you simply skip the whole file on re-inclusion.
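To see why the guard matters, here is a made-up minimal setup that reproduces the OP's error without it:

// data.h
#ifndef DATA_H
#define DATA_H
class Data { };
#endif // DATA_H

// user.h
#include "data.h"   // pulls in Data

// main.cpp
#include "data.h"
#include "user.h"   // without the guard, class Data would be defined twice here

int main() { Data d; return 0; }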
#pragma once does basically the same thing as include guards, except that it is non-standard, i.e. it is only supported by specific compilers.
Svelte is a UI framework. Unlike React and friends (or should I say enemies), Svelte does not use any virtual DOM. Rather, it compiles your code to tiny, framework-less vanilla JS. This makes apps really fast. And not to mention the incredible official guide, the svelte-tutorial.
Components in Svelte 🐻❄️
So let's start with what I think makes all these frameworks worth using: components. Breaking your UI into little components makes it really easy to manage and program. I am not a frontend guy honestly, but I like the fact that I can have my UI divided into multiple pieces. Again, this post is not on why frontend frameworks are good.
In Svelte, components are files with the .svelte extension. Not a big change, just another syntax (also, btw, why do all these frameworks create their own custom syntax?). But wait: you don't have to export the components here. Suppose you have this parent called App.svelte.
<script>
// here is js comment :)
import MyComponent from "path-to-component/MyComponent.svelte"
</script>
<MyComponent />
and here's MyComponent.svelte:
<!--- MyComponent.svelte --->
<p>
This is my component
</p>
You thought Svelte does not have props? Svelte uses export statements to declare props, or as I like to say, 'recognize props' (not a proper term, don't use it).
This is a child component let's call it Weatherdetails.svelte
<!--- Weatherdetails.svelte --->
<script>
export let answer;
</script>
<p>The weather is {answer}</p>
Let's call the parent component App.svelte.
<script>
import Weatherdetails from './Weatherdetails.svelte';
</script>
<Weatherdetails answer="humid :\"/>
I like how the Svelte devs explain that this is not very JavaScript-ish:
this may feel a little weird at first. That's not how export normally works in JavaScript modules! Just roll with it for now — it'll soon become second nature.
I am hoping to see it become second nature :)
Reactivity in Svelte 🐨
Again, as Svelte describes it, it does not use any complex state management. According to the Svelte website, "At the heart of Svelte is a powerful system of reactivity". This means you can call JavaScript inside your HTML (not literally, I just like to think of it this way). Here's reactivity explained with the good ol' counter app.
<script>
let count = 0
function increaseCount(){
count += 1
}
</script>
<h1> Current Count : {count} </h1>
<button on:click={increaseCount}>
click to increase count !
</button>
Wow that was quick.
Here you can see it's like React state, just with a lot less boilerplate. Svelte also introduces a special thing which is somewhat similar to the useEffect hook in React.
<script>
let count = 0
function increaseCount(){
count += 1
}
$: square = count * count
</script>
<h1> Current Count : {count} </h1>
<button on:click={increaseCount}>
click to increase count !
</button>
<p>The square of {count} is {square} </p>
Here the $ looks a little weird, but it basically tells the Svelte compiler: whenever any of the values referenced in this statement change, re-run it.
Conditional rendering and Await in markup 🐑
To render markup conditionally, Svelte uses a little custom syntax.
<script>
let user = { loggedIn: false };
function toggle() {
user.loggedIn = !user.loggedIn;
}
</script>
{#if user.loggedIn}
<button on:click={toggle}>
Log out
</button>
{:else}
<button on:click={toggle}>
Log in
</button>
{/if}
So here, according to the Svelte website again:
A # character always indicates a block opening tag.
A / character always indicates a block closing tag.
A : character, as in {:else}, indicates a block continuation tag.
Don't worry — you've already learned almost all the syntax Svelte adds to HTML.
Now this is the normal part; Jinja follows the same pattern. But wait, we have more: introducing asynchronous await in markup. Wanna see how this looks? Here:
<script>
async function getCatImage() {
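// note: the API URL was elided in the original post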
const res = await fetch("");
const jsonres = await res.json();
const imageUrl = await jsonres[0].url
if (res.ok) {
return imageUrl;
} else {
throw new Error("No image found or something went wrong");
}
}
let promise = getCatImage();
function handleClick() {
promise = getCatImage();
}
</script>
<button on:click={handleClick}>
A random cat 🐈
</button>
<!-- Awaiting the response -->
{#await promise}
<p>...waiting</p>
{:then src}
<img {src} alt="A random cat">
{:catch error}
<p style="color: red">{error.message}</p>
{/await}
Honestly, I was really impressed when I first saw this. It was so cool to see.
Here's the working demo 🐈✨ :
Yay, lifecycle methods! The lifecycle in Svelte is quite similar to React.
The most common lifecycle method is onMount. This is basically a function that runs when the component is first rendered. onDestroy is a function that runs when a component is destroyed. beforeUpdate and afterUpdate do what their names suggest: run a function before or after the component is re-rendered. These are quite similar to the lifecycle methods in React.
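A minimal sketch of the two most common ones (my own example, with a made-up timer):

<script>
import { onMount, onDestroy } from 'svelte';

let seconds = 0;
let interval;

onMount(() => {
    // runs once the component has been rendered
    interval = setInterval(() => seconds += 1, 1000);
});

onDestroy(() => {
    // runs when the component is removed
    clearInterval(interval);
});
</script>

<p>Mounted {seconds} seconds ago</p>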
The last lifecycle method is tick. The tick function is unlike the other lifecycle methods: it can be called anytime. It returns a promise that resolves as soon as any pending state changes have been applied to the DOM. In simpler words, when you want to ensure that state changes have hit the DOM before you continue, you can await the tick function.
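A rough sketch of awaiting tick (my own, illustrative):

<script>
import { tick } from 'svelte';

let text = '';

async function update() {
    text = 'updated';
    await tick(); // pending state changes are now in the DOM
    // safe to read or measure the updated DOM here
}
</script>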
Do you guys remember the old class-based components in React where you had to bind a function to a specific component? Svelte does something similar, but simpler-looking.
<script>
let name = 'world';
</script>
<input bind:value={name}>
this will change the value of name with the input provided. The bindable attribute (in this case value) varies from element to element.
This Binding
One binding that applies to all elements is this. You can compare it to something like the useRef hook from React: it provides you a reference to a rendered element.
For example you can do something like this ✨:
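The embedded demo hasn't survived here, so this is a reconstruction of the idea (my own sketch, assuming a plain 2D canvas):

<script>
import { onMount } from 'svelte';

let canvas; // will hold a reference to the rendered element

onMount(() => {
    const ctx = canvas.getContext('2d');
    ctx.fillStyle = 'tomato';
    ctx.fillRect(10, 10, 100, 50); // draw exactly like in native js
});
</script>

<canvas bind:this={canvas} width={300} height={200}></canvas>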
And now I can use the canvas API just like in native JavaScript. I really like the canvas API and wanted to use it with React, but I was not able to get that level of simplicity as in native JS. Svelte makes it almost identical to native JS.
A store is a way to manage state across the whole app. You may pass state down to children using props, but when you have to share state across various parent components, you can use a store. A brief overview of stores can be given this way:
// stores.js : here we can initialize the store
import { writable } from 'svelte/store';
export const count = writable(0);
<!-- And let's subscribe this store to App.svelte -->
<!-- so I can just do -->
<script>
import { count } from './stores.js';
let count_value;
count.subscribe(value => {
count_value = value;
});
</script>
<h1>The count is {count_value}</h1>
Stores are a bit of a complex topic (not really, they're quite simple once you go through the tutorial), and I am not gonna cover everything about them in this post. So that may be a different blog for a different time. Meanwhile, if you really wanna know, just go on to the tutorial.
Inbuilt Transitions and animations 🐳
This one surprised me: Svelte has built-in transitions, animations, and motion.
<script>
import { blur } from 'svelte/transition'
let visible = true;
</script>
<label>
<input type="checkbox" bind:checked={visible}>
visible
</label>
{#if visible}
<p transition:blur>
Fades in and out
</p>
{/if}
This piece of code shows how simple it is to implement the blur transition. This is all I wanted from frontend frameworks. Isn't this amazing? I just love Svelte now. There is more animation-related stuff, which you can again see in the svelte-tutorial.
Here's a little navbar that I made using Svelte's built-in transitions:
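The embedded demo is gone, but the gist looked something like this (a reconstruction, not the original code):

<script>
import { slide } from 'svelte/transition';
let open = false;
</script>

<button on:click={() => open = !open}>Menu</button>

{#if open}
<nav transition:slide>
    <a href="/">Home</a>
    <a href="/blog">Blog</a>
    <a href="/about">About</a>
</nav>
{/if}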
This was just a brief overview of Svelte. There is so much more that I didn't cover. I have already linked the Svelte tutorial like 10 times in this blog, so not gonna do it again. Writing this post really helped me understand a lot of stuff about Svelte and also React.
Structure of Java Source Files and Import Statements
Source File Layout
The import statement makes the declarations of external classes available to the current Java source program at the time of compilation.
As stated earlier, we may use any text editor for writing a Java source program. The entire program may consist of more than one source file.
· Each Java source file must have the same name as a public class that it declares.
· Each Java source file can contain only one public class declaration.
· The file extension must be .java.
· The filename is case-sensitive. Therefore, the preceding source code must be stored in a file named NewClass.java.
· The source file may contain more than one class declaration; however, not more than one such declaration can be public.
The source consists of three major sections:
· The package
· The import
· Class definition—besides the comments, which we may embed anywhere in the source.
A multiline comment is shown at the top of the example structure, which shows the name of the file under which this code must be saved.
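That example structure did not survive here, so the following is a reconstructed sketch (file and package names are illustrative):

/*
 * NewClass.java -- the file name must match the public class below.
 */
package mypackage;            // the package section

import java.util.ArrayList;   // the import section

public class NewClass {       // the class definition section
    // fields and methods go here
}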
The import Statement
Immediately following the package declaration, we have import declarations. We use the import statement to tell the compiler where to find the external classes required by the source program under compilation. The full syntax of the import statement is as follows:
import packagename.ClassName;
or
import packagename.* ;
Here are a few examples of the import statement:
· import mypackage.MyClass;
· import mypackage.reports.accounts.salary.EmployeeClass;
· import java.io.BufferedWriter;
· import java.awt.*;
1. The first statement imports the definition of the MyClass class that is defined in the mypackage package.
2. The second statement imports the definition of EmployeeClass belonging to the mypackage.reports.accounts.salary package.
3. The third statement imports the JDK-supplied BuffferedWriter class belonging to the java.io package.
4. The fourth statement imports all the classes belonging to the java.awt package. Note that the asterisk (*) in the fourth statement indicates that all classes are included.
As the syntax suggests, we may import a single class or all the classes belonging to a package.
· To import a single class, we specify its fully qualified name.
· To import all classes in a package, we specify the package name followed by .*.
One of the important things to notice here is that the import statement merely tells the compiler where to find the specified class. It does not actually load the code, as is the case with an #include statement in C or C++. Therefore, an import statement with * does not affect the application’s runtime performance.
Fluid color sampling
ruchitinfushion
11-25-2010, 12:48 PM
I'm trying this code to get the color value of a fluid voxel, but every time the result is 0 0 0.
float $v[] = `getFluidAttr -at "color" -xi 10 -yi 12 -zi 13`;
print $v[0];
print $v[1];
print $v[2];
And how do I use bakeFluidShading.mel (C:\Program Files\Autodesk\Maya2011\scripts\others)? Reply with a solution ASAP. Thank you
ruchitinfushion
11-26-2010, 01:25 PM
Any more solutions?? I want to capture the final color of a voxel. Reply ASAP
Pyrokinesis
11-27-2010, 07:40 AM
Make sure your fluid container's color method is set to dynamic :)
ruchitinfushion
11-27-2010, 07:51 AM
Yeah, sometimes it works, sometimes I get Error: X index out of range. Anyway, do you have any idea about bakeFluidShading.mel ???
Pyrokinesis
11-27-2010, 08:04 AM
Index out of range means your script is querying invalid indices. Try making sure they exist before querying them...
example:
for my default fluid container, this works.
import maya.cmds as mc
fluidColor = mc.getFluidAttr('fluidShape1', attribute='color', xi=1, yi=0, zi=0)
print fluidColor
but this returns your error:
import maya.cmds as mc
fluidColor = mc.getFluidAttr('fluidShape1', attribute='color', xi=10, yi=0, zi=0)
print fluidColor
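One way to stay safe is to clamp your indices to the container's resolution first. A sketch (assuming the default fluidShape1 and its resolutionW/H/D attributes):

import maya.cmds as mc

# grid resolution of the container (width, height, depth)
resW = mc.getAttr('fluidShape1.resolutionW')
resH = mc.getAttr('fluidShape1.resolutionH')
resD = mc.getAttr('fluidShape1.resolutionD')

xi, yi, zi = 10, 12, 13
if xi < resW and yi < resH and zi < resD:
    print(mc.getFluidAttr('fluidShape1', attribute='color', xi=xi, yi=yi, zi=zi))
else:
    print('index out of range for this container')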
Open up bakeFluidShading.mel in a text editor; it has an example and an explanation of how to use it. Select the fluid you want to bake, then run: bakeFluidShading 4.0;
The number represents the resolution at which you want to bake your shading...
ruchitinfushion
11-27-2010, 11:37 AM
OK, got it. Thanks much for the help
05 July 2011 05:13 [Source: ICIS news]
SINGAPORE (ICIS)--Brunei Methanol Co (BMC) has restarted its 850,000 tonne/year methanol plant in Brunei following an outage, a company source said.
“We are struggling with the situation because of the outage. We are trying to manage the situation by time swap or delivery extension," the source said.
The company is currently supplying minimal contractual quantities because of the plant outage, according to its marketer Mitsubishi Gas Chemical (MGC).
MGC said it is interested to buy small volumes of spot material, but stressed that this will be for a short period of time.
BMC is a joint venture between | http://www.icis.com/Articles/2011/07/05/9474873/brunei-methanol-restarts-850000-tonneyear-plant-after-outage.html | CC-MAIN-2013-48 | refinedweb | 102 | 55.03 |