Q: How to select consecutive elements that match a filter Given this example:
<img class="a" />
<img />
<img class="a" />
<img class="a" id="active" />
<img class="a" />
<img class="a" />
<img />
<img class="a" />
(I've just used img tags as an example, that's not what it is in my code)
Using jQuery, how would you select the img tags with class "a" that are adjacent to #active (the middle four, in this example)?
You could do it fairly easily by looping over all the following and preceding elements, stopping when the filter condition fails, but I was wondering if jQuery could do it natively?
A: Here's what I came up with in the end.
// here's our active element.
var $active = $('#active');
// here is the filter we'll be testing against.
var filter = "img.a";
// $all will be the final jQuery object with all the consecutively matched elements.
// start it out by populating it with the current object.
var $all = $active;
for ($curr = $active.prev(filter); $curr.length > 0; $curr = $curr.prev(filter)) {
$all = $all.add($curr);
}
for ($curr = $active.next(filter); $curr.length > 0; $curr = $curr.next(filter)) {
$all = $all.add($curr);
}
For a follow up question, I could see how this could easily be generalised by making it into a function which takes two arguments: an initial element, and a filter string - can anyone point me in the right direction to find out how to extend the jQuery object to add such a function?
Edit: I've since found that the each() function would do this rather well for some purposes. In my own case it doesn't work as cleanly, since I want a single jQuery object for all those elements, but here's how you could use each for a different purpose (hiding consecutive ".a" elements, in this example:)
$('#active')
.nextAll()
.each(hideConsecutive)
.end()
.prevAll()
.each(hideConsecutive)
;
function hideConsecutive(index, element) {
var $e = $(element);
if (!$e.is(".a")) {
return false; // this stops the each function.
} else {
$e.hide('slow');
}
}
--
Edit: I've put this together into a plugin now. Take a look at http://plugins.jquery.com/project/Adjacent if you're interested.
A: I believe looping is your best bet. But you could start at the active element and then move backward and forward until the condition breaks, which, if the set is large enough, would be faster.
A: The below code will add two new functions, nextConsecutive() and prevConsecutive(). They should do what you want.
$.each( ['prev', 'next'], function(unusedIndex, name) {
$.fn[ name + 'Consecutive' ] = function(matchExpr) {
var $all =
(name == 'prev')
? $(this).prevAll()
: $(this).nextAll();
if (!matchExpr)
return $all;
var $notMatch = $($all).not(matchExpr).filter(':first');
if ($all.index($notMatch) != -1)
return $all.slice(0, $all.index($notMatch));
return $all;
};
});
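Assuming the snippet above is loaded, usage would presumably look something like this (the selector and IDs are just examples):
// Collect #active plus its consecutively-matching siblings in both directions.
var $active = $('#active');
var $all = $active
    .nextConsecutive('img.a')
    .add($active.prevConsecutive('img.a'))
    .add($active);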
A: The tilde (~) is the siblings selector:
$('#active ~ img.a').hide();
A: @Prestaul
$('#active ~ img.a')
would only select the following siblings, and would include the non-consecutive siblings too. Docs: http://docs.jquery.com/Selectors/siblings#prevsiblings
A: This is another way to do it, though the sibling selector answer is pretty cool:
var next = $('#active').next('.a');
var prev = $('#active').prev('.a');
Edit: I re-read your requirements and this isn't quite what you want. You could use nextAll and prevAll, but those, too, would not stop at the IMGs without the class name.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How can I vertically align elements in a div? I have a div with two images and an h1. All of them need to be vertically aligned within the div, next to each other. One of the images needs to be absolute positioned within the div.
What is the CSS needed for this to work on all common browsers?
<div id="header">
<img src=".." ></img>
<h1>testing...</h1>
<img src="..."></img>
</div>
A: Using display: flex, first you need to wrap the items that you want to align in a container:
<div class="outdiv">
<div class="indiv">
<span>test1</span>
<span>test2</span>
</div>
</div>
Then apply the following CSS to the wrapping div (outdiv in my example):
.outdiv {
display: flex;
justify-content: center;
align-items: center;
}
A: It worked for me:
.vcontainer {
min-height: 10em;
display: table-cell;
vertical-align: middle;
}
A: To vertically center using CSS, you can let the outer container act like a table, and the content like a table cell. In this format your objects will stay centered. :)
I nested multiple objects in JSFiddle for an example, but the core idea is like this:
HTML
<div class="circle">
<div class="content">
Some text
</div>
</div>
CSS
.circle {
/* Act as a table so we can center vertically its child */
display: table;
/* Set dimensions */
height: 200px;
width: 200px;
/* Horizontal center text */
text-align: center;
/* Create a red circle */
border-radius: 100%;
background: red;
}
.content {
/* Act as a table cell */
display: table-cell;
/* And now we can vertically center! */
vertical-align: middle;
/* Some basic markup */
font-size: 30px;
font-weight: bold;
color: white;
}
The multiple objects example:
HTML
<div class="container">
<div class="content">
<div class="centerhoriz">
<div class="circle">
<div class="content">
Some text
</div><!-- content -->
</div><!-- circle -->
<div class="square">
<div class="content">
<div id="smallcircle"></div>
</div><!-- content -->
</div><!-- square -->
</div><!-- center-horiz -->
</div><!-- content -->
</div><!-- container -->
CSS
.container {
display: table;
height: 500px;
width: 300px;
text-align: center;
background: lightblue;
}
.centerhoriz {
display: inline-block;
}
.circle {
display: table;
height: 200px;
width: 200px;
text-align: center;
background: red;
border-radius: 100%;
margin: 10px;
}
.square {
display: table;
height: 200px;
width: 200px;
text-align: center;
background: blue;
margin: 10px;
}
.content {
display: table-cell;
vertical-align: middle;
font-size: 30px;
font-weight: bold;
color: white;
}
#smallcircle {
display: inline-block;
height: 50px;
width: 50px;
background: green;
border-radius: 100%;
}
Result
https://jsfiddle.net/martjemeyer/ybs032uc/1/
A: I have found a new workaround to vertically align multiple lines of text in a div using CSS 3 (I am also using the Bootstrap v3 grid system to beautify the UI), which is as below:
.immediate-parent-of-text-containing-div {
height: 50px; /* Or any fixed height that suits you. */
}
.text-containing-div {
display: inline-grid;
align-items: center;
text-align: center;
height: 100%;
}
As per my understanding, the immediate parent of the text-containing element must have some height.
A: Vertically and horizontally align element
Use either of these. The result would be the same:
*
*Bootstrap 4
*CSS3
1. Bootstrap 4.3+
For vertical alignment: d-flex align-items-center
For horizontal alignment: d-flex justify-content-center
For vertical and horizontal alignment: d-flex align-items-center justify-content-center
.container {
height: 180px;
width:100%;
background-color: blueviolet;
}
.container > div {
background-color: white;
padding: 1rem;
}
<link href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css"
rel="stylesheet"/>
<div class="d-flex align-items-center justify-content-center container">
<div>I am in Center</div>
</div>
2. CSS 3
.container {
height: 180px;
width:100%;
background-color: blueviolet;
}
.container > div {
background-color: white;
padding: 1rem;
}
.center {
display: flex;
align-items: center;
justify-content: center;
}
<div class="container center">
<div>I am in Center</div>
</div>
A: Now that Flexbox support is increasing, this CSS applied to the containing element would vertically center all contained items (except for those items that specify the alignment themselves, e.g. align-self:start)
.container {
display: flex;
align-items: center;
}
Use the prefixed version if you also need to target Internet Explorer 10, and older (< 4.4 (KitKat)) Android browsers:
.container {
display: -ms-flexbox;
display: -webkit-flex;
display: flex;
-ms-flex-align: center;
-webkit-align-items: center;
-webkit-box-align: center;
align-items: center;
}
A: By default h1 is a block element and will render on the line after the first img, and will cause the second img to appear on the line following the block.
To stop this from occurring you can set the h1 to have inline flow behaviour:
#header > h1 { display: inline; }
As for absolutely positioning the img inside the div, you need to set the containing div to have a "known size" before this will work properly. In my experience, you also need to change the position attribute away from the default - position: relative works for me:
#header { position: relative; width: 20em; height: 20em; }
#img-for-abs-positioning { position: absolute; top: 0; left: 0; }
If you can get that to work, you might want to try progressively removing the height, width, position attributes from div.header to get the minimal required attributes to get the effect you want.
UPDATE:
Here is a complete example that works on Firefox 3:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>Example of vertical positioning inside a div</title>
<style type="text/css">
#header > h1 { display: inline; }
#header { border: solid 1px red;
position: relative; }
#img-for-abs-positioning { position: absolute;
bottom: -1em; right: 2em; }
</style>
</head>
<body>
<div id="header">
<img src="#" alt="Image 1" width="40" height="40" />
<h1>Header</h1>
<img src="#" alt="Image 2" width="40" height="40"
id="img-for-abs-positioning" />
</div>
</body>
</html>
A: We may use the CSS calc() function to calculate the size of the element and then position the child element accordingly.
Example HTML:
<div class="box">
<span><a href="#">Some Text</a></span>
</div>
And CSS:
.box {
display: block;
background: #60D3E8;
position: relative;
width: 300px;
height: 200px;
text-align: center;
}
.box span {
font: bold 20px/20px 'source code pro', sans-serif;
position: absolute;
left: 0;
right: 0;
top: calc(50% - 10px);
}
a {
color: white;
text-decoration: none;
}
Demo created here: https://jsfiddle.net/xnjq1t22/
This solution works well with responsive div height and width as well.
Note: The calc() function has not been tested for compatibility with old browsers.
A: Using only a Bootstrap class:
*
*div: class="container d-flex"
*element inside div: class="m-auto"
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.5.3/css/bootstrap.min.css" crossorigin="anonymous">
<div class="container d-flex mt-5" style="height:110px; background-color: #333;">
<h2 class="m-auto"><a href="https://hovermind.com/">H➲VER➾M⇡ND</a></h2>
</div>
A: A technique from a friend of mine:
div:before {content:" "; display:inline-block; height:100%; vertical-align:middle;}
div p {display:inline-block;}
<div style="height:100px; border:1px solid;">
<p style="border:1px dotted;">I'm vertically centered.</p>
</div>
Demo here.
A: To position block elements in the center (works in Internet Explorer 9 and above), you need a wrapper div:
.vcontainer {
position: relative;
top: 50%;
transform: translateY(-50%);
-webkit-transform: translateY(-50%);
}
A: My new favorite way to do it is with a CSS grid:
/* technique */
.wrapper {
display: inline-grid;
grid-auto-flow: column;
align-items: center;
justify-content: center;
}
/* visual emphasis */
.wrapper {
border: 1px solid red;
height: 180px;
width: 400px;
}
img {
width: 100px;
height: 80px;
background: #fafafa;
}
img:nth-child(2) {
height: 120px;
}
<div class="wrapper">
<img src="https://source.unsplash.com/random/100x80/?bear">
<img src="https://source.unsplash.com/random/100x120/?lion">
<img src="https://source.unsplash.com/random/100x80/?tiger">
</div>
A: Use this formula, and it will always work without cracks:
#outer {height: 400px; overflow: hidden; position: relative;}
#outer[id] {display: table; position: static;}
#middle {position: absolute; top: 50%;} /* For explorer only*/
#middle[id] {display: table-cell; vertical-align: middle; width: 100%;}
#inner {position: relative; top: -50%} /* For explorer only */
/* Optional: #inner[id] {position: static;} */
<div id="outer">
<div id="middle">
<div id="inner">
any text
any height
any content, for example generated from DB
everything is vertically centered
</div>
</div>
</div>
A:
All of them need to be vertically aligned within the div
Aligned how? Tops of the images aligned with the top of the text?
One of the images needs to be absolute positioned within the div.
Absolutely positioned relative to the DIV? Perhaps you could sketch out what you're looking for...?
fd has described the steps for absolute positioning, as well as adjusting the display of the H1 element such that images will appear inline with it. To that, I'll add that you can align the images by use of the vertical-align style:
#header h1 { display: inline; }
#header img { vertical-align: middle; }
...this would put the header and images together, with top edges aligned. Other alignment options exist; see the documentation. You might also find it beneficial to drop the DIV and move the images inside the H1 element - this provides semantic value to the container, and removes the need to adjust the display of the H1:
<h1 id="header">
<img src=".." ></img>
testing...
<img src="..."></img>
</h1>
A: Almost all methods need to specify the height, but often we don't have any height.
So here is a CSS 3 three-line trick that doesn't require knowing the height.
.element {
position: relative;
top: 50%;
transform: translateY(-50%);
}
It's supported even in IE 9, with its vendor prefixes:
.element {
position: relative;
top: 50%;
-webkit-transform: translateY(-50%);
-ms-transform: translateY(-50%);
transform: translateY(-50%);
}
Source: Vertical align anything with just 3 lines of CSS
A: Three ways to center a child div in a parent div
*
*Absolute positioning method
*Flexbox method
*Transform/translate method
Demo
/* Absolute Positioning Method */
.parent1 {
background: darkcyan;
width: 200px;
height: 200px;
position: relative;
}
.child1 {
background: white;
height: 30px;
width: 30px;
position: absolute;
top: 50%;
left: 50%;
margin: -15px;
}
/* Flexbox Method */
.parent2 {
display: flex;
justify-content: center;
align-items: center;
background: darkcyan;
height: 200px;
width: 200px;
}
.child2 {
background: white;
height: 30px;
width: 30px;
}
/* Transform/Translate Method */
.parent3 {
position: relative;
height: 200px;
width: 200px;
background: darkcyan;
}
.child3 {
background: white;
height: 30px;
width: 30px;
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
}
<div class="parent1">
<div class="child1"></div>
</div>
<hr />
<div class="parent2">
<div class="child2"></div>
</div>
<hr />
<div class="parent3">
<div class="child3"></div>
</div>
A: I used this very simple code:
div.ext-box { display: table; width:100%;}
div.int-box { display: table-cell; vertical-align: middle; }
<div class="ext-box">
<div class="int-box">
<h2>Some txt</h2>
<p>bla bla bla</p>
</div>
</div>
Obviously, whether you use a .class or an #id, the result won't change.
A: My trick is to put a table inside the div with one row and one column, set its width and height to 100%, and set the property vertical-align: middle:
<div>
<table style="width:100%; height:100%;">
<tr>
<td style="vertical-align:middle;">
BUTTON TEXT
</td>
</tr>
</table>
</div>
Fiddle:
http://jsfiddle.net/joan16v/sbqjnn9q/
A: Wow, this problem is popular. It's based on a misunderstanding in the vertical-align property. This excellent article explains it:
Understanding vertical-align, or "How (Not) To Vertically Center Content" by Gavin Kistner.
“How to center in CSS” is a great web tool which helps to find the necessary CSS centering attributes for different situations.
In a nutshell (and to prevent link rot):
*
*Inline elements (and only inline elements) can be vertically aligned in their context via vertical-align: middle. However, the “context” isn’t the whole parent container height, it’s the height of the text line they’re in. jsfiddle example
*For block elements, vertical alignment is harder and strongly depends on the specific situation:
*
*If the inner element can have a fixed height, you can make its position absolute and specify its height, margin-top and top position. jsfiddle example
*If the centered element consists of a single line and its parent height is fixed you can simply set the container’s line-height to fill its height. This method is quite versatile in my experience. jsfiddle example
*… there are more such special cases.
A: .outer {
display: flex;
align-items: center;
justify-content: center;
}
A: Just use a one-cell table inside the div! Just set the cell and table height and width to 100% and you can use the vertical-align.
A one-cell table inside the div handles the vertical-align and is backward compatible back to the Stone Age!
A: I have been using the following solution (with no positioning and no line-height) for over a year; it works with Internet Explorer 7 and Internet Explorer 8 as well.
<style>
.outer {
font-size: 0;
width: 400px;
height: 400px;
background: orange;
text-align: center;
display: inline-block;
}
.outer .emptyDiv {
height: 100%;
background: orange;
visibility: collapse;
}
.outer .inner {
padding: 10px;
background: red;
font: bold 12px Arial;
}
.verticalCenter {
display: inline-block;
*display: inline;
zoom: 1;
vertical-align: middle;
}
</style>
<div class="outer">
<div class="emptyDiv verticalCenter"></div>
<div class="inner verticalCenter">
<p>Line 1</p>
<p>Line 2</p>
</div>
</div>
A: This is my personal solution for an i element inside a div.
JSFiddle Example
HTML
<div class="circle">
<i class="fa fa-plus icon">
</i></div>
CSS
.circle {
border-radius: 50%;
color: blue;
background-color: red;
height:100px;
width:100px;
text-align: center;
line-height: 100px;
}
.icon {
font-size: 50px;
vertical-align: middle;
}
A: For me, it worked this way:
<div style="width:70px; height:68px; float:right; display: table-cell; line-height: 68px">
<a href="javascript:void(0)" style="margin-left: 4px; line-height: 2" class="btn btn-primary">Login</a>
</div>
The "a" element converted to a button, using Bootstrap classes, and it is now vertically centered inside an outer "div".
A: <div id="header" style="display: table-cell; vertical-align:middle;">
...
or CSS
.someClass
{
display: table-cell;
vertical-align:middle;
}
Browser Coverage
A: Here is just another (responsive) approach:
html,
body {
height: 100%;
}
body {
margin: 0;
}
.table {
display: table;
width: auto;
table-layout:auto;
height: 100%;
}
.table:nth-child(even) {
background: #a9edc3;
}
.table:nth-child(odd) {
background: #eda9ce;
}
.tr {
display: table-row;
}
.td {
display: table-cell;
width: 50%;
vertical-align: middle;
}
http://jsfiddle.net/herrfischerhamburg/JcVxz/
A: Just this:
<div>
<table style="width: 100%; height: 100%">
<tr>
<td style="width: 100%; height: 100%; vertical-align: middle;">
What ever you want vertically-aligned
</td>
</tr>
</table>
</div>
A one-cell table inside the div handles the vertical-align and is backward compatible back to the Stone Age!
A: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<html>
<head>
<style type="text/css">
#style_center { position:relative; top:50%; left:50%; }
#style_center_absolute { position:absolute; top:50px; left:50px; }
<!--#style_center { position:relative; top:50%; left:50%; height:50px; margin-top:-25px; }-->
</style>
</head>
<body>
<div style="height:200px; width:200px; background:#00FF00">
<div id="style_center">+</div>
</div>
</body>
</html>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1027"
} |
Q: Why don't modules always honor 'require' in ruby? (sorry I should have been clearer with the code the first time I posted this. Hope this makes sense)
File "size_specification.rb"
class SizeSpecification
def fits?
end
end
File "some_module.rb"
require 'size_specification'
module SomeModule
def self.sizes
YAML.load_file(File.dirname(__FILE__) + '/size_specification_data.yml')
end
end
File "size_specification_data.yml
---
- !ruby/object:SizeSpecification
height: 250
width: 300
Then when I call
SomeModule.sizes.first.fits?
I get an exception because the objects returned by "sizes" are plain Objects, not SizeSpecifications, so they don't have a "fits?" method.
A: Are your settings and ruby installation ok? I created those 3 files and wrote what follows in "test.rb"
require 'yaml'
require "some_module"
SomeModule.sizes.first.fits?
Then I ran it.
$ ruby --version
ruby 1.8.6 (2008-06-20 patchlevel 230) [i486-linux]
$ ruby -w test.rb
$
No errors!
A: On second reading I'm a little confused; you seem to want to mix the class into the module, which is probably not so advisable. Also, is the YAML supposed to load an array of SizeSpecifications?
It appears that you're not mixing the module into your class. If I run the test in irb then the require throws a LoadError. So I assume you've put two files together; if not, dump it.
Normally you'd write the functionality in the module, then mix that into the class. so you may modify your code like this:
class SizeSpecification
include SomeModule
def fits?
end
end
Which will allow you to then say:
SizeSpecification::SomeModule.sizes
I think you should also be able to say:
SizeSpecification.sizes
However, that requires you to take the self prefix off the sizes method definition.
Does that help?
A: The question code got me a little confused.
In general with Ruby, if that happens it's a good sign that I am trying to do things the wrong way.
It might be better to ask a question related to your actual intended outcome, rather than the specifics of a particular 'attack' on your problem. Then we can say 'nonono, don't do that, do THIS' or 'ahhhhh, now I understand what you wanna do'.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: backup data for reporting What is the best method to transfer data from a sales table to a sales history table in SQL Server 2005? The sales history table will be used for reporting.
A: Bulkcopy is fast and it will not use the transaction log. One batch run at the end of the day.
Deleting the copied records from your production server is a different situation that needs to be planned as part of that server's maintenance approach/plans. Your reporting server solution should not interfere with or affect the production server.
Keep in mind that your reporting server is not meant to be a backup of the data but rather a copy made exclusively for reporting purposes.
Also check the server settings of your reporting server to make sure it is on the Simple recovery model.
A: Take a look at SSAS. OLAP is built for reporting and is easy to query with tools like excel pivot tables.
A: Most solutions will require 2 steps;
-copy the records from source to target
-delete records from source.
It is essential that your source table have a primary key.
The "best" method depends on a lot of things.
How many records?
Is this a production environment?
What tools do you have?
A: Unless you are moving a large amount of data, a simple stored procedure should do the trick.
A sql server job can manage the timing of when to call the proc.
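A minimal sketch of such a proc (the table and column names below are made up for illustration - they are not from the question); a SQL Server Agent job could then call it nightly:
CREATE PROCEDURE dbo.ArchiveSales
    @CutoffDate datetime
AS
BEGIN
    BEGIN TRANSACTION;
    -- Copy the records from the source table to the history table.
    INSERT INTO dbo.SalesHistory (SaleID, SaleDate, Amount)
    SELECT SaleID, SaleDate, Amount
    FROM dbo.Sales
    WHERE SaleDate < @CutoffDate;
    -- Then delete the copied records from the source table.
    DELETE FROM dbo.Sales
    WHERE SaleDate < @CutoffDate;
    COMMIT TRANSACTION;
END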
A: If you just want to move the data to another table, use BulkCopy/BulkInsert. If you want to build reporting, I would suggest a BI solution such as MS Analysis Services (OLAP).
It is difficult and in my opinion ugly to maintain two or more history/archive tables in the same database. For a reporting solution you will be considering all the tables for that piece of information anyway. History/archive tables should only be used if you are going to put the data away and not touch it for a long period of time, i.e. archive it away outside the operational DB.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Setting environment variables for Phusion Passenger applications I've set up Passenger in development (Mac OS X) and it works flawlessly. The only problem came later: now I have a custom GEM_HOME path and ImageMagick binaries installed in "/usr/local". I can put them in one of the shell rc files that get sourced and this solves the environment variables for processes spawned from the console; but what about Passenger? The same application cannot find my gems when run this way.
A: Before you do any requires (especially before requiring rubygems) you can do:
ENV['GEM_HOME'] = '/foo'
This will change the environment variable inside this process.
A: I found out that if you have root privileges on the computer, then you can set the necessary environment variables in the "envvars" file, and apachectl will execute this file before starting Apache.
envvars is typically located in the same directory as apachectl - on Mac OS X that is /usr/sbin. If you cannot find it, look in the source of the apachectl script.
After changing the envvars file, restart Apache with "apachectl -k restart".
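For example (the paths below are illustrative, not taken from the question), the lines added to envvars might look like:
export GEM_HOME=/usr/local/mygems
export PATH=/usr/local/bin:$PATH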
A: I know of two solutions. The first (documented here) is essentially the same as manveru's—set the ENV variable directly in your code.
The second is to create a wrapper around the Ruby interpreter that Passenger uses, and is documented here (look for passenger_with_ruby). The gist is that you create (and point PassengerRuby in your Apache config to) /usr/bin/ruby_with_env, an executable file consisting of:
#!/bin/bash
export ENV_VAR=value
/usr/bin/ruby $*
Both work; the former approach is a little less hackish, I think.
A: I've run into this issue as well. It appears that Passenger doesn't pass through values set using the SetEnv Apache directive - which is unfortunate.
Perhaps it might be possible to set environment variables in your environment.rb or boot.rb (assuming you're talking about a Rails app; I'm not familiar with Rack but presumably it has similar functionality)
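For example (paths are illustrative), near the top of environment.rb you might try something like:
# Assumed locations - adjust GEM_HOME and the ImageMagick binary directory to your setup.
ENV['GEM_HOME'] ||= '/usr/local/mygems'
ENV['PATH'] = "/usr/local/bin:#{ENV['PATH']}"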
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What's the correct term for "number of std deviations" away from a mean I've computed the mean & variance of a set of values, and I want to pass along the value that represents the # of std deviations away from mean for each number in the set. Is there a better term for this, or should I just call it num_of_std_devs_from_mean ...
A: Some suggestions here:
Standard score (z-value, z-score, normal score)
but "sigma" or "stdev_distance" would probably be clearer
A: The standard deviation is usually denoted with the letter σ (sigma). Personally, I think more people will understand what you mean if you do say number of standard deviations.
As for a variable name, as long as you comment the declaration you could shorten it to std_devs.
A: sigma is what you want, I think.
A: That is normalizing your values. You could just refer to it as the normalized value. Maybe norm_val would be more appropriate.
A: I've always heard it as number of standard deviations
A: Deviation may be what you're after. Deviation is the distance between a data point and the mean.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I use transactions with Stomp and ActiveMQ (and Perl)? I'm trying to replace some bespoke message queues with ActiveMQ, and I need to talk to them (a lot) from Perl. ActiveMQ provides a Stomp interface and Perl has Net::Stomp, so this seems like it should be fine, but it's not.
Even if I send a BEGIN frame over Stomp the messages sent with SEND are immediately published, and if I ABORT the transaction nothing happens.
I can't find any clear answers suggesting that it's not possible, that it is possible, or that there's a relevant bit of configuration. Also, Stomp doesn't seem to be a great protocol for checking for error responses from the server.
Am I out of luck?
A: BTW the best place to ask Perl/ActiveMQ/Stomp questions is the ActiveMQ user forum as lots of Perl-Stomp folks hang out there.
The trick with STOMP transactions is to make sure each message you send or each acknowledgement you make includes the transaction ID header. See the transaction handling section of the STOMP protocol.
The reason for this is that with STOMP you could have many transactions taking place at the same time if your client is multi threaded - along with some non-transacted operations.
A: Have a look at Net::Stomp::Receipt. It's a subclass of Net::Stomp that implements "return receipts" from the Stomp protocol; it allows you to make sure your message was received correctly, and to abort the transaction otherwise.
A: You have to wrap the acknowledgements inside a transaction.
In pseudocode (or pseudo STOMP) this would be:
*
*BEGIN [TRANSACTION-ID] -> send to server
*MESSAGE [MESSAGE-ID] (received) <- received from server
*ACK [MESSAGE-ID] [TRANSACTION-ID] -> send to server
*COMMIT [TRANSACTION-ID] -> send to server
I have already gotten this working with the PHP driver (patching the abort call to use the transaction ID when I pass in a frame object to acknowledge).
Unfortunately, after redelivering four messages the client stops. At least this happens to me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: linux uptime history How can I get a history of uptimes for my Debian box? After a reboot, I don't see an option for the uptime command to print a history of uptimes. If it matters, I would like to use these uptimes for graphing a page in PHP to show my webserver's uptime lengths between boots.
Update:
Not sure if it is based on a length of time or if last gets reset on reboot but I only get the most recent boot timestamp with the last command. last -x also does not return any further info. Sounds like a script is my best bet.
Update:
Uptimed is the information I am looking for, not sure how to grep that info in code. Managing my own script for a db sounds like the best fit for an application.
A: Install uptimed. It does exactly what you want.
Edit:
You can apparently include it in a PHP page as easily as this:
<? system("/usr/local/bin/uprecords -a -B"); ?>
Examples
A: The last command will give you the reboot times of the system. You could take the difference between each successive reboot, and that should give the uptime of the machine.
Update:
1800 INFORMATION's answer is a better solution.
A: You could create a simple script which runs uptime and dumps it to a file.
uptime >> uptime.log
Then set up a cron job for it.
A: This isn't stored between boots, but The Uptimes Project is a third-party option to track it, with software for a range of platforms.
Another tool available on Debian is uptimed which tracks uptimes between boots.
A: I would create a cron job to run at the required resolution (say 10 minutes) by entering the following [on one single line - I've just separated it for formatting purposes] in your crontab (crontab -l to list, crontab -e to edit).
0,10,20,30,40,50 * * * *
/bin/echo $(/bin/date +\%Y-\%m-\%d) $(/usr/bin/uptime)
>>/tmp/uptime.hist 2>&1
This appends the date, time and uptime to the uptime.hist file every ten minutes while the machine is running. You can then examine this file manually to figure out the information or write a script to process it as you see fit.
Whenever the uptime reduces, there's been a reboot since the previous record. When there are large gaps between lines (i.e., more than the expected ten minutes), the machine's been down during that time.
A: Try this out:
last | grep reboot
A: According to the last manual page:
The pseudo user reboot logs in each time the system is rebooted.
Thus last reboot will show a log of all reboots since the log file
was created.
So the last column of the "last reboot" command gives you the uptime history:
#last reboot
reboot system boot **************** Sat Sep 21 03:31 - 08:27 (1+04:56)
reboot system boot **************** Wed Aug 7 07:08 - 08:27 (46+01:19)
A: I don't think this information is saved between reboots.
If shutting down properly, you could run a command on shutdown that saves the uptime; that way you could read it back after booting back up.
A: This information is not normally saved. However, you can sign up for an online service that will do this for you. You just install a client that will send your uptime to the server every 5 minutes and the site will present you with a graph of your uptimes:
http://uptimes-project.org/
A: Or you can use tuptime https://sourceforge.net/projects/tuptime/ for the total uptime.
A: You can use tuptime, a simple command for reporting the total uptime in Linux, keeping it between reboots.
http://sourceforge.net/projects/tuptime/
A: Since I haven't found an answer here that would help retroactively, maybe this will help someone.
kern.log (depending on your distribution) should log a timestamp.
It will be something like:
2019-01-28T06:25:25.459477+00:00 someserver kernel: [44114473.614361] somemessage
"44114473.614361" represents seconds since last boot, from that you can calculate the uptime without having to install anything.
A: Nagios can make even very beautiful diagrams about this.
A: Use Syslog
For anyone coming here searching for their past uptime.
The solution from @1800_Information is good advice for the future, but I needed to find information about my past uptimes on a specific date.
Therefore I used syslog to determine when the system was started that day (first log entry of that day) and when the system was shut down again.
Boot time
To get the system start time grep for the month and day and show only the first lines:
sudo grep "May 28" /var/log/syslog* | head
Shutdown time
To get the system shutdown time grep for the month and day and show only the last few lines:
sudo grep "May 28" /var/log/syslog* | tail
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: How do I use a vendor Apache with a self-compiled Perl and mod_perl? I want to use Apple's or RedHat's built-in Apache but I want to use Perl 5.10 and mod_perl. What's the least intrusive way to accomplish this? I want the advantage of free security patching for the vendor's Apache, dav, php, etc., but I care a lot about which version of Perl I use and what's in my @INC path. I don't mind compiling my own mod_perl.
A: *
*Build your version of Perl 5.10 following any special instructions from the mod_perl documentation. Tell Perl configurator to install in some non-standard place, like /usr/local/perl/5.10.0
*Use the instructions to build a shared library (or dynamic, or .so) mod_perl against your distribution's Apache, but make sure you run the Makefile.PL using your version of perl:
/usr/local/perl/5.10.0/bin/perl Makefile.PL APXS=/usr/bin/apxs
*Install and configure mod_perl like normal.
It may be helpful, after step one, to change your path so you don't accidentally get confused about which version of Perl you're using:
export PATH=/usr/local/perl/5.10.0/bin:$PATH
A: You'll want to look into mod_so
A: I've done this before. It wasn't pretty, but it worked, especially since vendor perls are usually 2-3 years old.
I started with making my own perl RPM that installed perl into a different location, like /opt/. This was pretty straightforward. I mostly started with this because I didn't want the system utilities that used perl to break when I upgraded/installed new modules. I had to modify all my scripts to specify #!/opt/bin/perl at the top and sometimes I even played with the path to make sure my perl came first.
Next, I grabbed a mod_perl source RPM and modified it to use my /opt/bin/perl instead of /usr/bin/perl. I don't have access to the changes I made, since it was at a different gig. It took me a bit of playing around to get it.
It did work, but I'm not an RPM wizard, so dependency checking didn't work out so well. For example, I could uninstall my custom RPM and break everything. It wasn't a big deal for me, so I moved on.
I was also mixing RPM's with CPAN installs of modules (did I mention we built our own custom CPAN mirror with our own code?). This was a bit fragile too. Again, I didn't have the resources (ie, time) to figure out how to bend cpan2rpm to use my perl and not cause RPM conflicts.
If I had it all to do again, I would make a custom 5.10 perl RPM and just replace the system perl. Then I would use cpan2rpm to create the RPM packages I needed for my software and compile my own mod_perl RPM.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I import Facebook friends from another website I am looking for a way to connect to Facebook by allowing the user to enter in their username and password and have our app connect to their account and get their contacts so that they can invite them to join their group on our site. I have written a Facebook app before, but this is not an app as much as it is a connector so that they can invite all their friends or just some to the site we are working on.
I have seen several other sites do this and also connect to Yahoo, Gmail and Hotmail contacts. I don't think they are using Facebook Connect to do this since it is so new, but they may be.
Any solution in any language is fine as I can port whatever example to use C#. I cannot find anything specifically on Google or Facebook to address this specific problem. Any help is appreciated.
I saw a first answer get removed that had suggested I might need to scrape the friends page. The more I look around, the more this seems to be what I need to do. Any other way, I think, will require the person to add it as an app. I am wondering how an answer can get removed; maybe that user deleted it.
A: You can use Facebook Connect 'Account Linking'.
Python/Django example from Facebook developers wiki:
Page:
def invite_friends(request):
    # HTML escape function for invitation content.
    from cgi import escape
    facebook_uid = request.facebook.uid
    # Convert the array of friends into a comma-delimited string.
    exclude_ids = ",".join([str(a) for a in request.facebook.friends.getAppUsers()])
    # Prepare the invitation text that all invited users will receive.
    content = """<fb:name uid="%s" firstnameonly="true" shownetwork="false"/> wants to invite you to play Online board games, <fb:req-choice url="%s" label="Put Online Gaming and Video Chat on your profile!"/>""" % (facebook_uid, request.facebook.get_add_url())
    invitation_content = escape(content, True)
    return render_to_response('facebook/invite_friends.fbml',
        {'content': invitation_content, 'exclude_ids': exclude_ids})
Template:
<fb:request-form action="http://apps.facebook.com/livevideochat/?skipped=1"
method="POST" invite="true" type="Online Games"
content="{{ content }}">
<fb:multi-friend-selector max="20"
actiontext="Here are your friends who aren't using Online Games and Live Video Chat. Invite them to play Games Online today!"
showborder="true" rows="5" exclude_ids="{{ exclude_ids }}"> </fb:request-form>
A: I asked this question a while ago, before Facebook Connect was live and well. The best way to really do this is using Facebook Connect.
http://developers.facebook.com/connect.php?tab=website
I am currently using this on our live site and using the Facebook Developer Toolkit for .NET on Codeplex here:
http://www.codeplex.com/FacebookToolkit
Good luck!
A: I looked up that it is alright to answer my own post, so here it is.
It turns out that you would have to scrape the friends list, which is not allowed by Facebook's terms of use, so we will not be doing this for our sites. Here are a few articles that show what happens when you don't play by the rules. Plaxo tested their scraping with Scoble, and Facebook shut down Scoble's account in January of this year.
http://scobleizer.com/2008/01/03/what-i-was-using-to-hit-facebook/
http://news.cnet.com/8301-13577_3-9839474-36.html
A: Seems like Facebook has an api for this. Check this blog here.
http://developers.facebook.com/news.php?blog=1&story=73
A: Not answering the question but hopefully providing some insight...
It's features like this that teach people that it is ok to enter their username and password for site A on a form from site B. This is most definitely not ok. Please do not make people think it is.
But maybe the Facebook API allows you to circumvent this problem, by making people log into Facebook itself to give your app access. A slight but important difference.
A: Here is an open source PHP 5 tool to let you import contacts from both e-mail and some social networks including Facebook: OpenInviter is an open source PHP class, written in PHP5, for importing contacts from most of the well-known e-mail providers & social networks.
A: Just import them into Yahoo using FB Connect. All better? No screen scraping, no FB violations. Done.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Unable to receive JSON from JQuery ajax call I have determined that my JSON, coming from the server, is valid (making the ajax call manually), but I would really like to use JQuery. I have also determined that the "post" URL, being sent to the server, is correct, using firebug. However, the error callback is still being triggered (parse error). I also tried datatype: text.
Are there other options that I should include?
$(function() {
$("#submit").bind("click", function() {
$.ajax({
type: "post",
url: "http://myServer/cgi-bin/broker" ,
datatype: "json",
data: {'start' : start,'end' : end},
error: function(request,error){
alert(error);
},
success: function(request) {
alert(request.length);
}
}); // End ajax
}); // End bind
}); // End eventlistener
A: Here are a few suggestions I would try:
1) the 'datatype' option you have specified should be 'dataType' (case-sensitive I believe)
2) try using the 'contentType' option as so:
contentType: "application/json; charset=utf-8"
I'm not sure how much that will help as it's used in the request to your post url, not in the response.
See this article for more info: http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax
(It's written for asp.net, but may be applicable)
3) Triple check the output of your post url and run the output through a JSON validator just to be absolutely sure it's valid and can be parsed into a JSON object. http://www.jsonlint.com
Hope some of this helps!
A: Why myResult instead of request?
success: function(request) {
alert(myResult.length);
}
A: The data parameter is wrong. Here is an example that works:
data: { index: ddl.selectedIndex },
This constructs an object with a property called index whose value is ddl.selectedIndex.
You need to remove the quotes from your data parameter line
Good luck
A
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Architecture for real-time system? I would like to ask for some advice or experiences regarding architectures or technologies for building a real-time system. I have some previous experience developing a "Queuing Management System", which I implemented by sending TcpServer and TcpClient messages to all operators when an operator changed the queue number. But I think this strategy is complicated and has issues.
Could anyone guide me to some ideas or frameworks?
A: First up: hardcore real-time peeps will take issue with the use of ".NET" and "real-time" in the same sentence, due to .NET's non-deterministic nature ;)
Having said that, if you're just implementing a supervisory or visualisation layer over an existing real-time system (say, implementing a SCADA-type system), then .NET should be fine. Then your network architecture can boil down to two scenarios:
*
*Clients poll from a server: you create a centralised server which contains much of your process logic, and clients poll from this server periodically.
*Server supports a publish/subscribe mechanism: clients subscribe to the server's information, and the server sends out updates when they occur.
There's no one "right" way to do the above comms; it depends a lot on size and frequency of updates, network traffic, etc.
A: I haven't worked on anything real-time, but I would assume that looking into real-time Linux would be a good start to understanding the problems and the solutions they have come up with for dealing with real-time applications.
A: I'd recommend looking at QNX.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What are the full-text search tools you can use in SQL Server? Besides full-text indexing and using the LIKE keyword, what other tools are there to build search functionality on top of MS SQL? This question is particularly about searching records, not files.
A: In episode 11 of the SO podcast, Jeff and Joel discussed full-text searching of SQL Server; Joel's recommendation was Lucene.NET. You can see their discussion in that episode's transcript; it is about a quarter of the way down the page.
A: SQL Server 2008's full-text search capabilities have been improved - perhaps look there first.
A: I believe MS SQL 2005+ has the capability to search using regular expression.
MSDN has an article about this http://msdn.microsoft.com/en-us/magazine/cc163473.aspx.
A: I documented how I used Lucene.NET in my ASP.NET application BugTracker.NET here:
http://www.ifdefined.com/blog/post/2009/02/Full-Text-Search-in-ASPNET-using-LuceneNET.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Which is the best Linux C/C++ debugger (or front-end to gdb) to help teaching programming? I teach a sort of "lite" C++ programming course to novices ("lite" meaning no pointers, no classes, just plain old C, plus references and STL string and vectors). Students have no previous experience in programming, so I believe that using an interactive debugger would help them understand program flow, variables, and recursion.
The course is taught in Linux. Teaching them to use gdb is just overkill (they will not use nor understand most features). I just need something simple but easy to use: to see at which line the program is now, what is in the stack (local variables, previous calls, etc.). I am looking for something similar to the old Turbo Pascal or Borland Turbo C++ debugger, or the Visual Studio debugger.
Thank you,
A: ddd is a graphical front-end to gdb that is pretty nice. One of the downsides is its classic X interface, but I seem to recall it being pretty intuitive.
A: You could try using Insight, a graphical front-end for gdb written by Red Hat.
Or if you use GNOME desktop environment, you can also try Nemiver.
A: You may want to check out Eclipse CDT. It provides a C/C++ IDE that runs on multiple platforms (e.g. Windows, Linux, Mac OS X, etc.). Debugging with Eclipse CDT is comparable to using other tools such as Visual Studio.
You can check out the Eclipse CDT Debug tutorial that also includes a number of screenshots.
A: Qt Creator, apart from other goodies, also has a good debugger integration, for CDB, GDB and the Symnbian debugger, on all supported platforms. You don't need to use Qt to use the Qt Creator IDE, nor do you need to use QMake - it also has CMake integration, although QMake is very easy to use.
You may want to use Qt Creator as the IDE to teach programming with, consider it has some good features:
*
*Very smart and advanced C++ editor
*Project and build management tools
*QMake and CMake integration
*Integrated, context-sensitive help system
*Excellent visual debugger (CDB, GDB and Symbian)
*Supports GCC and VC++
*Rapid code navigation tools
*Supports Windows, Linux and Mac OS X
A: Perhaps it is indirect to gdb (because it's an IDE), but my recommendations would be KDevelop. Being quite spoiled with Visual Studio's debugger (professionally at work for many years), I've so far felt the most comfortable debugging in KDevelop (as hobby at home, because I could not afford Visual Studio for personal use - until Express Edition came out). It does "look something similar to" Visual Studio compared to other IDE's I've experimented with (including Eclipse CDT) when it comes to debugging step-through, step-in, etc (placing break points is a bit awkward because I don't like to use mouse too much when coding, but it's not difficult).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "108"
} |
Q: Scanner cannot be resolved to a type I just installed Ubuntu 8.04 and I'm taking a course in Java, so I figured why not install an IDE while I am installing it. So I pick my IDE of choice, Eclipse, and I make a very simple program, Hello World, to make sure everything is running smoothly. When I go to use Scanner for user input I get a very odd error:
My code:
import java.util.Scanner;
class test {
public static void main (String [] args) {
Scanner sc = new Scanner(System.in);
System.out.println("hi");
}
}
The output:
Exception in thread "main" java.lang.Error: Unresolved compilation problems:
Scanner cannot be resolved to a type
Scanner cannot be resolved to a type
at test.main(test.java:5)
A: The Scanner class is new in Java 5. I do not know what Hardy's default Java environment is, but it is not Sun's and therefore may be outdated.
I recommend installing the package sun-java6-jdk to get the most up-to-date version, then telling Eclipse to use it.
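On Ubuntu that would be something along the lines of the following (the exact package name may vary by release):
sudo apt-get install sun-java6-jdk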
A: If you are using a version of Java before 1.5, java.util.Scanner doesn't exist.
Which version of the JDK is your Eclipse project set up to use?
Have a look at Project, Properties, Java Build Path -- look for the 'JRE System Library' entry, which should have a version number next to it.
A: It could also be that although you have JDK 1.5 or higher, the project has some specific settings that tell it to compile as 1.4. You can test this via Project >> Properties >> Java Compiler and ensure the "Compiler Compliance Level" is set to 1.5 or higher.
A: I know it's been quite a while since the question was posted, but the solution may still be of interest to anyone out there. It's actually quite simple...
Under Ubuntu you need to set the Java compiler "javac" to use Sun's JDK instead of any other alternative. The difference from some of the answers posted so far is that I am talking about javac, NOT java. To do so, fire up a shell and do the following:
*
*As root or sudo type in at command line:
# update-alternatives --config javac
*Locate the number pointing to sun's jdk, type in this number, and hit "ENTER".
*You're done! From now on you can enjoy java.util.Scanner under Ubuntu.
System.out.println("Say thank you, Mr.");
Scanner scanner = new java.util.Scanner(System.in);
String thanks = scanner.next();
System.out.println("Your welcome.");
A: You imported Scanner and created one, but you're not reading from it. Scanner requires user input. You're trying to print out one thing, but you're exposing your program to the fact that you are going to use your own input, so it decides to print "Hello World" after you give a user input. But since you are not deciding what the program will print, the system gets confused since it doesn't know what to print. You need something like int a = sc.nextInt(); or String b = sc.nextLine(); and then give your user input. But you said you want Hello World!, so Scanner is redundant.
A: package com.company;
import java.util.Scanner;
public class Main {
public static void main(String[] args) {
Scanner in = new Scanner(System.in);
System.out.print("Input seconds: ");
int num = in.nextInt();
for (int i = 1; i <=num; i++) {
if(i%10==3)
{
System.out.println(i);
}
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: forms and jQuery I'm creating a simple form for a site I manage. I use jQuery for my JavaScript. I noticed a large number of plugins for jQuery and forms. Does anybody have any favorites that they find especially useful? In particular, plugins to help with validation would be the most useful.
A: The jQuery Form Plugin is pretty much standard. It handles serializing form fields and AJAX submission.
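If I recall its API correctly, typical usage looks roughly like this (the form ID and URL are just examples):
// Turn an existing form into an AJAX-submitted form.
$('#myForm').ajaxForm({
    url: '/submit',
    success: function(response) {
        alert('Thanks for your submission!');
    }
});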
A: Form Validation is one that comes to my mind. I think is being used here in SO.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: scriptResourceHandler Does anyone know much about the ASP.NET web.config scriptResourceHandler element?
I'm looking at it because I'm implementing an MS Ajax UpdatePanel in an existing site,
and after doing some looking around on the web, I'm not finding a lot of info about it.
And to avoid the flood of replies telling me how inefficient the update panel is, and that it's not actually providing any benefit etc. etc. I know! Let's say I've got my reasons for using it and leave it at that.
I guess my main question is: will setting enableCompression="true" and enableCaching="true" help the performance of my update panel in any way?
A: Given the traversing of the DOM that is actually happening with an update panel, it's generally not the content that is hindering performance... it is the PC/browser.
I know this is exactly what you aren't looking for, but unless your panel contains a significant amount of data, compression and caching aren't going to help you terribly.
A: I took this from the scriptresourcehandler documentation:
By default, the ScriptResourceHandler class compresses and caches embedded script files for Internet Explorer 7.
So I don't think you'll see any difference if you set enableCompression/enableCaching true because it's already happening if you're using IE7.
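For reference, those attributes live in web.config roughly like this (a sketch; the exact parent sections depend on your ASP.NET AJAX version):
<configuration>
  <system.web.extensions>
    <scripting>
      <scriptResourceHandler enableCompression="true" enableCaching="true" />
    </scripting>
  </system.web.extensions>
</configuration>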
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Best win32 compiled scripting language? What is the best compilable scripting language for Win32? I prefer .EXEs because I don't want to install the runtime on the servers first (my company administers many via remote access), but I need to be able to do things like set NTFS permissions and (if possible) call APIs over the network.
There was a small Perl which appeared to be able to do most of this, but it does not seem to have been updated/developed in quite a while. I have wondered about Lua, but I don't know if it has everything I need yet (and don't want to hunt through fifty library sites trying to find out). Any thoughts?
A: Have you considered using an EXE maker? For example, you can code in Python and use py2exe to create a standalone EXE that runs anywhere (it actually packages Python into the exe, so you don't have to install the runtime).
A: Ruby is my scripting language of choice.
Try RubyScript2Exe.
A: A scripting language is, almost by definition, not compiled into a standalone executable. So maybe you need to restate your intentions or give some indication about what kind of program you want to create.
C# is a powerful language that compiles to .EXE and allows you to interface with pretty much anything (through native p/invoke calls, if necessary). A basic but very usable Visual Studio for C# can be downloaded for free from the Microsoft website. The .NET runtime is installed on most systems nowadays.
A: Did you consider AutoIt ?
It is a scripting language, and you can quickly transform a script into an exe...
A: At OSCON 2005, I heard Damien Conway say "the only thing better than Perl is something that works well, even if it's not written in Perl."
It's good advice. Instead of looking for the best language that can be compiled to an .EXE, worry a lot more about writing it in a language that can be compiled to an .EXE. Use whatever works. Just remember that the quality of your programming matters infinitely more than what language you use.
That said, I like py2exe. YMMV. Good luck!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Are there any Parsing Expression Grammar (PEG) libraries for Javascript or PHP? I find myself drawn to the Parsing Expression Grammar formalism for describing domain specific languages, but so far the implementation code I've found has been written in languages like Java and Haskell that aren't web server friendly in the shared hosting environment that my organization has to live with.
Does anyone know of any PEG libraries or PackRat Parser Generators for Javascript or PHP? Of course code generators in any languages that can produce Javascript or PHP source code would do the trick.
A: php PEG https://github.com/maetl/php-peg
This post is really old, but I found it through Google, and it should have an answer.
A: Language.js:
Language.js is an open source experimental new parser based on PEG (Parsing Expression Grammar), with the special addition of the "naughty OR" operator to handle errors in a unique new way. It makes use of memoization to achieve linear-time parsing speed.
A: I have recently written PEG.js, a PEG-based parser generator for JavaScript. It can be used from the command line or you can try it from your browser.
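To give a feel for the grammar syntax, here is a sketch of the classic arithmetic example (close to the one shipped with PEG.js, so treat it as illustrative rather than authoritative):
start
  = additive

additive
  = left:multiplicative "+" right:additive { return left + right; }
  / multiplicative

multiplicative
  = left:primary "*" right:multiplicative { return left * right; }
  / primary

primary
  = digits:[0-9]+ { return parseInt(digits.join(""), 10); }
  / "(" additive:additive ")" { return additive; }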
A: There is in fact one for Javascript: OMeta. http://www.tinlizzie.org/ometa/
I also implemented a version of this in Python: http://github.com/python-parsley/parsley
A: There's also Kouprey for JavaScript, which is a very easy to use PEG generator/library.
A: Look at https://github.com/leblancmeneses/NPEG; it can easily be converted into PHP.
The parse tree is created with anonymous functions.
A: Have you looked at ANTLR? It produces lexer and parser code, handles abstract syntax trees, lets you insert code into the grammar to be injected into the lexer/parser code, and it's available for a variety of languages!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: how can you parse an excel (.xls) file stored in a varbinary in MS SQL 2005? problem
how to best parse/access/extract "excel file" data stored as binary data in an SQL 2005 field?
(so all the data can ultimately be stored in other fields of other tables.)
background
basically, our customer is requiring a large volume of verbose data from their users. unfortunately, our customer cannot require any kind of db export from their user. so our customer must supply some sort of UI for their user to enter the data. the UI our customer decided would be acceptable to all of their users was excel as it has a reasonably robust UI. so given all that, and our customer needs this data parsed and stored in their db automatically.
we've tried to convince our customer that the users will do this exactly once and then insist on db export! but the customer can not require db export of their users.
* our customer is requiring us to parse an excel file
* the customer's users are using excel as the "best" user interface to enter all the required data
* the users are given blank excel templates that they must fill out
  * these templates have a fixed number of uniquely named tabs
  * these templates have a number of fixed areas (cells) that must be completed
  * these templates also have areas where the user will insert up to thousands of identically formatted rows
* when complete, the excel file is submitted from the user by standard html file upload
* our customer stores this file raw into their SQL database
given
* a standard excel (".xls") file (native format, not comma or tab separated)
* file is stored raw in a varbinary(max) SQL 2005 field
* excel file data may not necessarily be "uniform" between rows -- i.e., we can't just assume one column is all the same data type (e.g., there may be row headers, column headers, empty cells, different "formats", ...)
requirements
* code completely within SQL 2005 (stored procedures, SSIS?)
* be able to access values on any worksheet (tab)
* be able to access values in any cell (no formula data or dereferencing needed)
* cell values must not be assumed to be "uniform" between rows -- i.e., we can't just assume one column is all the same data type (e.g., there may be row headers, column headers, empty cells, formulas, different "formats", ...)
preferences
* no filesystem access (no writing temporary .xls files)
* retrieve values in defined format (e.g., actual date value instead of a raw number like 39876)
A: My thought is that anything can be done, but there is a price to pay. In this particular case, the price seems to be too high.
I don't have a tested solution for you, but I can share how I would give my first try on a problem like that.
My first approach would be to install Excel on the SQL Server machine, code some assemblies that consume the file from your rows using the Excel API, and then load them into SQL Server as assembly (CLR) procedures.
As I said, this is just an idea; I don't have details, but I'm sure others here can complement or criticize my idea.
But my real advice is to rethink the whole project. It makes no sense to read tabular data on binary files stored on a cell of a row of a table on database.
A: This looks like an "I wouldn't start from here" kind of a question.
The "install Excel on the server and start coding" answer looks like the only route, but it simply has to be worth exploring alternatives first: it's going to be painful, expensive and time-consuming.
I strongly feel that we're looking at a "requirement" that is the answer to the wrong problem.
What business problem is creating this need? What's driving that? Try the Five Whys as a possible way to explore the history.
A: It sounds like you're trying to store an entire database table inside a spreadsheet and then inside a single table's field. Wouldn't it be simpler to store the data in a database table to begin with and then export it as an XLS when required?
Without opening up an instance of Excel and having Excel resolve worksheet references, I'm not sure it's doable at all.
A: Could you write the varbinary to a Raw File Destination? And then use an Excel Source as your input to whatever step is next in your precedence constraints.
I haven't tried it, but that's what I would try.
A: Well, the whole setup seems a bit twisted :-) as others have already pointed out.
If you really cannot change the requirements and the whole setup: why don't you explore components such as Aspose.Cells or Syncfusion XlsIO, native .NET components that allow you to read and interpret native Excel (XLS) files? I'm pretty sure that with either of the two, you should be able to read your binary Excel into a MemoryStream and then feed that into one of those Excel-reading components, and off you go.
So with a bit of .NET development and SQL CLR, I guess this should be doable - not sure if it's the best way to do it, but it should work.
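A rough sketch of that idea (the table/column names, the key variable, and the workbook call are placeholders; check the actual API of whichever component you pick):
// Hypothetical SQL CLR sketch: pull the varbinary, wrap it in a MemoryStream,
// and hand it to an Excel-reading component.
using System.Data.SqlClient;
using System.IO;

int submissionId = 42;  // example key
byte[] raw;
using (SqlConnection cn = new SqlConnection("context connection=true"))
{
    cn.Open();
    SqlCommand cmd = new SqlCommand(
        "SELECT ExcelBlob FROM dbo.Submissions WHERE SubmissionId = @id", cn);
    cmd.Parameters.AddWithValue("@id", submissionId);
    raw = (byte[])cmd.ExecuteScalar();
}

using (MemoryStream ms = new MemoryStream(raw))
{
    // e.g. something like: Workbook workbook = new Workbook(ms);
    // then walk the worksheets, read the cells, and INSERT into the target tables...
}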
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Mapping internal data elements to external vendors' XML schema I'm considering Altova MapForce (or something similar) to produce either XSLT and/or a Java or C# class to do the translation. Today, we pull data right out of the database and manually build an XML string that we post to a webservice.
Should it be db -> (internal)XML -> XSLT -> (External)XML? What do you folks do out there in the wide world?
A: I would use one of the out-of-the-box XML serialization classes to do your internal XML generation, and then use XSLT to transform to the external XML. You might generate a schema as well to enforce that the translation code (whatever will drive your XSLT translation) continues to get the XML it is expecting for translation, in case a change to the object breaks things.
There are a number of XSLT editors on the market that will help you do the mappings, but I prefer to just use a regular XML editor.
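In .NET that pipeline is only a few lines; here is a sketch (the Order type, the file names, and the stylesheet are stand-ins for your own):
// Serialize an internal object to XML, then transform it with XSLT.
using System.Xml;
using System.Xml.Serialization;
using System.Xml.Xsl;

Order order = LoadOrderFromDatabase();            // however you build your internal object
using (XmlWriter w = XmlWriter.Create("internal.xml"))
{
    new XmlSerializer(typeof(Order)).Serialize(w, order);
}

XslCompiledTransform xslt = new XslCompiledTransform();
xslt.Load("to-vendor.xslt");
xslt.Transform("internal.xml", "external.xml");   // this is what gets posted to the web service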
A: Yeah, I think you're heading down the right path with MapForce. If you don't want to write code to perform the actual transformation, MapForce can do that for you also. This may be better long term because it's less code to maintain.
Steer clear of more expensive options (e.g. BizTalk) unless you really need B2B integration and orchestration.
A: What database are you using? Oracle has some nice XML mapping tools. There are some Java binding tools (one is http://java.sun.com/developer/technicalArticles/WebServices/jaxb). However, if you have the luxury, consider using Ruby, which has nice built-in "to_xml" methods.
A: Tip #1: Avoid all use of XSLT.
The tool support sucks. The resulting solution will be unmaintainable.
Tip #2: Eliminate all unnecessary steps.
Just translate your resultset (assuming you're using JDBC or equiv) to the outbound XML.
Tip #3: Assume all use of a schema-based tool to be incorrect and plan accordingly.
In other words, just fake it. If you have to squirt out some mutant SOAP (redundant, I know) payload just mock up a working SOAP message and then turn it into a template. Velocity doesn't suck.
That said, the best/correct answer is to use an "XML Writer" style solution. There are a few.
The best is the one I wrote, LOX (Lightweight Objects for XML).
The public API uses a Builder design pattern. Due to some magic under the hood, it's impossible to create malformed XML.
Please note: If XML is the answer, you've asked the wrong question. Sometimes, we're forced against our will to use it in some way. When that happens, it's crucial to use tools which minimize developer effort and improve code maintainability.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I get started processing email related to website activity? I am writing a web application that requires user interaction via email. I'm curious if there is a best practice or recommended source for learning about processing email. I am writing my application in Python, but I'm not sure what mail server to use or how to format the message or subject line to account for automated processing. I'm also looking for guidance on processing bouncebacks.
A: There are some pretty serious concerns here for how to send email automatically, and here are a few:
Use an email library. Python includes one called 'email'. This is your friend, it will stop you from doing anything tragically wrong. Read an example from the Python Manual.
Some points that will stop you from getting blocked by spam filters:
Always send from a valid email address. You must be able to send email to this address and have it received (it can go into /dev/null after it's received, but it must be possible to /deliver/ there). This will stop spam filters that do Sender Address Verification from blocking your mail.
The email address you send from on the server.sendmail(fromaddr, [toaddr]) line will be where bounces go. The From: line in the email is a totally different address, and that's where mail will go when the user hits 'Reply:'. Use this to your advantage, bounces can go to one place, while reply goes to another.
Send email to a local mail server, I recommend postfix. This local server will receive your mail and be responsible for sending it to your upstream server. Once it has been delivered to the local server, treat it as 'sent' from a programmatic point of view.
If you have a site that is on a static ip in a datacenter of good reputation, don't be afraid to simply relay the mail directly to the internet. If you're in a datacenter full of script kiddies and spammers, you will need to relay this mail via a public MTA of good reputation, hopefully you will be able to work this out without a hassle.
Don't send an email in only HTML. Always send it in Plain and HTML, or just Plain. Be nice, I use a text only email client, and you don't want to annoy me.
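A minimal sketch of that with the standard library (all addresses and hosts below are placeholders):
# Build a plain+HTML alternative message and hand it to a local SMTP relay.
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart('alternative')
msg['Subject'] = 'Activity notification [ticket 1234]'
msg['From'] = 'App Name <no-reply@example.com>'
msg['To'] = 'user@example.com'
msg.attach(MIMEText('Plain-text version of the message.', 'plain'))
msg.attach(MIMEText('<p>HTML version of the message.</p>', 'html'))

s = smtplib.SMTP('localhost')
# envelope sender (where bounces go) can differ from the From: header, as noted above
s.sendmail('bounces@example.com', ['user@example.com'], msg.as_string())
s.quit()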
Verify that you're not running SPF on your email domain, or get it configured to allow your server to send the mail. Do this by doing a TXT lookup on your domain.
$ dig google.com txt
...snip...
;; ANSWER SECTION:
google.com. 300 IN TXT "v=spf1 include:_netblocks.google.com ~all"
As you can see from that result, there's an SPF record there. If you don't have SPF, there won't be a TXT record. Read more about SPF on wikipedia.
Hope that helps.
A: Some general information with regards to automated mail processing...
First, the mail server "brand" itself isn't that important for broadcasting or receiving emails. All of them support the standard smtp / pop3 communications protocol. Most even have IMAP support and have some level of spam filtering. That said, try to use a current generation email server.
Second, be aware that in an effort to reduce spam a lot of the receiving mail servers out there will simply throw a message away instead of responding back that a mail account doesn't exist. Which means you may not receive those.
Bear in mind that getting past spam filters is an art. A number of ISPs watch for duplicate messages, messages that look like spam based on keywords or other content, etc. This is sometimes independent of the quantity of messages sent; I've seen messages with as few as 50 copies get blocked by AOL even though they were legitimate emails. So testing is your friend, and look into this article on Wikipedia on anti-spam techniques. Then make sure you're not doing that crap.
As far as processing the messages, just remember it's a queued system. Connect to the server via POP3 to retrieve messages, open it, do some action, delete the message or archive it, and move on.
With regards to bouncebacks, let the mail server do most of the work. You should be able to configure it to notify a certain email account on the server in the event that it is unable to deliver a message. You can check that account periodically and process the Non Delivery Reports as necessary.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to save a public html page with all media and preserve structure Looking for a Linux application (or Firefox extension) that will allow me to scrape an HTML mockup and keep the page's integrity.
Firefox does an almost perfect job but doesn't grab images referenced in the CSS.
The Scrapbook extension for Firefox gets everything, but flattens the directory structure.
I wouldn't terribly mind if all folders became children of the index page.
A: See Website Mirroring With wget
wget --mirror -w 2 -p --html-extension --convert-links http://www.yourdomain.com
A: Have you tried wget?
A: wget -r does what you want, and if not, there are plenty of flags to configure it. See man wget.
Another option is curl, which is even more powerful. See http://curl.haxx.se/.
A: Teleport Pro is great for this sort of thing. You can point it at complete websites and it will download a copy locally maintaining directory structure, and replacing absolute links with relative ones as necessary. You can also specify whether you want content from other third-party websites linked to from the original site.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Ruby/Rails Collection to Collection I have a two tables joined with a join table - this is just pseudo code:
Library
Book
LibraryBooks
What I need to do is: if I have the id of a library, I want to get all the libraries that the books of this library are in.
So if I have Library 1, and Library 1 has books A and B in it, and books A and B are in Libraries 1, 2, and 3, is there an elegant (one line) way to do this in Rails?
I was thinking:
l = Library.find(1)
allLibraries = l.books.libraries
But that doesn't seem to work. Suggestions?
A: l = Library.find(:all, :include => :books)
l.books.map { |b| b.library_ids }.flatten.uniq
Note that map(&:library_ids) is slower than map { |b| b.library_ids } in Ruby 1.8.6, and faster in 1.9.0.
I should also mention that if you used :joins instead of :include there, it would find the library and related books all in the same query, speeding up the database time. :joins will only work, however, if a library has books.
A: Perhaps:
l.books.map {|b| b.libraries}
or
l.books.map {|b| b.libraries}.flatten.uniq
if you want it all in a flat array.
Of course, you should really define this as a method on Library, so as to uphold the noble cause of encapsulation.
A: If you want a one-dimensional array of libraries returned, with duplicates removed.
l.books.map{|b| b.libraries}.flatten.uniq
A: One problem with
l.books.map{|b| b.libraries}.flatten.uniq
is that it will generate one SQL call for each book in l. A better approach (assuming I understand your schema) might be:
LibraryBook.find(:all, :conditions => ['book_id IN (?)', l.book_ids]).map(&:library_id).uniq
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: React on global hotkey in a Java program on Windows/Linux/Mac? A Java6 application sits in the system tray. It needs to be activated using a hotkey (e.g. Super-G or Ctrl-Shift-L etc) and do something (e.g. showing an input box).
How do I do that on:
*
*Windows (XP or Vista)
*OS/X
*Linux (Gnome or KDE)
A: For Linux (X11) there is JXGrabKey: http://sourceforge.net/projects/jxgrabkey/
There is also a tutorial for grabbing a global hotkey on Linux: http://ubuntuforums.org/showthread.php?t=864566
I didn't though find a solution for OS X yet.
To build something for all 3 platforms I'd suggest stripping down JIntellitype (it's Apache licensed) to its global hotkey functionality and extending it with the OS X and X11 functionality...
A: It seems that this is not doable in a cross-platform fashion without using the native interfaces.
On Windows, you can use the free JIntellitype library.
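For the Windows case, registering a hotkey with JIntellitype looks roughly like this (a sketch from memory; double-check the constants and the native DLL setup against the library's docs):
// Register Ctrl+Shift+L as a global hotkey (Windows only).
import com.melloware.jintellitype.HotkeyListener;
import com.melloware.jintellitype.JIntellitype;

public class TrayHotkey {
    private static final int HOTKEY_ID = 1;

    public static void main(String[] args) {
        JIntellitype.getInstance().registerHotKey(
                HOTKEY_ID, JIntellitype.MOD_CONTROL + JIntellitype.MOD_SHIFT, 'L');

        JIntellitype.getInstance().addHotKeyListener(new HotkeyListener() {
            public void onHotKey(int id) {
                if (id == HOTKEY_ID) {
                    System.out.println("hotkey pressed - show the input box here");
                }
            }
        });
        // in a real tray application the AWT/Swing event thread keeps the JVM alive
    }
}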
A: If anyone wants to do the OSX or Linux versions of the JNI part of Jintellitype I would be more than happy to add those to the JIntellitype library.
Melloware
http://www.melloware.com
A: I've compiled a library for global hotkeys in java using JNA. It currently supports Windows, Linux and Mac OSX. It also supports media keys on windows and linux.
if anyone is interested, try https://github.com/tulskiy/jkeymaster
I would appreciate any feedback.
Thank you.
A: I found this solution to work just great on Windows. It does not require you to install any software like JIntellitype. Note that this is a 32-bit DLL, and you can recompile it for a 64-bit JVM if you so desire. All credits to the original author of the blog.
A: I've written a Java library for global key/mouse events here. This works for Windows, Linux X11, and OSX.
https://github.com/repeats/SimpleNativeHooks
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Java JFormattedTextField for typing dates I've been having trouble making a JFormattedTextField use dates with the format dd/MM/yyyy. Specifically, as the user types, the cursor should "jump" the slashes, and get directly to the next number position.
Also, the JFormattedTextField must verify if the date entered is valid, and reject it somehow if the date is invalid, or "correct it" to a valid date, such as if the user input "13" as month, set it as "01" and add +1 to the year.
I tried using a mask ("##/##/####") with the validate() method of JFormattedTextField to check if the date is valid, but it appears that those two don't work well together (or I'm too green on Java to know how... :), and then the user can type anything on the field.
Any help is really appreciated! Thanks!
A: try using JCalendar
A: You may have to use a regular JTextField and call setDocument() with a custom document. I recommend extending PlainDocument, this makes it easy to validate input as the document changes, and add slashes as appropriate.
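A bare-bones sketch of that approach (it only handles typing at the end of the field, and the actual date validation still has to be added):
import javax.swing.JTextField;
import javax.swing.text.AttributeSet;
import javax.swing.text.BadLocationException;
import javax.swing.text.PlainDocument;

// Accepts only digits for a dd/MM/yyyy field and inserts the slashes itself.
class DateDocument extends PlainDocument {
    public void insertString(int offs, String str, AttributeSet a) throws BadLocationException {
        if (str == null) return;
        for (char c : str.toCharArray()) {
            if (!Character.isDigit(c) || getLength() >= 10) return;
            int pos = getLength();                  // simplification: always append at the end
            if (pos == 2 || pos == 5) {             // jump the slashes
                super.insertString(pos, "/", a);
                pos++;
            }
            super.insertString(pos, String.valueOf(c), a);
        }
    }
}

// usage:
// JTextField dateField = new JTextField(10);
// dateField.setDocument(new DateDocument());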
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How best to copy entire databases in MS SQL Server? I need to copy about 40 databases from one server to another. The new databases should have new names, but all the same tables, data and indexes as the original databases. So far I've been:
1) creating each destination database
2) using the "Tasks->Export Data" command to create and populate tables for each database individually
3) rebuilding all of the indexes for each database with a SQL script
Only three steps per database, but I'll bet there's an easier way. Do any MS SQL Server experts out there have any advice?
A: Given that you're performing this on multiple databases -- you want a simple scripted solution, not a point and click solution.
This is a backup script that i keep around.
Get it working for one file and then modify it for many.
(on source server...)
BACKUP DATABASE Northwind
TO DISK = 'c:\Northwind.bak'
(target server...)
RESTORE FILELISTONLY
FROM DISK = 'c:\Northwind.bak'
(look at the device names... and determine where you want the mdf and
ldf files to go on this target server)
RESTORE DATABASE TestDB
FROM DISK = 'c:\Northwind.bak'
WITH MOVE 'Northwind' TO 'c:\test\testdb.mdf',
MOVE 'Northwind_log' TO 'c:\test\testdb.ldf'
GO
A: In order of ease
* stop server/fcopy/attach is probably easiest.
* backup/restore - can be done disconnected pretty simple and easy
* transfer DTS task - needs file copy permissions
* replication - furthest from simple to setup
Things to think about permissions, users and groups at the destination server esp. if you're transferring or restoring.
A: There are better answers already but this is an 'also ran' because it is just another option.
For the low, low price of free you could look at the Microsoft SQL Server Database Publishing Wizard. This tool allows you to script the schema, data, or data and schema. Plus it can be run from a UI or command line <- think CI process.
A: Backup -> Restore is the simplest, if not to use the replication.
A: If you use the Backup/Restore solution you're likely to have orphaned users, so be sure to check out this Microsoft article on how to fix them.
A: Another one to check out that is quick and simple:
Simple SQL BULK Copy
http://projects.c3o.com/files/3/plugins/entry11.aspx
A: Maybe the easiest is to detach/reattach. Right-click in the server manager on the DB, tasks --> detach. Then copy the MDF/LDF files to the new server and then reattach by clicking on the server icon and tasks-->attach. It will ask you for the MDF file - make sure the name etc is accurate.
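In T-SQL the same route looks roughly like this (the database name and file paths are placeholders):
-- Detach on the source server, copy the files, attach on the target.
USE master;
EXEC sp_detach_db @dbname = N'SourceDb';

-- copy SourceDb.mdf / SourceDb_log.ldf to the new server, then:
CREATE DATABASE NewDbName
ON (FILENAME = N'D:\Data\SourceDb.mdf'),
   (FILENAME = N'D:\Data\SourceDb_log.ldf')
FOR ATTACH;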
A: Backup the databases using the standard SQL backup tool in Enterprise Manager, then when you restore on the second server you can specify the name of the new database.
This is the best way to maintain the schema in its entirety.
A: use backups to restore the databases to the new server with the new names.
A: Redgate SQL Compare and SQL Data Compare. The Comparison Bundle was by far the best investment a company I worked for ever made. Moving e-training content was a breeze with it.
A: Check those links:
* For multiple db's backup
* and single db restore
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: What's the best way to do fixed-point math? I need to speed up a program for the Nintendo DS which doesn't have an FPU, so I need to change floating-point math (which is emulated and slow) to fixed-point.
How I started was by changing floats to ints: whenever I needed to convert them, I used x>>8 to convert the fixed-point variable x to the actual number and x<<8 to convert to fixed-point. Soon I found out it was impossible to keep track of what needed to be converted, and I also realized it would be difficult to change the precision of the numbers (8 in this case).
My question is, how should I make this easier and still fast? Should I make a FixedPoint class, or just a FixedPoint8 typedef or struct with some functions/macros to convert them, or something else? Should I put something in the variable name to show it's fixed-point?
A: I wouldn't use floating point at all on a CPU without special hardware for handling it. My advice is to treat ALL numbers as integers scaled to a specific factor. For example, all monetary values are in cents as integers rather than dollars as floats. For example, 0.72 is represented as the integer 72.
Addition and subtraction are then a very simple integer operation such as (0.72 + 1 becomes 72 + 100 becomes 172 becomes 1.72).
Multiplication is slightly more complex as it needs an integer multiply followed by a scale back such as (0.72 * 2 becomes 72 * 200 becomes 14400 becomes 144 (scaleback) becomes 1.44).
That may require special functions for performing more complex math (sine, cosine, etc) but even those can be sped up by using lookup tables. Example: since you're using fixed-2 representation, there's only 100 values in the range (0.0,1] (0-99) and sin/cos repeat outside this range so you only need a 100-integer lookup table.
Cheers,
Pax.
A: When I first encountered fixed point numbers I found Joe Lemieux's article, Fixed-point Math in C, very helpful, and it does suggest one way of representing fixed-point values.
I didn't wind up using his union representation for fixed-point numbers though. I mostly have experience with fixed-point in C, so I haven't had the option to use a class either. For the most part though, I think that defining your number of fraction bits in a macro and using descriptive variable names makes this fairly easy to work with. Also, I've found that it is best to have macros or functions for multiplication and especially division, or you quickly get unreadable code.
For example, with 24.8 values:
#include "stdio.h"
/* Declarations for fixed point stuff */
typedef int int_fixed;
#define FRACT_BITS 8
#define FIXED_POINT_ONE (1 << FRACT_BITS)
#define MAKE_INT_FIXED(x) ((x) << FRACT_BITS)
#define MAKE_FLOAT_FIXED(x) ((int_fixed)((x) * FIXED_POINT_ONE))
#define MAKE_FIXED_INT(x) ((x) >> FRACT_BITS)
#define MAKE_FIXED_FLOAT(x) (((float)(x)) / FIXED_POINT_ONE)
#define FIXED_MULT(x, y) ((x)*(y) >> FRACT_BITS)
#define FIXED_DIV(x, y) (((x)<<FRACT_BITS) / (y))
/* tests */
int main()
{
int_fixed fixed_x = MAKE_FLOAT_FIXED( 4.5f );
int_fixed fixed_y = MAKE_INT_FIXED( 2 );
int_fixed fixed_result = FIXED_MULT( fixed_x, fixed_y );
printf( "%.1f\n", MAKE_FIXED_FLOAT( fixed_result ) );
fixed_result = FIXED_DIV( fixed_result, fixed_y );
printf( "%.1f\n", MAKE_FIXED_FLOAT( fixed_result ) );
return 0;
}
Which writes out
9.0
4.5
Note that there are all kinds of integer overflow issues with those macros, I just wanted to keep the macros simple. This is just a quick and dirty example of how I've done this in C. In C++ you could make something a lot cleaner using operator overloading. Actually, you could easily make that C code a lot prettier too...
I guess this is a long-winded way of saying: I think it's OK to use a typedef and macro approach. So long as you're clear about what variables contain fixed point values it isn't too hard to maintain, but it probably won't be as pretty as a C++ class.
If I was in your position, I would try to get some profiling numbers to show where the bottlenecks are. If there are relatively few of them then go with a typedef and macros. If you decide that you need a global replacement of all floats with fixed-point math though, then you'll probably be better off with a class.
A: Changing fixed point representations is commonly called 'scaling'.
If you can do this with a class with no performance penalty, then that's the way to go. It depends heavily on the compiler and how it inlines. If there is a performance penalty using classes, then you need a more traditional C-style approach. The OOP approach will give you compiler-enforced type safety which the traditional implementation only approximates.
@cibyr has a good OOP implementation. Now for the more traditional one.
To keep track of which variables are scaled, you need to use a consistent convention. Make a notation at the end of each variable name to indicate whether the value is scaled or not, and write macros SCALE() and UNSCALE() that expand to x>>8 and x<<8.
#define SCALE(x) (x>>8)
#define UNSCALE(x) (x<<8)
xPositionUnscaled = UNSCALE(10);
xPositionScaled = SCALE(xPositionUnscaled);
It may seem like extra work to use so much notation, but notice how you can tell at a glance that any line is correct without looking at other lines. For example:
xPositionScaled = SCALE(xPositionScaled);
is obviously wrong, by inspection.
This is a variation of the Apps Hungarian idea that Joel mentions in this post.
A: You can try my fixed point class (Latest available @ https://github.com/eteran/cpp-utilities)
// From: https://github.com/eteran/cpp-utilities/edit/master/Fixed.h
// See also: http://stackoverflow.com/questions/79677/whats-the-best-way-to-do-fixed-point-math
/*
* The MIT License (MIT)
*
* Copyright (c) 2015 Evan Teran
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef FIXED_H_
#define FIXED_H_
#include <ostream>
#include <exception>
#include <cstddef> // for size_t
#include <cstdint>
#include <type_traits>
#include <boost/operators.hpp>
namespace numeric {
template <size_t I, size_t F>
class Fixed;
namespace detail {
// helper templates to make magic with types :)
// these allow us to determine resonable types from
// a desired size, they also let us infer the next largest type
// from a type which is nice for the division op
template <size_t T>
struct type_from_size {
static const bool is_specialized = false;
typedef void value_type;
};
#if defined(__GNUC__) && defined(__x86_64__)
template <>
struct type_from_size<128> {
static const bool is_specialized = true;
static const size_t size = 128;
typedef __int128 value_type;
typedef unsigned __int128 unsigned_type;
typedef __int128 signed_type;
typedef type_from_size<256> next_size;
};
#endif
template <>
struct type_from_size<64> {
static const bool is_specialized = true;
static const size_t size = 64;
typedef int64_t value_type;
typedef uint64_t unsigned_type;
typedef int64_t signed_type;
typedef type_from_size<128> next_size;
};
template <>
struct type_from_size<32> {
static const bool is_specialized = true;
static const size_t size = 32;
typedef int32_t value_type;
typedef uint32_t unsigned_type;
typedef int32_t signed_type;
typedef type_from_size<64> next_size;
};
template <>
struct type_from_size<16> {
static const bool is_specialized = true;
static const size_t size = 16;
typedef int16_t value_type;
typedef uint16_t unsigned_type;
typedef int16_t signed_type;
typedef type_from_size<32> next_size;
};
template <>
struct type_from_size<8> {
static const bool is_specialized = true;
static const size_t size = 8;
typedef int8_t value_type;
typedef uint8_t unsigned_type;
typedef int8_t signed_type;
typedef type_from_size<16> next_size;
};
// this is to assist in adding support for non-native base
// types (for adding big-int support), this should be fine
// unless your bit-int class doesn't nicely support casting
template <class B, class N>
B next_to_base(const N& rhs) {
return static_cast<B>(rhs);
}
struct divide_by_zero : std::exception {
};
template <size_t I, size_t F>
Fixed<I,F> divide(const Fixed<I,F> &numerator, const Fixed<I,F> &denominator, Fixed<I,F> &remainder, typename std::enable_if<type_from_size<I+F>::next_size::is_specialized>::type* = 0) {
typedef typename Fixed<I,F>::next_type next_type;
typedef typename Fixed<I,F>::base_type base_type;
static const size_t fractional_bits = Fixed<I,F>::fractional_bits;
next_type t(numerator.to_raw());
t <<= fractional_bits;
Fixed<I,F> quotient;
quotient = Fixed<I,F>::from_base(next_to_base<base_type>(t / denominator.to_raw()));
remainder = Fixed<I,F>::from_base(next_to_base<base_type>(t % denominator.to_raw()));
return quotient;
}
template <size_t I, size_t F>
Fixed<I,F> divide(Fixed<I,F> numerator, Fixed<I,F> denominator, Fixed<I,F> &remainder, typename std::enable_if<!type_from_size<I+F>::next_size::is_specialized>::type* = 0) {
// NOTE(eteran): division is broken for large types :-(
// especially when dealing with negative quantities
typedef typename Fixed<I,F>::base_type base_type;
typedef typename Fixed<I,F>::unsigned_type unsigned_type;
static const int bits = Fixed<I,F>::total_bits;
if(denominator == 0) {
throw divide_by_zero();
} else {
int sign = 0;
Fixed<I,F> quotient;
if(numerator < 0) {
sign ^= 1;
numerator = -numerator;
}
if(denominator < 0) {
sign ^= 1;
denominator = -denominator;
}
base_type n = numerator.to_raw();
base_type d = denominator.to_raw();
base_type x = 1;
base_type answer = 0;
// egyptian division algorithm
while((n >= d) && (((d >> (bits - 1)) & 1) == 0)) {
x <<= 1;
d <<= 1;
}
while(x != 0) {
if(n >= d) {
n -= d;
answer += x;
}
x >>= 1;
d >>= 1;
}
unsigned_type l1 = n;
unsigned_type l2 = denominator.to_raw();
// calculate the lower bits (needs to be unsigned)
// unfortunately for many fractions this overflows the type still :-/
const unsigned_type lo = (static_cast<unsigned_type>(n) << F) / denominator.to_raw();
quotient = Fixed<I,F>::from_base((answer << F) | lo);
remainder = n;
if(sign) {
quotient = -quotient;
}
return quotient;
}
}
// this is the usual implementation of multiplication
template <size_t I, size_t F>
void multiply(const Fixed<I,F> &lhs, const Fixed<I,F> &rhs, Fixed<I,F> &result, typename std::enable_if<type_from_size<I+F>::next_size::is_specialized>::type* = 0) {
typedef typename Fixed<I,F>::next_type next_type;
typedef typename Fixed<I,F>::base_type base_type;
static const size_t fractional_bits = Fixed<I,F>::fractional_bits;
next_type t(static_cast<next_type>(lhs.to_raw()) * static_cast<next_type>(rhs.to_raw()));
t >>= fractional_bits;
result = Fixed<I,F>::from_base(next_to_base<base_type>(t));
}
// this is the fall back version we use when we don't have a next size
// it is slightly slower, but is more robust since it doesn't
// require and upgraded type
template <size_t I, size_t F>
void multiply(const Fixed<I,F> &lhs, const Fixed<I,F> &rhs, Fixed<I,F> &result, typename std::enable_if<!type_from_size<I+F>::next_size::is_specialized>::type* = 0) {
typedef typename Fixed<I,F>::base_type base_type;
static const size_t fractional_bits = Fixed<I,F>::fractional_bits;
static const size_t integer_mask = Fixed<I,F>::integer_mask;
static const size_t fractional_mask = Fixed<I,F>::fractional_mask;
// more costly but doesn't need a larger type
const base_type a_hi = (lhs.to_raw() & integer_mask) >> fractional_bits;
const base_type b_hi = (rhs.to_raw() & integer_mask) >> fractional_bits;
const base_type a_lo = (lhs.to_raw() & fractional_mask);
const base_type b_lo = (rhs.to_raw() & fractional_mask);
const base_type x1 = a_hi * b_hi;
const base_type x2 = a_hi * b_lo;
const base_type x3 = a_lo * b_hi;
const base_type x4 = a_lo * b_lo;
result = Fixed<I,F>::from_base((x1 << fractional_bits) + (x3 + x2) + (x4 >> fractional_bits));
}
}
/*
* inheriting from boost::operators enables us to be a drop in replacement for base types
* without having to specify all the different versions of operators manually
*/
template <size_t I, size_t F>
class Fixed : boost::operators<Fixed<I,F>> {
static_assert(detail::type_from_size<I + F>::is_specialized, "invalid combination of sizes");
public:
static const size_t fractional_bits = F;
static const size_t integer_bits = I;
static const size_t total_bits = I + F;
typedef detail::type_from_size<total_bits> base_type_info;
typedef typename base_type_info::value_type base_type;
typedef typename base_type_info::next_size::value_type next_type;
typedef typename base_type_info::unsigned_type unsigned_type;
public:
static const size_t base_size = base_type_info::size;
static const base_type fractional_mask = ~((~base_type(0)) << fractional_bits);
static const base_type integer_mask = ~fractional_mask;
public:
static const base_type one = base_type(1) << fractional_bits;
public: // constructors
Fixed() : data_(0) {
}
Fixed(long n) : data_(base_type(n) << fractional_bits) {
// TODO(eteran): assert in range!
}
Fixed(unsigned long n) : data_(base_type(n) << fractional_bits) {
// TODO(eteran): assert in range!
}
Fixed(int n) : data_(base_type(n) << fractional_bits) {
// TODO(eteran): assert in range!
}
Fixed(unsigned int n) : data_(base_type(n) << fractional_bits) {
// TODO(eteran): assert in range!
}
Fixed(float n) : data_(static_cast<base_type>(n * one)) {
// TODO(eteran): assert in range!
}
Fixed(double n) : data_(static_cast<base_type>(n * one)) {
// TODO(eteran): assert in range!
}
Fixed(const Fixed &o) : data_(o.data_) {
}
Fixed& operator=(const Fixed &o) {
data_ = o.data_;
return *this;
}
private:
// this makes it simpler to create a fixed point object from
// a native type without scaling
// use "Fixed::from_base" in order to perform this.
struct NoScale {};
Fixed(base_type n, const NoScale &) : data_(n) {
}
public:
static Fixed from_base(base_type n) {
return Fixed(n, NoScale());
}
public: // comparison operators
bool operator==(const Fixed &o) const {
return data_ == o.data_;
}
bool operator<(const Fixed &o) const {
return data_ < o.data_;
}
public: // unary operators
bool operator!() const {
return !data_;
}
Fixed operator~() const {
Fixed t(*this);
t.data_ = ~t.data_;
return t;
}
Fixed operator-() const {
Fixed t(*this);
t.data_ = -t.data_;
return t;
}
Fixed operator+() const {
return *this;
}
Fixed& operator++() {
data_ += one;
return *this;
}
Fixed& operator--() {
data_ -= one;
return *this;
}
public: // basic math operators
Fixed& operator+=(const Fixed &n) {
data_ += n.data_;
return *this;
}
Fixed& operator-=(const Fixed &n) {
data_ -= n.data_;
return *this;
}
Fixed& operator&=(const Fixed &n) {
data_ &= n.data_;
return *this;
}
Fixed& operator|=(const Fixed &n) {
data_ |= n.data_;
return *this;
}
Fixed& operator^=(const Fixed &n) {
data_ ^= n.data_;
return *this;
}
Fixed& operator*=(const Fixed &n) {
detail::multiply(*this, n, *this);
return *this;
}
Fixed& operator/=(const Fixed &n) {
Fixed temp;
*this = detail::divide(*this, n, temp);
return *this;
}
Fixed& operator>>=(const Fixed &n) {
data_ >>= n.to_int();
return *this;
}
Fixed& operator<<=(const Fixed &n) {
data_ <<= n.to_int();
return *this;
}
public: // conversion to basic types
int to_int() const {
return (data_ & integer_mask) >> fractional_bits;
}
unsigned int to_uint() const {
return (data_ & integer_mask) >> fractional_bits;
}
float to_float() const {
return static_cast<float>(data_) / Fixed::one;
}
double to_double() const {
return static_cast<double>(data_) / Fixed::one;
}
base_type to_raw() const {
return data_;
}
public:
void swap(Fixed &rhs) {
using std::swap;
swap(data_, rhs.data_);
}
public:
base_type data_;
};
// if we have the same fractional portion, but differing integer portions, we trivially upgrade the smaller type
template <size_t I1, size_t I2, size_t F>
typename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator+(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {
typedef typename std::conditional<
I1 >= I2,
Fixed<I1,F>,
Fixed<I2,F>
>::type T;
const T l = T::from_base(lhs.to_raw());
const T r = T::from_base(rhs.to_raw());
return l + r;
}
template <size_t I1, size_t I2, size_t F>
typename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator-(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {
typedef typename std::conditional<
I1 >= I2,
Fixed<I1,F>,
Fixed<I2,F>
>::type T;
const T l = T::from_base(lhs.to_raw());
const T r = T::from_base(rhs.to_raw());
return l - r;
}
template <size_t I1, size_t I2, size_t F>
typename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator*(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {
typedef typename std::conditional<
I1 >= I2,
Fixed<I1,F>,
Fixed<I2,F>
>::type T;
const T l = T::from_base(lhs.to_raw());
const T r = T::from_base(rhs.to_raw());
return l * r;
}
template <size_t I1, size_t I2, size_t F>
typename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator/(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {
typedef typename std::conditional<
I1 >= I2,
Fixed<I1,F>,
Fixed<I2,F>
>::type T;
const T l = T::from_base(lhs.to_raw());
const T r = T::from_base(rhs.to_raw());
return l / r;
}
template <size_t I, size_t F>
std::ostream &operator<<(std::ostream &os, const Fixed<I,F> &f) {
os << f.to_double();
return os;
}
template <size_t I, size_t F>
const size_t Fixed<I,F>::fractional_bits;
template <size_t I, size_t F>
const size_t Fixed<I,F>::integer_bits;
template <size_t I, size_t F>
const size_t Fixed<I,F>::total_bits;
}
#endif
It is designed to be a near drop in replacement for floats/doubles and has a choose-able precision. It does make use of boost to add all the necessary math operator overloads, so you will need that as well (I believe for this it is just a header dependency, not a library dependency).
BTW, common usage could be something like this:
using namespace numeric;
typedef Fixed<16, 16> fixed;
fixed f;
The only real rule is that the numbers have to add up to a native size of your system, such as 8, 16, 32, 64.
A: The original version of Tricks of the Game Programming Gurus has an entire chapter on implementing fixed-point math.
A: In modern C++ implementations, there will be no performance penalty for using simple and lean abstractions, such as concrete classes. Fixed-point computation is precisely the place where using a properly engineered class will save you from lots of bugs.
Therefore, you should write a FixedPoint8 class. Test and debug it thoroughly. If you have to convince yourself of its performance as compared to using plain integers, measure it.
It will save you from many a trouble by moving the complexity of fixed-point calculation to a single place.
If you like, you can further increase the utility of your class by making it a template and replacing the old FixedPoint8 with, say, typedef FixedPoint<short, 8> FixedPoint8; But on your target architecture this is probably not necessary, so avoid the complexity of templates at first.
There is probably a good fixed point class somewhere in the internet - I'd start looking from the Boost libraries.
A: template <int precision = 8> class FixedPoint {
private:
int val_;
public:
inline FixedPoint(int val) : val_ (val << precision) {};
inline operator int() { return val_ >> precision; }
// Other operators...
};
A: Does your floating point code actually make use of the decimal point? If so:
First you have to read Randy Yates's paper on Intro to Fixed Point Math:
http://www.digitalsignallabs.com/fp.pdf
Then you need to do "profiling" on your floating point code to figure out the appropriate range of fixed-point values required at "critical" points in your code, e.g. U(5,3) = 5 bits to the left, 3 bits to the right, unsigned.
At this point, you can apply the arithmetic rules in the paper mentioned above; the rules specify how to interpret the bits which result from arithmetic operations. You can write macros or functions to perform the operations.
It's handy to keep the floating point version around, in order to compare the floating point vs fixed point results.
A: Whichever way you decide to go (I'd lean toward a typedef and some CPP macros for converting), you will need to be careful to convert back and forth with some discipline.
You might find that you never need to convert back and forth. Just imagine everything in the whole system is x256.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: Calculating percentile rankings in MS SQL What's the best way to calculate percentile rankings (e.g. the 90th percentile or the median score) in MSSQL 2005?
I'd like to be able to select the 25th, median, and 75th percentiles for a single column of scores (preferably in a single record so I can combine with average, max, and min). So for example, table output of the results might be:
Group MinScore MaxScore AvgScore pct25 median pct75
----- -------- -------- -------- ----- ------ -----
T1 52 96 74 68 76 84
T2 48 98 74 68 75 85
A: Check out the NTILE command -- it will give you percentiles pretty easily!
SELECT SalesOrderID,
OrderQty,
RowNum = Row_Number() OVER(Order By OrderQty),
Rnk = RANK() OVER(ORDER BY OrderQty),
DenseRnk = DENSE_RANK() OVER(ORDER BY OrderQty),
NTile4 = NTILE(4) OVER(ORDER BY OrderQty)
FROM Sales.SalesOrderDetail
WHERE SalesOrderID IN (43689, 63181)
A: How about this:
SELECT
   [Group],
   [75_percentile] = MAX(case when quartile = 3 then score else 0 end),
   [90_percentile] = MAX(case when decile = 9 then score else 0 end)
FROM (
   SELECT [Group], score,
          NTILE(4)  OVER(PARTITION BY [Group] ORDER BY score ASC) AS quartile,
          NTILE(10) OVER(PARTITION BY [Group] ORDER BY score ASC) AS decile
   FROM TheScore
) AS t
GROUP BY [Group]
A: I would think that this would be the simplest solution:
SELECT TOP N PERCENT FROM TheTable ORDER BY TheScore DESC
Where N = (100 - desired percentile). So if you wanted all rows in the 90th percentile, you'd select the top 10%.
I'm not sure what you mean by "preferably in a single record". Do you mean calculate which percentile a given score for a single record would fall into? e.g. do you want to be able to make statements like "your score is 83, which puts you in the 91st percentile." ?
EDIT: OK, I thought some more about your question and came up with this interpretation. Are you asking how to calculate the cutoff score for a particular percentile? e.g. something like this: to be in the 90th percentile you must have a score greater than 78.
If so, this query works. I dislike sub-queries though, so depending on what it was for, I'd probably try to find a more elegant solution. It does, however, return a single record with a single score.
-- Find the minimum score for all scores in the 90th percentile
SELECT Min(subq.TheScore) FROM
(SELECT TOP 10 PERCENT TheScore FROM TheTable
ORDER BY TheScore DESC) AS subq
A: I've been working on this a little more, and here's what I've come up with so far:
CREATE PROCEDURE [dbo].[TestGetPercentile]
@percentile as float,
@resultval as float output
AS
BEGIN
WITH scores(score, prev_rank, curr_rank, next_rank) AS (
SELECT dblScore,
(ROW_NUMBER() OVER ( ORDER BY dblScore ) - 1.0) / ((SELECT COUNT(*) FROM TestScores) + 1) [prev_rank],
(ROW_NUMBER() OVER ( ORDER BY dblScore ) + 0.0) / ((SELECT COUNT(*) FROM TestScores) + 1) [curr_rank],
(ROW_NUMBER() OVER ( ORDER BY dblScore ) + 1.0) / ((SELECT COUNT(*) FROM TestScores) + 1) [next_rank]
FROM TestScores
)
SELECT @resultval = (
SELECT TOP 1
CASE WHEN t1.score = t2.score
THEN t1.score
ELSE
t1.score + (t2.score - t1.score) * ((@percentile - t1.curr_rank) / (t2.curr_rank - t1.curr_rank))
END
FROM scores t1, scores t2
WHERE (t1.curr_rank = @percentile OR (t1.curr_rank < @percentile AND t1.next_rank > @percentile))
AND (t2.curr_rank = @percentile OR (t2.curr_rank > @percentile AND t2.prev_rank < @percentile))
)
END
Then in another stored procedure I do this:
DECLARE @pct25 float;
DECLARE @pct50 float;
DECLARE @pct75 float;
exec SurveyGetPercentile .25, @pct25 output
exec SurveyGetPercentile .50, @pct50 output
exec SurveyGetPercentile .75, @pct75 output
Select
min(dblScore) as minScore,
max(dblScore) as maxScore,
avg(dblScore) as avgScore,
@pct25 as percentile25,
@pct50 as percentile50,
@pct75 as percentile75
From TestScores
It still doesn't do quite what I'm looking for. This will get the stats for all tests; whereas I would like to be able to select from a TestScores table that has multiple different tests in it and get back the same stats for each different test (like I have in my example table in my question).
A: The 50th percentile is the same as the median. When computing another percentile, say the 80th, sort the data for the 80 percent of the data in ascending order and the other 20 percent in descending order, and take the average of the two middle values.
NB: The median query has been around for a long time, but I cannot remember where exactly I got it from; I have only amended it to compute other percentiles.
DECLARE @Temp TABLE(Id INT IDENTITY(1,1), DATA DECIMAL(10,5))
INSERT INTO @Temp VALUES(0)
INSERT INTO @Temp VALUES(2)
INSERT INTO @Temp VALUES(8)
INSERT INTO @Temp VALUES(4)
INSERT INTO @Temp VALUES(3)
INSERT INTO @Temp VALUES(6)
INSERT INTO @Temp VALUES(6)
INSERT INTO @Temp VALUES(6)
INSERT INTO @Temp VALUES(7)
INSERT INTO @Temp VALUES(0)
INSERT INTO @Temp VALUES(1)
INSERT INTO @Temp VALUES(NULL)
--50th percentile or median
SELECT ((
SELECT TOP 1 DATA
FROM (
SELECT TOP 50 PERCENT DATA
FROM @Temp
WHERE DATA IS NOT NULL
ORDER BY DATA
) AS A
ORDER BY DATA DESC) +
(
SELECT TOP 1 DATA
FROM (
SELECT TOP 50 PERCENT DATA
FROM @Temp
WHERE DATA IS NOT NULL
ORDER BY DATA DESC
) AS A
ORDER BY DATA ASC)) / 2.0
--90th percentile
SELECT ((
SELECT TOP 1 DATA
FROM (
SELECT TOP 90 PERCENT DATA
FROM @Temp
WHERE DATA IS NOT NULL
ORDER BY DATA
) AS A
ORDER BY DATA DESC) +
(
SELECT TOP 1 DATA
FROM (
SELECT TOP 10 PERCENT DATA
FROM @Temp
WHERE DATA IS NOT NULL
ORDER BY DATA DESC
) AS A
ORDER BY DATA ASC)) / 2.0
--75th percentile
SELECT ((
SELECT TOP 1 DATA
FROM (
SELECT TOP 75 PERCENT DATA
FROM @Temp
WHERE DATA IS NOT NULL
ORDER BY DATA
) AS A
ORDER BY DATA DESC) +
(
SELECT TOP 1 DATA
FROM (
SELECT TOP 25 PERCENT DATA
FROM @Temp
WHERE DATA IS NOT NULL
ORDER BY DATA DESC
) AS A
ORDER BY DATA ASC)) / 2.0
A: I'd probably use the SQL Server 2005
row_number() over (order by score ) / (select count(*) from scores)
or something along those lines.
A: i'd do something like:
select @n = count(*) from tbl1
select @median = @n / 2
select @p75 = @n * 3 / 4
select @p90 = @n * 9 / 10
select top 1 score from (select top @median score from tbl1 order by score asc) order by score desc
is this right?
A: Percentile is calculated by
(Rank -1) /(total_rows -1) when you sort values in ascending order.
The below query will give you percentile value between 0 and 1. Person with lowest marks will have 0 percentile.
SELECT Name, marks, (rank_1-1)/((select count(*) as total_1 from table)-1)as percentile_rank
from
(
SELECT Name,
Marks,
RANK() OVER (ORDER BY Marks) AS rank_1
from table
) as A
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Getting all types in a namespace via reflection How do you get all the classes in a namespace through reflection in C#?
A: Just like @aku answer, but using extension methods:
string @namespace = "...";
var types = Assembly.GetExecutingAssembly().GetTypes()
.Where(t => t.IsClass && t.Namespace == @namespace)
.ToList();
types.ForEach(t => Console.WriteLine(t.Name));
A: Get all classes by part of Namespace name in just one row:
var allClasses = Assembly.GetExecutingAssembly().GetTypes().Where(a => a.IsClass && a.Namespace != null && a.Namespace.Contains(@"..your namespace...")).ToList();
A: Following code prints names of classes in specified namespace defined in current assembly.
As other guys pointed out, a namespace can be scattered between different modules, so you need to get a list of assemblies first.
string nspace = "...";
var q = from t in Assembly.GetExecutingAssembly().GetTypes()
where t.IsClass && t.Namespace == nspace
select t;
q.ToList().ForEach(t => Console.WriteLine(t.Name));
A: using System.Reflection;
using System.Collections.Generic;
//...
static List<string> GetClasses(string nameSpace)
{
Assembly asm = Assembly.GetExecutingAssembly();
List<string> namespacelist = new List<string>();
List<string> classlist = new List<string>();
foreach (Type type in asm.GetTypes())
{
if (type.Namespace == nameSpace)
namespacelist.Add(type.Name);
}
foreach (string classname in namespacelist)
classlist.Add(classname);
return classlist;
}
NB: The above code illustrates what's going on. Were you to implement it, a simplified version can be used:
using System.Linq;
using System.Reflection;
using System.Collections.Generic;
//...
static IEnumerable<string> GetClasses(string nameSpace)
{
Assembly asm = Assembly.GetExecutingAssembly();
return asm.GetTypes()
.Where(type => type.Namespace == nameSpace)
.Select(type => type.Name);
}
A: Namespaces are actually rather passive in the design of the runtime and serve primarily as organizational tools. The Full Name of a type in .NET consists of the Namespace and Class/Enum/Etc. combined. If you only wish to go through a specific assembly, you would simply loop through the types returned by assembly.GetExportedTypes() checking the value of type.Namespace. If you were trying to go through all assemblies loaded in the current AppDomain it would involve using AppDomain.CurrentDomain.GetAssemblies()
A: For a specific Assembly, NameSpace and ClassName:
var assemblyName = "Some.Assembly.Name"
var nameSpace = "Some.Namespace.Name";
var className = "ClassNameFilter";
var asm = Assembly.Load(assemblyName);
var classes = asm.GetTypes().Where(p =>
p.Namespace == nameSpace &&
p.Name.Contains(className)
).ToList();
Note: The project must reference the assembly
A: //a simple combined code snippet
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Reflection;
namespace MustHaveAttributes
{
class Program
{
static void Main ( string[] args )
{
Console.WriteLine ( " START " );
// what is in the assembly
Assembly a = Assembly.Load ( "MustHaveAttributes" );
Type[] types = a.GetTypes ();
foreach (Type t in types)
{
Console.WriteLine ( "Type is {0}", t );
}
Console.WriteLine (
"{0} types found", types.Length );
#region Linq
//#region Action
//string @namespace = "MustHaveAttributes";
//var q = from t in Assembly.GetExecutingAssembly ().GetTypes ()
// where t.IsClass && t.Namespace == @namespace
// select t;
//q.ToList ().ForEach ( t => Console.WriteLine ( t.Name ) );
//#endregion Action
#endregion
Console.ReadLine ();
Console.WriteLine ( " HIT A KEY TO EXIT " );
Console.WriteLine ( " END " );
}
} //eof Program
class ClassOne
{
} //eof class
class ClassTwo
{
} //eof class
[System.AttributeUsage ( System.AttributeTargets.Class |
System.AttributeTargets.Struct, AllowMultiple = true )]
public class AttributeClass : System.Attribute
{
public string MustHaveDescription { get; set; }
public string MusHaveVersion { get; set; }
public AttributeClass ( string mustHaveDescription, string mustHaveVersion )
{
MustHaveDescription = mustHaveDescription;
MusHaveVersion = mustHaveVersion;
}
} //eof class
} //eof namespace
A: Here's a fix for LoaderException errors you're likely to find if one of the types sublasses a type in another assembly:
// Setup event handler to resolve assemblies
AppDomain.CurrentDomain.ReflectionOnlyAssemblyResolve += new ResolveEventHandler(CurrentDomain_ReflectionOnlyAssemblyResolve);
Assembly a = System.Reflection.Assembly.ReflectionOnlyLoadFrom(filename);
a.GetTypes();
// process types here
// method later in the class:
static Assembly CurrentDomain_ReflectionOnlyAssemblyResolve(object sender, ResolveEventArgs args)
{
return System.Reflection.Assembly.ReflectionOnlyLoad(args.Name);
}
That should help with loading types defined in other assemblies.
Hope that helps!
A: As FlySwat says, you can have the same namespace spanning multiple assemblies (e.g. System.Collections.Generic). You will have to load all those assemblies if they are not already loaded. So for a complete answer:
AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(t => t.GetTypes())
.Where(t => t.IsClass && t.Namespace == @namespace)
This should work unless you want classes of other domains. To get a list of all domains, follow this link.
A: You won't be able to get all types in a namespace, because a namespace can bridge multiple assemblies, but you can get all classes in an assembly and check to see if they belong to that namespace.
Assembly.GetTypes() works on the local assembly, or you can load an assembly first then call GetTypes() on it.
A: Quite simple
Type[] types = Assembly.Load(new AssemblyName("mynamespace.folder")).GetTypes();
foreach (var item in types)
{
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "305"
} |
Q: Worse sin: side effects or passing massive objects? I have a function inside a loop inside a function. The inner function acquires and stores a large vector of data in memory (as a global variable... I'm using "R" which is like "S-Plus"). The loop loops through a long list of data to be acquired. The outer function starts the process and passes in the list of datasets to be acquired.
for (dataset in list_of_datasets) {
for (datachunk in dataset) {
<process datachunk>
<store result? as vector? where?>
}
}
I programmed the inner function to store each dataset before moving to the next, so all the work of the outer function occurs as side effects on global variables... a big no-no. Is this better or worse than collecting and returning a giant, memory-hogging vector of vectors? Is there a superior third approach?
Would the answer change if I were storing the data vectors in a database rather than in memory? Ideally, I'd like to be able to terminate the function (or have it fail due to network timeouts) without losing all the information processed prior to termination.
A: Use variables in the outer function instead of global variables. This gets you the best of both approaches: you're not mutating global state, and you're not copying a big wad of data. If you have to exit early, just return the partial results.
(See the "Scope" section in the R manual: http://cran.r-project.org/doc/manuals/R-intro.html#Scope)
A: Remember your Knuth. "Premature optimization is the root of all programming evil."
Try the side effect free version. See if it meets your performance goals. If it does, great, you don't have a problem in the first place; if it doesn't, then use the side effects, and make a note for the next programmer that your hand was forced.
A: It's not going to make much difference to memory use, so you might as well make the code clean.
Since R has copy-on-modify for variables, modifying the global object will have the same memory implications as passing something up in return values.
If you store the outputs in a database (or even in a file) you won't have the memory use issues, and the data will be incrementally available as it is created, rather than just at the end. Whether it's faster with the database depends primarily on how much memory you are using: is the reduction in garbage collection going to pay for the cost of writing to disk?
There are both time and memory profilers in R, so you can see empirically what the impacts are.
A: I'm not sure I understand the question, but I have a couple of solutions.
*
*Inside the function, create a list of the vectors and return that.
*Inside the function, create an environment and store all the vectors inside of that. Just make sure that you return the environment in case of errors.
in R:
help(environment)
# You might do something like this:
outer <- function(datasets) {
  # create the return environment
  ret.env <- new.env()
  for(set in datasets) {
    tmp <- inner(set)
    # check for errors however you like here. You might have inner return a list, and
    # have the list contain an error component
    assign(set, tmp, envir=ret.env)
  }
  return(ret.env)
}
#The inner function might be defined like this
inner <- function(dataset) {
# I don't know what you are doing here, but lets pretend you are reading a data file
# that is named by dataset
filedata <- read.table(dataset, header=T)
return(filedata)
}
leif
A: FYI, here's a full sample toy solution that avoids side effects:
outerfunc <- function(names) {
templist <- list()
for (aname in names) {
templist[[aname]] <- innerfunc(aname)
}
templist
}
innerfunc <- function(aname) {
retval <- NULL
if ("one" %in% aname) retval <- c(1)
if ("two" %in% aname) retval <- c(1,2)
if ("three" %in% aname) retval <- c(1,2,3)
retval
}
names <- c("one","two","three")
name_vals <- outerfunc(names)
for (name in names) assign(name, name_vals[[name]])
A: Third approach: inner function returns a reference to the large array, which the next statement inside the loop then dereferences and stores wherever it's needed (ideally with a single pointer store and not by having to memcopy the entire array).
This gets rid of both the side effect and the passing of large datastructures.
A: It's tough to say definitively without knowing the language/compiler used. However, if you can simply pass a pointer/reference to the object that you're creating, then the size of the object itself has nothing to do with the speed of the function calls. Manipulating this data down the road could be a different story.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How are the vxWorks "kernel shell" and "host shell" different? In the vxWorks RTOS, there is a shell that allows you to issue command to your embedded system.
The documentation refers to kernel shell, host shell and target shell. What is the difference between the three?
A: The target shell and kernel shell are the same. They refer to a shell that runs on the target. You can connect to the shell using either a serial port, or a telnet session.
A task runs on the target and parses all the commands received and acts on them, outputting data back to the port.
The host shell is a process that runs on the development station. It communicates with the debug agent on the target. All the commands are actually parsed on the host and only simplified requests are sent to the target agent:
*
*Read/Write Memory
*Set/Remove Breakpoints
*Create/Delete/Suspend/Resume Tasks
*Invoke a function
This results in less real-time impact to the target.
Both shells allow the user to perform low level debugging (dissassembly, breakpoints, etc..) and invoke functions on the target.
A: There are some differences between the host shell and the target shell; you can use the h command to list the actual commands each shell supports.
The host shell supports more command-line editing functions, such as auto-completion and symbol lookup.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: .Net Gridview alpha sorting, it needs to be numerically sorted This is my first real question of need for any of those Gridview experts out there in the .NET world.
I am creating a GridView from codebehind and I am holding a bunch of numerical data in the columns, although I do add the comma in the number fields from codebehind. When I load it to the GridView, I have the sorting ability turned on, BUT the GridView chooses to ALPHA sort rather than sorting numerically because I add in those commas.
So I need help. Anyone willing to give this one a shot? I need to change some of my columns in the gridview to numerical sort rather than the alpha sort it is using.
A: If you do end up implementing your own comparer and sorting them as strings, the algorithm for treating numbers 'properly' is called Natural Sorting. Jeff wrote a pretty good entry on it here:
Sorting for Humans : Natural Sort Order
You can find a pretty good implementation in C# here:
http://www.codeproject.com/KB/string/NaturalSortComparer.aspx
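Not the C# comparer from those links, but as a rough illustration of what a natural-sort key does, here is a minimal sketch in Python (the sample strings are made up for the example):
import re

def natural_key(s):
    # split into digit and non-digit runs; compare digit runs as numbers
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', s)]

print(sorted(["row10", "row2", "row1"], key=natural_key))
# ['row1', 'row2', 'row10'] rather than the plain string order ['row1', 'row10', 'row2']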
A: Depending on exactly how you are doing sorting you could use one of the above methods, or you could return to the DB and get the sorting done there if the columns are actually a number type, then add your decoration to it later.
A: P-Invoke is your friend.
[DllImport("Shlwapi.dll", CharSet = CharSet.Unicode)]
private static extern int StrCmpLogicalW(string psz1, string psz2);
Then you could use it as your own comparer.
For example (in VS2005),
Array.Sort(stringArray, delegate(string left, string right)
{
    return StrCmpLogicalW(left, right);
});
A: Instead, I just resorted to the jQuery Table Sorter.
can be found here: tablesorter
A: I realize this is really old, but you're mixing data with presentation; that's what's screwing up the sort. Get the number out of SQL without adding commas, then add them in the presentation layer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What's the best way to export bug tracking data from hosted HP Quality Center? This question may be too product specifc but I'd like to know if anyone is exporting bug track data from HP Quality Center.
HP Quality Center (QC) has an old school COM API but I'd rather use a web service or maybe even screen scraper to export the data into an excel spreadsheet.
In any case, what's the best way to export bug tracking data from hosted HP Quality Center?
A: You can use this QC API Code to modify bugs/requirements.
TDAPIOLELib.TDConnection connection = new TDAPIOLELib.TDConnection();
connection.InitConnectionEx("http://SERVER:8080/qcbin");
connection.Login("USERNAME", "PASSWORD");
connection.Connect("QCDOMAIN", "QCPROJECT");
TDAPIOLELib.BugFactory bugFactory = connection.BugFactory as TDAPIOLELib.BugFactory;
TDAPIOLELib.List bugList = bugFactory.NewList("");
foreach (TDAPIOLELib.Bug bug in bugList)
{
// View / Modify the properties
// bug.ID, bug.Name, etc.
// Save them when done
// bug.Post();
}
A: Personally, I like the COM API and I use it to generate both Word and Excel reports. I have done some experiments with VS2005 and the results are encouraging.
If you don't want to go this route, I have a couple of suggestions.
*
*If you use the charting options (Analysis > Graphs). Each graph has a tab called data grid that lets you export data to Excel and a bunch of other data formats.
*If you are an admin, or friendly with your admin, you can dump the whole database into Access and then import into Excel. Of course, you'll lose all your table relationships, but it's better than nothing. It's also a really good way to learn the db schema.
A: Unfortunately QC doesn't expose any web-services at the moment.
I think the easiest way would be to query the DB directly. The data you are looking for is in the project's schema in BUG table.
QC also has an Excel add-in you might want to try, but it's mainly for adding defects from Excel to QC.
A: If manual export (i.e., not using a program) is possible for you, the following will be the easiest way to export defect data.
In QC 9.2 (maybe present in earlier versions, too), there is Export/All in the Defects menu, which exports defects in your defects grid into an Excel sheet.
The fields exported are those shown in the defects grid, which can be customized using the "Select Columns" button (looks like a green grid).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to determine which version of Direct3D is installed? We have an application which needs to use Direct3D. Specifically, it needs at least DirectX 9.0c version 4.09.0000.0904. While this should be present on all newer XP machines it might not be installed on older XP machines. How can I programmatically (using C++) determine if it is installed? I want to be able to give an information message to the user that Direct3D will not be available.
A: Call DirectXSetupGetVersion: http://msdn.microsoft.com/en-us/library/microsoft.directx_sdk.directsetup.directxsetupgetversion
You'll need to include dsetup.h
Here's the sample code from the site:
DWORD dwVersion;
DWORD dwRevision;
if (DirectXSetupGetVersion(&dwVersion, &dwRevision))
{
printf("DirectX version is %d.%d.%d.%d\n",
HIWORD(dwVersion), LOWORD(dwVersion),
HIWORD(dwRevision), LOWORD(dwRevision));
}
A: According to the DirectX 9.0 SDK (summer 2004) documentation, see the GetDXVer SDK sample at \Samples\Multimedia\DXMisc\GetDXVer.
A: A quick Google search turns up this article which identifies the location of the version number in the registry and then provides a case statement which maps the internal version number to the version number we're more familiar with.
Another quick Google search turns up an example in C++ for reading from the registry.
Enjoy...
A: Yes, use the mechanism shown in the DirectX Install sample in the March 2009 DirectX SDK. (Look under "System" category in the sample browser.)
Do not use the registry! That stuff is undocumented and not guaranteed to work.
The only supported way is to use the DirectSetup API, which is shown in the DirectX Install sample. I also cover this stuff in Chapter 24. Installation and Setup in my book The Direct3D Graphics Pipeline. You can download that chapter for free at the above URL.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Unittest causing sys.exit() No matter what I do sys.exit() is called by unittest, even the most trivial examples. I can't tell if my install is messed up or what is going on.
IDLE 1.2.2 ==== No Subprocess ====
>>> import unittest
>>>
>>> class Test(unittest.TestCase):
def testA(self):
a = 1
self.assertEqual(a,1)
>>> unittest.main()
option -n not recognized
Usage: idle.pyw [options] [test] [...]
Options:
-h, --help Show this message
-v, --verbose Verbose output
-q, --quiet Minimal output
Examples:
idle.pyw - run default set of tests
idle.pyw MyTestSuite - run suite 'MyTestSuite'
idle.pyw MyTestCase.testSomething - run MyTestCase.testSomething
idle.pyw MyTestCase - run all 'test*' test methods
in MyTestCase
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
unittest.main()
File "E:\Python25\lib\unittest.py", line 767, in __init__
self.parseArgs(argv)
File "E:\Python25\lib\unittest.py", line 796, in parseArgs
self.usageExit(msg)
File "E:\Python25\lib\unittest.py", line 773, in usageExit
sys.exit(2)
SystemExit: 2
>>>
A: Pop open the source code to unittest.py. unittest.main() is hard-coded to call sys.exit() after running all tests. Use TextTestRunner to run test suites from the prompt.
A: It's nice to be able to demonstrate that your tests work when first trying out the unittest module, and to know that you won't exit your Python shell. However, these solutions are version dependent.
Python 2.6
I'm using Python 2.6 at work, importing unittest2 as unittest (which is the unittest module supposedly found in Python 2.7).
The unittest.main(exit=False) doesn't work in Python 2.6's unittest2, while JoeSkora's solution does, and to reiterate it:
unittest.TextTestRunner().run(unittest.TestLoader().loadTestsFromTestCase(Test))
To break this down into its components and default arguments, with correct semantic names for the various composed objects:
import sys # sys.stderr is used in below default args
test_loader = unittest.TestLoader()
loaded_test_suite = test_loader.loadTestsFromTestCase(Test)
# Default args:
text_test_runner = unittest.TextTestRunner(stream=sys.stderr,
descriptions=True,
verbosity=1)
text_test_runner.run(loaded_test_suite)
Python 2.7 and 3
In Python 2.7 and higher, the following should work.
unittest.main(exit=False)
A: In new Python 2.7 release, unittest.main() has a new argument.
If 'exit' is set to False, sys.exit() is not called during the execution of unittest.main().
A: try:
    sys.exit()
except SystemExit:
    print('Simple as that, but you should really use a TestRunner instead')
A: Your example is exiting on my install too. I can make it execute the tests and stay within Python by changing
unittest.main()
to
unittest.TextTestRunner().run(unittest.TestLoader().loadTestsFromTestCase(Test))
More information is available here in the Python Library Reference.
A: Don't try to run unittest.main() from IDLE. It's trying to access sys.argv, and it's getting the args that IDLE was started with. Either run your tests in a different way from IDLE, or call unittest.main() in its own Python process.
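If you do want to keep working interactively, a common workaround (assuming Python 2.7+ where unittest.main() accepts argv and exit) is to hand it a fake argv so it ignores the host program's arguments and never calls sys.exit():
import unittest

class Test(unittest.TestCase):
    def testA(self):
        self.assertEqual(1, 1)

# the first argv entry stands in for the program name and is ignored;
# exit=False keeps unittest.main() from calling sys.exit() afterwards
unittest.main(argv=['first-arg-is-ignored'], exit=False)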
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Wildcard Subdomains I know there have been a few threads on this before, but I have tried absolutely everything suggested (that I could find) and nothing has worked for me thus far...
With that in mind, here is what I'm trying to do:
First, I want to allow users to publish pages and give them each a subdomain of their choice (ex: user.example.com). From what I can gather, the best way to do this is to map user.example.com to example.com/user with mod_rewrite and .htaccess - is that correct?
If that is correct, can somebody give me explicit instructions on how to do this?
Also, I am doing all of my development locally, using MAMP, so if somebody could tell me how to set up my local environment to work in the same manner (I've read this is more difficult), I would greatly appreciate it. Honestly, I have been trying everything to no avail, and since this is my first time doing something like this, I am completely lost.
Some of these answers have been REALLY helpful, but for the system I have in mind, manually adding a subdomain for each user is not an option. What I'm really asking is how to do this on the fly, and redirect wildcard.example.com to example.com/wildcard -- the way Tumblr is set up is a perfect example of what I'd like to do.
A: I realize that I'm pretty late responding to this question, but I had the same problem in regards to a local development solution. In another SO thread I found better solutions and thought I would share them for anyone with the same question in the future:
VMware owned wild card domain that resolves any subdomain to 127.0.0.1:
vcap.me resolves to 127.0.0.1
www.vcap.me resolves to 127.0.0.1
or for more versatility 37 Signals owns a domain to map any subdomain to any given IP using a specific format:
127.0.0.1.xip.io resolves to 127.0.0.1
www.127.0.0.1.xip.io resolves to 127.0.0.1
db.192.168.0.1.xip.io resolves to 192.168.0.1
see xip.io for more info
A: I am on Ubuntu 16.04, and since 14.04 I've been using the solution provided by Dave Evans here and it works fine for me.
*
*Install dnsmasq
sudo apt-get install dnsmasq
*Create new file localhost.conf under /etc/dnsmasq.d dir with the following line
#file /etc/dnsmasq.d/localhost.conf
address=/localhost/127.0.0.1
*Edit /etc/dhcp/dhclient.conf and add the following line
prepend domain-name-servers 127.0.0.1;
(You’ll probably find that this line is already there and you just need to uncomment it.)
*Last one is restart the service
sudo systemctl restart dnsmasq
sudo dhclient
Finally, you should check if it's working.
dig whatever.localhost
note:
If you want to use it on your web server, you simply need to change the 127.0.0.1 to your actual IP address.
A: As far as how to set up the DNS subdomain wildcard, that would be a function of your DNS hosting provider. This would be different steps depending on which hosting provider you have and would be a better question for them.
Once you've set that up with the DNS host, from your web app you really are just URL rewriting, which can be done with some sort of module for the web server itself, such as isapi rewrite if you're on IIS (this would be the preferred route if possible). You could also handle rewriting at the application level as well (like using routing if on ASP.NET).
You'd rewrite the URL so http://myname.example.com would become http://example.com/something.aspx?name=myname or something. From there on out, you just handle it as if the myname value was in the query string as normal. Does that make sense? Hope I didn't misunderstand what you're after.
I am not suggesting that you create a subdomain for each user, but instead create a wildcard subdomain for the domain itself, so anything.example.com (basically *.example.com) goes to your site. I have several domains setup with MyDomain. Their instructions for setting this up is like this:
Yes, you can configure a wild card but it will only work if you set it up as an A Record. Wildcards do not work with a CNAME. To use a wildcard, you use the asterisk character '*'. For example, if you create an A Record using a wild card, *.example.com, anything that is entered in the place where the '*' is located will resolve to the specified IP address. So if you enter 'www', 'ftp', 'site', or anything else before the domain name, it will always resolve to the IP address.
I have some that are setup in just this way, having *.example.com go to my site. I then can read the base URL in my web app to see that ryan.example.com is what was currently accessed, or that bill.example.com is what was used. I can then either:
*
*Use URL rewriting so that the subdomain becomes a part of the query string OR
*Simply read the host value from the accessed URL and perform some logic based on that value.
Does that make sense? I have several sites set up in just this exact way: create the wildcard for the domain with the DNS host and then simply read the host, or base domain from the URL to decide what to display based on the subdomain (which was actually a username)
Edit 2:
There is no way to do this without a DNS entry. The "online world" needs to know that name1.example.com, name2.example.com,..., nameN.example.com all go to the IP address for your server. The only way to do this is with the appropriate DNS entry. You have to add the wildcard DNS entry for your domain with your DNS host. Then it's just a matter of you reading the subdomain from the URL and taking the appropriate action in your code.
A: The best thing to do if you are running *AMP is to do what Thomas suggests and do virtual hosts in Apache. You can do this either with or without the redirect you describe.
Virtual hosts
Most likely you will want to do name-based virtual hosts, as it's easiest to set up and only requires one IP address (so will also be easy to set up and test on your local MAMP machine). IP-based virtual hosts is better in some other respects, but you have to have an IP address for each domain.
This Wikipedia page discusses the differences and links to a good basic walk-thru of how to do name-based vhosts at the bottom.
On your local machine for testing, you'll also have to set up fake DNS names in /etc/hosts for your fake test domain names. i.e. if you have Apache listening on localhost and set up vhost1.test.domain and vhost2.test.domain in your Apache configs, you'd just add these domains to the 127.0.0.1 line in /etc/hosts, after localhost:
127.0.0.1 localhost vhost1.test.domain vhost2.test.domain
Once you've done the /etc/hosts edit and added the name-based virtual host configs to your Apache configuration file(s), that's it, restart Apache and your test domains should work.
Redirect with mod_rewrite
If you want to do redirects with mod_rewrite (so that user.example.com isn't directly hosted and instead redirects to example.com/user), then you will also need to do a RewriteCond to match the subdomain and redirect it:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^subdomain\.example\.com
RewriteRule ^(.*)$ http://example.com/subdomain$1 [R]
You can put this in a .htaccess or in your main Apache config.
You will need to add a pair of rules like the last two for each subdomain you want to redirect. Or, you may be able to capture the subdomain in a RewriteCond to be able to use one wildcard rule to redirect *.example.com to example.com/* -- but that smells really bad to me from a security standpoint.
All together, vhosts and redirect
It's better to be more explicit and set up a virtual host configuration section for each hostname you want to listen for, and put the rewrite rules for each of these hostnames inside its virtual host config. (It is always more secure and faster to put this kind of stuff inside your Apache config and not .htaccess, if you can help it -- .htaccess slows performance because Apache is constantly scouring the filesystem for .htaccess files and reparsing them, and it's less secure because these can be screwed up by users.)
All together like that, the vhost config inside your Apache configs would be:
NameVirtualHost 127.0.0.1:80
# Your "default" configuration must go first
<VirtualHost 127.0.0.1:80>
ServerName example.com
ServerAlias www.example.com
DocumentRoot /www/siteroot
# etc.
</VirtualHost>
# First subdomain you want to redirect
<VirtualHost 127.0.0.1:80>
ServerName vhost1.example.com
RewriteEngine On
RewriteRule ^(.*)$ http://example.com/vhost1$1 [R]
</VirtualHost>
# Second subdomain you want to redirect
<VirtualHost 127.0.0.1:80>
ServerName vhost2.example.com
RewriteEngine On
RewriteRule ^(.*)$ http://example.com/vhost2$1 [R]
</VirtualHost>
A: I had to do exactly the same for one of my sites. You can follow the following steps
*
*If you have cPanel on your server, create a subdomain *; if not, you'll have to set up an A record in your DNS (for BIND see http://ma.tt/2003/10/wildcard-dns-and-sub-domains/). On your dev server you'd be far better off faking subdomains by adding each one to your hosts file.
*(If you used cPanel you won't have to do this.) You'll have to add something like the following to your Apache vhosts file. It largely depends on what type of server (shared or not) you're running. THE FOLLOWING CODE IS NOT COMPLETE. IT'S JUST TO GIVE DIRECTION. NOTE: ServerAlias example.com *.example.com is important.
<VirtualHost 127.0.0.1:80>
DocumentRoot /var/www/
ServerName example.com
ServerAlias example.com *.example.com
</VirtualHost>
*Next, you can use a PHP script to check the "Host" header, find out the subdomain, and serve content accordingly.
A:
First, I want to allow users to
publish pages and give them each a
subdomain of their choice (ex:
user.mysite.com). From what I can
gather, the best way to do this is to
map user.mysite.com to mysite.com/user
with mod_rewrite and .htaccess - is
that correct?
You may be better off using virtual hosts. That way, each user can have a webserver configuration pretty much independent of others.
The syntax goes something like this:
<VirtualHost *:80>
DocumentRoot /var/www/user
ServerName user.mysite.com
...
</VirtualHost>
A: From what I have seen on many webhosts, they setup a virtual host on apache.
So if your www.mysite.com is served from /var/www, you could create a folder for each user. Then map the virtual host to that folder.
With that, both mysite.com/user and user.mysite.com works.
As for your test enviroment, if you are on windows, I would suggest editing your HOSTS file to map mysite.com to your local PC (127.0.0.1), as well as any subdomains you set up for testing.
A: The solution I found for Ubuntu 18.04 is similar to this one but involves NetworkManager config:
*
*Edit the file /etc/NetworkManager/NetworkManager.conf, and add the line dns=dnsmasq to the [main] section
sudo editor /etc/NetworkManager/NetworkManager.conf
should look like this:
[main]
plugins=ifupdown,keyfile
dns=dnsmasq
...
*Start using NetworkManager's resolv.conf
sudo rm /etc/resolv.conf
sudo ln -s /var/run/NetworkManager/resolv.conf /etc/resolv.conf
*Create a file with your wildcard configuration
echo 'address=/.localhost/127.0.0.1' | sudo tee /etc/NetworkManager/dnsmasq.d/localhost-wildcard.conf
*Reload NetworkManager configuration
sudo systemctl reload NetworkManager
*Test it
dig localdomain.localhost
You can also add any other domain, quite useful for some types of authentication when using a local development setup.
echo 'address=/.local-dev.workdomain.com/127.0.0.1' | sudo tee /etc/NetworkManager/dnsmasq.d/workdomain-wildcard.conf
Then this works:
dig petproject.local-dev.workdomain.com
;; ANSWER SECTION:
petproject.local-dev.workdomain.com. 0 IN A 127.0.0.1
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Dealing with Date only dates across timezones in .Net Ok - a bit of a mouthful. So the problem I have is this - I need to store a Date for expiry where only the date part is required and I don't want any timezone conversion. So for example if I have an expiry set to "08 March 2008" I want that value to be returned to any client - no matter what their timezone is.
The problem with remoting it as a DateTime is that it gets stored/sent as "08 March 2008 00:00", which means for clients connecting from any timezone West of me it gets converted and therefore flipped to "07 March 2008"
Any suggestions for cleanly handling this scenario ? Obviously sending it as a string would work. anything else ?
thanks,
Ian
A: I'm not sure what remoting technology you're referring to, but this is a real problem with WCF, which only currently supports serializing DateTime as xs:DateTime, inappropriate for a date-only value where you are not interested in timezones.
.NET 3.5 introduces the new DateTimeOffset type, which is good for transferring a DateTime between timezones, but doesn't help with the date-only scenario.
Ideally WCF needs to optionally support xs:Date for serializing dates as requested here:
http://connect.microsoft.com/wcf/feedback/ViewFeedback.aspx?FeedbackID=349215
A: I do it like this: Whenever I have a date in memory or stored in a file it is always in a DateTime in UTC. When I show the date to the user it is always a string. When I convert between the string and the DateTime I also do the time zone conversion.
This way I never have to deal with time zones in my logic, only in the presentation.
A: You can send it as UTC Time
dateTime1.ToUniversalTime()
A: I think sending it as a timestamp string would be the quickest/easiest way, although you could look at forcing a locale to stop the time conversion from occurring.
A: You could create a struct Date that provides access to the details you want/need, like:
public struct Date
{
public int Month; //or string instead of int
public int Day;
public int Year;
}
This is lightweight, flexible and gives you full control.
A: Why don't you send it as a string then convert it back to a date type as needed? This way it will not be converted over different timezones. Keep it simple.
Edit: I like the Struct idea, allows for good functionality.
A: The easiest way I've handled this on apps in the past is to just store the date as a string in yyyy-mm-dd format. It's unambigious and doesn't get automatically translated by anything.
Yes, it's a pain...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Parsing HTTP Headers I've had a new found interest in building a small, efficient web server in C and have had some trouble parsing POST methods from the HTTP Header. Would anyone have any advice as to how to handle retrieving the name/value pairs from the "posted" data?
POST /test HTTP/1.1
Host: test-domain.com:7017
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://test-domain.com:7017/index.html
Cookie: __utma=43166241.217413299.1220726314.1221171690.1221200181.16; __utmz=43166241.1220726314.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none)
Cache-Control: max-age=0
Content-Type: application/x-www-form-urlencoded
Content-Length: 25
field1=asfd&field2=a3f3f3
// ^-this
I see no tangible way to retrieve the bottom line as a whole and ensure that it works every time. I'm not a fan of hard-coding in anything.
A: Once you have Content-Length in the header, you know the amount of bytes to be read right after the blank line. If, for any reason (GET or POST) Content-Length is not in the header, it means there's nothing to read after the blank line (crlf).
A: You can retrieve the name/value pairs by searching for newline newline or more specifically \r\n\r\n (after this, the body of the message will start).
Then you can simply split the list by the &, and then split each of those returned strings between the = for name/value pairs.
See the HTTP 1.1 RFC.
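As a quick sanity check of that splitting scheme (sketched in Python rather than C, and skipping URL-decoding of %xx escapes and '+'):
raw = ("POST /test HTTP/1.1\r\n"
       "Content-Type: application/x-www-form-urlencoded\r\n"
       "Content-Length: 25\r\n"
       "\r\n"
       "field1=asfd&field2=a3f3f3")

headers, _, body = raw.partition("\r\n\r\n")  # the body starts after the blank line
pairs = dict(pair.split("=", 1) for pair in body.split("&") if pair)
print(pairs)  # {'field1': 'asfd', 'field2': 'a3f3f3'}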
A: You need to keep parsing the stream as headers until you see the blank line. The rest is the POST data.
You need to write a little parser for the post data. You can use C library routines to do something quick and dirty, like index, strtok, and sscanf. If you have room for it in your definition of "small", you could do something more elaborate with a regular expression library, or even with flex and bison.
At least, I think this kind of answers your question.
A: IETF RFC notwithstanding, here is a more to-the-point answer. Assuming that you realize that there is always an extra \r\n after the Content-Length line in the header, you should be able to do the work to isolate it into a char* variable named data. This is where we start.
char *data = "f1=asfd&f2=a3f3f3";
char f1[100];
char f2[100];
sscanf(data, "%99[^&]&%99s", f1, f2); // get the field tuples

char f1_name[50];
char f1_data[50];
sscanf(f1, "%49[^=]=%49s", f1_name, f1_data); // split "f1=asfd" into name and value

char f2_name[50];
char f2_data[50];
sscanf(f2, "%49[^=]=%49s", f2_name, f2_data);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Screen scraping a command window using .net managed code I am writing a program in dot net that will execute scripts and command line programs using the framework 2.0's Process object. I want to be able to access the screen buffers of the process in my program. I've investigated this and it appears that I need to access console stdout and stderr buffers. Anyone know how this is accomplished using managed code?
I think I need to use the AttachConsole and the ReadConsoleOutput of the windows console attached to the task in order to read a block of character and attribute data from the console screen. I need to do this in managed code.
See http://msdn.microsoft.com/en-us/library/ms684965(VS.85).aspx
A: You can accomplish this using the StandardError, StandardOutput, and StandardInput properties on the System.Diagnostics.Process class.
MSDN has a nice example of redirecting standard in and out of a process.
Note that you can only redirect the output of processes that you started. External processes that you didn't launch can't have their stdout redirected after the fact.
Also note that to use StandardInput, you must set ProcessStartInfo.UseShellExecute to false, and you must set ProcessStartInfo.RedirectStandardInput to true. Otherwise, writing to the StandardInput stream throws an exception.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Elegant method for drawing hourly bar chart from time-interval data? I have a list of timesheet entries that show a start and stop time. This is sitting in a MySQL database. I need to create bar charts based on this data with the 24 hours of the day along the bottom and the amount of man-hours worked for each hour of the day.
For example, if Alice worked a job from 15:30 to 19:30 and Bob worked from 12:15 to 17:00, the chart would look like this:
I have a WTFey solution right now that involves a spreadsheet going out to column DY or something like that. The needed resolution is 15-minute intervals.
I'm assuming this is something best done in the database then exported for chart creation. Let me know if I'm missing any details. Thanks.
A: Create a table with just time in it from midnight to midnight containing each minute of the day. In the data warehouse world we would call this a time dimension. Here's an example:
TIME_DIM
-id
-time_of_day
-interval_15
-interval_30
an example of the data in the table would be
id time_of_day interval_15 interval_30
1 00:00 00:00 00:00
...
30 00:23 00:15 00:00
...
100 05:44 05:30 05:30
Then all you have to do is join your table to the time dimension and then group by interval_15. For example:
SELECT b.interval_15, count(*)
FROM my_data_table a
INNER JOIN time_dim b ON a.time_field = b.time_of_day
WHERE a.date_field = now()
GROUP BY b.interval_15
A: I came up with a pseudocode solution, hope it helps.
create an array named timetable with 24 entries
initialise timetable to zero
for each user in SQLtable
firsthour = user.firsthour
lasthour = user.lasthour
firstminutes = 4 - (rounded down integer(user.firstminutes/15))
lastminutes = rounded down integer(user.lastminutes/15)
timetable(firsthour) = timetable(firsthour) + firstminutes
timetable(lasthour) = timetable(lasthour) + lastminutes
for index=firsthour+1 to lasthour-1
timetable(index) = timetable(index) + 4
next index
next user
Now the timetable array holds the values you desire in 15-minute granularity, i.e. a value of 4 = 1 hour, 5 = 1 hour 15 minutes, 14 = 3 hours 30 minutes.
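Here's roughly the same idea as runnable Python, purely as a sketch, assuming each timesheet entry arrives as a (start, stop) pair of datetimes within one day:
from datetime import datetime

def man_hours_per_hour(entries):
    quarters = [0] * 96  # one bucket per 15-minute slot
    for start, stop in entries:
        first = start.hour * 4 + start.minute // 15
        last = stop.hour * 4 + stop.minute // 15
        for q in range(first, last):
            quarters[q] += 1
    # collapse the quarter-hour counts into man-hours per hour
    return [sum(quarters[h * 4:(h + 1) * 4]) / 4.0 for h in range(24)]

entries = [
    (datetime(2008, 9, 17, 15, 30), datetime(2008, 9, 17, 19, 30)),  # Alice
    (datetime(2008, 9, 17, 12, 15), datetime(2008, 9, 17, 17, 0)),   # Bob
]
for hour, worked in enumerate(man_hours_per_hour(entries)):
    if worked:
        print("%02d:00 %.2f" % (hour, worked))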
A: Here's another pseudocode solution from a different angle; a bit more intensive because it does 96 queries for every 24hr period:
results = []
for time in range(0, 24, .25):
amount = mysql("select count(*) from User_Activity_Table where time >= start_time and time <= end_time")
results.append(amount)
A: How about this:
Use that "times" table, but with two columns, containing the 15-minute intervals. The from_times are the 15-minutely times, the to_times are a second before the next from_times. For example 12:30:00 to 12:44:59.
Now get your person work table, which I've called "activity" here, with start_time and end_time columns.
I added values for Alice and Bob as per the original question.
Here's the query from MySQL:
SELECT HOUR(times.from_time) AS 'TIME', count(*) / 4 AS 'HOURS'
FROM times
JOIN activity
ON times.from_time >= activity.start_time AND
times.to_time <= activity.end_time
GROUP BY HOUR(times.from_time)
ORDER BY HOUR(times.from_time)
which gives me this:
TIME HOURS
12 0.7500
13 1.0000
14 1.0000
15 1.5000
16 2.0000
17 1.0000
18 1.0000
19 0.7500
Looks about right...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to convert local time string to UTC? How do I convert a datetime string in local time to a string in UTC time?
I'm sure I've done this before, but can't find it and SO will hopefully help me (and others) do that in future.
Clarification: For example, if I have 2008-09-17 14:02:00 in my local timezone (+10), I'd like to generate a string with the equivalent UTC time: 2008-09-17 04:02:00.
Also, from http://lucumr.pocoo.org/2011/7/15/eppur-si-muove/, note that in general this isn't possible as with DST and other issues there is no unique conversion from local time to UTC time.
A: import time
import datetime
def Local2UTC(LocalTime):
EpochSecond = time.mktime(LocalTime.timetuple())
utcTime = datetime.datetime.utcfromtimestamp(EpochSecond)
return utcTime
>>> LocalTime = datetime.datetime.now()
>>> UTCTime = Local2UTC(LocalTime)
>>> LocalTime.ctime()
'Thu Feb 3 22:33:46 2011'
>>> UTCTime.ctime()
'Fri Feb 4 05:33:46 2011'
A: Thanks @rofly, the full conversion from string to string is as follows:
import time
time.strftime("%Y-%m-%d %H:%M:%S",
time.gmtime(time.mktime(time.strptime("2008-09-17 14:04:00",
"%Y-%m-%d %H:%M:%S"))))
My summary of the time/calendar functions:
time.strptime
string --> tuple (no timezone applied, so matches string)
time.mktime
local time tuple --> seconds since epoch (always local time)
time.gmtime
seconds since epoch --> tuple in UTC
and
calendar.timegm
tuple in UTC --> seconds since epoch
time.localtime
seconds since epoch --> tuple in local timezone
A: if you prefer datetime.datetime:
import time
from datetime import datetime

dt = datetime.strptime("2008-09-17 14:04:00", "%Y-%m-%d %H:%M:%S")
utc_struct_time = time.gmtime(time.mktime(dt.timetuple()))
utc_dt = datetime.fromtimestamp(time.mktime(utc_struct_time))
print utc_dt.strftime("%Y-%m-%d %H:%M:%S")
A: Simple
I did it like this:
>>> utc_delta = datetime.utcnow()-datetime.now()
>>> utc_time = datetime(2008, 9, 17, 14, 2, 0) + utc_delta
>>> print(utc_time)
2008-09-17 19:01:59.999996
Fancy Implementation
If you want to get fancy, you can turn this into a functor:
class to_utc():
utc_delta = datetime.utcnow() - datetime.now()
def __call__(cls, t):
return t + cls.utc_delta
Result:
>>> utc_converter = to_utc()
>>> print(utc_converter(datetime(2008, 9, 17, 14, 2, 0)))
2008-09-17 19:01:59.999996
A: Here's an example with the native zoneinfo module in Python3.9:
from datetime import datetime
from zoneinfo import ZoneInfo
# Get timezone we're trying to convert from
local_tz = ZoneInfo("America/New_York")
# UTC timezone
utc_tz = ZoneInfo("UTC")
dt = datetime.strptime("2021-09-20 17:20:00","%Y-%m-%d %H:%M:%S")
dt = dt.replace(tzinfo=local_tz)
dt_utc = dt.astimezone(utc_tz)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))
print(dt_utc.strftime("%Y-%m-%d %H:%M:%S"))
This may be preferred over just using dt.astimezone() in situations where the timezone you're converting from isn't reflective of your system's local timezone. Not having to rely on external libraries is nice too.
Note: This may not work on Windows systems, since zoneinfo relies on an IANA database that may not be present. The tzdata package can be installed as a workaround. It's a first-party package, but is not in the standard library.
A: Here's a summary of common Python time conversions.
Some methods drop fractions of seconds, and are marked with (s). An explicit formula such as ts = (d - epoch) / unit can be used instead (thanks jfs).
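For example, that formula spelled out as a small Python 3 sketch with aware datetimes (the sample value is arbitrary):
from datetime import datetime, timezone, timedelta

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
dt = datetime(2008, 9, 17, 4, 2, 30, 500000, tzinfo=timezone.utc)

# fractional seconds survive here, unlike the methods marked (s) below
ts = (dt - epoch) / timedelta(seconds=1)
print(ts)              # 1221624150.5
print(dt.timestamp())  # same value on Python 3.3+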
*
*struct_time (UTC) → POSIX (s):calendar.timegm(struct_time)
*Naïve datetime (local) → POSIX (s):calendar.timegm(stz.localize(dt, is_dst=None).utctimetuple())(exception during DST transitions, see comment from jfs)
*Naïve datetime (UTC) → POSIX (s):calendar.timegm(dt.utctimetuple())
*Aware datetime → POSIX (s):calendar.timegm(dt.utctimetuple())
*POSIX → struct_time (UTC, s):time.gmtime(t)(see comment from jfs)
*Naïve datetime (local) → struct_time (UTC, s):stz.localize(dt, is_dst=None).utctimetuple()(exception during DST transitions, see comment from jfs)
*Naïve datetime (UTC) → struct_time (UTC, s):dt.utctimetuple()
*Aware datetime → struct_time (UTC, s):dt.utctimetuple()
*POSIX → Naïve datetime (local):datetime.fromtimestamp(t, None)(may fail in certain conditions, see comment from jfs below)
*struct_time (UTC) → Naïve datetime (local, s):datetime.datetime(struct_time[:6], tzinfo=UTC).astimezone(tz).replace(tzinfo=None)(can't represent leap seconds, see comment from jfs)
*Naïve datetime (UTC) → Naïve datetime (local):dt.replace(tzinfo=UTC).astimezone(tz).replace(tzinfo=None)
*Aware datetime → Naïve datetime (local):dt.astimezone(tz).replace(tzinfo=None)
*POSIX → Naïve datetime (UTC):datetime.utcfromtimestamp(t)
*struct_time (UTC) → Naïve datetime (UTC, s):datetime.datetime(*struct_time[:6])(can't represent leap seconds, see comment from jfs)
*Naïve datetime (local) → Naïve datetime (UTC):stz.localize(dt, is_dst=None).astimezone(UTC).replace(tzinfo=None)(exception during DST transitions, see comment from jfs)
*Aware datetime → Naïve datetime (UTC):dt.astimezone(UTC).replace(tzinfo=None)
*POSIX → Aware datetime:datetime.fromtimestamp(t, tz)(may fail for non-pytz timezones)
*struct_time (UTC) → Aware datetime (s):datetime.datetime(struct_time[:6], tzinfo=UTC).astimezone(tz)(can't represent leap seconds, see comment from jfs)
*Naïve datetime (local) → Aware datetime:stz.localize(dt, is_dst=None)(exception during DST transitions, see comment from jfs)
*Naïve datetime (UTC) → Aware datetime:dt.replace(tzinfo=UTC)
Source: taaviburns.ca
A: How about -
time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(seconds))
if seconds is None then it converts the local time to UTC time else converts the passed in time to UTC.
A: You can do it with:
>>> from time import strftime, gmtime, localtime
>>> strftime('%H:%M:%S', gmtime()) #UTC time
>>> strftime('%H:%M:%S', localtime()) # localtime
A: First, parse the string into a naive datetime object. This is an instance of datetime.datetime with no attached timezone information. See its documentation.
Use the pytz module, which comes with a full list of time zones + UTC. Figure out what the local timezone is, construct a timezone object from it, and manipulate and attach it to the naive datetime.
Finally, use datetime.astimezone() method to convert the datetime to UTC.
Source code, using local timezone "America/Los_Angeles", for the string "2001-2-3 10:11:12":
from datetime import datetime
import pytz
local = pytz.timezone("America/Los_Angeles")
naive = datetime.strptime("2001-2-3 10:11:12", "%Y-%m-%d %H:%M:%S")
local_dt = local.localize(naive, is_dst=None)
utc_dt = local_dt.astimezone(pytz.utc)
From there, you can use the strftime() method to format the UTC datetime as needed:
utc_dt.strftime("%Y-%m-%d %H:%M:%S")
A: In Python 3.9.0, after you've parsed your local time local_time into a datetime.datetime object, just use local_time.astimezone(datetime.timezone.utc).
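Applied to the example string from the question, that could look like this (assuming the naive value represents the system's local time):
from datetime import datetime, timezone

local_dt = datetime.strptime("2008-09-17 14:02:00", "%Y-%m-%d %H:%M:%S")
# astimezone() on a naive datetime assumes local time (Python 3.6+)
utc_dt = local_dt.astimezone(timezone.utc)
print(utc_dt.strftime("%Y-%m-%d %H:%M:%S"))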
A: I'm having good luck with dateutil (which is widely recommended on SO for other related questions):
from datetime import *
from dateutil import *
from dateutil.tz import *
# METHOD 1: Hardcode zones:
utc_zone = tz.gettz('UTC')
local_zone = tz.gettz('America/Chicago')
# METHOD 2: Auto-detect zones:
utc_zone = tz.tzutc()
local_zone = tz.tzlocal()
# Convert time string to datetime
local_time = datetime.strptime("2008-09-17 14:02:00", '%Y-%m-%d %H:%M:%S')
# Tell the datetime object that it's in local time zone since
# datetime objects are 'naive' by default
local_time = local_time.replace(tzinfo=local_zone)
# Convert time to UTC
utc_time = local_time.astimezone(utc_zone)
# Generate UTC time string
utc_string = utc_time.strftime('%Y-%m-%d %H:%M:%S')
(Code was derived from this answer to Convert UTC datetime string to local datetime)
A: def local_to_utc(t):
secs = time.mktime(t)
return time.gmtime(secs)
def utc_to_local(t):
secs = calendar.timegm(t)
return time.localtime(secs)
Source: http://feihonghsu.blogspot.com/2008/02/converting-from-local-time-to-utc.html
Example usage from bd808: If your source is a datetime.datetime object t, call as:
local_to_utc(t.timetuple())
A: An option available since Python 3.6: datetime.astimezone(tz=None) can be used to get an aware datetime object representing local time (docs). This can then easily be converted to UTC.
from datetime import datetime, timezone
s = "2008-09-17 14:02:00"
# to datetime object:
dt = datetime.fromisoformat(s) # Python 3.7
# I'm on time zone Europe/Berlin; CEST/UTC+2 during summer 2008
dt = dt.astimezone()
print(dt)
# 2008-09-17 14:02:00+02:00
# ...and to UTC:
dtutc = dt.astimezone(timezone.utc)
print(dtutc)
# 2008-09-17 12:02:00+00:00
*
*Note: while the described conversion to UTC works perfectly fine, .astimezone() sets tzinfo of the datetime object to a timedelta-derived timezone - so don't expect any "DST-awareness" from it. Be careful with timedelta arithmetic here. Unless you convert to UTC first of course.
*related: Get local time zone name on Windows (Python 3.9 zoneinfo)
A: NOTE -- As of 2020 you should not be using .utcnow() or .utcfromtimestamp(xxx). As you've presumably moved on to Python 3, you should be using timezone-aware datetime objects.
>>> from datetime import datetime, timezone
>>>
>>> # alternative to '.utcnow()'
>>> dt_now = datetime.now(timezone.utc)
>>>
>>> # alternative to '.utcfromtimestamp()'
>>> dt_ts = datetime.fromtimestamp(1571595618.0, tz=timezone.utc)
For details see: https://blog.ganssle.io/articles/2019/11/utcnow.html
original answer (from 2010):
The datetime module's utcnow() function can be used to obtain the current UTC time.
>>> import datetime
>>> utc_datetime = datetime.datetime.utcnow()
>>> utc_datetime.strftime("%Y-%m-%d %H:%M:%S")
'2010-02-01 06:59:19'
As the link mentioned above by Tom: http://lucumr.pocoo.org/2011/7/15/eppur-si-muove/ says:
UTC is a timezone without daylight saving time and still a timezone
without configuration changes in the past.
Always measure and store time in UTC.
If you need to record where the time was taken, store that separately.
Do not store the local time + timezone information!
NOTE - If any of your data is in a region that uses DST, use pytz and take a look at John Millikin's answer.
If you want to obtain the UTC time from a given string and you're lucky enough to be in a region of the world that either doesn't use DST, or you have data that is only offset from UTC without DST applied:
--> using local time as the basis for the offset value:
>>> # Obtain the UTC Offset for the current system:
>>> UTC_OFFSET_TIMEDELTA = datetime.datetime.utcnow() - datetime.datetime.now()
>>> local_datetime = datetime.datetime.strptime("2008-09-17 14:04:00", "%Y-%m-%d %H:%M:%S")
>>> result_utc_datetime = local_datetime + UTC_OFFSET_TIMEDELTA
>>> result_utc_datetime.strftime("%Y-%m-%d %H:%M:%S")
'2008-09-17 04:04:00'
--> Or, from a known offset, using datetime.timedelta():
>>> UTC_OFFSET = 10
>>> result_utc_datetime = local_datetime - datetime.timedelta(hours=UTC_OFFSET)
>>> result_utc_datetime.strftime("%Y-%m-%d %H:%M:%S")
'2008-09-17 04:04:00'
UPDATE:
Since python 3.2 datetime.timezone is available. You can generate a timezone aware datetime object with the command below:
import datetime
timezone_aware_dt = datetime.datetime.now(datetime.timezone.utc)
If you're ready to take on timezone conversions, go read this:
https://medium.com/@eleroy/10-things-you-need-to-know-about-date-and-time-in-python-with-datetime-pytz-dateutil-timedelta-309bfbafb3f7
A: For getting around day-light saving, etc.
None of the above answers particularly helped me. The code below works for GMT.
def get_utc_from_local(date_time, local_tz=None):
assert date_time.__class__.__name__ == 'datetime'
if local_tz is None:
local_tz = pytz.timezone(settings.TIME_ZONE) # Django eg, "Europe/London"
local_time = local_tz.normalize(local_tz.localize(date_time))
return local_time.astimezone(pytz.utc)
import pytz
from datetime import datetime
summer_11_am = datetime(2011, 7, 1, 11)
get_utc_from_local(summer_11_am)
>>>datetime.datetime(2011, 7, 1, 10, 0, tzinfo=<UTC>)
winter_11_am = datetime(2011, 11, 11, 11)
get_utc_from_local(winter_11_am)
>>>datetime.datetime(2011, 11, 11, 11, 0, tzinfo=<UTC>)
A: Using http://crsmithdev.com/arrow/
arrowObj = arrow.Arrow.strptime('2017-02-20 10:00:00', '%Y-%m-%d %H:%M:%S' , 'US/Eastern')
arrowObj.to('UTC') or arrowObj.to('local')
This library makes life easy :)
A: I have this code in one of my projects:
from datetime import datetime
## datetime.timezone works in newer versions of python
try:
from datetime import timezone
utc_tz = timezone.utc
except:
import pytz
utc_tz = pytz.utc
def _to_utc_date_string(ts):
# type (Union[date,datetime]]) -> str
"""coerce datetimes to UTC (assume localtime if nothing is given)"""
if (isinstance(ts, datetime)):
try:
## in python 3.6 and higher, ts.astimezone() will assume a
## naive timestamp is localtime (and so do we)
ts = ts.astimezone(utc_tz)
except:
## in python 2.7 and 3.5, ts.astimezone() will fail on
## naive timestamps, but we'd like to assume they are
## localtime
import tzlocal
ts = tzlocal.get_localzone().localize(ts).astimezone(utc_tz)
return ts.strftime("%Y%m%dT%H%M%SZ")
A: One more example with pytz, but includes localize(), which saved my day.
import pytz, datetime
utc = pytz.utc
fmt = '%Y-%m-%d %H:%M:%S'
amsterdam = pytz.timezone('Europe/Amsterdam')
dt = datetime.datetime.strptime("2012-04-06 10:00:00", fmt)
am_dt = amsterdam.localize(dt)
print am_dt.astimezone(utc).strftime(fmt)
'2012-04-06 08:00:00'
A: I've had the most success with python-dateutil:
from dateutil import tz
def datetime_to_utc(date):
"""Returns date in UTC w/o tzinfo"""
return date.astimezone(tz.gettz('UTC')).replace(tzinfo=None) if date.tzinfo else date
A: I found the best answer on another question here. It only uses python built-in libraries and does not require you to input your local timezone (a requirement in my case)
import time
import calendar
local_time = time.strptime("2018-12-13T09:32:00.000", "%Y-%m-%dT%H:%M:%S.%f")
local_seconds = time.mktime(local_time)
utc_time = time.gmtime(local_seconds)
I'm reposting the answer here since this question pops up in google instead of the linked question depending on the search keywords.
A: If you already have a datetime object my_dt you can change it to UTC with:
datetime.datetime.utcfromtimestamp(my_dt.timestamp())
A: For anyone who is confused by the most upvoted answer: you can convert a datetime string to UTC time in Python by generating a datetime object and then using astimezone(pytz.utc) to get the datetime in UTC.
For example, let's say we have a local datetime string 2021-09-02T19:02:00Z in ISO format.
To convert this string to a UTC datetime, we first need to generate a datetime object from the string:
dt = datetime.strptime("2021-09-02T19:02:00Z", '%Y-%m-%dT%H:%M:%SZ')
This gives you a Python datetime object; then you can use astimezone(pytz.utc) to get the datetime in UTC:
dt = dt.astimezone(pytz.utc)
This gives you the datetime object in UTC; you can then convert it to a string using dt.strftime("%Y-%m-%d %H:%M:%S").
Full code example:
from datetime import datetime
import pytz
def converLocalToUTC(dt_string, getString=True, format="%Y-%m-%d %H:%M:%S"):
    dt = datetime.strptime(dt_string, '%Y-%m-%dT%H:%M:%SZ')
    dt = dt.astimezone(pytz.utc)
    if getString:
        return dt.strftime(format)
    return dt
then you can call it as
converLocalToUTC("2021-09-02T19:02:00Z")
took help from
https://stackoverflow.com/a/79877/7756843
A: Briefly, to convert any datetime date to UTC time:
from datetime import datetime
def to_utc(date):
return datetime(*date.utctimetuple()[:6])
Let's explain with an example. First, we need to create a datetime from the string:
>>> date = datetime.strptime("11 Feb 2011 17:33:54 -0800", "%d %b %Y %H:%M:%S %z")
Then, we can call the function:
>>> to_utc(date)
datetime.datetime(2011, 2, 12, 1, 33, 54)
Step by step how the function works:
>>> date.utctimetuple()
time.struct_time(tm_year=2011, tm_mon=2, tm_mday=12, tm_hour=1, tm_min=33, tm_sec=54, tm_wday=5, tm_yday=43, tm_isdst=0)
>>> date.utctimetuple()[:6]
(2011, 2, 12, 1, 33, 54)
>>> datetime(*date.utctimetuple()[:6])
datetime.datetime(2011, 2, 12, 1, 33, 54)
A: In python3:
pip install python-dateutil
from dateutil import tz
mydt.astimezone(tz.gettz('UTC')).replace(tzinfo=None)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "404"
} |
Q: What is the difference between file modification time and file changed time? I am confused between the term file modification time and file changed time. Can anyone help to make it clearer?
A: mtime is modification time - contents have changed.
ctime is status change time - perms and ownership as well as contents.
Wikipedia says:
* mtime: time of last modification (ls -l),
* ctime: time of last status change (ls -lc) and
* atime: time of last access (ls -lu).
Note that ctime is not the time of
file creation. Writing to a file
changes its mtime, ctime, and atime. A
change in file permissions or file
ownership changes its ctime and atime.
Reading a file changes its atime. File
systems mounted with the noatime
option do not update the atime on
reads, and the relatime option
provides for updates only if the
previous atime is older than the mtime
or ctime. Unlike atime and mtime,
ctime cannot be set with utime() (as
used e.g. by touch); the only way to
set it to an arbitrary value is by
changing the system clock.
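A quick way to look at all three timestamps for a file is os.stat; a small Python sketch (the path is just a placeholder):
import os
from datetime import datetime

st = os.stat("somefile.txt")
for label, ts in (("atime", st.st_atime),
                  ("mtime", st.st_mtime),
                  ("ctime", st.st_ctime)):
    print(label, datetime.fromtimestamp(ts))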
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Need javascript code for button press and hold I'd like the smallest possible JavaScript routine such that when a mousedown occurs on a button it first responds just like a mouseclick, and then if the user keeps the button pressed it responds as if the user was continuously sending mouseclicks, and after a while with the button held down it acts as if the user was accelerating their mouseclicks... basically think of it like a keypress repeat with acceleration over time.
i.e. user holds down mouse button (x=call function) - x___x___x___x__x__x_x_x_x_xxxxxxx
A: When the button is pressed, call window.setTimeout with your intended time and the function x, and set the timer again at the end of x but this time with a smaller interval.
Clear the timeout using window.clearTimeout upon release of the mouse button.
A: function holdit(btn, action, start, speedup) {
    var t;
    var repeat = function () {
        action();
        t = setTimeout(repeat, start);
        start = start / speedup;
    };
    btn.onmousedown = function () {
        repeat();
    };
    btn.onmouseup = function () {
        clearTimeout(t);
    };
};
/* to use */
holdit(btn, function () { }, 1000, 2); /* x..1000ms..x..500ms..x..250ms..x */
A: Just put the below toggleOn in the OnMouseDown and toggleOff in the OnMouseUp of the button.
var tid = 0;
var speed = 100;
function toggleOn(){
if(tid==0){
tid=setInterval('ThingToDo()',speed);
}
}
function toggleOff(){
if(tid!=0){
clearInterval(tid);
tid=0;
}
}
function ThingToDo() {
}
A: I just release a jQuery plugin, check this demo on this repo.
$('button').clickAndHold(function (e, n) {
console.log("Call me baby ", n);
});
A: @glenuular: Thanks for this interesting approach!
There were some small problems with it:
- The start value was not reset, so on the second use it started too fast.
- The start value was divided without limit, so it became very small after short time.
- Arguments were not passed to the called method. (Now limited to 6 args, usually sufficient to pass 'ev').
function holdit( btn, method, start, speedup ) {
var t, keep = start;
var repeat = function () {
var args = Array.prototype.slice.call( arguments );
method.apply( this, args );
t = setTimeout( repeat, start, args[0], args[1], args[2], args[3], args[4], args[5] );
if ( start > keep / 20 ) start = start / speedup;
}
btn.onmousedown = btn.mousedown = repeat;
//
btn.onmouseout = btn.mouseout = btn.onmouseup = btn.mouseup = function () {
clearTimeout( t );
start = keep;
}
};
A: I upgraded neouser99's solution because I ran into some problems with it ^^
let holdIt = (btn, action, start, speedup, limit) => {
let t;
let startValue = start;
let repeat = () => {
action();
t = setTimeout(repeat, startValue);
(startValue > limit) ? startValue /= speedup: startValue = limit;
}
btn.onmousedown = () => {
repeat();
}
const stopActionEvents = ['mouseup', 'mouseout'];
stopActionEvents.forEach(event => {
btn.addEventListener(event, () => {
clearTimeout(t);
startValue = start;
})
});
};
holdIt(actionButton, functionToDo, 500, 2, 5);
A: Something like the pseudocode below might work...
var isClicked = false;
var clickCounter = 100;
function fnTrackClick(){
if(isClicked){
clickCounter--;
setTimeout(fnTrackClick, clickCounter * 100);
}
}
<input type="button" value="blah" onmousedown="isClicked=true;" onmouseover="fnTrackClick();" onmouseup="isClicked = false;" />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: What is the best way to load a Hibernate object graph before using it in a UI? The situation is this:
*
*You have a Hibernate context with an
object graph that has some lazy
loading defined.
*You want to use
the Hibernate objects in your UI as
is without having to copy the data
somewhere.
*There are different UI
contexts that require different
amounts of data.
*The data is too
big to just eager load the whole
graph each time.
What is the best means to load all the appropriate objects in the object graph in a configurable way so that they can be accessed without having to go back to the database to load more data?
Any help is appreciated.
A: Let's say you have the Client and at one point you have to do something with his Orders and maybe he has a Bonus for his Orders.
Then I would define a Repository with a fluent interface that will allow me to say something like :
new ClientRepo().LoadClientBy(id)
.WithOrders()
.WithBonus()
.OrderByName();
And there you have the client with everything you need. It's preferable that you know in advance what you will need for the current operation. This way you can avoid unwanted trips to the database. (New devs in your team will usually do this: call a property and not be aware of the fact that it's actually a call to the DB.)
A: If it's a webapp and you're using Spring, then OpenSessionInViewFilter could be the solution to your problems.
A: An approach we use in our projects is to create a service for each view you have. The view then fetches the sub-graph you need for this specific view, always trying to reduce the number of SQL statements sent to the database. Therefore we are using a lot of joins to get the n:1 associated objects.
If you are using a 2-tier desktop app directly connected to the DB, you can just leave the objects attached and load additional data automatically at any time. Otherwise you have to reattach the entity to the session and initialize the association you need with Hibernate.initialize(entity.getAssociation())
(From memory, so maybe not 100% correct)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you design data models for Bigtable/Datastore (GAE)? Since the Google App Engine Datastore is based on Bigtable and we know that's not a relational database, how do you design a database schema/data model for applications that use this type of database system?
A: Designing a bigtable schema is an open process, and basically requires you to think about:
*
*The access patterns you will be using and how often each will be used
*The relationships between your types
*What indices you are going to need
*The write patterns you will be using (in order to effectively spread load)
GAE's datastore automatically denormalizes your data. That is, each index contains a (mostly) complete copy of the data, and thus every index adds significantly to time taken to perform a write, and the storage space used.
If this were not the case, designing a Datastore schema would be a lot more work: You would have to think carefully about the primary key for each type, and consider the effect of your decision on the locality of data. For example, when rendering a blog post you would probably need to display the comments to go along with it, so each comment's key would probably begin with the associated post's key.
With Datastore, this is not such a big deal: The query you use will look something like "Select * FROM Comment WHERE post_id = N." (If you want to page the comments, you would also have a limit clause, and a possible suffix of " AND comment_id > last_comment_id".) Once you add such a query, Datastore will build the index for you, and your reads will be magically fast.
Something to keep in mind is that each additional index creates some additional cost: it is best if you can use as few access patterns as possible, since it will reduce the number of indices GAE will construct, and thus the total storage required by your data.
Reading over this answer, I find it a little vague. Maybe a hands-on design question would help to scope this down? :-)
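As a rough sketch of this access-pattern-driven style, using the Python google.appengine.ext.db API (the model, property, and helper names below are illustrative assumptions, not part of any official example):
from google.appengine.ext import db

class Comment(db.Model):
    # Denormalized reference: storing the parent post's id lets all of a
    # post's comments be fetched with one indexed equality query.
    post_id = db.IntegerProperty(required=True)
    body = db.TextProperty()

def comments_for_post(post_id, limit=20):
    # Equivalent to "SELECT * FROM Comment WHERE post_id = N" in GQL.
    return Comment.all().filter('post_id =', post_id).fetch(limit)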
A: You can use www.web2py.com. You build the model and the application once and it works on GAE but also with SQLite, MySQL, Postgres, Oracle, MSSQL, Firebird.
A: As GAE builds on how data is managed in Django there is a lot of info on how to address similar questions in the Django documentation (for example see here, scroll down to 'Your first model').
In short, you design your db model as a regular object model and let GAE sort out all of the object-relational mappings.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Network Map Algorithm that Detects Unmanaged Layer 2 Switches? I've inherited a network spread out over a warehouse/front office consisting of approximately 50 desktop PCs, various servers, network printers, and routers/switches.
The "intelligent" routers live in the server room. As the company has grown, we've annexed additional space and not very elegantly run various lengths of CAT5 thru the ceilings etc. I've been finding various hubs and switches in the ceilings -- none of which is labeled or documented in any way.
Of course, das blinken-lights tell me that someone is connected to these devices, I just have no way of finding out who.
I can run traditional network map tools (there are tons of these things) and it shows me the IP-based things in the network. That's nice, but information I already have. What I need to know is the network topology -- how the switches (bridges) are interconnected etc.. And since they are off-the-shelf linksys unmanaged-types, they don't respond to SNMP so I can't use that...
What's the best/cheapest tool out there that I can use to analyze and detect things like hubs and switches in the network that don't respond to SNMP?
If there's no tool that you're aware of -- what generalized algorithm would you suggest to find this out? My guess would be that I could look at the MAC forward tables for the devices (switches, desktops, etc.) and build a chain that way, but I don't know if it's possible to get that from an unmanaged switch (let alone a hub).
(This patent has some neat ideas but I can't find any software built with it: http://www.freepatentsonline.com/6628623.html)
Thanks!!
A: An idea could be to use a program like the 3Com Network Director trial version (or The Dude). Use it to discover all of your workstations and anything else with an IP address.
Wait for a quiet time and unplug each hub/switch ... you'll then at least begin to be able to make a map; the rest will be crawling about following cables. Network administration does mean getting dirty.
A: You probably can't explicitly detect unmanaged devices... but you have MAC -> switch port mappings, on your managed ones, right? If so, you should be able to infer the presence of unmanaged switches / hubs with more than one connected client -- I don't know how you'd find a port with only one.
*
*Record the MAC addresses of all smart switches and client devices
*Start from one of your known smart switches
*For each port on the switch, list the MAC addresses it's forwarding. If it lists one client, it's direct. If it's more than one and none of the addresses are in your known switch MACs, you've got a dumb switch. If it's more than one and one address is in your set of known switches, recurse on this switch.
You probably don't have any accidental loops in your network topology (or your network probably wouldn't work) so you can probably assume a tree structure outside your core.
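Here is a rough Python sketch of that recursive inference; the forwarding-table data structure and MAC values are entirely hypothetical stand-ins for what you would actually pull from the managed switches (e.g. via SNMP on the smart ones):
# Hypothetical data: for each managed switch, the set of MAC addresses
# seen behind each port.
fdb = {
    "core-sw": {1: {"aa:aa", "bb:bb", "cc:cc"},   # several MACs, no known switch -> dumb switch
                2: {"dd:dd"},                      # single client -> direct link
                3: {"edge-sw-mac", "ee:ee"}},      # contains a managed switch -> recurse
    "edge-sw": {1: {"ee:ee"}},
}
switch_macs = {"edge-sw-mac": "edge-sw"}           # MACs belonging to managed switches

def classify(switch, seen=None):
    seen = seen or set()
    if switch in seen:           # guard against loops, though a tree is assumed
        return
    seen.add(switch)
    for port, macs in fdb[switch].items():
        known = [m for m in macs if m in switch_macs]
        if known:
            # A managed switch sits behind this port; walk into it.
            print(switch, "port", port, "-> uplink to", switch_macs[known[0]])
            classify(switch_macs[known[0]], seen)
        elif len(macs) > 1:
            print(switch, "port", port, "-> probable unmanaged switch/hub,", len(macs), "hosts")
        else:
            print(switch, "port", port, "-> single host", next(iter(macs)))

classify("core-sw")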
A: You could try to get spanning-tree protocol information out of the smart switches; even unmanaged switches have to participate in this protocol (this doesn't apply to hubs, though).
A: I don't think unmanaged switches/hubs will have ARP entries - being transparent at the MAC layer is their reason for existing.
And I don't think there's a way to get their MAC forwarding tables short of taking them apart and finding a JTAG or other port to talk to them with, which is unlikely to be feasible.
The best idea I can come up with is to pingflood each internal IP in turn, and then while that's going on, try and ping all the other IPs. This will help because you'll only get decent responses from machines that don't share a (now congested into oblivion) link with the one you're pingflooding. Basically you're using the fact that the backplane on the switches is much faster than the interconnects between them to map out which connections are via interconnects and which are via backplanes. This also lets you watch das blinkenlights and figure out which ports are used to connect to which IPs.
Sadly I know of no software that will do this for you.
A: I've personally had the same issue. Fun. I partially solved the problem by installing new Cisco Catalyst Switches in the main data closet and setting the Smart Ports profile on each port to "Desktop". This limits the port to 1 MAC address.
Any port with an unmanaged hub/switch attached will be automatically disabled the first time more than one device is activated on the unmanaged device.
As I located unmanaged hubs/switches I replaced them with managed switches configured to limit each port to 1 MAC.
If your budget won't allow this, the alternative is to trace each wire visually and manually verify the presence of unmanaged networking equipment.
A: I've been looking into this and I found this old research paper Using VPS Probing to Discover Layer 2 Topology. The theory is that you can use Variable Packet Size (VPS) probing to discover layer 2 switches by the delay they introduce. I haven't had a chance to try it in practice yet.
Update: I found a later version of the paper called Using Simple Per-Hop Capacity Metrics to Discover Link Layer Network Topology
A: If you haven't already, try HP Openview trial version, and apart of using SNMP, it also uses ARP tables to figure out your topology.
A: You can expect these features in the release of AdventNet's OpManager 8.0 next month.
A: You can try NetskateKoban, which will give you the map with the number of terminals connected to each port of the managed switch. You can detect the presence of unmanaged devices from there by the vendor name.
We have seen a similar kind of problem, where a network admin had to figure out how many switches (managed/unmanaged) are present, and it will give you the location of such places. Try it out... all the best
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: What is the most efficient way to count the results of a stored procedure, from another stored procedure? In a stored procedure, I need to get the count of the results of another stored procedure. Specifically, I need to know if it returns any results, or an empty set.
I could create a temp table/table variable, exec the stored procedure into it, and then run a select count on that data. But I really don't care about the data itself, all I need is the count (or presence/absence of data). I was wondering if there is a more efficient way of getting just that information.
I don't want to just copy the contents of the other stored procedure and rewrite it as a select count. The stored procedure changes too frequently for that to be workable.
A: Well, depending on how the stored procedures work, @@ROWCOUNT returns the # of results for ANYthing that SP will do (including updates):
http://msdn.microsoft.com/en-us/library/ms187316.aspx
This will only work if the LAST thing you do in the sp is returning the rows to the client... Otherwise you're going to get the results of some other statement. Make sense?
A: @@ROWCOUNT
A: use an out parameter
A: I would think you could return the number of rows (using RETURN) or use an out parameter to get the value.
A: If you can rewrite other procedure to be a simple function that returns a resultset, you can simply select count(*) from it.
A: It seems that someone else is altering the other stored procedure, and you need something that effectively checks the results no matter how that procedure changes.
Declare a table variable (or create a temp table) and insert the result from that procedure into it.
Then you can perform a row count on the results. It is not the most efficient solution, but if I understand your problem correctly it is the most reliable one.
Snippet:
DECLARE @res AS TABLE (
[EmpID] [int] NOT NULL,
[EmpName] [varchar](30) NULL,
[MgrID] [int] NULL
)
INSERT @res
EXEC dbo.ProcFoo
SELECT COUNT(*) FROM @res
A: Given that you don't really need to know the count, merely whether there is or isn't data for your sproc, I'd recommend something like this:
CREATE PROCEDURE subProcedure @param1 varchar(50), @param2 varchar(50), @Param3 tinyint OUTPUT -- the varchar types are just example types
AS
BEGIN
IF EXISTS(SELECT * FROM table1 WHERE we have something to work with)
BEGIN
-- The body of your sproc
SET @Param3 = 1
END
ELSE
SET @Param3 = 0
END
Now you can execute the sproc and check the value of @Param3:
DECLARE @ThereWasData tinyint
exec subProcedure 'foo', 'bar', @ThereWasData OUTPUT
IF @ThereWasData = 1
PRINT 'subProcedure had data'
ELSE
PRINT 'subProcedure had NO data'
A: I think you should do something like this:
Create Procedure [dbo].[GetResult] (
@RowCount BigInt = -1 Output
) As Begin
/*
You can do whatever else you should do here.
*/
Select @RowCount = Count_Big(*)
From dbo.SomeLargeOrSmallTable
Where SomeColumn = 'Somefilters'
;
/*
You can do whatever else you should do here.
*/
--Reporting how your procedure has done the statements. It's just a sample to show you how to work with the procedures. There are many ways for doing these things.
Return @@Error;
End;
After writing that you can get the output result like this:
Declare @RowCount BigInt
, @Result Int
;
Execute @Result = [dbo].[GetResult] @RowCount Out
Select @RowCount
, @Result
;
Cheers
A: create proc test
as
begin
select top 10 * from customers
end
go
create proc test2 (@n int out)
as
begin
exec test
set @n = @@rowcount
--print @n
end
go
declare @n1 int =0
exec test2 @n1 out
print @n1
--output result: 10
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: C# utility to create a CA I'd like to create a utility in C# to allow someone to easily create a Certificate Authority (CA) in Windows. Any ideas/suggestions?
I know I can use OpenSSL to do this. In the end, I'll want this utility to do more than just generate a CA. I'd also like to avoid requiring the installation of OpenSSL in order to run my utility.
A: Since OpenSSL is Apache-licensed (i.e. BSD-style), you can simply distribute it as a DLL along with your application. (Maybe build it yourself to have only the features you need and all in a single DLL.) Then use p/invoke calls to talk with this DLL.
(Maybe you can even link the native code straight into your .NET executable? Not sure about that.)
A: Take a look at BouncyCastle http://www.bouncycastle.org/csharp/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: ActiveRecord#save_only_valid_attributes I'm looking for a variation on the #save method that will only save
attributes that do not have errors attached to them.
So a model can be updated without being valid overall, and this will
still prevent saving invalid data to the database.
By "valid attributes", I mean those attributes that give nil when calling @model_instance.errors.on(:attribute)
Anyone have an idea of how to accomplish this?
So far, I have the following:
def save_valid_attributes
valid?
update_attributes attributes.inject({}){ |m, (k, v)| m[k] = v unless errors.on(k.to_sym); m }
end
This works if there's no processing done on assignment, which in my case there is.
For example, I have a database column "start_date", and two methods defined:
def nice_start_date=(startdate)
self.start_date = Chronic.parse(startdate) || startdate
end
def nice_start_date
self.start_date.to_s
end
These two methods allow me to properly parse the user inputted dates using Chronic before saving. So, second way of doing this, one attribute at a time:
def save_valid_attributes(attrib)
valid?
attrib.each{ |(k, v)| send("#{k}=", v); save; reload }
end
The model needs to be reloaded each time since, if one of the dates is invalid and doesn't save, it will prevent all further attributes from saving.
Is there a better way to do this? I'm sure this isn't an uncommon problem in the Rails world, I just can't seem to find anything in the Google universe of knowledge.
A: I'm not sure how much luck you will have with this without a lot of messing around.
No matter how DRY and OO and easy your framework makes things (which in this case is a lot =) you've still got to remember it's running in front of a bog-standard relational database, which has atomic commits as one of its defining features.
It's designed from the ground up to make sure either all of your changes are committed, or none.
Having said that . . . it's always possible. I would look along the lines of doing manual validation of the attributes you care about, and then using the built-in method object.update_attribute_with_validation_skipping.
Good luck!
A: You can overwrite #save like this:
def save
errors.each do |attr, msg|
send("#{attr}=", send("#{attr}_was"))
end
super
end
This will reset all attributes with errors attached to their original value.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the best testing tool for Swing-based applications? While we try to set up as many unit tests as time allows for our applications, I always find the amount of UI-level tests lacking. There are many options out there, but I'm not sure what would be a good place to start.
What is your preferred unit testing tool for testing Swing applications? Why do you like it?
A: If your target application has custom components, I would definitely recommend Marathon to automate your tests.
I was given the task of automating an application with several extremely complicated custom components, written in-house from the ground up. I went through a review process that lasted two months, in which I made the decision on which test tool to use, from a list of close to 30 test tools that were available, both commercial and FOSS.
It was the only test tool that was able to successfully automate our particular custom components; where IBM's Rational Functional Tester, Microfocus' TestPartner, QF-Test, Abbot & FEST failed.
I have since been able to successfully integrate the tests with Cruise Control such that they run upon completing each build of the application.
A word of warning though:
1) it is rather rough around the edges in the way it handles JTables. I got around this by writing my own proxy class for them.
2) Does not support record/replay of drag-and-drop actions yet.
A: Consider Marathon (http://www.marathontesting.com/Home.html)--tests are written in Jython, so it's easy to write any sort of predicates based on object state.
A: I had the chance to play around with QF-TEST once. It is commercial, but offers a lot of functionality. Maybe you have a look at it: http://www.qftest.de/en/index.html
A: You can try to use Cucumber and Swinger for writing functional acceptance tests in plain english for Swing GUI applications. Swinger uses Netbeans' Jemmy library under the hood to drive the app.
Cucumber allows you to write tests like this:
Scenario: Dialog manipulation
Given the frame "SwingSet" is visible
And the frame "SwingSet" is the container
When I click the menu "File/About"
Then I should see the dialog "About Swing!"
Given the dialog "About Swing!" is the container
When I click the button "OK"
Then I should not see the dialog "About Swing!"
Take a look at this Swinger video demo to see it in action.
A: I can highly recommend QFTest. I have used it for my commercial product and it works very well with almost zero code (my app requires the use of Java client APIs for some things). It handles identification of Swing components well, and is pretty tolerant of updates to your GUI (resizing, repositioning and adding components does not break existing tests). I have done major updates to functionality and my tests still work.
It's expensive, but I think it will pay for itself in a couple of months.
Before QFTest I tried:
1) AutomatedQA - a good tool, but Windows-centric and it does not understand Swing. Similar to QuickTest Pro.
2) UISpec4J - After devoting a solid 50-hour week to this, I had issues with fragility and the arcane Java code it produced. Using it was just too arduous - trying to debug/update hundreds of lines of Java performing a sequence of a dozen GUI operations just did not work for my brain. I ended up avoiding writing tests because it was much more complicated than actually writing the app itself!
A: Not an answer, but a refining.
Record-and-playback is the wrong thing to want. Teams need the ability to write tests before the code has been written. Otherwise, the coders finish their work and wait around while the testers scramble to record tests (interrupted by fixes when they spot issues).
In a BDD/TDD/ATDD kind of setup, you really need some kind of tool that allows you to script tests for code that hasn't been written yet, specifying UI element names and the like.
Are there tools that work for non-waterfall testing?
A: On our side, we test Swing GUIs with FEST. This is an adapter on the classical Swing Robot, but it eases its use dramatically.
Combined with TestNG, we found it an easy way to simulate "human" actions through the GUI.
A: try pounder : http://pounder.sourceforge.net/
A: I like Jemmy, the library written to test Netbeans.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: How does vxWorks deal with two tasks at the same priority? We have two tasks (T1 and T2) in our vxWorks embedded system that have the same priority (110).
How does the regular vxWorks scheduler deal with this if both tasks are ready to run?
Which task executes first?
A: The task that will run first is the task that is spawned first as realized by the VxWorks scheduler task. VxWorks uses priority-based scheduling by default. So in your case, since T1 and T2 have the same priority, whichever one gets the CPU first will continue to run indefinitely until it is explicitly blocked (using taskSuspend or taskDelay), at which time the other READY task will execute until it is blocked, and so on. This ought to be controlled by semaphores or mutexes (mutices?)
The main problem with priority-based scheduling is illuminated by this exact problem. How do we determine how long to let these tasks run? The fact that they have the same priority complicates things. Another concern is that VxWorks tasks that have high priority (lower number means higher priority) can preempt your application which you must be prepared for in your code. These problems can be solved by using round-robin scheduling. The additional problems posed by round-robin scheduling and the solutions are all described here.
A: VxWorks has 256 priority levels (0 is highest, 255 is lowest). At any given time, the highest priority task runs on the CPU. Each priority level conceptually has a queue where multiple tasks queue up for execution.
We have 3 tasks at the same priority A, B, C. Assume A is executing.
When A blocks (taskDelay, SemTake, msgQReceive), B will start execution.
When A unblocks, it is put at the end of the queue. We now have B, C, A.
When B blocks, C takes over, etc...
If Round Robin scheduling (Time slicing) is enabled, the same concept applies, but the task gets put at the end of the queue when its time slice is over.
Note that a task being pre-empted by a higher priority task will NOT affect the order of the queue. If A was running and gets pre-empted, it will continue execution when the higher priority task is done. It does not get put at the end of the queue.
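As a purely conceptual toy model (Python, not VxWorks code), the FIFO behaviour within one priority level described above can be pictured like this:
from collections import deque

# One priority level modelled as a FIFO of READY tasks.
ready = deque(["T1", "T2"])          # T1 became READY first, so it runs first

def run_head_until_it_blocks():
    task = ready.popleft()           # head of the queue gets the CPU
    print(task, "runs until it blocks (taskDelay, semTake, ...)")
    ready.append(task)               # once READY again, it rejoins at the tail

for _ in range(4):
    run_head_until_it_blocks()       # T1, T2, T1, T2 ...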
A: By default, the one which is spawned first will be executing, and unless it gives up the CPU the other will never run.
You can explicitly enable round-robin scheduling; then they will timeslice.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: mspdbsrv.exe living forever? Is there a way to prevent mspdbsrv.exe from continuing to run
after finishing the compilation? or even after I terminate visual studio? or perhaps even prevent it from even spawning in the first place?
what is this guy good for anyway?
using vs2005
A: A little googling seems to indicate that mspdbsrv.exe zombies are a known issue in VS2005. We've had similar (intermittent) problems, but there did not seem to be a solution.
Yes, it sucks.
A: MS recommends to add a postbuild-event to the Project options here.
[...] Sometimes it is possible that mspdbsrv.exe stays alive even after the build is over. In such scenarios, it is safe to add a post-build event to kill the mspdbsrv.exe.
Background infos on postbuild-Events can be found on the linked page.
A: mspdbsrv.exe is the process Visual Studio uses to create .pdb files when you compile; these are the symbol files that let you debug an application. Sometimes it goes berserk and doesn't shutdown correctly when you exit Visual Studio. I've had this cause bad compiles even after quitting and restarting Visual Studio. Use Process Explorer or the task list (Ctrl+Alt+Delete in Windows) to manually kill mspdbsrv.exe if it's broken on you.
For what it's worth, I haven't seen this problem happen in Visual Studio 2008 as of yet, but I've only been using it a few days.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Integrating static analysis tools with each other? How are folks integrating various static analysis tools such as PMD, Checkstyle, and FindBugs so that they are used together in some uniform way? In particular, I'm interested in generating a single uniform report that includes warnings from all tools. Also, I want to be able to mark-up my code with reasonably consistent looking warning suppressions.
My question here is not meant to address tool "overlap" where, say, PMD and Checkstyle are looking for the same things. That is another issue.
Please see some of my thoughts on the matter in an answer to a related question.
A: I stumbled across JcReport today, which I think does exactly what you are looking for. At least, it handles the reports in a combined way; suppressions are still tool-specific. This tool claims to support automatically integrating the output of PMD, CPD, FindBugs, CheckStyle, and Cobertura into a single HTML report.
I haven't tried it yet, but definitely intend to soon.
A: Another option is glean. http://jbrugge.com/glean/
From their website: Glean is a framework of Ant scripts for generating feedback on a team's source code. Glean's goal is to make it possible to add feedback to your build cycle with as little pain as possible. The scripts drive a number of open-source tools and collect the resulting HTML for you to deploy to a project website or some other common team area. Add it at the end of a daily build cycle and it is a quick way to keep a number of feedback sources up to date and in one place.
A: I am not clear on what qualifies as a single uniform report in your book but here is what I do.
I use Maven2 for builds and with it you can configure a series of reporting plugins (including PMD, CPD, Checkstyle, Cobertura, etc). Maven will also auto-generate a website (site plugin) for your project which contains all the reports in a nice, easy-to-navigate webpage format.
A: If you build your project with Maven, and you have those tools "plugged in" to your Maven build, then the Maven report that is generated for the build will include the output of those static analysis tools.
A: Thanks for the responses!
The goal here is to configure these tools so that they behave in some similar manner with respect to each other. This goes beyond simply dumping whatever report they generate automatically, or using whatever warning suppression hint they use out-of-the-box.
For example, I have PMD, Checkstyle, and FindBugs configured to report all warnings in the following format:
/absolute-path/filename:line-number:column-number: warning(tool-name): message
So a warning might look like this:
/project/src/com/example/Foo.java:425:9: warning(Checkstyle): Missing a Javadoc comment.
Also, all warning suppressions in my source code are marked with a symbol that includes the string "SuppressWarnings" regardless of the static analysis tool being surpressed. Sometimes this symbol is an annotation, sometimes it's in a comment, but it always has that name.
I explain these ideas in a bit more detail here.
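As an illustrative sketch (not part of any of the tools themselves), a small Python parser for that uniform warning format might look like this; the regex and function name are assumptions:
import re

# Matches: /absolute-path/filename:line:column: warning(tool-name): message
WARNING_RE = re.compile(
    r'^(?P<path>.+?):(?P<line>\d+):(?P<col>\d+): '
    r'warning\((?P<tool>[^)]+)\): (?P<message>.*)$'
)

def parse_warnings(lines):
    for line in lines:
        m = WARNING_RE.match(line.rstrip("\n"))
        if m:
            yield m.groupdict()

sample = "/project/src/com/example/Foo.java:425:9: warning(Checkstyle): Missing a Javadoc comment."
print(list(parse_warnings([sample])))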
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What and where are the stack and heap?
*
*What are the stack and heap?
*Where are they located physically in a computer's memory?
*To what extent are they controlled by the OS or language run-time?
*What is their scope?
*What determines their sizes?
*What makes one faster?
A: You can do some interesting things with the stack. For instance, you have functions like alloca (assuming you can get past the copious warnings concerning its use), which is a form of malloc that specifically uses the stack, not the heap, for memory.
That said, stack-based memory errors are some of the worst I've experienced. If you use heap memory, and you overstep the bounds of your allocated block, you have a decent chance of triggering a segmentation fault. (Not 100%: your block may be incidentally contiguous with another that you have previously allocated.) But since variables created on the stack are always contiguous with each other, writing out of bounds can change the value of another variable. I have learned that whenever I feel that my program has stopped obeying the laws of logic, it is probably a buffer overflow.
A: Simply, the stack is where local variables get created. Also, every time you call a subroutine, the program counter (pointer to the next machine instruction) and any important registers, and sometimes the parameters, get pushed on the stack. Then any local variables inside the subroutine are pushed onto the stack (and used from there). When the subroutine finishes, that stuff all gets popped back off the stack. The PC and register data get put back where they were as they are popped, so your program can go on its merry way.
The heap is the area of memory dynamic memory allocations are made out of (explicit "new" or "allocate" calls). It is a special data structure that can keep track of blocks of memory of varying sizes and their allocation status.
In "classic" systems RAM was laid out such that the stack pointer started out at the bottom of memory, the heap pointer started out at the top, and they grew towards each other. If they overlap, you are out of RAM. That doesn't work with modern multi-threaded OSes though. Every thread has to have its own stack, and those can get created dynamicly.
A: From WikiAnswers.
Stack
When a function or a method calls another function which in turns calls another function, etc., the execution of all those functions remains suspended until the very last function returns its value.
This chain of suspended function calls is the stack, because elements in the stack (function calls) depend on each other.
The stack is important to consider in exception handling and thread executions.
Heap
The heap is simply the memory used by programs to store variables.
Elements of the heap (variables) have no dependencies on each other and can always be accessed randomly at any time.
A: (I have moved this answer from another question that was more or less a dupe of this one.)
The answer to your question is implementation specific and may vary across compilers and processor architectures. However, here is a simplified explanation.
*
*Both the stack and the heap are memory areas allocated from the underlying operating system (often virtual memory that is mapped to physical memory on demand).
*In a multi-threaded environment each thread will have its own completely independent stack but they will share the heap. Concurrent access has to be controlled on the heap and is not possible on the stack.
The heap
*
*The heap contains a linked list of used and free blocks. New allocations on the heap (by new or malloc) are satisfied by creating a suitable block from one of the free blocks. This requires updating the list of blocks on the heap. This meta information about the blocks on the heap is also stored on the heap often in a small area just in front of every block.
*As the heap grows new blocks are often allocated from lower addresses towards higher addresses. Thus you can think of the heap as a heap of memory blocks that grows in size as memory is allocated. If the heap is too small for an allocation the size can often be increased by acquiring more memory from the underlying operating system.
*Allocating and deallocating many small blocks may leave the heap in a state where there are a lot of small free blocks interspersed between the used blocks. A request to allocate a large block may fail because none of the free blocks are large enough to satisfy the allocation request even though the combined size of the free blocks may be large enough. This is called heap fragmentation.
*When a used block that is adjacent to a free block is deallocated the new free block may be merged with the adjacent free block to create a larger free block effectively reducing the fragmentation of the heap.
The stack
*
*The stack often works in close tandem with a special register on the CPU named the stack pointer. Initially the stack pointer points to the top of the stack (the highest address on the stack).
*The CPU has special instructions for pushing values onto the stack and popping them off the stack. Each push stores the value at the current location of the stack pointer and decreases the stack pointer. A pop retrieves the value pointed to by the stack pointer and then increases the stack pointer (don't be confused by the fact that adding a value to the stack decreases the stack pointer and removing a value increases it. Remember that the stack grows to the bottom). The values stored and retrieved are the values of the CPU registers.
*If a function has parameters, these are pushed onto the stack before the call to the function. The code in the function is then able to navigate up the stack from the current stack pointer to locate these values.
*When a function is called the CPU uses special instructions that push the current instruction pointer onto the stack, i.e. the address of the code executing on the stack. The CPU then jumps to the function by setting the instruction pointer to the address of the function called. Later, when the function returns, the old instruction pointer is popped off the stack and execution resumes at the code just after the call to the function.
*When a function is entered, the stack pointer is decreased to allocate more space on the stack for local (automatic) variables. If the function has one local 32 bit variable four bytes are set aside on the stack. When the function returns, the stack pointer is moved back to free the allocated area.
*Nesting function calls work like a charm. Each new call will allocate function parameters, the return address and space for local variables and these activation records can be stacked for nested calls and will unwind in the correct way when the functions return.
*As the stack is a limited block of memory, you can cause a stack overflow by calling too many nested functions and/or allocating too much space for local variables. Often the memory area used for the stack is set up in such a way that writing below the bottom (the lowest address) of the stack will trigger a trap or exception in the CPU. This exceptional condition can then be caught by the runtime and converted into some kind of stack overflow exception.
Can a function be allocated on the heap instead of a stack?
No, activation records for functions (i.e. local or automatic variables) are allocated on the stack that is used not only to store these variables, but also to keep track of nested function calls.
How the heap is managed is really up to the runtime environment. C uses malloc and C++ uses new, but many other languages have garbage collection.
However, the stack is a more low-level feature closely tied to the processor architecture. Growing the heap when there is not enough space isn't too hard since it can be implemented in the library call that handles the heap. However, growing the stack is often impossible as the stack overflow only is discovered when it is too late; and shutting down the thread of execution is the only viable option.
A: The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO (last in first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.
The heap is memory set aside for dynamic allocation. Unlike the stack, there's no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.
Each thread gets a stack, while there's typically only one heap for the application (although it isn't uncommon to have multiple heaps for different types of allocation).
To answer your questions directly:
To what extent are they controlled by the OS or language runtime?
The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.
What is their scope?
The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.
What determines the size of each of them?
The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).
What makes one faster?
The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor's cache, making it very fast. Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be multi-threading safe, i.e. each allocation and deallocation needs to be - typically - synchronized with "all" other heap accesses in the program.
A clear demonstration:
Image source: vikashazrati.wordpress.com
A: Stack
*
*Very fast access
*Don't have to explicitly de-allocate variables
*Space is managed efficiently by CPU, memory will not become fragmented
*Local variables only
*Limit on stack size (OS-dependent)
*Variables cannot be resized
Heap
*
*Variables can be accessed globally
*No limit on memory size
*(Relatively) slower access
*No guaranteed efficient use of space, memory may become fragmented over time as blocks of memory are allocated, then freed
*You must manage memory (you're in charge of allocating and freeing variables)
*Variables can be resized using realloc()
A: In Short
A stack is used for static memory allocation and a heap for dynamic memory allocation, both stored in the computer's RAM.
In Detail
The Stack
The stack is a "LIFO" (last in, first out) data structure, that is managed and optimized by the CPU quite closely. Every time a function declares a new variable, it is "pushed" onto the stack. Then every time a function exits, all of the variables pushed onto the stack by that function, are freed (that is to say, they are deleted). Once a stack variable is freed, that region of memory becomes available for other stack variables.
The advantage of using the stack to store variables, is that memory is managed for you. You don't have to allocate memory by hand, or free it once you don't need it any more. What's more, because the CPU organizes stack memory so efficiently, reading from and writing to stack variables is very fast.
More can be found here.
The Heap
The heap is a region of your computer's memory that is not managed automatically for you, and is not as tightly managed by the CPU. It is a more free-floating region of memory (and is larger). To allocate memory on the heap, you must use malloc() or calloc(), which are built-in C functions. Once you have allocated memory on the heap, you are responsible for using free() to deallocate that memory once you don't need it any more.
If you fail to do this, your program will have what is known as a memory leak. That is, memory on the heap will still be set aside (and won't be available to other processes). As we will see in the debugging section, there is a tool called Valgrind that can help you detect memory leaks.
Unlike the stack, the heap does not have size restrictions on variable size (apart from the obvious physical limitations of your computer). Heap memory is slightly slower to be read from and written to, because one has to use pointers to access memory on the heap. We will talk about pointers shortly.
Unlike the stack, variables created on the heap are accessible by any function, anywhere in your program. Heap variables are essentially global in scope.
More can be found here.
Variables allocated on the stack are stored directly to the memory and access to this memory is very fast, and its allocation is dealt with when the program is compiled. When a function or a method calls another function which in turns calls another function, etc., the execution of all those functions remains suspended until the very last function returns its value. The stack is always reserved in a LIFO order, the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack, freeing a block from the stack is nothing more than adjusting one pointer.
Variables allocated on the heap have their memory allocated at run time and accessing this memory is a bit slower, but the heap size is only limited by the size of virtual memory. Elements of the heap have no dependencies with each other and can always be accessed randomly at any time. You can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time.
You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don't know exactly how much data you will need at runtime or if you need to allocate a lot of data.
In a multi-threaded situation each thread will have its own completely independent stack, but they will share the heap. The stack is thread specific and the heap is application specific. The stack is important to consider in exception handling and thread executions.
Each thread gets a stack, while there's typically only one heap for the application (although it isn't uncommon to have multiple heaps for different types of allocation).
At run-time, if the application needs more heap, it can allocate memory from free memory, and if the stack needs memory, it can allocate it from the free memory allocated for the application.
Even, more detail is given here and here.
Now come to your question's answers.
To what extent are they controlled by the OS or language runtime?
The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.
More can be found here.
What is their scope?
Already given above.
"You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don't know exactly how much data you will need at runtime or if you need to allocate a lot of data."
More can be found in here.
What determines the size of each of them?
The size of the stack is set by OS when a thread is created. The size of the heap is set on application startup, but it can grow as space is needed (the allocator requests more memory from the operating system).
What makes one faster?
Stack allocation is much faster since all it really does is move the stack pointer. Using memory pools, you can get comparable performance out of heap allocation, but that comes with a slight added complexity and its own headaches.
Also, stack vs. heap is not only a performance consideration; it also tells you a lot about the expected lifetime of objects.
Details can be found from here.
A: OK, simply and in short: they mean ordered and not ordered...!
Stack: stack items sit on top of each other, which means they are faster and more efficient to process...
So there is always an index pointing to a specific item, processing is faster, and there is a relationship between the items as well...
Heap: no order; processing is slower and the values are mixed together with no specific order or index... they are random and there is no relationship between them... so execution and usage time can vary...
I also create the image below to show how they may look like:
A: stack, heap and data of each process in virtual memory:
A: In the following C# code
public void Method1()
{
int i = 4;
int y = 2;
class1 cls1 = new class1();
}
Here's how the memory is managed
Local Variables that only need to last as long as the function invocation go in the stack. The heap is used for variables whose lifetime we don't really know up front but we expect them to last a while. In most languages it's critical that we know at compile time how large a variable is if we want to store it on the stack.
Objects (which vary in size as we update them) go on the heap because we don't know at creation time how long they are going to last. In many languages the heap is garbage collected to find objects (such as the cls1 object) that no longer have any references.
In Java, most objects go directly into the heap. In languages like C / C++, structs and classes can often remain on the stack when you're not dealing with pointers.
More information can be found here:
The difference between stack and heap memory allocation « timmurphy.org
and here:
Creating Objects on the Stack and Heap
This article is the source of picture above: Six important .NET concepts: Stack, heap, value types, reference types, boxing, and unboxing - CodeProject
but be aware it may contain some inaccuracies.
A: In the 1980s, UNIX propagated like bunnies with big companies rolling their own.
Exxon had one as did dozens of brand names lost to history.
How memory was laid out was at the discretion of the many implementors.
A typical C program was laid out flat in memory with
an opportunity to increase by changing the brk() value.
Typically, the HEAP was just below this brk value
and increasing brk increased the amount of available heap.
The single STACK was typically an area below HEAP which was a tract of memory
containing nothing of value until the top of the next fixed block of memory.
This next block was often CODE which could be overwritten by stack data
in one of the famous hacks of its era.
One typical memory block was BSS (a block of zero values)
which was accidentally not zeroed in one manufacturer's offering.
Another was DATA containing initialized values, including strings and numbers.
A third was CODE containing CRT (C runtime), main, functions, and libraries.
The advent of virtual memory in UNIX changes many of the constraints.
There is no objective reason why these blocks need be contiguous,
or fixed in size, or ordered a particular way now.
Of course, before UNIX was Multics which didn't suffer from these constraints.
Here is a schematic showing one of the memory layouts of that era.
A: A couple of cents: I think it will be good to draw memory graphically and more simply:
Arrows show where the stack and heap grow. The process stack size has a limit defined in the OS; thread stack size is usually limited by parameters in the thread-creation API. The heap is usually limited by the process's maximum virtual memory size, for example 2-4 GB for 32-bit.
So, simply put: the process heap is shared by the process and all threads inside it, and in the common case it is used for memory allocation with something like malloc().
The stack is quick memory that, in the common case, stores function return pointers and variables passed as parameters in function calls, plus local function variables.
A: Stack:
*
*Stored in computer RAM just like the heap.
*Variables created on the stack will go out of scope and are automatically deallocated.
*Much faster to allocate in comparison to variables on the heap.
*Implemented with an actual stack data structure.
*Stores local data, return addresses, used for parameter passing.
*Can have a stack overflow when too much of the stack is used (mostly from infinite or too deep recursion, very large allocations).
*Data created on the stack can be used without pointers.
*You would use the stack if you know exactly how much data you need to allocate before compile time and it is not too big.
*Usually has a maximum size already determined when your program starts.
Heap:
*
*Stored in computer RAM just like the stack.
*In C++, variables on the heap must be destroyed manually and never fall out of scope. The data is freed with delete, delete[], or free.
*Slower to allocate in comparison to variables on the stack.
*Used on demand to allocate a block of data for use by the program.
*Can have fragmentation when there are a lot of allocations and deallocations.
*In C++ or C, data created on the heap will be pointed to by pointers and allocated with new or malloc respectively.
*Can have allocation failures if too big of a buffer is requested to be allocated.
*You would use the heap if you don't know exactly how much data you will need at run time or if you need to allocate a lot of data.
*Responsible for memory leaks.
Example:
int foo()
{
char *pBuffer; //<--nothing allocated yet (excluding the pointer itself, which is allocated here on the stack).
bool b = true; // Allocated on the stack.
if(b)
{
//Create 500 bytes on the stack
char buffer[500];
//Create 500 bytes on the heap
pBuffer = new char[500];
}//<-- buffer is deallocated here, pBuffer is not
}//<--- oops there's a memory leak, I should have called delete[] pBuffer;
A: Since some answers went nitpicking, I'm going to contribute my mite.
Surprisingly, no one has mentioned that multiple (i.e. not related to the number of running OS-level threads) call stacks are to be found not only in exotic languages (PostScript) or platforms (Intel Itanium), but also in fibers, green threads and some implementations of coroutines.
Fibers, green threads and coroutines are in many ways similar, which leads to much confusion. The difference between fibers and green threads is that the former use cooperative multitasking, while the latter may feature either cooperative or preemptive one (or even both). For the distinction between fibers and coroutines, see here.
In any case, the purpose of both fibers, green threads and coroutines is having multiple functions executing concurrently, but not in parallel (see this SO question for the distinction) within a single OS-level thread, transferring control back and forth from one another in an organized fashion.
When using fibers, green threads or coroutines, you usually have a separate stack per function. (Technically, not just a stack but a whole context of execution is per function. Most importantly, CPU registers.) For every thread there're as many stacks as there're concurrently running functions, and the thread is switching between executing each function according to the logic of your program. When a function runs to its end, its stack is destroyed. So, the number and lifetimes of stacks are dynamic and are not determined by the number of OS-level threads!
Note that I said "usually have a separate stack per function". There're both stackful and stackless implementations of couroutines. Most notable stackful C++ implementations are Boost.Coroutine and Microsoft PPL's async/await. (However, C++'s resumable functions (a.k.a. "async and await"), which were proposed to C++17, are likely to use stackless coroutines.)
Fibers proposal to the C++ standard library is forthcoming. Also, there're some third-party libraries. Green threads are extremely popular in languages like Python and Ruby.
A: I have something to share, although the major points are already covered.
Stack
*
*Very fast access.
*Stored in RAM.
*Function calls are loaded here along with the local variables and function parameters passed.
*Space is freed automatically when execution goes out of the variable's scope.
*Stored in sequential memory.
Heap
*
*Slower access compared to the stack.
*Stored in RAM.
*Dynamically created variables are stored here, which later requires freeing the allocated memory after use.
*Stored wherever memory allocation is done, accessed by pointer always.
Interesting note:
*
*Should the function calls have been stored in the heap, it would have resulted in 2 messy points:
*
*Due to sequential storage in the stack, execution is faster. Storage in the heap would have resulted in huge time consumption, thus making the whole program execute slower.
*If functions were stored in the heap (messy storage pointed to by pointers), there would have been no way to return to the caller's address (which the stack gives due to sequential storage in memory).
A: Other answers just avoid explaining what static allocation means. So I will explain the three main forms of allocation and how they usually relate to the heap, stack, and data segment below. I also will show some examples in both C/C++ and Python to help people understand.
"Static" (AKA statically allocated) variables are not allocated on the stack. Do not assume so - many people do only because "static" sounds a lot like "stack". They actually exist in neither the stack nor the heap. They are part of what's called the data segment.
However, it is generally better to consider "scope" and "lifetime" rather than "stack" and "heap".
Scope refers to what parts of the code can access a variable. Generally we think of local scope (can only be accessed by the current function) versus global scope (can be accessed anywhere) although scope can get much more complex.
Lifetime refers to when a variable is allocated and deallocated during program execution. Usually we think of static allocation (variable will persist through the entire duration of the program, making it useful for storing the same information across several function calls) versus automatic allocation (variable only persists during a single call to a function, making it useful for storing information that is only used during your function and can be discarded once you are done) versus dynamic allocation (variables whose duration is defined at runtime, instead of compile time like static or automatic).
Although most compilers and interpreters implement this behavior similarly in terms of using stacks, heaps, etc, a compiler may sometimes break these conventions if it wants as long as behavior is correct. For instance, due to optimization a local variable may only exist in a register or be removed entirely, even though most local variables exist in the stack. As has been pointed out in a few comments, you are free to implement a compiler that doesn't even use a stack or a heap, but instead some other storage mechanisms (rarely done, since stacks and heaps are great for this).
I will provide some simple annotated C code to illustrate all of this. The best way to learn is to run a program under a debugger and watch the behavior. If you prefer to read python, skip to the end of the answer :)
// Statically allocated in the data segment when the program/DLL is first loaded
// Deallocated when the program/DLL exits
// scope - can be accessed from anywhere in the code
int someGlobalVariable;
// Statically allocated in the data segment when the program is first loaded
// Deallocated when the program/DLL exits
// scope - can be accessed from anywhere in this particular code file
static int someStaticVariable;
// "someArgument" is allocated on the stack each time MyFunction is called
// "someArgument" is deallocated when MyFunction returns
// scope - can be accessed only within MyFunction()
void MyFunction(int someArgument) {
// Statically allocated in the data segment when the program is first loaded
// Deallocated when the program/DLL exits
// scope - can be accessed only within MyFunction()
static int someLocalStaticVariable;
// Allocated on the stack each time MyFunction is called
// Deallocated when MyFunction returns
// scope - can be accessed only within MyFunction()
int someLocalVariable;
// A *pointer* is allocated on the stack each time MyFunction is called
// This pointer is deallocated when MyFunction returns
// scope - the pointer can be accessed only within MyFunction()
int* someDynamicVariable;
// This line causes space for an integer to be allocated in the heap
// when this line is executed. Note this is not at the beginning of
// the call to MyFunction(), like the automatic variables
// scope - only code within MyFunction() can access this space
// *through this particular variable*.
// However, if you pass the address somewhere else, that code
// can access it too
someDynamicVariable = new int;
// This line deallocates the space for the integer in the heap.
// If we did not write it, the memory would be "leaked".
// Note a fundamental difference between the stack and heap
// the heap must be managed. The stack is managed for us.
delete someDynamicVariable;
// In other cases, instead of deallocating this heap space you
// might store the address somewhere more permanent to use later.
// Some languages even take care of deallocation for you... but
// always it needs to be taken care of at runtime by some mechanism.
// When the function returns, someArgument, someLocalVariable
// and the pointer someDynamicVariable are deallocated.
// The space pointed to by someDynamicVariable was already
// deallocated prior to returning.
return;
}
// Note that someGlobalVariable, someStaticVariable and
// someLocalStaticVariable continue to exist, and are not
// deallocated until the program exits.
A particularly poignant example of why it's important to distinguish between lifetime and scope is that a variable can have local scope but static lifetime - for instance, "someLocalStaticVariable" in the code sample above. Such variables can make our common but informal naming habits very confusing. For instance when we say "local" we usually mean "locally scoped automatically allocated variable" and when we say global we usually mean "globally scoped statically allocated variable". Unfortunately when it comes to things like "file scoped statically allocated variables" many people just say... "huh???".
Some of the syntax choices in C/C++ exacerbate this problem - for instance many people think global variables are not "static" because of the syntax shown below.
int var1; // Has global scope and static allocation
static int var2; // Has file scope and static allocation
int main() {return 0;}
Note that putting the keyword "static" in the declaration above prevents var2 from having global scope. Nevertheless, the global var1 has static allocation. This is not intuitive! For this reason, I try to never use the word "static" when describing scope, and instead say something like "file" or "file limited" scope. However many people use the phrase "static" or "static scope" to describe a variable that can only be accessed from one code file. In the context of lifetime, "static" always means the variable is allocated at program start and deallocated when program exits.
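As a small, hedged illustration of "local scope but static lifetime" (the function name here is made up), a function-local static keeps its value across calls even though only that function can name it:
#include <stdio.h>
int nextTicket() {
    static int counter = 0;   // static lifetime: allocated once, lives until the program exits
    return ++counter;         // local scope: only code inside nextTicket() can refer to it
}
int main() {
    printf("%d\n", nextTicket());   // prints 1
    printf("%d\n", nextTicket());   // prints 2
    printf("%d\n", nextTicket());   // prints 3
    return 0;
}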
Some people think of these concepts as C/C++ specific. They are not. For instance, the Python sample below illustrates all three types of allocation (there are some subtle differences possible in interpreted languages that I won't get into here).
from datetime import datetime
class Animal:
    _FavoriteFood = 'Undefined' # _FavoriteFood is statically allocated

    def PetAnimal(self):
        curTime = datetime.time(datetime.now()) # curTime is automatically allocated
        print("Thank you for petting me. But it's " + str(curTime) + ", you should feed me. My favorite food is " + self._FavoriteFood)

class Cat(Animal):
    _FavoriteFood = 'tuna' # Note since we override, Cat class has its own statically allocated _FavoriteFood variable, different from Animal's

class Dog(Animal):
    _FavoriteFood = 'steak' # Likewise, the Dog class gets its own static variable. Important to note - this one static variable is shared among all instances of Dog, hence it is not dynamic!

if __name__ == "__main__":
    whiskers = Cat() # Dynamically allocated
    fido = Dog() # Dynamically allocated
    rinTinTin = Dog() # Dynamically allocated

    whiskers.PetAnimal()
    fido.PetAnimal()
    rinTinTin.PetAnimal()

    Dog._FavoriteFood = 'milkbones'
    whiskers.PetAnimal()
    fido.PetAnimal()
    rinTinTin.PetAnimal()
# Output is:
# Thank you for petting me. But it's 13:05:02.255000, you should feed me. My favorite food is tuna
# Thank you for petting me. But it's 13:05:02.255000, you should feed me. My favorite food is steak
# Thank you for petting me. But it's 13:05:02.255000, you should feed me. My favorite food is steak
# Thank you for petting me. But it's 13:05:02.255000, you should feed me. My favorite food is tuna
# Thank you for petting me. But it's 13:05:02.255000, you should feed me. My favorite food is milkbones
# Thank you for petting me. But it's 13:05:02.256000, you should feed me. My favorite food is milkbones
A: The Stack
When you call a function the arguments to that function plus some other overhead is put on the stack. Some info (such as where to go on return) is also stored there.
When you declare a variable inside your function, that variable is also allocated on the stack.
Deallocating the stack is pretty simple because you always deallocate in the reverse order in which you allocate. Stack stuff is added as you enter functions, the corresponding data is removed as you exit them. This means that you tend to stay within a small region of the stack unless you call lots of functions that call lots of other functions (or create a recursive solution).
The Heap
The heap is a generic name for where you put the data that you create on the fly. If you don't know how many spaceships your program is going to create, you are likely to use the new (or malloc or equivalent) operator to create each spaceship. This allocation is going to stick around for a while, so it is likely we will free things in a different order than we created them.
Thus, the heap is far more complex, because there end up being regions of memory that are unused interleaved with chunks that are - memory gets fragmented. Finding free memory of the size you need is a difficult problem. This is why the heap should be avoided (though it is still often used).
Implementation
Implementation of both the stack and heap is usually down to the runtime / OS. Often games and other applications that are performance critical create their own memory solutions that grab a large chunk of memory from the heap and then dish it out internally to avoid relying on the OS for memory.
This is only practical if your memory usage is quite different from the norm - i.e for games where you load a level in one huge operation and can chuck the whole lot away in another huge operation.
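As a rough sketch of what such a custom solution can look like (the sizes, names and the lack of alignment handling, thread safety and error checking are all simplifications for illustration), a fixed-size block pool grabs one big chunk from the general heap up front and then hands blocks out from its own free list:
#include <cstdlib>
#include <cassert>
class FixedPool {
    struct Node { Node* next; };
    char* arena_;       // one big allocation obtained from the general heap up front
    Node* freeList_;    // intrusive free list threaded through the unused blocks
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount) {
        assert(blockSize >= sizeof(Node));
        arena_ = static_cast<char*>(std::malloc(blockSize * blockCount));
        freeList_ = nullptr;
        for (std::size_t i = 0; i < blockCount; ++i) {      // chain every block onto the free list
            Node* n = reinterpret_cast<Node*>(arena_ + i * blockSize);
            n->next = freeList_;
            freeList_ = n;
        }
    }
    ~FixedPool() { std::free(arena_); }    // the whole pool is thrown away at once

    void* allocate() {                     // O(1): pop the first free block, no OS call involved
        if (!freeList_) return nullptr;
        Node* n = freeList_;
        freeList_ = n->next;
        return n;
    }
    void deallocate(void* p) {             // O(1): push the block back onto the free list
        Node* n = static_cast<Node*>(p);
        n->next = freeList_;
        freeList_ = n;
    }
};
Typical use would be something like FixedPool pool(64, 100000); followed by pool.allocate() / pool.deallocate() for every game object of that size, with the entire pool destroyed when the level is unloaded.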
Physical location in memory
This is less relevant than you think because of a technology called Virtual Memory, which makes your program think that you have access to a certain address where the physical data is somewhere else (even on the hard disc!). The addresses you get for the stack are in increasing order as your call tree gets deeper. The addresses for the heap are unpredictable (i.e. implementation specific) and frankly not important.
A:
The stack is essentially an easy-to-access memory that simply manages its items
as a - well - stack. Only items for which the size is known in advance can go onto the stack. This is the case for numbers, strings, booleans.
The heap is a memory for items of which you can’t predetermine the
exact size and structure. Since objects and arrays can be mutated and
change at runtime, they have to go into the heap.
Source: Academind
A: I feel most answers are very convoluted and technical, while I didn't find one that could explain simply the reasoning behind those two concepts (i.e. why people created them in the first place?) and why you should care. Here is my attempt at one:
Data on the Stack is temporary and auto-cleaning
Data on the Heap is permanent until manually deleted
That's it.
Still, for more explanations :
The stack is meant to be used as the ephemeral or working memory, a memory space that we know will be entirely deleted regularly no matter what mess we put in there during the lifetime of our program. That's like the memo on your desk that you scribble on with anything going through your mind that you barely feel may be important, which you know you will just throw away at the end of the day because you will have filtered and organized the actual important notes in another medium, like a document or a book. We don't care for presentation, crossing-outs or unintelligible text, this is just for our work of the day and will remember what we meant an hour or two ago, it's just our quick and dirty way to store ideas we want to remember later without hurting our current stream of thoughts. That's what people mean by "the stack is the scratchpad".
The heap, however, is the long-term memory, the actual important document that will be stored, consulted and depended on for a very long time after its creation. It consequently needs to have perfect form and strictly contain the important data. That's why it costs a lot to make and can't be used for the use-case of our preceding memo. It wouldn't be worthwhile, and would even be useless, to take all my notes in the form of an academic paper presentation, writing the text as calligraphy. However, this presentation is extremely useful for well-curated data. That's what the heap is meant to be: well-known data, important for the lifetime of the application, which is well controlled and needed in many places in your code. The system will thus never delete this precious data without you explicitly asking for it, because it knows "that's where the important data is!".
This is why you need to manage and take care of memory allocation on the heap, but don't need to bother with it for the stack.
Most top answers are merely technical details of the actual implementations of that concept in real computers.
So what to take away from this is that:
Unimportant, working, temporary, data just needed to make our functions and objects work is (generally) more relevant to be stored on the stack.
Important, permanent and foundational application data is (generally) more relevant to be stored on the heap.
This of course needs to be thought of only in the context of the lifetime of your program. Actual humanly important data generated by your program will need to be stored on an external file evidently. (Since whether it is the heap or the stack, they are both cleared entirely when your program terminates.)
PS: Those are just general rules, you can always find edge cases and each language comes with its own implementation and resulting quirks, this is meant to be taken as a guidance to the concept and a rule of thumb.
A: Others have answered the broad strokes pretty well, so I'll throw in a few details.
*
*Stack and heap need not be singular. A common situation in which you have more than one stack is if you have more than one thread in a process. In this case each thread has its own stack. You can also have more than one heap, for example some DLL configurations can result in different DLLs allocating from different heaps, which is why it's generally a bad idea to release memory allocated by a different library.
*In C you can get the benefit of variable length allocation through the use of alloca, which allocates on the stack, as opposed to malloc, which allocates on the heap. This memory won't survive your return statement, but it's useful for a scratch buffer (see the sketch just after this list).
*Making a huge temporary buffer on Windows that you don't use much of is not free. This is because the compiler will generate a stack probe loop that is called every time your function is entered to make sure the stack exists (because Windows uses a single guard page at the end of your stack to detect when it needs to grow the stack. If you access memory more than one page off the end of the stack you will crash). Example:
void myfunction()
{
char big[10000000];
// Do something that only uses for first 1K of big 99% of the time.
}
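Returning to the alloca point above, here is a minimal sketch; alloca is non-standard, so the header and exact spelling are an assumption about the toolchain (glibc shown; MSVC uses <malloc.h> and _alloca), and the buffer must never be used after the function returns:
#include <stdio.h>
#include <string.h>
#include <alloca.h>    // glibc; on MSVC include <malloc.h> and call _alloca instead
void printUpper(const char* s) {
    size_t n = strlen(s);
    char* buf = (char*)alloca(n + 1);         // scratch buffer allocated on the *stack*
    for (size_t i = 0; i <= n; ++i)           // copy, upper-casing ASCII letters ('\0' copied as-is)
        buf[i] = (s[i] >= 'a' && s[i] <= 'z') ? s[i] - 32 : s[i];
    puts(buf);
}   // buf disappears here with the stack frame: no free() needed, but it must not be returned
int main() {
    printUpper("stack scratch space");        // prints STACK SCRATCH SPACE
    return 0;
}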
A: Wow! So many answers and I don't think one of them got it right...
1) Where and what are they (physically in a real computer's memory)?
The stack is memory that begins as the highest memory address allocated to your program image, and it then decreases in value from there. It is reserved for called function parameters and for all temporary variables used in functions.
There are two heaps: public and private.
The private heap begins on a 16-byte boundary (for 64-bit programs) or an 8-byte boundary (for 32-bit programs) after the last byte of code in your program, and then increases in value from there. It is also called the default heap.
If the private heap gets too large it will overlap the stack area, as will the stack overlap the heap if it gets too big. Because the stack starts at a higher address and works its way down to a lower address, with proper hacking you can make the stack so large that it will overrun the private heap area and overlap the code area. The trick then is to overlap enough of the code area that you can hook into the code. It's a little tricky to do and you risk a program crash, but it's easy and very effective.
The public heap resides in its own memory space outside of your program image space. It is this memory that will be siphoned off onto the hard disk if memory resources get scarce.
2) To what extent are they controlled by the OS or language runtime?
The stack is controlled by the programmer, the private heap is managed by the OS, and the public heap is not controlled by anyone because it is an OS service -- you make requests and either they are granted or denied.
2b) What is their scope?
They are all global to the program, but their contents can be private, public, or global.
2c) What determines the size of each of them?
The size of the stack and the private heap are determined by your compiler runtime options. The public heap is initialized at runtime using a size parameter.
2d) What makes one faster?
They are not designed to be fast, they are designed to be useful. How the programmer utilizes them determines whether they are "fast" or "slow"
REF:
https://norasandler.com/2019/02/18/Write-a-Compiler-10.html
https://learn.microsoft.com/en-us/windows/desktop/api/heapapi/nf-heapapi-getprocessheap
https://learn.microsoft.com/en-us/windows/desktop/api/heapapi/nf-heapapi-heapcreate
A: Others have directly answered your question, but when trying to understand the stack and the heap, I think it is helpful to consider the memory layout of a traditional UNIX process (without threads and mmap()-based allocators). The Memory Management Glossary web page has a diagram of this memory layout.
The stack and heap are traditionally located at opposite ends of the process's virtual address space. The stack grows automatically when accessed, up to a size set by the kernel (which can be adjusted with setrlimit(RLIMIT_STACK, ...)). The heap grows when the memory allocator invokes the brk() or sbrk() system call, mapping more pages of physical memory into the process's virtual address space.
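On Linux you can actually watch the program break move when the allocator asks the kernel for more heap; here is a rough sketch (glibc-specific, and with the caveat that modern allocators may also satisfy requests with mmap() or from memory they already obtained, so the break does not move on every malloc):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main() {
    void* before = sbrk(0);                 // current "program break" = end of the heap segment
    for (int i = 0; i < 10000; ++i)
        (void)malloc(1024);                 // many small blocks; glibc grows the main heap via brk/sbrk
    void* after = sbrk(0);
    printf("program break moved from %p to %p\n", before, after);
    return 0;                               // blocks deliberately leaked: the process is exiting anyway
}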
In systems without virtual memory, such as some embedded systems, the same basic layout often applies, except the stack and heap are fixed in size. However, in other embedded systems (such as those based on Microchip PIC microcontrollers), the program stack is a separate block of memory that is not addressable by data movement instructions, and can only be modified or read indirectly through program flow instructions (call, return, etc.). Other architectures, such as Intel Itanium processors, have multiple stacks. In this sense, the stack is an element of the CPU architecture.
A: The most important point is that heap and stack are generic terms for ways in which memory can be allocated. They can be implemented in many different ways, and the terms apply to the basic concepts.
*
*In a stack of items, items sit one on top of the other in the order they were placed there, and you can only remove the top one (without toppling the whole thing over).
The simplicity of a stack is that you do not need to maintain a table containing a record of each section of allocated memory; the only state information you need is a single pointer to the end of the stack. To allocate and de-allocate, you just increment and decrement that single pointer. Note: a stack can sometimes be implemented to start at the top of a section of memory and extend downwards rather than growing upwards. (A toy sketch of this single-pointer bookkeeping appears after this list.)
*In a heap, there is no particular order to the way items are placed. You can reach in and remove items in any order because there is no clear 'top' item.
Heap allocation requires maintaining a full record of what memory is allocated and what isn't, as well as some overhead maintenance to reduce fragmentation, find contiguous memory segments big enough to fit the requested size, and so on. Memory can be deallocated at any time leaving free space. Sometimes a memory allocator will perform maintenance tasks such as defragmenting memory by moving allocated memory around, or garbage collecting - identifying at runtime when memory is no longer in scope and deallocating it.
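A toy model of that single-pointer bookkeeping (purely illustrative: the real call stack is managed by the CPU and compiler, and this ignores alignment and overflow checks):
#include <cstddef>
char region[1024];        // pretend this array is the stack's memory
std::size_t top = 0;      // the only state we need: one "stack pointer"
void* stackAlloc(std::size_t n) { void* p = &region[top]; top += n; return p; }  // allocate = bump the pointer
void  stackFree(std::size_t n)  { top -= n; }  // deallocate = move it back; frees must come in reverse order
int main() {
    void* outer = stackAlloc(100);   // like entering a function that needs 100 bytes of locals
    void* inner = stackAlloc(200);   // a nested call that needs 200 bytes
    stackFree(200);                  // the inner call returns first...
    stackFree(100);                  // ...then the outer one, strictly last-in-first-out
    (void)outer; (void)inner;
    return 0;
}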
These images should do a fairly good job of describing the two ways of allocating and freeing memory in a stack and a heap. Yum!
*
*To what extent are they controlled by the OS or language runtime?
As mentioned, heap and stack are general terms, and can be implemented in many ways. Computer programs typically have a stack called a call stack which stores information relevant to the current function such as a pointer to whichever function it was called from, and any local variables. Because functions call other functions and then return, the stack grows and shrinks to hold information from the functions further down the call stack. A program doesn't really have runtime control over it; it's determined by the programming language, OS and even the system architecture.
A heap is a general term used for any memory that is allocated dynamically and randomly; i.e. out of order. The memory is typically allocated by the OS, with the application calling API functions to do this allocation. There is a fair bit of overhead required in managing dynamically allocated memory, which is usually handled by the runtime code of the programming language or environment used.
*What is their scope?
The call stack is such a low level concept that it doesn't relate to 'scope' in the sense of programming. If you disassemble some code you'll see relative pointer style references to portions of the stack, but as far as a higher level language is concerned, the language imposes its own rules of scope. One important aspect of a stack, however, is that once a function returns, anything local to that function is immediately freed from the stack. That works the way you'd expect it to work given how your programming languages work. In a heap, it's also difficult to define. The scope is whatever is exposed by the OS, but your programming language probably adds its rules about what a "scope" is in your application. The processor architecture and the OS use virtual addressing, which the processor translates to physical addresses and there are page faults, etc. They keep track of what pages belong to which applications. You never really need to worry about this, though, because you just use whatever method your programming language uses to allocate and free memory, and check for errors (if the allocation/freeing fails for any reason).
*What determines the size of each of them?
Again, it depends on the language, compiler, operating system and architecture. A stack is usually pre-allocated, because by definition it must be contiguous memory. The language compiler or the OS determine its size. You don't store huge chunks of data on the stack, so it'll be big enough that it should never be fully used, except in cases of unwanted endless recursion (hence, "stack overflow") or other unusual programming decisions.
A heap is a general term for anything that can be dynamically allocated. Depending on which way you look at it, it is constantly changing size. In modern processors and operating systems the exact way it works is very abstracted anyway, so you don't normally need to worry much about how it works deep down, except that (in languages where it lets you) you mustn't use memory that you haven't allocated yet or memory that you have freed.
*What makes one faster?
The stack is faster because all free memory is always contiguous. No list needs to be maintained of all the segments of free memory, just a single pointer to the current top of the stack. Compilers usually store this pointer in a special, fast register for this purpose. What's more, subsequent operations on a stack are usually concentrated within very nearby areas of memory, which at a very low level is good for optimization by the processor on-die caches.
A: What is a stack?
A stack is a pile of objects, typically one that is neatly arranged.
Stacks in computing architectures are regions of memory where data is added or removed in a last-in-first-out manner.
In a multi-threaded application, each thread will have its own stack.
What is a heap?
A heap is an untidy collection of things piled up haphazardly.
In computing architectures the heap is an area of dynamically-allocated memory that is managed automatically by the operating system or the memory manager library.
Memory on the heap is allocated, deallocated, and resized regularly during program execution, and this can lead to a problem called fragmentation.
Fragmentation occurs when memory objects are allocated with small spaces in between that are too small to hold additional memory objects.
The net result is a percentage of the heap space that is not usable for further memory allocations.
Both together
In a multi-threaded application, each thread will have its own stack. But, all the different threads will share the heap.
Because the different threads share the heap in a multi-threaded application, this also means that there has to be some coordination between the threads so that they don’t try to access and manipulate the same piece(s) of memory in the heap at the same time.
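A small C++11 sketch of that coordination (names arbitrary): each thread keeps its locals on its own stack, while both push into the same heap-backed vector, so access to the shared structure is serialized with a mutex:
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>
std::vector<int> shared;   // the vector's element buffer lives on the heap, shared by all threads
std::mutex guard;          // coordinates access to that shared heap data
void worker(int id) {
    int local = id * 100;                      // 'local' lives on THIS thread's own stack
    std::lock_guard<std::mutex> lock(guard);   // without this, concurrent push_back is a data race
    shared.push_back(local);
}
int main() {
    std::thread t1(worker, 1);
    std::thread t2(worker, 2);
    t1.join();
    t2.join();
    for (int v : shared) std::cout << v << '\n';   // 100 and 200, in whichever order the threads ran
    return 0;
}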
Which is faster – the stack or the heap? And why?
The stack is much faster than the heap.
This is because of the way that memory is allocated on the stack.
Allocating memory on the stack is as simple as moving the stack pointer up.
For people new to programming, it’s probably a good idea to use the stack since it’s easier.
Because the stack is small, you would want to use it when you know exactly how much memory you will need for your data, or if you know the size of your data is very small.
It’s better to use the heap when you know that you will need a lot of memory for your data, or you just are not sure how much memory you will need (like with a dynamic array).
Java Memory Model
The stack is the area of memory where local variables (including method parameters) are stored. When it comes to object variables, these are merely references (pointers) to the actual objects on the heap.
Every time an object is instantiated, a chunk of heap memory is set aside to hold the data (state) of that object. Since objects can contain other objects, some of this data can in fact hold references to those nested objects.
A: The stack is a portion of memory that can be manipulated via several key assembly language instructions, such as 'pop' (remove and return a value from the stack) and 'push' (push a value to the stack), but also call (call a subroutine - this pushes the address to return to the stack) and return (return from a subroutine - this pops the address off of the stack and jumps to it). It's the region of memory below the stack pointer register, which can be set as needed. The stack is also used for passing arguments to subroutines, and also for preserving the values in registers before calling subroutines.
The heap is a portion of memory that is given to an application by the operating system, typically through a syscall like malloc. On modern OSes this memory is a set of pages that only the calling process has access to.
The size of the stack is determined at runtime, and generally does not grow after the program launches. In a C program, the stack needs to be large enough to hold every variable declared within each function on the deepest call chain. The heap will grow dynamically as needed, but the OS is ultimately making the call (it will often grow the heap by more than the value requested by malloc, so that at least some future mallocs won't need to go back to the kernel to get more memory. This behavior is often customizable).
Because you've allocated the stack before launching the program, you never need to malloc before you can use the stack, so that's a slight advantage there. In practice, it's very hard to predict what will be fast and what will be slow in modern operating systems that have virtual memory subsystems, because how the pages are implemented and where they are stored is an implementation detail.
A: I think many other people have given you mostly correct answers on this matter.
One detail that has been missed, however, is that the "heap" should in fact probably be called the "free store". The reason for this distinction is that the original free store was implemented with a data structure known as a "binomial heap." For that reason, allocating from early implementations of malloc()/free() was allocation from a heap. However, in this modern day, most free stores are implemented with very elaborate data structures that are not binomial heaps.
A: A lot of answers are correct as concepts, but we must note that a stack is needed by the hardware (i.e. microprocessor) to allow calling subroutines (CALL in assembly language..). (OOP guys will call it methods)
On the stack you save return addresses; call → push and ret → pop are managed directly in hardware.
You can use the stack to pass parameters... even if it is slower than using registers (as a microprocessor guru, or a good 1980s BIOS book, would say...)
*
*Without stack no microprocessor can work. (we can't imagine a program, even in assembly language, without subroutines/functions)
*Without the heap it can. (An assembly language program can work without one, as the heap is an OS concept, like malloc, which is an OS/library call.)
Stack usage is faster as:
*
*It is implemented in hardware, and push/pop are very efficient.
*malloc may require entering kernel mode (when more memory has to be requested from the OS), uses locks/semaphores (or other synchronization primitives), executes some code and manages the structures needed to keep track of allocations.
A:
Where and what are they (physically in a real computer's memory)?
ANSWER: Both are in RAM.
ASIDE:
RAM is like a desk and HDDs/SSDs (permanent storage) are like bookshelves. To read anything, you must have a book open on your desk, and you can only have as many books open as fit on your desk. To get a book, you pull it from your bookshelf and open it on your desk. To return a book, you close the book on your desk and return it to its bookshelf.
Stack and heap are names we give to two ways compilers store different kinds of data in the same place (i.e. in RAM).
What is their scope?
What determines the size of each of them?
What makes one faster?
ANSWER:
*
*The stack is for static (fixed size) data
a. At compile time, the compiler reads the variable types used in your code.
i. It allocates a fixed amount of memory for these variables.
ii. The size of this memory cannot grow.
b. The memory is contiguous (a single block), so access is sometimes faster than the heap
c. An object placed on the stack that grows in memory during runtime beyond the size of the stack causes a stack overflow error
*The heap is for dynamic (changing size) data
a. The amount of memory is limited only by the amount of empty space available in RAM
i. The amount used can grow or shrink as needed at runtime
b. Since items are allocated on the heap by finding empty space wherever it exists in RAM, data is not always in a contiguous section, which sometimes makes access slower than the stack
c. Programmers manually put items on the heap with the new keyword and MUST manually deallocate this memory when they are finished using it.
i. Code that repeatedly allocates new memory without deallocating it when it is no longer needed leads to a memory leak.
ASIDE:
The stack and heap were not primarily introduced to improve speed; they were introduced to handle memory overflow. The first concern regarding use of the stack vs. the heap should be whether memory overflow will occur. If an object is intended to grow in size to an unknown amount (like a linked list or an object whose members can hold an arbitrary amount of data), place it on the heap. As far as possible, use the C++ standard library (STL) containers vector, map, and list as they are memory and speed efficient and added to make your life easier (you don't need to worry about memory allocation/deallocation).
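For example, a minimal sketch of letting std::vector own the heap memory rather than juggling new[]/delete[] by hand:
#include <iostream>
#include <vector>
int main() {
    std::vector<int> samples;          // small control block on the stack...
    for (int i = 0; i < 1000; ++i)
        samples.push_back(i * i);      // ...elements on the heap, grown automatically as needed
    std::cout << samples.size() << " values stored\n";
    return 0;
}                                      // destructor frees the heap memory: no leak, no delete[]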
After getting your code to run, if you find it is running unacceptably slow, then go back and refactor your code and see if it can be programmed more efficiently. It may turn out the problem has nothing to do with the stack or heap directly at all (e.g. use an iterative algorithm instead of a recursive one, look at I/O vs. CPU-bound tasks, perhaps add multithreading or multiprocessing).
I say sometimes slower/faster above because the speed of the program might not have anything to do with items being allocated on the stack or heap.
To what extent are they controlled by the OS or language run-time?
ANSWER:
*
*The stack size is determined at compile time by the compiler.
*The heap size varies during runtime. (The heap works with the OS during runtime to allocate memory.)
ASIDE:
Below is a little more about control and compile-time vs. runtime operations.
Each computer has a unique instruction set architecture (ISA), which are its hardware commands (e.g. "MOVE", "JUMP", "ADD", etc.).
*
*An OS is nothing more than a resource manager (controls how/when/ and where to use memory, processors, devices, and information).
*The ISA of the OS is called the bare machine and the remaining commands are called the extended machine. The kernel is the first layer of the extended machine. It controls things like
*
*determining what tasks get to use a processor (the scheduler),
*how much memory or how many hardware registers to allocate to a task (the dispatcher), and
*the order in which tasks should be performed (the traffic controller).
*When we say "compiler", we generally mean the compiler, assembler, and linker together
*
*The compiler turns source code into assembly language and passes it to the assembler,
*The assembler turns the assembly language into machine code (ISA commands), and passes it to the linker
*The linker takes all machine code (possibly generated from multiple source files) and combines it into one program.
*The machine code gets passed to the kernel when executed, which determines when it should run and take control, but the machine code itself contains ISA commands for requesting files, requesting memory, etc. So the code issues ISA commands, but everything has to pass by the kernel.
A: The CPU stack and heap are physically related to how the CPU and registers work with memory and how machine/assembly language works, not to high-level languages themselves, even if these languages can decide little things.
All modern CPUs work with the "same" microprocessor theory: they are all based on what are called "registers", and some of them are dedicated to the "stack" to gain performance. CPUs have had stack registers since the beginning, and they have always been there, so to speak, as far as I know. Assembly languages have been the same since the beginning, despite variations... up to Microsoft and its Intermediate Language (IL), which changed the paradigm to an object-oriented virtual machine assembly language. So we may have some CLI/CIL CPU in the future (one project of MS).
CPUs have stack registers to speed up memory access, but they are limited compared to using other registers to get full access to all the memory available to the process. That is why we talk about stack and heap allocations.
In summary, and in general, the heap is huge and slow and is for "global" instances and object content, while the stack is small and fast and is for "local" variables and references (hidden pointers, so you can forget about managing them).
So when we use the new keyword in a method, the reference (an int) is created in the stack, but the object and all its content (value-types as well as objects) is created in the heap, if I remember. But local elementary value-types and arrays are created in the stack.
The difference in memory access is at the cell-referencing level: addressing the heap, the overall memory of the process, requires more complexity in terms of handling CPU registers than the stack, which is "more local" in terms of addressing because the CPU stack register is used as the base address, if I remember correctly.
That is why, when we have very long or infinite recursive calls or loops, we get a stack overflow quickly, without freezing the system on modern computers...
C# Heap(ing) Vs Stack(ing) In .NET
Stack vs Heap: Know the Difference
Static class memory allocation where it is stored C#
What and where are the stack and heap?
https://en.wikipedia.org/wiki/Memory_management
https://en.wikipedia.org/wiki/Stack_register
Assembly language resources:
Assembly Programming Tutorial
Intel® 64 and IA-32 Architectures Software Developer Manuals
A: When a process is created then after loading code and data OS setup heap start just after data ends and stack to top of address space based on architecture
When more heap is required OS will allocate dynamically and heap chunk is always virtually contiguous
Please see brk(), sbrk() and alloca() system call in linux
A: Thank you for a really good discussion but as a real noob I wonder where instructions are kept? In the BEGINNING scientists were deciding between two architectures (von NEUMANN where everything is considered DATA and HARVARD where an area of memory was reserved for instructions and another for data). Ultimately, we went with the von Neumann design and now everything is considered 'the same'. This made it hard for me when I was learning assembly
https://www.cs.virginia.edu/~evans/cs216/guides/x86.html
because they talk about registers and stack pointers.
Everything above talks about DATA. My guess is that since an instruction is a defined thing with a specific memory footprint, it would go on the stack and so all 'those' registers discussed in assembly are on the stack. Of course then came object oriented programming with instructions and data comingled into a structure that was dynamic so now instructions would be kept on the heap as well?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9225"
} |
Q: What is the best way to change the encoding of text in PHP I want to run text through a filter to ensure it is all UTF-8 encoded. What is the recommended way to do this with PHP?
A: Your question is unclear, are you trying to encode something? If so utf8_encode is your friend. Are you trying to determine if it doesn't need to be encoded? If so, utf8_encode is still your friend, because you can check that the result is the same as the input!
A: Check the multi-byte string functions here
A: You need to know in what character set your input string is encoded, or this will go nowhere fast.
If you want to do it correctly, this article may be helpful: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
A: Given a stream of bytes, you have to know what encoding it is to begin with - email use mime headers to specify the encoding, http uses http headers to specify the encoding. Also, you can specify the encoding in a meta tag in a web page, but it is not always honored.
Anyway, once you know what encoding you want to convert from, use iconv to convert it to utf8. look at the iconv section of the php docs, there's lots of good info there.
Ah, Thomas posted the link I was looking for. A must read.
A: The easiest way to check for UTF-8 validity:
*
*If only one line allowed:
preg_match('/^.*$/Du', $value)
*If multiple lines allowed:
preg_match('/^.*$/sDu', $value)
This works for PHP >= 4.3.5 and does not require any non-default PHP modules.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What was the name of the Mac (68000) assembler? I'm sure there were several, but the one I was thinking of would display a nice text screen when you crashed the computer thoroughly.
The Text was "Well smoke me a kipper."
A: No one else has answered this, so I'll answer the part I can: the original Macintosh debugger was MacsBug.
So far as the kipper quote goes, the only thing that comes to mind is the Pathogen computer virus.
A: The name of the assembler was Fantasm.
A: There was another 68000 Macintosh debugger called TMON. I don't remember the kipper quote being in it, but it's been a while.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is there an equivalent to Java's Robot class (java.awt.Robot) for Perl? Is there an equivalent to Java's Robot class (java.awt.Robot) for Perl?
A: Alternatively, you can surely use the WWW::Mechanize module to create an agent as we do here at work. We have a tool called AppMon that is really just a dramatized wrapper around Mechanize.
The Mechanize module allows you to use scripts that look a lot like this:
use WWW::Mechanize;
my $Agent = WWW::Mechanize->new(cookie_jar => {});
$Agent->get("http://www.google.com/search?q=stack+overflow+mechanize");
print "Found Mechanize" $Agent->content =~ /WWW::Mechanize/;
and will result in "Found Mechanize" being output. This is a very simple script, but rest assured you can interact with forms quite well.
You can also move to Ruby and use Watir, or Selenium as another alternative, albeit not as interesting (in terms of coding) or automate-able. Selenium has a firefox extension that is quite useful for creating the selenium scripts and can change them between the various languages that it supports, which is pretty extensive in terms of automation.
Update - Nov 2016
Although I haven't had much of an opportunity to play with it, there are also webdriver packages for most languages, and Perl is no different.
Selenium::Remote::Driver
A: If you're looking for a way to control a browser for the purpose of functional testing, Selenium has Perl bindings: http://selenium.openqa.org/
A: For X (Linux/Unix), there's X11::GUITest.
For Windows, there's Win32::CtrlGUI, although it can be a bit tricky to install its prerequisites.
A: On Windows, I've always used Win32::GuiTest.
A: There is on Linux/Unix:
http://sourceforge.net/projects/x11guitest
I'm not familiar of anything similar for Windows or Mac that uses Perl.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to load Many to many LINQ query? I have the following (pretty standard) table structure:
Post <-> PostTag <-> Tag
Suppose I have the following records:
PostID Title
1, 'Foo'
2, 'Bar'
3, 'Baz'
TagID Name
1, 'Foo'
2, 'Bar'
PostID TagID
1 1
1 2
2 2
In other words, the first post has two tags, the second has one and the third one doesn't have any.
I'd like to load all posts and it's tags in one query but haven't been able to find the right combination of operators. I've been able to load either posts with tags only or repeated posts when more than one tag.
Given the database above, I'd like to receive three posts and their tags (if any) in a collection property of the Post objects. Is it possible at all?
Thanks
A: Yay! It worked.
If anyone is having the same problem here's what I did:
public IList<Post> GetPosts(int page, int record)
{
var options = new DataLoadOptions();
options.LoadWith<Post>(p => p.PostTags);
options.LoadWith<PostTag>(pt => pt.Tag);
using (var db = new DatabaseDataContext(m_connectionString))
{
var publishDateGmt = (from p in db.Posts
where p.Status != PostStatus.Hidden
orderby p.PublishDateGmt descending
select p.PublishDateGmt)
.Skip(page * record)
.Take(record)
.ToList()
.Last();
db.LoadOptions = options;
return (from p in db.Posts
where p.Status != PostStatus.Closed
&& p.PublishDateGmt >= publishDateGmt
orderby p.PublishDateGmt descending
select p)
.Skip(page * record)
.ToList();
}
}
This executes only two queries and loads all tags for each post.
The idea is to get some value to limit the query at the last post that we need (in this case the PublishDateGmt column will suffice) and then limit the second query with that value instead of Take().
Thanks for your help sirrocco.
A: It's a bit strange because
DataLoadOptions o = new DataLoadOptions ( );
o.LoadWith<Listing> ( l => l.ListingStaffs );
o.LoadWith<ListingStaff> ( ls => ls.MerchantStaff );
ctx.LoadOptions = o;
IQueryable<Listing> listings = (from a in ctx.Listings
where a.IsActive == false
select a);
List<Listing> list = listings.ToList ( );
results in a query like :
SELECT [t0].*, [t1].*, [t2].*, (
SELECT COUNT(*)
FROM [dbo].[LStaff] AS [t3]
INNER JOIN [dbo].[MStaff] AS [t4] ON [t4].[MStaffId] = [t3].[MStaffId]
WHERE [t3].[ListingId] = [t0].[ListingId]
) AS [value]
FROM [dbo].[Listing] AS [t0]
LEFT OUTER JOIN ([dbo].[LStaff] AS [t1]
INNER JOIN [dbo].[MStaff] AS [t2] ON [t2].[MStaffId] = [t1].[MStaffId]) ON
[t1].[LId] = [t0].[LId] WHERE NOT ([t0].[IsActive] = 1)
ORDER BY [t0].[LId], [t1].[LStaffId], [t2].[MStaffId]
(I've shortened the names and added the * on the select).
So it seems to do the select ok.
A: I'm sorry. The solution you give works, but I found out that it breaks when paginating with Take(N). The complete method I'm using is the following:
public IList<Post> GetPosts(int page, int records)
{
var options = new DataLoadOptions();
options.LoadWith<Post>(p => p.PostTags);
options.LoadWith<PostTag>(pt => pt.Tag);
using (var db = new BlogDataContext())
{
db.LoadOptions = options;
return (from p in db.Posts
where p.Status != PostStatus.Closed
orderby p.PublishDateGmt descending
select p)
.Skip(page * records)
//.Take(records)
.ToList();
}
}
With the Take() method commented it generates a query similar to to what you posted but if I add the Take() again it generates 1 + N x M queries.
So, I guess my question now is: Is there a replacement to the Take() method to paginate records?
Thanks
A: I've answered this in another post : About eager loading. In your case it would probably be something like :
DataLoadOptions options = new DataLoadOptions();
options.LoadWith<Post>(p => p.PostTag);
options.LoadWith<PostTag>(pt => pt.Tag);
Though be careful - the DataLoadOptions must be set BEFORE ANY query is sent to the database - if not, an exception is thrown (no idea why it's like this in Linq2Sql - probably will be fixed in a later version).
A: I'm sorry no, Eager Loading will execute one extra query per tag per post.
Tested with this code:
var options = new DataLoadOptions();
options.LoadWith<Post>(p => p.PostTags);
options.LoadWith<PostTag>(pt => pt.Tag);
using (var db = new BlogDataContext())
{
db.LoadOptions = options;
return (from p in db.Posts
where p.Status != PostStatus.Closed
orderby p.PublishDateGmt descending
select p);
}
In the example database it would execute 4 queries which is not acceptable in production. Can anyone suggest another solution?
Thanks
A: I know this is an old post, but I have discovered a way to use Take() while only performing one query. The trick is to perform the Take() inside of a nested query.
var q = from p in db.Posts
where db.Posts.Take(10).Contains(p)
select p;
Using DataLoadOptions with the query above will give you the first ten posts, including their associated tags, all in one query. The resulting SQL will be a much less concise version of the following:
SELECT p.PostID, p.Title, pt.PostID, pt.TagID, t.TagID, t.Name FROM Posts p
JOIN PostsTags pt ON p.PostID = pt.PostID
JOIN Tags t ON pt.TagID = t.TagID
WHERE p.PostID IN (SELECT TOP 10 PostID FROM Posts)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Best way of store only date on datetime field? Scenario:
A stored procedure receives from code a DateTime with, let's say DateTime.Now value, as a datetime parameter.
The stored procedure needs to store only the date part of the datetime on the row, but preserve all date-related arithmetic for, say, doing searches over time intervals and reports based on dates.
I know there are a couple of ways, but which is better, having in mind performance and wasted space?
A: Business logic should be handled outside of the proc. The proc's job should be to save the data passed to it. If the requirement is to only store the date and not the time, then the BL/DL should pass in DateTime.Now.Date (or the equivalent... basically the Date part of your DateTime object).
If you can't control the code for some reason, there's always convert(varchar(10), @YOURDATETIME, 101)
A: store the date with time = midnight
EDIT: i was assuming MS SQL Server
A: Essentially you're only going to store the Date part of your DateTime object. This means regardless of how you wish to handle querying the data the Date returned will always be set to 00:00:00.
Time related functions are useless in this scenario (even though your original DateTime object uses them) as your database drops this info.
Date related arithmetics will still apply though you will have to assume a time of midnight for each date returned from the database.
A: SQL Server 2008 has a date only type (DATE) that does not store the time. Consider upgrading.
http://www.sqlteam.com/article/using-the-date-data-type-in-sql-server-2008
A: If you're working on Oracle, inside your stored procedure use the TRUNC function on the datetime. This will return ONLY the date portion.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Reporting with db4o I've used db4o with much success on many projects in the past. Over time it seems to have evolved greatly, and with modern trends like LINQ on everyone's tongue it has piqued my interest again, especially now that I know that it is starting to support transparent activation and persistence which intrigue me quite a bit, but a friend posed a very good question to me when I first mentioned db4o and, even with modern innovation, I'm still not sure how to answer it.
What are the best/fastest/most common methods to generate reports similar to the large cross-table complex constraint reports that can be done so effectively on platforms such as SQL? I understand quite well how much time, effort and development time are saved and even many of the performance gains, especially over ORMs, but some applications require complex reports that I'm not sure how to express using objects and object queries and I am also concerned about performance, since it can be overwhelming to optimize and maintain complex reports even on systems designed specifically for that purpose.
--
Edit:
To be more clear, object data sources and the like can be used to pull db4o into the same data-rich controls as SqlDataSource et al. I've been referred to documents on the db4o site about using it with ReportViewer as well as advised to denormalize data into a reporting database, but the question is meant to pose a conceptual challenge on what can be done to accomplish the types of queries that RDBMSs perform so well on that they hold the industry. I love db4o, but I can't think of a truly efficient means of reporting on aggregate data that exists across several different types (or tables in SQL) without pulling all of the relevant objects out of the database, activating them and performing the calculations in application-level code. I may be wrong, but this seems like it couldn't hope to compete with the optimizations possible with an RDBMS.
I'm hoping amongst the bright minds we've managed to gather here that somebody knows something I don't or has innovative ideas for future implementation that could expand the ODBMS arena. I know that various ORMs implement methodologies for complex reporting objects and I'm wondering if anybody with experience with any of these technologies might have something creative that doesn't depend on any technologies outside of my code and db4o (I can generate reports with an SQL server alone).
A: In order to work around the performance cost of reporting via db4o, I'd suggest maintaining a highly denormalized (sqlite ?) database in parallel to your db4o container. Run reports against the db and normal app logic against db4o.
Yes it's more work, but this way you'll have high performance reporting while keeping the usefulness of db4o.
If you have properly separated your data access code, it should be easy to update any code that saves objects to also update the reporting db.
A: Please see this page.
Best,
German
A: My limited understanding of the issue is that currently reporting is very difficult to do with DB4O, due to some missing functionality such as Count, Aggregate, etc. As it stands, you have to implement these yourself with all the poor performance that implies (e.g. activating all records, then doing a Count operation on the records).
A: I am not familiar with db4o. But I know some other reporting software. Some of it has a data interface for which you can write your own connector, like i-net Crystal-Clear. If you can query a plain list of simple objects (Strings, Numbers, ...) then it is simple.
Another simple solution is to write a dummy JDBC driver. There is a sample for it. The queries that you want to run on your db4o will be available as virtual stored procedures. With the optional parameters you can filter your data in db4o for best performance. Such a dummy JDBC driver can be written in 3-4 hours.
A: It may also boil down to what reporting tool you use. For example, I've implemented a project which uses Microsoft's Reporting Services client-side engine to render reports - no dependency on SQL server - just feed it objects. All of the aggregation is performed by the reporting engine, which means that your code merely needs to find and materialize the underlying objects.
A: Far too late to be useful to you. But I suggest that people who find this question might want to look at Jasper Reports. There is a "commercial" version of the product. However, it's actually an open source solution and can be found on sourceforge.
Seems it's actually pretty good. There's even a report server and BI functionality (again, all open source). So it might be worth a look for somebody who is a bit interested.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Visual Studio opens the default browser instead of Internet Explorer When I debug in Visual Studio, Firefox opens and that is annoying because of the hookups that Internet Explorer and Visual Studio have, such as when you close the Internet Explorer browser that starting debug opened, Visual Studio stops debugging. How can I get Visual Studio to open Internet Explorer instead without having to set Internet Explorer as my default browser?
A: Also may be helpful for ASP.NET MVC:
In an MVC app, you have to right-click
on Default.aspx, which is the only
‘real’ web page in that solution. The
default page displays ‘Browse with…’
From http://avaricesoft.wordpress.com/2008/08/04/...
A: If you're running an MVC 3 application - in your solution explorer click the show all files icon and then under the Global.asax file there should be a file called YourProjectName.Publish.XML right-click it and then click "Browse With..." and select your favorite browser as the default.
A: For MVC3 you don't have to add any dummy files to set a certain browser. All you have to do is:
*
*"Show all files" for the project
*go to bin folder
*right click the only .xml file to find the "Browse With..." option
A: In the Solution Explorer, right-click any ASPX page and select "Browse With" and select IE as the default.
Note... the same steps can be used to add Google Chrome as a browser option and to optionally set it as the default browser.
A: Quick note if you don't have an .aspx in your project (i.e. its XBAP) but you still need to debug using IE, just add a htm page to your project and right click on that to set the default. It's hacky, but it works :P
A: Scott Guthrie has made a post on how to change Visual Studio's default browser:
1) Right click on a .aspx page in your
solution explorer
2) Select the "browse with" context
menu option
3) In the dialog you can select or add
a browser. If you want Firefox in the
list, click "add" and point to the
firefox.exe filename
4) Click the "Set as Default" button
to make this the default browser when
you run any page on the site.
I however dislike the fact that this isn't as straightforward as it should be.
A: Right-click on an aspx file and choose 'browse with'. I think there's an option there to set as default.
A: In Visual Studio 2010 the default browser gets reset often (just about every time an IDE setting is changed or even after restarting Visual Studio). There is now a default browser selector extension for 2010 to help combat this:
!!!Update!!! It appears that the WoVS Default Browser Switcher is no longer available for free according to @Cory. You might try Default Browser Changer instead but I have not tested it. If you already have the WoVS plugin I would recommend backing it up so that you can install it later.
The following solution may no longer work:
WoVS Default Browser Switcher:
http://visualstudiogallery.msdn.microsoft.com/en-us/bb424812-f742-41ef-974a-cdac607df921
Edit: This works with ASP.NET MVC applications as well.
Note: One negative side effect of installing this extension is that it seems to nag to be updated about once a month. This has caused some to uninstall it because, to them, it's more bothersome than the problem it fixes. Regardless, it is easily updated through the extension manager and I still find it very useful.
You will see the following error when starting VS:
The Default Browser Switcher beta bits have expired. Please use the
Extension Manager or visit the VS Gallery to download updated bits.
A: You may debug with Firefox also.
Follow these steps: Tools -> Attach to Process and select firefox.exe or your default browser. Then the debugger will work with this browser. But I had some trouble when Firefox was 32-bit and VS2010 was 64-bit.
Anyway, right-click the current document, Browse With... --> then choose your browser, then set it as default. This way is better, because Firefox's process id may change, so you would be annoyed by having to attach the process again.
A: With VS 2017, debugging an ASP.NET project with Chrome doesn't sign you in with your Google account.
To fix that go to Tools -> Options -> Debugging -> General and turn off the setting Enable JavaScript Debugging for ASP.NET (Chrome and IE).
A: In Visual Studio 2013, this can be done as follows:
1) Ensure you have selected a startup project from your Solution Explorer window
2) This brings a dropdown to the left of the debug dropdown. You can choose the browser from this new dropdown.
The key is that there should be a project selected as startup.
A: You mentioned Visual Studio. This is for Visual Studio 2013. In the menu and toolbar area, on the second line, right below Debug, you have a dropdown box giving you the list of emulators. Your IE should be among the options; select that and you are good to go. Easy way.
A: Your project might not have aspx files since it might be another kind of web project.
However, if it has a ClientApp folder:
*
*go to the standard view of the Solution Explorer (Ctrl+Alt+L), where you can find your project-name solution (to be sure, click on the folders icon at the top, the one saying "Solutions and Folders")
*right-click on the ClientApp folder itself
*Browse with... will show up near the top (near View in Browser option), click on it and the browsers dialog shows up
*click on your preferred browser
*click on Set as Default
*click on Browse to confirm (this will open the browser you just chose on that folder)
A: Another way is to do the following in Visual Studio:
*
*Select Debug
*Options and Settings
*Expand Environment
*Select Web Browser
*Click the 'Internet Explorer Options' button
*Select the 'Programs' tab
*Select 'Make Default' button for Internet Explorer
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "208"
} |
Q: How to Truncate a string in PHP to the word closest to a certain number of characters? I have a code snippet written in PHP that pulls a block of text from a database and sends it out to a widget on a webpage. The original block of text can be a lengthy article or a short sentence or two; but for this widget I can't display more than, say, 200 characters. I could use substr() to chop off the text at 200 chars, but the result would be cutting off in the middle of words-- what I really want is to chop the text at the end of the last word before 200 chars.
A: Keep in mind whenever you're splitting by "word" anywhere that some languages such as Chinese and Japanese do not use a space character to split words. Also, a malicious user could simply enter text without any spaces, or using some Unicode look-alike to the standard space character, in which case any solution you use may end up displaying the entire text anyway. A way around this may be to check the string length after splitting it on spaces as normal, then, if the string is still above an abnormal limit - maybe 225 characters in this case - going ahead and splitting it dumbly at that limit.
One more caveat with things like this when it comes to non-ASCII characters; strings containing them may be interpreted by PHP's standard strlen() as being longer than they really are, because a single character may take two or more bytes instead of just one. If you just use the strlen()/substr() functions to split strings, you may split a string in the middle of a character! When in doubt, mb_strlen()/mb_substr() are a little more foolproof.
A: Use strpos and substr:
<?php
$longString = "I have a code snippet written in PHP that pulls a block of text.";
$truncated = substr($longString,0,strpos($longString,' ',30));
echo $truncated;
This will give you a string truncated at the first space after 30 characters.
A: Here you go:
function neat_trim($str, $n, $delim='…') {
$len = strlen($str);
if ($len > $n) {
preg_match('/(.{' . $n . '}.*?)\b/', $str, $matches);
return rtrim($matches[1]) . $delim;
}
else {
return $str;
}
}
A: Here is my function based on @Cd-MaN's approach.
function shorten($string, $width) {
if(strlen($string) > $width) {
$string = wordwrap($string, $width);
$string = substr($string, 0, strpos($string, "\n"));
}
return $string;
}
A: $shorttext = preg_replace('/^([\s\S]{1,200})[\s]+?[\s\S]+/', '$1', $fulltext);
Description:
*
*^ - start from beginning of string
*([\s\S]{1,200}) - get from 1 to 200 of any character
*[\s]+? - not include spaces at the end of short text so we can avoid word ... instead of word...
*[\s\S]+ - match all other content
Tests:
*
*regex101.com let's add to or few other r
*regex101.com orrrr exactly 200 characters.
*regex101.com after fifth r orrrrr excluded.
Enjoy.
A: $WidgetText = substr($string, 0, strrpos(substr($string, 0, 200), ' '));
And there you have it — a reliable method of truncating any string to the nearest whole word, while staying under the maximum string length.
I've tried the other examples above and they did not produce the desired results.
A: The following solution was born when I noticed the $break parameter of the wordwrap function:
string wordwrap ( string $str [, int $width = 75 [, string $break = "\n" [, bool $cut = false ]]] )
Here is the solution:
/**
* Truncates the given string at the specified length.
*
* @param string $str The input string.
* @param int $width The number of chars at which the string will be truncated.
* @return string
*/
function truncate($str, $width) {
return strtok(wordwrap($str, $width, "...\n"), "\n");
}
Example #1.
print truncate("This is very long string with many chars.", 25);
The above example will output:
This is very long string...
Example #2.
print truncate("This is short string.", 25);
The above example will output:
This is short string.
A: It's surprising how tricky it is to find the perfect solution to this problem. I haven't yet found an answer on this page that doesn't fail in at least some situations (especially if the string contains newlines or tabs, or if the word break is anything other than a space, or if the string has UTF-8 multibyte characters).
Here is a simple solution that works in all cases. There were similar answers here, but the "s" modifier is important if you want it to work with multi-line input, and the "u" modifier makes it correctly evaluate UTF-8 multibyte characters.
function wholeWordTruncate($s, $characterCount)
{
if (preg_match("/^.{1,$characterCount}\b/su", $s, $match)) return $match[0];
return $s;
}
One possible edge case with this... if the string doesn't have any whitespace at all in the first $characterCount characters, it will return the entire string. If you prefer it forces a break at $characterCount even if it isn't a word boundary, you can use this:
function wholeWordTruncate($s, $characterCount)
{
if (preg_match("/^.{1,$characterCount}\b/su", $s, $match)) return $match[0];
return mb_substr($s, 0, $characterCount);
}
One last option, if you want to have it add ellipsis if it truncates the string...
function wholeWordTruncate($s, $characterCount, $addEllipsis = ' …')
{
$return = $s;
if (preg_match("/^.{1,$characterCount}\b/su", $s, $match))
$return = $match[0];
else
$return = mb_substr($return, 0, $characterCount);
if (strlen($s) > strlen($return)) $return .= $addEllipsis;
return $return;
}
A: This is a small fix for mattmac's answer:
preg_replace('/\s+?(\S+)?$/', '', substr($string . ' ', 0, 201));
The only difference is to add a space at the end of $string. This ensures the last word isn't cut off as per ReX357's comment.
I don't have enough rep points to add this as a comment.
A: By using the wordwrap function. It splits the text into multiple lines such that the maximum width is the one you specified, breaking at word boundaries. After splitting, you simply take the first line:
substr($string, 0, strpos(wordwrap($string, $your_desired_width), "\n"));
One thing this oneliner doesn't handle is the case when the text itself is shorter than the desired width. To handle this edge-case, one should do something like:
if (strlen($string) > $your_desired_width)
{
$string = wordwrap($string, $your_desired_width);
$string = substr($string, 0, strpos($string, "\n"));
}
The above solution has the problem of prematurely cutting the text if it contains a newline before the actual cutpoint. Here a version which solves this problem:
function tokenTruncate($string, $your_desired_width) {
$parts = preg_split('/([\s\n\r]+)/', $string, null, PREG_SPLIT_DELIM_CAPTURE);
$parts_count = count($parts);
$length = 0;
$last_part = 0;
for (; $last_part < $parts_count; ++$last_part) {
$length += strlen($parts[$last_part]);
if ($length > $your_desired_width) { break; }
}
return implode(array_slice($parts, 0, $last_part));
}
Also, here is the PHPUnit testclass used to test the implementation:
class TokenTruncateTest extends PHPUnit_Framework_TestCase {
public function testBasic() {
$this->assertEquals("1 3 5 7 9 ",
tokenTruncate("1 3 5 7 9 11 14", 10));
}
public function testEmptyString() {
$this->assertEquals("",
tokenTruncate("", 10));
}
public function testShortString() {
$this->assertEquals("1 3",
tokenTruncate("1 3", 10));
}
public function testStringTooLong() {
$this->assertEquals("",
tokenTruncate("toooooooooooolooooong", 10));
}
public function testContainingNewline() {
$this->assertEquals("1 3\n5 7 9 ",
tokenTruncate("1 3\n5 7 9 11 14", 10));
}
}
EDIT :
Special UTF8 characters like 'à' are not handled. Add 'u' at the end of the REGEX to handle it:
$parts = preg_split('/([\s\n\r]+)/u', $string, null, PREG_SPLIT_DELIM_CAPTURE);
A: I would use the preg_match function to do this, as what you want is a pretty simple expression.
$matches = array();
$result = preg_match("/^(.{1,199})[\s]/i", $text, $matches);
The expression means "match any substring starting from the beginning of length 1-200 that ends with a space." The result is in $result, and the match is in $matches. That takes care of your original question, which is specifically ending on any space. If you want to make it end on newlines, change the regular expression to:
$result = preg_match("/^(.{1,199})[\n]/i", $text, $matches);
A: OK, so I got another version of this based on the above answers, but taking more things into account (UTF-8, \n and &nbsp;); there is also a commented-out line that strips the WordPress shortcodes, for use with WP.
function neatest_trim($content, $chars)
{
    if (strlen($content) > $chars)
    {
        $content = str_replace('&nbsp;', ' ', $content);
        $content = str_replace("\n", '', $content);
        // use with wordpress
        //$content = strip_tags(strip_shortcodes(trim($content)));
        $content = strip_tags(trim($content));
        $content = preg_replace('/\s+?(\S+)?$/', '', mb_substr($content, 0, $chars));
        $content = trim($content) . '...';
        return $content;
    }
    return $content;
}
A: /*
Cut the string without breaking any words, UTF-8 aware
* param string $str The text string to split
* param integer $start The start position, defaults to 0
* param integer $words The number of words to extract, defaults to 15
*/
function wordCutString($str, $start = 0, $words = 15 ) {
$arr = preg_split("/[\s]+/", $str, $words+1);
$arr = array_slice($arr, $start, $words);
return join(' ', $arr);
}
Usage:
$input = 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna liqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.';
echo wordCutString($input, 0, 10);
This will output first 10 words.
The preg_split function is used to split a string into substrings. The boundaries along which the string is to be split are specified using a regular expression pattern.
The preg_split function takes 4 parameters, but only the first 3 are relevant to us right now.
First Parameter – Pattern
The first parameter is the regular expressions pattern along which the string is to be split. In our case, we want to split the string across word boundaries. Therefore we use a predefined character class \s which matches white space characters such as space, tab, carriage return and line feed.
Second Parameter – Input String
The second parameter is the long text string which we want to split.
Third Parameter – Limit
The third parameter specifies the number of substrings which should be returned. If you set the limit to n, preg_split will return an array of n elements. The first n-1 elements will contain the substrings. The last (nth) element will contain the rest of the string.
A: You can use this:
function word_shortener($text, $words=10, $sp='...'){
$all = explode(' ', $text);
$str = '';
$count = 1;
foreach($all as $key){
$str .= $key . ($count >= $words ? '' : ' ');
$count++;
if($count > $words){
break;
}
}
return $str . (count($all) <= $words ? '' : $sp);
}
Examples:
word_shortener("Hello world, this is a text", 3); // Hello world, this...
word_shortener("Hello world, this is a text", 3, ''); // Hello world, this
word_shortener("Hello world, this is a text", 3, '[read more]'); // Hello world, this[read more]
Edit
How it's work:
1. Explode space from input text:
$all = explode(' ', $text);
for example, if $text will be "Hello world" then $all is an array with exploded values:
["Hello", "world"]
2. For each word:
Select each element in exploded text:
foreach($all as $key){...
Append the current word ($key) to $str, plus a space unless it's the last word:
$str .= $key . ($count >= $words ? '' : ' ');
Then add 1 to $count and check whether it's greater than the max limit ($words); if so, break the loop:
if($count > $words){
break;
}
Then return $str, appending the separator ($sp) only if the final text is shorter than the input text:
return $str . (count($all) <= $words ? '' : $sp);
A: This will return the first 200 characters of words:
preg_replace('/\s+?(\S+)?$/', '', substr($string, 0, 201));
A: Based on @Justin Poliey's regex:
// Trim very long text to 120 characters. Add an ellipsis if the text is trimmed.
if(strlen($very_long_text) > 120) {
$matches = array();
preg_match("/^(.{1,120})[\s]/i", $very_long_text, $matches);
$trimmed_text = $matches[0]. '...';
}
A: I have a function that does almost what you want; if you do a few edits, it will fit exactly:
<?php
function stripByWords($string,$length,$delimiter = '<br>') {
$words_array = explode(" ",$string);
$strlen = 0;
$return = '';
foreach($words_array as $word) {
$strlen += mb_strlen($word,'utf8');
$return .= $word." ";
if($strlen >= $length) {
$strlen = 0;
$return .= $delimiter;
}
}
return $return;
}
?>
A: This is how I did it:
$string = "I appreciate your service & idea to provide the branded toys at a fair rent price. This is really a wonderful to watch the kid not just playing with variety of toys but learning faster compare to the other kids who are not using the BooksandBeyond service. We wish you all the best";
print_r(substr($string, 0, strpos(wordwrap($string, 250), "\n")));
A: While this is a rather old question, I figured I would provide an alternative, as it was not mentioned and valid for PHP 4.3+.
You can use the sprintf family of functions to truncate text, by using the %.ℕs precision modifier.
A period . followed by an integer whose meaning depends on the specifier:
*
*For e, E, f and F specifiers: this is the number of digits to be printed after the decimal point (by default, this is 6).
*For g and G specifiers: this is the maximum number of significant digits to be printed.
*For s specifier: it acts as a cutoff point, setting a maximum character limit to the string
Simple Truncation https://3v4l.org/QJDJU
$string = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ';
var_dump(sprintf('%.10s', $string));
Result
string(10) "0123456789"
Expanded Truncation https://3v4l.org/FCD21
Since sprintf functions similarly to substr, it will partially cut off words. The approach below ensures words are not cut off, by using strpos(wordwrap(..., '[break]'), '[break]') with a special delimiter. This allows us to retrieve the position and ensure we do not match on standard sentence structures.
Returning a string without partially cutting off words and that does not exceed the specified width, while preserving line-breaks if desired.
function truncate($string, $width, $on = '[break]') {
if (strlen($string) > $width && false !== ($p = strpos(wordwrap($string, $width, $on), $on))) {
$string = sprintf('%.'. $p . 's', $string);
}
return $string;
}
var_dump(truncate('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ', 20));
var_dump(truncate("Lorem Ipsum is simply dummy text of the printing and typesetting industry.", 20));
var_dump(truncate("Lorem Ipsum\nis simply dummy text of the printing and typesetting industry.", 20));
Result
/*
string(36) "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
string(14) "Lorem Ipsum is"
string(14) "Lorem Ipsum
is"
*/
Results using wordwrap($string, $width) or strtok(wordwrap($string, $width), "\n")
/*
string(14) "Lorem Ipsum is"
string(11) "Lorem Ipsum"
*/
A: I know this is old, but...
function _truncate($str, $limit) {
if(strlen($str) < $limit)
return $str;
$uid = uniqid();
return array_shift(explode($uid, wordwrap($str, $limit, $uid)));
}
A: I created a function more similar to substr(), using the idea of @Dave.
function substr_full_word($str, $start, $end){
$pos_ini = ($start == 0) ? $start : stripos(substr($str, $start, $end), ' ') + $start;
if(strlen($str) > $end){ $pos_end = strrpos(substr($str, 0, ($end + 1)), ' '); } // IF STRING SIZE IS LESSER THAN END
if(empty($pos_end)){ $pos_end = $end; } // FALLBACK
return substr($str, $pos_ini, $pos_end);
}
P.S.: The full length of the cut may be less than with substr().
A: Added IF/ELSEIF statements to the code from Dave and AmalMurali for handling strings without spaces
if ((strpos($string, ' ') !== false) && (strlen($string) > 200)) {
$WidgetText = substr($string, 0, strrpos(substr($string, 0, 200), ' '));
}
elseif (strlen($string) > 200) {
$WidgetText = substr($string, 0, 200);
}
A: I find this works:
function abbreviate_string_to_whole_word($string, $max_length, $buffer) {
if (strlen($string) > $max_length) {
$string_cropped = substr($string, 0, $max_length - $buffer);
$last_space = strrpos($string_cropped, " ");
if ($last_space > 0) {
$string_cropped = substr($string_cropped, 0, $last_space);
}
$abbreviated_string = $string_cropped . " ...";
}
else {
$abbreviated_string = $string;
}
return $abbreviated_string;
}
The buffer allows you to adjust the length of the returned string.
A:
As far as I've seen, all the solutions here are only valid for the case when the starting point is fixed.
Allowing you to turn this:
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna liqua. Ut enim ad minim veniam.
Into this:
Lorem ipsum dolor sit amet, consectetur...
What if you want to truncate words surrounding a specific set of keywords?
Truncate the text surrounding a specific set of keywords.
The goal is to be able to convert this:
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna liqua. Ut enim ad minim veniam.
Into this:
...consectetur adipisicing elit, sed do eiusmod tempor...
Which is a very common situation when displaying search results, excerpts, etc. To achieve this we can use these two methods combined:
/**
* Return the index of the $haystack matching $needle,
* or NULL if there is no match.
*
* This function is case-insensitive
*
* @param string $needle
* @param array $haystack
* @return false|int
*/
function regexFindInArray(string $needle, array $haystack): ?int
{
for ($i = 0; $i < count($haystack); $i++) {
if (preg_match('/' . preg_quote($needle) . '/i', $haystack[$i]) === 1) {
return $i;
}
}
return null;
}
/**
* If the keyword is not present, it returns the maximum number of full
* words that the max number of characters provided by $maxLength allow,
* starting from the left.
*
* If the keyword is present, it adds words to both sides of the keyword
* keeping a balance between the length of the suffix and the prefix.
*
* @param string $text
* @param string $keyword
* @param int $maxLength
* @param string $ellipsis
* @return string
*/
function truncateWordSurroundingsByLength(string $text, string $keyword,
int $maxLength, string $ellipsis): string
{
if (strlen($text) < $maxLength) {
return $text;
}
$pattern = '/' . '^(.*?)\s' .
'([^\s]*' . preg_quote($keyword) . '[^\s]*)' .
'\s(.*)$' . '/i';
preg_match($pattern, $text, $matches);
// break everything into words except the matching keywords,
// which can contain spaces
if (count($matches) == 4) {
$words = preg_split("/\s+/", $matches[1], -1, PREG_SPLIT_NO_EMPTY);
$words[] = $matches[2];
$words = array_merge($words,
preg_split("/\s+/", $matches[3], -1, PREG_SPLIT_NO_EMPTY));
} else {
$words = preg_split("/\s+/", $text, -1, PREG_SPLIT_NO_EMPTY);
}
// find the index of the matching word
$firstMatchingWordIndex = regexFindInArray($keyword, $words) ?? 0;
$length = false;
$prefixLength = $suffixLength = 0;
$prefixIndex = $firstMatchingWordIndex - 1;
$suffixIndex = $firstMatchingWordIndex + 1;
// Initialize the text with the matching word
$text = $words[$firstMatchingWordIndex];
while (($prefixIndex >= 0 or $suffixIndex <= count($words))
and strlen($text) < $maxLength and strlen($text) !== $length) {
$length = strlen($text);
if (isset($words[$prefixIndex])
and (strlen($text) + strlen($words[$prefixIndex]) <= $maxLength)
and ($prefixLength <= $suffixLength
or strlen($text) + strlen($words[$suffixIndex]) <= $maxLength)) {
$prefixLength += strlen($words[$prefixIndex]);
$text = $words[$prefixIndex] . ' ' . $text;
$prefixIndex--;
}
if (isset($words[$suffixIndex])
and (strlen($text) + strlen($words[$suffixIndex]) <= $maxLength)
and ($suffixLength <= $prefixLength
or strlen($text) + strlen($words[$prefixIndex]) <= $maxLength)) {
$suffixLength += strlen($words[$suffixIndex]);
$text = $text . ' ' . $words[$suffixIndex];
$suffixIndex++;
}
}
if ($prefixIndex > 0) {
$text = $ellipsis . ' ' . $text;
}
if ($suffixIndex < count($words)) {
$text = $text . ' ' . $ellipsis;
}
return $text;
}
Now you can do:
$text = 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do' .
'iusmod tempor incididunt ut labore et dolore magna liqua. Ut enim' .
'ad minim veniam.';
$text = truncateWordSurroundingsByLength($text, 'elit', 25, '...');
var_dump($text); // string(32) "... adipisicing elit, sed do ..."
Run code.
A: function trunc($phrase, $max_words) {
$phrase_array = explode(' ',$phrase);
if(count($phrase_array) > $max_words && $max_words > 0)
$phrase = implode(' ',array_slice($phrase_array, 0, $max_words)).'...';
return $phrase;
}
A: Here you can try this
substr( $str, 0, strpos($str, ' ', 200) );
A: Maybe this will help someone:
<?php
$string = "Your line of text";
$spl = preg_match("/([, \.\d\-''\"\"_()]*\w+[, \.\d\-''\"\"_()]*){50}/", $string, $matches);
if (isset($matches[0])) {
$matches[0] .= "...";
echo "<br />" . $matches[0];
} else {
echo "<br />" . $string;
}
?>
A: I used this before
<?php
$your_desired_width = 200;
$string = $var->content;
if (strlen($string) > $your_desired_width) {
$string = wordwrap($string, $your_desired_width);
$string = substr($string, 0, strpos($string, "\n")) . " More...";
}
echo $string;
?>
A: I believe this is the easiest way to do it:
$lines = explode('♦♣♠',wordwrap($string, $length, '♦♣♠'));
$newstring = $lines[0] . ' • • •';
I'm using the special characters to split the text and cut it.
A: Use this:
The following code will remove ','. If you have any other character or sub-string, you may use that instead of ','.
substr($string, 0, strrpos(substr($string, 0, $comparingLength), ','))
// if you have another string account for
substr($string, 0, strrpos(substr($string, 0, $comparingLength-strlen($currentString)), ','))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "203"
} |
Q: Split a string by spaces -- preserving quoted substrings -- in Python I have a string which is like this:
this is "a test"
I'm trying to write something in Python to split it up by space while ignoring spaces within quotes. The result I'm looking for is:
['this', 'is', 'a test']
PS. I know you are going to ask "what happens if there are quotes within the quotes, well, in my application, that will never happen.
A: Since this question is tagged with regex, I decided to try a regex approach. I first replace all the spaces in the quoted parts with \x00, then split by spaces, then replace the \x00 back to spaces in each part.
Both versions do the same thing, but splitter is a bit more readable than splitter2.
import re
s = 'this is "a test" some text "another test"'
def splitter(s):
def replacer(m):
return m.group(0).replace(" ", "\x00")
parts = re.sub('".+?"', replacer, s).split()
parts = [p.replace("\x00", " ") for p in parts]
return parts
def splitter2(s):
return [p.replace("\x00", " ") for p in re.sub('".+?"', lambda m: m.group(0).replace(" ", "\x00"), s).split()]
print splitter2(s)
A: Have a look at the shlex module, particularly shlex.split.
>>> import shlex
>>> shlex.split('This is "a test"')
['This', 'is', 'a test']
A: Speed test of different answers:
import re
import shlex
import csv
line = 'this is "a test"'
%timeit [p for p in re.split("( |\\\".*?\\\"|'.*?')", line) if p.strip()]
100000 loops, best of 3: 5.17 µs per loop
%timeit re.findall(r'[^"\s]\S*|".+?"', line)
100000 loops, best of 3: 2.88 µs per loop
%timeit list(csv.reader([line], delimiter=" "))
The slowest run took 9.62 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 2.4 µs per loop
%timeit shlex.split(line)
10000 loops, best of 3: 50.2 µs per loop
A: You want split, from the built-in shlex module.
>>> import shlex
>>> shlex.split('this is "a test"')
['this', 'is', 'a test']
This should do exactly what you want.
If you want to preserve the quotation marks, then you can pass the posix=False kwarg.
>>> shlex.split('this is "a test"', posix=False)
['this', 'is', '"a test"']
A: I see regex approaches here that look complex and/or wrong. This surprises me, because regex syntax can easily describe "whitespace or thing-surrounded-by-quotes", and most regex engines (including Python's) can split on a regex. So if you're going to use regexes, why not just say exactly what you mean?:
test = 'this is "a test"' # or "this is 'a test'"
# pieces = [p for p in re.split("( |[\\\"'].*[\\\"'])", test) if p.strip()]
# From comments, use this:
pieces = [p for p in re.split("( |\\\".*?\\\"|'.*?')", test) if p.strip()]
Explanation:
[\\\"'] = double-quote or single-quote
.* = anything
( |X) = space or X
.strip() = remove space and empty-string separators
shlex probably provides more features, though.
A: To preserve quotes use this function:
def getArgs(s):
args = []
cur = ''
inQuotes = 0
for char in s.strip():
if char == ' ' and not inQuotes:
args.append(cur)
cur = ''
elif char == '"' and not inQuotes:
inQuotes = 1
cur += char
elif char == '"' and inQuotes:
inQuotes = 0
cur += char
else:
cur += char
args.append(cur)
return args
A: Depending on your use case, you may also want to check out the csv module:
import csv
lines = ['this is "a string"', 'and more "stuff"']
for row in csv.reader(lines, delimiter=" "):
print(row)
Output:
['this', 'is', 'a string']
['and', 'more', 'stuff']
A: As an option try tssplit:
In [1]: from tssplit import tssplit
In [2]: tssplit('this is "a test"', quote='"', delimiter='')
Out[2]: ['this', 'is', 'a test']
A: I used shlex.split to process 70,000,000 lines of squid log; it was very slow. So I switched to re.
Please try this if you have performance problems with shlex.
import re
def line_split(line):
return re.findall(r'[^"\s]\S*|".+?"', line)
A: It seems that re is faster, performance-wise. Here is my solution using a non-greedy operator that preserves the outer quotes:
re.findall("(?:\".*?\"|\S)+", s)
Result:
['this', 'is', '"a test"']
It leaves constructs like aaa"bla blub"bbb together as these tokens are not separated by spaces. If the string contains escaped characters, you can match like that:
>>> a = "She said \"He said, \\\"My name is Mark.\\\"\""
>>> a
'She said "He said, \\"My name is Mark.\\""'
>>> for i in re.findall("(?:\".*?[^\\\\]\"|\S)+", a): print(i)
...
She
said
"He said, \"My name is Mark.\""
Please note that this also matches the empty string "" by means of the \S part of the pattern.
A: The main problem with the accepted shlex approach is that it does not ignore escape characters outside quoted substrings, and gives slightly unexpected results in some corner cases.
I have the following use case, where I need a split function that splits input strings such that either single-quoted or double-quoted substrings are preserved, with the ability to escape quotes within such a substring. Quotes within an unquoted string should not be treated differently from any other character. Some example test cases with the expected output:
input string | expected output
===============================================
'abc def' | ['abc', 'def']
"abc \\s def" | ['abc', '\\s', 'def']
'"abc def" ghi' | ['abc def', 'ghi']
"'abc def' ghi" | ['abc def', 'ghi']
'"abc \\" def" ghi' | ['abc " def', 'ghi']
"'abc \\' def' ghi" | ["abc ' def", 'ghi']
"'abc \\s def' ghi" | ['abc \\s def', 'ghi']
'"abc \\s def" ghi' | ['abc \\s def', 'ghi']
'"" test' | ['', 'test']
"'' test" | ['', 'test']
"abc'def" | ["abc'def"]
"abc'def'" | ["abc'def'"]
"abc'def' ghi" | ["abc'def'", 'ghi']
"abc'def'ghi" | ["abc'def'ghi"]
'abc"def' | ['abc"def']
'abc"def"' | ['abc"def"']
'abc"def" ghi' | ['abc"def"', 'ghi']
'abc"def"ghi' | ['abc"def"ghi']
"r'AA' r'.*_xyz$'" | ["r'AA'", "r'.*_xyz$'"]
'abc"def ghi"' | ['abc"def ghi"']
'abc"def ghi""jkl"' | ['abc"def ghi""jkl"']
'a"b c"d"e"f"g h"' | ['a"b c"d"e"f"g h"']
'c="ls /" type key' | ['c="ls /"', 'type', 'key']
"abc'def ghi'" | ["abc'def ghi'"]
"c='ls /' type key" | ["c='ls /'", 'type', 'key']
I ended up with the following function to split a string such that the expected output results for all input strings:
import re
def quoted_split(s):
def strip_quotes(s):
if s and (s[0] == '"' or s[0] == "'") and s[0] == s[-1]:
return s[1:-1]
return s
return [strip_quotes(p).replace('\\"', '"').replace("\\'", "'") \
for p in re.findall(r'(?:[^"\s]*"(?:\\.|[^"])*"[^"\s]*)+|(?:[^\'\s]*\'(?:\\.|[^\'])*\'[^\'\s]*)+|[^\s]+', s)]
It ain't pretty; but it works. The following test application checks the results of other approaches (shlex and csv for now) and the custom split implementation:
#!/bin/python2.7
import csv
import re
import shlex
from timeit import timeit
def test_case(fn, s, expected):
try:
if fn(s) == expected:
print '[ OK ] %s -> %s' % (s, fn(s))
else:
print '[FAIL] %s -> %s' % (s, fn(s))
except Exception as e:
print '[FAIL] %s -> exception: %s' % (s, e)
def test_case_no_output(fn, s, expected):
try:
fn(s)
except:
pass
def test_split(fn, test_case_fn=test_case):
test_case_fn(fn, 'abc def', ['abc', 'def'])
test_case_fn(fn, "abc \\s def", ['abc', '\\s', 'def'])
test_case_fn(fn, '"abc def" ghi', ['abc def', 'ghi'])
test_case_fn(fn, "'abc def' ghi", ['abc def', 'ghi'])
test_case_fn(fn, '"abc \\" def" ghi', ['abc " def', 'ghi'])
test_case_fn(fn, "'abc \\' def' ghi", ["abc ' def", 'ghi'])
test_case_fn(fn, "'abc \\s def' ghi", ['abc \\s def', 'ghi'])
test_case_fn(fn, '"abc \\s def" ghi', ['abc \\s def', 'ghi'])
test_case_fn(fn, '"" test', ['', 'test'])
test_case_fn(fn, "'' test", ['', 'test'])
test_case_fn(fn, "abc'def", ["abc'def"])
test_case_fn(fn, "abc'def'", ["abc'def'"])
test_case_fn(fn, "abc'def' ghi", ["abc'def'", 'ghi'])
test_case_fn(fn, "abc'def'ghi", ["abc'def'ghi"])
test_case_fn(fn, 'abc"def', ['abc"def'])
test_case_fn(fn, 'abc"def"', ['abc"def"'])
test_case_fn(fn, 'abc"def" ghi', ['abc"def"', 'ghi'])
test_case_fn(fn, 'abc"def"ghi', ['abc"def"ghi'])
test_case_fn(fn, "r'AA' r'.*_xyz$'", ["r'AA'", "r'.*_xyz$'"])
test_case_fn(fn, 'abc"def ghi"', ['abc"def ghi"'])
test_case_fn(fn, 'abc"def ghi""jkl"', ['abc"def ghi""jkl"'])
test_case_fn(fn, 'a"b c"d"e"f"g h"', ['a"b c"d"e"f"g h"'])
test_case_fn(fn, 'c="ls /" type key', ['c="ls /"', 'type', 'key'])
test_case_fn(fn, "abc'def ghi'", ["abc'def ghi'"])
test_case_fn(fn, "c='ls /' type key", ["c='ls /'", 'type', 'key'])
def csv_split(s):
return list(csv.reader([s], delimiter=' '))[0]
def re_split(s):
def strip_quotes(s):
if s and (s[0] == '"' or s[0] == "'") and s[0] == s[-1]:
return s[1:-1]
return s
return [strip_quotes(p).replace('\\"', '"').replace("\\'", "'") for p in re.findall(r'(?:[^"\s]*"(?:\\.|[^"])*"[^"\s]*)+|(?:[^\'\s]*\'(?:\\.|[^\'])*\'[^\'\s]*)+|[^\s]+', s)]
if __name__ == '__main__':
print 'shlex\n'
test_split(shlex.split)
print
print 'csv\n'
test_split(csv_split)
print
print 're\n'
test_split(re_split)
print
iterations = 100
setup = 'from __main__ import test_split, test_case_no_output, csv_split, re_split\nimport shlex, re'
def benchmark(method, code):
print '%s: %.3fms per iteration' % (method, (1000 * timeit(code, setup=setup, number=iterations) / iterations))
benchmark('shlex', 'test_split(shlex.split, test_case_no_output)')
benchmark('csv', 'test_split(csv_split, test_case_no_output)')
benchmark('re', 'test_split(re_split, test_case_no_output)')
Output:
shlex
[ OK ] abc def -> ['abc', 'def']
[FAIL] abc \s def -> ['abc', 's', 'def']
[ OK ] "abc def" ghi -> ['abc def', 'ghi']
[ OK ] 'abc def' ghi -> ['abc def', 'ghi']
[ OK ] "abc \" def" ghi -> ['abc " def', 'ghi']
[FAIL] 'abc \' def' ghi -> exception: No closing quotation
[ OK ] 'abc \s def' ghi -> ['abc \\s def', 'ghi']
[ OK ] "abc \s def" ghi -> ['abc \\s def', 'ghi']
[ OK ] "" test -> ['', 'test']
[ OK ] '' test -> ['', 'test']
[FAIL] abc'def -> exception: No closing quotation
[FAIL] abc'def' -> ['abcdef']
[FAIL] abc'def' ghi -> ['abcdef', 'ghi']
[FAIL] abc'def'ghi -> ['abcdefghi']
[FAIL] abc"def -> exception: No closing quotation
[FAIL] abc"def" -> ['abcdef']
[FAIL] abc"def" ghi -> ['abcdef', 'ghi']
[FAIL] abc"def"ghi -> ['abcdefghi']
[FAIL] r'AA' r'.*_xyz$' -> ['rAA', 'r.*_xyz$']
[FAIL] abc"def ghi" -> ['abcdef ghi']
[FAIL] abc"def ghi""jkl" -> ['abcdef ghijkl']
[FAIL] a"b c"d"e"f"g h" -> ['ab cdefg h']
[FAIL] c="ls /" type key -> ['c=ls /', 'type', 'key']
[FAIL] abc'def ghi' -> ['abcdef ghi']
[FAIL] c='ls /' type key -> ['c=ls /', 'type', 'key']
csv
[ OK ] abc def -> ['abc', 'def']
[ OK ] abc \s def -> ['abc', '\\s', 'def']
[ OK ] "abc def" ghi -> ['abc def', 'ghi']
[FAIL] 'abc def' ghi -> ["'abc", "def'", 'ghi']
[FAIL] "abc \" def" ghi -> ['abc \\', 'def"', 'ghi']
[FAIL] 'abc \' def' ghi -> ["'abc", "\\'", "def'", 'ghi']
[FAIL] 'abc \s def' ghi -> ["'abc", '\\s', "def'", 'ghi']
[ OK ] "abc \s def" ghi -> ['abc \\s def', 'ghi']
[ OK ] "" test -> ['', 'test']
[FAIL] '' test -> ["''", 'test']
[ OK ] abc'def -> ["abc'def"]
[ OK ] abc'def' -> ["abc'def'"]
[ OK ] abc'def' ghi -> ["abc'def'", 'ghi']
[ OK ] abc'def'ghi -> ["abc'def'ghi"]
[ OK ] abc"def -> ['abc"def']
[ OK ] abc"def" -> ['abc"def"']
[ OK ] abc"def" ghi -> ['abc"def"', 'ghi']
[ OK ] abc"def"ghi -> ['abc"def"ghi']
[ OK ] r'AA' r'.*_xyz$' -> ["r'AA'", "r'.*_xyz$'"]
[FAIL] abc"def ghi" -> ['abc"def', 'ghi"']
[FAIL] abc"def ghi""jkl" -> ['abc"def', 'ghi""jkl"']
[FAIL] a"b c"d"e"f"g h" -> ['a"b', 'c"d"e"f"g', 'h"']
[FAIL] c="ls /" type key -> ['c="ls', '/"', 'type', 'key']
[FAIL] abc'def ghi' -> ["abc'def", "ghi'"]
[FAIL] c='ls /' type key -> ["c='ls", "/'", 'type', 'key']
re
[ OK ] abc def -> ['abc', 'def']
[ OK ] abc \s def -> ['abc', '\\s', 'def']
[ OK ] "abc def" ghi -> ['abc def', 'ghi']
[ OK ] 'abc def' ghi -> ['abc def', 'ghi']
[ OK ] "abc \" def" ghi -> ['abc " def', 'ghi']
[ OK ] 'abc \' def' ghi -> ["abc ' def", 'ghi']
[ OK ] 'abc \s def' ghi -> ['abc \\s def', 'ghi']
[ OK ] "abc \s def" ghi -> ['abc \\s def', 'ghi']
[ OK ] "" test -> ['', 'test']
[ OK ] '' test -> ['', 'test']
[ OK ] abc'def -> ["abc'def"]
[ OK ] abc'def' -> ["abc'def'"]
[ OK ] abc'def' ghi -> ["abc'def'", 'ghi']
[ OK ] abc'def'ghi -> ["abc'def'ghi"]
[ OK ] abc"def -> ['abc"def']
[ OK ] abc"def" -> ['abc"def"']
[ OK ] abc"def" ghi -> ['abc"def"', 'ghi']
[ OK ] abc"def"ghi -> ['abc"def"ghi']
[ OK ] r'AA' r'.*_xyz$' -> ["r'AA'", "r'.*_xyz$'"]
[ OK ] abc"def ghi" -> ['abc"def ghi"']
[ OK ] abc"def ghi""jkl" -> ['abc"def ghi""jkl"']
[ OK ] a"b c"d"e"f"g h" -> ['a"b c"d"e"f"g h"']
[ OK ] c="ls /" type key -> ['c="ls /"', 'type', 'key']
[ OK ] abc'def ghi' -> ["abc'def ghi'"]
[ OK ] c='ls /' type key -> ["c='ls /'", 'type', 'key']
shlex: 0.335ms per iteration
csv: 0.036ms per iteration
re: 0.068ms per iteration
So performance is much better than shlex, and can be improved further by precompiling the regular expression, in which case it will outperform the csv approach.
A: Hmm, can't seem to find the "Reply" button... anyway, this answer is based on the approach by Kate, but correctly splits strings with substrings containing escaped quotes and also removes the start and end quotes of the substrings:
[i.strip('"').strip("'") for i in re.split(r'(\s+|(?<!\\)".*?(?<!\\)"|(?<!\\)\'.*?(?<!\\)\')', string) if i.strip()]
This works on strings like 'This is " a \\\"test\\\"\\\'s substring"' (the insane markup is unfortunately necessary to keep Python from removing the escapes).
If the resulting escapes in the strings in the returned list are not wanted, you can use this slightly altered version of the function:
[i.strip('"').strip("'").decode('string_escape') for i in re.split(r'(\s+|(?<!\\)".*?(?<!\\)"|(?<!\\)\'.*?(?<!\\)\')', string) if i.strip()]
A: To get around the unicode issues in some Python 2 versions, I suggest:
from shlex import split as _split
split = lambda a: [b.decode('utf-8') for b in _split(a.encode('utf-8'))]
A: I suggest:
test string:
s = 'abc "ad" \'fg\' "kk\'rdt\'" zzz"34"zzz "" \'\''
to capture also "" and '':
import re
re.findall(r'"[^"]*"|\'[^\']*\'|[^"\'\s]+',s)
result:
['abc', '"ad"', "'fg'", '"kk\'rdt\'"', 'zzz', '"34"', 'zzz', '""', "''"]
to ignore empty "" and '':
import re
re.findall(r'"[^"]+"|\'[^\']+\'|[^"\'\s]+',s)
result:
['abc', '"ad"', "'fg'", '"kk\'rdt\'"', 'zzz', '"34"', 'zzz']
A: Try this:
def adamsplit(s):
result = []
inquotes = False
for substring in s.split('"'):
if not inquotes:
result.extend(substring.split())
else:
result.append(substring)
inquotes = not inquotes
return result
Some test strings:
'This is "a test"' -> ['This', 'is', 'a test']
'"This is \'a test\'"' -> ["This is 'a test'"]
A: If you don't care about substrings, then a simple
>>> 'a short sized string with spaces '.split()
Performance:
>>> s = " ('a short sized string with spaces '*100).split() "
>>> t = timeit.Timer(stmt=s)
>>> print "%.2f usec/pass" % (1000000 * t.timeit(number=100000)/100000)
171.39 usec/pass
Or string module
>>> from string import split as stringsplit;
>>> stringsplit('a short sized string with spaces '*100)
Performance: String module seems to perform better than string methods
>>> s = "stringsplit('a short sized string with spaces '*100)"
>>> t = timeit.Timer(s, "from string import split as stringsplit")
>>> print "%.2f usec/pass" % (1000000 * t.timeit(number=100000)/100000)
154.88 usec/pass
Or you can use RE engine
>>> from re import split as resplit
>>> regex = '\s+'
>>> medstring = 'a short sized string with spaces '*100
>>> resplit(regex, medstring)
Performance
>>> s = "resplit(regex, medstring)"
>>> t = timeit.Timer(s, "from re import split as resplit; regex='\s+'; medstring='a short sized string with spaces '*100")
>>> print "%.2f usec/pass" % (1000000 * t.timeit(number=100000)/100000)
540.21 usec/pass
For very long strings you should not load the entire string into memory and instead either split the lines or use an iterative loop
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "334"
} |
Q: Why can't I single-step Clipboard-code within the VS.NET debugger? Ideally the reader has upgraded a native C++ program to Visual Studio 2008, which contains an OpenClipboard() block. Why not try setting a breakpoint just after getting a successful return-code from OpenClipboard() and step through your code. According to the Internet it may work on your system, but of course, not on mine, thanks for trying.
Googling on e.g. (( OpenClipboard 1418 vc6 )) finds articles like "GetClipboardData fails in debugger" and "No Error in VC++6 but Error in VC++ 2005". Pragmatically for-the-moment, problem solved - I simply cannot set breakpoints within such code, I need to squirrel information and set the breakpoint after the clipboard operations are done. Error 1418 is "Thread does not have a clipboard open" but it works fine as long as you don't step with VS.NET, or like I say if you keep breakpoints outside of the clipboard-open-close-block.
I would feel better knowing what the exact issue is with the VS.NET debugger.
Being a C++ person I am only dimly aware that you are not supposed to think in terms of threads when doing dot-Net. Anyway I did not find a guru-quality explanation of what's really going on, whether in-fact the problem is that the dot-Net debugger is subtly interfering with the thread-information somehow, when you single-step thru native C++ code.
System-wise: about a year old, two dual-core Xeon's, 4 CPU's according to XP-pro.
I had just finished debugging the code by single-stepping thru it in vc6 under XP-SP2-32-bit. So I know the code was pretty-much-fine under vc6. However when I tested with a 10-megabyte CF_TEXT I got exceptions. I thought to try debugging under the nicer exception model of XP-x64.
Recompiled with visual-studio-2008, I could not get the code to single-step at all. OpenClipboard worked, but EnumClipboardFormats() did not work; nothing worked when single-stepped. However, when I set the breakpoint below the complete block of code, everything worked fine. And YES, vc2008 made a pinpoint diagnostic of 'stack frame corruption around szBuf'. There is a lot to like about vc2008. It would be nice if this were somehow merely a clipboard problem - without knowing that, I would feel compelled to worry about stepping thru ANYTHING, whether thread-context issues might be due to the dot-Net-debugger.
A: I've never looked into this, but it's easy enough to guess:
*
*The clipboard is a shared resource
*Only one app (per desktop) can "own" the clipboard at any given point in time
*Your app owns it (after calling OpenClipboard())
*VS wants it (probably because, among other things, it's an editor)
*While your app is stopped at a breakpoint, no amount of waiting will ever find the clipboard not owned by your app.
*Hilarity ensues!
A: Don't waste time suspecting it's a .NET thing. At times, the relation between Visual Studio.NET and the .NET runtime is like ActiveX and ActiveDirectory - it tells you which marketeer was involved, Visual Studio.NET in fact has a number of debuggers. Native, script, or managed - only the latter is really .NET-related. You will be using the Native debugger.
If you want to investigate, I suggest hooking OpenClipboard using Microsoft Detours, then running your app in the debugger. You'd be able to see who is competing for the clipboard.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What parallel programming model do you recommend today to take advantage of the manycore processors of tomorrow? If you were writing a new application from scratch today, and wanted it to scale to all the cores you could throw at it tomorrow, what parallel programming model/system/language/library would you choose? Why?
I am particularly interested in answers along these axes:
*
*Programmer productivity / ease of use (can mortals successfully use it?)
*Target application domain (what problems is it (not) good at?)
*Concurrency style (does it support tasks, pipelines, data parallelism, messages...?)
*Maintainability / future-proofing (will anybody still be using it in 20 years?)
*Performance (how does it scale on what kinds of hardware?)
I am being deliberately vague on the nature of the application in anticipation of getting good general answers useful for a variety of applications.
A: For heavy computations and the like, purely functional languages like Haskell are easily parallelizable without any effort on the part of the programmer. Apart from learning Haskell, that is.
However, I do not think that this is the way of the (near) future, simply because too many programmers are too used to the imperative programming paradigm.
A: kamaelia is a python framework for building applications with lots of communicating processes.
Kamaelia - Concurrency made useful, fun
In Kamaelia you build systems from simple components that talk to each other. This speeds development, massively aids maintenance and also means you build naturally concurrent software. It's intended to be accessible by any developer, including novices. It also makes it fun :)
What sort of systems? Network servers, clients, desktop applications, pygame based games, transcode systems and pipelines, digital TV systems, spam eradicators, teaching tools, and a fair amount more :)
See also the Question Multi-Core and Concurrency - Languages, Libraries and Development Techniques
A: I'm betting on communicating event loops with promises, as realized in systems like Twisted, E, AmbientTalk, and others. They retain the ability to write code with the same execution-model assumptions as non-concurrent/parallel applications, but scaling to distributed and parallel systems. (That's why I'm working on Ecru.)
A: Multi-core programming may actually require more than one paradigm. Some current contenders are:
*
*MapReduce. This works well where a problem can be easily decomposed into parallel chunks.
*Nested Data Parallelism. This is similar to MapReduce, but actually supports recursive decomposition of a problem, even when the recursive chunks are of irregular size. Look for NDP to be a big win in purely functional languages running on massively parallel but limited hardware (like GPUs).
*Software Transactional Memory. If you need traditional threads, STM makes them bearable. You pay a 50% performance hit in critical sections, but you can scale complex locking schemes to 100s of processors without pain. This will not, however, work for distributed systems.
*Parallel object threads with messaging. This really clever model is used by Erlang. Each "object" becomes a lightweight thread, and objects communicate by asynchronous messages and pattern matching. It's basically true parallel OO. This has succeeded nicely in several real-world applications, and it works great for unreliable distributed systems.
Some of these paradigms give you maximum performance, but only work if the problem decomposes cleanly. Others sacrifice some performance, but allow a wider variety of algorithms. I suspect that some combination of the above will ultimately become a standard toolkit.
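To make the MapReduce item above concrete, here is a rough Python sketch (names like map_chunk and word_count are made up for this illustration, not part of any framework): the "map" step runs on a process pool and a single "reduce" step merges the partial results.
from collections import Counter
from multiprocessing import Pool

def map_chunk(lines):
    # "map" step: count the words in one chunk of the input
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def word_count(all_lines, workers=4, chunk_size=1000):
    chunks = [all_lines[i:i + chunk_size]
              for i in range(0, len(all_lines), chunk_size)]
    with Pool(workers) as pool:
        partials = pool.map(map_chunk, chunks)   # parallel across cores
    total = Counter()
    for partial in partials:                     # "reduce" step: merge counts
        total.update(partial)
    return total

if __name__ == "__main__":
    lines = ["the quick brown fox", "the lazy dog"] * 10000
    print(word_count(lines).most_common(3))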
A: Check out Erlang. Google for it, and watch the various presentations and videos. Many of the programmers and architects I respect are quite taken with its scalability. We're using it where I work pretty heavily...
A: As mentioned, purely functional languages are inherently parallelizable. However, imperative languages are much more intuitive for many people, and we are deeply entrenched in imperative legacy code. The fundamental issue is that pure functional languages express side-effects explicitly, while side effects are expressed implicitly in imperative languages by the order of statements.
I believe that techniques to declaratively express side effects (e.g., in an object oriented framework) will allow compilers to decompose imperative statements into their functional relationships. This should then allow the code to be automatically parallelized in much the same way pure functional code would be.
Of course, just as today it is still desirable to write certain performance-critical code in assembly language, it will still be necessary to write performance-critical explicitly parallel code tomorrow. However, techniques such as I outlined should help automatically take advantage of manycore architectures with minimal effort expended by the developer.
A: I'm surprised nobody has suggested MPI (Message Passing Interface). While designed for distributed memory, MPI programs with essential and frequent global coupling (solving linear and nonlinear equations with billions of unknowns) have been shown to scale to 200k cores.
A: This question seems to keep appearing with different wording - maybe there are different constituencies within StackOverflow. Flow-Based Programming (FBP) is a concept/methodology that has been around for over 30 years, and is being used to handle most of the batch processing at a major Canadian bank. It has thread-based implementations in Java and C#, although earlier implementations were fiber-based (C++, and mainframe Assembler - the one used at the bank).
Most approaches to the problem of taking advantage of multicore involve trying to take a conventional single-threaded program and figure out which parts can run in parallel. FBP takes a different approach: the application is designed from the start in terms of multiple "black-box" components running asynchronously (think of a manufacturing assembly line). Since the interface between components is data streams, FBP is essentially language-independent, and therefore supports mixed-language applications, and domain-specific languages. For the same reason, side-effects are minimized. It could also be described as a "share nothing" model, and a MOM (message-oriented middleware). MapReduce seems to be a special case of FBP. FBP differs from Erlang mostly in that Erlang operates in terms of many short-lived threads, so green threads are more appropriate there, whereas FBP uses fewer (typically a few 10s to a few hundred) longer-lived threads.
For a piece of a batch network that has been in daily use for over 30 years, see part of batch network. For a high-level design of an interactive app, see Brokerage app high-level design. FBP has been found to result in much more maintainable applications, and improved elapsed times - even on single core machines!
A: Two solutions I really like are join calculus (JoCaml, Polyphonic C#, Cω) and the actor model (Erlang, Scala, E, Io).
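As a rough illustration of the actor idea (a toy Python sketch of my own, not the API of Erlang, Scala, E or Io): each actor owns a private mailbox and an event loop, and the only way other code interacts with it is by sending messages.
import queue
import threading

class Actor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._loop)
        self._thread.start()

    def send(self, msg):
        self.mailbox.put(msg)            # asynchronous: returns immediately

    def stop(self):
        self.mailbox.put(None)           # sentinel telling the loop to exit
        self._thread.join()

    def _loop(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:
                break
            self.receive(msg)

    def receive(self, msg):              # override with the actor's behaviour
        pass

class Printer(Actor):
    def receive(self, msg):
        print("got:", msg)

p = Printer()
p.send("hello")
p.send("world")
p.stop()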
I'm not particularly impressed with Software Transactional Memory. It just feels like it's only there to allow threads to cling on to life a little while longer, even though they should have died decades ago. However, it does have three major advantages:
*
*People understand transactions in databases
*There is already talk of transactional RAM hardware
*As much as we all wish them gone, threads are probably going to be the dominant concurrency model for the next couple of decades, sad as that may be. STM could significantly reduce the pain.
A: The mapreduce/hadoop paradigm is useful, and relevant. Especially for people who are used to languages like perl, the idea of mapping over an array and doing some action on each element should come pretty fluidly and naturally, and mapreduce/hadoop just takes it to the next stage and says that there's no reason that each element of the array need be processed on the same machine.
In a sense it's more battle tested, because Google is using MapReduce and plenty of people have been using Hadoop, which has shown that it works well for scaling applications across multiple machines over a network. And if you can scale over multiple machines across a network, you can scale over multiple cores in a single machine.
A: For .NET applications I choose ".NET Parallel Extensions (PLINQ)"; it's extremely easy to use and allows me to parallelize existing code in minutes.
*
*It's simple to learn
*Used to perform complex operations over large arrays of objects, so I can't comment on other applications
*Supports tasks and piplines
*Should be supported for the next couple of years, but who knows for sure?
*CTP version has some performance issues, but already looks very promising.
Mono will likely get support for PLINQ, so it could be a cross-platform solution as well.
A: Qt Concurrent offers an implementation of MapReduce for multicore which is really easy to use. It is multi-OS.
A: If your problem domain permits, try to think about a share-nothing model. The less you share between processes and threads, the less you have to design complex concurrency models.
A: See also the question Multi-Core and Concurrency - Languages, Libraries and Development Techniques
A: A job-queue with multiple workers system (not sure of the correct terminology - message queue?)
Why?
Mainly, because it's an absurdly simple concept. You have a list of stuff that needs processing, then a bunch of processes that get jobs and process them.
Also, unlike the reasons, say, Haskell or Erlang are so concurrent/parallelisable(?), it's entirely language-agnostic - you can trivially implement such a system in C, or Python, or any other language (even using shell scripting), whereas I doubt bash will get software transactional memory or join-calculus anytime soon.
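For what it's worth, here is roughly what that looks like in Python with the standard multiprocessing module (the worker function here just squares numbers; it stands in for whatever your real jobs are):
from multiprocessing import Process, Queue

def worker(jobs, results):
    # pull jobs until the None sentinel arrives
    for job in iter(jobs.get, None):
        results.put(job * job)           # "process" the job

if __name__ == "__main__":
    jobs, results = Queue(), Queue()
    workers = [Process(target=worker, args=(jobs, results)) for _ in range(4)]
    for w in workers:
        w.start()
    for job in range(20):
        jobs.put(job)
    for _ in workers:                    # one sentinel per worker
        jobs.put(None)
    done = sorted(results.get() for _ in range(20))   # drain before joining
    for w in workers:
        w.join()
    print(done)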
A: We've been using PARLANSE, a parallel programming language with explicit partial-order specification of concurrency, for the last decade, to implement a scalable program analysis and transformation system (DMS Software Reengineering Toolkit) that mostly does symbolic rather than numeric computation. PARLANSE is a compiled, C-like language with traditional scalar data types character, integer, float, dynamic data types string and array, compound data types structure and union, and lexically-scoped functions. While most of the language is vanilla (arithmetic expressions over operands, if-then-else statements, do loops, function calls), the parallelism is not. Parallelism is expressed by defining a "precedes" relation over blocks of code (e.g., a before b, a before c, d before c)
written as
(|; a (... a's computation)
(<< a) b ( ... b's computation ... )
(<< a) c ( ....c's computation ...)
(>> c) d ( ... d's computation...)
)|;
where the << and >> operators refer to "order in time". The PARLANSE compiler can see these parallel computations and preallocate all the structures necessary for computation grains a, b, c, d, and generate custom code to start/stop each one, thus minimizing the overhead to start and stop these parallel grains.
See this link for parallel iterative deepening search for optimal solutions to the 15-puzzle, which is the 4x4 big brother of the 8-puzzle. It only uses the potential-parallelism construct (|| a b c d ), which says there are no partial-order constraints on the computations a b c d, but it also uses speculation and asynchronously aborts tasks that won't find solutions. It's a lot of ideas in a pretty small bit of code.
PARLANSE runs on multicore PCs. A big PARLANSE program (we've built many with 1 million+ lines or more) will have thousands of these partial orders, some of which call functions that contain others.
So far we've had good results with up to 8 CPUs, and modest payoff with up to 16, and we're still tuning the system. (We think a real problem with larger numbers of cores on current PCs is memory bandwidth: 16 cores pounding a memory subsystem creates a huge bandwidth demand).
Most other languages don't expose the parallelism, so it is hard to find, and the runtime systems pay a high price for scheduling computation grains by using general-purpose scheduling primitives. We think that's a recipe for disaster, or at least poor performance, because of Amdahl's law: if the number of machine instructions to schedule a grain is large compared to the work, you can't be efficient. OTOH, if you insist on computation grains with many machine instructions to keep the scheduling costs relatively low, you can't find computation grains that are independent and so you don't have any useful parallelism to schedule. So the key idea behind PARLANSE is to minimize the cost of scheduling grains, so that grains can be small, so that there can be many of them found in real code. The insight into this tradeoff came from the abject failure of the pure dataflow paradigm, which did everything in parallel with tiny parallel chunks (e.g., the add operator).
We've been working on this on and off for a decade. It's hard to get this right. I don't see how folks that haven't been building parallel languages and using/tuning them for this time frame have any serious chance of building effective parallel systems.
A: I really like the model Clojure has chosen. Clojure uses a combination of immutable data structures and software transactional memory.
Immutable data structures are ones that never change. New versions of the structures can be created with modified data, but if you have a "pointer" to a data structure, it will never change out from under you. This is good because you can access that data without worrying about concurrency problems.
Software transactional memory is discussed elsewhere in these answers but suffice it to say that it is a mechanism whereby multiple threads can all act upon some data and if they collide, one of the threads is rolled back to try again. This allows for much faster speed when the risk of collision is present but unlikely.
There is a video from author Rich Hickey that goes into a lot more detail.
A: A worthwhile path might be OpenCL, which provides a means of distributing certain kinds of compute loads across heterogeneous compute resources, i.e. the same code will run on a multicore CPU and also on commodity GPUs. ATI has recently released exactly such a toolchain. NVidia's CUDA toolchain is similar, although somewhat more restricted. It also appears that Nvidia has an OpenCL SDK in the works.
It should be noted that this probably won't help much where the workloads are not of a data-parallel nature, for instance it won't help much with typical transaction processing. OpenCL is mainly oriented toward the kinds of computing that are math intensive, such as scientific/engineering simulation or financial modeling.
A:
If you were writing a new application from scratch today, and wanted it to scale to all the cores you could throw at it tomorrow, what parallel programming model/system/language/library would you choose?
Perhaps the most widely applicable today is Cilk-style task queues (now available in .NET 4). They are great for problems that can be solved using divide and conquer with predictable complexity for subtasks (such as parallel map and reduce over arrays where the complexity of the function arguments is known as well as algorithms like quicksort) and that covers many real problems.
Moreover, this only applies to shared-memory architectures like today's multicores. While I do not believe this basic architecture will disappear anytime soon I do believe it must be combined with distributed parallelism at some point. This will either be in the form of a cluster of multicores on a manycore CPU with message passing between multicores, or in the form of a hierarchy of cores with predictable communication times between them. These will require substantially different programming models to obtain maximum efficiency and I do not believe much is known about them.
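As a concrete, illustrative sketch of that divide-and-conquer style: Java's fork/join framework (standard since Java 7) implements the same work-stealing task-queue idea as Cilk and the .NET task library; the array-summing task and threshold below are arbitrary choices.
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
public class ParallelSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10000; // below this size, just sum sequentially
    private final long[] data;
    private final int lo, hi;
    ParallelSum(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }
    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {          // base case: sequential sum
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;           // divide...
        ParallelSum left = new ParallelSum(data, lo, mid);
        ParallelSum right = new ParallelSum(data, mid, hi);
        left.fork();                         // ...and conquer the halves in parallel
        long rightSum = right.compute();
        long leftSum = left.join();
        return leftSum + rightSum;
    }
    public static void main(String[] args) {
        long[] data = new long[1000000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(new ForkJoinPool().invoke(new ParallelSum(data, 0, data.length)));
    }
}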
A: There are two parts to parallel programming IMO: identifying the parallelism and specifying the parallelism. Identifying = breaking down the algorithm into concurrent chunks of work; specifying = doing the actual coding/debugging. Identifying is independent of which framework you will use to specify the parallelism, and I don't think a framework can help there much. It comes with a good understanding of your app, your target platform, common parallel programming trade-offs (hardware latencies etc.), and MOST importantly experience. Specifying, however, can be discussed, and this is what I try to answer below:
I have tried many frameworks (at school and work). Since you asked about multicores, which are all shared memory, I will stick with three shared memory frameworks I have used.
Pthreads (It's not really a framework, but definitely applicable):
Pro:
-Pthreads is extremely general. To me, pthreads are like the assembly of parallel programming. You can code any paradigm in pthreads.
-It's flexible, so you can make it as high performance as you want. There are no inherent limitations to slow you down. You can write your own constructs and primitives and get as much speed as possible.
Con:
-Requires you to do all the plumbing, like managing work queues and task distribution, yourself.
-The actual syntax is ugly and your app often has a lot of extra code which makes code hard to write and then hard to read.
OpenMP:
Pros:
-The code looks clean; plumbing and task-splitting are mostly done under the hood
-Semi-flexible. It gives you several interesting options for work-scheduling
Cons:
-Meant for simple for-loop-like parallelism. (The newer Intel version does support tasks too, but the tasks are the same as Cilk's.)
-Underlying structures may or may not be well-written for performance. GNU's implementation is just OK. Intel's ICC worked better for me, but I would rather write some stuff myself for higher performance.
Cilk, Intel TBB, Apple GCD:
Pros:
-Provably optimal underlying algorithms for task-level parallelism
-Decent control of serial/parallel tasks
-TBB also has a pipeline parallelism framework which I used (it isn't the best to be frank)
-Eliminates the task of writing a lot of code for task-based systems which can be a big plus if you are short-handed
Cons:
-Less control of underlying structures' performance. I know the Intel TBB has very poorly-performing underlying data structures, e.g., the work queue was bad (in 2008 when I saw it).
-The code looks awful sometimes with all the keywords and buzzwords they want you to use
-Requires reading a lot of references to understand their "flexible" APIs
A: I'd use Java - it's portable, so future processors won't be a problem. I'd also code my application with layers separating interface/logic and data (more like a 3-tier web app), with standard mutex routines as a library (less debugging of parallel code). Remember that web servers scale to many processors really well and are the least painful path to multicore. Either that, or look at the old Connection Machine model with a virtual processor tied to data.
A: Erlang is the more "mature solution" and is portable and open source. I fiddled around with Polyphonic C#, but I don't know what it would be like to program in it every day.
There are libraries and extensions for almost every language/OS under the sun; Google "transactional memory". It's an interesting approach from MS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
} |
Q: Is it me, or is Eclipse horribly unpredictable? I recently started using Eclipse at work for my Java servlet projects. I've been using jEdit for years, which is a uber-powerful text editor. It has syntax highlighting, but it doesn't have any language-specific features like code completion and intelligent refactoring. I'm finding that's hindering my productivity. I desperately want to like Eclipse. I really do. But I keep running into problem after problem.
*
*Now that Eclipse can use an ant script to build, instead of just creating its own build environment from an ant script then ignoring any changes to it, I found some online guides and set it up. It doesn't seem ready for prime time, though. My ant script builds fine from the command line, but I get all these build errors because I need to tell Eclipse all this stuff the build.xml already has in it, like the CLASSPATH, and where external jars are.
*When I leave Eclipse running for too long, or sometimes after my laptop wakes up from hibernate, the UI starts breaking. For instance, the tabs on the editor pane disappear, so I can only edit one file at a time, and it doesn't say which one it is.
*We have faced several instances where classes weren't rebuilt that should have been, leading to inaccurate line numbers in debugging walkthroughs and other unpredictable behavior (this isn't just me; the two other developers trying it out with me are seeing the same thing).
*I find it a huge hassle that the workspace is in a different place than my source code. I have other files I need to edit (xml files, etc), and for each directory I want to edit files in, I need to set up a special entry, and it doesn't even default to where my source code is when setting that up.
Do others face these same issues?
Are there better alternatives?
A: I love IntelliJ, but it's commercial. Eclipse feels like a buggy, half-hearted knockoff compared to it. To the point that IntelliJ's worth the cost.
A: Try NetBeans
A free, open-source Integrated Development Environment for software developers. You get all the tools you need to create professional desktop, enterprise, web, and mobile applications with the Java language, C/C++, and Ruby.
A: Eclipse works best if you leave the project folder structure to its management. We are working with 15 developers on a project of several thousand classes and even more XML and .properties files.
I agree there are problems with ANT integration, so for production and nightly builds I recommend an external build system based on ANT scripts started from a shell.
However, while working in Eclipse, make sure you have the automatic build feature on (it should be by default, but checking does not hurt). This should free you from any concerns regarding what to build and when. Sometimes (very rarely for me) there are problems when I have to switch off the automatic build, clean all projects and trigger a manual build via the menu. From time to time I have to trigger the build multiple times (not the cleaning!), but once everything has been built again, turning the auto-build on works great again.
As for long running instances: My machine keeps logged in basically all the time (day and night) and there are at least two Eclipse instances running at all times. I have not seen any problems with these sessions, even when they remain open for literally weeks.
Most of the problems I have seen in the 5 years I have been using Eclipse originated from people installing too many plugins - the only stuff I have added is Checkstyle, the "implementors plugin" and some proprietary stuff for the application framework we are using.
Maybe you can try using a rather clean Eclipse installation the "usual way" for a while (i. e. with the sources imported to the workspace folder).
Regarding NetBeans: I use it from time to time as well, but I think it is a matter of taste and I like Eclipse better. This may be different for you.
A: Eclipse can be quite a change, especially coming from something like just a text editor, or Visual Studio
*
*try to let Eclipse build the project itself, without the help of ant. Leave ant to a handwritten build.xml file to build the project from the command line outside of eclipse, e.g., on your build/release machine.
*low on memory?
*are you going back and forth between building the project w/ant and then having Eclipse trying to build the project too? i.e., are the builds "fighting" with each other? see 1.
*yes, one of the things that you would need to get used to... accept, rather than fight, the "eclipse way"; you have to put your working source files somewhere, then why not in Eclipse's workspace folder?
hope that helps/makes sense
A: Like some other people already have suggested: there are more good Java IDEs besides Eclipse. The strong point of Eclipse is the plug-in system. There's a wealth of functionality available and some of it is indeed very good. That said: I do not use Eclipse, but NetBeans at the moment. NetBeans feels less clunky, is more responsive and has a cleaner feel.
When my main job was Java programming I've used IntelliJ a lot. IMHO IntelliJ beats both NetBeans and Eclipse as far as coding is concerned. It's faster, has better refactoring possibilities, better search, quick navigation and the list goes on.
To a large extent, picking an IDE is a matter of taste, as well as experience. A lot of people feel more happy with the devil they know...
A: For issue number 1, you can setup custom builders for eclipse. To do so, right click on the Project and select Properties. On the left there is a item called Builders, select that.
Based on what you are saying, you will want to remove the Java builder and replace it with a new Ant Builder. This can be done by clicking New and selecting Ant Builder. This will bring up some configuration to fill out.
In the configuration, the two most important parts are the Build File in the Main tab and the Targets tab.
For issue 4, I would recommend having your project try to be independent of its location on disc. That way everything is in the same tree. Otherwise, the solution would be to setup external directories. From what it sounds like, not everything is in the same 'source tree', which brings up source control issues.
A: Partial, hopefully helpful answer to
4. I find it a huge hassle that the workspace is in a different place than my source code. I have other files I need to edit (xml files, etc), and for each directory I want to edit files in, I need to set up a special entry, and it doesn't even default to where my source code is when setting that up.
... you can configure the location of both your workspace and your source code, if you want.
A: Most of my time in Eclipse has been spent doing ColdFusion so I can't speak to ANT scripts or compilation. I too noticed that odd things would be more likely to happen if Eclipse was left running for an excessive amount of time. Aside from that, most other buggy-ness could be resolved by making sure that my JRE was the latest version.
A: As someone else has mentioned, try NetBeans, It's similar to Eclipse in that it is a platform that supports an IDE, and is plug-in based. Its build system is also already based around Ant, allowing you to tap in at various extension points. In general, I've found it a bit more stable than Eclipse as well, but YMMV.
A: We have eclipse manage things the way it wants, and use ant4eclipse (a set of ant tasks) for continuous builds. Works great!
A: Eclipse is a great tool. Hardly ever had problems with it in the many years that I've used it. It always amazes me how so many people can have problems with it. Then again, I'm using it as a fairly basic editor. I'm either lucky or my lack of problems stems from the fact that I'm not expecting it to be much more than a smart editor.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: What can cause Web.sitemap to not be found? I have an asp:menu object which I set up to use a SiteMapDataSource, but every time I try to run the site, I get a yellow screen from Firefox saying it cannot find the Web.sitemap. Here's the code for the SiteMapDataSource and the menu. The Web.sitemap file is sitting in the root directory of the website.
<div>
<asp:Menu ID="MainMenu" CssClass="wTheme" Orientation="Horizontal" runat="server" DataSourceID="SiteMapDataSource1">
</asp:Menu>
<asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" SiteMapProvider="Web.sitemap" />
</div>
And this is what the Web.sitemap looks like:
<?xml version="1.0" encoding="utf-8" ?>
A: I had a similar problem where I was specifying the path to the SiteMap from within my DataSource control. I tried removing it and it worked.
Try removing the path from the SiteMapDataSource and ensure that web.sitemap is in the root directory and see if that fixes it.
A: You need to configure an XmlSiteMapProvider in web.config and give it the correct path to the .sitemap file. Note also that the SiteMapProvider attribute on the SiteMapDataSource expects a provider name, not a file name like "Web.sitemap".
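A sketch of the relevant web.config section might look like this (the provider name is an arbitrary choice, and depending on your framework version you may need the fully assembly-qualified type name):
<system.web>
  <siteMap defaultProvider="MyXmlSiteMapProvider" enabled="true">
    <providers>
      <add name="MyXmlSiteMapProvider" type="System.Web.XmlSiteMapProvider" siteMapFile="Web.sitemap" />
    </providers>
  </siteMap>
</system.web>
The SiteMapDataSource would then reference SiteMapProvider="MyXmlSiteMapProvider" (or omit the attribute entirely to use the default provider).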
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Byte buffer transfer via UDP Can you provide an example of a byte buffer transferred between two java classes via UDP datagram?
A: How's this?
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
public class Server {
public static void main(String[] args) throws IOException {
DatagramSocket socket = new DatagramSocket(new InetSocketAddress(5000));
byte[] message = new byte[512];
DatagramPacket packet = new DatagramPacket(message, message.length);
socket.receive(packet);
System.out.println(new String(packet.getData(), packet.getOffset(), packet.getLength()));
}
}
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
public class Client {
public static void main(String[] args) throws IOException {
DatagramSocket socket = new DatagramSocket();
socket.connect(new InetSocketAddress("localhost", 5000)); // connect to the server's host and port
byte[] message = "Oh Hai!".getBytes();
DatagramPacket packet = new DatagramPacket(message, message.length);
socket.send(packet);
}
}
A: @none
The DatagramSocket classes sure need a polish up, DatagramChannel is slightly better for clients, but confusing for server programming. For example:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
public class Client {
public static void main(String[] args) throws IOException {
DatagramChannel channel = DatagramChannel.open();
ByteBuffer buffer = ByteBuffer.wrap("Oh Hai!".getBytes());
channel.send(buffer, new InetSocketAddress("localhost", 5000));
}
}
Bring on JSR-203 I say
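For completeness, a rough sketch of the receiving side with DatagramChannel (untested here, but only standard java.nio calls) might look like:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
public class ChannelServer {
    public static void main(String[] args) throws IOException {
        DatagramChannel channel = DatagramChannel.open();
        channel.socket().bind(new InetSocketAddress(5000)); // listen on UDP port 5000
        ByteBuffer buffer = ByteBuffer.allocate(512);
        SocketAddress sender = channel.receive(buffer);     // blocks until a datagram arrives
        buffer.flip();                                      // switch the buffer from writing to reading
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        System.out.println("from " + sender + ": " + new String(bytes));
    }
}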
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to add monsters to a Pokemon game? My friends and I are starting a game like Pokemon and we wanted to know how we will add monsters to the game. We're using VisualBasic because my friend's brother said it would be easier.
So far we can put pictures of the monsters on the screen and you can click to attack and stuff.
Right now when we want to add a monster we have to make a new window. This will take us a long time to make all the windows for each type of monster. Is there a tool or something to make this go faster? How do game companies do this?
A: I think the best solution would be to make a generic window which can take a few parameters which describe the monster.
I'm not entirely up-to-date with VB, but in an OO language we would have a Monster base class, and inheritance to create a Pikachu. The base class would define basic things a monster has (like a picture and a name and a type) and things a monster could do (like attack, run away etc). You could even use a second level, and have base classes for each type (like ElectricMonster, which inherits from Monster, and Pikachu, which inherits from ElectricMonster).
It then becomes really easy to pass a Monster object to a window, and have the window know how to pull out all the relevant information.
A: I'd suggest making a list of all the attributes you would need for each monster and store all of that in a database like MySQL. This way you don't need to make windows for each monster, only each time a monster appears (in which case you'd just get the necessary info from the database).
If you're not familiar with any database, check out the MySQL tutorial to get up and going.
A: I think the biggest problem will be creating all the different angles (for when the characters turn, etc.). Can you develop 3d models of the characters based on different frames from the tv show / card game?
A: I would suggest that you try to extract the various attributes that a monster might possess. Think Top Trumps...
Then you can create a single Monster class with each attribute represented by a Property/Field.
Something like
Class Monster
Public Name as String
Public Filename as String ' Location of graphics file on disk
Public Strength as Integer
Public Speed as Integer
Public Sub New(Name as String, Filename as String, Strength as Integer, Speed as Integer)
Me.Name = Name
Me.Filename = Filename
Me.Strength = Strength
Me.Speed = Speed
End Sub
End Class
Then you'll be able to create monsters like this.
Dim Monster1 as New Monster("monster1", "C:\Graphic1.jpg", 50, 10)
Dim Monster2 as New Monster("monster2", "C:\Graphic2.jpg", 1, 100)
Dim Monster3 as New Monster("monster3", "C:\Graphic3.jpg", 60, 17)
but you've not needed to create a new "Window" each time.
Equally, you will be able to get your "Monster" data from elsewhere... like a database, for example.
A: Once you have created your artwork, I would load it dynamically from the hard disk rather than compile it into one big EXE. You can use the PictureBox control's LoadPicture method.
A: You need to learn about data, data structures and loops. Your monsters should consist of data, and maybe some code, then your monster display screen will display and operate a monster based upon this data and code.
Copy and pasting widgets will not work out for you. Learn to abstract data and logic from widgets.
Stop using VB right now and go play with http://scratch.mit.edu it is much more suitable.
A: What do you mean by, 'when we want to add a monster'? Do you mean you have an individual window for each monster, which is shown when that monster appears? To build on what sit said; design, design, design. Ad Hoc design methods do not scale beyond the smallest of programs.
A: You have to have your monster data stored in files or a database and load them from a generic window. For example, you have a picture of Pikachu and one of Bulbasaur stored on your hard disk. Then you make a window with a blank picture; when you show the window, you tell the picture object to load the picture you need.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Best Mocking Library Which is the best mocking library for C# 3.0/ ASP.NET MVC? Why?
A: Very subjective question. What do you mean by "best"? Maybe you should provide some more context on your situation.
RhinoMocks is one of the most popular, as to whether it's the best for you, who knows?
A: Moq
It's amazing, fully supports the new language features of C# 3.0 and it's very easy to get going with. I would highly recommend it.
A: I'm going through that process now, weighing them up for use by my team, and I have to say Moq as well. It seems to have the smallest learning curve and some nice features; I love the use of generics to specify a mocked class.
A: there's also this other post about this topic:
Best mock framework that can do both WebForms and MVC?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: In Javascript, why is the "this" operator inconsistent? In JavaScript, the "this" operator can refer to different things under different scenarios.
Typically in a method within a JavaScript "object", it refers to the current object.
But when used as a callback, it becomes a reference to the calling object.
I have found that this causes problems in code, because if you use a method within a JavaScript "object" as a callback function you can't tell whether "this" refers to the current "object" or whether "this" refers to the calling object.
Can someone clarify usage and best practices regarding how to get around this problem?
function TestObject() {
TestObject.prototype.firstMethod = function(){
this.callback();
YAHOO.util.Connect.asyncRequest(method, uri, callBack);
}
TestObject.prototype.callBack = function(o){
// do something with "this"
//when method is called directly, "this" resolves to the current object
//when invoked by the asyncRequest callback, "this" is not the current object
//what design patterns can make this consistent?
this.secondMethod();
}
TestObject.prototype.secondMethod = function() {
alert('test');
}
}
A: Quick advice on best practices before I babble on about the magic this variable. If you want Object-oriented programming (OOP) in Javascript that closely mirrors more traditional/classical inheritance patterns, pick a framework, learn its quirks, and don't try to get clever. If you want to get clever, learn javascript as a functional language, and avoid thinking about things like classes.
Which brings up one of the most important things to keep in mind about Javascript, and to repeat to yourself when it doesn't make sense. Javascript does not have classes. If something looks like a class, it's a clever trick. Javascript has objects (no derisive quotes needed) and functions. (that's not 100% accurate, functions are just objects, but it can sometimes be helpful to think of them as separate things)
The this variable is attached to functions. Whenever you invoke a function, this is given a certain value, depending on how you invoke the function. This is often called the invocation pattern.
There are four ways to invoke functions in javascript. You can invoke the function as a method, as a function, as a constructor, and with apply.
As a Method
A method is a function that's attached to an object
var foo = {};
foo.someMethod = function(){
alert(this);
}
When invoked as a method, this will be bound to the object the function/method is a part of. In this example, this will be bound to foo.
As A Function
If you have a stand alone function, the this variable will be bound to the "global" object, almost always the window object in the context of a browser.
var foo = function(){
alert(this);
}
foo();
This may be what's tripping you up, but don't feel bad. Many people consider this a bad design decision. Since a callback is invoked as a function and not as a method, that's why you're seeing what appears to be inconsistent behaviour.
Many people get around the problem by doing something like, um, this
var foo = {};
foo.someMethod = function (){
var that=this;
function bar(){
alert(that);
}
}
You define a variable, that, which points to this. Closure (a topic all its own) keeps that around, so if you call bar as a callback, it still has a reference.
As a Constructor
You can also invoke a function as a constructor. Based on the naming convention you're using (TestObject) this also may be what you're doing and is what's tripping you up.
You invoke a function as a Constructor with the new keyword.
function Foo(){
this.confusing = 'hell yeah';
}
var myObject = new Foo();
When invoked as a constructor, a new Object will be created, and this will be bound to that object. Again, if you have inner functions and they're used as callbacks, you'll be invoking them as functions, and this will be bound to the global object. Use that var that = this; trick/pattern.
Some people think the constructor/new keyword was a bone thrown to Java/traditional OOP programmers as a way to create something similar to classes.
With the Apply Method.
Finally, every function has a method (yes, functions are objects in Javascript) named apply. Apply lets you determine what the value of this will be, and also lets you pass in an array of arguments. Here's a useless example.
function foo(a,b){
alert(a);
alert(b);
alert(this);
}
var args = ['ah','be'];
foo.apply('omg',args);
A: this corresponds to the context for the function call. For functions not called as part of an object (no . operator), this is the global context (window in web pages). For functions called as object methods (via the . operator), it's the object.
But, you can make it whatever you want. All functions have .call() and .apply() methods that can be used to invoke them with a custom context. So if i set up an object Chile like so:
var Chile = { name: 'booga', stuff: function() { console.log(this.name); } };
...and invoke Chile.stuff(), it'll produce the obvious result:
booga
But if i want, i can take and really screw with it:
Chile.stuff.apply({ name: 'supercalifragilistic' });
This is actually quite useful...
A: In JavaScript, this always refers to the object invoking the function that is being executed. So if the function is being used as an event handler, this will refer to the node that fired the event. But if you have an object and call a function on it like:
myObject.myFunction();
Then this inside myFunction will refer to myObject. Does it make sense?
To get around it you need to use closures. You can change your code as follows:
function TestObject() {
TestObject.prototype.firstMethod = function(){
this.callback();
YAHOO.util.Connect.asyncRequest(method, uri, callBack);
}
var that = this;
TestObject.prototype.callBack = function(o){
that.secondMethod();
}
TestObject.prototype.secondMethod = function() {
alert('test');
}
}
A: If you're using a javascript framework, there may be a handy method for dealing with this. In Prototype, for example, you can call a method and scope it to a particular "this" object:
var myObject = new TestObject();
myObject.firstMethod.bind(myObject);
Note: bind() returns a function, so you can also use it to pre-scope callbacks inside your class:
callBack.bind(this);
http://www.prototypejs.org/api/function/bind
A: I believe this may be due to how closures (http://en.wikipedia.org/wiki/Closure_(computer_science)) work in Javascript.
I am just getting to grips with closures myself. Have a read of the linked wikipedia article.
Here's another article with more information.
Anyone out there able to confirm this?
A: When callback methods are called from another context, I usually use something that I call a callback context:
var ctx = function CallbackContext()
{
_callbackSender
...
}
function DoCallback(_sender, delegate, callbackFunc)
{
ctx = _callbackSender = _sender;
delegate();
}
function TestObject()
{
test = function()
{
DoCallback(otherFunc, callbackHandler);
}
callbackHandler = function()
{
ctx._callbackSender;
//or this = ctx._callbacjHandler;
}
}
A: You can also use Function.apply(thisArg, argsArray)... where thisArg determines the value of this inside your function... the second parameter is an optional arguments array that you can also pass to your function.
If you don't plan on using the second argument, don't pass anything to it. Internet Explorer will throw a TypeError at you if you pass null (or anything that is not an array) to function.apply()'s second argument...
With the example code you gave it would look something like:
YAHOO.util.Connect.asyncRequest(method, uri, callBack.apply(this));
A: If you're using Prototype you can use bind() and bindAsEventListener() to get around that problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: Best Reporting application I am designing a web application and using Crystal Reports for reporting. Sometimes Crystal Reports gives runtime errors. What can I do to make my report faster? It retrieves around 1MB of data; it is the month-end sales report. What is the best method to solve this issue?
A: You can use i-net Clear Reports (used to be i-net Crystal-Clear). It can read Crystal Reports report files. That way, you do not need to redesign your reports. A report of 1 MB is not a problem. We have reports with 100MB and more. It has also a txt output format.
Client-side reporting is also possible but of course this is a little more complicated.
A: 1Mb sounds like far too much data for a report -
I would try to filter it down more on the server
A: You haven't noted what DB platform you are on, but if it's MSSQL-based you may want to try using the embedded SQL Reporting Services reporting. We found that functionality-wise it can't do everything Crystal can, but it was easy to use, free, and didn't have some of the annoying bugs that we had with Crystal.
A: I would be fascinated to hear exactly what you think CR can do that SSRS can't. Include only capabilities that cannot be trivially achieved by alternate strategies. Exclude "capabilities" that only work in carefully contrived vendor demos.
A: I'm not that knowledgeable at CR, so this is tough for me (I'm a SSRS fan). So you're basically asking me to attack the thing I would normally recommend. Maybe a tough order but here goes.
The feedback I've gotten from others (who are much smarter than me :-) is that various formatting options/functions are better right now in Crystal. One example that I've had to deal with myself in SSRS is manipulating dates - there are (I'm told) many more functions for manipulating dates in CR. Fixing this may be trivial for some, but not for everybody, and not for me.
What else - export to Word is, I believe, available in CR, not in SSRS. Also, I believe combining dataset results is at least somewhat easier in CR. These might be better in the newly released 2008 version.
Again, keep in mind this is based more on what I've been told when I complain occasionally about SSRS - I still really like it however.
A: The reason I used Crystal Reports is that I had to print the report to an LG matrix printer, which basically meant I had to export the report in text format. I really don't know how that is possible to do in SSRS, but I find it easier in Crystal Reports. That is the main reason I have to switch back to Crystal. If anyone can suggest an alternative option then I may try that for my application. I want to do client-side reporting more than server-side.
A: The first thing I would do is make sure your queries are running correctly and quickly outside of your reporting tool.
I would also look at the indexes on your tables and tune the query, and maybe create a view to contain the data (or some of it) to help speed up the process.
A: You didn't provide any specifics but How to Turbo Charge your Report Speed has a number of general suggestions to improve your speed. (Disclaimer - I wrote the blog entry.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: diff a ruby string or array How do I do a diff of two strings or arrays in Ruby?
A: There is also diff-lcs which is available as a gem. It hasn't been updated since 2004 but we have been using it without any problem.
Edit: A new version was released in 2011. Looks like it's back in active development.
http://rubygems.org/gems/diff-lcs
A: The HTMLDiff that @da01 mentions above worked for me.
script/plugin install git://github.com/myobie/htmldiff.git
# bottom of environment.rb
require 'htmldiff'
# in model
class Page < ActiveRecord::Base
extend HTMLDiff
end
# in view
<h1>Revisions for <%= @page.name %></h1>
<ul>
<% @page.revisions.each do |revision| %>
<li>
<b>Revised <%= distance_of_time_in_words_to_now revision.created_at %> ago</b><BR>
<%= Page.diff(
revision.changes['description'][0],
revision.changes['description'][1]
) %>
<BR><BR>
</li>
<% end %>
# in style.css
ins.diffmod, ins.diffins { background: #d4fdd5; text-decoration: none; }
del.diffmod, del.diffdel { color: #ff9999; }
Looks pretty good. By the way I used this with the acts_as_audited plugin.
A: t=s2.chars; s1.chars.map{|c| c == t.shift ? c : '^'}.join
This simple line gives a ^ in the positions that don't match. That's often enough and it's copy/paste-able.
A: For arrays, use the minus operator. For example:
>> foo = [1, 2, 3]
=> [1, 2, 3]
>> goo = [2, 3, 4]
=> [2, 3, 4]
>> foo - goo
=> [1]
Here the last line removes everything from foo that is also in goo, leaving just the element 1. I don't know how to do this for two strings, but until somebody who knows posts about it, you could just convert each string to an array, use the minus operator, and then convert the result back.
A: I got frustrated with the lack of a good library for this in ruby, so I wrote http://github.com/samg/diffy. It uses diff under the covers, and focuses on being convenient, and providing pretty output options.
A: diff.rb is what you want, which is available at http://users.cybercity.dk/~dsl8950/ruby/diff.html via internet archive:
http://web.archive.org/web/20140421214841/http://users.cybercity.dk:80/~dsl8950/ruby/diff.html
A: For strings, I would first try out the Ruby Gem that @sam-saffron mentioned below. It's easier to install:
http://github.com/pvande/differ/tree/master
gem install differ
irb
require 'differ'
one = "one two three"
two = "one two 3"
Differ.format = :color
puts Differ.diff_by_word(one, two).to_s
Differ.format = :html
puts Differ.diff_by_word(one, two).to_s
A: I just found a new project that seems pretty flexible:
http://github.com/pvande/differ/tree/master
Trying it out and will try to post some sort of report.
A: I had the same doubt and the solution I found is not 100% ruby, but is the best for me.
The problem with diff.rb is that it doesn't have a pretty formatter, to show the diffs in a humanized way. So I used diff from the OS with this code:
require 'tempfile'
def diff str1, str2
system "diff #{file_for str1} #{file_for str2}"
end
private
def file_for text
exp = Tempfile.new("bk", "/tmp").open
exp.write(text)
exp.close
exp.path
end
A: Just for the benefit of Windows people: diffy looks brilliant but I believe it will only work on *nix (correct me if I'm wrong). Certainly it didn't work on my machine.
Differ worked a treat for me (Windows 7 x64, Ruby 1.8.7).
A: Maybe Array.diff via monkey-patch helps...
http://grosser.it/2011/07/07/ruby-array-diffother-difference-between-2-arrays/
A: To get character by character resolution I added a new function to damerau-levenshtein gem
require "damerau-levenshtein"
differ = DamerauLevenshtein::Differ.new
differ.run "Something", "Smothing"
# returns ["S<ins>o</ins>m<subst>e</subst>thing",
# "S<del>o</del>m<subst>o</subst>thing"]
or with parsing:
require "damerau-levenshtein"
require "nokogiri"
differ = DamerauLevenshtein::Differ.new
res = differ.run("Something", "Smothing!")
nodes = Nokogiri::XML("<root>#{res.first}</root>")
markup = nodes.root.children.map do |n|
case n.name
when "text"
n.text
when "del"
"~~#{n.children.first.text}~~"
when "ins"
"*#{n.children.first.text}*"
when "subst"
"**#{n.children.first.text}**"
end
end.join("")
puts markup
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
} |
Q: iCal Format - Organizer Property I am currently programming a scheduling application which is loosely based on the iCalendar standard. Does anyone know in which property I can store the event creator's information? By browsing through the iCalendar RFC 2445, I found this property: Organizer. Can I store the event creator's information in that property even if he/she is the only person involved in the event? Or is there already a field to store the event creator's information?
A: ORGANIZER;CN="Sally Example":mailto:[email protected]
Looks like the answer
A: Some notes from the rfc2445
Conformance: This property MUST be specified in an iCalendar object
that specifies a group scheduled calendar entity. This property MUST
be specified in an iCalendar object that specifies the publication of
a calendar user's busy time. This property MUST NOT be specified in
an iCalendar object that specifies only a time zone definition or
that defines calendar entities that are not group scheduled entities,
but are entities only on a single user's calendar.
A: I am researching a similar application, concerned with event tracking and handling, and came to the same conclusions as Jeffrey04.
Specifically, to represent warning or alarm, it would seem appropriate to use the VJOURNAL component, as the event is in the past, and maybe continues through the present, but is certainly not a meeting. VJOURNAL also does not occupy space on the calendar.
IMHO the best field for representing the originator is X-WR-RELCALID, which is not RFC5545, but seems to fit the idea of a creator UID. I will link this to a vCard UID.
I cannot understand why the idea of an event creator was unimportant for the writers of iCal specs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Detect "Clone Mode" display setup How can I determine if my displays are in "Clone Mode" without using either COPP (Computer Output Protection Protocol) or OPM (Output Protection Protocol) on Windows?
Vista solution:
hMonitor = MonitorFromWindow (HWND_DESKTOP, MONITOR_DEFAULTTOPRIMARY);
bSuccess = GetNumberOfPhysicalMonitorsFromHMONITOR (hMonitor, &dwMonitorCount);
A: I assume you've already tried EnumDisplayMonitors() and it didn't work. So if that returns a single HMONITOR for each set of cloned displays, you could compare this set of results to the result of EnumDisplayDevices(). Devices returned by EnumDisplayDevices() that are attached to the desktop but aren't returned by EnumDisplayMonitors() should be clones.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What's the best way to distribute Java applications? Java is one of my programming languages of choice. I always run into the problem though of distributing my application to end-users.
Giving a user a JAR is not always as user friendly as I would like and using Java WebStart requires that I maintain a web server.
What's the best way to distribute a Java application? What if the Java application needs to install artifacts to the user's computer? Are there any good Java installation/packaging systems out there?
A: There are a variety of solutions, depending on your distribution requirements.
*
*Just use a jar. This assumes that the user has the correct java version installed, otherwise the user will get "class-file format version" exceptions. This is fine for internal distribution inside a company.
*Use launch4j and an installer like NSIS. This gives you a lot more control, although the user can still do stupid stuff like un-installing the java runtime. This is probably the most popular approach, and what I currently use.
*Use Webstart. This also assumes that the user has the correct java version installed, but it's a lot easier to get going. My experience is that this is fine for tightly controlled intranet environments, but becomes a pain with larger deployments because it has so many weird failures. It may get better with the new plug-in technology in Java 1.7.
*Use a native-code compiler like Excelsior JET and distribute as an executable, or wrap it up in an installer. Expensive, and it generally ties you to a slightly older version of java, and there is some pain with dynamic class-loading, but it's very effective for large-scale deployment where you need to minimise your support hassles.
A: advanced installer makes it easy to package java apps as windows executables, and it's quite flexible in the way you can set it up. I've found that for distributing java applications to windows clients, this is the easiest way to go.
A: JSmooth is a simple program that takes your jar and wraps it up in a standard windows executable file. It comes with a simple GUI that allows you to configure the required JVM, bundle it with the application or provide an option to download it if it's not already installed. You can send the exe file as is or zip it with possible dependencies (or let the program download the extra dependencies from the net on startup). It's also free, as in beer and speech, which may (or may not) be a good thing.
A: If it's a real GUI-having end user application you should ignore the language in which you wrote the program (Java) and use a native installer for each of your chosen platforms. Mac folks want a .dmg, and on Windows a .msi or a .exe installer is the way to go. On Windows I prefer NSIS from NullSoft only because it's less objectionable than InstallShield or InstallAnywhere. On OSX you can count on the JVM already being there. On Windows you'll need to check and install it for them if necessary. Linux people won't run Java GUI applications, and the few that will know what to do with an executable .jar.
A: It depends on how sophisticated your target users are. In most cases you want to isolate them from the fact that you are running a Java-based app. Give them a native installer that does the right thing (create start menu entries, launchers, register with add/remove programs, etc.) and already bundles a Java runtime (so the user does not need to know or care about it). I would like to suggest our cross platform installation tool, BitRock InstallBuilder. Although it is not Java-based, it is commonly used to package Java applications. It can be easily integrated with Ant and you can build Windows installers from Unix/Linux/Mac and the other way around. Because the generated installers are native, they do not require a self-extraction step or a JRE to be already present in the target system, which means smaller installers and saves you some headaches. I also would like to mention we have free licenses for open source projects.
A: Although I haven't used NSIS (Nullsoft Scriptable Installer System) myself, there are install scripts that will check whether or not the required JRE is installed on the target system.
Many sample scripts are available from the Code Examples and Real World Installers pages, such as:
*
*Java Launcher with automatic JRE installation
*Simple Java Runtime Download Script
(Please note that I haven't actually used any of the scripts, so please don't take it as an endorsement.)
A: executable files are best but they are platform limited i.e. use gcj : http://gcc.gnu.org/java/ for linux to produce executables and use launch4j : http://launch4j.sourceforge.net/ to produce windows executables.
To package on linux you can use any rpm or deb packager. For win32 try http://en.wikipedia.org/wiki/Nullsoft_Scriptable_Install_System
A: I needed a way to package my project and its dependencies into a single jar file.
I found what I needed using the Maven2 Assembly plugin: Maven2 Assembly plugin
This appears to duplicate the functionality of one-jar, but requires no additional configuration to get it going.
A: For simple Java apps I like to use JARs. It is very simple to distribute one file that a user can just click on (Windows), or
java -jar jarname.jar
IMHO, jar is the way to go when simplicity is a main requirement.
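One detail worth noting: for java -jar to work, the jar's manifest needs a Main-Class entry, for example (the class name here is just a placeholder):
Main-Class: com.example.MainClass
You can set it when building the jar, e.g. jar cfe jarname.jar com.example.MainClass -C classes . (the e option requires JDK 6 or later), or via your Ant/Maven build.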
A: I develop eclipse RCP applications. Normally to start an eclipse application an executable launcher is included. I include the java virtual machine inside the application folder in a /jre sub directory to ensure that the right java version will be used.
Then we package with Inno Setup for installation on the user's machine.
A:
What's the best way to distribute a Java application? What if the Java application needs to install artifacts to the user's computer? Are there any good Java installation/packaging systems out there?
In my experience (from evaluating a number of options), install4j is a good solution. It creates native installers for any platform, and is specifically geared towards installing Java apps. For details, see "Features" on its website.
install4j is a commercial tool, though. Especially if your needs are relatively simple (just distribute an application and install some artifacts), many other good options exist, including free ones (like IzPack or the already mentioned Launch4j). But you asked for the best way, and to my current knowledge install4j is the one, especially for distributing larger or more complicated Java (EE) apps.
A: The best answer depends on the platform. For deployment on Windows, I have had good results using a combination of one-jar and launch4j. It did take a little while to set up my build environment properly (ant scripts, mostly) but now it's fairly painless.
A: Well, from my point of view the superior distribution mechanism is to use something like ClickOnce or WebStart technology. You just deploy the version to the server and it gets to the clients automatically when the version is released.
Also, the Eclipse RCP platform contains an UpdateManager that does what WebStart does, but also much more.
Since I am using Maven2 for building, the deployment is just a piece of cake: copy the built jar to the location on the server, update the jnlp file if needed and you are done.
A: InstallAnywhere is good but an expensive one - I have not found an equally good free one.
A: I would zip the jar file along with other dependent jars, configuration files and documentation, along with a run.bat/run.sh. The end user should be able to unzip it to any location and edit the run.bat if required (it should run without editing in most cases).
An installer may be useful if you want to create entries in start menu, desktop, system tray etc.
As a user I prefer the unzip-and-run kind of installation (no start menu entries, please). However, people outside the IT industry may have different preferences. So if the application is largely targeted at developers, go the zip/run.bat route; applications for the general public may be installed using an installer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "119"
} |
Q: What's the difference between XML-RPC and SOAP? I've never really understand why a web service implementer would choose one over the other. Is XML-RPC generally found in older systems? Any help in understanding this would be greatly appreciated.
A: Differences?
SOAP is more powerful, and is much preferred by software tool vendors (MSFT .NET, Java Enterprise edition, that sort of things).
SOAP was for a long time (2001-2007ish) seen as the protocol of choice for SOA. xml-rpc not so much. REST is the new SOA darling, although it's not a protocol.
SOAP is more verbose, but more capable.
SOAP is not supported in some of the older stuff. For example, no SOAP libs for classic ASP (that I could find).
SOAP is not well supported in python. XML-RPC has great support in python, in the standard library.
SOAP supports document-level transfer, whereas xml-rpc is more about values transfer, although it can transfer structures such as structs, lists, etc.
xml-rpc is really about program-to-program, language-agnostic transfer. It primarily goes over http/https. SOAP messages can go over email as well.
xml-rpc is more unixy. It lets you do things simply, and when you know what you're doing, it's very fast to deploy quality web services, even when using terminal text editors. Doing SOAP that way is a zoo; you really need a good IDE to make it feasible.
Knowing SOAP, though, will look much better on your resume/CV if you're vying for a Fortune 500 IT job.
xml-rpc has some issues with non-ascii character sets.
XML-RPC does not support named parameters. They must be in correct order. Not sure about SOAP, but think so.
A: Kate Rhodes has a great essay on the differences at http://weblog.masukomi.org/2006/11/21/xml-rpc-vs-soap
A: Just to add to the other answers, I would encourage you to look at actual textual representations of SOAP and XML-RPC calls, perhaps by capturing one with Ethereal. The whole, "XML-RPC is simpler" argument doesn't make much sense until you see how incredibly verbose a SOAP call is. Many of the fairly popular web sites out there shy away from SOAP as their API due to just the amount of bandwidth it would consume if people started using it extensively.
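To illustrate the size difference, here are hand-written examples (not captured from a real service) of roughly equivalent calls. An XML-RPC request body is just:
<?xml version="1.0"?>
<methodCall>
  <methodName>examples.getStateName</methodName>
  <params>
    <param><value><i4>41</i4></value></param>
  </params>
</methodCall>
while a minimal SOAP 1.1 request body for the same call looks more like:
<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <m:getStateName xmlns:m="urn:examples:statename">
      <m:stateNumber>41</m:stateNumber>
    </m:getStateName>
  </soap:Body>
</soap:Envelope>
and that is before adding the SOAPAction HTTP header, the WSDL that describes the service, and any WS-* extensions.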
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: Cruise Control .Net vs Team Foundation Build Our team is setting up nightly and continuous integration builds. We own Team Foundation Server and could use Team Foundation Build. I'm more familiar with CC.Net and lean that way but management sees all the money spent on TFS and wants to use it.
Some things I like better about CC.Net is the flexibility of notifications as well as the ease of implementing custom scripts.
If you have experience with both products, which do you prefer and why?
A: The real value in Team Foundation Build is that it associates changesets and work items with builds.
This enables a couple of useful scenarios:
*
*You can look at a work item and find out what build it is included in
*You can look at a build and see which code changes (and work items) it includes
Then of course there's the reports built on top of this information. But even these links by themselves are useful to non-management types.
Have a look at www.tfsbuild.com for "recipes" on different Team Build configurations.
A: SVN is an OK tool, but "far superior" is not true. SVN vs. TFS is similar to a Ford pickup vs. a Mercedes 500: it gets the job done, but it isn't pretty nor is it comfortable, and the merging leaves a lot to be desired. I prefer the TFS merging tool, as it seems like the branching dev is right there working with you - that is how smart it is. Our internal SVN seemed to get corrupted a lot; this is the reason we ditched it and went to TFS and have not looked back. The shelving of changesets is wonderful for an agile development shop. We currently have 270+ engineers on TFS with no issues or problems; SVN simply was not capable of handling that kind of load without someone having issues.
I prefer CC.NET simply because of tools we have developed in house to extend the functionality in reporting and administration. TFS build is very closely integrated however and we anticipate a switch when we upgrade to SQL 2008
A: I've used both. I guess it depends on what your organization values.
Since you are familiar with CC Net, I won't speak much to that. You already know what makes it cool.
Here's what I like about Team Foundation Build:
*
*Build Agents. It's very simple to turn any box into a build machine and run a build on it. MSFT got this one right.
*Reporting. All relevant build results (test included) are stored in a SQL database and reported on via SQL Server Reporting Services. This is an immensely powerful tool for charting build and test results over time. CC Net doesn't have this built in.
*You can do similar customizations via MSBUILD. It's basically the same as using NAnt with CC Net
Here's what drives me up the wall about Team Foundation Build:
*
*To build C++/CLI projects (or run unit tests...?) the build agent must have VSTS Dev or Team Suite installed. This, friends, is just batsh*t crazy.
*It must be connected to the TFS Mothership
If you're in a big org with lots of bosses who have huge budgets and love reports (and don't get me wrong, this has huge value) OR you need to scale up to a multi-machine build farm, I'd prefer Team Foundation Build.
If you're a leaner shop, stick with CC Net and grow your own reporting solutions. That's what we did.
Until we got acquired. And got TFS :P
A: We've been using CruiseControl.net since June '07 and it's worked great for us. Best part, it integrates to SVN easily which is a far superior source control provider.
So our setup is:
*
*Cruise Control.Net
*SVN
*Trac - for bug reports and project management (integrates perfectly with SVN)
*nunit - for unit testing
We've had some major parallel development going and the branching and merging experience was spectacular. If you have the choice I'd go with the setup above!
A: I'm assuming that as you own TFS you'll be using it for version control. In that case I would lean towards Team Foundation Build. That said, I pretty much agree with Nick.
I wrote the CruiseControl.NET integration for TFS. It works fine and gives you the same build capabilities that you are used to. To me, CC.NET's main advantage is that it is completely extensible and has integrations with all the major SCM and build systems under the sun. The main reason I wrote the CC.NET integration to TFS it is that in TFS2005 the build system did not have out-the-box CI support. However the TFS2008 version is much improved and the team continue to very actively improve it for future releases of TFS.
The main reason for switching to TFS Build would be so that it automatically reports the build information back into TFS which helps complete the software development picture in terms of reporting. It also integrates nicely with the work item tracking side of TFS and inside the IDE (both in Visual Studio and Eclipse).
That said, if you have large investments in Nant scripts that do more than just compile and test your code or you already have a home-brewed reporting solution you might want to stick with what you have.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: storing revision changes of a message What algorithms and processes are involved in storing revision changes like stackoverflow and wikipedia do?
Is only one copy of the message kept? And if so is it only the latest copy? Then only changes to go back to the previous version(s) are stored from there? (This would make for a faster display of the main message).
Or are complete messages stored? And if so is the compare done between these on each display?
What algorithms are best used to determine the exact changes in the message? How is this data stored in a database?
If anyone knows exactly what Wikipedia or Stack Overflow does, I'd love to know.
A: Mediawiki (the software behind Wikipedia) stores full text for all revisions; see the database schema. Each entry in the text table in Mediawiki has flags that tell whether the content has been, for example, gzipped; using standard compression is often the sanest option.
I can't tell you how to do the diffs algorithmically, but whatever algorithm you use, you should compute the diff from two full versions of the text. That is, fetch the complete old and new versions from the database, then do the diff. This makes it easy to change the diffing algorithm later.
Git is a great example of a Unix application that can do very cheap (storage and speedwise) delta storage. There are wikis that can use git e.g. ikiwiki, but I'm guessing you want to do it with a database.
A: Usually messages are stored as complete snapshots. Previous versions are disabled, and the most recent is displayed. There may be optimizations used like caching which version is the most recent.
A: The longest common substring algorithm can be used to detect differences between versions, but it is limited. For example, it does not detect the moving around of text as such, but it would see this as unrelated removals and insertions.
I suppose that websites normally store the latest copy in full, and apply reverse diffs from there. This is also the way CVS works, but Subversion uses forward diffs, which results in slower checkouts.
To store this in a database, one could maintain a main table with the latest versions, and have a separate table with the reverse differences. This table would have rows in the format (article_id, revision_id, differences).
A: Typical revision changes are stored using a delta algorithm, so the only data stored is the set of changes in each revision relative to the original. I am unsure how Wikipedia or Stack Overflow have implemented it.
A: I would use the following technique:
*
*Store the current message as complete text.
*Store the history using the delta algorithm.
This will keep your performance good with regular display, while keeping the storage to a minimum for the history.
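To make that concrete, here is a minimal C# sketch of the reverse-delta idea (my own illustration, not how any of the sites mentioned actually implement it; the type and member names are invented). It records only a single contiguous changed region per revision, which is much cruder than a real diff algorithm, but it shows the store-latest-in-full / walk-backwards-through-deltas flow:

using System;
using System.Collections.Generic;
using System.Linq;

// One reverse delta: how to turn revision N back into revision N-1.
public class ReverseDelta
{
    public int Start;            // index of the first changed line in the newer text
    public int ReplacedCount;    // how many lines of the newer text to drop
    public string[] OldLines;    // the older text's lines to put back in their place
}

public static class RevisionStore
{
    // Compute the delta that rebuilds oldLines from newLines.
    public static ReverseDelta Compute(string[] newLines, string[] oldLines)
    {
        int max = Math.Min(newLines.Length, oldLines.Length);

        int prefix = 0;
        while (prefix < max && newLines[prefix] == oldLines[prefix])
            prefix++;

        int suffix = 0;
        while (suffix < max - prefix &&
               newLines[newLines.Length - 1 - suffix] == oldLines[oldLines.Length - 1 - suffix])
            suffix++;

        return new ReverseDelta
        {
            Start = prefix,
            ReplacedCount = newLines.Length - prefix - suffix,
            OldLines = oldLines.Skip(prefix).Take(oldLines.Length - prefix - suffix).ToArray()
        };
    }

    // Apply a reverse delta to the newer text to get the older text back.
    public static string[] Apply(string[] newLines, ReverseDelta delta)
    {
        return newLines.Take(delta.Start)
                       .Concat(delta.OldLines)
                       .Concat(newLines.Skip(delta.Start + delta.ReplacedCount))
                       .ToArray();
    }
}

You would keep the latest text in full in the main table, persist one serialized ReverseDelta per row in a history table keyed by (article_id, revision_id), and reconstruct any older revision by applying the deltas in order, newest to oldest.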
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Comparing MySQL Cross and Inner Joins What are the potential pros and cons of each of these queries given different databases, configurations, etc? Is there ever a time when one would be more efficient than the other? Vice versa? Is there an even better way to do it? Can you explain why?
Query 1:
SELECT
*
FROM
table_a, table_b, table_c
WHERE
table_a.id = table_b.id AND
table_a.id = table_c.id AND
table_a.create_date > DATE('1998-01-01');
Query 2:
SELECT
*
FROM
table_a
INNER JOIN table_b ON
table_a.id = table_b.id
INNER JOIN table_c ON
table_a.id = table_c.id
WHERE
table_a.create_date > DATE('1998-01-01');
A: Same query, different revision of SQL spec. The query optimizer should come up with the same query plan for those.
A: Nope. I'm just sharing a large, overwhelmed database with some coworkers and am trying to come up with some ways to get more processor bang for our buck. I've been looking around online but haven't found a good explanation for some questions like this.
Sorry for sounding homework-y. I guess I spent too many years as a TA.
A: Actually, I think query 2 is more readable. Think about what happens when you get to, say, 5, 6, or 7 tables and hit the WHERE clause in query one. Following the joins could get messy.
As for performance, I have no idea. I bet if you go to the MySQL website, there would be info there - probably examples of joins.
Professionally, I've only worked on one project. But it was a big one, and they always followed query 2's format. This was using Microsoft SQL Server though.
A: I agree, it's sounding a bit too much like Homework!
If it isn't homework then I guess the simplest answer is readability.
As stated before, both queries will produce the same execution plan. If this is the case then the only thing you need to worry about is maintainability.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What does COINIT_SPEED_OVER_MEMORY do? When calling CoInitializeEx, you can specify the following values for dwCoInit:
typedef enum tagCOINIT {
COINIT_MULTITHREADED = 0x0,
COINIT_APARTMENTTHREADED = 0x2,
COINIT_DISABLE_OLE1DDE = 0x4,
COINIT_SPEED_OVER_MEMORY = 0x8,
} COINIT;
What does the suggestively titled "speed over memory" value do? Is it ignored these days in COM?
A: No idea if it's still used but it was meant to change the balance used by the COM algorithms.
If you had tons of memory and wanted speed at all costs, you would set that flag.
In low-memory environments, leaving that flag off would favor reduced memory usage.
As it turns out, the marvellous Raymond Chen (of "The Old New Thing" fame) has now weighed in on the subject and, despite what that flag was meant to do, it apparently does nothing at all.
See What does the COINIT_SPEED_OVER_MEMORY flag to CoInitializeEx do? for more details:
When should you enable this mode? It doesn't matter, because as far as I can tell, there is no code anywhere in COM that changes its behavior based on whether the process has been placed into this mode! It looks like the flag was added when DCOM was introduced, but it never got hooked up to anything. (Or whatever code that had been hooked up to it never shipped.)
Also http://archives.neohapsis.com/archives/microsoft/various/dcom/2001-q1/0160.html from Steve Swartz, one of the original COM+ architects:
COINIT_SPEED_OVER_MEMORY is ignored by COM.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I hide a column only on the list page in ASP.NET Dynamic Data? This is somewhat similar to this question.
However, rather than wanting to hide a column altogether, I am just looking to hide the column on the List.aspx page.
My specific example is that fields that are long (or at least nvarchar(MAX)) automatically hide from the List.aspx page as is but are still visible on the Edit.aspx page.
I would like to replicate this behaviour for other (shorter) columns.
Is this possible?
A: You can create a custom page for the particular table you want to change. There's an example here.
Within your custom page, you can then set AutoGenerateColumns="false" within the asp:GridView control, and then define exactly the columns you want, like this:
<Columns>
...
<asp:DynamicField DataField="Product" HeaderText="Product" />
<asp:DynamicField DataField="Colour" HeaderText="Colour" />
</Columns>
A: I think this solution is a really useful one, because it allow you to use the attribute model to specify which columns go where:
http://csharpbits.notaclue.net/2008/10/dynamic-data-hiding-columns-in-selected.html
A: If you are using Bootstrap, you can set it like this:
<asp:DynamicField DataField="Id" ItemStyle-CssClass="hidden" HeaderStyle-CssClass="hidden" FooterStyle-CssClass="hidden"/>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to track changes to business objects? I get the concept of creating a business object or entity to represent something like a Person. I can then serialize the Person using a DTO and send it down to the client. If the client changes the object, it can have an IsDirty flag on there so when it gets sent back to the server I know to update it.
But what if I have an Order object? This has the main header information: customer, supplier, required date, etc. Then it has OrderItems, which is a List<OrderItem> holding the items to be ordered. I want to be able to use this business object in my UI. So I have some textboxes hooked up to the location, supplier, required date, etc. and a grid hooked up to OrderItems. Since OrderItems is a List I can easily add and delete records in it. But how do I track this, especially the deleted items? I don't want the deleted items to be visible in my grid, and I shouldn't be able to iterate over them with foreach, because they have been deleted. But I still need to track the fact that there was a deletion. How do I track the changes? I think I need to use a unit of work? But then the code seems to become quite complex. So then I wonder why not simply use DataTables and get the change tracking for free? But then I read how business objects are the way to go.
I've found various simple Person examples, but not header-detail examples like Orders.
BTW using C# 3.5 for this.
A: Firstly, you can use an existing framework that addresses these issues, like CSLA.NET. The author of this framework has tackled these very issues. Go to http://www.rockfordlhotka.net/cslanet/ for this. Even if you don't use the full framework, the concepts are still applicable.
If you wanted to roll your own, what I've done in the past is, instead of using List for my collections, use a custom type derived from BindingList. Inheriting from BindingList allows you to override the behaviour of add/remove item. So you can, for example, have another internal collection of "deleted" items. Every time the overridden Remove method is called on your collection, put the item into the "deleted" collection, and then call the base implementation of the Remove method. You can do the same for added items or changed items.
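As a rough sketch of that approach (my own illustration, not code from CSLA or any other framework), a collection derived from BindingList<T> can capture removed items before letting the base class take them out of the visible list:

using System.Collections.Generic;
using System.ComponentModel;

// A binding list that remembers which items were removed, so the data layer
// can issue the corresponding deletes when the object graph is saved.
public class TrackedBindingList<T> : BindingList<T>
{
    private readonly List<T> _deleted = new List<T>();

    // Items the UI no longer shows but that still need a DELETE on save.
    public IEnumerable<T> DeletedItems
    {
        get { return _deleted; }
    }

    protected override void RemoveItem(int index)
    {
        // Capture the item before the base class removes it from the visible list.
        _deleted.Add(this[index]);
        base.RemoveItem(index);
    }

    // Call this once the unit of work has been persisted.
    public void AcceptChanges()
    {
        _deleted.Clear();
    }
}

An Order could then expose its items as a TrackedBindingList<OrderItem>: the grid binds to it as usual, foreach only ever sees the live items, and the persistence code reads DeletedItems to know which rows to delete.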
A: You're spot on about needing a unit of work, but don't write one. Use NHibernate or some other ORM. That is what they're made for. They have Unit of Works built in.
Business objects are indeed "the way to go" for most applications. You're diving into a deep area and there will be much learning to do. Look into DDD.
I'd also strongly advise against code like that in your code-behind. Look into the MVP pattern.
I'd also (while I was bothering to learn lots of new, highly critical things) look into SOLID.
You may want to check out JP Boodhoo's nothing but .net course as it covers a lot of these things.
A: The data objects don't track changes. The change tracking occurs on the DataContext and objects that you've retrieved through the DataContext. So in order to track changes you need to do the following:
public class FooDataContext : DataContext
{
public Table<Order> Orders;
}
public class Order
{
[DbColumn(Identity = true)]
[Column(DbType = "Int NOT NULL IDENTITY", IsPrimaryKey = true, IsDbGenerated = true)]
public int Id { get; set; }
[DbColumn(Default = "(getutcdate())")]
[Column(DbType = "DateTime", CanBeNull = false, IsDbGenerated = true)]
public DateTime DateCreated { get; set; }
[Column(DbType = "varchar(50)", CanBeNull = false, IsDbGenerated = false)]
public string Name { get; set; }
}
Now in your codebehind you can do something like:
public void UpdateOrder(int id, string name)
{
FooDataContext db = new FooDataContext();
Order order = db.Orders.Where(o=>o.Id == id).FirstOrDefault();
if (order == null) return;
order.Name = name;
db.SubmitChanges();
}
I wouldn't recommend directly using the data context in the code behind, but this is a good way to get started with Linq To SQL. I would recommend putting all your database interactions in an external project and call from the GUI to the classes that encapsulate this behavior.
I would recommend creating a Linq To Sql (dbml) file if you're new to Linq To Sql.
Right click on your project in solution explorer, and select Add New Item. Select Linq To SQL file, and it will then let you connect to your database and select the tables.
You can then look at the generated code, and get some great ideas on how Linq To Sql works and what you can do with it.
Use that as a guideline on working with Linq to SQL and that will take you far...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Using X-Sendfile with Apache/PHP I can't seem to find much documentation on X-Sendfile or example code for PHP (there is some rails code).
Anyone used it before and would mind giving a quick snippet of code and a brief description?
A: X-Sendfile is an HTTP header, so you want something like this:
header("X-Sendfile: $filename");
Your web server picks it up if correctly configured. Here's some more details:
http://www.jasny.net/articles/how-i-php-x-sendfile/
A: If tweaking the web server configuration is not an option, consider PHP's standard readfile() function. It won't be quite as fast as sendfiling, but it will be more widely compatible. Also note that when doing this, you should also send a Content-Type header at the very least.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: High Availability Storage I would like to make 2 TB or so available via NFS and CIFS. I am looking for a 2 (or more) server solution for high availability and the ability to load balance across the servers if possible. Any suggestions for clustering or high availability solutions?
This is business use, planning on growing to 5-10 TB over next few years. Our facility is almost 24 hours a day, six days a week. We could have 15-30 minutes of downtime, but we want to minimize data loss. I want to minimize 3 AM calls.
We are currently running one server with ZFS on Solaris and we are looking at AVS for the HA part, but we have had minor issues with Solaris (CIFS implementation doesn't work with Vista, etc) that have held us up.
We have started looking at
*
*DRDB over GFS (GFS for distributed
lock capability)
*Gluster (needs
client pieces, no native CIFS
support?)
*Windows DFS (doc says only
replicates after file closes?)
We are looking for a "black box" that serves up data.
We currently snapshot the data in ZFS and send the snapshot over the net to a remote datacenter for offsite backup.
Our original plan was to have a 2nd machine and rsync every 10 - 15 min. The issue on a failure would be that ongoing production processes would lose 15 minutes of data and be left "in the middle". They would almost be easier to start from the beginning than to figure out where to pickup in the middle. That is what drove us to look at HA solutions.
A: I've recently deployed hanfs using DRBD as the backend, in my situation, I'm running active/standby mode, but I've tested it successfully using OCFS2 in primary/primary mode too. There unfortunately isn't much documentation out there on how best to achieve this, most that exists is barely useful at best. If you do go along the drbd route, I highly recommend joining the drbd mailing list, and reading all of the documentation. Here's my ha/drbd setup and script I wrote to handle ha's failures:
DRBD8 is required - this is provided by drbd8-utils and drbd8-source. Once these are installed (I believe they're provided by backports), you can use module-assistant to install it - m-a a-i drbd8. Either depmod -a or reboot at this point, if you depmod -a, you'll need to modprobe drbd.
You'll require a backend partition to use for drbd, do not make this partition LVM, or you'll hit all sorts of problems. Do not put LVM on the drbd device or you'll hit all sorts of problems.
Hanfs1:
/etc/drbd.conf
global {
usage-count no;
}
common {
protocol C;
disk { on-io-error detach; }
}
resource export {
syncer {
rate 125M;
}
on hanfs2 {
address 172.20.1.218:7789;
device /dev/drbd1;
disk /dev/sda3;
meta-disk internal;
}
on hanfs1 {
address 172.20.1.219:7789;
device /dev/drbd1;
disk /dev/sda3;
meta-disk internal;
}
}
Hanfs2's /etc/drbd.conf:
global {
usage-count no;
}
common {
protocol C;
disk { on-io-error detach; }
}
resource export {
syncer {
rate 125M;
}
on hanfs2 {
address 172.20.1.218:7789;
device /dev/drbd1;
disk /dev/sda3;
meta-disk internal;
}
on hanfs1 {
address 172.20.1.219:7789;
device /dev/drbd1;
disk /dev/sda3;
meta-disk internal;
}
}
Once configured, we need to bring up drbd next.
drbdadm create-md export
drbdadm attach export
drbdadm connect export
We must now perform an initial synchronization of data - obviously, if this is a brand new drbd cluster, it doesn't matter which node you choose.
Once done, you'll need to mkfs.yourchoiceoffilesystem on your drbd device - the device in our config above is /dev/drbd1. http://www.drbd.org/users-guide/p-work.html is a useful document to read while working with drbd.
Heartbeat
Install heartbeat2. (Pretty simple, apt-get install heartbeat2).
/etc/ha.d/ha.cf on each machine should consist of:
hanfs1:
logfacility local0
keepalive 2
warntime 10
deadtime 30
initdead 120
ucast eth1 172.20.1.218
auto_failback no
node hanfs1
node hanfs2
hanfs2:
logfacility local0
keepalive 2
warntime 10
deadtime 30
initdead 120
ucast eth1 172.20.1.219
auto_failback no
node hanfs1
node hanfs2
/etc/ha.d/haresources should be the same on both ha boxes:
hanfs1 IPaddr::172.20.1.230/24/eth1
hanfs1 HeartBeatWrapper
I wrote a wrapper script to deal with the idiosyncracies caused by nfs and drbd in a failover scenario. This script should exist within /etc/ha.d/resources.d/ on each machine.
#!/bin/bash
# heartbeat fails hard.
# so this is a wrapper
# to get around that stupidity
# I'm just wrapping the heartbeat scripts, except for in the case of umount
# as they work, mostly
if [[ -e /tmp/heartbeatwrapper ]]; then
runningpid=$(cat /tmp/heartbeatwrapper)
if [[ -z $(ps --no-heading -p $runningpid) ]]; then
echo "PID found, but process seems dead. Continuing."
else
echo "PID found, process is alive, exiting."
exit 7
fi
fi
echo $$ > /tmp/heartbeatwrapper
if [[ x$1 == "xstop" ]]; then
/etc/init.d/nfs-kernel-server stop #>/dev/null 2>&1
# NFS init script isn't LSB compatible, exit codes are 0 no matter what happens.
# Thanks guys, you really make my day with this bullshit.
# Because of the above, we just have to hope that nfs actually catches the signal
# to exit, and manages to shut down its connections.
# If it doesn't, we'll kill it later, then term any other nfs stuff afterwards.
# I found this to be an interesting insight into just how badly NFS is written.
sleep 1
#we don't want to shutdown nfs first!
#The lock files might go away, which would be bad.
#The above seems to not matter much, the only thing I've determined
#is that if you have anything mounted synchronously, it's going to break
#no matter what I do. Basically, sync == screwed; in NFSv3 terms.
#End result of failing over while a client that's synchronous is that
#the client hangs waiting for its nfs server to come back - thing doesn't
#even bother to time out, or attempt a reconnect.
#async works as expected - it insta-reconnects as soon as a connection seems
#to be unstable, and continues to write data. In all tests, md5sums have
#remained the same with/without failover during transfer.
#So, we first unmount /export - this prevents drbd from having a shit-fit
#when we attempt to turn this node secondary.
#That's a lie too, to some degree. LVM is entirely to blame for why DRBD
#was refusing to unmount. Don't get me wrong, having /export mounted doesn't
#help either, but still.
#fix a usecase where one or other are unmounted already, which causes us to terminate early.
if [[ "$(grep -o /varlibnfs/rpc_pipefs /etc/mtab)" ]]; then
for ((test=1; test <= 10; test++)); do
umount /export/varlibnfs/rpc_pipefs >/dev/null 2>&1
if [[ -z $(grep -o /varlibnfs/rpc_pipefs /etc/mtab) ]]; then
break
fi
if [[ $? -ne 0 ]]; then
#try again, harder this time
umount -l /var/lib/nfs/rpc_pipefs >/dev/null 2>&1
if [[ -z $(grep -o /var/lib/nfs/rpc_pipefs /etc/mtab) ]]; then
break
fi
fi
done
if [[ $test -eq 10 ]]; then
rm -f /tmp/heartbeatwrapper
echo "Problem unmounting rpc_pipefs"
exit 1
fi
fi
if [[ "$(grep -o /dev/drbd1 /etc/mtab)" ]]; then
for ((test=1; test <= 10; test++)); do
umount /export >/dev/null 2>&1
if [[ -z $(grep -o /dev/drbd1 /etc/mtab) ]]; then
break
fi
if [[ $? -ne 0 ]]; then
#try again, harder this time
umount -l /export >/dev/null 2>&1
if [[ -z $(grep -o /dev/drbd1 /etc/mtab) ]]; then
break
fi
fi
done
if [[ $test -eq 10 ]]; then
rm -f /tmp/heartbeatwrapper
echo "Problem unmount /export"
exit 1
fi
fi
#now, it's important that we shut down nfs. it can't write to /export anymore, so that's fine.
#if we leave it running at this point, then drbd will screwup when trying to go to secondary.
#See contradictory comment above for why this doesn't matter anymore. These comments are left in
#entirely to remind me of the pain this caused me to resolve. A bit like why churches have Jesus
#nailed onto a cross instead of chilling in a hammock.
pidof nfsd | xargs kill -9 >/dev/null 2>&1
sleep 1
if [[ -n $(ps aux | grep nfs | grep -v grep) ]]; then
echo "nfs still running, trying to kill again"
pidof nfsd | xargs kill -9 >/dev/null 2>&1
fi
sleep 1
/etc/init.d/nfs-kernel-server stop #>/dev/null 2>&1
sleep 1
#next we need to tear down drbd - easy with the heartbeat scripts
#it takes input as resourcename start|stop|status
#First, we'll check to see if it's stopped
/etc/ha.d/resource.d/drbddisk export status >/dev/null 2>&1
if [[ $? -eq 2 ]]; then
echo "resource is already stopped for some reason..."
else
for ((i=1; i <= 10; i++)); do
/etc/ha.d/resource.d/drbddisk export stop >/dev/null 2>&1
if [[ $(egrep -o "st:[A-Za-z/]*" /proc/drbd | cut -d: -f2) == "Secondary/Secondary" ]] || [[ $(egrep -o "st:[A-Za-z/]*" /proc/drbd | cut -d: -f2) == "Secondary/Unknown" ]]; then
echo "Successfully stopped DRBD"
break
else
echo "Failed to stop drbd for some reason"
cat /proc/drbd
if [[ $i -eq 10 ]]; then
exit 50
fi
fi
done
fi
rm -f /tmp/heartbeatwrapper
exit 0
elif [[ x$1 == "xstart" ]]; then
#start up drbd first
/etc/ha.d/resource.d/drbddisk export start >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "Something seems to have broken. Let's check possibilities..."
testvar=$(egrep -o "st:[A-Za-z/]*" /proc/drbd | cut -d: -f2)
if [[ $testvar == "Primary/Unknown" ]] || [[ $testvar == "Primary/Secondary" ]]
then
echo "All is fine, we are already the Primary for some reason"
elif [[ $testvar == "Secondary/Unknown" ]] || [[ $testvar == "Secondary/Secondary" ]]
then
echo "Trying to assume Primary again"
/etc/ha.d/resource.d/drbddisk export start >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "I give up, something's seriously broken here, and I can't help you to fix it."
rm -f /tmp/heartbeatwrapper
exit 127
fi
fi
fi
sleep 1
#now we remount our partitions
for ((test=1; test <= 10; test++)); do
mount /dev/drbd1 /export >/tmp/mountoutput
if [[ -n $(grep -o export /etc/mtab) ]]; then
break
fi
done
if [[ $test -eq 10 ]]; then
rm -f /tmp/heartbeatwrapper
exit 125
fi
#I'm really unsure at this point of the side-effects of not having rpc_pipefs mounted.
#The issue here, is that it cannot be mounted without nfs running, and we don't really want to start
#nfs up at this point, lest it ruin everything.
#For now, I'm leaving mine unmounted, it doesn't seem to cause any problems.
#Now we start up nfs.
/etc/init.d/nfs-kernel-server start >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "There's not really that much that I can do to debug nfs issues."
echo "probably your configuration is broken. I'm terminating here."
rm -f /tmp/heartbeatwrapper
exit 129
fi
#And that's it, done.
rm -f /tmp/heartbeatwrapper
exit 0
elif [[ "x$1" == "xstatus" ]]; then
#Lets check to make sure nothing is broken.
#DRBD first
/etc/ha.d/resource.d/drbddisk export status >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "stopped"
rm -f /tmp/heartbeatwrapper
exit 3
fi
#mounted?
grep -q drbd /etc/mtab >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "stopped"
rm -f /tmp/heartbeatwrapper
exit 3
fi
#nfs running?
/etc/init.d/nfs-kernel-server status >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "stopped"
rm -f /tmp/heartbeatwrapper
exit 3
fi
echo "running"
rm -f /tmp/heartbeatwrapper
exit 0
fi
With all of the above done, you'll then just want to configure /etc/exports
/export 172.20.1.0/255.255.255.0(rw,sync,fsid=1,no_root_squash)
Then it's just a case of starting up heartbeat on both machines and issuing hb_takeover on one of them. You can test that it's working by making sure the one you issued the takeover on is primary - check /proc/drbd, that the device is mounted correctly, and that you can access nfs.
--
Best of luck man. Setting it up from the ground up was, for me, an extremely painful experience.
A: These days 2TB fits in one machine, so you've got options, from simple to complex. These all presume linux servers:
*
*You can get poor-man's HA by setting up two machines and doing a periodic rsync from the main one to the backup.
*You can use DRBD to mirror one from the other at the block level. This has the disadvantage of being somewhat difficult to expand in the future.
*You can use OCFS2 to cluster the disks instead, for future expandability.
There are also plenty of commercial solutions, but 2TB is a bit small for most of them these days.
You haven't mentioned your application yet, but if hot failover isn't necessary, and all you really want is something that will stand up to losing a disk or two, find a NAS that support RAID-5, at least 4 drives, and hotswap and you should be good to go.
A: I would recommend NAS Storage. (Network Attached Storage).
HP has some nice ones you can choose from.
http://h18006.www1.hp.com/storage/aiostorage.html
as well as Clustered versions:
http://h18006.www1.hp.com/storage/software/clusteredfs/index.html?jumpid=reg_R1002_USEN
A: Are you looking for an "enterprise" solution or a "home" solution? It is hard to tell from your question, because 2TB is very small for an enterprise and a little on the high end for a home user (especially two servers). Could you clarify the need so we can discuss tradeoffs?
A: There's two ways to go at this. The first is to just go buy a SAN or a NAS from Dell or HP and throw money at the problem. Modern storage hardware just makes all of this easy to do, saving your expertise for more core problems.
If you want to roll your own, take a look at using Linux with DRBD.
http://www.drbd.org/
DRBD allows you to create networked block devices. Think RAID 1 across two servers instead of just two disks. DRBD deployments are usually done using Heartbeat for failover in case one system dies.
I'm not sure about load balancing, but you might investigate and see if LVS can be used to load balance across your DRBD hosts:
http://www.linuxvirtualserver.org/
To conclude, let me just reiterate that you're probably going to save yourself a lot of time in the long run just forking out the money for a NAS.
A: I assume from the body of your question that you're a business user. I purchased a 6TB RAID 5 unit from Silicon Mechanics, have it attached as a NAS, and my engineer installed NFS on our servers. Backups are performed via rsync to another large-capacity NAS.
A: Have a look at Amazon Simple Storage Service (Amazon S3)
http://www.amazon.com/S3-AWS-home-page-Money/b/ref=sc_fe_l_2?ie=UTF8&node=16427261&no=3435361&me=A36L942TSJ2AJA
--
This may be of interest re. High Availability
Dear AWS Customer:
Many of you have asked us to let you know ahead of time about features and services that are currently under development so that you can better plan for how that functionality might integrate with your applications. To that end, we are excited to share some early details with you about a new offering we have under development here at AWS -- a content delivery service.
This new service will provide you a high performance method of distributing content to end users, giving your customers low latency and high data transfer rates when they access your objects. The initial release will help developers and businesses who need to deliver popular, publicly readable content over HTTP connections. Our goal is to create a content delivery service that:
Lets developers and businesses get started easily - there are no minimum fees and no commitments. You will only pay for what you actually use.
Is simple and easy to use - a single, simple API call is all that is needed to get started delivering your content.
Works seamlessly with Amazon S3 - this gives you durable storage for the original, definitive versions of your files while making the content delivery service easier to use.
Has a global presence - we use a global network of edge locations on three continents to deliver your content from the most appropriate location.
You'll start by storing the original version of your objects in Amazon S3, making sure they are publicly readable. Then, you'll make a simple API call to register your bucket with the new content delivery service. This API call will return a new domain name for you to include in your web pages or application. When clients request an object using this domain name, they will be automatically routed to the nearest edge location for high performance delivery of your content. It's that simple.
We're currently working with a small group of private beta customers, and expect to have this service widely available before the end of the year. If you'd like to be notified when we launch, please let us know by clicking here.
Sincerely,
The Amazon Web Services Team
A: Your best bet may be to work with experts who do this sort of thing for a living. These guys are actually in our office complex... I've had a chance to work with them on a similar project I led.
http://www.deltasquare.com/About
A: May I suggest you visit the F5 site and check out http://www.f5.com/solutions/virtualization/file/
A: You can look at Mirror File System. It does the file replication on file system level.
The same file on both primary and backup systems are live file.
http://www.linux-ha.org/RelatedTechnologies/Filesystems
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to insert a text-like element into document using javascript and CSS? I want to use javascript to insert some elements into the current page.
Such as this is the original document:
<p>Hello world!</p>
Now I want to insert an element in to the text so that it will become:
<p>Hello <span id=span1>new</span> world!</p>
I need the span tag because I want to handle it later.Show or hide.
But now problem comes out, if the original page has already defined a strange CSS style on all <span> tags, the "new" I just inserted will not appear to be the same as "Hello" and "world". How can I avoid this? I want the "new" be exactly the same as the "Hello" and "world".
A: Well, I don't know how married you are to using a <span> tag, but why not do this?
<p style="display: inline">Hello <p id="myIdValue" style="display: inline">new</p> World</p>
That way the inserted html retains the same styling as the outer, and you can still have a handle to it, etc. Granted, you will have to add the inline CSS style, but it would work.
A: The only way to do this is to either modify the other spans to include a class name and only apply the styles to spans with that class, or override the styles set for all spans for your new span.
So if you've done:
span {
display: block;
margin: 10px;
padding: 10px;
}
You could override with:
<span style="display: inline; margin: 0; padding: 0;">New Span</span>
A: Simply override any span styles. Set layout properties back to browser defaults and set formatting to inherit from the parent:
span#yourSpan {
/* defaults */
position: static;
display: inline;
margin: 0;
padding: 0;
background: transparent;
border: none;
/* inherit from parent node */
font: inherit;
color: inherit;
text-decoration: inherit;
line-height: inherit;
letter-spacing: inherit;
text-transform: inherit;
white-space: inherit;
word-spacing: inherit;
}
This should be sufficient, although you may need to add !important if you are not using an id:
<span class="hello-node">hello</span>
span.hello-node {
/* defaults */
position: static !important;
display: inline !important;
...
}
A: Include the class definition that's defined in CSS on your JavaScript version of the <span> tag as well.
<span class="class_defined_in_css">
(where this <span> tag would be part of your JavaScript code.)
A: Why not give the paragraph an id and then use Javascript to add the word, or remove it, if necessary? Surely it will retain the same formatting as the paragraph when you insert the word "new", or change the contents of the paragraph entirely.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Btrieve without Pervasive? Is there any library available to query Btrieve databases without buying something from Pervasive? I'm looking to code in C# or Python.
A: As far as I know that is not possible. It is not an open source database, so writing drivers for it is really hard.
A: If you download one of the trial versions, you can get/install the odbc client and connect that way.
In our version of pervasive (older version) on the server where the database is installed, you can also find this client install.
A: This depends a lot on the version of Btrieve. I've been working with Btrieve for a long time and have found that the best API for the old 6.15 version was in Pascal. That having been said, there was definitely a C API around as well.
Pervasive have recently released a 6.15 ultimate patch. Using this and the C API should allow you to work effectively with older Btrieve databases. It is possible, for instance, to build new modules for Python using C.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Do you use application frameworks? Application frameworks such as DotNetNuke, Eclipse, Websphere and so forth are available today which offer customizable frameworks that can be used as dashboard applications. Do you use these or do you and your peers keep writing amazing, modular, maintainable dashboard frameworks which you support yourselves?
Are there any good web based, OS independent frameworks out there that you suggest using to build your own enterprise class infrastructure around?
A: The one I use is Oracle Application Development Framework. It's a complete, fully supported framework, and Oracle use it themselves to build their own enterprise applications. It comes with a lot of JSF components that are very easy to bind to the underlying data objects.
I'd recommend this for all Java applications that need database data.
You find a discussion of it on the Oracle Wiki:
http://wiki.oracle.com/page/ADF+Methodology+-+Work+in+Progressent
A: There's no one right answer. Look at the business need... if you're doing fairly typical things, then starting from an established framework is a good place to start. If you feel you may need some custom components or widgets, look for a framework that's extensible using the knowledge and skills that you have in-house.
Unless your line of business is to build application frameworks or dashboards, one should look very hard before building a whole new framework or dashboard.
A: At work, we try to create from scratch as little as possible. We use Frameworks a lot (maybe not always end to end frameworks). We have used Dot Net Nuke a lot. Another framework we use a lot is CSLA.
A: I personally use DotNetNuke quite extensively for both personal and business related ventures. However DNN does not meet one of your requirements as it is a .NET solution so it is windows dependent.
I have found that using DotNetNuke has greatly reduced our time to delivery, and we can focus on our core needs rather than the implementation of the common pieces.
A: Be careful to consider how scalable the framework is. There are several frameworks out there that like to hammer your database because they think it's nothing but a glorified file system... those frameworks don't scale well at all.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Open .NET form in designer mode - get "The path is not of a legal form" I attempted to open a C#/VB form in designer mode, and instead of the form, I got an ugly error message saying "The path is not of a legal form". This form used to work! What happened?
Thanks to all who have answered. This question is a problem I hit a while back, and I struggled with it for a long time, until I found the answer, which I posted below in order to help other people who might have hit this problem.
Thanks!
A: I don't know what this error message means, but it seems to be associated with third-party controls on the form. Anyway, the solution is almost as absurd as the problem:
*
*Close the designer/error message.
*Open the form code.
*Right-click on the form code and select "View Designer".
Presto! The designer opens!
A: Debugging design mode would help. From here:
*
*List item
*In visual studio, select the project you want to debug.
*Right click -> Properties.
*Select the debugging tab.
*Change the debug mode to Program.
*Set the “Start Application” to be your visual studio IDE (C:\Program Files\Microsoft Visual Studio .NET 2003\Common7\IDE\devenv.exe)
*Set your solution file in the “command line argument field”.
*Apply -> OK
*Select the project you want to debug as the startup project.
*Run.
*Set a break point in the place you want to start debug (for example, your control constructor)
A: This problem happened with me, and I found out it is because of bad reference. You have to review the assemblies that your application references.
A: By path, it might be referring to a path to a file or folder. There could be a malformed path that you are trying to reference, i.e. forward slash instead of backslash. Also, what changed since the error came up? Did you move any files around? Did you save any previously unsaved code? Update from a version control system?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Does Test Driven Development take the focus from Design? I have mixed feelings about TDD. While I believe in testing, I have issues with the idea of the tests driving my development effort.
When you code to satisfy some tests written for an interface for requirements you have right now, you might shift your focus from building maintainable code, from clean design and from sound architecture.
I have a problem with the driven part, not with the testing. Any thoughts?
A: No.
If done right, Test Driven Development IS your design tool.
I hope you forgive me for linking to my own blog entry, wherein I discuss the pitfalls of Test Driven Development that went wrong simply because developers treated their tests as, merely, tests.
In a previous project, devs used a highly damaging singleton pattern that enforced dependencies throughout the project, which just broke the whole thing when requirements were changed:
TDD was treated as a task, when it
should have been treated as an an
approach. [...]
There was a failure to recognize
that TDD is not about tests, it’s
about design. The rampant case of
singleton abuse in the unit tests made
this obvious: instead of the test
writers thinking “WTF are these
singleton = value; statements doing in
my tests?”, the test writers just
propagated the singleton into the
tests. 330 times.
The unfortunate consequence is that
the build server-enforced testing was
made to pass, whatever it took.
Test Driven Development, done right, should make developers highly aware of design pitfalls like tight coupling, violations of DRY (don't repeat yourself), violations of SRP (Single Responsibility Principle), etc.
If you write passing code for your tests for the sake of passing your tests, you have already failed: you should treat hard to write tests as signposts that make you ask: why is this done this way? Why can't I test this code without depending on some other code? Why can't I reuse this code? Why is this code breaking when used by itself?
Besides if your design is truly clean, and your code truly maintainable why is it not trivial to write a test for it?
A: I completely agree with pjz. There is no one right way to design software. If you take TDD to an extreme, without any forethought except the next unit test, you may make things harder on yourself. Ditto for the person who sets out on a grand software project by spending months on diagrams and documentation, but no code.
Moderate. If feel the urge to draw up a quick diagram that helps you visualize the structure of your code, go for it. If you need two pages, it might be time to start writing some code. And if you want to do that before you write your tests, so what. The goal is working, quality software, not absolute conformity to any particular software development doctrine. Do what works for you and your team. Find areas where improvements can be made. Iterate.
A: There are many informal opinions here, including the popular opinion (from Jon Limjap) that bad results come from doing it wrong, and claims that seem unsupported by little more than personal experience. The preponderance empirical evidence and published results point in an opposite direction from that experience.
The theory is that a method that requires you to write tests before the code will lead to thinking about design at the level of individual code fragments — i.e., programming-in-the-small. Since procedures are all you can test (you still test an object one method at a time, and you simply can't test classes in most languages), your design focus goes to the individual methods and how they compose. That leads, in theory, to a bottom-up procedural design and, in turn, to bad coupling and cohesion among objects.
The broad empirical data substantiate the theory. Siniaalto and Abrahamsson, (Comparative Case Study on the Effect of Test-Driven Development on Program Design and Test Coverage), ESEM 2007, found that "Our results indicate that the cohesion may be worse (even though Beck claims that TDD produces highly cohesive systems). In our second study we noticed that the complexity measures were better with TDD, but the dependency management metrics were clearly worse." Janzen and Saledian (Does Test-Driven Development Really Improve Software Design Quality? IEEE Software 25(2), March/April 2008, pp. 77 - 84) found that “[T]he results didn't support claims for lower coupling and increased cohesion with TDD”.
A literature review will uncover other publications furthering these cases.
Even my dear friend Uncle Bob writes: "One of the more insidious and persistent myths of agile development is that up-front architecture and design are bad; that you should never spend time up front making architectural decisions. That instead you should evolve your architecture and design from nothing, one test-case at a time. Pardon me, but that’s Horse Shit." ("The Scatology of Agile Architecture,"
http://blog.objectmentor.com/articles/2009/04/25/the-scatology-of-agile-architecture)
However, it's worth noting that the broader failure is that people think it's a testing technique rather than a design technique. Osherov points out a host of approaches that are often casually equated with TDD. I can't be sure what's meant by the posters here. See: http://weblogs.asp.net/rosherove/archive/2007/10/08/the-various-meanings-of-tdd.aspx.
A: I completely agree with you on that subject. In practice I think TDD often has some very negative effects on the code base (crappy design, procedural code, no encapsulation, production code littered with test code, interfaces everywhere, hard to refactor production code because everything is tightly coupled to many tests etc.).
Jim Coplien has given talks on exactly this topic for a while now:
Recent studies (Siniaalto and
Abrahamsson) of TDD show that it may
have no benefits over traditional
test-last development and that in some
cases has deteriorated the code and
that it has other alarming (their
word) effects. The one that worries me
the most is that it deteriorates the
architecture.
--Jim's blog
There is also a discussion over on InfoQ between Robert C. Martin and James Coplien where they touch on this subject.
A: My way to think about it is, write what you want your code to look like first.
Once you have a sample of your target code (that right now does nothing) see if you can place a test scaffolding onto it.
If you can't do that, figure out why you can't.
Most of the time it's because you made a poor design decision (99%), however if that's not the case (1%) try the following:
*
*determine what the crazy requirements are that you need to abide by that won't let you test your code. Once you understand the issues, redesign your API.
*if someone else decided on these requirements, discuss it with him/her. They probably had a good reason for the requirement, and once you know their reason you'll be able to perfect your design and make it testable. If not, you can both rework the requirement now and you'll both be the better for it.
After you have your target code and the test scaffolding, implement the code. Now you even have the advantage of knowing how well you're progressing as you pass your own tests (it's a great motivator!)
The only case where testing may be superfluous, from personal experience, is when you are making an early prototype because at that point you still don't understand the problem well enough to design or test your code accurately.
A: There are three steps to complete software:
*
*Make it work
*Make it right
*Make it fast
Tests get you #1. Your code is not done just because the tests have passed. Preferably you have some concept of project structure (utilities, commonly accessed objects, layers, framework) before you start writing your tests/code. After you've written your code to make the tests pass, you need to re-evaluate it to see which parts can be refactored out to the different aspects of your application. You can do this confidently, because you know that as long as your tests are still passing, your code is still functional (or at least meeting the requirements).
At the start of a project, give thought to the structure. As the project goes on continue to evaluate and re-evaluate your code to keep the design in place or change the design if it stops making sense. All of these items must be taken into account when you estimate, or you will end up with spagetti code, TDD or not.
A: There's always a risk of overdoing either the TDD design or the upfront design. So the answer is that it depends. I prefer starting with a user story/acceptance test which is the base of the requirement that my tests will aid in producing. Only after I've established that, I start writing detailed unit tests TDD-style. If the only design and thinking you do is through TDD, then you risk too much of a bottom up approach, which might give you units and classes that are excellent in isolation, but when you try to integrate them into the user story fulfilling task you might be surprised by having done it all wrong. For more inspiration on this, look att BDD.
A great "debate" about this has been recorded between Robert C. Martin and James Coplien, where the former is a TDD advocate and the latter has stated that it ruins the design of a system. This is what Robert said about TDD and design:
"There has been a feeling in the Agile
community since about '99 that
architecture is irrelevant, we don't
need to do architecture, all we need
to do is write a lots of tests and do
lots of stories and do quick
iterations and the code will assemble
itself magically, and this has always
been horse shit. I even think most of
the original Agile proponents would
agree that was a silliness."
James Coplien states that merely driving your design from TDD has a great risk:
"One of the things we see a lot, in a
lot of projects, is that projects go
south on about their 3rd sprint and
they crash and burn because they
cannot go any further, because they
have cornered themselves
architecturally. And you can't
refactor your way out of this because
the refactoring has to be across class
categories, across class hierarchies,
and you no longer can have any
assurances about having the same
functionality."
Also he gives a great example of how a bank account probably would look if you test drove it as compared to using your upfront knowledge to drive the architecture:
"I remember when I was talking with
Kent once, about in the early days
when he was proposing TDD, and this
was in the sense of YAGNI and doing
the simplest thing that could possibly
work, and he says: 'Ok. Let's make a
bank account, a savings account.'
What's a savings account? It's a
number and you can add to the number
and you can subtract from the number.
So what a saving account is, is a
calculator. Let's make a calculator,
and we can show that you can add to
the balance and subtract from the
balance. That's the simplest thing
that could possibly work, everything
else is an evolution of that.
If you do a real banking system, a
savings account is not even an object
and you are not going to refactor your
way to the right architecture from
that one. What a savings account is,
is a process that does an iteration
over an audit trail of database
transactions, of deposits and interest
gatherings and other shifts of the
money. It's not like the savings
account is some money sitting on the
shelf on a bank somewhere, even though
that is the user perspective, and
you've just got to know that there are
these relatively intricate structures
in the foundations of a banking system
to support the tax people and the
actuaries and all these other folks,
that you can't get to in an
incremental way. Well, you can,
because of course the banking industry
has come to this after 40 years. You
want to give yourself 40 years? It's
not agile."
The interesting thing here is that both the TDD proponent and the TDD antagonist are saying that you need design up front.
If you have the time, watch the video. It's a great discussion between two highly influential experts, and it's only 22 minutes long.
A: It's always a balance:
- too much TDD and you end up with code that works, but is a pain to work on.
- too much 'maintable code, clean design, and sound architecture' and you end up with Architecture Astronauts that have talked themselves into coding paralysis
Moderation in all things.
A: I'm relatively new to TDD and unit testing, but in the two side projects I've used it on, I've found it to be a design aid rather than an alternative to design. The ability to test and verify components/sub-components independently has made it easier for me to make rapid changes and try out new design ideas.
The difference I've experienced with TDD is reliability. The benefit of working out component interfacing at the smaller component levels at the beginning of the design process, rather than later, is that I've got components I can trust will work earlier, so I can stop worrying about the little pieces and instead get to work on the tough problems.
And when I inevitably need to come back and maintain the little pieces, I can spend less time doing so, so I can get back to the work I want to be doing.
A: For the most part I agree that TDD does provide a sort of design tool. The most important part of that to me is the way that it builds in the ability to make more changes (you know, when you have that flash of insight moment where you can add functionality by deleting code) with greatly reduced risk.
That said, some of the more algorithmic work I've contracted on lately has suffered a bit under TDD without a careful balance of design thought. The statement above about safer refactoring was still a great benefit, but for some algorithms TDD is (although still useful) not sufficient to get you to an ideal solution. Take sorting as a simple example. TDD could easily lead you to a suboptimal (N^2) algorithm (and scads of passing tests that allow you to refactor to a quick sort) like a bubble sort. TDD is a tool, a very good tool, but like many things needs to be used appropriately for the context of the problem being solved.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Implementations of interface through Reflection How can I get all implementations of an interface through reflection in C#?
A: The answer is this; it searches through the entire application domain -- that is, every assembly currently loaded by your application.
/// <summary>
/// Returns all types in the current AppDomain implementing the interface or inheriting the type.
/// </summary>
public static IEnumerable<Type> TypesImplementingInterface(Type desiredType)
{
return AppDomain
.CurrentDomain
.GetAssemblies()
.SelectMany(assembly => assembly.GetTypes())
.Where(type => desiredType.IsAssignableFrom(type));
}
It is used like this;
var disposableTypes = TypesImplementingInterface(typeof(IDisposable));
You may also want this function to find actual concrete types -- i.e., filtering out abstracts, interfaces, and generic type definitions.
public static bool IsRealClass(Type testType)
{
return testType.IsAbstract == false
&& testType.IsGenericTypeDefinition == false
&& testType.IsInterface == false;
}
A: Have a look at the Assembly.GetTypes() method. It returns all the types that can be found in an assembly. All you have to do is iterate through every returned type and check whether it implements the necessary interface.
One way to do so is by using the Type.IsAssignableFrom method.
Here is the example. myInterface is the interface, implementations of which you are searching for.
Assembly myAssembly;
Type myInterface;
foreach (Type type in myAssembly.GetTypes())
{
if (myInterface.IsAssignableFrom(type))
Console.WriteLine(type.FullName);
}
I do believe that it is not a very efficient way to solve your problem, but at least, it is a good place to start.
A: Assembly assembly = Assembly.GetExecutingAssembly();
Type[] types = assembly.GetTypes();
List<Type> childTypes = new List<Type>();
foreach (Type type in types) {
    foreach (Type interfaceType in type.GetInterfaces()) {
        if (interfaceType == typeof(YourInterfaceType)) {
            childTypes.Add(type);
            break;
        }
    }
}
Maybe something like that....
A: Here are some Type extension methods that may be useful for this, as suggested by Simon Farrow. This code is just a restructuring of the accepted answer.
Code
/// <summary>
/// Returns all types in <paramref name="assembliesToSearch"/> that directly or indirectly implement or inherit from the given type.
/// </summary>
public static IEnumerable<Type> GetImplementors(this Type abstractType, params Assembly[] assembliesToSearch)
{
var typesInAssemblies = assembliesToSearch.SelectMany(assembly => assembly.GetTypes());
return typesInAssemblies.Where(abstractType.IsAssignableFrom);
}
/// <summary>
/// Returns the results of <see cref="GetImplementors"/> that match <see cref="IsInstantiable"/>.
/// </summary>
public static IEnumerable<Type> GetInstantiableImplementors(this Type abstractType, params Assembly[] assembliesToSearch)
{
var implementors = abstractType.GetImplementors(assembliesToSearch);
return implementors.Where(IsInstantiable);
}
/// <summary>
/// Determines whether <paramref name="type"/> is a concrete, non-open-generic type.
/// </summary>
public static bool IsInstantiable(this Type type)
{
return !(type.IsAbstract || type.IsGenericTypeDefinition || type.IsInterface);
}
Examples
To get the instantiable implementors in the calling assembly:
var callingAssembly = Assembly.GetCallingAssembly();
var httpModules = typeof(IHttpModule).GetInstantiableImplementors(callingAssembly);
To get the implementors in the current AppDomain:
var appDomainAssemblies = AppDomain.CurrentDomain.GetAssemblies();
var httpModules = typeof(IHttpModule).GetImplementors(appDomainAssemblies);
A: Do you mean all interfaces a Type implements?
Like this:
ObjX foo = new ObjX();
Type tFoo = foo.GetType();
Type[] tFooInterfaces = tFoo.GetInterfaces();
foreach(Type tInterface in tFooInterfaces)
{
// do something with it
}
Hope that helps.
A: You have to loop over all the assemblies that you are interested in. From each assembly you can get all the types it defines. Note that when you call AppDomain.CurrentDomain.GetAssemblies() you only get the assemblies that are currently loaded. Assemblies are not loaded until they are needed, so that means you have to explicitly load the assemblies before you start searching.
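A minimal sketch of what that explicit loading might look like (my own example, not from the original answer; it assumes the assemblies you care about sit as DLLs in the application's base directory):
// Force-load every DLL next to the executable so that
// AppDomain.CurrentDomain.GetAssemblies() actually sees them
// before you start scanning for implementors.
// (Uses System.IO.Directory and System.Reflection.Assembly.)
foreach (string path in Directory.GetFiles(AppDomain.CurrentDomain.BaseDirectory, "*.dll"))
{
    Assembly.LoadFrom(path);
}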
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: How to get someone started with ALT.NET What is the order of topics to explain to a .NET developer or user group to get them started and interested with alt.net tools and practices.
*
*ORM
*IoC
*TDD
*DDD
*DSL
*CI
*MVC - MVP
*Version Control (I think this is the one they get the fastest)
*Agile
*Etc, etc...
A: ALT.NET is more of an attitude than a set of tools and practices.
I don't know that you can "get someone started with ALT.NET," per se.
To me it is an attitude born of experience, not something you can put on like a coat. But that is my opinion, subject to change.
A: The essential principles to drive home are:
*
*Microsoft tools are a good place to start, but it's possible to write better software faster by using other companion products
*Change is good, so always think about ways that code can be changed and verified quickly
*If it isn't tested, it's not production quality
Then, after version control (!), I'd start with continuous integration, and show how getting immediate feedback on the quality of a build can help improve quality from the first moment. Doing CI first doesn't change the codebase.
Then I'd introduce automated end-to-end testing of the application with FitNesse, Watin or somesuch. This should then illustrate how refactoring code isn't something to be afraid of if you have good testing tools that will verify that the code still works.
Then I'd do gentle refactoring to break out business logic and domain objects from the UI (if they're not there already) and introduce unit testing. This further shows how refactoring is a good thing.
As we aim to get some sort of separation of concerns, design patterns (such as IoC) will naturally start to become apparent. It's also going to be obvious that we can replace the data layer with ORM.
As we refactor, I'd also show how test-driven development can actually speed creation of better code. This is probably easiest shown for the first time with new development, as otherwise it's quite a culture shock!
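As a concrete illustration of the "introduce unit testing" step above, here is a minimal NUnit-style test (my own sketch -- the OrderCalculator class is made up and stands in for whatever business logic you have just pulled out of the UI):
using NUnit.Framework;

[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void Total_AddsLineAmounts()
    {
        // OrderCalculator is a hypothetical domain class extracted from the UI layer.
        var calculator = new OrderCalculator();
        decimal total = calculator.Total(new[] { 10m, 2.5m });
        Assert.AreEqual(12.5m, total);
    }
}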
A: I think it depends on the individual or group. Almost all shops have some exposure to one of these concepts. From there, I would introduce new concepts only as fast as you think the developer or team can absorb them. It's quite depressing to see teams start rejecting some of the important principles and concepts because they are over-loaded. And try not to assume someone understands the principles behind using CI, IoC, or mocking frameworks.
A: I don't mean making them ALT.NETters just by letting them know that the stuff is out there, but introducing it in a way that they can understand and feel that it can help them.
A: I think a lot of people don't know about Generics, delegates, Linq and Lambda expressions.
If you tell them about everything at roughly the same time, they will just drop everything.
Just as you wouldn't teach a beginner programmer what a DSL is, but you can let them know about SVN.
A: The Alt.NET Podcast may be a good place to get some ideas. They have podcasts on continuous improvement, agile, DI/IoC, ORM, OOP w/ Ruby, etc. (in that order).
A: For me it was a colleague who championed IoC/DI and TDD. He also got me going to .net user groups so I could see that he wasn't just a one-off crazy guy who loved using new and strange technologies for the sake of using them.
A: I would build a Web application using Nancy with C# (or Boo, Iron*, other language) using SharpDevelop (there is a book on this) or Rider (JetBrains' C# IDE). I view Alt.NET as non-Microsoft .net development, specifically focused on open source, and sometimes out-of-the-box thinking. There is a conference .NET Fringe in Portland Oregon every year now, that caters to this attitude toward development.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I refresh a training database with the data from the production database? I have a particular system on our network where we need to maintain a training installation. The system uses SQL Server 2000 as its database engine and I need to set up a system for refreshing the data in the training database with the data from the production database on a regular basis.
I want to use SSIS as we have SQL 2005 servers I can run the process from. I have a fair bit of SQL experience, but not much with SSIS. I have been trying to do this with the "Transfer Database Task" but haven't been having much luck, as it always throws an error.
If we ignore the use of configuration items etc and pretend all the database names and so forth are hard-coded, I have the following:
A Single SSIS "Transfer Database Task" with the following properties:
*
*Destination Overwrite: True
*Action: Copy
*Method: DatabaseOnline
The error I receive is:
Error: The Execute method on the task returned error code 0x80131500 (ERROR : errorCode=-1073548784 description=Executing the query "EXEC dbo.sp_addrole @rolename = N'XXXXX' " failed with the following error: "The role 'XXXXX' already exists in the current database.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. helpFile= helpContext=0 idofInterfaceWithError={8BDFE893-E9D8-4D23-9739-DA807BCDC2AC}). The Execute method must succeed, and indicate the result using an "out" parameter.
I'm sure there is something obvious going on here, but surely if the task is set to overwrite, the pre-existence of the role shouldn't matter? Does anyone know what I need to do to get this working?
A: Apparently this should be fixed in SQL Server 2005 SP2; see here. Looks like you need to make sure to patch the client machine too if you are running the SSIS package from within Visual Studio.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Using Google Maps in ColdFusion I am trying to use the Google Maps API in a ColdFusion template that is a border type cflayoutarea container. However, the map simply doesn't show up:
<cfif isdefined("url.lat")>
<cfset lat="#url.lat#">
<cfset lng="#url.lng#">
</cfif>
<head>
<script src= "http://maps.google.com/maps?file=api&v=2&key=xxxx" type="text/javascript">
function getMap(lat,lng){
if (GBrowserIsCompatible()) {
var map = new GMap2(document.getElementById("map_canvas"));
var pt= new GLatLng(lat,lng);
map.setCenter(pt, 18,G_HYBRID_MAP);
map.addOverlay(new GMarker(pt));
}
}
</script>
</head>
<cfoutput>
<body onLoad="getMap(#lat#,#lng#)" onUnload="GUnload()">
Map:<br>
<div id="map_canvas" style="width: 500px; height: 300px"/>
</body>
</cfoutput>
where lat and lng are the co-ordinates in degree.decimal format. I have traced down to the line where GBrowserIsCompatible() somehow never returns TRUE and thus no further action was taken.
If opened separately, the template works perfectly, just not when opened as a cflayoutarea container. Does anyone have experience with this? Any suggestions are much appreciated.
Lawrence
Using CF 8.01, Dreamweaver 8
Tried your suggestion but it still doesn't work; the map only shows when the calling code is inline. However, if this container page is called from yet another div, the map disappears again.
I suspect this issue is related to the cflayout container; I'll look up the ExtJS docs to see if there are any leads to a solution.
A: Success! (sort of...)
Finally got it working, but not in the way Adam suggested:
<script src= "http://maps.google.com/maps?file=api&v=2&key=xxxx" type="text/javascript"></script>
<script type="text/javascript">
getMap=function(lat,lng){
if (GBrowserIsCompatible()){
var map = new GMap2(document.getElementById("map_canvas"));
var pt = new GLatLng(lat,lng);
map.setCenter(pt, 18,G_HYBRID_MAP);
map.addOverlay(new GMarker(pt));
}
}
</script>
<cflayout name="testlayout" type="border">
<cflayoutarea name="left" position="left" size="250"/>
<cflayoutarea name="center" position="center">
<!--- sample hard-coded co-ordinates --->
<body onLoad="getMap(22.280161,114.185096)">
Map:<br />
<div id="map_canvas" style="width:500px; height: 300px"/>
</body>
</cflayoutarea>
<!--- <cflayoutarea name="center" position="center" source="map_content.cfm?lat=22.280161&lng=114.185096"/> --->
</cflayout>
The whole thing must be contained within the same file or it will not work. My suspicion is that the getElementById function, as it stands, cannot reference an element that is outside of its own file. If the div is in another file (as in Adam's example), it results in an undefined map, i.e., a map object is created but with nothing in it.
So I think this question is now elevated to a different level: how do you reference an element that is inside an ajax container?
A:
So I think this question is now elevated to a different level: how do you reference an element that is inside an ajax container?
It should be possible to reference an element loaded via AJAX -- just not until the element is on screen (so not on page load). It looks like getMap() triggers everything. (Is that right?)
Try this: Take exactly what you have as your inline-content for the map tab, and make it the content of map_content.cfm; then instead of using body onload to fire the event, write it inline, after the div is defined:
<body>
Map:<br />
<div id="map_canvas" style="width:500px; height: 300px"/>
<script type="text/javascript">
getMap(22.280161,114.185096);
</script>
</body>
A: Maybe the layout area doesn't have the right style. I think you may have to give the map_canvas a
position: absolute
or
position: relative
That's just a hunch.
A: CFLayoutArea is a new AJAX tag added with ColdFusion version 8. (In addition to tags like CFWindow, CFDiv, etc.)
Within the AJAX-loaded content of any of these new tags, external JavaScript must be included from the containing page. In your case, that would be the page that includes the <cflayout> tag.
Try something like this:
in index.cfm (or whatever your containing file is):
<script src="http://maps.google.com/maps?file=api&v=2&key=xxxx" type="text/javascript">
function getMap(lat,lng){
if (GBrowserIsCompatible()) {
var map = new GMap2(document.getElementById("map_canvas"));
var pt= new GLatLng(lat,lng);
map.setCenter(pt, 18,G_HYBRID_MAP);
map.addOverlay(new GMarker(pt));
}
}
</script>
<cflayout>...</cflayout>
map.cfm (content of your map CFLayout tab):
<cfif structKeyExists(url, "lat")>
<cfset variables.lat = url.lat />
<cfset variables.lng = url.lng />
</cfif>
<head></head>
<cfoutput>
<body onLoad="getMap(#variables.lat#,#variables.lng#)" onUnload="GUnload()">
Map:<br>
<div id="map_canvas" style="width: 500px; height: 300px"/>
</body>
</cfoutput>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What tools generate JavaScript? Are there any good tools to generate JavaScript? I remember in .NET, there was Script# - don't know its status today.
Anyone have experience with any tools?
A: I use my keyboard, a text editor and my brain to generate JavaScript.
:P
A: As others have said, GWT is a very good option. To summarize some good points:
*
*fast, very portable code using deferred binding; only loads the code that works on the user's browser, and only loads functions that are actually called; also, they're compressed
*reliability; very few known issues
*easier debugging using a Java-based IDE; you can also look directly at un-obfuscated javascript if you want to, but it seems (based on some reports I've seen & personal experience) that you basically never need this
*good library support including a nice inline javascript interface, the ability to use existing Java libraries, and special support for ajax / rpc calls
*extensible & stylistically flexible; you can fine-tune all styles with your own css rules, and extend the Widget base with your own Java subclasses
So I humbly disagree with dominic that the results are ugly since it is up to the coder to 'prettify' the basic functionality with their own css rules and other decorations. It would be the same mistake to call HTML 'ugly' - if you don't try hard, it isn't pretty, but the power and flexibility is in the hands of the coder.
Oh, and it's open source, too.
A: Latest version of Script# was posted less than a month ago. Nikhil continues to actively work on that project and it's a very good tool for generating JavaScript code from C#. It is actively used in a couple of different internal Microsoft projects.
Some of the benefits of Script# are:
*
*Intellisense
*Build errors at compile time
*Refactoring support
*Documentation support
*FxCop code analysis
*MSBuild support
A: Nice list: https://github.com/jashkenas/coffee-script/wiki/List-of-languages-that-compile-to-JS
A: There are currently a lot of tools to generate JavaScript, like GWT.
But giving you a good answer really depends on what your source language is and what kind of JavaScript functionality you want to use.
A: Google Web Toolkit is one option. Write Java code, debug it with a standard Java debugger, then press the "Compile" button and turn it into highly-optimized JavaScript. It generates completely separate JavaScript for each major browser family (IE, Firefox, Safari, etc.).
Very mature, very powerful, and easy to embed into an existing site. One downside is that the UIs it creates are ugly nested tables.
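To give a feel for what "write Java code" means here, a GWT module is driven by a Java class implementing an entry point. A minimal sketch of my own (the class name and message are made up):
// The GWT compiler turns this Java class into per-browser JavaScript;
// onModuleLoad() runs when the compiled script is loaded in the page.
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.Window;

public class HelloModule implements EntryPoint {
    public void onModuleLoad() {
        Window.alert("Hello from Java, compiled to JavaScript");
    }
}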
A: I've used D templates (think C++ without the pain and you'll be 50% there) to generate an AJAX-based object proxy.
A: Try Haxe.
It can target JavaScript, ActionScript and Neko bytecode. The language is close to Java.
A: Kotlin can generate JavaScript out of Kotlin code. For Kotlin see http://kotlin.jetbrains.org/ and also http://devnet.jetbrains.com/thread/447468?tstart=0
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How can I build a 'dependency tree diagram' from my .NET solution I can easily see what projects and DLLs a single project references from within a Visual Studio .NET project.
Is there any application or use of reflection that can build me a full dependency tree that I can use to plot a graphical chart of dependencies?
A: I needed something similar, but didn't want to pay for (or install) a tool to do it. I created a quick PowerShell script that goes through the project references and spits them out in a yuml.me-friendly format instead:
Function Get-ProjectReferences ($rootFolder)
{
$projectFiles = Get-ChildItem $rootFolder -Filter *.csproj -Recurse
$ns = @{ defaultNamespace = "http://schemas.microsoft.com/developer/msbuild/2003" }
$projectFiles | ForEach-Object {
$projectFile = $_ | Select-Object -ExpandProperty FullName
$projectName = $_ | Select-Object -ExpandProperty BaseName
$projectXml = [xml](Get-Content $projectFile)
$projectReferences = $projectXml | Select-Xml '//defaultNamespace:ProjectReference/defaultNamespace:Name' -Namespace $ns | Select-Object -ExpandProperty Node | Select-Object -ExpandProperty "#text"
$projectReferences | ForEach-Object {
"[" + $projectName + "] -> [" + $_ + "]"
}
}
}
Get-ProjectReferences "C:\Users\DanTup\Documents\MyProject" | Out-File "C:\Users\DanTup\Documents\MyProject\References.txt"
(source: yuml.me)
A: In addition to NDepend, you can also try this add-in for Reflector for showing the assembly dependency graph.
A: You can create a dependency graph of projects and assemblies in Visual Studio 2010 Ultimate by using Architecture Explorer to browse your solution, select projects and the relationships that you want to visualize, and then create a dependency graph from your selection.
For more info, see the following topics:
How to: Generate Graph Documents from Code: http://msdn.microsoft.com/en-us/library/dd409453%28VS.100%29.aspx#SeeSpecificSource
How to: Find Code Using Architecture Explorer: http://msdn.microsoft.com/en-us/library/dd409431%28VS.100%29.aspx
RC download: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=457bab91-5eb2-4b36-b0f4-d6f34683c62a.
Visual Studio 2010 Architectural Discovery & Modeling Tools forum: http://social.msdn.microsoft.com/Forums/en-US/vsarch/threads
A: NDepend comes with an interactive dependency graph coupled with a dependency matrix. You can download and use the free trial edition of NDepend for a while.
More on NDepend Dependency Graph
More on NDepend Dependency Matrix:
Disclaimer: I am part of the tool team
A: Structure101 can do that. You can browse a model by assembly and/or namespace, and clicking on any dependency at any level gives you all the code-level references that cause the dependency. The .NET version is in beta, but it's been available for other languages for years, so it's very mature. Here's an example screenshot:
http://www.headwaysoftware.com/images/assemblies.jpg
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Granting access to hundreds of SPs? In Sql Server 2000/2005, I have a few NT user groups that need to be granted access to hundreds of stored procedures.
Is there a nice easy way to do that?
A: *
*Create a role in sql server.
*Write a script that grants that role permission to use those sprocs.
*Add those NT user groups to that role.
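A rough T-SQL sketch of those three steps (my own example -- the role, procedure and NT group names are made up; these sp_* procedures work on both SQL Server 2000 and 2005):
EXEC sp_addrole 'SprocUsers'                              -- 1. create the role
GRANT EXECUTE ON dbo.SomeProc TO SprocUsers               -- 2. grant the role rights (script/loop this for all the procs)
EXEC sp_grantlogin 'DOMAIN\SomeNTGroup'                   -- give the NT group a server login if it doesn't already have one
EXEC sp_grantdbaccess 'DOMAIN\SomeNTGroup'                -- map the login to a user in this database
EXEC sp_addrolemember 'SprocUsers', 'DOMAIN\SomeNTGroup'  -- 3. add the group to the role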
A: Here's a script that I use for granting permissions to lots of procedures:
DECLARE @DB sysname ; set @DB = DB_NAME()
DECLARE @U sysname ; set @U = QUOTENAME('UserID')
DECLARE @ID integer,
@LAST_ID integer,
@NAME varchar(1000),
@SQL varchar(4000)
SET @LAST_ID = 0
WHILE @LAST_ID IS NOT NULL
BEGIN
SELECT @ID = MIN(id)
FROM dbo.sysobjects
WHERE id > @LAST_ID AND type = 'P' AND category = 0
SET @LAST_ID = @ID
-- We have a record so go get the name
IF @ID IS NOT NULL
BEGIN
SELECT @NAME = name
FROM dbo.sysobjects
WHERE id = @ID
-- Build the DCL to do the GRANT
SET @SQL = 'GRANT EXECUTE ON ' + @NAME + ' TO ' + @U
-- Run the SQL Statement you just generated
EXEC master.dbo.xp_execresultset @SQL, @DB
END
END
You can modify the select to get to a more specific group of stored procs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: mysqli_stmt::num_rows() returns the wrong value I was writing a database handler class in PHP using the mysqli class and prepared statements. I was attempting to print out the result. It didn't work right off the bat so I decided to do some debugging. I tried to use the num_rows() method from the mysqli_statement class, but it kept returning 0. I decided to write a small portion of the test code to keep it simpler so I could see what was going wrong. I was then able to return the data I wanted, but the num_rows() method still returns 0 even when it is actually selecting and retrieving some data. Here is the code:
$mysqli = new mysqli('localhost', 'username', 'password', 'database');
if(mysqli_connect_errno())
{
die('connection failed');
}
$statement = $mysqli->stmt_init();
$query = "SELECT name FROM table WHERE id = '2000'";
if($statement->prepare($query))
{
$statement->execute();
$statement->bind_result($name);
$statement->fetch();
$statement->store_result();
echo $statement->num_rows();
echo $name;
}
else
{
echo 'prepare statement failed';
exit();
}
The expected result is:
1name
And the actual result is:
0name
Can anyone tell me why this is?
A: I wonder if num_rows() is reporting relative to the current resultset. Try capturing num_rows() prior to fetching the data. e.g.
if($statement->prepare($query))
{
$statement->execute();
$statement->store_result();
echo $statement->num_rows();
$statement->bind_result($name);
$statement->fetch();
echo $name;
}
Does that have any effect?
A: num_rows is not a method, it's a property.
A: In order to be able to use mysqli_stmt::num_rows(), you need to fetch all rows into PHP. There are two ways to fetch everything: buffering using store_result() or manual fetching of all rows using fetch().
In your case, you have started manual fetching by calling fetch() once. You can't call store_result() when another fetch process is ongoing. The call to store_result() fails with an error*.
$statement->fetch();
$statement->store_result(); // produces error. See $mysqli->error;
echo $statement->num_rows();
The easiest solution is to swap the order in which you call these two methods.
$statement->store_result();
$statement->fetch(); // This will initiate fetching from PHP buffer instead of MySQL buffer
echo $statement->num_rows(); // This will tell you the total number of rows fetched to PHP
* Due to a bug in PHP, this error will not trigger an exception in the exception error reporting mode. The error message can only be seen with mysqli_error() function or its corresponding property.
A: It doesn't look like you've declared $name.
Also, try removing bind_result() and fetch() so it reads something like this:
$statement->execute();
$statement->store_result();
printf("Number of rows: %d.\n", $statement->num_rows);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Test Coverage for visual basic project We are developing a Visual Basic 6.0 project. We have written a library, which we were testing using vbunit and vbmock, but we soon found that the tests were not maintainable. So, we decided to write tests using MbUnit. Now, we want to know the test coverage. How can we do it?
thanks
A: The only VB6 test coverage tool I know of is http://www.aivosto.com/vbwatch.html. Aivosto seems to have a generally good reputation for their VB tools.
A: Look into NCover
| {
"language": "en",
"url": "https://stackoverflow.com/questions/80305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |